Chiral vs classical operad
Bojko Bakalov
Department of Mathematics, North Carolina State University,
Raleigh, NC 27695, USA
[email protected]
,
Alberto De Sole
Dipartimento di Matematica, Sapienza Università di Roma,
P.le Aldo Moro 2, 00185 Rome, Italy
[email protected]
www1.mat.uniroma1.it/~desole
,
Reimundo Heluani
IMPA, Rio de Janeiro, Brasil
[email protected]
and
Victor G. Kac
Department of Mathematics, MIT,
77 Massachusetts Ave., Cambridge, MA 02139, USA
[email protected]
Abstract.
We establish an explicit isomorphism between the associated graded of the filtered
chiral operad and the classical operad,
which is useful for computing the cohomology of vertex algebras.
Key words and phrases:
Chiral and classical operads,
$\Gamma$-residue and $\Gamma$-Fourier transform.
1991 Mathematics Subject Classification:
Primary 18D50;
Secondary 17B63, 17B69, 05C25
1. Introduction
This is the second in a series of papers aimed at computing the cohomology of vertex algebras.
In our first paper [BDSHK18], we introduced the chiral operad $P^{\mathrm{ch}}$ governing vertex algebras.
This is a local version of the chiral operad of Beilinson and Drinfeld [BD04],
associated to a $\mathcal{D}$-module on a smooth algebraic curve $X$,
where the geometric language of $\mathcal{D}$-modules
is replaced by the algebraic language of (integrals of) lambda-brackets.
(By a local version we mean taking $X=\mathbb{A}^{1}$ and the $\mathcal{D}$-module
translation equivariant.)
The operad $P^{\mathrm{ch}}$ governs vertex algebras in the following sense.
To each vector superspace $V$
over a field $\mathbb{F}$ of characteristic zero,
with an even endomorphism $\partial$,
it canonically associates a $\mathbb{Z}$-graded Lie superalgebra
$$W^{\mathrm{ch}}(V)=\bigoplus_{k=-1}^{\infty}W_{k}^{\mathrm{ch}}(V)\,,\qquad\text{where}\quad W^{\mathrm{ch}}_{k}(V)=P^{\mathrm{ch}}(k+1)^{S_{k+1}}\,,$$
(1.1)
such that
$$W^{\mathrm{ch}}_{-1}(V)=V/\partial V\,,\qquad W^{\mathrm{ch}}_{0}(V)=\operatorname{End}_{\mathbb{F}[\partial]}V\,.$$
(1.2)
The space $W_{k}^{\mathrm{ch}}(V)$ consists of all elements of $P^{\mathrm{ch}}(k+1)$ that are invariant under the action of the symmetric group $S_{k+1}$, and
the Lie bracket on $W^{\mathrm{ch}}(V)$ is defined via the $\circ_{i}$-products of the operad $P^{\mathrm{ch}}$.
For the construction of the $\mathbb{Z}$-graded Lie superalgebra associated
to an arbitrary linear operad, see [Tam02, LV12, BDSHK18].
An odd element $X\in W_{1}^{\mathrm{ch}}(\Pi V)$ satisfying $[X,X]=0$,
where $\Pi V$ stands for $V$ with the reversed parity,
defines on $V$ the structure of a non-unital vertex algebra.
Consequently, $(W^{\mathrm{ch}}(\Pi V),\operatorname{ad}X)$ is a differential graded Lie superalgebra
whose cohomology is the cohomology of the vertex algebra $V$ defined by $X$.
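That $\operatorname{ad}X$ squares to zero is a standard consequence of the Jacobi identity in any Lie superalgebra: for odd $X$,

```latex
(\operatorname{ad}X)^{2}\,v \;=\; [X,[X,v]] \;=\; \tfrac{1}{2}\,[[X,X],v] \;=\; 0
\qquad\text{whenever}\quad [X,X]=0\,,
```

so $\operatorname{ad}X$ is indeed a differential on $W^{\mathrm{ch}}(\Pi V)$.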
Let us recall the definition of the operad $P^{\mathrm{ch}}$ associated to a vector superspace $V$
with an even endomorphism $\partial$.
For a non-negative integer $n$, denote by $\mathcal{O}^{\star T}_{n}$
the algebra of Laurent polynomials in $z_{i}-z_{j}$, for $1\leq i<j\leq n$.
Denote by $\partial_{i}$ the endomorphism of $V^{\otimes n}$
acting as $\partial$ on the $i$-th factor.
Introduce the superspace
$$V_{n}=V[\lambda_{1},\dots,\lambda_{n}]/\langle\partial+\lambda_{1}+\dots+\lambda_{n}\rangle\,,$$
where all variables $\lambda_{i}$ have even parity
and $\langle\Phi\rangle$ stands for the image of the endomorphism $\Phi$.
The superspace of $n$-ary chiral operations $P^{\mathrm{ch}}(n)$
is defined as the set of all linear maps [BDSHK18, Eq. (6.11)]
$$X\colon V^{\otimes n}\otimes\mathcal{O}^{\star T}_{n}\to V_{n}\,,\qquad v\otimes f\mapsto X_{\lambda_{1},\dots,\lambda_{n}}(v\otimes f)\,,$$
(1.3)
satisfying the following two sesquilinearity axioms ($i,j=1,\dots,n$):
$$\displaystyle X_{\lambda_{1},\dots,\lambda_{n}}(v\otimes\partial_{z_{i}}f)$$
$$\displaystyle=X_{\lambda_{1},\dots,\lambda_{n}}((\partial_{i}+\lambda_{i})v\otimes f)\,,$$
(1.4)
$$\displaystyle X_{\lambda_{1},\dots,\lambda_{n}}(v\otimes(z_{i}-z_{j})f)$$
$$\displaystyle=(\partial_{\lambda_{j}}-\partial_{\lambda_{i}})X_{\lambda_{1},\dots,\lambda_{n}}(v\otimes f)\,.$$
Note that $P^{\mathrm{ch}}(n)=W^{\mathrm{ch}}_{n-1}(V)$ is given by (1.2) for $n=0,1$.
In [BDSHK18], we also introduced the action of $S_{n}$ on $P^{\mathrm{ch}}(n)$
and the $\circ_{i}$-products to make $P^{\mathrm{ch}}$ into an operad.
Now suppose that $V$ is equipped with an increasing filtration of $\mathbb{F}[\partial]$-submodules
$$\operatorname{F}^{-1}V=\{0\}\,\subset\,\operatorname{F}^{0}V\,\subset\,\operatorname{F}^{1}V\,\subset\,\operatorname{F}^{2}V\,\subset\,\cdots\,\subset\,V\,.$$
(1.5)
Taking the increasing filtration of $\mathcal{O}^{\star T}_{n}$ by the number of divisors,
the filtration (1.5)
induces an increasing filtration on $V^{\otimes n}\otimes\mathcal{O}^{\star T}_{n}$.
The latter induces a decreasing filtration on $P^{\mathrm{ch}}(n)$.
The associated graded pieces $\operatorname{gr}^{r}P^{\mathrm{ch}}(n)$, $r\geq 0$,
form a graded operad denoted by $\operatorname{gr}P^{\mathrm{ch}}$.
On the other hand, in [BDSHK18] we introduced the operad $P^{\mathrm{cl}}$, which governs Poisson vertex algebras in a similar way. Moreover, assuming that $V$ is $\mathbb{Z}$-graded by $\mathbb{F}[\partial]$-submodules, we have the associated $\mathbb{Z}$-grading
$$P^{\mathrm{cl}}(n)=\bigoplus_{r\in\mathbb{Z}}\operatorname{gr}^{r}P^{\mathrm{cl}}(n)\,.$$
Next, assuming that $V$ is endowed with the filtration (1.5), we have the linear map
$$\operatorname{gr}^{r}P^{\mathrm{ch}}(n)(V)\to\operatorname{gr}^{r}P^{\mathrm{%
cl}}(n)(\operatorname{gr}V)\,,\qquad r\geq 0\,.$$
(1.6)
These constructions are recalled in Section 3.
We proved in [BDSHK18] that the map (1.6) is an injective morphism of operads.
The surjectivity of this map was proposed as a conjecture.
The main result of the present paper is that the map (1.6) is an isomorphism, provided that the filtration (1.5) is induced by a grading by $\mathbb{F}[\partial]$-modules (Theorem 5.1). In fact, we construct explicitly a map inverse to (1.6), using the notions of $\Gamma$-residue and $\Gamma$-Fourier transform introduced in Section 4.
Theorem 5.1 is important since it allows one to compare the vertex algebra and Poisson vertex algebra cohomologies. For example, using the obvious fact that this theorem holds (without any assumptions) for $n=0,1$ and the results of [DSK12, DSK13] on variational Poisson cohomology, we calculated in [BDSHK18] the $0$-th and $1$-st cohomologies of the vertex algebra of free bosons, thereby computing its Casimirs and derivations.
The connection between the classical and variational Poisson cohomology is discussed in [BDSHKV19].
Our operad $P^{\mathrm{cl}}$ was shown to be related to Beilinson and Drinfeld’s operad of classical operations in [BDSHK18, Appendix]. The isomorphism of Theorem 5.1 is stated in [BD04, 3.2.5] in the geometric context under the assumption that the corresponding $\mathcal{D}$-modules are projective.
Throughout the paper the base field $\mathbb{F}$ has characteristic $0$.
Acknowledgments
This research was partially conducted during the authors’ visits
to RIMS in Kyoto and to the University of Rome La Sapienza.
We are grateful to these institutions for their kind hospitality.
The first author is supported in part by a Simons Foundation grant 584741.
The second author was partially supported by the national PRIN fund n. 2015ZWST2C$\_$001
and the University funds n. RM116154CB35DFD3 and RM11715C7FB74D63.
The third author is partially supported by the Bert and Ann Kostant fund.
2. The chiral operad
In this section, we recall the definition of the chiral operad $P^{\mathrm{ch}}$ from [BDSHK18, Section 6].
2.1. The spaces $\mathcal{O}_{n}^{\star T}$
Here and further, we will consider rational functions in the variables $z_{1},z_{2},\dots$ and use the shorthand notation $z_{ij}=z_{i}-z_{j}$.
For a fixed positive integer $n$, we denote by $\mathcal{O}_{n}=\mathbb{F}[z_{1},\dots,z_{n}]$ the algebra of polynomials, and by
$$\mathcal{O}_{n}^{T}=\mathbb{F}[z_{ij}]_{1\leq i<j\leq n}=\operatorname{Ker}\sum_{i=1}^{n}\partial_{z_{i}}$$
the subalgebra of translation invariant polynomials.
Let $\mathcal{O}_{n}^{\star}$ be the localization of $\mathcal{O}_{n}$ with respect to the diagonals $z_{i}=z_{j}$ for $i\neq j$, i.e.,
$$\mathcal{O}_{n}^{\star}=\mathbb{F}[z_{1},\dots,z_{n}][z_{ij}^{-1}]_{1\leq i<j\leq n},$$
and let
$$\mathcal{O}_{n}^{\star T}=\mathbb{F}[z_{ij}^{\pm 1}]_{1\leq i<j\leq n}.$$
We also set $\mathcal{O}_{0}=\mathcal{O}_{0}^{T}=\mathcal{O}_{0}^{\star}=\mathcal{O}_{0}^{\star T}=\mathbb{F}$.
Note that $\mathcal{O}_{1}=\mathcal{O}_{1}^{\star}=\mathbb{F}[z_{1}]$
and $\mathcal{O}_{1}^{T}=\mathcal{O}_{1}^{\star T}=\mathbb{F}$.
At times we will write $\mathcal{O}^{\star}_{n}=\mathcal{O}^{\star}_{n}(z_{1},\dots,z_{n})$
when we want to specify the variables $z_{1},\dots,z_{n}$.
We introduce an increasing filtration of $\mathcal{O}_{n}^{\star}$ given by the number of divisors:
$$\begin{split}\operatorname{F}^{-1}\mathcal{O}_{n}^{\star}&=\{0\}\subset\operatorname{F}^{0}\mathcal{O}_{n}^{\star}=\mathcal{O}_{n}\subset\operatorname{F}^{1}\mathcal{O}_{n}^{\star}=\sum_{i<j}\mathcal{O}_{n}[z_{ij}^{-1}]\subset\\
\cdots&\subset\operatorname{F}^{r}\mathcal{O}_{n}^{\star}=\sum\mathcal{O}_{n}[z_{i_{1},j_{1}}^{-1},\dots,z_{i_{r},j_{r}}^{-1}]\subset\cdots\subset\operatorname{F}^{n-1}\mathcal{O}_{n}^{\star}=\mathcal{O}_{n}^{\star}.\end{split}$$
(2.1)
In other words, the elements of $\operatorname{F}^{r}\mathcal{O}_{n}^{\star}$ are sums of rational functions with
at most $r$ poles each, not counting multiplicities.
The fact that $\operatorname{F}^{n-1}\mathcal{O}_{n}^{\star}=\mathcal{O}_{n}^{\star}$ was proved in [BDSHK18]
(it is a consequence of the proof of Lemma 8.4 there).
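The mechanism behind this fact is the elementary partial fraction identity $z_{12}^{-1}z_{23}^{-1}=z_{13}^{-1}(z_{12}^{-1}+z_{23}^{-1})$, which follows from $z_{12}+z_{23}=z_{13}$ and trades a product of distinct divisors for a sum of terms with fewer distinct divisors each. For instance, for $n=3$ it puts $z_{12}^{-1}z_{13}^{-1}z_{23}^{-1}$ into $\operatorname{F}^{2}\mathcal{O}_{3}^{\star}$. A quick symbolic check (a sketch using sympy, not part of the original text):

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')
z12, z13, z23 = z1 - z2, z1 - z3, z2 - z3

# Since z12 + z23 = z13, a function with 3 distinct poles equals
# a sum of terms with only 2 distinct poles each:
lhs = 1 / (z12 * z13 * z23)
rhs = (1 / z13**2) * (1 / z12 + 1 / z23)
assert sp.simplify(lhs - rhs) == 0
```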
By restriction, we have the induced increasing filtration
$$\operatorname{F}^{r}\mathcal{O}^{\star T}_{n}=\operatorname{F}^{r}\mathcal{O}^{\star}_{n}\cap\mathcal{O}^{\star T}_{n}\,.$$
2.2. The operad $P^{\mathrm{ch}}$
Let $V=V_{\bar{0}}\oplus V_{\bar{1}}$ be a vector superspace endowed
with an even endomorphism $\partial$. For every $i=1,\dots,n$, we will denote by $\partial_{i}$ the action of $\partial$ on the $i$-th factor of the tensor power $V^{\otimes n}$:
$$\partial_{i}v=v_{1}\otimes\cdots\otimes\partial v_{i}\otimes\cdots\otimes v_{n}\quad\text{for}\quad v=v_{1}\otimes\cdots\otimes v_{n}\in V^{\otimes n}.$$
(2.2)
Consider the space
$$V[\lambda_{1},\dots,\lambda_{n}]\big{/}\big{\langle}\partial+\lambda_{1}+\dots+\lambda_{n}\big{\rangle}\,,$$
where, here and further, $\langle\Phi\rangle$ denotes the image of an endomorphism $\Phi$.
The space of $n$-ary chiral operations $P^{\mathrm{ch}}(n)$ is defined as the set of all linear maps [BDSHK18, (6.11)]
$$\begin{split}X\colon V^{\otimes n}\otimes\mathcal{O}_{n}^{\star T}&\to V[\lambda_{1},\dots,\lambda_{n}]\big{/}\big{\langle}\partial+\lambda_{1}+\dots+\lambda_{n}\big{\rangle}\,,\\
v_{1}\otimes\dots\otimes v_{n}\otimes&\,f(z_{1},\dots,z_{n})\mapsto X_{\lambda_{1},\dots,\lambda_{n}}(v_{1}\otimes\dots\otimes v_{n}\otimes f)\\
&=X_{\lambda_{1},\dots,\lambda_{n}}^{z_{1},\dots,z_{n}}(v_{1}\otimes\dots\otimes v_{n}\otimes f(z_{1},\dots,z_{n}))\,,\end{split}$$
(2.3)
satisfying the following two sesquilinearity conditions:
$$\displaystyle X_{\lambda_{1},\dots,\lambda_{n}}(v\otimes\partial_{z_{i}}f)$$
$$\displaystyle=X_{\lambda_{1},\dots,\lambda_{n}}((\partial_{i}+\lambda_{i})v\otimes f)\,,$$
(2.4)
$$\displaystyle X_{\lambda_{1},\dots,\lambda_{n}}(v\otimes z_{ij}f)$$
$$\displaystyle=(\partial_{\lambda_{j}}-\partial_{\lambda_{i}})X_{\lambda_{1},\dots,\lambda_{n}}(v\otimes f)\,.$$
(2.5)
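The shape of the second sesquilinearity condition (2.5) can be seen on the formal kernel $e^{-\lambda_{1}z_{1}-\dots-\lambda_{n}z_{n}}$, against which multiplication by $z_{ij}$ corresponds to applying $\partial_{\lambda_{j}}-\partial_{\lambda_{i}}$. A symbolic check of this identity for $n=2$ (an illustration only, not a construction from the paper):

```python
import sympy as sp

z1, z2, l1, l2 = sp.symbols('z1 z2 lambda1 lambda2')
kernel = sp.exp(-l1 * z1 - l2 * z2)

# (d/d lambda2 - d/d lambda1) acting on the kernel multiplies it by z1 - z2:
lhs = sp.diff(kernel, l2) - sp.diff(kernel, l1)
rhs = (z1 - z2) * kernel
assert sp.simplify(lhs - rhs) == 0
```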
For example, we have:
$$\displaystyle P^{\mathrm{ch}}(0)$$
$$\displaystyle=\operatorname{Hom}_{\mathbb{F}}(\mathbb{F},V/\langle\partial\rangle)\cong V/\partial V,$$
(2.6)
$$\displaystyle P^{\mathrm{ch}}(1)$$
$$\displaystyle=\operatorname{Hom}_{\mathbb{F}[\partial]}(V,V[\lambda_{0}]/\langle\partial+\lambda_{0}\rangle)\cong\operatorname{End}_{\mathbb{F}[\partial]}(V).$$
(2.7)
The $\mathbb{Z}/2\mathbb{Z}$-grading of the superspace $P^{\mathrm{ch}}(n)$ is induced
by that of the vector superspace $V$, where $\mathcal{O}_{n}^{\star T}$ and all
variables $\lambda_{i}$ are considered even.
One can also define an action of the symmetric group and compositions of chiral operations,
turning $P^{\mathrm{ch}}$ into an operad (see [BDSHK18, (6.25)]).
However, these structures will not be needed in the present paper,
hence we do not recall their definition.
2.3. Filtration of $P^{\mathrm{ch}}$
Now suppose that $V$ is equipped with an increasing filtration
of $\mathbb{F}[\partial]$-submodules
$$\operatorname{F}^{-1}V=\{0\}\,\subset\,\operatorname{F}^{0}V\,\subset\,\operatorname{F}^{1}V\,\subset\,\operatorname{F}^{2}V\,\subset\,\cdots\,\subset\,V\,.$$
(2.8)
Since $\mathcal{O}_{n}^{\star T}$ is also filtered by (2.1), we obtain
an increasing filtration on the tensor products
$$\operatorname{F}^{s}\big{(}V^{\otimes n}\otimes\mathcal{O}_{n}^{\star T}\big{)}=\sum_{s_{1}+\dots+s_{n}+p\leq s}\operatorname{F}^{s_{1}}V\otimes\cdots\otimes\operatorname{F}^{s_{n}}V\otimes\operatorname{F}^{p}\mathcal{O}_{n}^{\star T}\,,$$
if $s\geq 0$, and $\operatorname{F}^{s}(V^{\otimes n}\otimes\mathcal{O}_{n}^{\star T})=\{0\}$ if $s<0$.
This induces a decreasing filtration of $P^{\mathrm{ch}}(n)$, where $\operatorname{F}^{r}P^{\mathrm{ch}}(n)$ for $r\in\mathbb{Z}$ is defined
as the set of all elements $X$ such that
$$X\big{(}\operatorname{F}^{s}(V^{\otimes n}\otimes\mathcal{O}_{n}^{\star T})\big{)}\subset(\operatorname{F}^{s-r}V)[\lambda_{1},\dots,\lambda_{n}]/\langle\partial+\lambda_{1}+\dots+\lambda_{n}\rangle\,,$$
(2.9)
for every $s$.
Then, as usual, the associated graded spaces are defined by
$$\operatorname{gr}^{r}P^{\mathrm{ch}}(n)=\operatorname{F}^{r}P^{\mathrm{ch}}(n)/\operatorname{F}^{r+1}P^{\mathrm{ch}}(n).$$
(2.10)
In fact, the composition maps are compatible with the filtration (2.9)
and, therefore, the associated graded (2.10) is a graded operad (see [BDSHK18, Proposition 8.1]).
3. The classical operad
Here we recall the definition of the classical operad $P^{\mathrm{cl}}$ from [BDSHK18, Section 10].
3.1. $n$-graphs
For a positive integer $n$, we define an $n$-graph
as a graph $\Gamma$ with $n$ vertices labeled by $1,\dots,n$
and an arbitrary collection $E(\Gamma)$ of oriented edges.
We denote
by $\mathcal{G}(n)$ the collection of all $n$-graphs
without tadpoles,
and by $\mathcal{G}_{0}(n)$ the collection of all acyclic $n$-graphs,
i.e., $n$-graphs that have
no cycles (including tadpoles and multiple edges).
For example, $\mathcal{G}_{0}(1)$ consists of the single graph with one vertex labelled $1$ and no edges,
and $\mathcal{G}_{0}(2)$ consists of three graphs:
[Figure: the three graphs on the two vertices labelled $1$ and $2$, with edge sets]
$$E(\Gamma)=\emptyset\,,\qquad E(\Gamma)=\{1\to 2\}\,,\qquad E(\Gamma)=\{2\to 1\}\,.$$
(3.1)
By convention, we also let $\mathcal{G}_{0}(0)=\mathcal{G}(0)=\{\emptyset\}$ be the set consisting of a single element (the empty graph with $0$ vertices).
A graph $L$ will be called a line if its set of edges is of the form $\{i_{1}\to i_{2},\,i_{2}\to i_{3},\dots,\,i_{n-1}\to i_{n}\}$ where $\{i_{1},\dots,i_{n}\}$ is a permutation of $\{1,\dots,n\}$:
$$L\colon\quad i_{1}\to i_{2}\to\cdots\to i_{n}\,.$$
(3.2)
An oriented cycle $C$ in a graph $\Gamma$ is, by definition,
a collection of edges of $\Gamma$ forming a closed sequence (possibly with self intersections):
$$C=\{i_{1}\to i_{2},\,i_{2}\to i_{3},\dots,\,i_{s-1}\to i_{s},\,i_{s}\to i_{1}\}\subset E(\Gamma)\,.$$
(3.3)
There is a natural (left) action of the symmetric group $S_{n}$
on the set $\mathcal{G}(n)$ of $n$-graphs, which preserves the subset $\mathcal{G}_{0}(n)$ of acyclic graphs.
Given $\Gamma\in\mathcal{G}(n)$ and $\sigma\in S_{n}$,
we define $\sigma(\Gamma)$ to be the same graph as $\Gamma$,
but with the vertex that was labelled $1$ relabelled as $\sigma(1)$,
the vertex $2$ relabelled as $\sigma(2)$,
and so on up to the vertex $n$ now relabelled as $\sigma(n)$.
For example, if $L_{0}$ is the line with edges $\{1\to 2,2\to 3,\dots,n-1\to n\}$ and $\sigma\in S_{n}$,
then $\sigma(L_{0})=L$ is the line (3.2) where $i_{k}=\sigma(k)$.
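The relabelling action can be sketched as a set map on edge lists (a minimal illustration; the encoding of graphs as sets of ordered pairs is our own):

```python
def relabel(edges, sigma):
    """Apply a permutation sigma (dict: old label -> new label) to an edge set."""
    return {(sigma[i], sigma[j]) for (i, j) in edges}

# The line L0 with edges {1->2, 2->3}, relabelled by the cycle sigma = (1 2 3),
# becomes the line sigma(1) -> sigma(2) -> sigma(3), i.e. 2 -> 3 -> 1:
L0 = {(1, 2), (2, 3)}
sigma = {1: 2, 2: 3, 3: 1}
assert relabel(L0, sigma) == {(2, 3), (3, 1)}
```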
3.2. The operad $P^{\mathrm{cl}}$
As before, let $V=V_{\bar{0}}\oplus V_{\bar{1}}$ be a vector superspace endowed
with an even endomorphism $\partial$.
The superspace $P^{\mathrm{cl}}(n)$ is defined as the vector superspace (with pointwise addition and scalar multiplication)
of all maps
$$\displaystyle Y\colon\mathcal{G}(n)\times V^{\otimes n}$$
$$\displaystyle\to V[\lambda_{1},\dots,\lambda_{n}]/\langle\partial+\lambda_{1}+\dots+\lambda_{n}\rangle\,,$$
(3.4)
$$\displaystyle\Gamma\times v$$
$$\displaystyle\mapsto Y^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(v)\,,$$
(3.5)
which depend linearly on
$v\in V^{\otimes n}$,
and satisfy
the cycle relations
and sesquilinearity conditions described below.
The $\mathbb{Z}/2\mathbb{Z}$-grading of the superspace $P^{\mathrm{cl}}(n)$ is induced
by that of the vector superspace $V$,
by letting $\Gamma$ and the variables $\lambda_{i}$ be even.
The cycle relations state that if an $n$-graph $\Gamma\in\mathcal{G}(n)$ contains an oriented cycle
$C\subset E(\Gamma)$, then:
$$Y^{\Gamma}=0\,,\qquad\sum_{e\in C}Y^{\Gamma\backslash e}=0\,,$$
(3.6)
where $\Gamma\backslash e\in\mathcal{G}(n)$ is the graph obtained from $\Gamma$
by removing the edge $e$ and keeping the same set of vertices.
In particular, applying the second cycle relation (3.6) for an oriented cycle of length $2$,
we see that changing the orientation of a single edge of $\Gamma\in\mathcal{G}(n)$
amounts to a change of sign of $Y^{\Gamma}$.
To write the sesquilinearity conditions, let us first introduce some notation.
For a graph $G$ with a set of vertices labeled by a subset $I\subset\{1,\dots,n\}$,
we let
$$\lambda_{G}=\sum_{i\in I}\lambda_{i}\,,\qquad\partial_{G}=\sum_{i\in I}\partial_{i}\,,$$
(3.7)
where as before $\partial_{i}$ denotes the action of $\partial$ on the $i$-th factor in $V^{\otimes n}$
(see (2.2)).
Then for every connected component $G$ of $\Gamma\in\mathcal{G}(n)$ with a set of vertices $I$,
we have two sesquilinearity conditions:
$$\displaystyle(\partial_{\lambda_{j}}-\partial_{\lambda_{i}})Y^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(v)$$
$$\displaystyle=0\quad\text{for all}\quad i,j\in I\,,$$
(3.8)
$$\displaystyle Y^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}\bigl{(}(\partial_{G}+\lambda_{G})v\bigr{)}$$
$$\displaystyle=0\,,\qquad v\in V^{\otimes n}\,.$$
(3.9)
The first condition (3.8) means that the polynomial
$Y^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(v)$
is a function of the variables $\lambda_{\Gamma_{\alpha}}$, where
the $\Gamma_{\alpha}$’s are the connected components of $\Gamma$,
and not of the variables $\lambda_{1},\dots,\lambda_{n}$ separately.
In [BDSHK18, (10.11)], we also defined the action of the symmetric group
and compositions of maps in $P^{\mathrm{cl}}$, turning it into an operad.
However, these structures will not be needed in the present paper.
3.3. Grading of $P^{\mathrm{cl}}$
Suppose now that $V=\bigoplus_{t\in\mathbb{Z}}\operatorname{gr}^{t}V$ is graded by $\mathbb{F}[\partial]$-submodules,
and consider the induced grading of the tensor powers $V^{\otimes n}$:
$$\operatorname{gr}^{t}V^{\otimes n}=\sum_{t_{1}+\dots+t_{n}=t}\operatorname{gr}^{t_{1}}V\otimes\cdots\otimes\operatorname{gr}^{t_{n}}V\,.$$
Then $P^{\mathrm{cl}}$
has a grading defined as follows:
$Y\in\operatorname{gr}^{r}P^{\mathrm{cl}}(n)$ if
$$Y^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(\operatorname{gr}^{t}V^{\otimes n})\,\subset\,(\operatorname{gr}^{s+t-r}V)[\lambda_{1},\dots,\lambda_{n}]/\langle\partial+\lambda_{1}+\dots+\lambda_{n}\rangle$$
(3.10)
for every graph $\Gamma\in\mathcal{G}(n)$ with $s$ edges
(see [BDSHK18, Remark 10.2]).
3.4. The map from $\operatorname{gr}P^{\mathrm{ch}}$ to $P^{\mathrm{cl}}$
For a graph $\Gamma\in\mathcal{G}(n)$ with a set of edges $E(\Gamma)$, we introduce the function
$$p_{\Gamma}=p_{\Gamma}(z_{1},\dots,z_{n})=\prod_{(i\to j)\in E(\Gamma)}z_{ij}^{-1}\,,\qquad z_{ij}=z_{i}-z_{j}\,.$$
(3.11)
Note that $p_{\Gamma}\in\operatorname{F}^{s}\mathcal{O}_{n}^{\star T}$ if $\Gamma$ has $s$ edges.
Lemma 3.1.
Let $\Gamma\in\mathcal{G}(n)$ be a graph with $s$ edges, containing a cycle $C\subset E(\Gamma)$.
Then:
(a)
$p_{\Gamma}\in\operatorname{F}^{s-1}\mathcal{O}_{n}^{\star T}$;
(b)
$\sum_{e\in C}p_{\Gamma\backslash e}=0$.
Proof.
The proofs of both statements are contained
in the proof of [BDSHK18, Lemma 8.4].
∎
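Part (b) can be verified symbolically for the 3-cycle $C=\{1\to 2,\,2\to 3,\,3\to 1\}$, where it reduces to the identity $z_{12}+z_{23}+z_{31}=0$ (a sympy sketch, not part of the proof):

```python
import sympy as sp

z = sp.symbols('z1 z2 z3')

def p(edges):
    """The function p_Gamma: product of (z_i - z_j)^{-1} over the edges i -> j."""
    result = sp.Integer(1)
    for (i, j) in edges:
        result *= 1 / (z[i - 1] - z[j - 1])
    return result

C = [(1, 2), (2, 3), (3, 1)]
# For Gamma = C, the sum of p_{Gamma \ e} over the edges e of the cycle vanishes:
total = sum(p([e for e in C if e != f]) for f in C)
assert sp.simplify(total) == 0
```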
Let $V$ be filtered by $\mathbb{F}[\partial]$-submodules as in (2.8).
Then we have the filtered operad $P^{\mathrm{ch}}$ associated to $V$
and the graded operad $P^{\mathrm{cl}}$ associated to the graded superspace $\operatorname{gr}V$.
These two operads are related as follows [BDSHK18, Section 8].
Let $X\in\operatorname{F}^{r}P^{\mathrm{ch}}(n)(V)$ and $\Gamma\in\mathcal{G}(n)$ be a graph with $s$ edges. Then for every $v\in\operatorname{F}^{t}V^{\otimes n}$, we have
$v\otimes p_{\Gamma}\in\operatorname{F}^{s+t}(V^{\otimes n}\otimes\mathcal{O}_{n}^{\star T})$ and, by (2.9),
$$X_{\lambda_{1},\dots,\lambda_{n}}(v\otimes p_{\Gamma})\in(\operatorname{F}^{s+t-r}V)[\lambda_{1},\dots,\lambda_{n}]/\langle\partial+\lambda_{1}+\dots+\lambda_{n}\rangle\,.$$
(3.12)
We define $Y\in\operatorname{gr}^{r}P^{\mathrm{cl}}(n)(\operatorname{gr}V)$ by:
$$\begin{split}Y^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}\bigl{(}v&+\operatorname{F}^{t-1}V^{\otimes n}\bigr{)}=X_{\lambda_{1},\dots,\lambda_{n}}(v\otimes p_{\Gamma})\\
&+(\operatorname{F}^{s+t-r-1}V)[\lambda_{1},\dots,\lambda_{n}]/\langle\partial+\lambda_{1}+\dots+\lambda_{n}\rangle\\
&\in(\operatorname{gr}^{s+t-r}V)[\lambda_{1},\dots,\lambda_{n}]/\langle\partial+\lambda_{1}+\dots+\lambda_{n}\rangle\,.\end{split}$$
(3.13)
Clearly, the right-hand side depends only
on the image $\bar{v}=v+\operatorname{F}^{t-1}V^{\otimes n}\in\operatorname{gr}^{t}V^{\otimes n}$
and not on the choice of representative $v\in\operatorname{F}^{t}V^{\otimes n}$.
We write (3.13) simply as
$$Y^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(\bar{v})=\overline{X_{\lambda_{1},%
\dots,\lambda_{n}}(v\otimes p_{\Gamma})}\,.$$
(3.14)
The fact that $Y\in\operatorname{gr}^{r}P^{\mathrm{cl}}(n)(\operatorname{gr}V)$ was proved in [BDSHK18, Corollary 8.8].
If $X\in\operatorname{F}^{r+1}P^{\mathrm{ch}}(n)$, then the right-hand side of (3.13) (or (3.14)) vanishes.
Thus, (3.13) defines a map
$$\operatorname{gr}^{r}P^{\mathrm{ch}}(n)(V)\,\to\,\operatorname{gr}^{r}P^{%
\mathrm{cl}}(n)(\operatorname{gr}V)\,,\qquad\bar{X}=X+\operatorname{F}^{r+1}%
\mapsto Y\,.$$
(3.15)
Theorem 3.2 ([BDSHK18]).
The map (3.15) is an injective homomorphism of graded operads.
We will not need the full statement here (see [BDSHK18, Theorem 10.12]), but let us observe that (3.13) is compatible with the actions of the symmetric group $S_{n}$.
In [BDSHK18, Remark 10.15], we also posed the question whether the map (3.15)
is an isomorphism.
The main result of the present paper is Theorem 5.1 below,
which says, in particular, that this is indeed the case under the assumption that $V$
is graded as an $\mathbb{F}[\partial]$-module.
4. $\Gamma$-residues and $\Gamma$-Fourier transform
4.1. Lines
Given a positive integer $n$, let $\mathcal{L}(n)\subset\mathcal{G}(n)$
be the set of graphs that are disjoint unions of lines,
i.e., graphs of the following form:
$$\Gamma\,=\,\big(i^{1}_{1}\to i^{1}_{2}\to\cdots\to i^{1}_{k_{1}}\big)\sqcup\big(i^{2}_{1}\to i^{2}_{2}\to\cdots\to i^{2}_{k_{2}}\big)\sqcup\cdots\sqcup\big(i^{p}_{1}\to i^{p}_{2}\to\cdots\to i^{p}_{k_{p}}\big)\,=\,L_{1}\sqcup L_{2}\sqcup\dots\sqcup L_{p}\,,$$
(4.1)
where $k_{1},\dots,k_{p}\geq 1$ are such that $k_{1}+\dots+k_{p}=n$,
and the set of indices $\{i^{a}_{b}\}$ is a permutation of $\{1,\dots,n\}$ such that
$$i^{1}_{1}=1\,<\,i^{2}_{1}\,<\,\cdots\,<\,i^{p}_{1}\,,\qquad i^{\ell}_{1}=\min%
\{i^{\ell}_{1},\dots,i^{\ell}_{k_{\ell}}\}\,,\,\,\ell=1,\dots,p\,.$$
(4.2)
In (4.1), $L_{r}$ denotes the $r$-th connected component of $\Gamma$
(which is a connected oriented line of length $k_{r}$).
For example,
when $k_{r}=1$ the line $L_{r}$ consists of the single vertex indexed $i^{r}_{1}$.
We also denote by $\mathcal{L}(n,p)\subset\mathcal{L}(n)$
the subset of graphs $\Gamma$ as in (4.1)
with the fixed number $p$ of connected components.
Consider the vector space $\mathbb{F}\mathcal{G}(n)$ linearly spanned by the set $\mathcal{G}(n)$.
The cycle relations in $\mathbb{F}\mathcal{G}(n)$ are the following elements:
(i)
all graphs $\Gamma\in\mathcal{G}(n)$ containing a cycle;
(ii)
all linear combinations of the form
$\sum_{e\in C}\Gamma\backslash e$,
for $\Gamma\in\mathcal{G}(n)$
and all oriented cycles $C\subset E(\Gamma)$.
Note that if we reverse an arrow in a graph $\Gamma\in\mathcal{G}(n)$,
we obtain, modulo cycle relations, the element $-\Gamma\in\mathbb{F}\mathcal{G}(n)$.
Lemma 4.1.
The set $\mathcal{L}(n)$ spans the space $\mathbb{F}\mathcal{G}(n)$ modulo
the cycle relations.
Proof.
Let $\Gamma\in\mathcal{G}(n)$.
First, we claim that, modulo cycle relations,
we can assume that the vertex $1$ is a leaf,
i.e., there is no more than one edge in or out of it.
Indeed, if there are $\ell\geq 2$ edges in or out of $1$, then
up to reversing arrows
(i.e., up to a sign modulo cycle relations),
we can assume that there are two edges $a\to 1$ and $1\to b$ attached to the vertex $1$.
Let $\widetilde{\Gamma}$ be the graph obtained from $\Gamma$ by adjoining the edge $b\to a$,
so that $C=\{a\to 1,\,1\to b,\,b\to a\}$ is an oriented cycle in $\widetilde{\Gamma}$.
Then the cycle relation $\sum_{e\in C}\widetilde{\Gamma}\backslash e=0$ gives, modulo cycle relations,
$$\Gamma=\widetilde{\Gamma}\backslash(b\to a)\,\equiv\,-\,\widetilde{\Gamma}\backslash(a\to 1)\,-\,\widetilde{\Gamma}\backslash(1\to b)\,.$$
Hence, $\Gamma$ is equivalent
to a linear combination of graphs in which there are $\ell-1$ edges in or out of the vertex $1$.
Proceeding by induction, we get the claim.
Next, suppose that $1$ is a leaf of $\Gamma$ connected with an edge to the vertex $i$
(if $1$ is an isolated vertex, let $i=2$).
Denote by $\Gamma^{\prime}\in\mathcal{G}(n-1)$
the subgraph of $\Gamma$ obtained by deleting the vertex $1$ and any edge attached to it.
Notice that, under the natural embedding of $\mathcal{G}(n-1)$ into $\mathcal{G}(n)$, every cycle relation in $\mathcal{G}(n-1)$ corresponds to a cycle relation in $\mathcal{G}(n)$.
By induction on $n$, $\Gamma^{\prime}$ is equivalent, modulo cycle relations,
to a disjoint union of lines, one of which starts at the vertex $i$
and the others satisfy the conditions (4.2).
Then $\Gamma$ is also a disjoint union of lines,
one of which starts with $1$.
This completes the proof.
∎
Remark 4.2.
In fact, in Theorem 4.7 below
we will prove that the set $\mathcal{L}(n)$ is a basis for $\mathbb{F}\mathcal{G}(n)/R(n)$,
where $R(n)$ is the subspace spanned by the cycle relations.
4.2. $\Gamma$-residues
Given $i\neq j\in\{1,\dots,n\}$,
we define the residue map
$$\operatorname{Res}_{z_{j}}\!dz_{i}\colon\mathcal{O}^{\star}_{n}(z_{1},\dots,z_%
{n})\to\mathcal{O}^{\star}_{n-1}(z_{1},\stackrel{{\scriptstyle i}}{{\check{%
\dots}}},z_{n})$$
(4.3)
where $\stackrel{{\scriptstyle i}}{{\check{\dots}}}$ means that the variable $z_{i}$ is skipped.
It is defined as the residue of a function $f(z_{1},\dots,z_{n})$,
viewed as a function of $z_{i}$, at $z_{i}=z_{j}$,
and is given by Cauchy’s formula.
Explicitly, let
$$f(z_{1},\dots,z_{n})=z_{ij}^{-\ell-1}g(z_{1},\dots,z_{n})\,\in\mathcal{O}^{%
\star}_{n}\,,$$
(4.4)
where $\ell\in\mathbb{Z}$ and $g$ has neither a zero nor a pole at $z_{i}=z_{j}$.
Then
$$\operatorname{Res}_{z_{j}}\!dz_{i}\,f(z_{1},\dots,z_{n})=\frac{1}{\ell!}\,\frac{\partial^{\ell}g}{\partial z_{i}^{\ell}}\Big{|}_{z_{i}=z_{j}}(z_{1},\stackrel{{\scriptstyle i}}{{\check{\dots}}},z_{n})\,\,\text{ if }\,\,\ell\geq 0\,,$$
and it is zero for $\ell<0$.
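For instance, with $\ell=1$ and $g=z_{1}/(z_{2}-z_{3})$, both sides of Cauchy’s formula equal $1/(z_{2}-z_{3})$; this can be confirmed symbolically (a sketch in Python with SymPy; the test function is our choice, not an example from the paper):

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')

# f = g / z12**(l+1) as in (4.4), with l = 1 and
# g = z1/(z2 - z3), which has neither a zero nor a pole at z1 = z2
l = 1
g = z1/(z2 - z3)
f = g/(z1 - z2)**(l + 1)

# Res_{z2} dz1 f: coefficient of u**(-1) in the Laurent expansion at z1 = z2 + u
u = sp.Dummy('u')
lhs = sp.series(f.subs(z1, z2 + u), u, 0, 1).removeO().coeff(u, -1)

# right-hand side of Cauchy's formula: (1/l!) d^l g / d z1^l, evaluated at z1 = z2
rhs = (sp.diff(g, z1, l)/sp.factorial(l)).subs(z1, z2)

assert sp.simplify(lhs - rhs) == 0   # both equal 1/(z2 - z3)
```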
Next, given a line $L=i_{1}\to i_{2}\to\dots\to i_{k}$, we define the map
$$\operatorname{Res}_{w}dL\colon\mathcal{O}^{\star}_{n}(z_{1},\dots,z_{n})\to%
\mathcal{O}^{\star}_{n-k+1}(z_{1},\stackrel{{\scriptstyle i_{1}\dots i_{k}}}{{%
\check{\dots}}},z_{n},w)\,,$$
(4.5)
given by
$$\operatorname{Res}_{w}dL\,f(z_{1},\dots,z_{n})=\operatorname{Res}_{z_{i_{k}}}%
\!dz_{i_{k-1}}\cdots\operatorname{Res}_{z_{i_{2}}}\!dz_{i_{1}}\,f(z_{1},\dots,%
z_{n})\,\Big{|}_{z_{i_{k}}=w}\,.$$
(4.6)
For example, if $L$ is a single vertex $i$ (i.e., $k=1$),
then the residue map (4.5) is just the substitution $z_{i}=w$,
while if $L=i\to j$ is of length $2$,
then we recover the residue map (4.3):
$$\operatorname{Res}_{z_{j}}\!dL\,f(z_{1},\dots,z_{n})=\operatorname{Res}_{z_{j}%
}\!dz_{i}\,f(z_{1},\dots,z_{n})\,.$$
Finally, let $\Gamma\in\mathcal{L}(n)$ be a disjoint union of lines $L_{1}\sqcup\dots\sqcup L_{p}$
as in (4.1).
In this case, we define the $\Gamma$-residue map
$$\operatorname{Res}_{w_{1},\dots,w_{p}}\!d\Gamma\colon\mathcal{O}^{\star}_{n}(z%
_{1},\dots,z_{n})\to\mathcal{O}^{\star}_{p}(w_{1},\dots,w_{p})\,,$$
(4.7)
given by
$$\operatorname{Res}_{w_{1},\dots,w_{p}}\!d\Gamma=\operatorname{Res}_{w_{1}}\!dL%
_{1}\circ\dots\circ\operatorname{Res}_{w_{p}}\!dL_{p}\,.$$
(4.8)
Note that, by definition, we have
$$\operatorname{Res}_{w_{1},\dots,w_{p}}\!d\Gamma(z_{i}f)=w_{\ell}\operatorname{%
Res}_{w_{1},\dots,w_{p}}\!d\Gamma f\,\,\text{ if }\,\,i=i^{\ell}_{k_{\ell}},\,%
\,1\leq\ell\leq p\,.$$
(4.9)
In the following lemmas we list some elementary properties of the $\Gamma$-residue maps,
which will be needed later.
Lemma 4.3.
For every $\Gamma\in\mathcal{L}(n)$,
the $\Gamma$-residue map (4.7)
preserves the translation invariance of functions,
i.e.,
$$\operatorname{Res}_{w_{1},\dots,w_{p}}\!d\Gamma\colon\mathcal{O}^{\star T}_{n}(z_{1},\dots,z_{n})\to\mathcal{O}^{\star T}_{p}(w_{1},\dots,w_{p})\,.$$
Proof.
It is enough to prove it for the map (4.3),
in which case it is obvious.
∎
Lemma 4.4.
Let $\Gamma\in\mathcal{L}(n)$ be a graph as in (4.1);
in particular, $|E(\Gamma)|=n-p$.
Then
$$\operatorname{Res}_{w_{1},\dots,w_{p}}\!d\Gamma\colon\operatorname{F}^{r}%
\mathcal{O}^{\star}_{n}(z_{1},\dots,z_{n})\to\operatorname{F}^{r+p-n}\mathcal{%
O}^{\star}_{p}(w_{1},\dots,w_{p})\,.$$
(4.10)
In particular,
$$\operatorname{Res}_{w_{1},\dots,w_{p}}\!d\Gamma(\operatorname{F}^{r}\mathcal{O%
}^{\star}_{n})=0\,\,\text{ for }\,\,r<|E(\Gamma)|\,.$$
(4.11)
Proof.
By the definition (4.6)-(4.8) of the $\Gamma$-residue map,
it is enough to prove that
$$\operatorname{Res}_{z_{j}}\!dz_{i}\colon\operatorname{F}^{r}\mathcal{O}^{\star%
}_{n}(z_{1},\dots,z_{n})\to\operatorname{F}^{r-1}\mathcal{O}^{\star}_{n-1}(z_{%
1},\stackrel{{\scriptstyle i}}{{\check{\dots}}},z_{n})\,.$$
This is immediate by Cauchy’s formula.
Indeed, if $f\in\operatorname{F}^{r}\mathcal{O}^{\star}_{n}$ is as in (4.4) with $\ell\geq 0$,
then $g\in\operatorname{F}^{r-1}\mathcal{O}^{\star}_{n}$,
and hence
the right-hand side of Cauchy’s formula lies in $\operatorname{F}^{r-1}\mathcal{O}^{\star}_{n-1}$.
By induction, we get (4.10).
Equation (4.11) is an obvious consequence of (4.10).
∎
Lemma 4.5.
Let $\Gamma\in\mathcal{L}(n)$ be as in (4.1).
For a function $f\in\mathcal{O}^{\star}_{n}$,
and $i\in\{1,\dots,n\}$, we have
$$\operatorname{Res}_{w_{1},\dots,w_{p}}\!d\Gamma\,(\partial_{z_{i}}f)=\begin{%
cases}\displaystyle{\partial_{w_{\ell}}\operatorname{Res}_{w_{1},\dots,w_{p}}%
\!d\Gamma\,f}&\text{ if }\,\,i=i^{\ell}_{k_{\ell}}\,,\,\,1\leq\ell\leq p\,,\\
0&\text{ if }\,\,i\not\in\{i^{1}_{k_{1}},\dots,i^{p}_{k_{p}}\}\,.\end{cases}$$
(4.12)
Proof.
If $f(z_{1},\dots,z_{n})\in\mathcal{O}^{\star}_{n}$ is as in (4.4),
then by Taylor expanding $g$, viewed as a rational function in $z_{i}$,
at $z_{i}=z_{j}$, we have
$$f(z_{1},\dots,z_{n})=\sum_{m=-\ell-1}^{\infty}z_{ij}^{m}f_{m}(z_{1},\stackrel{%
{\scriptstyle i}}{{\check{\dots}}},z_{n})\,,\quad f_{m}=\frac{1}{(m+\ell+1)!}%
\frac{\partial^{m+\ell+1}g}{\partial z_{i}^{m+\ell+1}}\Big{|}_{z_{i}=z_{j}}\,.$$
Then, by Cauchy’s formula, we have
$$\operatorname{Res}_{z_{j}}\!dz_{i}\/f=f_{-1}\,.$$
(4.13)
It follows from (4.13) that
$$\operatorname{Res}_{z_{j}}\!dz_{i}\/\frac{\partial f}{\partial z_{i}}=0\,,\,%
\text{ and }\,\operatorname{Res}_{z_{j}}\!dz_{i}\/\frac{\partial f}{\partial z%
_{k}}=\frac{\partial}{\partial z_{k}}\operatorname{Res}_{z_{j}}\!dz_{i}\/f\,\,%
\,\,\text{ if }k\neq i\,.$$
(4.14)
Equation (4.12) is an immediate consequence of (4.14)
and the definition (4.6)-(4.8) of the $\Gamma$-residue.
∎
Proposition 4.6.
Let $\Gamma,\Gamma^{\prime}\in\mathcal{L}(n)$
be such that $|E(\Gamma^{\prime})|=|E(\Gamma)|$.
Then, for every $q\in\mathcal{O}_{n}$, we have
$$\operatorname{Res}_{w_{1},\dots,w_{p}}\!d\Gamma\,p_{\Gamma^{\prime}}(z_{1},%
\dots,z_{n})q(z_{1},\dots,z_{n})=\delta_{\Gamma,\Gamma^{\prime}}q(z_{1},\dots,%
z_{n})\big{|}_{z_{i^{a}_{b}}=w_{a}\,\forall a,b}\,.$$
Proof.
If $|E(\Gamma)|=|E(\Gamma^{\prime})|=0$, the statement trivially holds.
Let $e=i\to j$ be the first edge of the first line of $\Gamma$ that is not a single vertex.
In other words, $e=1\to i^{1}_{2}$ if $k_{1}\geq 2$,
and, in general, $e=i^{\ell}_{1}\to i^{\ell}_{2}$ for the smallest $\ell$ such that $k_{\ell}\geq 2$.
Observe that if neither $i\to j$ nor $j\to i$ is an edge of the graph $\Gamma^{\prime}$,
then $p_{\Gamma^{\prime}}$ has no pole at $z_{i}=z_{j}$, and hence
$$\operatorname{Res}_{z_{j}}\!dz_{i}\,p_{\Gamma^{\prime}}q=0\,.$$
If instead $e=i\to j$ is an edge of $\Gamma^{\prime}$,
then
$$p_{\Gamma^{\prime}}=\frac{1}{z_{ij}}p_{\Gamma^{\prime}\backslash e}\,.$$
Hence, by Cauchy’s formula, we have
$$\operatorname{Res}_{z_{j}}\!dz_{i}\,p_{\Gamma^{\prime}}q=(p_{\Gamma^{\prime}%
\backslash e}\,q)\big{|}_{z_{i}=z_{j}}=p_{\bar{\Gamma}^{\prime}}(z_{1},%
\stackrel{{\scriptstyle i}}{{\check{\dots}}},z_{n})\cdot q|_{z_{i}=z_{j}}\,,$$
where $\bar{\Gamma}^{\prime}$ is the graph obtained from $\Gamma^{\prime}$
by contracting the edge $e$ into a single vertex labeled $j$.
We then have
$$\operatorname{Res}_{w_{1},\dots,w_{p}}\!d\Gamma\,p_{\Gamma^{\prime}}q=%
\operatorname{Res}_{w_{1},\dots,w_{p}}\!d\bar{\Gamma}\,p_{\bar{\Gamma}^{\prime%
}}q|_{z_{i}=z_{j}}\,,$$
where $\bar{\Gamma}$ is the graph obtained from $\Gamma$
by contracting the edge $e$ into a single vertex labeled $j$.
Note that both $\bar{\Gamma}$ and $\bar{\Gamma}^{\prime}$ have the same number of edges
and lie in $\mathcal{L}(n-1)$ after the relabelling of the vertices
$\varphi\colon\{1,\stackrel{{\scriptstyle i}}{{\check{\dots}}},n\}\to\{1,\dots,%
n-1\}$ given by
$$\varphi(m)=\begin{cases}m\,\,,\,\,\text{ for }\,\,m<j\,,\\
i\,\,,\,\,\text{ for }\,\,m=j\,,\\
m-1\,\,,\,\,\text{ for }\,\,m>j\,.\\
\end{cases}$$
As a consequence, we get by induction that
$$\operatorname{Res}_{w_{1},\dots,w_{p}}\!d\bar{\Gamma}\,p_{\bar{\Gamma}^{\prime%
}}q|_{z_{i}=z_{j}}=\delta_{\bar{\Gamma},\bar{\Gamma}^{\prime}}q(z_{1},\dots,z_%
{n})\big{|}_{z_{i^{a}_{b}}=w_{a}\,\forall a,b}\,.$$
If $\bar{\Gamma}\neq\bar{\Gamma}^{\prime}$, then $\Gamma\neq\Gamma^{\prime}$.
Conversely, if $\bar{\Gamma}=\bar{\Gamma}^{\prime}$,
then $\Gamma=\Gamma^{\prime}$ since they both lie in $\mathcal{L}(n)$
and $i<j$.
The claim follows.
∎
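The proposition can be tested symbolically in the first nontrivial case $n=3$, $p=2$ (a sketch in Python with SymPy; `res` and `res_gamma` are our ad hoc helper names, not notation from the paper):

```python
import sympy as sp

z1, z2, z3, w1, w2 = sp.symbols('z1 z2 z3 w1 w2')

def res(f, zi, zj):
    """Res_{zj} dzi f: coefficient of (zi - zj)**(-1) in the expansion at zi = zj."""
    u = sp.Dummy('u')
    return sp.series(sp.cancel(f).subs(zi, zj + u), u, 0, 1).removeO().coeff(u, -1)

# n = 3, p = 2: Gamma = (1 -> 2) together with the single vertex 3,
# so by (4.6)-(4.8), Res_{w1,w2} dGamma f = (Res_{z2} dz1 f)|_{z2 = w1, z3 = w2}
def res_gamma(f):
    return res(f, z1, z2).subs({z2: w1, z3: w2})

p_same = 1/(z1 - z2)   # p_{Gamma'} for Gamma' = Gamma
p_diff = 1/(z1 - z3)   # p_{Gamma''} for Gamma'' with edge 1 -> 3 instead

assert res_gamma(p_same) == 1          # delta_{Gamma,Gamma'}  = 1
assert res_gamma(p_diff) == 0          # delta_{Gamma,Gamma''} = 0
assert res_gamma(z1*p_same) == w1      # the factor q = z1 gets evaluated at w1
```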
Theorem 4.7.
The set $\mathcal{L}(n)$ is a basis for the quotient space $\mathbb{F}\mathcal{G}(n)/R(n)$,
where $\mathbb{F}\mathcal{G}(n)$ is the vector space with basis the set of graphs $\mathcal{G}(n)$,
and $R(n)$ is the subspace spanned by the cycle relations (i) and (ii) from Section 4.1.
Proof.
We already know by Lemma 4.1
that $\mathcal{L}(n)$ spans $\mathbb{F}\mathcal{G}(n)$ modulo cycle relations.
Hence, we only need to prove linear independence.
Let
$$\sum_{\Gamma\in\mathcal{L}(n)}c_{\Gamma}\Gamma\in R(n)\,,\qquad c_{\Gamma}\in%
\mathbb{F}\,.$$
Since the cycle relations are homogeneous in the number of edges,
we can assume that all the graphs $\Gamma$ appearing above have the
same number of edges, $s$.
Then,
$\sum_{\Gamma\in\mathcal{L}(n)}c_{\Gamma}\Gamma$
is a linear combination of graphs $\Gamma_{1}\in\mathcal{G}(n)$
with $s$ edges that contain a cycle,
and of
$\sum_{e\in C}\Gamma_{2}\backslash e$,
where $\Gamma_{2}\in\mathcal{G}(n)$ has $s+1$ edges and contains a cycle $C$.
It follows from Lemma 3.1(b) that
$$\sum_{\Gamma\in\mathcal{L}(n)}c_{\Gamma}p_{\Gamma}$$
(4.15)
is a linear combination of $p_{\Gamma_{1}}$,
where $\Gamma_{1}\in\mathcal{G}(n)$
have $s$ edges and contain a cycle.
Let $\Gamma\in\mathcal{L}(n)$ be as in (4.1) with $p=n-s$.
Applying $\operatorname{Res}_{w_{1},\dots,w_{p}}\!d\Gamma$
to (4.15), we get $c_{\Gamma}$ by Proposition 4.6.
On the other hand, by Lemma 3.1(a) and equation (4.11),
we get $c_{\Gamma}=0$.
∎
4.3. $\Gamma$-Fourier transforms
Let $\Gamma=L_{1}\sqcup\dots\sqcup L_{p}\in\mathcal{L}(n)$ be a disjoint
union of lines as in (4.1).
We define the $\Gamma$-exponential function as
$$E^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(z_{1},\dots,z_{n})=\prod_{\ell=1}^{%
p}E^{L_{\ell}}_{\lambda_{i^{\ell}_{1}},\dots,\lambda_{i^{\ell}_{k_{\ell}}}}(z_%
{i^{\ell}_{1}},\dots,z_{i^{\ell}_{k_{\ell}}})\,\in\mathcal{O}^{T}_{n}[[\lambda%
_{1},\dots,\lambda_{n}]]\,,$$
(4.16)
where, for a line $L=i_{1}\to i_{2}\to\dots\to i_{k}$, we let
$$E^{L}_{\lambda_{i_{1}},\dots,\lambda_{i_{k}}}(z_{i_{1}},\dots,z_{i_{k}})=\exp%
\Big{(}-\sum_{a=1}^{k-1}z_{i_{a}i_{k}}\lambda_{i_{a}}\Big{)}\,.$$
(4.17)
For example, if $k=1$ and $L$ consists of the single vertex $i$,
then
$E^{L}_{\lambda_{i}}(z_{i})=1$.
If $k=2$ and $L$ has one edge $i\to j$,
then
$E^{L}_{\lambda_{i},\lambda_{j}}(z_{i},z_{j})=e^{-z_{ij}\lambda_{i}}$.
Definition 4.8.
For $\Gamma\in\mathcal{L}(n,p)$ and $f\in\mathcal{O}^{\star}_{n}$,
the $\Gamma$-Fourier transform of $f$ is
$$\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(f;w_{1},\dots,w_{p})=%
\operatorname{Res}_{w_{1},\dots,w_{p}}\!d\Gamma\,f(z_{1},\dots,z_{n})E^{\Gamma%
}_{\lambda_{1},\dots,\lambda_{n}}(z_{1},\dots,z_{n})\,.$$
(4.18)
If we do not need to specify the variables $w_{1},\dots,w_{p}$, we will use the simplified notation
$\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(f)$.
Example 4.9.
If $p=n$ and $\Gamma=\bullet\cdots\bullet$ has no edges,
the corresponding Fourier transform is
$$\mathcal{F}^{\bullet\cdots\bullet}_{\lambda_{1},\dots,\lambda_{n}}(f;w_{1},%
\dots,w_{n})=f(w_{1},\dots,w_{n})\,.$$
Example 4.10.
If $p=1$ and $\Gamma=L=1\to\cdots\to n$ is a single line,
the corresponding Fourier transform is
$$\displaystyle{\mathcal{F}^{1\to\cdots\to n}_{\lambda_{1},\dots,\lambda_{n}}(f;%
w)=\operatorname{Res}_{w}\!dz_{n-1}\cdots\operatorname{Res}_{z_{2}}\!dz_{1}e^{%
-\sum_{i=1}^{n-1}(z_{i}-w)\lambda_{i}}\,f(z_{1},\dots,z_{n-1},w)\,.}$$
Note that, by Lemma 4.11 below, if $f\in\mathcal{O}^{\star T}_{n}$,
then
$$\mathcal{F}^{1\to\cdots\to n}_{\lambda_{1},\dots,\lambda_{n}}(f;w)=\mathcal{F}%
^{1\to\cdots\to n}_{\lambda_{1},\dots,\lambda_{n}}(f;0)\in\mathbb{F}[\lambda_{%
1},\dots,\lambda_{n}]\,$$
since $\mathcal{O}^{\star T}_{1}=\mathbb{F}$. In particular, the Fourier transform is independent of $w$.
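For $n=2$ and $\Gamma$ the single edge $1\to 2$, the $\Gamma$-Fourier transform can be computed by hand: $\mathcal{F}^{\Gamma}_{\lambda_{1},\lambda_{2}}(z_{12}^{-1})=1$ and $\mathcal{F}^{\Gamma}_{\lambda_{1},\lambda_{2}}(z_{12}^{-2})=-\lambda_{1}$. The following sketch (Python with SymPy; `fourier_edge` is our name) confirms this and also illustrates Lemma 4.13 and the sesquilinearity properties (4.19)–(4.20) established below:

```python
import sympy as sp

z1, z2, lam1, w = sp.symbols('z1 z2 lambda1 w')

def fourier_edge(f):
    """Gamma-Fourier transform (4.18) for n = 2 and Gamma = (1 -> 2):
    F(f; w) = Res_{z2} dz1 [ e^{-z12*lambda1} f ], then z2 = w."""
    u = sp.Dummy('u')   # u = z1 - z2
    ser = sp.series(sp.exp(-u*lam1)*f.subs(z1, z2 + u), u, 0, 1).removeO()
    return ser.coeff(u, -1).subs(z2, w)

f1 = 1/(z1 - z2)
assert fourier_edge(f1) == 1          # Lemma 4.13: F(p_Gamma) = 1
assert fourier_edge(f1**2) == -lam1
# (4.19), second case: F(df/dz1) = lambda1 * F(f), since 1 is not a tail vertex
assert sp.simplify(fourier_edge(sp.diff(f1, z1)) - lam1*fourier_edge(f1)) == 0
# (4.20): F(z1*f) = (w - d/dlambda1) F(f)
assert sp.simplify(fourier_edge(z1*f1)
                   - (w*fourier_edge(f1) - sp.diff(fourier_edge(f1), lam1))) == 0
```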
Lemma 4.11.
Let $\Gamma\in\mathcal{L}(n)$ be as in (4.1) and $f\in\mathcal{O}^{\star}_{n}$.
Then
$\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(f)\in\mathcal{O}^{\star}_%
{p}[\lambda_{1},\dots,\lambda_{n}]$.
If $f\in\mathcal{O}^{\star T}_{n}$, then
$\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(f)\in\mathcal{O}^{\star T%
}_{p}[\lambda_{1},\dots,\lambda_{n}]$.
Proof.
For the first claim, note that the $\Gamma$-residue is a composition, and the $\Gamma$-exponential a product,
over the lines $L_{1},\dots,L_{p}$ of $\Gamma$.
Hence, it is enough to consider the case
of a single line $L=i_{1}\to i_{2}\to\dots\to i_{k}$.
We have
$$\sum_{a=1}^{k-1}z_{i_{a}i_{k}}\lambda_{i_{a}}=z_{i_{1}i_{2}}\lambda_{i_{1}}+%
\sum_{a=2}^{k-1}z_{i_{a}i_{k}}\widetilde{\lambda}_{i_{a}}\,,$$
where $\widetilde{\lambda}_{i_{a}}=\lambda_{i_{a}}+\delta_{a,2}\lambda_{i_{1}}$.
Expanding the exponential $e^{-z_{i_{1}i_{2}}\lambda_{i_{1}}}$
and using that $f$ has a pole of finite order at $z_{i_{1}}=z_{i_{2}}$,
we obtain that
$$\operatorname{Res}_{z_{i_{2}}}\!dz_{i_{1}}\,f(z_{1},\dots,z_{n})E^{L}_{\lambda%
_{i_{1}},\dots,\lambda_{i_{k}}}(z_{i_{1}},\dots,z_{i_{k}})$$
is a polynomial in $\lambda_{i_{1}}$.
Proceeding by induction, we get the first claim.
The second claim follows from Lemma 4.3 and the fact that the $\Gamma$-exponential
(4.16) is translation invariant.
∎
Lemma 4.12.
Let $\Gamma\in\mathcal{L}(n)$ with $|E(\Gamma)|=s$.
Then
$\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(\operatorname{F}^{r}%
\mathcal{O}^{\star}_{n})=0$
for all $r<s$.
Proof.
It follows immediately from (4.11).
∎
Lemma 4.13.
Let $\Gamma,\Gamma^{\prime}\in\mathcal{L}(n)$ with $|E(\Gamma)|=|E(\Gamma^{\prime})|$.
Then
$\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(p_{\Gamma^{\prime}})=%
\delta_{\Gamma,\Gamma^{\prime}}$.
Proof.
It is an obvious consequence of Proposition 4.6.
∎
Lemma 4.14.
For $\Gamma\in\mathcal{L}(n,p)$ as in (4.1)
and $f\in\mathcal{O}^{\star}_{n}$, we have
$$\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(\partial_{z_{i}}f)=\begin%
{cases}\displaystyle{\Big{(}\partial_{w_{\ell}}-\sum_{a=1}^{k_{\ell}-1}\lambda%
_{i^{\ell}_{a}}\Big{)}\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(f)}%
&\text{ if }\,\,i=i^{\ell}_{k_{\ell}}\,,\,\,1\leq\ell\leq p\,,\\
\lambda_{i}\,\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(f)&\text{ if%
}\,\,i\not\in\{i^{1}_{k_{1}},\dots,i^{p}_{k_{p}}\}\,.\end{cases}$$
(4.19)
Proof.
It follows from Lemma 4.5
and the definition (4.16)–(4.17)
of $\Gamma$-exponential functions.
∎
Lemma 4.15.
For $\Gamma\in\mathcal{L}(n,p)$ as in (4.1)
and $f\in\mathcal{O}^{\star}_{n}$, we have
$$\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(z_{i}f)=(w_{\ell}-%
\partial_{\lambda_{i}})\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(f)\,,$$
(4.20)
for
$i\in\{i^{\ell}_{1},\dots,i^{\ell}_{k_{\ell}}\}$, $1\leq\ell\leq p$.
Note that $\partial_{\lambda_{i}}\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(f)=0$
for $i=i^{\ell}_{k_{\ell}}$.
Proof.
It follows from the definition (4.16)–(4.17)
of the $\Gamma$-exponential and equation (4.9).
∎
4.4. Convolution product
In this section, we define a bilinear convolution product
$$\mathcal{O}^{\star}_{p}\times\mathbb{F}[\Lambda_{1},\dots,\Lambda_{p}]\to%
\mathbb{F}[\Lambda_{1},\dots,\Lambda_{p}]\,,\qquad(F,Q)\mapsto F*Q(\Lambda_{1}%
,\dots,\Lambda_{p})\,,$$
(4.21)
as follows.
First, we introduce the map
$$\iota_{w_{1},\dots,w_{p}}\colon\mathcal{O}^{\star}_{p}(w_{1},\dots,w_{p})\to%
\mathbb{F}((w_{1}))\cdots((w_{p-1}))[[w_{p}]]\,,$$
(4.22)
defined as the geometric series expansion in the domain
$|w_{1}|>|w_{2}|>\dots>|w_{p}|$.
Then, for $F\in\mathcal{O}^{\star}_{p}$ and
$Q\in\mathbb{F}[\Lambda_{1},\dots,\Lambda_{p}]$,
we let
$$F*Q(\Lambda_{1},\dots,\Lambda_{p})=\iota_{w_{1},\dots,w_{p}}F(w_{1},\dots,w_{p%
})\big{|}_{w_{\ell}=-\partial_{\Lambda_{\ell}}}Q(\Lambda_{1},\dots,\Lambda_{p}%
)\,.$$
(4.23)
Let us explain why the right-hand side of (4.23)
is a well-defined polynomial.
We expand the formal Laurent series
$\widetilde{F}=\iota_{w_{1},\dots,w_{p}}F$
and the polynomial $Q$ as
$$\widetilde{F}=\sum_{a\in\mathbb{Z}^{p}}c_{a_{1},\dots,a_{p}}w_{1}^{a_{1}}%
\cdots w_{p}^{a_{p}}\;\text{ and }\;Q=\sum_{b\in\mathbb{Z}_{+}^{p}}d_{b_{1},%
\dots,b_{p}}\Lambda_{1}^{(b_{1})}\cdots\Lambda_{p}^{(b_{p})}\,.$$
Here and below we use the divided power notation
$\Lambda_{\ell}^{(b_{\ell})}:=\Lambda_{\ell}^{b_{\ell}}/b_{\ell}!$ for $b_{\ell}\geq 0$ and $\Lambda_{\ell}^{(b_{\ell})}=0$ for $b_{\ell}<0$.
Then we have
$$\begin{split}\displaystyle\widetilde{F}(w_{1},&\displaystyle\dots,w_{p})\big{|%
}_{w_{\ell}=-\partial_{\Lambda_{\ell}}}Q(\Lambda_{1},\dots,\Lambda_{p})\\
&\displaystyle=\sum_{a,b}c_{a_{1},\dots,a_{p}}d_{b_{1},\dots,b_{p}}(-1)^{a_{1}%
+\cdots+a_{p}}\Lambda_{1}^{(b_{1}-a_{1})}\cdots\Lambda_{p}^{(b_{p}-a_{p})}\,.%
\end{split}$$
(4.24)
We claim that the sum in the right-hand side of (4.24) is finite.
Indeed,
the sum over $b\in\mathbb{Z}_{+}^{p}$ is finite.
For every fixed $b$,
the sum over $a_{p}$ is finite;
for every $a_{p}$ the sum over $a_{p-1}$ is finite, and so on.
Hence, (4.24) is a well-defined polynomial.
Note that (4.22) is an algebra homomorphism.
However, the convolution product (4.21)
does not define an action of the algebra $\mathcal{O}^{\star}_{p}$
on the space of polynomials.
For example, we have
$$\frac{1}{w_{1}-w_{2}}*\big{(}(w_{1}-w_{2})*1\big{)}=0\,\,\text{ while }\,\,%
\frac{w_{1}-w_{2}}{w_{1}-w_{2}}*1=1\,.$$
Nevertheless, we have the following lemma.
Lemma 4.16.
For every $F\in\mathcal{O}^{\star}_{p}(w_{1},\dots,w_{p})$ and $Q\in\mathbb{F}[\Lambda_{1},\dots,\Lambda_{p}]$, we have $(\ell=1,\dots,p):$
$$\displaystyle(w_{\ell}F)*Q=-\partial_{\Lambda_{\ell}}(F*Q)\,,$$
(4.25)
$$\displaystyle\Lambda_{\ell}(F*Q)-F*(\Lambda_{\ell}Q)=(\partial_{w_{\ell}}F)*Q\,.$$
(4.26)
Proof.
For simplicity of notation, let $\ell=1$. By linearity, we can assume that
$F=w_{1}^{a_{1}}\cdots w_{p}^{a_{p}}$ and $Q=\Lambda_{1}^{(b_{1})}\cdots\Lambda_{p}^{(b_{p})}$,
where $a_{i}\in\mathbb{Z}$, $b_{i}\in\mathbb{Z}_{+}$. Then using (4.24), we find:
$$\displaystyle(w_{1}F)*Q$$
$$\displaystyle=(-1)^{1+a_{1}+\cdots+a_{p}}\,\Lambda_{1}^{(b_{1}-a_{1}-1)}%
\Lambda_{2}^{(b_{2}-a_{2})}\cdots\Lambda_{p}^{(b_{p}-a_{p})}\,,$$
$$\displaystyle\partial_{\Lambda_{1}}(F*Q)$$
$$\displaystyle=(-1)^{a_{1}+\cdots+a_{p}}\,\Lambda_{1}^{(b_{1}-a_{1}-1)}\Lambda_%
{2}^{(b_{2}-a_{2})}\cdots\Lambda_{p}^{(b_{p}-a_{p})}\,,$$
$$\displaystyle\Lambda_{1}(F*Q)$$
$$\displaystyle=(-1)^{a_{1}+\cdots+a_{p}}\,(b_{1}+1-a_{1})\Lambda_{1}^{(b_{1}+1-%
a_{1})}\Lambda_{2}^{(b_{2}-a_{2})}\cdots\Lambda_{p}^{(b_{p}-a_{p})}\,,$$
$$\displaystyle F*(\Lambda_{1}Q)$$
$$\displaystyle=(-1)^{a_{1}+\cdots+a_{p}}\,(b_{1}+1)\Lambda_{1}^{(b_{1}+1-a_{1})%
}\Lambda_{2}^{(b_{2}-a_{2})}\cdots\Lambda_{p}^{(b_{p}-a_{p})}\,,$$
$$\displaystyle(\partial_{w_{1}}F)*Q$$
$$\displaystyle=(-1)^{1+a_{1}+\cdots+a_{p}}\,a_{1}\Lambda_{1}^{(b_{1}+1-a_{1})}%
\Lambda_{2}^{(b_{2}-a_{2})}\cdots\Lambda_{p}^{(b_{p}-a_{p})}\,.$$
The claim follows.
∎
Note that neither side of (4.25)
is necessarily equal to $-F*(\partial_{\Lambda_{\ell}}Q)$.
Indeed, in the same setting as in the proof of Lemma 4.16,
we have
$$F*(\partial_{\Lambda_{1}}Q)=(-1)^{a_{1}+\cdots+a_{p}}\,\Lambda_{1}^{(b_{1}-a_{%
1}-1)}\Lambda_{2}^{(b_{2}-a_{2})}\cdots\Lambda_{p}^{(b_{p}-a_{p})}\,,$$
unless $a_{1}<b_{1}=0$, in which case $F*(\partial_{\Lambda_{1}}Q)=0$,
while the right-hand side above is not $0$.
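The convolution product (4.23)–(4.24) for $p=2$ can be implemented symbolically, which allows one to verify the example above and the identities of Lemma 4.16 on samples (a sketch in Python with SymPy; the truncation order `N` and all names are our choices, not part of the paper):

```python
import sympy as sp

w1, w2, L1, L2 = sp.symbols('w1 w2 Lambda1 Lambda2')

def _act(a, b):
    """w**a |-> (-d/dLambda)**a acting on Lambda**b, as in (4.24): in divided
    powers Lambda^(b) -> (-1)**a Lambda^(b-a), zero for b - a < 0.
    Returns (coefficient, new exponent)."""
    if b - a < 0:
        return sp.Integer(0), 0
    return sp.Integer(-1)**a*sp.factorial(b)/sp.factorial(b - a), b - a

def conv(F, Q, N=12):
    """Convolution product F * Q of (4.23)-(4.24) for p = 2.
    The map iota of (4.22) expands F in the domain |w1| > |w2|;
    we truncate the expansion at order N (enough when deg Q < N - 1)."""
    u = sp.Dummy('u')
    Ft = sp.expand(sp.series(F.subs(w1, 1/u), u, 0, N).removeO().subs(u, 1/w1))
    out = sp.Integer(0)
    for (b1, b2), cQ in sp.Poly(sp.expand(Q), L1, L2).terms():
        for term in sp.Add.make_args(Ft):
            c, a1 = term.as_coeff_exponent(w1)
            c, a2 = c.as_coeff_exponent(w2)
            c1, e1 = _act(a1, b1)
            c2, e2 = _act(a2, b2)
            out += cQ*c*c1*c2*L1**e1*L2**e2
    return sp.expand(out)

one = sp.Integer(1)
F = 1/(w1 - w2)

# the example above: (1/(w1-w2)) * ((w1-w2) * 1) = 0, while 1 * 1 = 1
assert conv(w1 - w2, one) == 0 and conv(F, conv(w1 - w2, one)) == 0
assert conv(one, one) == 1

# Lemma 4.16 on a sample: identities (4.25) and (4.26) with Q = Lambda1**2*Lambda2
Q = L1**2*L2
assert sp.expand(conv(w1*F, Q) + sp.diff(conv(F, Q), L1)) == 0
assert sp.expand(L1*conv(F, Q) - conv(F, L1*Q) - conv(sp.diff(F, w1), Q)) == 0
```

The truncation is harmless for the same reason the sum in (4.24) is finite: monomials $w_{2}^{a_{2}}$ with $a_{2}>\deg_{\Lambda_{2}}Q$ act by zero.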
5. Relation between chiral and classical operads
5.1. Preliminary notation
Let $V$ be a superspace with an even endomorphism $\partial$,
which is $\mathbb{Z}_{+}$-graded by $\mathbb{F}[\partial]$-submodules:
$$V=\bigoplus_{s\in\mathbb{Z}_{+}}V_{s}\,.$$
(5.1)
We have the induced increasing filtration by $\mathbb{F}[\partial]$-submodules
$$\operatorname{F}^{t}V=\bigoplus_{s=0}^{t}V_{s}\,,$$
(5.2)
and the associated graded $\operatorname{gr}V$ is canonically isomorphic to $V$.
Let $Y\in\operatorname{gr}^{r}P^{\mathrm{cl}}(n)$ and let $\Gamma\in\mathcal{L}(n,p)$
be as in (4.1).
Recall that, for $v\in V^{\otimes n}$,
by the sesquilinearity axiom (3.8),
$Y^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(v)$ is a polynomial in the variables
$$\Lambda_{\ell}=\lambda_{L_{\ell}}=\lambda_{i^{\ell}_{1}}+\dots+\lambda_{i^{%
\ell}_{k_{\ell}}}\,,\qquad\ell=1,\dots,p\,.$$
(5.3)
By an abuse of notation, we shall then alternatively write
$$Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}(v)=Y^{\Gamma}_{\lambda_{1},\dots,%
\lambda_{n}}(v)\,,\qquad\Gamma\in\mathcal{L}(n,p)\,.$$
Recall also that, for $i=1,\dots,n$, $\partial_{i}\colon V^{\otimes n}\to V^{\otimes n}$
denotes the action of $\partial$ on the $i$-th factor.
For a polynomial
$$P(x_{1},\dots,x_{n})=\sum c_{j_{1},\dots,j_{n}}x_{1}^{j_{1}}\cdots x_{n}^{j_{n%
}}\,,$$
we will use the notation
$$P(x_{1},\dots,x_{n})\big{(}\big{|}_{x_{i}=\partial_{i}}\,v\big{)}=\sum c_{j_{1%
},\dots,j_{n}}\partial_{1}^{j_{1}}\cdots\partial_{n}^{j_{n}}v\,.$$
(5.4)
5.2. The inverse map
Given $f\in\mathcal{O}^{\star T}_{n}$
and $v\in V^{\otimes n}$,
we define
$$X_{\lambda_{1},\dots,\lambda_{n}}(v\otimes f)=\sum_{p=1}^{n}\sum_{\Gamma\in%
\mathcal{L}(n,p)}\!\!\!\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{%
n}+x_{n}}(f)*Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big{(}\big{|}_{x_{i}=%
\partial_{i}}v\big{)}\,.$$
(5.5)
Let us explain the meaning of this formula.
By Lemma 4.11,
the $\Gamma$-Fourier transform is a finite sum
$$\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{n}+x_{n}}(f)=\sum_{a\in%
\mathbb{Z}_{+}^{n}}F_{a}(w_{1},\dots,w_{p})\,(\lambda_{1}+x_{1})^{(a_{1})}%
\cdots(\lambda_{n}+x_{n})^{(a_{n})}$$
with coefficients $F_{a}\in\mathcal{O}^{\star T}_{p}$.
According to the notation (5.4),
we apply the $x_{i}$ as $\partial_{i}$ on the vector $v$
in the argument of $Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}$.
Then we take the convolution product (4.21)
of each $F_{a}$
with $Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}$,
which is a polynomial in $\Lambda_{1},\dots,\Lambda_{p}$.
As a result, each summand in the right-hand side of (5.5) is
$$\sum_{a,b\in\mathbb{Z}_{+}^{n}}\lambda_{1}^{(a_{1})}\cdots\lambda_{n}^{(a_{n})%
}F_{a+b}*Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big{(}\partial_{1}^{(b_{1}%
)}\cdots\partial_{n}^{(b_{n})}v\big{)}\,.$$
(5.6)
Finally, we make the substitution (5.3)
to get a polynomial in $\lambda_{1},\dots,\lambda_{n}$
with coefficients in $V$.
Theorem 5.1.
Let $V$ be a superspace with an even endomorphism $\partial$,
endowed with a $\mathbb{Z}_{+}$-grading (5.1) by $\mathbb{F}[\partial]$-submodules,
and with the associated increasing filtration (5.2).
Then for $Y\in\operatorname{gr}^{r}P^{\mathrm{cl}}(n)$, formula (5.5)
defines an element $X\in\operatorname{F}^{r}P^{\mathrm{ch}}(n)$.
The obtained linear map
$\operatorname{gr}^{r}P^{\mathrm{cl}}(n)\to\operatorname{F}^{r}P^{\mathrm{ch}}(n)$
sending $Y\mapsto X$
induces a map $\operatorname{gr}^{r}P^{\mathrm{cl}}(n)\to\operatorname{gr}^{r}P^{\mathrm{ch}}%
(n)$, which is
the inverse of the map (3.15).
Consequently,
the homomorphism of operads $\operatorname{gr}P^{\mathrm{ch}}(n)(V)\to\operatorname{gr}P^{\mathrm{cl}}(n)(%
\operatorname{gr}V)$
defined by (3.15) is an isomorphism.
Proof.
First, we prove that for $f\in\mathcal{O}^{\star T}_{n}$ the right-hand side of (5.5)
is a well-defined element of the quotient space
$V[\lambda_{1},\dots,\lambda_{n}]/\langle\partial+\lambda_{1}+\dots+\lambda_{n}\rangle$,
i.e., it does not depend on the choice of the representative
of $Y^{\Gamma}$ in the quotient space
$V[\Lambda_{1},\dots,\Lambda_{p}]/\langle\partial+\Lambda_{1}+\dots+\Lambda_{p}\rangle$.
Indeed, suppose
$$Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big{(}\partial_{1}^{(b_{1})}\cdots%
\partial_{n}^{(b_{n})}v\big{)}=(\partial+\Lambda_{1}+\dots+\Lambda_{p})Q_{b}%
\in\langle\partial+\Lambda_{1}+\dots+\Lambda_{p}\rangle\,,$$
for some $Q_{b}\in V[\Lambda_{1},\dots,\Lambda_{p}]$.
Then, by Lemma 4.16, we have
$$\displaystyle F$$
$${}_{a+b}*Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big{(}\partial_{1}^{(b_{1}%
)}\cdots\partial_{n}^{(b_{n})}v\big{)}$$
$$\displaystyle=F_{a+b}*\big{(}(\partial+\Lambda_{1}+\dots+\Lambda_{p})Q_{b}\big%
{)}$$
$$\displaystyle=(\partial+\Lambda_{1}+\dots+\Lambda_{p})(F_{a+b}*Q_{b})-\big{(}(%
\partial_{w_{1}}+\dots+\partial_{w_{p}})F_{a+b}\big{)}*Q_{b}$$
$$\displaystyle=(\partial+\lambda_{1}+\dots+\lambda_{n})(F_{a+b}*Q_{b})\equiv 0\,,$$
since $F_{a+b}$ is translation invariant.
Next, we check that the map
$$X\colon V^{\otimes n}\otimes\mathcal{O}^{\star T}_{n}\to V[\lambda_{1},\dots,%
\lambda_{n}]/\langle\partial+\lambda_{1}+\dots+\lambda_{n}\rangle$$
satisfies the sesquilinearity relations (2.4)–(2.5),
i.e., it is a chiral map $X\in P^{\mathrm{ch}}(n)$.
For $i\not\in\{i^{1}_{k_{1}},\dots,i^{p}_{k_{p}}\}$, we have,
by Lemma 4.14,
$$\displaystyle X$$
$${}_{\lambda_{1},\dots,\lambda_{n}}(v\otimes\partial_{z_{i}}f)$$
$$\displaystyle=\sum_{p=1}^{n}\sum_{\Gamma\in\mathcal{L}(n,p)}\!\!\!\mathcal{F}^%
{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{n}+x_{n}}(\partial_{z_{i}}f)*Y^{%
\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big{(}\big{|}_{x_{i}=\partial_{i}}v%
\big{)}$$
$$\displaystyle=\sum_{p=1}^{n}\sum_{\Gamma\in\mathcal{L}(n,p)}\!\!\!(\lambda_{i}%
+x_{i})\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{n}+x_{n}}(f)*Y^{%
\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big{(}\big{|}_{x_{i}=\partial_{i}}v%
\big{)}$$
$$\displaystyle=X_{\lambda_{1},\dots,\lambda_{n}}((\lambda_{i}+\partial_{i})v%
\otimes f)\,.$$
Next, let $i=i^{\ell}_{k_{\ell}}$, $1\leq\ell\leq p$.
By Lemmas 4.14 and 4.16, we have
$$\displaystyle\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{n}+x_{n}}(%
\partial_{z_{i}}f)*Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big{(}\big{|}_{x%
_{i}=\partial_{i}}v\big{)}$$
$$\displaystyle=\bigg{(}\Big{(}\partial_{w_{\ell}}-\sum_{a=1}^{k_{\ell}-1}(%
\lambda_{i^{\ell}_{a}}+x_{i^{\ell}_{a}})\Big{)}\mathcal{F}^{\Gamma}_{\lambda_{%
1}+x_{1},\dots,\lambda_{n}+x_{n}}(f)\bigg{)}*Y^{\Gamma}_{\Lambda_{1},\dots,%
\Lambda_{p}}\big{(}\big{|}_{x_{i}=\partial_{i}}v\big{)}$$
$$\displaystyle=\Big{(}\Lambda_{\ell}-\sum_{a=1}^{k_{\ell}-1}\lambda_{i^{\ell}_{%
a}}\Big{)}\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{n}+x_{n}}(f)*%
Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big{(}\big{|}_{x_{i}=\partial_{i}}v%
\big{)}$$
$$\displaystyle-\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{n}+x_{n}}%
(f)*\bigg{(}\Big{(}\Lambda_{\ell}+\sum_{a=1}^{k_{\ell}-1}x_{i^{\ell}_{a}}\Big{%
)}Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big{(}\big{|}_{x_{i}=\partial_{i}%
}v\big{)}\bigg{)}$$
$$\displaystyle=\lambda_{i}\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda%
_{n}+x_{n}}(f)*Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big{(}\big{|}_{x_{i}%
=\partial_{i}}v\big{)}$$
$$\displaystyle+\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{n}+x_{n}}%
(f)*Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big{(}\big{|}_{x_{i}=\partial_{%
i}}\partial_{i}v\big{)}\,.$$
For the last equality, we used the sesquilinearity (3.9) of $Y^{\Gamma}$.
This proves (2.4).
Next, let us prove equation (2.5).
Let $i\in\{i^{\ell}_{1},\dots,i^{\ell}_{k_{\ell}}\}$ and $j\in\{i^{h}_{1},\dots,i^{h}_{k_{h}}\}$.
By Lemma 4.15
and equation (4.25), we have
$$\displaystyle\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{n}+x_{n}}(z_{ij}f)*Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big(\big|_{x_{i}=\partial_{i}}v\big)$$
$$\displaystyle=\big((w_{\ell}-w_{h}-\partial_{\lambda_{i}}+\partial_{\lambda_{j}})\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{n}+x_{n}}(f)\big)*Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big(\big|_{x_{i}=\partial_{i}}v\big)$$
$$\displaystyle=(-\partial_{\Lambda_{\ell}}+\partial_{\Lambda_{h}})\big(\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{n}+x_{n}}(f)*Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big(\big|_{x_{i}=\partial_{i}}v\big)\big)$$
$$\displaystyle\quad+\big((-\partial_{\lambda_{i}}+\partial_{\lambda_{j}})\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{n}+x_{n}}(f)\big)*Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big(\big|_{x_{i}=\partial_{i}}v\big)$$
$$\displaystyle=(-\partial_{\lambda_{i}}+\partial_{\lambda_{j}})\big(\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{n}+x_{n}}(f)*Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big(\big|_{x_{i}=\partial_{i}}v\big)\big)\,.$$
For the last equality we used the chain rule
and $\partial_{\lambda_{i}}\Lambda_{\ell^{\prime}}=\delta_{\ell,\ell^{\prime}}$.
This proves (2.5).
Hence, $X\in P^{\mathrm{ch}}(n)$.
Next, we prove that $X$ lies in the $r$-th filtered space $\operatorname{F}^{r}P^{\mathrm{ch}}(n)$,
provided that $Y\in\operatorname{gr}^{r}P^{\mathrm{cl}}(n)$.
Let $f\in\operatorname{F}^{s}\mathcal{O}^{\star T}_{n}$
and $v\in\operatorname{F}^{t}(V^{\otimes n})$.
By Lemma 4.12,
$\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(f)=0$
unless $|E(\Gamma)|=n-p\leq s$.
In this case, by the definition (3.10) of the grading of $P^{\mathrm{cl}}$,
we have
$$\displaystyle Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}(v)\in(\operatorname{gr}^{n-p+t-r}V)[\lambda_{1},\dots,\lambda_{n}]/\langle\partial+\lambda_{1}+\dots+\lambda_{n}\rangle$$
$$\displaystyle\subset(\operatorname{F}^{s+t-r}V)[\lambda_{1},\dots,\lambda_{n}]/\langle\partial+\lambda_{1}+\dots+\lambda_{n}\rangle\,.$$
The claim follows from the facts that the filtration of $V$ is invariant under the action of $\partial$
and the convolution product does not act on the coefficients (in $V$) of the polynomials.
To complete the proof of the theorem,
we are left to check that, for $X$ as in (5.5),
the image of $X$ under the map (3.15) coincides with $Y$.
Indeed, by definition (3.13), the image of $X$ under (3.15)
maps $\Gamma^{\prime}\in\mathcal{G}(n)$ with $s$ edges
and $\bar{v}=v+\operatorname{F}^{t-1}V^{\otimes n}\in\operatorname{gr}^{t}V^{\otimes n}$
to
$$\begin{split}\overline{X_{\lambda_{1},\dots,\lambda_{n}}(v\otimes p_{\Gamma^{\prime}})}&=\sum_{p=1}^{n}\sum_{\Gamma\in\mathcal{L}(n,p)}\mathcal{F}^{\Gamma}_{\lambda_{1}+x_{1},\dots,\lambda_{n}+x_{n}}(p_{\Gamma^{\prime}})*Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}\big(\big|_{x_{i}=\partial_{i}}v\big)\\
&\quad+(\operatorname{F}^{s+t-r-1}V)[\lambda_{1},\dots,\lambda_{n}]/\langle\partial+\lambda_{1}+\dots+\lambda_{n}\rangle\,.\end{split}$$
(5.7)
Since $p_{\Gamma^{\prime}}\in\operatorname{F}^{s}\mathcal{O}^{\star T}_{n}$,
by Lemma 4.12 the sum over $p$ in (5.7)
can be restricted by the inequality $n-p=|E(\Gamma)|\leq s=|E(\Gamma^{\prime})|$.
On the other hand, if $n-p<s$, by (3.10)
we have
$$Y^{\Gamma}_{\Lambda_{1},\dots,\Lambda_{p}}(v)\in(\operatorname{F}^{s+t-r-1}V)[\lambda_{1},\dots,\lambda_{n}]/\langle\partial+\lambda_{1}+\dots+\lambda_{n}\rangle\,;$$
hence the corresponding terms vanish in (5.7).
We can thus restrict the sum over $p$ in (5.7) to $|E(\Gamma)|=|E(\Gamma^{\prime})|$.
In this case, by Lemma 4.13, we have
$$\mathcal{F}^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(p_{\Gamma^{\prime}})=\delta_{\Gamma,\Gamma^{\prime}}\,.$$
Therefore, the right-hand side of (5.7)
becomes $\overline{Y^{\Gamma}_{\lambda_{1},\dots,\lambda_{n}}(v)}$,
completing the proof.
∎
5.3. Examples
In this section, we write down explicitly formula (5.5)
in some special cases.
First, assume that $Y\in P^{\mathrm{cl}}(n)$ is such that
$Y^{\Gamma}=0$ unless $|E(\Gamma)|=0$.
For example, this happens for $Y\in\operatorname{gr}^{0}P^{\mathrm{cl}}(n)$,
provided that $V=V_{0}$ has trivial grading (5.1).
Under the above assumption, only the summand with $p=n$
and $\Gamma=\bullet\cdots\bullet$ is nonvanishing in (5.5).
Thus, by Example 4.9, we get
$$X_{\lambda_{1},\dots,\lambda_{n}}(v\otimes f)=f(w_{1},\dots,w_{n})*Y^{\bullet\cdots\bullet}_{\lambda_{1},\dots,\lambda_{n}}(v)\,.$$
(5.8)
Next, assume that $Y\in P^{\mathrm{cl}}(n)$ is such that,
for $\Gamma\in\mathcal{L}(n)$,
$Y^{\Gamma}=0$ unless $\Gamma$ is the single line $1\to\cdots\to n$.
In this case, by Example 4.10, we obtain
$$\begin{split}X_{\lambda_{1},\dots,\lambda_{n}}(v\otimes f)&=\operatorname{Res}_{0}\!dz_{n-1}\operatorname{Res}_{z_{n-1}}\!dz_{n-2}\cdots\operatorname{Res}_{z_{2}}\!dz_{1}\\
&\quad\times f(z_{1},\dots,z_{n-1},0)\,Y^{1\to\cdots\to n}\big(e^{-\sum_{i=1}^{n-1}z_{i}(\lambda_{i}+\partial_{i})}v\big)\,.\end{split}$$
(5.9)
In the case $n=1$, formulas (5.8) and (5.9)
reduce to $X_{\lambda}(v\otimes c)=c\,Y^{\bullet}(v)$,
where $c\in\mathcal{O}^{\star T}_{1}=\mathbb{F}$.
Finally, we consider the case $n=2$. In this case, $\mathcal{L}(2)$ consists only of the two graphs
$\bullet\,\,\,\bullet$ and $1\to 2$.
Hence, by (5.8) and (5.9), we get
$$X_{\lambda_{1},\lambda_{2}}(v\otimes f)=f(w_{1},w_{2})*Y^{\bullet\,\,\bullet}_{\lambda_{1},\lambda_{2}}(v)+\operatorname{Res}_{0}\!dz_{1}\,f(z_{1},0)\,Y^{1\to 2}_{\Lambda_{1}}\big(e^{-z_{1}(\lambda_{1}+\partial_{1})}v\big)\,.$$
Note that $Y^{\bullet\,\,\bullet}$ has values in $V[\lambda_{1},\lambda_{2}]/\langle\partial+\lambda_{1}+\lambda_{2}\rangle\simeq V[\lambda]$,
where we set $\lambda_{1}=\lambda$ and $\lambda_{2}=-\lambda-\partial$.
Hence, we denote its values as $Y^{\bullet\,\,\bullet}_{\lambda}(v)$.
Recall also that $Y^{1\to 2}_{\Lambda_{1}}(v)$ is independent of $\Lambda_{1}$,
so we omit the subscript $\Lambda_{1}$.
Moreover, since $\mathcal{O}^{\star T}_{2}=\mathbb{F}[z_{12}^{\pm 1}]$,
we may take $f(z_{1},z_{2})=z_{12}^{m}$, $m\in\mathbb{Z}$.
Under this setting, the previous formula can be rewritten as follows:
$$X_{\lambda}(v_{1}\otimes v_{2}\otimes z_{12}^{m})=(-1)^{m}\partial_{\lambda}^{m}Y^{\bullet\,\,\bullet}_{\lambda}(v_{1}\otimes v_{2})+(-1)^{m+1}Y^{1\to 2}\big((\lambda+\partial)^{(-m-1)}v_{1}\otimes v_{2}\big)\,.$$
(5.10)
As before, we are using the divided power notation:
$\lambda^{(-m-1)}=0$ for $m\geq 0$
and $\lambda^{(-m-1)}=\lambda^{-m-1}/(-m-1)!$ for $m<0$.
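For readers implementing such formulas, the divided-power convention can be encoded directly. The helper below is our own illustration (not part of the text's formalism); it returns the scalar factor $1/(-m-1)!$ multiplying $\lambda^{-m-1}$ in $\lambda^{(-m-1)}$, which is $0$ for $m\geq 0$:

```python
import math

def divided_power_coeff(m):
    """Coefficient of lambda^{-m-1} in the divided power lambda^{(-m-1)}:
    0 for m >= 0, and 1/(-m-1)! for m < 0, matching the convention above."""
    j = -m - 1
    return 0 if j < 0 else 1 / math.factorial(j)

assert divided_power_coeff(0) == 0        # lambda^{(-1)} = 0
assert divided_power_coeff(-1) == 1       # lambda^{(0)} = 1
assert divided_power_coeff(-4) == 1 / 6   # lambda^{(3)} = lambda^3 / 3!
```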
Equation (5.10)
agrees with the corresponding formulas in the proof of Theorem 10.10 of [BDSHK18].
5.4. Relation to the operad $\mathcal{L}ie$
Let $V=\Pi\mathbb{F}$ be an odd, $1$-dimensional vector superspace, considered as an $\mathbb{F}[\partial]$-module with $\partial=0$. We see from (1.3)–(1.4) that $P^{\mathrm{ch}}(2)$ is a $1$-dimensional vector space. Indeed, any operation is determined by the image of $z_{12}^{-1}\in\mathcal{O}_{2}^{\star T}$. In fact, it follows from [BDSHK18, (6.25)] that $P^{\mathrm{ch}}(2)$ is the non-trivial representation of the symmetric group $S_{2}$ on two elements. Let us call $\mu\in P^{\mathrm{ch}}(2)$ the operation such that $\mu\left(z_{12}^{-1}\right)=1$. Consider the operad $\mathcal{L}ie$ of Lie algebras, in which the vector space of $n$-ary operations has as basis the iterated brackets:
$$\mathcal{L}ie(n)=\bigoplus_{\substack{\sigma\in S_{n}\\ \sigma(1)=1}}[x_{\sigma(1)},[x_{\sigma(2)},[\cdots,x_{\sigma(n)}]\cdots]].$$
In particular,
$\mathcal{L}ie(2)$ is the non-trivial $1$-dimensional representation of $S_{2}$, with basis $[x_{1},x_{2}]$. As an application of Theorem 5.1 we obtain the following.
Theorem 5.2 ([BD04, 3.1.5]).
There is a unique isomorphism of operads
$$P^{\mathrm{ch}}(\Pi\mathbb{F})\simeq\mathcal{L}ie\qquad\text{such that}\qquad P^{\mathrm{ch}}(2)\ni\mu\mapsto[x_{1},x_{2}]\in\mathcal{L}ie(2).$$
Proof.
By Theorem 5.1, it is enough to prove the isomorphism
of graded operads $P^{\mathrm{cl}}(\Pi\mathbb{F})\simeq\mathcal{L}ie$.
Let $Y\in P^{\mathrm{cl}}(n)$ and $\Gamma\in\mathcal{G}(n)$. We see from (3.9) that $Y^{\Gamma}$ vanishes unless $\Gamma$ is connected,
in which case $Y^{\Gamma}\colon\mathbb{F}\rightarrow\mathbb{F}$. It follows that $P^{\mathrm{cl}}(n)$ is the quotient of $\mathbb{F}\mathcal{G}_{c}(n)$ by the cycle relations (3.6), where $\mathcal{G}_{c}(n)$ is the subset of connected graphs.
Recall from [BDSHK18, A.5] that to each $n$-ary operation in $\mathcal{L}ie(n)$ we can associate a connected graph in $\mathcal{G}(n)$ in a way compatible with the operadic compositions. We see from Theorem 4.7 that a basis for $P^{\mathrm{cl}}(n)$ is given by connected lines in $\mathcal{L}(n)$. It follows that $\dim P^{\mathrm{cl}}(n)=\dim\mathcal{L}ie(n)=(n-1)!$. The line
$$\Gamma\colon\quad\underset{\sigma(1)=1}{\circ}\longrightarrow\underset{\sigma(2)}{\circ}\longrightarrow\cdots\longrightarrow\underset{\sigma(n)}{\circ}$$
is associated to the corresponding iterated bracket in (5.11); therefore, the map $\mathcal{L}ie(n)\rightarrow P^{\mathrm{cl}}(n)$ is an isomorphism. Uniqueness follows since any operation in $\mathcal{L}ie$ is a composition of binary operations.
∎
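The dimension count $\dim\mathcal{L}ie(n)=(n-1)!$ used in the proof can be checked by enumerating the permutations $\sigma\in S_{n}$ with $\sigma(1)=1$ that index the basis (5.11); the following Python sketch is purely illustrative:

```python
import math
from itertools import permutations

def lie_basis_perms(n):
    """Permutations of {1,...,n} fixing 1, which index the
    iterated-bracket basis of Lie(n) in (5.11)."""
    return [s for s in permutations(range(1, n + 1)) if s[0] == 1]

# dim Lie(n) = (n-1)!, as used in the proof of Theorem 5.2
for n in range(2, 7):
    assert len(lie_basis_perms(n)) == math.factorial(n - 1)
```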
References
[BDSHK18]
B. Bakalov, A. De Sole, R. Heluani, and V.G. Kac,
An operadic approach to vertex algebra and Poisson vertex algebra cohomology.
Preprint arXiv:1806.08754.
[BDSHKV19]
B. Bakalov, A. De Sole, R. Heluani, V.G. Kac, and V. Vignoli,
Classical and variational Poisson cohomology,
in preparation.
[BD04]
A. Beilinson, V. Drinfeld, Chiral algebras.
American Mathematical Society Colloquium Publications, 51.
American Mathematical Society, Providence, RI, 2004.
[DSK12]
A. De Sole and V.G. Kac,
Essential variational Poisson cohomology.
Comm. Math. Phys. 313 (2012), no. 3, 837–864.
[DSK13]
A. De Sole and V.G. Kac,
Variational Poisson cohomology.
Japan. J. Math. 8 (2013), 1–145.
[LV12]
J.-L. Loday and B. Vallette, Algebraic operads.
Grundlehren der Mathematischen Wissenschaften,
346, Springer, Heidelberg, 2012.
[Tam02]
D. Tamarkin,
Deformations of chiral algebras.
Proceedings of the International Congress of Mathematicians,
Vol. II (Beijing, 2002), 105–116. |
A SiPM-based ZnS:${}^{6}$LiF scintillation neutron detector
A. Stoykov, J.-B. Mosset, U. Greuter, M. Hildebrandt, N. Schlumpf
Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland
[1ex]
In the work presented here we built and evaluated a single-channel neutron detection unit
consisting of a ZnS:${}^{6}$LiF scintillator with embedded WLS fibers read out by a SiPM.
The unit has a sensitive volume of 2.4 x 2.8 x 50 mm${}^{3}$;
12 WLS fibers of diameter 0.25 mm are uniformly distributed over this volume
and are coupled to a 1 x 1 mm${}^{2}$ active area SiPM.
We report the following performance parameters:
neutron detection efficiency $\sim 65$ % at 1.2 Å,
background count rate $<10^{-3}$ Hz,
gamma-sensitivity with ${}^{60}$Co source $<10^{-6}$,
dead time $\sim 20\,\mu$s, multi-count ratio $<1$ %.
All these parameters were achieved up to the SiPM dark count rate of $\sim 2$ MHz.
We consider such a detection unit as an elementary building block for the realization
of one-dimensional multichannel detectors for applications in neutron scattering
experiments. The dimensions of the unit and the number of embedded fibers
can be varied to meet specific application requirements.
The upper limit of $\sim 2$ MHz on the SiPM dark count rate allows
the use of SiPMs with larger active areas if required.
Keywords: SiPM, MPPC, neutron detector, ZnS:6LiF scintillator, WLS fiber
1 Introduction
Helium-3 has been for several decades the most widely used converting material
in detectors for neutron scattering experiments. The world-wide shortage of its supply
starting in 2009 increased the significance of alternative detector technologies
and stimulated their further development [1]. One of these alternatives is
the scintillation technology based on ZnS:${}^{6}$LiF or ZnS:${}^{10}$B${}_{2}$O${}_{3}$ scintillators
read out by wavelength-shifting (WLS) fibers [1, 2].
Currently all detectors of this kind utilize photomultiplier tubes (PMTs)
or multi-anode photomultiplier tubes (MaPMTs) as photosensors.
The application of silicon photomultipliers (SiPMs) in such detectors has been hindered
by their orders of magnitude higher dark count rate at room temperature:
the long emission time of the neutron scintillator and the deficient light collection
due to its poor transparency made it difficult to combine a high trigger efficiency
for the neutron signals with a reasonable suppression of the SiPM dark counts.
As mentioned in [2], the solution of this problem requires an improvement
of the light collection from the scintillator.
In [3, 4, 5] we presented a practical way to combine this requirement with
a small active area of a SiPM. We also developed an approach to the signal processing
based on “digitization” of the SiPM one-electron signals
(one primary electron = one standard pulse), followed by a pulse-train analysis
to identify the neutron-related pulse sequences against the background
of the SiPM dark counts.
In this work we built a single-channel detection unit with SiPM readout
(prototype units of 1/4 height were used in [3, 4, 5])
and characterized its performance by determining such parameters as trigger efficiency,
background count rate, gamma-sensitivity, dead time, and multi-count ratio.
2 Detection unit
Figure 1 shows a cross-section of the sensitive volume of the detection unit.
The unit consists of four times two layers (thickness 0.25 mm and 0.45 mm)
of ZnS:${}^{6}$LiF scintillation material
(ND2:1 neutron detection screens from Applied Scintillation Technologies [6])
glued together using EJ-500 optical epoxy from Eljen [7].
Twelve WLS fibers Y11(400)M from Kuraray [8] are glued with the same epoxy
into the grooves machined in the thicker layers.
Compared to [3, 4, 5], here we use WLS fibers with twice the dye concentration
(400 ppm instead of 200 ppm). This increases the light yield by about 20 %.
At one side of the unit the fibers are cut along its edge and polished.
An aluminized Mylar foil serving as specular reflector is glued on these polished fiber
ends to increase the light yield at the other fiber ends which are connected to a SiPM.
The total volume of the unit is 2.4 x 2.8 x 50 mm${}^{3}$ (width x height x length).
The net absorption volume excluding grooves with the fibers (effective volume)
amounts to 0.83 of this value.
The free ends of the WLS fibers are bundled and glued together into a $\oslash 1.1$ mm hole
in a Plexiglas holder and polished afterwards. The coupling to a 1 x 1 mm${}^{2}$
active area SiPM is done via a so-called optical expander
(short $\oslash 1.2$ mm clear multiclad fiber) to ensure a uniform illumination
of the SiPM sensitive area. Note that the scintillation light from a neutron absorption
event is not distributed uniformly over the 12 WLS fibers but is rather concentrated
in one or few of them.
The width and the height of this detection unit satisfy the requirements of the POLDI
time-of-flight diffractometer concerning the channel pitch (2.5 mm) and the neutron
absorption probability ($\sim 80$ % at 1.2 Å) [5].
A one-dimensional array of 400 such units of 200 mm length arranged along a circle
of 2 m radius will constitute one detector module of the POLDI instrument
(in total the instrument will be equipped with four such modules).
The 50 mm length of the unit in the present work is chosen arbitrarily:
we expect neither variations in the performance of the detector nor complications
in the manufacturing process when changing later to the length of 200 mm.
3 Signal processing
The SiPM used is a 1 x 1 mm${}^{2}$ active area MPPC S12571-025C from Hamamatsu [9].
It is operated at an overvoltage of 2.5 V at room temperature. The dark count rate is
about 100 kHz. Higher dark count rates were induced by illuminating the SiPM with
a weak constant light source. The necessity to evaluate the performance of the detector
at the dark count rates substantially higher than 100 kHz is motivated by the following
considerations. First, even though 1 mm${}^{2}$ active area SiPMs with such low dark count
rates are becoming commonly available, our goal is to develop a detector which will fulfil
the performance requirements with different types of SiPMs,
e.g. devices with higher intrinsic dark count rates or larger active areas.
And second, depending on the radiation environment, the dark count rate of the SiPM
might increase with time owing to an increase of the concentration of radiation
defects in silicon [10]. For example, long-term measurements
in the POLDI experimental area indicate an expected increase of the SiPM dark count rate
of about 100 kHz/mm${}^{2}$ per year.
Figure 2 shows the block-diagram of the signal-processing chain including
a high band-width amplifier, a leading-edge discriminator,
an analyzer (designated here as filter), and an event generator unit.
Figure 3 shows an oscilloscope “screen-shot” (persistence mode) of the amplified
and shaped SiPM signals (SA) and the generated discriminator signals (SD).
The discrimination threshold is set low enough so that all SA-signals are accepted.
Independently of how many SiPM cells are triggered simultaneously by cell-to-cell
cross-talk from a single primary charge carrier in one cell
(see the multiple amplitudes of the SA signals), after the discriminator the relation
“one primary charge carrier = one standard SD pulse” always holds.
This kind of “digitization” suppresses cross-talk events, which is essential
to achieve a low background count rate of the detector [5], and allows for further
signal-processing schemes independent of the SiPM type used.
The SD pulse sequence is processed by the analyzer unit. The analyzer can be
realized in several different ways, as described in [3, 4, 5].
In the current case we use a multistage “single-pulse elimination” filter
described in [4] to extract the neutron signals from the dark-count background.
The tunable parameters of the filter, defining the detection threshold,
are the number of filtering stages N and the width of the internal gate signals
for the first and the following filter stages: gate(1) and gate(2$\ldots$N).
The dark count rejection is dominated by the gate width of the first filter stage
and by the total number of consecutive stages, while for better transmission
of the “extended” pulse trains corresponding to neutron scintillation events
longer gate values for the following stages are advantageous.
In this study, the filter time constants gate(2$\ldots$N) were fixed to 500 ns,
while the parameters gate(1) and N were varied.
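To make the filtering idea concrete, the following Python sketch models a multistage single-pulse elimination filter in which each stage keeps a pulse only if another pulse follows within the stage's gate. This is our simplified software model of the principle described above, not the hardware implementation of [4]; the pulse times and gate values below are invented for illustration.

```python
def spe_stage(times, gate):
    """One filter stage: keep a pulse only if another pulse follows
    within `gate` (ns); the last pulse of the train is always dropped."""
    return [t for t, t_next in zip(times, times[1:]) if t_next - t <= gate]

def spe_filter(times, gate1, gate2, n_stages):
    """Multistage single-pulse elimination: gate1 for the first stage,
    gate2 for the remaining stages (illustrative model)."""
    out = spe_stage(sorted(times), gate1)
    for _ in range(n_stages - 1):
        out = spe_stage(out, gate2)
    return out

# A dense pulse train (neutron-like) survives; isolated dark counts do not.
burst = [0, 10, 20, 35, 55, 80, 110, 150, 200, 260]  # ns, invented example
dark = [5000, 9000]                                  # isolated dark counts
assert spe_filter(burst + dark, gate1=30, gate2=500, n_stages=4) == [0, 10, 20]
assert spe_filter(dark, gate1=30, gate2=500, n_stages=4) == []
```

In this toy model, as in the text, a narrower gate(1) and a larger number of stages N strengthen the dark-count rejection at the cost of trigger efficiency.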
Figure 4 shows an example of an SD pulse sequence and of the corresponding SF pulse sequence
from the filter, resulting from the “identification” of one neutron scintillation event.
The first pulse of the SF-signal triggers the event generator
(a retriggerable mono-flop with adjustable pulse width) which generates an event signal SN.
For the duration of the SN signal the system is blocked – no second pulse can be generated.
Such blocking is necessary to account for the long afterglow of the scintillator
and prevents multiple triggers from the same scintillation event. The actual blocking time
is automatically adjusted in accordance with the strength of the signal at the filter
output and is equal to or longer than its initial set value (b-time). The parameter b-time
was varied in the present measurements in the range from 5 $\mu$s to 200 $\mu$s.
4 Measurements
4.1 Trigger Efficiency
The measurements were performed with a ${}^{241}$AmBe neutron source
(intensity $2\cdot 10^{4}$ fast neutrons per second) positioned in the center of
a 0.8 x 0.8 x 0.8 m${}^{3}$ moderator block made of polyethylene.
The detection unit was placed at a distance of $\sim 5$ cm from the source in
a slot made in the moderator block. To shield the detector from the 60 keV gamma photons
accompanying the alpha-decay of ${}^{241}$Am, the source was placed
inside a cylindrical lead tube with 5 mm thick walls.
To estimate the neutron absorption rate in the detection unit a calibration measurement
with a low threshold (gate(1) = 80 ns, N = 4) was performed.
Figure 5 shows the distribution of the number of SD-counts in the first 10 $\mu$s
of the signals. The trigger efficiency of 0.92 is obtained as the ratio of the number
of events in this measured histogram to that in the histogram obtained by extrapolating
the measured histogram to zero number of SD-counts. Taking into account the measured
event rate of 8.3 Hz the neutron absorption rate in the detection unit is estimated
to be 9.0 Hz. In the following measurements the trigger efficiency was determined
as the ratio of the measured event rate to this neutron absorption rate.
Figure 6 shows the trigger efficiency as a function of the filter parameters
gate(1) and N. By decreasing gate(1) and increasing N we increase the strength
of the filter and thus decrease its trigger efficiency. The operation point was chosen
at gate(1) = 30 ns and N = 10, which ensures a trigger efficiency at the level
of 80 %. Combined with the neutron absorption probability of $\sim 80$ %
at 1.2 Å (see above), this gives a neutron detection efficiency of
$\sim 65$ % at this wavelength.
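The arithmetic behind the quoted efficiency figures can be reproduced directly from the numbers given in the text:

```python
# Calibration: a measured event rate of 8.3 Hz at a trigger efficiency
# of 0.92 gives the neutron absorption rate in the detection unit.
absorption_rate = 8.3 / 0.92  # Hz
assert round(absorption_rate, 1) == 9.0

# Detection efficiency at 1.2 A: trigger efficiency (~80 %) times
# neutron absorption probability (~80 %), i.e. the quoted ~65 %.
detection_efficiency = 0.80 * 0.80
assert abs(detection_efficiency - 0.64) < 1e-12
```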
4.2 Background Count Rate
Figure 7 shows the background count rate of the detector as a function of the number
of filter stages for three different values (100 kHz, 1000 kHz, and 2000 kHz)
of SiPM dark count rates. At the chosen operation point, where we reach
a trigger efficiency around 80 %, the background count rate below 10${}^{-3}$ Hz
is achieved for SiPM dark count rates of up to $\sim 2$ MHz.
4.3 Gamma Sensitivity
The gamma-sensitivity was measured with a ${}^{60}$Co source. The source is point-like
and incorporated into a tablet of $\oslash=25$ mm. Its activity is $\sim 60$ kBq.
The rate of $\sim 1.3$ MeV photons passing through our 2.4 mm wide detection unit
was estimated to be $10^{4}$ s${}^{-1}$.
This estimate was calibrated with a 2 x 2 x 12 mm${}^{3}$ LYSO crystal mounted onto
the photocathode of a PMT: with a low detection threshold the rate of gamma-events
was measured to be $\sim 10^{3}$ s${}^{-1}$, and the interaction probability
for 1.3 MeV photons in 2 mm thick LYSO material was taken as 10 % [11].
Figure 8 shows the gamma-sensitivity as a function of the number of filter stages
at different SiPM dark count rates. The probability to trigger on gamma-events
is enhanced by the presence of dark counts and the gamma-sensitivity increases
with increasing the dark count rate. At the chosen operation point the gamma-sensitivity
amounts to $3\cdot 10^{-8}$, $10^{-7}$, and $8\cdot 10^{-7}$ at the SiPM dark count rates
of 140 kHz, 1000 kHz, and 2000 kHz, respectively.
4.4 Multi-Count Ratio
Figure 9 shows the multi-count ratio of the detector as a function of the set value
of the blocking time (b-time) of the event generator.
The multi-count ratio is defined [12] as the ratio between the measured
and the true number of neutron events minus 1. For the true number of neutron events
we take the value measured with b-time = 200 $\mu$s. Down to b-time = 15 $\mu$s
the multi-count ratio remains zero within the experimental accuracy of $\sim 0.05$ %.
Below 15 $\mu$s the multi-count ratio starts to increase and reaches the level of 1 %
at b-time $\sim 8\,\mu$s. At b-time = 15 $\mu$s (chosen setting)
the actual measured mean value of the blocking time (dead time) is $\sim 20\,\mu$s.
4.5 Influence of SiPM dark count rate
Figure 10 shows how the trigger efficiency and the multi-count ratio of the detector
are influenced by the dark count rate of the SiPM. The trigger probability is enhanced
by the presence of dark counts and both the trigger efficiency and the multi-count ratio
increase with increasing the dark count rate. However, up to a dark count rate
of $\sim 2$ MHz the variation of these two parameters is minor.
Summary
In this work we continued investigating the feasibility of the new approach,
proposed in [3, 4, 5], to realize one-dimensional multichannel neutron detectors
with ZnS:${}^{6}$LiF scintillators. In this approach the detector is built as an array
of single-channel detection units containing a scintillator with embedded WLS fibers
read out by SiPMs. We built a prototype detection unit and performed characterization
studies demonstrating the performance parameters to be similar to those of the currently
used PMT-based detection systems [12, 13].
It is worth pointing out that the compact size and the low price of SiPMs make
the discussed approach feasible even in the case of certain 2D-detectors.
The large number of individual readout channels in such detectors will help solve
the problem of their limited rate capability (recall that the count rate per readout
channel is limited by the necessity to introduce a certain dead time
in the detection process to cope with the long afterglow of the scintillator).
This could be an option, for example, for large-area detectors for inelastic neutron
scattering instruments where currently no adequate replacement to helium-3 detectors
is found [1].
Acknowledgments
We express our gratitude to Andreas Hofer (Detector Group of the Laboratory
for Particle Physics) for designing and building our prototype detection units.
References
[1]
K. Zeitelhack, Neutron News 23(4) (2012) 10.
[2]
N.J. Rhodes, Neutron News 23(4) (2012) 26.
[3]
J.-B. Mosset et al.,
Journal of Physics: Conference Series 528 (2014) 012041.
[4]
A. Stoykov et al., 2014 JINST 9 P06015.
[5]
J.-B. Mosset et al., Nucl. Instr. and Meth. A 764 (2014) 299.
[6]
http://www.appscintech.com
[7]
http://www.eljentechnology.com
[8]
http://kuraraypsf.jp
[9]
http://www.hamamatsu.com
[10]
Y. Musienko et al., Nucl. Instr. and Meth. A 581 (2007) 433.
[11]
Saint-Gobain Crystals, “PreLude 420 datasheet”,
http://www.crystals.saint-gobain.com/PreLude_420_Scintillator.aspx
[12]
T. Nakamura et al., Nucl. Instr. and Meth. A 600 (2009) 164.
[13]
T. Nakamura et al., Nucl. Instr. and Meth. A 686 (2012) 64. |
Abstract
We provide a formula for the $n^{th}$ term of the $k$-generalized Fibonacci-like number sequence using the $k$-generalized Fibonacci number or $k$-nacci number, and by utilizing the newly derived formula, we show that the limit of the ratio of successive terms of the sequence tends to a root of the equation $x+x^{-k}=2$. We then extend our results to $k$-generalized Horadam ($k$GH) and $k$-generalized Horadam-like ($k$GHL) numbers. In dealing with the limit of the ratio of successive terms of $k$GH and $k$GHL, a lemma due to Z. Wu and H. Zhang [8] shall be employed. Finally, we remark that an analogue result for $k$-periodic $k$-nary Fibonacci sequence can also be derived.
Applied Mathematical Sciences, Vol. x, 2015, no. xx, xxx - xxx
HIKARI Ltd, www.m-hikari.com
On Generalized Fibonacci Numbers (to appear by April 2015)
Jerico B. Bacani${}^{*}$ and Julius Fergy T. Rabago${}^{\dagger}$
Department of Mathematics and Computer Science
College of Science
University of the Philippines Baguio
Baguio City 2600, Philippines
${}^{*}[email protected], ${}^{\dagger}[email protected]
Copyright $\copyright$ 2015 Jerico B. Bacani and Julius Fergy T. Rabago. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Mathematics Subject Classification: 11B39, 11B50.
Keywords: k-generalized Fibonacci numbers, k-generalized Fibonacci-like numbers, k-generalized Horadam numbers, k-generalized Horadam-like numbers, convergence of sequences
1 Introduction
A well-known recurrence sequence of order two is the widely studied Fibonacci sequence $\{F_{n}\}_{n=1}^{\infty}$, which is defined recursively by the recurrence relation
$$F_{1}=F_{2}=1,\quad F_{n+1}=F_{n}+F_{n-1}\quad(n\geq 1).$$
(1)
Here, it is conventional to define $F_{0}=0$.
In the past decades, many authors have extensively studied the Fibonacci sequence and its various generalizations (cf. [2, 3, 4, 6, 7]). We wish to contribute further to this topic, so we present our results on the $k$-generalized Fibonacci numbers, or $k$-nacci numbers, and some of their generalizations. In particular, we derive a formula for the $k$-generalized Fibonacci-like sequence using $k$-nacci numbers.
Our work is motivated by the following statement:
Consider the set of sequences satisfying the relation $\mathcal{S}_{n}=\mathcal{S}_{n-1}+\mathcal{S}_{n-2}$. Since this set is closed under term-wise addition and under multiplication by a constant, it can be viewed as a vector space. Any such sequence is uniquely determined by a choice of two initial elements, so the vector space is two-dimensional. If we denote such a sequence by $(\mathcal{S}_{0},\mathcal{S}_{1})$, then the Fibonacci sequence $F_{n}=(0,1)$ and the shifted Fibonacci sequence $F_{n-1}=(1,0)$ are seen to form a canonical basis for this space, yielding the identity:
$$\mathcal{S}_{n}=\mathcal{S}_{1}F_{n}+\mathcal{S}_{0}F_{n-1}$$
(2)
for all such sequences $\{\mathcal{S}_{n}\}$. For example, if $\mathcal{S}$ is the Lucas sequence $2,1,3,4,7,\ldots$, then we obtain $\mathcal{S}_{n}:=L_{n}=2F_{n-1}+F_{n}$.
One of our goals in this paper is to find an analogue of equation (2) for $k$-generalized Fibonacci numbers. The result is significant because it provides an explicit formula for the $n^{th}$ term of $k$-nacci-like (resp. $k$-generalized Horadam and $k$-generalized Horadam-like) sequences without the need to solve a system of equations.
By utilizing the formula, we also show that the limit of the ratio of successive terms of a $k$-nacci sequence tends to a root of the equation $x+x^{-k}=2$. We then extend our results to $k$-generalized Horadam and $k$-generalized Horadam-like sequences. We also remark that an analogue result for $k$-periodic $k$-nary Fibonacci sequences can be derived.
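For instance, the limiting ratio can be observed numerically in the tribonacci case $k=3$ (an illustrative computation, not part of the proofs below):

```python
# Tribonacci (k = 3): iterate the recurrence and inspect the ratio of
# successive terms; it approaches a root of x + x^(-3) = 2.
a, b, c = 0, 0, 1
for _ in range(40):
    a, b, c = b, c, a + b + c
ratio = c / b
assert abs(ratio + ratio ** (-3) - 2) < 1e-9
```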
2 Fibonacci-like sequences of higher order
We start off this section with the following definition.
Definition 2.1.
Let $n\in\mathbb{N}\cup\{0\}$ and $k\in\mathbb{N}\backslash\{1\}$. Consider the sequences $\{F^{(k)}_{n}\}_{n=0}^{\infty}$ and $\{G^{(k)}_{n}\}_{n=0}^{\infty}$ having the following properties:
$$F^{(k)}_{n}=\left\{\begin{array}{ll}0,&0\leq n<k-1;\\
1,&n=k-1;\\
\sum_{i=1}^{k}F^{(k)}_{n-i},&n>k-1,\end{array}\right.$$
(3)
and
$$G^{(k)}_{n}=\left\{\begin{array}{ll}\text{arbitrary},&0\leq n\leq k-1;\\
\sum_{i=1}^{k}G^{(k)}_{n-i},&n>k-1,\end{array}\right.$$
(4)
and $G^{(k)}_{n}\neq 0$ for some $n\in[0,k-1]$. The terms $F^{(k)}_{n}$ and $G^{(k)}_{n}$ satisfying (3) and (4) are called the $n^{th}$ $k$-generalized Fibonacci number or $n^{th}$ $k$-step Fibonacci number (cf. [7]), and $n^{th}$ $k$-generalized Fibonacci-like number, respectively.
For $\{F^{(k)}_{n}\}_{n=0}^{\infty}$, some famous sequences of this type are the following:
$k=2$, Fibonacci: $0,1,1,2,3,5,8,13,21,34,\ldots$
$k=3$, Tribonacci: $0,0,1,1,2,4,7,13,24,44,81,\ldots$
$k=4$, Tetranacci: $0,0,0,1,1,2,4,8,15,29,56,108,\ldots$
$k=5$, Pentanacci: $0,0,0,0,1,1,2,4,8,16,31,61,120,\ldots$
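The sequences above can be reproduced with a short program implementing definition (3); this sketch is illustrative only:

```python
def k_nacci(k, n_terms):
    """First n_terms of {F^(k)_n}: k-1 zeros, then a 1, then each term
    is the sum of the preceding k terms, as in definition (3)."""
    seq = [0] * (k - 1) + [1]
    while len(seq) < n_terms:
        seq.append(sum(seq[-k:]))
    return seq[:n_terms]

# Rows of the table above
assert k_nacci(2, 10) == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
assert k_nacci(3, 11) == [0, 0, 1, 1, 2, 4, 7, 13, 24, 44, 81]
assert k_nacci(4, 12) == [0, 0, 0, 1, 1, 2, 4, 8, 15, 29, 56, 108]
```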
By considering the sequences $\{F^{(k)}_{n}\}_{n=0}^{\infty}$ and $\{G^{(k)}_{n}\}_{n=0}^{\infty}$ we obtain the following relation.
Theorem 2.2.
Let $F^{(k)}_{n}$ and $G^{(k)}_{n}$ be the $n^{th}$ $k$-generalized Fibonacci and $k$-generalized Fibonacci-like numbers, respectively. Then, for all natural numbers $n\geq k,$
$$G^{(k)}_{n}=G^{(k)}_{0}F^{(k)}_{n-1}+\sum_{m=0}^{k-3}\left(G^{(k)}_{m+1}\sum_{j=0}^{m+1}F^{(k)}_{n-1-j}\right)+G^{(k)}_{k-1}\,F^{(k)}_{n}.$$
(5)
Proof.
We prove this using induction on $n$. Let $k$ be fixed. Equation (5) is obviously valid for $n<k$. Now, suppose (5) is true for all $n\leq r$, for some integer $r\geq k$. Then,
$$\displaystyle G^{(k)}_{r+1}$$
$$\displaystyle=\sum_{i=1}^{k}G^{(k)}_{(r+1)-i}$$
$$\displaystyle=G^{(k)}_{0}\sum_{i=1}^{k}F^{(k)}_{(r+1)-i-1}+\sum_{m=0}^{k-3}%
\left(G^{(k)}_{m+1}\sum_{j=0}^{m+1}\left(\sum_{i=1}^{k}F^{(k)}_{(r+1)-i-1-j}%
\right)\right)$$
$$\displaystyle +G^{(k)}_{k-1}\left(\sum_{i=1}^{k}F^{(k)}_{%
(r+1)-i}\right)$$
$$\displaystyle=G^{(k)}_{0}F^{(k)}_{(r+1)-1}+\sum_{m=0}^{k-3}\left(G^{(k)}_{m+1}%
\sum_{j=0}^{m+1}F^{(k)}_{(r+1)-1-j}\right)+G^{(k)}_{k-1}F^{(k)}_{r+1}.\qed$$
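Identity (5) is also easy to test numerically; the following sketch (helper names are ours, and the choice $k=4$ with initial data $3,1,4,1$ is arbitrary) checks it over a range of $n$:

```python
def kfib(k, n_max):
    # F^(k)_0 .. F^(k)_{n_max}: k-1 zeros, a one, then k-term sums
    s = [0] * (k - 1) + [1]
    while len(s) <= n_max:
        s.append(sum(s[-k:]))
    return s

def klike(init, n_max):
    # G^(k) with arbitrary initial terms, same recurrence
    s = list(init)
    while len(s) <= n_max:
        s.append(sum(s[-len(init):]))
    return s

def rhs_of_5(F, G, k, n):
    # right-hand side of (5); only the initial terms G[0..k-1] are used
    total = G[0] * F[n - 1] + G[k - 1] * F[n]
    for m in range(k - 2):            # m = 0 .. k-3
        total += G[m + 1] * sum(F[n - 1 - j] for j in range(m + 2))
    return total

k, init = 4, [3, 1, 4, 1]             # arbitrary initial data
F, G = kfib(k, 30), klike(init, 30)
assert all(G[n] == rhs_of_5(F, G, k, n) for n in range(k, 31))
print("identity (5) verified for k=4, n=4..30")
```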
Remark 2.3.
Using the formula obtained by G. P. B. Dresden (cf. [2, Theorem 1]), we can now express $G^{(k)}_{n}$ explicitly in terms of $n$ as follows:
$$G^{(k)}_{n}=G^{(k)}_{0}\sum_{i=1}^{k}A(i;k)\alpha_{i}^{n-2}+\sum_{m=0}^{k-3}%
\left(G^{(k)}_{m+1}\sum_{j=0}^{m+1}\sum_{i=1}^{k}A(i;k)\alpha_{i}^{n-2-j}%
\right)+G^{(k)}_{k-1}\sum_{i=1}^{k}A(i;k)\alpha_{i}^{n-1},$$
where $A(i;k)=(\alpha_{i}-1)[2+(k+1)(\alpha_{i}-2)]^{-1}$ and $\alpha_{1},\alpha_{2},\ldots,\alpha_{k}$ are roots of $x^{k}-x^{k-1}-\cdots-1=0$. Another formula of Dresden for $F^{(k)}_{n}$ (cf. [2, Theorem 2]) can
also be used to express $G^{(k)}_{n}$ explicitly in terms of $n$. More precisely, we have
$$G^{(k)}_{n}=G^{(k)}_{0}\,\text{Round}\left[A(k)\alpha^{n-2}\right]+\sum_{m=0}^{k-3}\left(G^{(k)}_{m+1}\sum_{j=0}^{m+1}\text{Round}\left[A(k)\alpha^{n-2-j}\right]\right)+G^{(k)}_{k-1}\,\text{Round}\left[A(k)\alpha^{n-1}\right],$$
where $A(k)=(\alpha-1)[2+(k+1)(\alpha-2)]^{-1}$, $\alpha$ is the unique positive root of $x^{k}-x^{k-1}-\cdots-1=0$, and Dresden's formula holds for all $n\geq 2-k$.
Extending to Horadam numbers
In 1965, A. F. Horadam [5] defined a second-order linear recurrence sequence
$\{W_{n}(a,b;p,q)\}_{n=0}^{\infty},$ or simply $\{W_{n}\}_{n=0}^{\infty}$ by the recurrence relation
$$W_{0}=a,\quad W_{1}=b,\quad W_{n+1}=pW_{n}+qW_{n-1}\quad(n\geq 1).$$
The sequence generated is called Horadam’s sequence, which can easily be viewed as a generalization of $\{F_{n}\}$. The $n^{th}$ Horadam number $W_{n}$ with initial conditions $W_{0}=0$ and $W_{1}=1$ can be represented by the following Binet formula:
$$W_{n}(0,1;p,q)=\frac{\alpha^{n}-\beta^{n}}{\alpha-\beta},$$
where $\alpha$ and $\beta$ are the roots of the quadratic equation $x^{2}-px-q=0$, i.e. $\alpha=(p+\sqrt{p^{2}+4q})/2$ and $\beta=(p-\sqrt{p^{2}+4q})/2$. We extend this definition to the concept of $k$-generalized Fibonacci sequence and define the $k$-generalized Horadam (resp. Horadam-like) sequence as follows:
Definition 2.4.
Let $q_{i}\in\mathbb{N}$ for $i\in\{1,2,\ldots,k\}$. The $k$-generalized Horadam sequence, denoted by $\{\mathcal{U}_{n}^{(k)}(0,\ldots,1;q_{1},\ldots,q_{k})\}_{n=0}^{\infty}$, or simply $\{\mathcal{U}_{n}^{(k)}\}_{n=0}^{\infty}$, is the sequence whose $n^{th}$ term, for $n\geq k$, is obtained by the recurrence relation
$$\mathcal{U}^{(k)}_{n}=q_{1}\mathcal{U}^{(k)}_{n-1}+q_{2}\mathcal{U}^{(k)}_{n-2%
}+\cdots+q_{k}\mathcal{U}^{(k)}_{n-k}=\sum_{i=1}^{k}q_{i}\mathcal{U}^{(k)}_{n-%
i},$$
(6)
with initial conditions $\mathcal{U}^{(k)}_{i}=0$ for all $0\leq i<k-1$ and $\mathcal{U}^{(k)}_{k-1}=1.$ Similarly, the $k$-generalized Horadam-like sequence, denoted by
$\{\mathcal{V}_{n}^{(k)}(a_{0},\ldots,a_{k-1};q_{1},\ldots,q_{k})\}_{n=0}^{\infty}$ or simply $\{\mathcal{V}_{n}^{(k)}\}_{n=0}^{\infty}$, satisfies the same recurrence relation (6) but with initial conditions $\mathcal{V}^{(k)}_{i}=a_{i}$ for all $0\leq i\leq k-1$, where the $a_{i}\in\mathbb{N}\cup\{0\}$ and at least one of them is nonzero.
It is easy to see that when $q_{1}=\cdots=q_{k}=1$, then $\mathcal{U}_{n}^{(k)}(0,\ldots,1;1,\ldots,1)=F^{(k)}_{n}$ and $\mathcal{V}_{n}^{(k)}(a_{0},\ldots,a_{k-1};1,\ldots,1)=G^{(k)}_{n}$. Using Definition 2.4 we obtain the following relation, which is an analogue of equation (5).
Theorem 2.5.
Let $\mathcal{U}^{(k)}_{n}$ and $\mathcal{V}^{(k)}_{n}$ be the $n^{th}$ $k$-generalized Horadam and $n^{th}$ $k$-generalized Horadam-like numbers, respectively. Then, for all $n\geq k,$
$$\mathcal{V}^{(k)}_{n}=q_{k}\mathcal{V}^{(k)}_{0}\mathcal{U}^{(k)}_{n-1}+\sum_{%
m=0}^{k-3}\left(\mathcal{V}^{(k)}_{m+1}\sum_{j=0}^{m+1}q_{k-(m+1)+j}\;\mathcal%
{U}^{(k)}_{n-1-j}\right)+\mathcal{V}^{(k)}_{k-1}\;\mathcal{U}^{(k)}_{n}.$$
(7)
Proof.
The proof uses mathematical induction and is similar to the proof of Theorem 2.2.
∎
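As with (5), identity (7) can be checked numerically; a sketch (helper names are ours) with hypothetical choices of $k$, the $q_{i}$, and the initial data:

```python
def horadam_like(init, q, n_max):
    # k-generalized Horadam(-like) recurrence (6): weighted sum of the
    # k previous terms, with weights q_1, ..., q_k
    k, s = len(q), list(init)
    while len(s) <= n_max:
        s.append(sum(q[i] * s[-1 - i] for i in range(k)))
    return s

def rhs_of_7(U, V, q, n):
    # right-hand side of (7); q[0] is q_1, ..., q[k-1] is q_k
    k = len(q)
    total = q[k - 1] * V[0] * U[n - 1] + V[k - 1] * U[n]
    for m in range(k - 2):            # m = 0 .. k-3
        total += V[m + 1] * sum(q[k - (m + 1) + j - 1] * U[n - 1 - j]
                                for j in range(m + 2))
    return total

q = [3, 2, 2, 1]                      # q_1 >= ... >= q_k (k = 4)
U = horadam_like([0, 0, 0, 1], q, 30) # U^(k): Horadam initial data
V = horadam_like([2, 0, 1, 5], q, 30) # V^(k): Horadam-like initial data
assert all(V[n] == rhs_of_7(U, V, q, n) for n in range(4, 31))
print("identity (7) verified for k=4, n=4..30")
```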
Convergence properties
In the succeeding discussions, we present the convergence properties of the sequences $\{F^{(k)}_{n}\}^{\infty}_{n=0},\{G^{(k)}_{n}\}^{\infty}_{n=0},\{\mathcal{U}^{(k)}_{n}\}^{\infty}_{n=0}$, and $\{\mathcal{V}^{(k)}_{n}\}^{\infty}_{n=0}$. First, it is known (e.g. in [7]) that $\lim_{n\rightarrow\infty}F_{n}^{(k)}/F_{n-1}^{(k)}=\alpha,$ where $\alpha$ is the $k$-nacci constant. This constant is the unique positive real root of $x^{k}-x^{k-1}-\cdots-1=0$, and it can equivalently be obtained as a zero of the polynomial $x^{k}(2-x)-1$. Using this result, we obtain the following:
Theorem 2.6.
$$\lim_{n\rightarrow\infty}G^{(k)}_{n}/G^{(k)}_{n-1}=\alpha,$$
(8)
where $\alpha$ is the unique positive root of $x^{k}-x^{k-1}-\cdots-1=0.$
Proof.
The proof is straightforward. Letting $n\rightarrow\infty$ in $G^{(k)}_{n}/G^{(k)}_{n-1}$ we have
$$\displaystyle\lim_{n\rightarrow\infty}G^{(k)}_{n}/G^{(k)}_{n-1}$$
$$\displaystyle=\lim_{n\rightarrow\infty}\left[\frac{G^{(k)}_{0}F^{(k)}_{n-1}+%
\sum_{m=0}^{k-3}\left(G^{(k)}_{m+1}\sum_{j=0}^{m+1}F^{(k)}_{n-1-j}\right)+G^{(%
k)}_{k-1}\;F^{(k)}_{n}}{G^{(k)}_{0}F^{(k)}_{n-2}+\sum_{m=0}^{k-3}\left(G^{(k)}%
_{m+1}\sum_{j=0}^{m+1}F^{(k)}_{n-2-j}\right)+G^{(k)}_{k-1}\;F^{(k)}_{n-1}}\right]$$
$$\displaystyle=\lim_{n\rightarrow\infty}\left[\frac{G^{(k)}_{0}+\sum_{m=0}^{k-3%
}\left(G^{(k)}_{m+1}\sum_{j=0}^{m+1}\frac{F^{(k)}_{n-1-j}}{F^{(k)}_{n-1}}%
\right)+G^{(k)}_{k-1}\;\frac{F^{(k)}_{n}}{F^{(k)}_{n-1}}}{G^{(k)}_{0}\frac{F^{%
(k)}_{n-2}}{F^{(k)}_{n-1}}+\sum_{m=0}^{k-3}\left(G^{(k)}_{m+1}\sum_{j=0}^{m+1}%
\frac{F^{(k)}_{n-2-j}}{F^{(k)}_{n-1}}\right)+G^{(k)}_{k-1}}\right]$$
$$\displaystyle=\frac{G^{(k)}_{0}+\sum_{m=0}^{k-3}\left(G^{(k)}_{m+1}\sum_{j=0}^%
{m+1}\alpha^{-j}\right)+\alpha G^{(k)}_{k-1}}{\alpha^{-1}G^{(k)}_{0}+\sum_{m=0%
}^{k-3}\left(G^{(k)}_{m+1}\sum_{j=0}^{m+1}\alpha^{-(j+1)}\right)+G^{(k)}_{k-1}}$$
$$\displaystyle=\alpha.\qed$$
Now, to find the limit of $\mathcal{U}^{(k)}_{n}/\mathcal{U}^{(k)}_{n-1}$ (resp. $\mathcal{V}^{(k)}_{n}/\mathcal{V}^{(k)}_{n-1}$) as $n\rightarrow\infty$, we need the following results due to Wu and Zhang [8]. Here, it is assumed that the $q_{i}$ satisfy $q_{1}\geq q_{2}\geq\cdots\geq q_{k}\geq 1$, where $k\in\mathbb{N}\backslash\{1\}.$
Lemma 2.7.
[8]
Let $q_{1},q_{2},\ldots,q_{k}$ be positive integers with $q_{1}\geq q_{2}\geq\cdots\geq q_{k}\geq 1$ and $k\in\mathbb{N}\backslash\{1\}$. Then, the polynomial
$$f(x)=x^{k}-q_{1}x^{k-1}-q_{2}x^{k-2}-\cdots-q_{k-1}x-q_{k},$$
(9)
(i)
has exactly one positive real zero $\alpha$ with $q_{1}<\alpha<q_{1}+1;$ and
(ii)
its other $k-1$ zeros lie within the unit circle in the complex plane.
Lemma 2.8.
[8]
Let $k\geq 2$ and let $\{u_{n}\}_{n=0}^{\infty}$ be an integer sequence satisfying the recurrence relation given by
$$u_{n}=q_{1}u_{n-1}+q_{2}u_{n-2}+\cdots+q_{k-1}u_{n-k+1}+q_{k}u_{n-k},\;n\geq k,$$
(10)
where $q_{1},q_{2},\ldots,q_{k}\in\mathbb{N}$ with initial conditions $u_{i}\in\mathbb{N}\cup\{0\}$ for $0\leq i<k$ and at least one of them is not zero. Then, a formula for $u_{n}$ may be given by
$$u_{n}=c\alpha^{n}+\mathcal{O}(d^{-n})\;\;\;(n\rightarrow\infty),$$
(11)
where $c>0,d>1,$ and $q_{1}<\alpha<q_{1}+1$ is the positive real zero of $f(x).$
We now have the following results.
Theorem 2.9.
Let $\{\mathcal{U}^{(k)}_{n}\}_{n=0}^{\infty}$ be the integer sequence satisfying the recurrence relation (6), with $k\in\mathbb{N}\backslash\{1\}$, initial conditions $\mathcal{U}^{(k)}_{i}=0$ for all $0\leq i<k-1$ and $\mathcal{U}^{(k)}_{k-1}=1$, and coefficients $q_{1}\geq q_{2}\geq\cdots\geq q_{k}\geq 1.$ Then,
$$\mathcal{U}^{(k)}_{n}=c\alpha^{n}+\mathcal{O}(d^{-n})\;\;\;(n\rightarrow\infty),$$
(12)
where $c>0,d>1,$ and $\alpha\in(q_{1},q_{1}+1)$ is the positive real zero of $f(x).$ Furthermore,
$$\lim_{n\rightarrow\infty}\mathcal{U}^{(k)}_{n}/\mathcal{U}^{(k)}_{n-1}=\alpha.$$
(13)
Proof.
Equation (12) follows directly from Lemmas 2.7 and 2.8. To obtain (13), we simply use (12) and take the limit of the ratio $\mathcal{U}^{(k)}_{n}/\mathcal{U}^{(k)}_{n-1}$ as $n\rightarrow\infty$; that is, we have the following manipulation:
$$\displaystyle\lim_{n\rightarrow\infty}\mathcal{U}^{(k)}_{n}/\mathcal{U}^{(k)}_%
{n-1}$$
$$\displaystyle=\lim_{n\rightarrow\infty}\frac{c\alpha^{n}+\mathcal{O}(d^{-n})}{%
c\alpha^{n-1}+\mathcal{O}(d^{-(n-1)})}$$
$$\displaystyle=\frac{c\alpha+\lim_{n\rightarrow\infty}\left(\mathcal{O}(d^{-n})%
/\alpha^{n-1}\right)}{c+\lim_{n\rightarrow\infty}\left(\mathcal{O}(d^{-(n-1)})%
/\alpha^{n-1}\right)}$$
$$\displaystyle=\alpha.\qed$$
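Theorem 2.9 can be illustrated numerically: bisection on $f$ inside the interval $(q_{1},q_{1}+1)$ from Lemma 2.7 gives $\alpha$, which can then be compared with a ratio of successive terms. A sketch (the coefficients are a hypothetical example):

```python
def positive_root(q, tol=1e-12):
    # Bisection for the unique positive zero of
    # f(x) = x^k - q_1 x^{k-1} - ... - q_k  (Lemma 2.7: q_1 < alpha < q_1 + 1)
    k = len(q)
    f = lambda x: x**k - sum(q[i] * x**(k - 1 - i) for i in range(k))
    lo, hi = q[0], q[0] + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

q = [3, 2, 2, 1]
alpha = positive_root(q)
# iterate the recurrence (6) and compare successive ratios with alpha
u = [0.0, 0.0, 0.0, 1.0]
for _ in range(60):
    u.append(sum(q[i] * u[-1 - i] for i in range(len(q))))
print(alpha, u[-1] / u[-2])   # the two numbers agree to many digits
```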
Consequently, we have the following corollary.
Corollary 2.10.
Let $\{\mathcal{V}^{(k)}_{n}\}_{n=0}^{\infty}$ be an integer sequence satisfying (6) but with initial conditions $\mathcal{V}^{(k)}_{i}=a_{i}$ for all $0\leq i\leq k-1$, where the $a_{i}\in\mathbb{N}\cup\{0\}$ and at least one of them is nonzero. Furthermore, assume that $q_{1}\geq q_{2}\geq\cdots\geq q_{k}\geq 1$, where $k\in\mathbb{N}\backslash\{1\}$. Then
$$\lim_{n\rightarrow\infty}\mathcal{V}^{(k)}_{n}/\mathcal{V}^{(k)}_{n-1}=\alpha,$$
(14)
where $q_{1}<\alpha<q_{1}+1$ is the positive real zero of $f(x).$
Proof.
The proof uses Theorem 2.5 and the arguments used are similar to the proof of Theorem 2.6.
∎
Remark 2.11.
Observe that when $q_{i}=1$ for all $i=1,\ldots,k$ in Corollary 2.10, the limit $\lim_{n\rightarrow\infty}\mathcal{V}^{(k)}_{n}/\mathcal{V}^{(k)}_{n-1}=\alpha$ is the $k$-nacci constant, with $1<\alpha<2$. Indeed, $\alpha$ tends to $2$ as $k$ increases.
$k$-Periodic Fibonacci Sequences
In [3], M. Edson and O. Yayenie gave a generalization of the Fibonacci sequence, called the generalized Fibonacci sequence $\{F_{n}^{(a,b)}\}_{n=0}^{\infty}$, which they defined by a non-linear recurrence relation depending on two real parameters $a$ and $b$. The sequence
is defined recursively as
$$F_{0}^{(a,b)}=0,\quad F_{1}^{(a,b)}=1,\quad F_{n}^{(a,b)}=\left\{\begin{array}%
[]{cc}aF_{n-1}^{(a,b)}+F_{n-2}^{(a,b)},&\text{if}\;n\;\text{is even},\\
bF_{n-1}^{(a,b)}+F_{n-2}^{(a,b)},&\text{if}\;n\;\text{is odd}.\\
\end{array}\right.$$
(15)
This generalization has its own Binet-like formula and satisfies identities that are analogous to the identities satisfied by the classical Fibonacci sequence (see [3]). A further generalization of this sequence, called the $k$-periodic Fibonacci sequence, has been presented by M. Edson, S. Lewis, and O. Yayenie in [4]. A related result concerning two-periodic ternary sequences is presented in [1] by M. Alp, N. Irmak and L. Szalay. We expect that analogues of (5), (7), and (12) can easily be found for these generalizations of the Fibonacci sequence. For instance, if we alter the starting values of (15), say we start at two numbers $A$ and $B$ and preserve the recurrence relation in (15), then we obtain a sequence that we may call the $2$-periodic Fibonacci-like sequence, defined as follows:
$$G_{0}^{(a,b)}=A,\quad G_{1}^{(a,b)}=B,\quad G_{n}^{(a,b)}=\left\{\begin{array}%
[]{cc}aG_{n-1}^{(a,b)}+G_{n-2}^{(a,b)},&\text{if}\;n\;\text{is even},\\
bG_{n-1}^{(a,b)}+G_{n-2}^{(a,b)},&\text{if}\;n\;\text{is odd}.\\
\end{array}\right.$$
(16)
The first few terms of $\{F_{n}^{(a,b)}\}_{n=0}^{\infty}$ and $\{G_{n}^{(a,b)}\}_{n=0}^{\infty}$ are as follows:
$$\begin{array}[]{l|c|c}n&F_{n}^{(a,b)}&G_{n}^{(a,b)}\\
\hline\hline 0&0&A\\
1&1&B\\
2&a&aB+A\\
3&ab+1&(ab+1)B+bA\\
4&a^{2}b+2a&(a^{2}b+2a)B+(ab+1)A\\
5&a^{2}b^{2}+3ab+1&(a^{2}b^{2}+3ab+1)B+(ab^{2}+2b)A\\
6&a^{3}b^{2}+4a^{2}b+3a&(a^{3}b^{2}+4a^{2}b+3a)B+(a^{2}b^{2}+3ab+1)A\\
7&a^{3}b^{3}+5a^{2}b^{2}+6ab+1&(a^{3}b^{3}+5a^{2}b^{2}+6ab+1)B+(a^{2}b^{3}+4ab%
^{2}+3b)A\\
\end{array}$$
Surprisingly, by looking at the table above, $G_{n}^{(a,b)}$ can be obtained from $F_{n}^{(a,b)}$ and $F_{n}^{(b,a)}$.
More precisely, we have the following result.
Theorem 2.12.
Let $F_{n}^{(a,b)}$ and $G_{n}^{(a,b)}$ be the $n^{th}$ terms of the sequences defined in (15) and (16), respectively. Then, for all $n\in\mathbb{N}$, the following formula holds
$$G^{(a,b)}_{n}=G_{1}^{(a,b)}F_{n}^{(a,b)}+G_{0}^{(a,b)}F_{n-1}^{(b,a)}.$$
(17)
Proof.
The proof is by induction on $n$. Evidently, the formula holds for $n=0,1,2$. We suppose that the formula also holds for some $n\geq 2$. Hence, we have
$$\displaystyle G^{(a,b)}_{n-1}$$
$$\displaystyle=G_{1}^{(a,b)}F_{n-1}^{(a,b)}+G_{0}^{(a,b)}F_{n-2}^{(b,a)},$$
$$\displaystyle G^{(a,b)}_{n}$$
$$\displaystyle=G_{1}^{(a,b)}F_{n}^{(a,b)}+G_{0}^{(a,b)}F_{n-1}^{(b,a)}.$$
Suppose that $n+1$ is even, i.e., $n$ is odd. (The case when $n+1$ is odd can be proven similarly.) So we have
$$\displaystyle G^{(a,b)}_{n+1}$$
$$\displaystyle=aG^{(a,b)}_{n}+G^{(a,b)}_{n-1}$$
$$\displaystyle=a\left(G_{1}^{(a,b)}F_{n}^{(a,b)}+G_{0}^{(a,b)}F_{n-1}^{(b,a)}%
\right)+\left(G_{1}^{(a,b)}F_{n-1}^{(a,b)}+G_{0}^{(a,b)}F_{n-2}^{(b,a)}\right)$$
$$\displaystyle=G_{1}^{(a,b)}\left(aF_{n}^{(a,b)}+F_{n-1}^{(a,b)}\right)+G_{0}^{%
(a,b)}\left(aF_{n-1}^{(b,a)}+F_{n-2}^{(b,a)}\right)$$
$$\displaystyle=G_{1}^{(a,b)}F_{n+1}^{(a,b)}+G_{0}^{(a,b)}F_{n}^{(b,a)},$$
proving the theorem.
∎
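Identity (17) can be verified numerically; a sketch (our own helper) with hypothetical parameters $a,b$ and starting values $A,B$:

```python
def two_periodic_fib(a, b, n_max, A=0, B=1):
    # sequences (15)/(16): an even index uses a, an odd index uses b
    s = [A, B]
    for n in range(2, n_max + 1):
        coef = a if n % 2 == 0 else b
        s.append(coef * s[-1] + s[-2])
    return s

a, b = 2, 5                            # hypothetical parameters
F_ab = two_periodic_fib(a, b, 20)      # F^(a,b)
F_ba = two_periodic_fib(b, a, 20)      # F^(b,a)
G = two_periodic_fib(a, b, 20, A=3, B=7)
# identity (17): G_n = B * F_n^(a,b) + A * F_{n-1}^(b,a)
assert all(G[n] == 7 * F_ab[n] + 3 * F_ba[n - 1] for n in range(1, 21))
print("identity (17) verified for n=1..20")
```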
The sequence $\{G_{n}^{(a,b)}\}_{n=0}^{\infty}$ has already been studied in [3, Section 4], where the authors related the two sequences $\{F_{n}^{(a,b)}\}_{n=0}^{\infty}$ and $\{G_{n}^{(a,b)}\}_{n=0}^{\infty}$ using the formula
$$G_{n}^{(a,b)}=G_{1}^{(a,b)}F_{n}^{(a,b)}+G_{0}^{(a,b)}\left(\frac{b}{a}\right)%
^{n-2\lfloor n/2\rfloor}F_{n-1}^{(a,b)}.$$
(18)
Notice that by simply comparing the two identities (17) and (18), we see that
$$F_{n-1}^{(b,a)}=\left(\frac{b}{a}\right)^{n-2\lfloor n/2\rfloor}F_{n-1}^{(a,b)%
},\quad\forall n\in\mathbb{N}.$$
The convergence property of $\{F_{n+1}^{(a,b)}/F_{n}^{(a,b)}\}_{n=0}^{\infty}$ has also been discussed in ([3], Remark 2). It was shown that, for $a=b$, we have
$$\frac{F_{n+1}^{(a,b)}}{F_{n}^{(a,b)}}\longrightarrow\frac{\alpha}{a}=\frac{a+%
\sqrt{a^{2}+4}}{2}\quad\text{as}\quad n\longrightarrow\infty.$$
(19)
Using (17) and (19), we can also determine the limit of the sequence $\{G_{n+1}^{(a,b)}/G_{n}^{(a,b)}\}$ as $n$ tends to infinity, and for $a=b$, as follows:
$$\displaystyle\lim_{n\rightarrow\infty}\frac{G_{n+1}^{(a,b)}}{G_{n}^{(a,b)}}$$
$$\displaystyle=\lim_{n\rightarrow\infty}\frac{G_{1}^{(a,b)}F_{n+1}^{(a,b)}+G_{0%
}^{(a,b)}F_{n}^{(b,a)}}{G_{1}^{(a,b)}F_{n}^{(a,b)}+G_{0}^{(a,b)}F_{n-1}^{(b,a)%
}}=\lim_{n\rightarrow\infty}\frac{G_{1}^{(a,a)}F_{n+1}^{(a,a)}+G_{0}^{(a,a)}F_%
{n}^{(a,a)}}{G_{1}^{(a,a)}F_{n}^{(a,a)}+G_{0}^{(a,a)}F_{n-1}^{(a,a)}}$$
$$\displaystyle=\frac{G_{1}^{(a,a)}\lim_{n\rightarrow\infty}\frac{F_{n+1}^{(a,a)%
}}{F_{n}^{(a,a)}}+G_{0}^{(a,a)}}{G_{1}^{(a,a)}+G_{0}^{(a,a)}\lim_{n\rightarrow%
\infty}\frac{F_{n-1}^{(a,a)}}{F_{n}^{(a,a)}}}=\frac{\alpha a^{-1}G_{1}^{(a,a)}%
+G_{0}^{(a,a)}}{G_{1}^{(a,a)}+a\alpha^{-1}G_{0}^{(a,a)}}=\displaystyle\frac{%
\alpha}{a}.$$
For the case $a\neq b$, the ratio of successive terms of $\{F_{n}^{(a,b)}\}$ does not converge. However, it is easy to see that
$$\frac{F_{2n}^{(a,b)}}{F_{2n-1}^{(a,b)}}\longrightarrow\frac{\alpha}{b},\quad%
\frac{F_{2n+1}^{(a,b)}}{F_{2n}^{(a,b)}}\longrightarrow\frac{\alpha}{a},\quad%
\text{and}\quad\frac{F_{n+2}^{(a,b)}}{F_{n}^{(a,b)}}\longrightarrow\alpha+1,$$
where $\alpha=(ab+\sqrt{a^{2}b^{2}+4ab})/2$ (cf. [3]). Knowing all these limits, we can investigate the convergence property of the sequences $\{G_{2n}^{(a,b)}/G_{2n-1}^{(a,b)}\},\;\{G_{2n+1}^{(a,b)}/G_{2n}^{(a,b)}\}$, and $\{G_{n+2}^{(a,b)}/G_{n}^{(a,b)}\}$. Notice that $F_{n}^{(a,b)}=F_{n}^{(b,a)}$ for every $n\in\{1,3,5,\ldots\}$. So
$$\lim_{n\rightarrow\infty}\frac{G_{2n}^{(a,b)}}{G_{2n-1}^{(a,b)}}=\lim_{n\rightarrow\infty}\frac{G_{1}^{(a,b)}F_{2n}^{(a,b)}+G_{0}^{(a,b)}F_{2n-1}^{(b,a)}}{G_{1}^{(a,b)}F_{2n-1}^{(a,b)}+G_{0}^{(a,b)}F_{2n-2}^{(b,a)}}=\lim_{n\rightarrow\infty}\frac{G_{1}^{(a,b)}\frac{F_{2n}^{(a,b)}}{F_{2n-1}^{(a,b)}}+G_{0}^{(a,b)}}{G_{1}^{(a,b)}+G_{0}^{(a,b)}\frac{F_{2n-2}^{(b,a)}}{F_{2n-1}^{(b,a)}}}=\frac{\alpha b^{-1}G_{1}^{(a,b)}+G_{0}^{(a,b)}}{G_{1}^{(a,b)}+b\alpha^{-1}G_{0}^{(a,b)}}=\frac{\alpha}{b},$$
where we used $F_{2n-1}^{(b,a)}=F_{2n-1}^{(a,b)}$ and the fact that $F_{2n-1}^{(b,a)}/F_{2n-2}^{(b,a)}\rightarrow\alpha/b$ (the limits above with $a$ and $b$ interchanged). Similarly, it can be shown that $G_{2n+1}^{(a,b)}/G_{2n}^{(a,b)}\rightarrow\alpha/a$ and $G_{n+2}^{(a,b)}/G_{n}^{(a,b)}\rightarrow\alpha+1$ as $n\rightarrow\infty$.
The recurrence relations discussed above can easily be extended into subscripts with real numbers. For instance, consider the piecewise defined function $G_{\lfloor x\rfloor}^{(a,b)}$:
$$G_{0}^{(a,b)}=A,\;G_{1}^{(a,b)}=B,\;G_{\lfloor x\rfloor}^{(a,b)}=\left\{\begin%
{array}[]{cc}aG_{{\lfloor x\rfloor}-1}^{(a,b)}+G_{{\lfloor x\rfloor}-2}^{(a,b)%
},&\text{if}\;{\lfloor x\rfloor}\;\text{is even},\\
&\\
bG_{{\lfloor x\rfloor}-1}^{(a,b)}+G_{{\lfloor x\rfloor}-2}^{(a,b)},&\text{if}%
\;{\lfloor x\rfloor}\;\text{is odd}.\\
\end{array}\right.$$
(20)
Obviously, the properties of (16) will be inherited by (20). For example, suppose $G_{0}^{(a,b)}=2,\;G_{1}^{(a,b)}=3,\;a=0.2$, and $b=0.3$. Then, $G^{(.2,.3)}_{\lfloor x\rfloor}=G_{1}^{(.2,.3)}F_{\lfloor x\rfloor}^{(.2,.3)}+G_{0}^{(.2,.3)}F_{{\lfloor x\rfloor}-1}^{(.3,.2)}$. Also,
$$\lim_{x\rightarrow\infty}\frac{G_{2\lfloor x\rfloor}^{(.2,.3)}}{G_{2\lfloor x\rfloor-1}^{(.2,.3)}}=0.9226,\quad\lim_{x\rightarrow\infty}\frac{G_{2\lfloor x\rfloor+1}^{(.2,.3)}}{G_{2\lfloor x\rfloor}^{(.2,.3)}}=1.3839,\quad\lim_{x\rightarrow\infty}\frac{G_{\lfloor x\rfloor+2}^{(.2,.3)}}{G_{\lfloor x\rfloor}^{(.2,.3)}}=1.2768.$$
These values are $\alpha/b$, $\alpha/a$, and $\alpha+1$, respectively, for $\alpha=(ab+\sqrt{a^{2}b^{2}+4ab})/2\approx 0.2768$.
If $a=b=0.1$, then the ratio of successive terms of $\{G_{n}^{(.1,.1)}\}$ with $G_{0}^{(.1,.1)}=2$ and $G_{1}^{(.1,.1)}=3$ converges to $1.05125$. See Figure 1 for the plots of these limits.
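These limits are easy to reproduce numerically with the stated data $a=0.2$, $b=0.3$, $G_{0}=2$, $G_{1}=3$; a sketch (variable names are ours) comparing empirical ratios with the closed forms $\alpha/b$, $\alpha/a$, and $\alpha+1$:

```python
import math

a, b, A, B = 0.2, 0.3, 2, 3
alpha = (a * b + math.sqrt(a**2 * b**2 + 4 * a * b)) / 2

# iterate the 2-periodic recurrence (16)
G = [A, B]
for n in range(2, 400):
    G.append((a if n % 2 == 0 else b) * G[-1] + G[-2])

even_odd = G[398] / G[397]   # G_{2n} / G_{2n-1}
odd_even = G[399] / G[398]   # G_{2n+1} / G_{2n}
two_step = G[399] / G[397]   # G_{n+2} / G_n
print(even_odd, alpha / b)   # both about 0.9226
print(odd_even, alpha / a)   # both about 1.3839
print(two_step, alpha + 1)   # both about 1.2768
```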
Now, we may take the generalized Fibonacci sequence (15) a bit further by considering a 3-periodic ternary recurrence sequence related to the usual Tribonacci sequence:
$$\displaystyle\quad\quad T_{0}^{(a,b,c)}=0,\quad T_{1}^{(a,b,c)}=0,\quad T_{2}^%
{(a,b,c)}=1,$$
$$\displaystyle T_{n}^{(a,b,c)}=$$
$$\displaystyle\left\{\begin{array}[]{cc}aT_{n-1}^{(a,b,c)}+T_{n-2}^{(a,b,c)}+T_%
{n-3}^{(a,b,c)},&\text{if}\;n\equiv 0\;(\text{mod}\;3),\\
bT_{n-1}^{(a,b,c)}+T_{n-2}^{(a,b,c)}+T_{n-3}^{(a,b,c)},&\text{if}\;n\equiv 1\;%
(\text{mod}\;3),\\
cT_{n-1}^{(a,b,c)}+T_{n-2}^{(a,b,c)}+T_{n-3}^{(a,b,c)},&\text{if}\;n\equiv 2\;%
(\text{mod}\;3).\\
\end{array}\right.(n\geq 3)$$
(21)
Suppose we define the sequence $\{U_{n}^{(a,b,c)}\}_{n=0}^{\infty}$ satisfying the same recurrence equation as in (21) but with arbitrary initial conditions $U_{0}^{(a,b,c)},\;U_{1}^{(a,b,c)},\;$ and $U_{2}^{(a,b,c)}$. Then, the sequences $\{U_{n}^{(a,b,c)}\}_{n=0}^{\infty}$ and $\{T_{n}^{(a,b,c)}\}_{n=0}^{\infty}$ are related as follows:
$$U_{n}^{(a,b,c)}=U_{0}^{(a,b,c)}T_{n-1}^{(b,c,a)}+U_{1}^{(a,b,c)}\left(T_{n-1}^%
{(b,c,a)}+T_{n-2}^{(c,a,b)}\right)+U_{2}^{(a,b,c)}T_{n}^{(a,b,c)},\quad(n\geq 2).$$
Remark 2.13.
In general, the $k$-periodic $k$-nary sequence $\{\mathcal{F}_{n}^{(a_{1},a_{2},\ldots,a_{k})}\}\colon\!\!\!=\{\mathfrak{F}_{n%
}^{(k)}\}$ related to $k$-nacci sequence:
$$\displaystyle\mathfrak{F}_{0}^{(k)}=\mathfrak{F}_{1}^{(k)}=\cdots=\mathfrak{F}%
_{k-2}^{(k)}=0,\quad\mathfrak{F}_{k-1}^{(k)}=1,$$
$$\displaystyle\mathfrak{F}_{n}^{(k)}=\left\{\begin{array}[]{cc}a_{1}\mathfrak{F%
}_{n-1}^{(k)}+\sum_{j=2}^{k}\mathfrak{F}_{n-j}^{(k)},&\text{if}\;n\equiv 0\;(%
\text{mod}\;k),\\
a_{2}\mathfrak{F}_{n-1}^{(k)}+\sum_{j=2}^{k}\mathfrak{F}_{n-j}^{(k)},&\text{if%
}\;n\equiv 1\;(\text{mod}\;k),\\
&\vdots\\
a_{k}\mathfrak{F}_{n-1}^{(k)}+\sum_{j=2}^{k}\mathfrak{F}_{n-j}^{(k)},&\text{if%
}\;n\equiv-1\;(\text{mod}\;k).\\
\end{array}\right.(n\geq k)$$
(22)
and the sequence $\{\mathcal{G}_{n}^{(a_{1},a_{2},\ldots,a_{k})}\}\colon\!\!\!=\{\mathfrak{G}_{n}^{(k)}\}$ defined by the same recurrence equation (22) but with arbitrary initial conditions $\mathfrak{G}_{0}^{(k)},\mathfrak{G}_{1}^{(k)},\ldots,\mathfrak{G}_{k-1}^{(k)}$ are related in the following fashion:
$$\mathfrak{G}^{(k)}_{n}=\mathfrak{G}^{(k)}_{0}\mathcal{F}^{(k;1)}_{n-1}+\sum_{m%
=0}^{k-3}\left(\mathfrak{G}^{(k)}_{m+1}\sum_{j=0}^{m+1}\mathcal{F}^{(k;j+1)}_{%
n-1-j}\right)+\mathfrak{G}^{(k)}_{k-1}\;\mathfrak{F}^{(k)}_{n},$$
(23)
where $(k;j)\colon\!\!\!=(a_{j+1},a_{j+2},\ldots,a_{k},a_{1},a_{2},\ldots,a_{j}),j=0,%
1,\ldots,k-1$. This result can be proven by mathematical induction and we leave this to the interested reader.
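Relation (23) can indeed be confirmed by a direct computation before attempting the induction; a sketch for $k=3$ with hypothetical coefficients $(a_{1},a_{2},a_{3})=(2,3,5)$ and hypothetical initial data $1,2,3$:

```python
def periodic_knacci(coefs, init, n_max):
    # k-periodic k-nacci: the n-th step multiplies the previous term by
    # coefs[n % k] and adds the k-1 terms before it (recurrence (22))
    k, s = len(coefs), list(init)
    for n in range(len(init), n_max + 1):
        s.append(coefs[n % k] * s[-1] + sum(s[-k:-1]))
    return s

def rotate(a, j):
    # the coefficient tuple denoted (k; j) in the text
    return a[j:] + a[:j]

a = (2, 3, 5)                         # hypothetical a_1, a_2, a_3 (k = 3)
k = len(a)
F = {j: periodic_knacci(rotate(a, j), [0] * (k - 1) + [1], 30) for j in range(k)}
G = periodic_knacci(a, [1, 2, 3], 30) # arbitrary initial data

def rhs_of_23(n):
    total = G[0] * F[1][n - 1] + G[k - 1] * F[0][n]
    for m in range(k - 2):            # m = 0 .. k-3
        total += G[m + 1] * sum(F[(j + 1) % k][n - 1 - j] for j in range(m + 2))
    return total

assert all(G[n] == rhs_of_23(n) for n in range(k, 31))
print("identity (23) verified for k=3, n=3..30")
```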
References
[1]
Alp, M., Irmak, N., and Szalay, L., Two-periodic ternary recurrences and their binet-formula, Acta Math. Univ. Comenianae, Vol. LXXXI, 2 (2012), pp. 227–232.
[2]
Dresden, G. P. B., A Simplified Binet Formula for $k$-Generalized Fibonacci Numbers, J. Integer Sequences, 19, (2013).
[3]
Edson, M., Yayenie, O., A New Generalization of Fibonacci Sequence and Extended Binet's Formula, Integers, 9 (# A48) (2009), pp. 639–654.
[4]
Edson, M., Lewis, S., Yayenie, O., The $k$-periodic Fibonacci sequence and an extended Binet’s formula, Integers, 11 (# A32) (2011), pp. 639–652.
[5]
Horadam, A. F., Basic properties of certain generalized sequence of numbers, Fibonacci Quarterly, 3 (1965), pp. 161–176.
[6]
Koshy, T., Fibonacci and Lucas Numbers with Applications, John Wiley, New York, 2001.
[7]
Noe, Tony; Piezas, Tito III; and Weisstein, Eric W. ”Fibonacci n-Step Number.” From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/Fibonaccin-StepNumber.html
[8]
Wu, Z., Zhang, H., On the reciprocal sums of higher-order sequences, Adv. Diff. Equ., 2013, 2013:189.
Received: March 3, 2015
Functional Connectome of the Human Brain with Total Correlation
Qiang Li
Image Processing Laboratory
University of Valencia
Valencia, 46980
[email protected]
&Greg Ver Steeg
Information Sciences Institute
University of Southern California
Marina del Rey, CA 90292
[email protected]
&Shujian Yu
Machine Learning Group
UiT - The Arctic University of Norway
9037 Tromsø, Norway
[email protected]
&Jesus Malo
Image Processing Laboratory
University of Valencia
Valencia, 46980
Abstract
Recent studies proposed the use of Total Correlation to describe functional
connectivity among brain regions as a multivariate alternative to conventional pair-wise measures such as correlation or mutual information.
In this work we build on this idea to infer a large scale (whole brain) connectivity network based on Total Correlation and show the possibility of using this kind of networks as biomarkers of brain alterations.
In particular, this work uses Correlation Explanation (CorEx) to estimate Total Correlation.
First, we show that CorEx estimates of total correlation and the associated clustering results are reliable when compared to ground-truth values.
Second, the large-scale connectivity network inferred from more extensive open fMRI datasets is consistent with existing neuroscience studies but, interestingly, can capture additional relations beyond pairs of regions.
Finally, we show that connectivity graphs based on Total Correlation can also be an effective tool to aid in the discovery of brain diseases.
Keywords Total Correlation $\cdot$
CorEx $\cdot$
fMRI $\cdot$
Functional Connectivity $\cdot$
Large Scale Connectome $\cdot$
Biomarkers
1 Introduction
Understanding intelligence is a fundamental scientific problem for both primates and machines. Millions of neurons in the brain interact with each other at both a structural and functional level to drive efficient inference and processing in the brain. Furthermore, the functional connectivity among these regions also reveals how they interact with each other in specific tasks. The brain connectome includes both structural and functional connectivity at different levels. Functional connectivity refers to the statistical dependency of activation patterns between various brain regions that emerges as a result of direct and indirect interactions [1]. A variety of ways to analyze functional connectivity exist. A seed-wise analysis can be performed by selecting a seed-driven hypothesis and analyzing its statistical dependencies with all other voxels outside its limits. Another option is to perform a wide analysis of the voxel or region of interest (ROI), where statistical dependencies on all voxels or ROIs are studied [2]. Structural connectivity refers to the anatomical organization of the brain by means of fiber tracts [3]. The sharing of communication between neurons in multiple regions is coordinated dynamically via changes in neural oscillation synchronizations [4]. When it comes to the brain connectome, functional connectivity refers to how different areas of the brain communicate with one another during task-related or resting-state activities [5].
Information theory is a valuable method to measure interactions in nonlinear systems [6], leading to its increasing use in neuroscience [7, 8, 9, 10, 11, 12, 13, 14].
However, although functional connectivity has already become a hot research topic in neuroscience [15, 16], systematic studies
on the information flow or the redundancy and synergy amongst brain regions remain limited.
Most functional connectivity approaches until now have mainly concentrated on pairwise relationships between two regions.
The conventional approaches used to estimate indirect functional connectivity among brain regions are Pearson correlation [17] and Mutual Information [18, 19, 20, 8].
However, real brain network relationships are often complex, involving more than two regions, and the pairwise dependencies measured by correlation or mutual information cannot reflect these multivariate dependencies.
Therefore, recent studies in neuroscience focus on the development of information-theoretic measures that can handle more than two regions simultaneously such as the Partial Information Decomposition (PID) [21], and the Total Correlation [22, 23].
The PID framework [24] extends the pairwise relationship to three variables and characterizes the mutual information between a pair of source variables $\{X_{1},X_{2}\}$ and a target variable $Y$ (i.e., $I(\{X_{1},X_{2}\};Y)$) by decomposing it into unique, redundant, and synergistic components. A big problem with the PID framework is that its governing equations form an underdetermined system, with only three equations relating the four components. That is, to actually calculate the decomposition, additional assumptions need to be made [25, 26, 27].
Total Correlation (TC) [28] (also known as multi-information [29, 30]) describes the amount of dependence observed in the data and, by definition, can be applied to multiple multivariate variables. Its use to describe functional connectivity in the brain was first proposed as an empirical measure in [23], while in [22] the superiority of total correlation over mutual information was proved analytically. The consideration of low-level vision models allows one to derive analytical expressions for the Total Correlation as a function of the connectivity. These analytical results show that pairwise Mutual Information cannot capture the effect of different intra-cortical inhibitory connections while the Total Correlation can.
Similarly, in analytical models with feedback, synergy can be shown using Total Correlation, while it is not so obvious using mutual information [22].
Moreover, these analytical results allow to calibrate computational estimators of TC.
In this work we build on these empirical and theoretical results [22, 23] to infer a larger scale (whole brain) network based on total correlation for the first time.
As opposed to [22, 23], where the number of considered nodes was limited to the range of tens and focused on specialized subsystems, here we consider wider recordings [31, 32], so we use signals coming from hundreds of nodes across the whole brain. Additionally, we apply our analysis to data of the same scale for regular and altered brains (http://fcon_1000.projects.nitrc.org/indi/ACPI/html/), and we show the possibility of using these wide-range networks as biomarkers. From the technical point of view, here we use Correlation Explanation (CorEx) [33, 34] to estimate Total Correlation in these high-dimensional scenarios.
Furthermore, graph theory and clustering [15, 16] are used here to represent the relationships between the considered regions.
The rest of this paper is organized as follows: Section 2 introduces the necessary information-theoretic concepts and explains CorEx. Sections 3 and 4 present two synthetic experiments showing that CorEx results are reliable. Section 5 estimates the large-scale connectomes with fMRI datasets that involve more than 100 regions across the whole brain. Moreover, we show how the analysis of these large scale networks based on Total Correlation may indicate brain alterations. Sections 6 and 7 give a general discussion and the conclusion of the paper, respectively.
2 Total Correlation as neural connectivity descriptor
2.1 Definitions and Preliminaries
Mutual Information: Given two multivariate random variables $X_{1}$ and $X_{2}$, the mutual information between them, $\mathrm{I}(X_{1};X_{2})$, can be calculated as the difference between the sum of individual entropies, $H(X_{i})$ and the entropy of the variables considered jointly as a single system, $H(X_{1},X_{2})$ [35]:
$$\mathrm{I}(X_{1};X_{2})=\mathrm{H}(X_{1})+\mathrm{H}(X_{2})-\mathrm{H}(X_{1},X_{2})$$
(1)
where for each (multivariate) random variable $\mathrm{v}$, the entropy is $\mathrm{H}(\mathrm{v})=\left\langle-\log_{2}\mathrm{p}(\mathrm{v})\right\rangle$ and the brackets represent expectation values spanning random variables.
The mutual information also can be seen as the information shared by the two variables or the reduction of uncertainty in one variable given the information about the other [36].
Mutual information is better than linear correlation: For Gaussian sources mutual information reduces to linear correlation because the entropy factors in Eq. 1 just depend on $|\langle X_{1}\cdot X_{2}^{\top}\rangle|$. However, for more general (non-Gaussian) sources mutual information cannot be reduced to covariance and cross-covariance matrices. In these (more realistic) situations $I$ is better than the linear correlation because $I$ captures nonlinear relations that are ruled out by $|\langle X_{1}\cdot X_{2}^{\top}\rangle|$.
For an illustration of the qualitative differences between $I$ and linear correlation see the examples in Section 2.2 of [23].
As a result, mutual information has been proposed as a good alternative to linear correlation for estimating functional connectivity [18, 8]. However, mutual information cannot capture dependencies beyond pairs of nodes, and this may be a limitation in complex networks.
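A minimal numerical illustration of this point (our own toy example, in the spirit of the examples cited above): for $Y=X^{2}$ plus small noise, the linear correlation with $X$ is essentially zero, while even a crude histogram (plug-in) estimate of $I$ is clearly positive.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y = x**2 + 0.1 * rng.standard_normal(100_000)  # strong but non-monotonic dependence

# Linear correlation is blind to this symmetric relation...
print(abs(np.corrcoef(x, y)[0, 1]))            # close to 0

# ...but a simple histogram (plug-in) estimate of mutual information is not.
def mutual_information(x, y, bins=30):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

print(mutual_information(x, y))                # clearly positive (in bits)
```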
Total Correlation:
This magnitude describes the dependence among $n$ variables and is a generalization of the mutual information concept from two parties to $n$ parties. The Venn Diagram in Fig. 1 qualitatively illustrates this for three variables. Following Watanabe [28], the total correlation is defined as
$$\mathrm{T}\mathrm{C}\left(\mathrm{X}_{1},\ldots,\mathrm{X}_{\mathrm{n}}\right)\equiv\sum_{i=1}^{n}\mathrm{H}\left(\mathrm{X}_{\mathrm{i}}\right)-\mathrm{H}\left(\mathrm{X}_{1},\ldots,\mathrm{X}_{\mathrm{n}}\right)={\mathrm{D}}_{\mathrm{KL}}\left(\mathrm{p}\left(\mathrm{X}_{1},\ldots,\mathrm{X}_{\mathrm{n}}\right)\|\prod_{\mathrm{i}=1}^{\mathrm{n}}\mathrm{p}\left(\mathrm{X}_{\mathrm{i}}\right)\right)$$
(2)
where $\mathrm{X}\equiv\left(\mathrm{X}_{1},\ldots,\mathrm{X}_{\mathrm{n}}\right)$, and TC can also be expressed as the Kullback Leibler divergence, $D_{KL}$ between the joint probability density and the product of the marginal densities.
From these definitions, if all variables are independent then TC will be zero.
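For a zero-mean multivariate Gaussian with covariance $\Sigma$, Eq. 2 reduces to the closed form $TC=\frac{1}{2}\sum_{i}\log\Sigma_{ii}-\frac{1}{2}\log\det\Sigma$ (the $2\pi e$ terms of the Gaussian entropies cancel), which provides a convenient sanity check for any TC estimator. A minimal sketch (ours, not the CorEx code used later in the paper):

```python
import numpy as np

def gaussian_tc(cov):
    """Total correlation (in nats) of a zero-mean Gaussian with covariance `cov`:
    sum of marginal entropies minus joint entropy, as in Eq. 2."""
    cov = np.asarray(cov, dtype=float)
    sign, logdet = np.linalg.slogdet(cov)
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * np.sum(np.log(np.diag(cov))) - 0.5 * logdet

# Independent variables -> TC = 0; correlated variables -> TC > 0.
print(gaussian_tc(np.eye(3)))                       # 0.0
rho = 0.8
cov = (1 - rho) * np.eye(3) + rho * np.ones((3, 3)) # equicorrelated, unit variances
print(gaussian_tc(cov))                             # positive
```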
The estimation method used in this work (CorEx, presented in the next subsection) uses the TC after conditioning on some other variable $Y$, which is defined as [35]
$$TC(X|Y)=\sum_{i=1}^{n}H(X_{i}|Y)-H(X|Y)=D_{\mathrm{KL}}\left(p(x|y)\,\Big\|\,\prod_{i=1}^{n}p(x_{i}|y)\right)$$
(3)
Total correlation is better than mutual information:
This superiority is not only due to the obvious $n$-wise versus pair-wise definitions in Eqs. 1 and 2; it also stems from the different properties of these magnitudes.
To illustrate this point let us recall one of the analytical examples in [22]. Consider the following feedforward network:
$$X_{1},\;X_{2}\;\longrightarrow\;\mathbf{e}\;\xrightarrow{\;f\;}\;X_{3}$$
where one is interested in the connectivity between the neurons (nodes or coefficients) in the hidden layer $\mathbf{e}$, but the nonlinear function $f(\cdot)$ is unknown and one only has experimental access to the signals in the regions $X_{1}$, $X_{2}$ and $X_{3}$.
In this situation one could think of measuring $I(X_{1},X_{3})=I(X_{1},f(\mathbf{e}))$ or $I(X_{2},X_{3})=I(X_{2},f(\mathbf{e}))$. However, the invariance of $I$ under arbitrary nonlinear re-parametrizations of the variables [36] implies that these measures are insensitive to $f$ and to the connectivity therein.
On the contrary, as pointed out in [22], using the expression for the variation of TC under nonlinear transforms [37, 13], the variation of $H$ under nonlinear transforms [35], and the definition in Eq. 2, one obtains $TC(X_{1},X_{2},X_{3})=[TC(X_{1},X_{2},\mathbf{e})-TC(\mathbf{e})]+TC(X_{3})$, where the term in brackets does not depend on $f(\cdot)$, but the last term definitely does.
2.2 Total Correlation estimated from CorEx
Straightforward application of the direct definition of TC is not feasible in high-dimensional scenarios, and alternatives are required [30, 38].
A practical approach to estimating total correlation is via latent factor modelling. The idea is to explicitly construct latent factors, $Y$, that capture the dependencies in the data. If we measure dependencies via total correlation, $TC(X)$, then we say that the latent factors explain the dependencies if $TC(X|Y)=0$. We can measure the extent to which $Y$ explains the correlations in $X$ by looking at how much the total correlation is reduced:
$$\displaystyle TC(X)-TC(X|Y)=\sum_{i=1}^{n}I(X_{i};Y)-I(X;Y)$$
(4)
The total correlation is always non-negative, and the decomposition on the right in terms of mutual information can be verified directly from the definitions. Any latent factor model can be used to lower bound total correlation, and the terms on the right-hand side of Eq. 4 can be further lower-bounded with tractable estimators using variational methods, and VAEs are a popular example [39].
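The identity in Eq. 4 follows term by term from the definitions: $TC(X)-TC(X|Y)=\sum_i[H(X_i)-H(X_i|Y)]-[H(X)-H(X|Y)]=\sum_i I(X_i;Y)-I(X;Y)$. The sketch below verifies it numerically on a random discrete joint distribution of our own construction (two binary observed variables and one binary latent).

```python
import numpy as np

rng = np.random.default_rng(0)

def H(p):
    """Shannon entropy (nats) of a (possibly multi-dim) pmf."""
    p = p.flatten()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Random joint pmf over (X1, X2, Y), each binary.
p = rng.random((2, 2, 2))
p /= p.sum()
px = p.sum(axis=2)          # p(x1, x2)
py = p.sum(axis=(0, 1))     # p(y)

# Left side of Eq. 4: TC(X) - TC(X|Y).
tc_x = H(px.sum(axis=1)) + H(px.sum(axis=0)) - H(px)
tc_x_given_y = sum(
    py[y] * (H(p[:, :, y].sum(axis=1) / py[y])
             + H(p[:, :, y].sum(axis=0) / py[y])
             - H(p[:, :, y] / py[y]))
    for y in range(2))
lhs = tc_x - tc_x_given_y

# Right side of Eq. 4: sum_i I(Xi; Y) - I(X; Y).
def mi(joint):
    """Mutual information of a 2-D joint pmf."""
    return H(joint.sum(axis=1)) + H(joint.sum(axis=0)) - H(joint)

rhs = mi(p.sum(axis=1)) + mi(p.sum(axis=0)) - mi(p.reshape(4, 2))
print(abs(lhs - rhs))  # agrees up to floating point
```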
Although latent factor models do not give a direct total correlation estimate the way Rotation-based Iterative Gaussianization (RBIG) [30] and the matrix-based Rényi entropy [40] do, the approach can be complementary: constructing latent factors helps in dealing with the curse of dimensionality and in interpreting the dependencies in the data. With these goals in mind, we now describe a particular latent factor approach known as Total Cor-relation Ex-planation (CorEx) [33].
CorEx constructs a factor model by learning latent factors through a factorized probabilistic function of the input data, $p(y|x)=\prod_{j=1}^{m}p(y_{j}|x)$, with $m$ discrete latent factors $Y_{j}$. This function is optimized to give the tightest possible lower bound for Eq. 4:
$$\displaystyle TC(X)\geq\max_{p(Y_{j}|x)}\sum_{i=1}^{n}I(X_{i};Y)-I(X;Y)=\sum_{j=1}^{m}\left(\sum_{i=1}^{n}\alpha_{i,j}I(X_{i};Y_{j})-I(Y_{j};X)\right)$$
The factorization of the latents leads to the term $I(X;Y)=\sum_{j}I(Y_{j};X)$, which can be calculated directly. The term $I(X_{i};Y)$ is still intractable and is decomposed using the chain rule into $I(X_{i};Y)\approx\sum_{j}\alpha_{i,j}I(X_{i};Y_{j})$. Each $I(X_{i};Y_{j})$ can then be tractably estimated [34, 33]. The free parameters $\alpha_{i,j}$ must be updated while searching for the latent factors that optimize the objective. Starting from an initialization at $t=0$, the $\alpha_{i,j}$ are updated according to the rule:
$$\alpha_{i,j}^{t+1}=(1-\lambda)\alpha_{i,j}^{t}+\lambda\alpha_{i,j}^{**}$$
Here $\alpha_{i,j}^{**}=\exp\left(\gamma\left(I\left(X_{i}:Y_{j}\right)-\max_{j}I\left(X_{i}:Y_{j}\right)\right)\right)$, and $\lambda$ and $\gamma$ are constant parameters. This decomposition allows us to quantify the contribution of each latent factor to the total correlation bound, which aids interpretability.
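The update rule above can be sketched directly in numpy. The MI matrix, $\lambda$, and $\gamma$ below are illustrative values of our own choosing (the text does not fix them); the point is only that iterating the rule drives each row of $\alpha$ toward the latent factor with which that variable shares the most information.

```python
import numpy as np

def update_alpha(alpha, mi, lam=0.3, gamma=10.0):
    """One step of the alpha update in the text:
    alpha^{t+1} = (1 - lam) * alpha^t + lam * alpha**,
    with alpha**_{ij} = exp(gamma * (I(Xi;Yj) - max_j I(Xi;Yj))).
    lam and gamma are illustrative constants, not values prescribed here."""
    alpha_star = np.exp(gamma * (mi - mi.max(axis=1, keepdims=True)))
    return (1 - lam) * alpha + lam * alpha_star

# Synthetic I(Xi; Yj) estimates: X1 is most informative about Y1, X2 about Y2.
mi = np.array([[0.9, 0.1],
               [0.2, 0.8]])
alpha = np.full((2, 2), 0.5)
for _ in range(50):
    alpha = update_alpha(alpha, mi)
print(np.round(alpha, 3))  # rows concentrate on the argmax column
```

The soft-max-like form of $\alpha^{**}$ means the assignment is nearly hard for large $\gamma$ and smoother for small $\gamma$.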
CorEx can be further extended into a hierarchy of latent factors [34], helping to reveal the hierarchical structure that we expect to play an important role in the brain. The latent factors at layer $k$ explain the dependence of the variables in the layer below:
$$TC(X)\geq\sum_{k=1}^{r}\sum_{j=1}^{m}\left(\sum_{i=1}^{n}\alpha_{i,j}^{k}I(Y_{i}^{k-1};Y_{j}^{k})-I(Y_{j}^{k};Y^{k-1})\right)$$
Here $k$ indexes the layer and $Y^{0}\equiv X$ denotes the observed variables. Ultimately, we have a bound on TC that gets tighter as we add more latent factors and layers, and for which we can quantify the contribution of each factor to the bound. We exploit this decomposition for interpretability [41], as illustrated in Fig. 2. CorEx prefers modular or tree-like latent factor models, which are beneficial for dealing with the curse of dimensionality [42]. For neuroimaging, we expect this modular decomposition to be effective because functional specialization in the brain is often associated with spatially localized regions. We explore this hypothesis in the experiments.
3 Experiment 1: Total Correlation for independent mixtures
In this experiment, we estimate the total correlation of three independent variables $X$, $Y$ and $Z$, each following a Gaussian distribution. For this setup the ground truth satisfies $TC(X,Y,Z)=0$, and we generated samples of various lengths. The estimated total correlation values are shown in Fig. 3. Here we compared CorEx with other total correlation estimators: RBIG [30], the matrix-based Rényi entropy [40], the Shannon discrete entropy222https://github.com/nmtimme/Neuroscience-Information-Theory-Toolbox, and the ground truth. The left panel (2-dimensional) shows mutual information, while the middle (3-dimensional) and right (4-dimensional) panels show total correlation. Since the simulated data are Gaussian and independent, their dependency should be zero. We find that CorEx and RBIG both perform well and are very stable; the matrix-based Rényi entropy improves with increasing dimension, while the Shannon discrete entropy becomes more accurate as the number of samples grows. These results support the accuracy of total correlation estimation with CorEx. In the following sections, we estimate functional connectivity by applying CorEx to real fMRI datasets.
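For Gaussian data the ground truth in this experiment has a closed form: combining Eq. 2 with the Gaussian entropy formula gives $TC=-\tfrac{1}{2}\log\det R$, where $R$ is the correlation matrix. The sketch below (our own sanity check, not one of the estimators compared in Fig. 3) uses this plug-in formula to confirm that independent Gaussians give $TC\approx 0$ while correlated ones do not.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_tc(samples):
    """Plug-in Gaussian TC estimate: TC = -0.5 * log det(R),
    where R is the sample correlation matrix (exact for Gaussian data)."""
    r = np.corrcoef(samples, rowvar=False)
    return -0.5 * np.log(np.linalg.det(r))

# Independent Gaussians: ground-truth TC = 0; the estimate shrinks with n.
x = rng.standard_normal((50_000, 3))
tc_indep = gaussian_tc(x)

# Correlated Gaussians: ground-truth TC = -0.5*log(1 - 0.8**2) ≈ 0.511 nats.
cov = np.array([[1.0, 0.8, 0.0],
                [0.8, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
y = rng.multivariate_normal(np.zeros(3), cov, size=50_000)
tc_dep = gaussian_tc(y)
print(tc_indep, tc_dep)
```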
4 Experiment 2: Clustering by Total Correlation for dependent and independent mixtures
To evaluate CorEx's performance in clustering tasks, we generated two groups of variables. The elements of group $X$, namely $X1\_ND$, $X2\_ND$, and $X3\_ND$, follow Gaussian distributions and are completely independent of each other and of group $Y$. The variables in group $Y$ are $Y1\_D$, $Y2\_D$ (generated from $Y1\_D$), and $Y3\_D$ (generated from $Y2\_D$), which are therefore connected to each other. We then compare the CorEx clustering results with pairwise Pearson correlation and pairwise mutual information in recovering the groups.
In Fig. 4, we found that CorEx based on total correlation estimates the dependencies with high accuracy (Fig. 4d) compared to pairwise Pearson correlation (Fig. 4b) and pairwise mutual information (Fig. 4c). By construction, the elements of group $Y$ should be clustered together, while the elements of group $X$ should be completely independent of each other and of group $Y$; the ground truth is presented in Fig. 4a. We then estimated clustering results with pairwise Pearson correlation (threshold $0.1$) and pairwise mutual information (threshold $0.4$). The pairwise approaches clearly make large errors in estimating the statistical dependencies; pairwise mutual information is better than pairwise Pearson correlation but still clusters incorrectly. The clustering results obtained with CorEx via total correlation give the best performance. Moreover, we use purity as a criterion of clustering quality because it is a straightforward and transparent evaluation metric [43]. To calculate purity, each cluster is allocated to the class that occurs most frequently within it, and the accuracy of this assignment is determined by counting the number of correctly assigned elements and dividing by $N$ ($N=6$). Formally:
$$\operatorname{Purity}(X,Y)=\frac{1}{N}\sum_{i}\max_{j}\left|X_{i}\cap Y_{j}\right|$$
(5)
where $X=\{X1\_ND,X2\_ND,X3\_ND\}$ is the set of clusters and $Y=\{Y1\_D,Y2\_D,Y3\_D\}$ is the set of classes. Fig. 4e presents the clustering performance of the pairwise approaches and CorEx with purity as the criterion. Poor clusterings have near-zero purity (the lower bound), while a perfect clustering has a purity of one (the maximum). Based on Eq. 5, we obtain purity values of $0.17$ and $0.33$ for the pairwise approaches, while the purity value for CorEx is $0.83$. In summary, CorEx based on total correlation clearly outperforms the pairwise approaches.
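Eq. 5 can be implemented in a few lines. The labelings below are illustrative (they are not the actual cluster assignments from Fig. 4): a perfect clustering gives purity 1, while misplacing one of the six variables gives $5/6\approx 0.83$.

```python
import numpy as np

def purity(pred_labels, true_labels):
    """Purity per Eq. 5: assign each predicted cluster to its majority true
    class, count the correctly assigned elements, and divide by N."""
    pred = np.asarray(pred_labels)
    true = np.asarray(true_labels)
    correct = sum(
        np.bincount(true[pred == c]).max() for c in np.unique(pred))
    return correct / len(true)

# Six variables as in the experiment: class 0 = the independent X group,
# class 1 = the dependent Y group (label vectors here are illustrative).
truth = [0, 0, 0, 1, 1, 1]
perfect = [0, 0, 0, 1, 1, 1]
one_error = [0, 0, 1, 1, 1, 1]  # one X variable wrongly grouped with the Y's
print(purity(perfect, truth), purity(one_error, truth))
```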
5 Experiment 3: Brain functional connectivity analysis using Total Correlation
A network is a collection of nodes and edges, where nodes represent fundamental elements (e.g., brain regions) within the system of interest (e.g., the brain), and weighted edges represent the dependencies between those elements. The brain is a self-organizing, chaotic system in which information is coupled between and among brain regions via functional connectivity, which is key to understanding information processing in the brain. Fig. 5 illustrates the schematic representation of network construction from fMRI.
5.1 First total correlation-based clustering example from fMRI data
The data were taken from a resting-state fMRI experiment in which a subject maintained alert wakefulness while not performing any behavioral task, and the BOLD signal was recorded. The data were downloaded from Nitime 333https://nipy.org/nitime/index.html and preprocessed, and time series were extracted from different regions of interest (ROIs) in the brain. The ROI abbreviations and full names are as follows: Cau, Caudate; Put, Putamen; Thal, Thalamus; Fpol, Frontal pole; Ang, Angular gyrus; SupraM, Supramarginal gyrus; MTG, Middle temporal gyrus; Hip, Hippocampus; PostPHG, Posterior parahippocampal gyrus; APHG, Anterior parahippocampal gyrus; Amy, Amygdala; ParaCing, Paracingulate gyrus; PCC, Posterior cingulate cortex; Prec, Precuneus; R, right hemisphere; L, left hemisphere. First, we estimated pairwise functional connectivity with Pearson correlation and mutual information, visualizing the outcomes as circular weighted graphs. In the top row of Fig. 7 (left and right), Pearson correlation and mutual information estimate similar pairwise dependencies, but the latter captures stronger weights between ROIs such as LPCC and RPCC, LThal and RThal, and LAmy and RAmy.
Meanwhile, we also use weighted graph theory to cluster the dependence among ROIs; for legibility with the CorEx approach, we threshold edges with weight less than 0.16. As mentioned above, mutual information only estimates a more robust pairwise relationship between ROIs than correlation. However, when we go beyond pairwise ROIs, CorEx captures richer information among all ROIs (see Fig. 7, bottom row). Here we use a representation with $m_{1}=10$, $m_{2}=3$, $m_{3}=1$ in the CorEx estimate of TC; the corresponding convergence curves are plotted in Fig. 6, which shows that the total correlation lower bound stops increasing. Fig. 7 (bottom row) shows the overall structure of the learned hierarchical model. Edge thickness is determined by $\alpha_{i,j}I\left(X_{i}:Y_{j}\right)$, and the size of each node is proportional to the total correlation that a latent factor explains about its children. The discovered structure captures several significant relationships among ROIs that are consistent with the correlation and mutual information results, e.g., LPCC and RPCC, LThal and RThal, LParaCing and RParaCing, LPut and RPut. Furthermore, TC discovered some previously unknown beyond-pairwise relationships; for example, LCau, RCau, LFpol, and RFpol are clustered under node $0$, reflecting their dense mutual dependency during this cognitive state compared to other ROIs in the brain.
5.2 Large scale Connectome with resting-state fMRI
5.2.1 Selection of a pre-defined atlas
We use the Automated Anatomical Labeling (AAL) atlas [44], a structural atlas with 116 ROIs identified from the anatomy of a reference subject (see Fig. 8).
5.2.2 Time-series signals extraction
HCP and ACPI provide access to raw and preprocessed data as well as phenotypic information about the data samples. The raw rs-fMRI data were preprocessed using the Configurable Pipeline for the Analysis of Connectomes, an open-source software pipeline that allows for automated rs-fMRI data preprocessing and analysis. After defining anatomical brain ROIs with the AAL atlas, we extract a time series for each ROI in each subject by computing the weighted average of the fMRI BOLD signals across all voxels in each region. The BOLD signal in each region is then normalized and subsampled by repetition time (TR). Finally, we average the time-series signals of all subjects in each ROI.
5.2.3 HCP900
The Human Connectome Project contains imaging and behavioral data from healthy individuals [31]. To investigate resting-state functional connectivity, we used preprocessed rest-fMRI data from the HCP900444https://www.humanconnectome.org/ release [32]. Here we use a representation with $m_{1}=10$, $m_{2}=5$, $m_{3}=1$ in the CorEx estimate of TC, and threshold edges with weight less than 0.16 for legibility. Fig. 9 shows whole-brain resting-state functional connectivity estimated with CorEx compared to Pearson correlation and mutual information. CorEx mostly captures relationships among neighboring brain regions, which cluster together and communicate with other areas; e.g., node $0$ has a larger node size than the other nodes.
From Fig. 9, we found that brain regions are functionally clustered together in a way that is also consistent with structural connectivity based on physical distance. For example, under node $0$ the cerebellum and vermis regions cluster densely together, while under node $1$ the frontal lobes cluster together and are also densely functionally connected with the temporal lobe, and so on. The different colors indicate different brain regions, based on Table 1. In addition, Fig. 9 shows that both functional integration and segregation exist in the brain.
5.2.4 Computational psychiatry applications with ACPI
The Addiction Connectome Preprocessed Initiative is a longitudinal study investigating the effects of cannabis use among adults with a childhood diagnosis of ADHD. In particular, we use readily preprocessed rest-fMRI data from the Multimodal Treatment Study of Attention Deficit Hyperactivity Disorder (MTA). We attempt to use functional connectivity as a biomarker to discriminate whether individuals have consumed marijuana or not (62 in the marijuana group vs. 64 in the control group). Comparing whole-brain functional connectivity between the control and patient groups, we found altered functional connectivity in the patient group relative to the healthy group (see Fig. 10). The most significant alterations occurred between the frontoparietal and motor regions, and functional connectivity was generally sparser in the patient group. We also discovered that marijuana users had more interaction between BOLD time series in particular ROIs, such as the cerebellum, frontoparietal, and default mode regions, than controls; e.g., cerebellum regions mainly cluster densely around node $0$ compared to the control group. This may also explain why marijuana users are more active and aggressive, since the frontoparietal network controls cognitive execution and decision-making, the cerebellum relates to action, and the default mode network is dysfunctional in addiction. All of the above results are consistent with previous related research [45, 46, 47]. Moreover, we found some previously unreported disconnections between some visual regions and other brain areas; based on related research [48, 49], we suggest that marijuana use may also alter visual perception.
6 Discussion
This manuscript presents a higher-order information-theoretic measure to estimate functional connectivity. We demonstrated total correlation performance in real-world scenarios using large fMRI datasets from HCP and ACPI. In this study, we estimated total correlation with CorEx under different situations; the approach has its own pros and cons, which we discuss below. We found that total correlation can serve as a metric for estimating functional connectivity in the human brain: it identifies well-known functional connectivities and also captures some previously unknown nonlinear relationships among brain regions. To the best of our knowledge, this is the first time total correlation has been used to estimate large-scale functional connectivity for a whole-brain AAL atlas with 116 structural ROIs. Total correlation may also serve as a tool to find biomarkers that help diagnose brain-related diseases.
We now discuss some advantages and limitations of this research. First, given the curse of dimensionality of fMRI, we need a low-dimensional representation that helps characterize the connectivity. Traditional general linear models (GLM), with expert-defined ROIs or the AAL atlas, are frequently used to find ROIs in task-related or resting-state experiments. However, we should be able to do better with a data-driven approach. Sample sizes and statistical thresholds are known to have a major impact on the statistical power and accuracy of GLM-based ROI selection, and previous research has revealed that GLM has limited statistical power for inference from fMRI data [50, 51]. Nevertheless, we used GLM-based ROI selection in the task-related fMRI datasets, which may affect the final result when estimating functional connectivity.
Second, CorEx is model-independent: no anatomical or functional prior knowledge is required to estimate ROIs. The method is entirely data-driven, so it can analyze networks that have not yet been investigated, which could be a future extension of this work. It is also possible to use total correlation as a pre-analysis for techniques like DCM that need constraints on the underlying network. What differentiates the CorEx algorithm is that it tries to break the variables into clusters with high TC; in other words, the bound tries to find a tree of latent factors that explains the total correlation. Using these clusters as ROIs is a more data-driven way to define regions and then connectivity. This prioritization of "modular" solutions in CorEx was not emphasized in the original research. A further reason we used CorEx to estimate functional connectivity on large-scale fMRI datasets is that it is a clustering approach via TC: it estimates total correlation via hierarchical maximization of the correlation between variables in the previous and current layers, with a tight information bound that pushes the model toward more accurate relationships among variables in real neural signals. Third, as a limitation, TC is an undirected information quantity that cannot determine the direction of information flow between brain regions. Meanwhile, we also discovered some previously unknown functional connectivity in the real fMRI datasets.
7 Conclusions
We have introduced total correlation to capture multivariate interactions among brain regions, and verified experimentally that it is effective for reconstructing multivariate relationships in the brain. In this study, CorEx was adopted to estimate total correlation; this approach can capture functional connectivity characteristics beyond pairwise brain regions. We evaluated the method with task-related and benchmark resting-state fMRI datasets, and found that multivariable relationships cannot be detected with pairwise correlation and mutual information alone; they can be clustered only with total correlation. Therefore, total correlation measures are important for finding complicated functional connectivity among brain regions. We have also shown that total correlation can accurately estimate functional connectivity in real neural datasets and find biomarkers for diagnosing brain diseases.
In the future, we plan to use the functional connectivity relationships discovered by total correlation as an input to existing graph neural networks (GNNs) [52] for interpretable brain disease diagnosis, so that practitioners or doctors can identify the subgraphs (or modules) most informative for the decision (e.g., autism patients vs. healthy controls). Recently proposed approaches (e.g., [53, 54]) all rely on pairwise relationships estimated by the linear correlation coefficient as input, which essentially ignores high-order dependence. In this sense, we believe our approach has the potential to improve the explanation performance of existing GNN explainers on brains.
Acknowledgement
QL and JM were partially funded by the following Spanish/European grants from GVA/AEI/FEDER/EU: MICINN PID2020-118071GB-I00, MICINN PDC2021-121522-C21, and GVA Grisolía-P/2019/035. GVS acknowledges support from the Defense Advanced Research Projects Agency (DARPA) under award FA8750-17-C-0106. SY was funded by the Research Council of Norway under grant no. 309439. Finally, we thank the organizers of the HCP and ACPI for providing the datasets used in this study.
Author Contributions:
Conceptualization, methodology, software, validation, writing—original draft preparation, writing—review, and editing, QL., GVS., SY., and JM. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
TC: Total Correlation
CorEx: Correlation Explanation
fMRI: functional Magnetic Resonance Imaging
BOLD: Blood-Oxygen-Level-Dependent Imaging
DCM: Dynamic causal modeling
GLM: General Linear Model
ROI: Region of Interest
HCP: Human Connectome Project
MTA: Multimodal Treatment of Attention Deficit Hyperactivity Disorder
References
[1]
Karl Friston.
Functional and effective connectivity: A review.
Brain connectivity, 1:13–36, 01 2011.
[2]
Martijn Heuvel and Hilleke Pol.
Exploring the brain network: A review on resting-state fmri
functional connectivity.
European neuropsychopharmacology : the journal of the European
College of Neuropsychopharmacology, 20:519–34, 08 2010.
[3]
Olaf Sporns, Giulio Tononi, and Rolf Kötter.
The human connectome: A structural description of the human brain.
PLoS computational biology, 1:e42, 10 2005.
[4]
Andre Bastos and Jan-Mathijs Schoffelen.
A tutorial review of functional connectivity analysis methods and
their interpretational pitfalls.
Frontiers in Systems Neuroscience, 9, 01 2016.
[5]
J.T. Lizier, J. Heinzle, A. Horstmann, J. Haynes, and M. Prokopenko.
Multivariate information-theoretic measures reveal directed
information structure and task relevant changes in fMRI connectivity.
J. Comput. Neurosci., 30(1):85–107, 2011.
[6]
Claude E. Shannon.
A mathematical theory of communication.
Bell Syst. Tech. J., 27(3):379–423, 1948.
[7]
Eugenio Piasini and Stefano Panzeri.
Information theory in neuroscience.
Entropy, 21:62, 01 2019.
[8]
Robin Ince, Bruno Giordano, Christoph Kayser, Guillaume Rousselet, Joachim
Gross, and Philippe Schyns.
A statistical framework for neuroimaging data analysis based on
mutual information estimated via a gaussian copula.
Human brain mapping, 38, 11 2016.
[9]
Alexander Dimitrov, Aurel Lazar, and Jonathan Victor.
Information theory in neuroscience.
Journal of computational neuroscience, 30:1–5, 02 2011.
[10]
Alexander Borst and Frédéric Theunissen.
Information theory and neural coding.
Nature neuroscience, 2:947–57, 12 1999.
[11]
Gasper Tkacik, Olivier Marre, Thierry Mora, Dario Amodei, Michael Berry II, and
William Bialek.
The simplest maximum entropy model for collective behavior in a
neural network.
Journal of Statistical Mechanics: Theory and Experiment, 2013,
07 2012.
[12]
A. Gomez-Villa, M. Bertalmio, and J. Malo.
Visual information flow in wilson-cowan networks.
J. Neurophysiol., 2020.
[13]
J. Malo.
Spatio-chromatic information available from different neural layers
via gaussianization.
J. Math. Neurosci., 10(18), 2020.
[14]
Jesús Malo.
Information flow in biological networks for color vision.
Entropy, 24(10), 2022.
[15]
Farzad Farahani, Waldemar Karwowski, and Nichole Lighthall.
Application of graph theory for identifying connectivity patterns in
human brain networks: A systematic review.
Frontiers in Neuroscience, 13:585, 06 2019.
[16]
Olaf Sporns.
Graph theory methods: applications in brain networks.
Dialogues in Clinical Neuroscience, 20:111 – 121, 2018.
[17]
Ernesto Pereda, Rodrigo Quian, and Joydeep Bhattacharya.
Nonlinear multivariate analysis of neurophysiological signals.
Progress in neurobiology, 77:1–37, 09 2005.
[18]
Barry Chai, Dirk B. Walther, Diane M. Beck, and Li Fei-Fei.
Exploring functional connectivity of the human brain using
multivariate information analysis.
In Proceedings of the 22nd International Conference on Neural
Information Processing Systems, NIPS’09, page 270–278, Red Hook, NY, USA,
2009. Curran Associates Inc.
[19]
Zhe Wang, Ahmed Alahmadi, David Zhu, and Tongtong li.
Brain functional connectivity analysis using mutual information.
In 2015 IEEE Global Conference on Signal and Information
Processing (GlobalSIP), pages 542–546, 12 2015.
[20]
Mohamad El Sayed Hussein Jomaa, Marcelo Colominas, Nisrine Jrad, Patrick
Van Bogaert, and Anne Humeau-Heurtier.
A new mutual information measure to estimate functional connectivity:
Preliminary study.
In Conference proceedings: Annual International Conference of
the IEEE Engineering in Medicine and Biology Society, volume 2019, pages
640–643, 07 2019.
[21]
SP. Sherrill, NM. Timme, JM. Beggs, and EL. Newman.
Partial information decomposition reveals that synergistic neural
integration is greater downstream of recurrent information flow in
organotypic cortical cultures.
PLoS Comput. Biol., 17(7):e1009196, 2021.
[22]
Qiang Li, Greg Ver Steeg, and Jesus Malo.
Functional connectivity in visual areas from total correlation.
ArXiV https://arxiv.org/abs/2208.05770, 08 2022.
[23]
Qiang Li.
Functional connectivity inference from fmri data using multivariate
information measures.
Neural Networks, 146:85–97, 2022.
[24]
Paul L Williams and Randall D Beer.
Nonnegative decomposition of multivariate information.
arXiv preprint arXiv:1004.2515, 2010.
[25]
James Kunert-Graf, Nikita Sakhanenko, and David Galas.
Partial information decomposition and the information delta: a
geometric unification disentangling non-pairwise information.
Entropy, 22(12):1333, 2020.
[26]
Artemy Kolchinsky.
A novel approach to the partial information decomposition.
Entropy, 24(3), 2022.
[27]
David Balduzzi and Giulio Tononi.
Integrated information in discrete dynamical systems: Motivation and
theoretical framework.
PLOS Computational Biology, 4(6):1–18, 06 2008.
[28]
Satosi Watanabe.
Information theoretical analysis of multivariate correlation.
IBM Journal of research and development, 4(1):66–82, 1960.
[29]
M. Studeny and J. Vejnarova.
The multi-information function as a tool for measuring stochastic
dependence.
Learning in graphical models, pages 261–298, January 1998.
[30]
V. Laparra, G. Camps-Valls, and J. Malo.
Iterative gaussianization: from ICA to random rotations.
IEEE Trans. Neural Networks, 22(4):537–549, 2011.
[31]
David Van Essen, Stephen Smith, Deanna Barch, Timothy Behrens, Essa Yacoub, and
Kamil Ugurbil.
The wu-minn human connectome project: an overview.
NeuroImage, 80, 05 2013.
[32]
D.C. Essen, K Ugurbil, Edward Auerbach, Deanna Barch, T.E.J. Behrens, Richard
Bucholz, A Chang, Liyong Chen, Maurizio Corbetta, Sandra Curtiss, Stefania
Della Penna, David Feinberg, Matthew Glasser, Noam Harel, A.C. Heath, Linda
Larson-Prior, Daniel Marcus, Georgios Michalareas, Steen Moeller, and Essa
Yacoub.
The human connectome project: A data acquisition perspective.
NeuroImage, 62:2222–31, 02 2012.
[33]
Greg Ver Steeg and Aram Galstyan.
Discovering structure in high-dimensional data through correlation
explanation.
In Advances in Neural Information Processing Systems,
NIPS’14, 2014.
[34]
Greg Ver Steeg and Aram Galstyan.
Maximally informative hierarchical representations of
high-dimensional data.
In AISTATS’15, 2015.
[35]
Thomas M. Cover and Joy A. Thomas.
Elements of Information Theory (Wiley Series in
Telecommunications and Signal Processing).
Wiley-Interscience, USA, 2006.
[36]
Alexander Kraskov, Harald Stögbauer, and Peter Grassberger.
Estimating mutual information.
On the origin of anomalous velocity clouds in the Milky Way.
Tim W. Connors^1, Daisuke Kawata^{1,2}, Jeremy Bailin^1, Jason Tumlinson^3, Brad K. Gibson^4
^1 Centre for Astrophysics & Supercomputing, Swinburne University, Hawthorn, VIC 3122, Australia
^2 The Observatories of the Carnegie Institution of Washington, 813 Santa Barbara Street, Pasadena, CA 91101, USA
^3 Yale Center for Astronomy and Astrophysics, Department of Physics, P.O. Box 208101, New Haven, CT 06520, USA
^4 Centre for Astrophysics, University of Central Lancashire, Preston, PR1 2HE, United Kingdom
[email protected]
Abstract
We report that neutral hydrogen (H i) gas clouds, resembling High
Velocity Clouds (HVCs) observed in the Milky Way (MW), appear in
MW-sized disk galaxies formed in high-resolution Lambda Cold Dark
Matter ($\Lambda$CDM) cosmological simulations which include
gas-dynamics, radiative cooling, star formation, supernova feedback,
and metal enrichment. Two such disk galaxies are analyzed, and
H i column density and velocity distributions in all-sky Aitoff
projections are constructed. The simulations demonstrate that
$\Lambda$CDM is able to create galaxies with sufficient numbers of
anomalous velocity gas clouds consistent with the HVCs observed
within the MW, and that they are found within a galactocentric
radius of $150{\rm\,kpc}$. We also find that one of the galaxies has a
polar gas ring, with radius $30{\rm\,kpc}$, which appears as a large
structure of HVCs in the Aitoff projection. Such large structures
may share an origin similar to extended HVCs observed in the MW,
such as Complex C.
Subject headings:
methods: N-body simulations –
galaxies: formation –
galaxies: ISM –
Galaxy: evolution –
Galaxy: halo –
ISM: kinematics and dynamics
1. Introduction
High Velocity Clouds (HVCs) are neutral hydrogen (H i) gas clouds
with velocities inconsistent with galactic rotation (Wakker & van Woerden, 1997).
From our vantage point within the Galaxy, they appear to cover a large
portion of the sky relatively isotropically. HVCs do not appear to
possess a stellar component (e.g. Simon & Blitz, 2002; Siegel et al., 2005) and their
distances (and masses) are generally unknown. Direct distance
constraints have only been made for a select number of HVCs
(Wakker, 2001; Thom et al., 2006). There are still open questions as to whether HVCs
are local to the Milky Way (MW) or distributed throughout the Local
Group (LG); whether they are peculiar to the MW or are common in disk
galaxies; whether they are gravitationally bound or pressure confined;
whether they contain Dark Matter (DM); and their degree of
metal-enrichment.
Pisano et al. (2004) report that there are no HVC-like objects with H i
mass in excess of $4\times 10^{5}{\rm\,M}_{\sun}$ in three LG-analogs. They suggest that
if HVCs are a generic feature, they must be clustered within $160{\rm\,kpc}$
of the host galaxy, ruling out the original Blitz et al. (1999) model in
which HVCs are gas clouds distributed in filaments on Mpc-scales.
Thilker et al. (2004) and Westmeier et al. (2005) find 16 HVCs around M31, with H i
masses ranging from $10^{4}$ to $6\times 10^{5}{\rm\,M}_{\sun}$. Most of the HVCs are at a
projected distance $<15{\rm\,kpc}$ from the disk of M31. Some clouds
appear to be gravitationally dominated by either DM or as yet
undetected ionized gas. They also found two populations of clouds,
with some of the HVCs appearing to be part of a tidal stream, and
others appearing to be primordial DM dominated clouds, left over from
the formation of the LG. Finally, it has also been speculated that
cooling instabilities in Cold Dark Matter (CDM) halos could lead to
clouds within $150{\rm\,kpc}$ of MW type galaxies, with HVC-like properties
(Maller & Bullock, 2004).
In this Letter, we report that these mysterious H i clouds
also appear in MW-size disk galaxies formed in $\Lambda$CDM
cosmological simulations. We demonstrate that the simulated galaxies
show HVCs comparable in population to the observed ones. We also find
that large HVCs resembling Complex C appear in simulated galaxies.
Therefore, we conclude that HVCs appear to be a natural byproduct of
galaxy formation in the $\Lambda$CDM Universe. The next section
describes our methodology, including a brief description of the
simulations and how we “observe” the simulated disk galaxies. In
Section 3, we show our results and discuss our findings.
2. Methodology
We analyze two disk galaxy models found in cosmological simulations
that use the multi-mass technique to self-consistently model the
large-scale tidal field, while simulating the galactic disk at high
resolution. These simulations self-consistently include many of the
important physical processes in galaxy formation, such as
self-gravity, hydrodynamics, radiative cooling, star formation,
supernova feedback, and metal enrichment. The disk galaxies we analyze
correspond to “KGCD” and “AGCD” in Bailin et al. (2005), and we use
these names hereafter. Both simulations are carried out with our
galactic chemodynamics package GCD+ (Kawata & Gibson, 2003).
The details of these simulations are given in Bailin et al. (2005).
Table 1 summarizes the simulation parameters and the properties
of the galaxies. Column 1 is the galaxy name; Column 2, the virial
mass; Column 3, the virial radius; Column 4, the radial extent of the
gas disk, defined as the largest radius at which we find gas particles
in the disk plane. Columns 5 and 6 contain the mass of each gas and
DM particle in the highest resolution region, and Columns 7 and 8 are
the softening lengths in that region. The cosmological parameters are
presented in Columns 9–11. Note that the spatial resolution for the
gas is determined by the smoothing length of the smoothed particle
hydrodynamics scheme. The minimum smoothing length is set to be half
of the softening length of the gas particles (see Kawata & Gibson, 2003).
The smoothing length depends on the density, and the average smoothing
length in the simulated HVCs we focus on here is $\sim 5{\rm\,kpc}$.
Both galaxies are similar in size and mass to the MW, and have clear
gas and stellar disk components. Figure 1 shows edge-on and
face-on views of the projected gas density of each galaxy at the final
timestep. We use the simulation output at $z=0.1$ for KGCD, as
contamination from low-resolution particles in the simulated galaxy
starts to become significant at this redshift. We use the output at
$z=0$ for AGCD.
In order to compare the simulations with the HVC observations in the
MW, we set the position of the “observer” to an arbitrary position
on the disk plane of the simulated galaxies, with galactocentric
distance of $8.5{\rm\,kpc}$, and “observe” the H i column density and
velocity of the gas particles from that position. Figure 2
demonstrates the H i column density of HVCs using all-sky Aitoff
projections. Here, we define HVCs as consisting of gas particles whose
line-of-sight velocities, ${v}_{\rm LSR}$, deviate from the Local
Standard of Rest (LSR) by more than $100{\rm\,km\,s}^{-1}$; Figure 2 shows
the H i column density of these gas particles with $|{v}_{\rm LSR}|>100{\rm\,km\,s}^{-1}$. We set the rotation velocity of the LSR to $220{\rm\,km\,s}^{-1}$,
similar to that of the MW (Lockman et al., 2002). We can confirm that both
simulated galaxies have gas disks rotating at $\sim 220{\rm\,km\,s}^{-1}$. In
this paper, results are based only on those particles within two
virial radii ($r_{\mathrm{vir}}$; see Table 1). We have also confirmed that
the results are not sensitive to the cutoff radius chosen for column
densities $N({\rm HI}{})\gtrsim 10^{17}{\rm\,cm}^{-2}$. We display results for only
one chosen observer; however, we have confirmed the generality of these
results, with the sky coverage fraction typically changing by no more
than 20% for a given column density, as we change the observer’s
position and/or we analyze other outputs of the simulation near the
final redshift.
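The observation procedure above can be sketched in a few lines; the following is an illustrative reconstruction (the array layout, function name, and placement of the observer on the $+x$ axis are our assumptions, not the authors' code):

```python
import numpy as np

def v_lsr(pos, vel, r_obs=np.array([8.5, 0.0, 0.0]), v_rot=220.0):
    """Line-of-sight velocity of each gas particle relative to the LSR.

    pos, vel : (N, 3) arrays of galactocentric positions [kpc] and
               velocities [km/s].
    The observer sits on the disk plane at r_obs and rotates with the
    circular speed v_rot; for an observer on the +x axis the LSR motion
    points along +y (towards l = 90 deg).
    """
    v_obs = np.array([0.0, v_rot, 0.0])                 # LSR velocity, km/s
    los = pos - r_obs                                   # observer -> particle
    los_hat = los / np.linalg.norm(los, axis=1, keepdims=True)
    return np.einsum("ij,ij->i", vel - v_obs, los_hat)  # project on sight line

# A particle at rest seen towards l = 90 deg reflects the LSR's own motion:
v = v_lsr(np.array([[8.5, 10.0, 0.0]]), np.zeros((1, 3)))
print(v)  # [-220.]
```

The HVC selection above then amounts to keeping particles with `np.abs(v) > 100.0`.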
Our chemodynamical simulation follows the hydrogen and other elemental
abundances for each gas particle, but does not calculate the
ionization fraction of each species. Instead, the H i mass for each
gas particle is calculated assuming collisional ionization equilibrium
(CIE). The CIE neutral hydrogen fraction is estimated using Cloudy94
(Ferland et al., 1998). We multiply the fraction by the hydrogen abundance,
and obtain an H i mass fraction for each particle as a function of
its temperature. We ignore any effect from the background radiation
field.
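The per-particle H i bookkeeping can be sketched as a table interpolation in temperature; the neutral fractions below are rough illustrative placeholders, not the actual Cloudy94 CIE values:

```python
import numpy as np

# Illustrative CIE neutral-hydrogen fractions (NOT the Cloudy94 table):
# hydrogen is essentially neutral below ~10^4 K and collisionally
# ionized above ~10^4.5 K.  Columns: log10(T/K), neutral fraction.
_LOG_T = np.array([3.0, 4.0, 4.3, 4.7, 5.0, 6.0])
_F_HI = np.array([1.0, 0.99, 0.5, 1e-2, 1e-4, 1e-7])

def hi_mass(m_gas, x_h, temp):
    """H i mass of a gas particle with total mass m_gas, hydrogen mass
    fraction x_h and temperature temp [K], assuming CIE."""
    f_hi = np.interp(np.log10(temp), _LOG_T, _F_HI)
    return m_gas * x_h * f_hi

print(hi_mass(1.0e6, 0.75, 1.0e3))  # 750000.0 -- fully neutral below 10^4 K
```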
3. Results and discussion
Figure 2 demonstrates that both galaxies have a significant
number of HVCs, with column densities comparable to those observed by
Wakker (1991). KGCD displays several large linear HVCs at galactic
longitudes $l\sim 60\arcdeg$ and $l\sim 270\arcdeg$. These components
correspond to the outer ring structure seen at a galactocentric radius
of $\sim 30{\rm\,kpc}$ in Figure 1. We name this structure the
“polar gas ring”, and discuss it later.
To compare the HVC population of our simulations with the MW HVCs
quantitatively, Figure 3 shows the fraction of sky covered
by HVCs as a function of limiting column density for both simulations
and for the observations in Lockman et al. (2002). In this plot, we exclude
the area with low galactic latitude $|b|<20\arcdeg$, to avoid
contamination by the disk component (the sample of sightlines in
Lockman et al., 2002 was limited in a similar fashion). As is obvious
from Figure 2, KGCD has more high column density HVCs than AGCD,
and almost all of the sky is covered down to $10^{16}{\rm\,cm}^{-2}$. At a
fixed column density, the sky coverages of the two simulations bracket the
observed sky coverage in the MW. Note that we ignore any effects of
background radiation. It is expected that such a field would decrease
the population of HVCs with Galactocentric distance less than $10{\rm\,kpc}$ or $N({\rm HI}{})\lesssim 10^{19}{\rm\,cm}^{-2}$ (Maloney, 1993; Bland-Hawthorn & Maloney, 1997, 1999). Thus,
since, as mentioned below, the distances of the simulated HVCs are
greater than $10{\rm\,kpc}$, our estimated coverage fractions should be
interpreted as an upper limit for $N({\rm HI}{})\lesssim 10^{19}{\rm\,cm}^{-2}$, and a
lower limit for $N({\rm HI}{})\gtrsim 10^{19}{\rm\,cm}^{-2}$, where the highest
column density HVCs may not be fully resolved. With these caveats,
KGCD appears to have a sufficient population of HVCs to explain the
observed population of HVCs within the MW. We conclude that current
cosmological simulations can produce MW-size disk galaxies with
similar populations of HVCs to those in the MW. The differences
between KGCD and AGCD may demonstrate real differences in the
populations of HVCs among disk galaxies. However, to understand the
causes of such differences, we need a larger sample of high-resolution
simulated disk galaxies, the subject of a future study.
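The covered-sky statistic of Figure 3 amounts to a solid-angle-weighted fraction of high-latitude sightlines above a column-density threshold; a minimal sketch (the sampling layout and names are ours):

```python
import numpy as np

def sky_coverage(b_deg, log_n, log_n_lim, b_cut=20.0):
    """Fraction of the |b| > b_cut sky with log10 N_HI >= log_n_lim.

    b_deg : galactic latitudes [deg] of sightlines sampled on a
            regular (l, b) grid
    log_n : log10 of the HVC H i column density on each sightline
    Each grid cell subtends a solid angle proportional to cos(b).
    """
    keep = np.abs(b_deg) > b_cut
    w = np.cos(np.radians(b_deg[keep]))   # solid-angle weights
    covered = log_n[keep] >= log_n_lim
    return np.sum(w * covered) / np.sum(w)
```

Sweeping `log_n_lim` over, say, 16 to 20 then traces out a curve like those in Figure 3.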
We find that the velocities of the simulated HVCs, shown in Figure 4,
are distributed similarly to those observed by Putman et al. (2002).
Overall, the clouds between $l=0\arcdeg$ and $180\arcdeg$ have a negative
velocity, while the clouds between $l=180\arcdeg$ and $360\arcdeg$ have a
positive velocity. This is natural, since the LSR is moving towards
$l=90\arcdeg$. In KGCD, we find that there is a relatively large HVC
complex whose velocity is very high, $-450\lesssim{v}_{\rm LSR}\lesssim-300{\rm\,km\,s}^{-1}$. These very high velocity clouds (VHVCs) are located
between $-45\arcdeg\lesssim b\lesssim 15\arcdeg$ and $60\arcdeg\lesssim l\lesssim 120\arcdeg$, in the top panel of Figure 4. The MW also has
such VHVCs in the Anti-Center complex (Hulsbosch, 1978; Hulsbosch & Wakker, 1988). The
galactocentric distance to the VHVC in KGCD is $\sim 10$–$25{\rm\,kpc}$,
and we find that the cloud is a gas clump which has recently fallen
into the galaxy. We convert the H i mass of each particle into an
H i flux using $M_{\rm HI}=0.235D^{2}_{\rm kpc}S_{\rm tot}$
(Wakker & van Woerden, 1991), where $M_{\rm HI}$ is the H i mass in ${\rm M}_{\sun}$,
$S_{\rm tot}$ is the total H i flux in ${\rm Jy\,{\rm\,km\,s}^{-1}{}}$, and $D_{\rm kpc}$
is the distance from the observer to the particle in ${\rm kpc}$. We find the H i mass
of the VHVC complex is $1.2\times 10^{8}{\rm\,M}_{\sun}$, and the total H i flux (for
the chosen observer) is $2.5\times 10^{6}{\rm\,Jy\,{\rm\,km\,s}^{-1}{}}$. Since it is
infalling on a retrograde orbit, its velocity relative to the LSR
becomes very large depending on its location relative to the observer.
Therefore, the observed VHVCs may be explained by such infalling gas
clumps within the Galaxy. It is also worth noting that we do not find
any associated stellar or DM components in the simulated VHVCs.
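As a back-of-the-envelope consistency check of this conversion (our own arithmetic, not from the Letter), the quoted mass and flux of the VHVC complex imply an effective distance that falls inside the stated 10–25 kpc range:

```python
def hi_flux(m_hi, d_kpc):
    """Total H i flux S_tot [Jy km/s] from M_HI = 0.235 * D_kpc**2 * S_tot
    (Wakker & van Woerden 1991), with m_hi in solar masses."""
    return m_hi / (0.235 * d_kpc ** 2)

# Inverting the relation for M_HI = 1.2e8 Msun, S_tot = 2.5e6 Jy km/s:
d_eff = (1.2e8 / (0.235 * 2.5e6)) ** 0.5
print(round(d_eff, 1))  # 14.3 -- within the quoted 10-25 kpc distance range
```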
In the simulations, we are able to measure the distance of HVCs—data
which are not yet generally available for the real MW. In
Figure 5, we show a flux-weighted histogram of the
galactocentric distances, $r_{\rm\,MW}$, of the high velocity gas particles
with $|b|>20\arcdeg$. In both simulations, less than 1% of the H i
flux visible in Figure 2 originates from HVCs with $r_{\rm\,MW}>150{\rm\,kpc}$. This is consistent with the aforementioned limits
established by Pisano et al. (2004).
It is also clear that most of the emission in the KGCD simulation
results from the polar gas ring, whose radius is $\sim 30{\rm\,kpc}$. The
mass of this ring (including low velocity gas) is $3.6\times 10^{9}{\rm\,M}_{\sun}$, of
which $2.3\times 10^{9}{\rm\,M}_{\sun}$ is H i. Although there is no prominent polar
ring in AGCD, at larger distances, the flux distribution is found to
follow that of the KGCD simulation.
Figure 2 demonstrates that the polar gas ring forms a linear high
velocity structure found in all quadrants of the sky. Large HVC
components, such as Complex C and the Magellanic Stream, are well
known in our MW, and several authors (e.g. Haud, 1988) argue that
the MW is surrounded by a polar gas ring. Neglecting the Magellanic
Stream, which has likely originated from the infalling Magellanic
clouds (Gardiner & Noguchi, 1996; Yoshizawa & Noguchi, 2003; Connors et al., 2005), the largest HVC structure is
Complex C. We measure the mass of the HVC ring in one quadrant as
$\sim 4\times 10^{8}{\rm\,M}_{\sun}$. If we place Complex C at a galactocentric distance of
$30{\rm\,kpc}$ (only a lower limit of $8.8{\rm\,kpc}$ has been obtained for its
galactocentric distance; van Woerden et al., 1999), its mass would be $\sim\ 1.5\times 10^{7}{\rm\,M}_{\sun}$. Nevertheless, this indicates that there is enough
H i gas to create such large observed HVCs in the simulated galaxy.
In KGCD, the polar gas ring is a relatively recently formed structure
that begins forming at redshift $z\sim 0.2$, and prior to this time,
the associated gas particles are found flowing inward along
filamentary structures. Thus, this simulated ring structure
demonstrates that current $\Lambda$CDM numerical simulations can
explain the existence of such large HVCs like Complex C, as recently
accreted gas which rotates in a near-polar orbit. On the other hand,
AGCD does not show such a prominent large structure. This may
indicate that such HVC structures are not common in all disk galaxies.
Although this study focuses on only two simulated galaxies due to the
limit of our computational resources, more samples of high-resolution
simulated galaxies would elucidate how common such large structures
are, and what kind of evolution history is required to make such HVCs.
We also analyze the metallicity of the simulated HVCs. In KGCD, we
find that the prominent HVCs have H i flux weighted metallicities of
$-4\lesssim\log(Z/{\rm\,Z}_{\sun})\lesssim-2$ with a flux weighted mean of
$\log(Z/{\rm\,Z}_{\sun})\sim-2.4$. This is much lower than the metallicities
of the observed HVCs in the MW, $-1<\log(Z/{\rm\,Z}_{\sun})<0$ (Wakker, 2001).
The polar gas ring seen in KGCD has a metallicity of $-3\lesssim\log(Z/{\rm\,Z}_{\sun})\lesssim 0.5$ with a mean of $\log(Z/{\rm\,Z}_{\sun})\sim-1.7$,
which is lower than the observed metallicity of Complex C,
$\log(Z/{\rm\,Z}_{\sun})\sim-1$ (e.g. Gibson et al., 2001; Richter et al., 2001). Thus, our
numerical simulations seem to underestimate the metallicity of the
later infalling gas clouds. This is likely because we adopt a weak
supernova feedback model in our simulations. If we use a model with
strong feedback, more enriched gas is blown out from the system at
high redshift, which can enrich the inter-galactic medium which then
falls into galaxies at a later epoch.
The majority of the simulated HVCs, including the polar gas ring, do
not have any obviously associated stellar or DM components, which is
consistent with current observations (Simon & Blitz, 2002; Siegel et al., 2005). However, a
few compact HVCs are found to be associated with stellar components.
It would be interesting to estimate how bright they are, and if they
are detectable within the current observational limits.
Unfortunately, the resolution of the current simulations is too poor
to estimate their luminosity, and it is also likely that our simulations
produce too many stars because of the weak supernova
feedback we assume.
The clouds in the simulation may be destroyed by effects that our
simulations are not able to reproduce accurately. The resolution and
nature of the SPH simulations make it difficult to resolve shocks
between the HVCs and the MW halo; however simple analytic estimates
show that there will not be strong shocking of the HVCs due to the low
density of the halo. Indebetouw & Shull (2004), using more detailed simulations,
argue that the Mach numbers of the HVCs are only $M_{s}\sim 1.2$–$1.5$,
marginally sufficient to form shocks, heating only the leading $\sim 0.1{\rm\,pc}$ of the HVC. Quilis & Moore (2001) show that the lifetime of such HVCs
in the presence of shocks is $\sim 1{\rm\,Gyr}$, long enough for our HVCs to
survive. Maller & Bullock (2004) discuss various physical processes, including
conduction, evaporation, ram-pressure drag, Jeans instabilities, and
Kelvin–Helmholtz instabilities that limit the mass ranges of stable
clouds. The most massive HVC apparent in our simulations is found to
be $1.5\times 10^{8}{\rm\,M}_{\sun}$, while the mass
resolution, and hence the smallest resolvable HVC, in our simulations is
$\sim 10^{6}{\rm\,M}_{\sun}$; clouds of both extremes are clearly within the
stable mass ranges summarized in fig. 6 of Maller & Bullock (2004). Therefore,
the simulated clouds are also expected to be stable.
We report that HVCs seem to be a natural occurrence in a $\Lambda$CDM
Universe. We emphasize that the galaxies that result from our
simulations were not created specifically to reproduce the MW
exactly—they were selected for resimulation at higher resolution on
the basis of being disk-like $L_{\star}$ galaxies. However, we have
serendipitously discovered that simulated galaxies that are similar in
size to the MW naturally contain H i gas in the vicinity of the disk
that is similar to the anomalous velocity features seen in the MW.
TC thanks Stuart Gill for his PView visualization tool source code
and Chris Power for useful advice. We acknowledge the Astronomical
Data Analysis Center of the National Astronomical Observatory, Japan
(project ID: wmn14a), the Institute of Space and Astronautical Science
of Japan Aerospace Exploration Agency, and the Australian and
Victorian Partnerships for Advanced Computing, where the numerical
computations for this paper were performed. DK acknowledges the
financial support of the JSPS, through the Postdoctoral Fellowship for
research abroad. The financial support of the Australian Research
Council and the Particle Physics and Astronomy Research Council of the
United Kingdom is gratefully acknowledged.
References
Bailin et al. (2005)
Bailin, J., et al. 2005, ApJ, 627, L17
Bland-Hawthorn & Maloney (1997)
Bland-Hawthorn, J., & Maloney, P. R. 1997, Publications of the
Astronomical Society of Australia, 14, 59
Bland-Hawthorn & Maloney (1999)
—. 1999, ApJ, 510, L33
Blitz et al. (1999)
Blitz, L., Spergel, D. N., Teuben, P. J., Hartmann, D., & Burton,
W. B. 1999, ApJ, 514, 818
Connors et al. (2005)
Connors, T. W., Kawata, D., & Gibson, B. K. 2005, arXiv:astro-ph/0508390
Ferland et al. (1998)
Ferland, G. J., Korista, K. T., Verner, D. A., Ferguson, J. W.,
Kingdon, J. B., & Verner, E. M. 1998, PASP, 110, 761
Gardiner & Noguchi (1996)
Gardiner, L. T., & Noguchi, M. 1996, MNRAS, 278, 191
Gibson et al. (2001)
Gibson, B. K., Giroux, M. L., Penton, S. V., Stocke, J. T., Shull,
J. M., & Tumlinson, J. 2001, AJ, 122, 3280
Haud (1988)
Haud, U. 1988, A&A, 198, 125
Hulsbosch (1978)
Hulsbosch, A. N. M. 1978, A&A, 66, L5
Hulsbosch & Wakker (1988)
Hulsbosch, A. N. M., & Wakker, B. P. 1988, A&AS, 75, 191
Indebetouw & Shull (2004)
Indebetouw, R., & Shull, J. M. 2004, ApJ, 605, 205
Kawata & Gibson (2003)
Kawata, D., & Gibson, B. K. 2003, MNRAS, 340, 908
Lockman et al. (2002)
Lockman, F. J., Murphy, E. M., Petty-Powell, S., & Urick, V. J. 2002,
ApJS, 140, 331
Maller & Bullock (2004)
Maller, A. H., & Bullock, J. S. 2004, MNRAS, 355, 694
Maloney (1993)
Maloney, P. 1993, ApJ, 414, 41
Pisano et al. (2004)
Pisano, D. J., Barnes, D. G., Gibson, B. K., Staveley-Smith, L.,
Freeman, K. C., & Kilborn, V. A. 2004, ApJ, 610, L17
Putman et al. (2002)
Putman, M. E., et al. 2002, AJ, 123, 873
Quilis & Moore (2001)
Quilis, V., & Moore, B. 2001, ApJ, 555, L95
Richter et al. (2001)
Richter, P., et al. 2001, ApJ, 559, 318
Siegel et al. (2005)
Siegel, M. H., Majewski, S. R., Gallart, C., Sohn, S. T., Kunkel,
W. E., & Braun, R. 2005, ApJ, 623, 181
Simon & Blitz (2002)
Simon, J. D., & Blitz, L. 2002, ApJ, 574, 726
Thilker et al. (2004)
Thilker, D. A., Braun, R., Walterbos, R. A. M., Corbelli, E.,
Lockman, F. J., Murphy, E., & Maddalena, R. 2004, ApJ, 601, L39
Thom et al. (2006)
Thom, C., Putman, M. E., Gibson, B. K., Christlieb, N., Flynn, C.,
Beers, T. C., Wilhelm, R., & Lee, Y. S. 2006, ApJ, 638, L97
van Woerden et al. (1999)
van Woerden, H., Peletier, R. F., Schwarz, U. J., Wakker, B. P., &
Kalberla, P. M. W. 1999, in ASP Conf. Ser. 165: The Third Stromlo
Symposium: The Galactic Halo, 469–+
Wakker (1991)
Wakker, B. P. 1991, A&A, 250, 499
Wakker (2001)
—. 2001, ApJS, 136, 463
Wakker & van Woerden (1991)
Wakker, B. P., & van Woerden, H. 1991, A&A, 250, 509
Wakker & van Woerden (1997)
—. 1997, ARA&A, 35, 217
Wakker et al. (2003)
Wakker, B. P., et al. 2003, ApJS, 146, 1
Westmeier et al. (2005)
Westmeier, T., Braun, R., & Thilker, D. 2005, A&A, 436, 101
Yoshizawa & Noguchi (2003)
Yoshizawa, A. M., & Noguchi, M. 2003, MNRAS, 339, 1135
Uniqueness of weighted Sobolev spaces with weakly differentiable weights
Jonas M. Tölle
Institut für Mathematik, Technische Universität Berlin (MA 7-5)
Straße des 17. Juni 136, 10623 Berlin, Germany
[email protected]
Abstract.
We prove that weakly differentiable weights $w$ which, together with their reciprocals, satisfy certain local integrability conditions, admit a
unique associated first-order $p$-Sobolev space, that is
$$H^{1,p}(\mathbbm{R}^{d},w\,dx)=V^{1,p}(\mathbbm{R}^{d},w\,dx)=W^{1,p}(\mathbbm%
{R}^{d},w\,dx),$$
where $d\in\mathbbm{N}$ and $p\in[1,\infty)$.
If $w$ admits a (weak) logarithmic gradient $\nabla w/w$ which is in $L^{q}_{\textup{loc}}(w\,dx;\mathbbm{R}^{d})$, $q=p/(p-1)$, we propose an alternative definition of the weighted $p$-Sobolev
space based on an integration by parts formula involving $\nabla w/w$.
We prove that weights of the form $\exp(-\beta\lvert\cdot\rvert^{q}-W-V)$ are $p$-admissible, in particular, satisfy a
Poincaré inequality, where $\beta\in(0,\infty)$, $W$, $V$ are convex and bounded below such that $\lvert\nabla W\rvert$ satisfies a growth condition (depending on $\beta$ and $q$) and $V$ is bounded.
We apply the uniqueness result to weights of this type. The associated nonlinear degenerate evolution equation is also discussed.
Key words and phrases:
$H=W$, weighted Sobolev spaces, smooth approximation, density of smooth functions, Poincaré inequality, $p$-Laplace operator, nonlinear Kolmogorov operator, weighted $p$-Laplacian evolution, nonlinear degenerate parabolic equation.
2000 Mathematics Subject Classification: 46E35; 35J92, 35K65
The research was partly supported by the German Science Foundation (DFG), IRTG 1132, “Stochastics and Real World Models” and the Collaborative Research Center 701 (SFB 701), “Spectral Structures and Topological Methods in Mathematics”, Bielefeld.
1. Introduction
Consider the following quasi-linear PDE in $\mathbbm{R}^{d}$ (in the weak sense)
(1.1)
$$-\operatorname{div}\left[w\lvert\nabla u\rvert^{p-2}\nabla u\right]=fw,$$
(here $1<p<\infty$), where $w\geq 0$ is a locally integrable function (the weight) and $f$ is sufficiently regular
(e.g. $f\in L^{q}(w\,dx)$; see below).
Let $\mu(dx):=w\,dx$, $q:=p/(p-1)$.
The nonlinear weighted $p$-Laplace operator involved in (1.1) can be identified with the Gâteaux derivative of the
convex functional
(1.2)
$$E_{0}^{\mu}:u\mapsto\frac{1}{p}\int\lvert\nabla u\rvert^{p}\,d\mu.$$
By methods well known
in calculus of variations, solutions to (1.1) are characterized by minimizers of the convex functional
(1.3)
$$E_{f}^{\mu}:u\mapsto E_{0}^{\mu}(u)-\int fu\,d\mu.$$
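For the reader's convenience, here is the standard first-variation computation behind this characterization, sketched for smooth $u$ and test functions $\eta\in C_{0}^{\infty}$:
$$\frac{d}{dt}\Big|_{t=0}E_{f}^{\mu}(u+t\eta)=\int\lvert\nabla u\rvert^{p-2}\langle\nabla u,\nabla\eta\rangle\,d\mu-\int f\eta\,d\mu\,.$$
Setting this to zero for all $\eta$ yields the weak formulation of (1.1).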
Of course, the minimizer obtained depends on the energy space chosen for the
functional (1.2). It is natural to demand that the space of test functions $C_{0}^{\infty}$ is included in this energy space (where we take the subscript zero to denote functions with compact support rather than functions vanishing at infinity).
Therefore, let $H^{1,p}(\mu)$ be the completion of $C_{0}^{\infty}$ w.r.t. the Sobolev norm
$$\left\lVert\cdot\right\rVert_{1,p,\mu}:=\left(\left\lVert\nabla\cdot\right%
\rVert_{L^{p}(\mu;\mathbbm{R}^{d})}^{p}+\left\lVert\cdot\right\rVert_{L^{p}(%
\mu)}^{p}\right)^{1/p}.$$
$H^{1,p}(\mu)$ is referred to as the strong weighted Sobolev space.
Of course, in order to guarantee that $H^{1,p}(\mu)$ will be a space of functions
we need a “closability condition”, see equation (2.1) below.
Let $V$ be a weighted Sobolev space such that
•
$V\subset L^{p}(\mu)$ densely and continuously,
•
$V$ admits a linear gradient-operator $\nabla^{V}:V\to L^{p}(\mu;\mathbbm{R}^{d})$ that respects $\mu$-classes,
•
$V$ is complete w.r.t. the Sobolev norm,
•
$C_{0}^{\infty}\subset V$ and $\nabla u=\nabla^{V}u$ $\mu$-a.e. for $u\in C_{0}^{\infty}$, and hence $H^{1,p}(\mu)\subset V$.
In the case that
$$H^{1,p}(\mu)\subsetneqq V,$$
the so-called Lavrent’ev phenomenon, first described in [31],
occurs if
$$\min_{u\in V}E_{f}(u)<\min_{u\in H^{1,p}(\mu)}E_{f}(u).$$
This leads to different variational solutions to equation (1.1), as discussed in detail in [38].
In order to prevent this possibility, we are concerned with the problem
$$H^{1,p}(\mu)=V,$$
which is equivalent to the density of $C_{0}^{\infty}$ in $V$ and therefore is called
“smooth approximation”. Classically, if $w\equiv 1$, the solution to this problem is
known as the Meyers-Serrin theorem [34] and briefly denoted by $H=W$.
If $p=2$, the problem is also known as “Markov uniqueness”, see [5, 6, 13, 40, 41].
$H=W$ for weighted Sobolev spaces ($p\not=2$) has been studied e.g. in [12, 25, 46].
$H=W$ is in particular useful for identifying a Mosco limit [27, 44].
We are going to investigate two types of weighted Sobolev spaces substituting $V$.
Let $\varphi:=w^{1/p}$. Consider the following condition for $p\in[1,\infty)$:
(Diff)
$$\varphi\in W^{1,p}_{\textup{loc}}(dx),\quad\beta:=p\frac{\nabla\varphi}{\varphi}\in L^{q}_{\textup{loc}}(\mu;\mathbbm{R}^{d}).$$
Assuming (Diff), we shall define the Sobolev space $V^{1,p}(\mu)$ (which
extends $H^{1,p}(\mu)$) by saying
that $f\in V^{1,p}(\mu)$ if
$f\in L^{p}(\mu)$ and
there is a gradient
$${\nabla}^{\mu}f:=({\partial}^{\mu}_{1}f,\ldots,{\partial}^{\mu}_{d}f)\in L^{p}%
(\mu;\mathbbm{R}^{d})$$
such that the integration by parts formula
(1.3)
$$\int{\partial}^{\mu}_{i}f\eta\,d\mu=-\int f\partial_{i}\eta\,d\mu-\int f\eta%
\beta_{i}\,d\mu$$
holds for all $\eta\in C_{0}^{\infty}(\mathbbm{R}^{d})$ and all $i\in\{1,\ldots,d\}$.
We point out that, in general, we do not expect $f\in L^{1}_{\textup{loc}}(dx)$! Therefore we cannot use distributional
derivatives here. Formula (1.3) is based on the weak derivative of $fw$ rather
than on that of $f$, see Section 2.1
for details.
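To motivate formula (1.3), note that for smooth $f$ and a smooth, strictly positive weight $w=\varphi^{p}$, classical integration by parts gives
$$\int\partial_{i}f\,\eta\,d\mu=\int\partial_{i}f\,\eta\,w\,dx=-\int f\,\partial_{i}\eta\,w\,dx-\int f\,\eta\,\frac{\partial_{i}w}{w}\,w\,dx\,,$$
and $\partial_{i}w/w=p\,\partial_{i}\varphi/\varphi=\beta_{i}$, so that (1.3) holds with ${\partial}^{\mu}_{i}f=\partial_{i}f$. In this sense ${\nabla}^{\mu}$ extends the classical gradient.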
For $p=2$, this framework has been carried out by Albeverio et al.
in [2, 3, 4, 6].
Assuming (Diff),
equation (1.1) has the following heuristic reformulation
$$-\operatorname{div}\left[\lvert\nabla u\rvert^{p-2}\nabla u\right]-\left%
\langle\lvert\nabla u\rvert^{p-2}\nabla u,{\beta}\right\rangle=f,$$
which suggests that (1.1) can be regarded as a first-order perturbation of
the unweighted $p$-Laplace equation. In these terms, (1.1) mimics a nonlinear
Kolmogorov operator.
Let us state our main result.
Theorem 1.1.
Assume (Diff). If $p=1$, assume additionally that
(1.4)
$$\nabla\varphi\in L^{\infty}_{\textup{loc}}(dx;\mathbbm{R}^{d}).$$
Then $C_{0}^{\infty}(\mathbbm{R}^{d})$ is dense in $V^{1,p}(\mu)$, and, in particular,
$$H^{1,p}(\mu)=V^{1,p}(\mu).$$
For $p=2$, Theorem 1.1 was proved by Röckner and Zhang [40, 41] using methods from
the theory of Dirichlet forms depending strongly on the $L^{2}$-framework.
For weights of the type $\mu(dx)=Z^{-1}e^{-U(x)}\,dx$, $Z:=\int e^{-U(x)}\,dx$, Lorenzi
and Bertoldi proved Theorem 1.1 under much stronger differentiability assumptions,
see [32, Theorem 8.1.26]. We also refer to Chapter 2.6 of Bogachev’s book [10] for related results.
Our proof is carried out in Section 3 and
inspired by the work of Patrick Cattiaux and
Myriam Fradon [11]. In contrast to their proof, in which Fourier transforms are used (relying on the $L^{2}$-framework),
we shall use maximal functions in order to obtain the fundamental uniform estimate.
Of course, formula (1.3) proves highly useful in the proof.
Consider the following well-known condition, (Reg), for $p\in[1,\infty)$:
$$\left.\begin{aligned} \displaystyle\varphi^{-q}\in L^{1}_{\textup{loc}}(dx),&%
\displaystyle\quad\text{for $p\in(1,\infty)$}\\
\displaystyle\varphi^{-1}\in L^{\infty}_{\textup{loc}}(dx),&\displaystyle\quad%
\text{for $p=1$}\end{aligned}\right\}.$$
Condition (Reg) (“regular”) implies that each Sobolev function is a regular
(Schwartz) distribution, see Section 4.
Let D be the gradient in the sense of Schwartz distributions. Assuming (Reg), we define
$$W^{1,p}(\mu):=\left\{u\in L^{p}(\mu)\;|\;\textup{D}u\in L^{p}(\mu;\mathbbm{R}^%
{d})\right\},$$
see e.g. [29]. We shall refer to $W^{1,p}(\mu)$ as the Kufner–Sobolev space, after [26], and remark that its definition is the standard one in the literature on weighted Sobolev spaces.
It is well known that $H^{1,p}(\mu)=W^{1,p}(\mu)$ is implied by the famous $p$-Muckenhoupt condition, due to [36], in symbols $w\in A_{p}$, $1<p<\infty$, where $A_{p}$ is defined as follows:
$w=\varphi^{p}\in A_{p}$ if and only if there is a global constant $K>0$ such that
(1.4)
$$\left(\fint_{B}\varphi^{p}\,dx\right)\cdot\left(\fint_{B}\varphi^{-q}\,dx%
\right)^{p-1}\leq K,$$
for all balls $B\subset\mathbbm{R}^{d}$. See Proposition 4.3 below for the proof.
We refer to the lecture notes by Bengt Ove Turesson [45] for a detailed discussion of the
class $A_{p}$. See also [23, Ch. 15].
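The Muckenhoupt product above is easy to probe numerically. The following sketch (a heuristic illustration only, not part of the proofs; the power weight $w(x)=\lvert x\rvert^{1/2}$, the choice $p=2$, and the midpoint quadrature are our own choices) approximates $\big(\fint_{B}\varphi^{p}\,dx\big)\big(\fint_{B}\varphi^{-q}\,dx\big)^{p-1}$ on a few intervals $B\subset\mathbbm{R}$ and observes that it stays bounded, consistent with $w\in A_{2}$.

```python
def avg(f, a, b, n=50000):
    # midpoint-rule approximation of the average (1/|B|) * int_B f dx
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h / (b - a)

def ap_product(a_exp, p, center, radius):
    # (avg_B phi^p) * (avg_B phi^{-q})^(p-1) for the weight w(x) = |x|^a_exp,
    # where phi = w^{1/p}, so phi^{-q} = |x|^{-a_exp/(p-1)}
    phi_p = lambda x: abs(x) ** a_exp
    phi_mq = lambda x: abs(x) ** (-a_exp / (p - 1))
    lo, hi = center - radius, center + radius
    return avg(phi_p, lo, hi) * avg(phi_mq, lo, hi) ** (p - 1)

# w(x) = |x|^{1/2} on R lies in A_2: the product stays bounded over balls
vals = [ap_product(0.5, 2.0, c, r) for c in (0.0, 1.0) for r in (0.5, 1.0, 2.0)]
print(vals)
```

For power weights $\lvert x\rvert^{a}$ the product over balls centered at the origin is in fact independent of the radius, which the computed values reflect.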
As a consequence of Theorem 1.1, we obtain the following result:
Corollary 1.2.
Assume (Reg), (Diff), and if $p=1$ assume also that (1.4) holds. Then
$$H^{1,p}(\mu)=V^{1,p}(\mu)=W^{1,p}(\mu).$$
We shall give a precise proof in Section 4.
As an application, we investigate the evolution problem related to PDE (1.1) in
Section 5. In particular, we prove existence and uniqueness of solutions to the following (global)
evolution equation in $L^{2}(\mu)$ for $p\geq 2$:
(1.5)
$$\left.\begin{aligned} \displaystyle\partial_{t}u&\displaystyle=\frac{1}{w}%
\operatorname{div}\left[w\lvert\nabla u\rvert^{p-2}\nabla u\right],&&%
\displaystyle\quad\text{in}\;\;(0,T)\times\mathbbm{R}^{d},\\
\displaystyle u(\cdot,0)&\displaystyle=u_{0}\in L^{2}(\mu),&&\displaystyle%
\quad\text{in}\;\;\mathbbm{R}^{d}.\end{aligned}\right\}$$
See [7] for an example of the (local and nonlocal) weighted evolution problem with Muckenhoupt weights. We also refer to the work of Hauer and Rhandi [20], who prove a non-existence result for the global weighted evolution problem.
An application related to nonlinear potential theory and the elliptic equation (1.1)
is given by
the notion of $p$-admissibility, as introduced by Heinonen, Kilpeläinen and Martio in [23]
(see Definition 6.1 below).
We say that a function $F:\mathbbm{R}^{d}\to\mathbbm{R}$ has property (D) if there are constants $c_{1}\geq 1$, $c_{2}\in\mathbbm{R}$ such that
$F(2x)\leq c_{1}F(x)+c_{2}$ for all $x\in\mathbbm{R}^{d}$. If $F$ is concave, then midpoint concavity, $F(x)=F\big(\tfrac{1}{2}(2x)+\tfrac{1}{2}\cdot 0\big)\geq\tfrac{1}{2}F(2x)+\tfrac{1}{2}F(0)$, shows that $F$ has property (D) with $c_{1}=2$ and $c_{2}=-F(0)$. With the help of the ideas of
Hebisch and Zegarliński [21] we are able to prove:
Theorem 1.3.
Let $1<p<\infty$, $q:=p/(p-1)$. Let $\beta\in(0,\infty)$, let $W\in C^{1}(\mathbbm{R}^{d})$ be bounded below
and suppose that
$$\lvert\nabla W(x)\rvert\leq\delta\lvert x\rvert^{q-1}+\gamma$$
for some $\delta<\beta q$ and $\gamma\in(0,\infty)$. Suppose also that $-W$ has property (D).
Let $V:\mathbbm{R}^{d}\to\mathbbm{R}$ be a measurable function such that $\operatorname{osc}V:=\sup V-\inf V<\infty$ and $-V$ has property (D).
Then
$$x\mapsto\exp(-\beta\lvert x\rvert^{q}-W(x)-V(x))$$
is a $p$-admissible weight. If, additionally, $V\in W^{1,\infty}_{\textup{loc}}(dx)$, this weight satisfies the
conditions of Corollary 1.2.
Remark 1.4.
If $V$ is convex, then $V$ is locally Lipschitz by [39, Theorem 10.4] and hence $V\in W^{1,\infty}_{\textup{loc}}(dx)$ by
[14, §4.2.3, Theorem 5].
Remark 1.5.
If $\operatorname{osc}V<\infty$, then the weight $\exp(-V)$ obviously satisfies Muckenhoupt's condition $\exp(-V)\in A_{p}$
for all $1<p<\infty$.
As an application of the main result, Theorem 1.1, the weighted Poincaré inequality
$$\int\left\lvert f-\frac{\int f\,w\,dx}{\int w\,dx}\right\rvert^{p}\,w\,dx\leq c\int\lvert\nabla f\rvert^{p}\,w\,dx$$
for the weight
$w:=\exp(-\beta\lvert\cdot\rvert^{q}-W-V)$ also holds for $f\in V^{1,p}(w\,dx)$ and for $f\in W^{1,p}(w\,dx)$. We also point out that, by Kinderlehrer and Stampacchia [26], the stationary problem (1.1) can be solved for $p$-admissible
weights, see [23, Ch. 17, Appendix I].
Notation
Equip $\mathbbm{R}^{d}$ with the Euclidean norm $\lvert\cdot\rvert$ and the Euclidean scalar product $\left\langle\cdot,{\cdot}\right\rangle$.
For $i\in\{1,\ldots,d\}$, denote by $e_{i}$ the $i$-th unit vector in $\mathbbm{R}^{d}$. For $\mathbbm{R}^{d}$-valued functions $v$
we indicate the projection on the $i$-th coordinate by $v_{i}$.
We denote the (weak or strong)
partial derivative $\frac{\partial}{\partial x_{i}}$ by $\partial_{i}$. Also $\nabla:=(\partial_{1},\ldots,\partial_{d})$.
Denote by $C^{\infty}=C^{\infty}(\mathbbm{R}^{d})$ the space of smooth (infinitely differentiable) functions on $\mathbbm{R}^{d}$, and by $C_{0}^{\infty}=C_{0}^{\infty}(\mathbbm{R}^{d})$ the subspace of those with compact support.
We denote the standard Sobolev spaces (local Sobolev spaces resp.) on $\mathbbm{R}^{d}$ by $W^{1,p}(dx)$ and $W^{1,p}_{\textup{loc}}(dx)$,
with $1\leq p\leq\infty$.
For $x\in\mathbbm{R}^{d}$, let
$$\operatorname{sign}(x):=\begin{cases}\dfrac{x}{\lvert x\rvert},&\;\;\text{if}%
\;\;x\not=0,\\
0,&\;\;\text{if}\;\;x=0.\end{cases}$$
Denote by D the gradient in the sense of Schwartz distributions. For $x\in\mathbbm{R}^{d}$ and $\rho>0$, set
$B(x,\rho):=\big{\{}y\in\mathbbm{R}^{d}\;\big{|}\;\lvert x-y\rvert<\rho\big{\}}$ and
$\overline{B}(x,\rho):=\big{\{}y\in\mathbbm{R}^{d}\;\big{|}\;\lvert x-y\rvert%
\leq\rho\big{\}}$.
By a standard mollifier we mean a family of functions $\{\eta_{\varepsilon}\}_{{\varepsilon}>0}$ such that
$$\eta_{\varepsilon}(x):=\frac{1}{{\varepsilon}^{d}}\eta\left(\frac{x}{{%
\varepsilon}}\right),$$
where $\eta\in C_{0}^{\infty}(\mathbbm{R}^{d})$ with $\eta\geq 0$,
$\eta(x)=\eta(\lvert x\rvert)$, $\operatorname{supp}\eta\subset\overline{B}(0,1)$ and $\int\eta\,dx=1$.
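In one dimension this construction can be written out explicitly. The sketch below (our own minimal implementation; the normalizing constant $Z$ is computed by quadrature rather than in closed form) builds $\eta_{\varepsilon}$ from the usual bump function and convolves it with a Lipschitz function.

```python
import math

def eta(x):
    # standard bump: smooth, nonnegative, radial, supported in [-1, 1]
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

# normalize so that the integral of eta over R equals 1 (d = 1 here)
N = 200000
H = 2.0 / N
Z = sum(eta(-1 + (k + 0.5) * H) for k in range(N)) * H

def eta_eps(x, eps):
    # eta_eps(x) = eps^{-d} eta(x/eps) with d = 1
    return eta(x / eps) / (Z * eps)

def mollify(f, x, eps, n=2000):
    # (eta_eps * f)(x) = int f(x - y) eta_eps(y) dy, integrated over [-eps, eps]
    h = 2.0 * eps / n
    total = 0.0
    for k in range(n):
        y = -eps + (k + 0.5) * h
        total += f(x - y) * eta_eps(y, eps) * h
    return total

f = lambda x: abs(x)              # Lipschitz, not differentiable at 0
print(mollify(f, 0.3, 0.01))      # close to f(0.3) = 0.3
```

Since $\eta_{\varepsilon}\ast f$ is smooth with support in $\operatorname{supp}f+{\varepsilon}\overline{B}(0,1)$, this is the approximation device used in the proof of Theorem 1.1.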
2. Weighted Sobolev spaces
Throughout what follows, fix $1\leq p<\infty$ and $d\in\{1,2,\ldots\}$. Set $q:=p/(p-1)$ (with the convention $q:=\infty$ for $p=1$).
Definition 2.1.
For an a.e.-nonnegative measurable function $f$ on $\mathbbm{R}^{d}$, we define the regular set
$$R(f):=\left\{y\in\mathbbm{R}^{d}\;\bigg{|}\;\int_{B(y,{\varepsilon})}\frac{1}{%
f(x)}\,dx<\infty\;\;\text{for some}\;{\varepsilon}>0\right\},$$
where we adopt the convention that $1/0:=+\infty$ and $1/+\infty:=0$.
Define also
$$\widehat{R}(f):=\left\{y\in\mathbbm{R}^{d}\;\bigg{|}\;\operatorname{ess\;sup}%
\displaylimits_{x\in B(y,{\varepsilon})}\frac{1}{f(x)}<\infty\;\;\text{for %
some}\;{\varepsilon}>0\right\}.$$
Obviously, $R(f)$ is the largest open set $O\subset\mathbbm{R}^{d}$, such that $1/f\in L^{1}_{\textup{loc}}(O)$.
Also, it always holds that $f>0$ $dx$-a.e. on $R(f)$. $\widehat{R}(f)$ is the largest open set $\widehat{O}\subset\mathbbm{R}^{d}$
such that $1/f\in L^{\infty}_{\textup{loc}}(\widehat{O})$. By abuse of notation, we denote the regular set for functions $\psi:\mathbb{R}\to\mathbb{R}$ by the same symbol.
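The regular set can be probed numerically in simple cases. In the sketch below (a heuristic illustration with our own choices: $f(x)=x^{2}$ on $\mathbbm{R}$ and a small excision parameter $\delta$ around the singular point), the cutoff integrals of $1/f$ stabilize near $y=1$ but grow like $2/\delta$ near $y=0$, reflecting $R(f)=\mathbbm{R}\setminus\{0\}$.

```python
def cutoff_integral(f, y, eps, delta, n=200000):
    # integral of 1/f over B(y, eps), with the singular point 0 excised at scale delta
    h = 2.0 * eps / n
    total = 0.0
    for k in range(n):
        x = y - eps + (k + 0.5) * h
        if abs(x) >= delta:
            total += h / f(x)
    return total

f = lambda x: x * x
# y = 1: the cutoff integrals stabilize, so 1 lies in R(f)
inside = [cutoff_integral(f, 1.0, 0.5, d) for d in (0.1, 0.01, 0.001)]
# y = 0: the cutoff integrals grow like 2/delta, so 0 does not lie in R(f)
outside = [cutoff_integral(f, 0.0, 0.5, d) for d in (0.1, 0.01, 0.001)]
print(inside, outside)
```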
Fix a weight $w$, that is, a measurable function $w\in L^{1}_{\textup{loc}}(\mathbbm{R}^{d})$ with $w\geq 0$ a.e. Set $\mu(dx):=w\,dx$.
Following the notation of [40], we set $\varphi:=w^{1/p}$.
Definition 2.2.
Consider the following conditions:
(Ham1)
For each $i\in\{1,\ldots,d\}$ and for ($(d-1)$-dimensional) Lebesgue a.a. $y\in\{e_{i}\}^{\perp}$ it holds that the map
$\psi_{y}:t\mapsto\varphi(y+te_{i})$ satisfies $\psi_{y}^{p}(t)=0$ for $dt$-a.e. $t\in\mathbbm{R}\setminus R(\psi_{y}^{q})$ if $p\in(1,\infty)$ and satisfies $\psi_{y}(t)=0$ for $dt$-a.e. $t\in\mathbbm{R}\setminus\widehat{R}(\psi_{y})$ if $p=1$.
(Ham2)
$\varphi^{p}(x)=0$ for $dx$-a.e. $x\in\mathbbm{R}^{d}\setminus R(\varphi^{q})$ if $p\in(1,\infty)$ and $\varphi(x)=0$ for
$dx$-a.e. $x\in\mathbbm{R}^{d}\setminus\widehat{R}(\varphi)$ if $p=1$.
Both (Ham1), (Ham2)
are called Hamza’s condition (“on rays” resp. “on $\mathbbm{R}^{d}$”), due to [19].
It is straightforward that the following implications hold
$$(\textbf{Reg})\quad\Longrightarrow\quad(\textbf{Ham2})\quad\Longrightarrow%
\quad(\textbf{Ham1}).$$
Also, if (Reg) holds, $\mu$ and $dx$ are equivalent measures.
Remark 2.3.
Suppose that for $dx$-a.a. $x\in\{\varphi^{p}>0\}$,
$$\operatorname{ess\;inf}\displaylimits_{y\in B(x,\delta)}\varphi^{p}(y)>0$$
for some $\delta=\delta(x)>0$.
Then (Ham2) holds (and, for $p=1$, this condition is indeed equivalent to (Ham2)). In particular, (Ham2) holds whenever
$\varphi^{p}\geq 0$ is lower semi-continuous.
The following lemma is analogous to [3, Lemma 2.1].
Lemma 2.4.
Assume that (Ham2) holds. Then for $p\in(1,\infty)$,
$$L^{p}(\mathbbm{R}^{d},\mu)\subset L^{1}_{\textup{loc}}(R(\varphi^{q}),dx)$$
continuously and for $p=1$
$$L^{1}(\mathbbm{R}^{d},\mu)\subset L^{1}_{\textup{loc}}(\widehat{R}(\varphi),dx),$$
continuously.
Proof.
Let $u\in L^{p}(\mathbbm{R}^{d},\mu)$ and let $B\Subset R(\varphi^{q})$ be a ball. By Hölder’s inequality, if $p\in(1,\infty)$,
$$\int_{B}\lvert u\rvert\,dx\leq\left(\int_{R(\varphi^{q})}\lvert u\rvert^{p}\,%
\varphi^{p}\,dx\right)^{1/p}\cdot\left(\int_{B}\varphi^{-q}\,dx\right)^{1/q}.$$
The integral $\int_{B}\varphi^{-q}\,dx$ is finite since $B\Subset R(\varphi^{q})$, by the definition of the regular set.
For $p=1$, just observe that for balls $B\Subset\widehat{R}(\varphi)$
$$\int_{B}\lvert u\rvert\,dx\leq\left(\int_{\widehat{R}(\varphi)}\lvert u\rvert%
\,\varphi\,dx\right)\cdot\left(\operatorname{ess\;sup}\displaylimits_{x\in B}%
\frac{1}{\varphi(x)}\right).$$
∎
Definition 2.5.
Let
$$X:=\left\{u\in C^{\infty}(\mathbbm{R}^{d})\;\Big{|}\;\left\lVert u\right\rVert%
_{1,p,\mu}:=\left(\left\lVert\nabla u\right\rVert_{L^{p}(\mu;\mathbbm{R}^{d})}%
^{p}+\left\lVert u\right\rVert_{L^{p}(\mu)}^{p}\right)^{1/p}<\infty\right\}.$$
Let $H^{1,p}(\mu):=\widetilde{X}$ be the abstract completion of $X$ w.r.t. the pre-norm $\left\lVert\cdot\right\rVert_{1,p,\mu}$.
Lemma 2.6.
Suppose that (Ham1) holds. Then for all sequences $\{u_{n}\}\subset C^{\infty}$ the following condition holds:
(2.1)
$$\begin{split}&\displaystyle\lim_{n}\left\lVert u_{n}\right\rVert_{L^{p}(\mu)}=%
0\;\text{and}\;\{u_{n}\}\;\text{is}\;\left\lVert\nabla\cdot\right\rVert_{L^{p}%
(\mu;\mathbbm{R}^{d})}\text{-Cauchy}\\
&\displaystyle\qquad\text{always imply}\\
&\displaystyle\lim_{n}\left\lVert\nabla u_{n}\right\rVert_{L^{p}(\mu;\mathbbm{%
R}^{d})}=0.\end{split}$$
Condition (2.1) is referred to as closability.
Proof.
We shall consider partial derivatives first. Fix $i\in\{1,\ldots,d\}$.
Let $\{u_{n}\}\subset C^{\infty}$ be such that $\left\lVert u_{n}\right\rVert_{L^{p}(\mu)}\to 0$ and such that $\{u_{n}\}$ is $\left\lVert\partial_{i}\cdot\right\rVert_{L^{p}(\mu)}$-Cauchy. By the Riesz-Fischer theorem, $\{\partial_{i}u_{n}\}$
converges to some $v\in L^{p}(\mu)$. Fix $y\in\{e_{i}\}^{\perp}$. Set $\psi_{y}:t\mapsto\varphi(y+te_{i})$. By (Ham1) and
Lemma 2.4 for $d=1$, setting $I_{y}:=R(\psi^{q}_{y})$, if $p\in(1,\infty)$ and $I_{y}:=\widehat{R}(\psi_{y})$ if $p=1$,
we conclude that the sequence of maps $\{t\mapsto\partial_{i}u_{n}(y+te_{i})\}$ converges to $t\mapsto v(y+te_{i})$ in $L^{1}_{\textup{loc}}(I_{y})$.
Let $\eta\in C_{0}^{\infty}(I_{y})$. Then
$$\displaystyle 0$$
$$\displaystyle=\lim_{n}\int_{I_{y}}u_{n}(y+te_{i})\frac{d}{ds}\eta(s)\Big{|}_{s%
=t}\,dt=-\lim_{n}\int_{\operatorname{supp}\eta\cap I_{y}}(\partial_{i}u_{n})(y%
+te_{i})\eta(t)\,dt$$
$$\displaystyle=-\int_{\operatorname{supp}\eta\cap I_{y}}v(y+te_{i})\eta(t)\,dt.$$
We conclude that $v(y+te_{i})=0$ for $dy$-a.e. $y\in\{e_{i}\}^{\perp}$ and $dt$-a.e. $t\in I_{y}$. By (Ham1) it follows that
$v=0$ $\mu$-a.e. on $\mathbbm{R}^{d}$.
Assume now that $\{u_{n}\}\subset C^{\infty}$ is such that $\left\lVert u_{n}\right\rVert_{L^{p}(\mu)}\to 0$ and such that $\{u_{n}\}$ is $\left\lVert\nabla\cdot\right\rVert_{L^{p}(\mu;\mathbbm{R}^{d})}$-Cauchy.
Clearly each $\{\partial_{i}u_{n}\}$ is a Cauchy-sequence in $L^{p}(\mu)$.
Therefore, for some constant $C=C(p,d)>0$,
$$\int_{\mathbbm{R}^{d}}\lvert\nabla u_{n}\rvert^{p}\,d\mu\leq C\sum_{i=1}^{d}%
\int_{\mathbbm{R}^{d}}\lvert\partial_{i}u_{n}\rvert^{p}\,d\mu\to 0,$$
as $n\to\infty$ by the arguments above.
∎
Proposition 2.7.
Assume (Ham1). Then $H^{1,p}(\mu)$ is a space of $\mu$-classes of functions and
is continuously embedded into $L^{p}(\mu)$. Also, $H^{1,p}(\mu)$ is separable and reflexive whenever $p\in(1,\infty)$.
Proof.
The proof works by arguments similar to those in the unweighted case.
∎
Denote the (class of the) gradient of an element $u\in H^{1,p}(\mu)$ by $\nabla^{\mu}u$.
Proposition 2.8.
Assume (Ham1). The $\mu$-classes of $C_{0}^{\infty}(\mathbbm{R}^{d})$ functions are dense in $H^{1,p}(\mu)$.
Proof.
The proof is a standard localization argument using a partition of unity, see e.g. [23, Theorem 1.27].
∎
2.1. Integration by parts
We follow the approach of Albeverio, Kusuoka and Röckner [2], which is to define a weighted
Sobolev space via an integration by parts formula. Recall that $w=\varphi^{p}$. A function $f\in L^{p}(\mu)$
might fail to be a Schwartz distribution. Instead, consider $f\varphi^{p}=(f\varphi)\varphi^{p-1}$, which is in $L^{1}_{\textup{loc}}(dx)$
by Hölder’s inequality and therefore $\textup{D}(f\varphi^{p})$ is well defined.
For $f\in C_{0}^{\infty}$, the Leibniz formula yields
(2.2)
$$(\nabla f)\varphi^{p}=\textup{D}(f\varphi^{p})-pf\frac{\textup{D}\varphi}{%
\varphi}\varphi^{p},$$
which motivates the definition of the logarithmic derivative of $\mu$:
$$\beta:=p\frac{\textup{D}\varphi}{\varphi},$$
where we set $\beta\equiv 0$ on $\{\varphi=0\}$.
The name arises from the (solely formal) identity $\beta=\nabla(\log(\varphi^{p}))$.
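For a concrete weight, the identity just stated can be checked numerically. The sketch below (our own illustration; the Gaussian weight $w(x)=e^{-x^{2}}$ in one dimension and the choice $p=3$ are assumptions made for the example) compares $p\varphi'/\varphi$, $(\varphi^{p})'/\varphi^{p}$, and $(\log\varphi^{p})'$ by finite differences; all three equal $\beta(x)=-2x$ here.

```python
import math

p = 3.0
w = lambda x: math.exp(-x * x)       # weight w = phi^p
phi = lambda x: w(x) ** (1.0 / p)

def diff(f, x, h=1e-6):
    # central finite difference for f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 0.7
beta1 = p * diff(phi, x0) / phi(x0)            # p * phi'/phi
beta2 = diff(w, x0) / w(x0)                    # (phi^p)'/phi^p
beta3 = diff(lambda t: math.log(w(t)), x0)     # (log phi^p)'
print(beta1, beta2, beta3)  # each approximately -2*x0 = -1.4
```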
Lemma 2.9.
Condition (Diff) implies $\varphi^{p}\in W^{1,1}_{\textup{loc}}(dx)$ and
(2.3)
$$\beta=p\frac{\nabla\varphi}{\varphi}=\frac{\nabla(\varphi^{p})}{\varphi^{p}},$$
where $\nabla$ denotes the usual weak gradient.
Moreover, $\beta\in L^{p}_{\textup{loc}}(\mu;\mathbbm{R}^{d})$ and, if $p\in(1,\infty)$, $\lvert\nabla\varphi\rvert\varphi^{p-2}\in L^{q}_{\textup{loc}}$.
Proof.
For $p=1$, the claim follows from (Diff).
Assume (Diff) and that $p\in(1,\infty)$. That $\varphi^{p}\in L^{1}_{\textup{loc}}$ is clear.
We claim that
(2.4)
$$\nabla(\varphi^{p})=p\varphi^{p-1}\nabla\varphi.$$
Let $\varphi_{\varepsilon}:=\eta_{\varepsilon}\ast\varphi$, where $\{\eta_{\varepsilon}\}$ is a standard mollifier.
It follows from the classical chain rule that for all ${\varepsilon}>0$
$$\nabla((\varphi_{\varepsilon})^{p})=p\varphi_{\varepsilon}^{p-1}\nabla\varphi_%
{\varepsilon}.$$
Since $\varphi^{p-1}\in L^{q}_{\textup{loc}}$ and $\nabla\varphi\in L^{p}_{\textup{loc}}$, we can pass to the limit
in $L^{1}_{\textup{loc}}$ and get that $\varphi^{p}\in W^{1,1}_{\textup{loc}}(dx)$.
(2.4) follows now from the uniqueness of the gradient in $W^{1,1}_{\textup{loc}}(dx)$.
The first equality in (2.3) is clear. The second follows from (2.4). $\beta\in L^{p}_{\textup{loc}}(\mu;\mathbbm{R}^{d})$
is clear.
The last equality follows from (Diff) by
$$\left\lvert\frac{\nabla\varphi}{\varphi}\right\rvert^{q}\varphi^{p}=\left(%
\lvert\nabla\varphi\rvert\varphi^{p-2}\right)^{q}.$$
∎
Lemma 2.10.
Assume (Diff) and that $p\in(1,\infty)$. Then $\varphi^{p-1}\in W^{1,q}_{\textup{loc}}(dx)$. Also,
$$\nabla(\varphi^{p-1})=(p-1)\varphi^{p-2}\nabla\varphi.$$
Proof.
Fix $1\leq i\leq d$. For $N\in\mathbbm{N}$, define $\psi_{N}:\mathbbm{R}\to\mathbbm{R}$ by $\psi_{N}(t):=\big((\lvert t\rvert\vee N^{-1})\wedge N\big)^{p-1}$.
Clearly, $\psi_{N}$ is a Lipschitz function.
By the chain rule for Sobolev functions [47, Theorem 2.1.11],
$$\partial_{i}\psi_{N}(\varphi)=(p-1)1_{\{N^{-1}\leq\varphi\leq N\}}\frac{%
\varphi^{p-1}}{\varphi}\partial_{i}\varphi.$$
We have that $\psi_{N}(\varphi)\to\varphi^{p-1}$ $dx$-a.s. as $N\to\infty$. Also,
$$\lvert\psi_{N}(\varphi)\rvert^{q}\leq\lvert(\varphi\vee N^{-1})^{p}\rvert\leq C%
\lvert\varphi\rvert^{p}+C\in L^{1}_{\textup{loc}}.$$
Furthermore, by Lemma 2.9,
$$\left\lvert 1_{\{N^{-1}\leq\varphi\leq N\}}\frac{\varphi^{p-1}}{\varphi}%
\partial_{i}\varphi\right\rvert\leq\lvert\varphi^{p-2}\partial_{i}\varphi%
\rvert\in L^{q}_{\textup{loc}}.$$
Hence by Lebesgue’s dominated convergence theorem, $\psi_{N}(\varphi)\to\varphi^{p-1}$ in $L^{q}_{\textup{loc}}$
and $\partial_{i}\psi_{N}(\varphi)\to(p-1)\varphi^{p-2}\partial_{i}\varphi$ in $L^{q}_{\textup{loc}}$.
The claim is proved.
∎
Lemma 2.11.
Fix $1\leq i\leq d$.
Suppose that (Diff) holds. Then there is a version $\widetilde{\varphi^{p}}$ of $\varphi^{p}$, such that for $y\in\{e_{i}\}^{\perp}$ the
map $\widetilde{\psi^{p}_{y}}:t\mapsto\widetilde{\varphi^{p}}(y+te_{i})$
is absolutely continuous for almost all $y\in\{e_{i}\}^{\perp}$. Furthermore, for almost all $y\in\{e_{i}\}^{\perp}$, setting $\psi_{y}:t\mapsto\varphi(y+te_{i})$,
$$\mathbbm{R}\setminus R(\psi^{q}_{y})\supset\{t\in\mathbbm{R}\,|\,\widetilde{%
\psi^{p}_{y}}(t)=0\},$$
if $p\in(1,\infty)$ and
$$\mathbbm{R}\setminus\widehat{R}(\psi_{y})\supset\{t\in\mathbbm{R}\,|\,%
\widetilde{\psi^{1}_{y}}(t)=0\}$$
if $p=1$.
Recall that in both cases the $dt$-almost sure inclusion “$\,\subset\,$” holds automatically.
Proof.
Note that $\varphi^{p}\in W^{1,1}_{\textup{loc}}(dx)$ by Lemma 2.9.
Then the first part follows from a well known theorem due to Nikodým, cf. [35, Theorem 2.7].
The second part follows from absolute continuity and Remark 2.3 for $d=1$.
∎
We immediately get that:
Corollary 2.12.
It holds that
$$\textup{({Diff})}\quad\Longrightarrow\quad\textup{({Ham1})}.$$
Motivated by (2.2), we shall define the weighted Sobolev space $V^{1,p}(\mu)$.
Definition 2.13.
If (Diff) holds, we define the space $V^{1,p}(\mu)$ to be the set of all $\mu$-classes of functions $f\in L^{p}(\mu)$
such that there exists a gradient
$$\nabla^{\mu}f=(\partial_{1}^{\mu}f,\ldots,\partial_{d}^{\mu}f)\in L^{p}(\mu;%
\mathbbm{R}^{d})$$
which satisfies
(2.5)
$$\int\partial_{i}^{\mu}f\eta\varphi^{p}\,dx=-\int f\partial_{i}\eta\varphi^{p}%
\,dx-\int f\eta\beta_{i}\varphi^{p}\,dx$$
for all $i\in\{1,\ldots,d\}$ and all $\eta\in C_{0}^{\infty}(\mathbbm{R}^{d})$.
Define also $V^{1,p}_{\textup{loc}}(\mu)$ by replacing
$L^{p}(\mu)$ and $L^{p}(\mu;\mathbbm{R}^{d})$ above by $L^{p}_{\textup{loc}}(\mu)$ and $L^{p}_{\textup{loc}}(\mu;\mathbbm{R}^{d})$ resp.
The first two integrals in (2.5) are obviously well defined. The third integral is finite
by (Diff). It follows immediately that the gradient $\nabla^{\mu}$ is unique.
Also, if $f\in C^{1}(\mathbbm{R}^{d})$, then $f\in V^{1,p}_{\textup{loc}}(\mu)$ and $\nabla f=\nabla^{\mu}f$ $\mu$-a.s.
Proposition 2.14.
Assume (Diff). Then $V^{1,p}(\mu)$ is a Banach space with the obvious choice of a norm
$$\left\lVert\cdot\right\rVert_{1,p,\mu}:=\left(\left\lVert\nabla^{\mu}\cdot%
\right\rVert_{L^{p}(\mu;\mathbbm{R}^{d})}^{p}+\left\lVert\cdot\right\rVert_{L^%
{p}(\mu)}^{p}\right)^{1/p}.$$
Moreover, $H^{1,p}(\mu)\subset V^{1,p}(\mu)$ and their gradients coincide $\mu$-a.e.
Proof.
Let $\{f_{n}\}\subset V^{1,p}(\mu)$ be a $\left\lVert\cdot\right\rVert_{1,p,\mu}$-Cauchy sequence. By the Riesz-Fischer theorem,
$\{f_{n}\}$ converges to some $f\in L^{p}(\mu)$ and $\{\nabla^{\mu}f_{n}\}$ converges to some $g\in L^{p}(\mu;\mathbbm{R}^{d})$.
Let $i\in\{1,\ldots,d\}$ and $\eta\in C_{0}^{\infty}(\mathbbm{R}^{d})$. Passing to
the limit in (2.5)
yields that
$$\int g_{i}\eta\varphi^{p}\,dx=-\int f\partial_{i}\eta\varphi^{p}\,dx-\int f%
\eta\beta_{i}\varphi^{p}\,dx.$$
Therefore $g=\nabla^{\mu}f$ and $\left\lVert f_{n}-f\right\rVert_{1,p,\mu}\to 0$.
Let us prove the second part. Note that by Corollary 2.12 and the discussion above, $H^{1,p}(\mu)$
is a well defined set of elements in $L^{p}(\mu)$.
Let $f\in C_{0}^{\infty}(\mathbbm{R}^{d})\subset H^{1,p}(\mu)$. By (Diff) and the Leibniz formula for unweighted
Sobolev spaces, (2.2) is satisfied. By classical integration by parts, $f$ satisfies
(2.5) with $\nabla^{\mu}f=\nabla f$. We extend to all of $H^{1,p}(\mu)$ by Proposition
2.8 using that $V^{1,p}(\mu)$ is complete.
∎
For our main result further below, we need to be able to truncate $V^{1,p}(\mu)$-functions.
Therefore, we need to verify
absolute continuity on lines parallel to the coordinate axes in $V^{1,p}(\mu)$:
Proposition 2.15.
Suppose that (Diff) holds. Fix $1\leq i\leq d$.
Then $f\in V^{1,p}(\mu)$ has
a representative $\widetilde{f}^{i}$ such that $t\mapsto\widetilde{f}^{i}(y+te_{i})$ is absolutely continuous
for (($d-1$)-dim.) Lebesgue almost all $y\in\{e_{i}\}^{\perp}$ on any compact subinterval of $R(\varphi^{q}(y+\cdot e_{i}))$ if $p\in(1,\infty)$, on any compact subinterval of $\widehat{R}(\varphi(y+\cdot e_{i}))$ resp. if $p=1$.
In that case, for $dy$-a.a. $y\in\{e_{i}\}^{\perp}$, $dt$-a.a. $t\in R(\varphi^{q}(y+\cdot e_{i}))$ (if $p\in(1,\infty)$), $\widehat{R}(\varphi(y+\cdot e_{i}))$ (if $p=1$) resp. setting $x:=y+te_{i}$, it holds that
$$\partial^{\mu}_{i}f(x)=\frac{d}{dt}\widetilde{f}^{i}(y+te_{i}).$$
Proof.
The claim can be proved by arguing similarly to [6, Proof of Lemma 2.2].
Compare also with [14, §4.9.2].
∎
Picking appropriate absolutely continuous versions, one immediately obtains the following
Leibniz formula:
Corollary 2.16.
Suppose that (Diff) holds.
If $f,g\in V^{1,p}(\mu)$ and if $fg$, $f\partial_{i}^{\mu}g$ and $g\partial_{i}^{\mu}f$ are in $L^{p}(\mu)$ for
all $1\leq i\leq d$, then $fg\in V^{1,p}(\mu)$ and $\partial_{i}^{\mu}(fg)=f\partial_{i}^{\mu}g+g\partial_{i}^{\mu}f$
for all $1\leq i\leq d$. Then also, $\nabla^{\mu}(fg)=f\nabla^{\mu}g+g\nabla^{\mu}f$.
The following lemma guarantees that we can truncate Sobolev functions.
Lemma 2.17.
Suppose that (Diff) holds.
Suppose that $f\in V^{1,p}(\mu)$ and that $F:\mathbbm{R}\to\mathbbm{R}$ is Lipschitz. Then $F\circ f\in V^{1,p}(\mu)$ with
$${\nabla}^{\mu}(F\circ f)=(F^{\prime}\circ f)\cdot{\nabla}^{\mu}f\quad\mu\text{%
-a.s.}$$
In particular, when $F(t):=(t\wedge N)\vee(-N)$, $N\in\mathbbm{N}$, is a cut-off function,
(2.6)
$$\lvert{\nabla}^{\mu}(F\circ f)\rvert\leq\lvert{\nabla}^{\mu}f\rvert\quad\mu%
\text{-a.s.}$$
Proof.
The claim can be proved by arguing similarly to [47, Theorem 2.1.11].
∎
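Inequality (2.6) admits a quick numerical sanity check. In the sketch below (our own illustration; finite differences stand in for the weak gradient, and the weight plays no role in the pointwise inequality), the cut-off never increases the magnitude of the derivative, simply because $F$ is $1$-Lipschitz.

```python
import math

def clip(t, N):
    # cut-off F(t) = (t ∧ N) ∨ (-N); F is 1-Lipschitz
    return max(-N, min(t, N))

f = lambda x: 3.0 * math.sin(x)

def deriv(g, x, h=1e-7):
    # central difference quotient
    return (g(x + h) - g(x - h)) / (2 * h)

pts = [0.1 * k for k in range(-60, 61)]
ok = all(abs(deriv(lambda x: clip(f(x), 2.0), t)) <= abs(deriv(f, t)) + 1e-5
         for t in pts)
print(ok)  # True: the clipped difference quotients are dominated pointwise
```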
Lemma 2.18.
Suppose that (Diff) holds.
The set of bounded and compactly supported functions in $V^{1,p}(\mu)$ is dense in $V^{1,p}(\mu)$.
Proof.
The claim follows by a truncation argument from Corollary 2.16 and Lemma 2.17.
We shall omit the proof.
∎
Note that the last two statements also hold for $H^{1,p}(\mu)$. However, the proof
of Lemma 2.17 for $H^{1,p}(\mu)$ requires some caution; we refer to [33, Proposition I.4.7, Example II.2.c)].
3. Proof of Theorem 1.1
We arrive at our main result. Our proof is inspired by that of Patrick Cattiaux and
Myriam Fradon in [11]. See also [15]. However, our method for estimating (3.10) differs
from theirs, as we use maximal function estimates instead of Fourier transforms.
For all of this section, assume (Diff).
By Lemma 2.18, bounded and compactly supported functions in $V^{1,p}(\mu)$ are dense.
We will show that a subsequence of the standard mollifications of such a function $f$ converges to $f$ in $\left\lVert\cdot\right\rVert_{1,p,\mu}$-norm.
The claim will then follow from the density provided by Lemma 2.18.
First, we need to collect some facts about the so-called centered Hardy–Littlewood maximal function defined
for $g\in L^{1}_{\textup{loc}}(dx)$ by
$$Mg(x):=\sup_{\rho>0}\fint_{B(x,\rho)}\lvert g(y)\rvert\,dy.$$
We shall need the useful inequality
(3.1)
$$\lvert u(x)-u(y)\rvert\leq c\lvert x-y\rvert\left[M\lvert\nabla u\rvert(x)+M%
\lvert\nabla u\rvert(y)\right]$$
for any $u\in W^{1,p}(dx)$, for all $x,y\in\mathbbm{R}^{d}\setminus N$, where $N$ is a set of Lebesgue measure zero
and $c$ is a positive constant depending only on $d$ and $p$. For a proof see e.g. [1, Corollary 4.3].
The inequality is credited to L. I. Hedberg [22].
Also for all $u\in L^{p}(dx)$, $p\in(1,\infty]$,
(3.2)
$$\left\lVert Mu\right\rVert_{L^{p}}\leq c^{\prime}\left\lVert u\right\rVert_{L^%
{p}}$$
by the maximal function theorem [42, Theorem I.1 (c), p. 5] and $c^{\prime}>0$ depends only on $d$ and $p$.
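A discrete analogue makes (3.1) and (3.2) tangible. The sketch below (our own illustration; the averaging windows are truncated at the ends of the grid, and the Gaussian sample is an arbitrary choice) computes a centered maximal function on a uniform one-dimensional grid via prefix sums and estimates the ratio $\lVert Mu\rVert_{L^{2}}/\lVert u\rVert_{L^{2}}$.

```python
import math

def maximal(u):
    # discrete centered Hardy-Littlewood maximal function on a uniform 1-d grid:
    # sup over symmetric windows of the window average of |u|
    n = len(u)
    pre = [0.0]
    for v in u:
        pre.append(pre[-1] + abs(v))       # prefix sums of |u|
    out = []
    for i in range(n):
        best = abs(u[i])
        for r in range(1, n):
            lo, hi = max(0, i - r), min(n - 1, i + r)
            best = max(best, (pre[hi + 1] - pre[lo]) / (hi - lo + 1))
        out.append(best)
    return out

h = 0.05
grid = [h * (k - 200) for k in range(401)]
u = [math.exp(-x * x) for x in grid]
Mu = maximal(u)
lp = lambda v, q: (sum(abs(t) ** q for t in v) * h) ** (1.0 / q)
print(lp(Mu, 2) / lp(u, 2))  # a modest constant, in the spirit of (3.2)
```

By construction $Mu\geq\lvert u\rvert$ pointwise, so the ratio is at least $1$; the maximal function theorem says it stays bounded for $p>1$.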
For the approximation, we shall prove the following key lemma. Compare with [11, Lemma 2.9].
Lemma 3.1.
Suppose that (Diff) holds. Let $f\in V^{1,p}(\mu)$ such that $f$ is bounded. Then for every $\zeta\in C_{0}^{\infty}(\mathbbm{R}^{d})$
and every $1\leq i\leq d$
(3.3)
$$\int{\partial}^{\mu}_{i}f\zeta\varphi\,dx+\int f\partial_{i}\zeta\varphi\,dx+%
\int f\zeta\partial_{i}\varphi\,dx=0.$$
In particular, $f\varphi\in W^{1,1}_{\textup{loc}}(dx)$ and $\partial_{i}(f\varphi)=\varphi\partial_{i}^{\mu}f+f\partial_{i}\varphi$.
Proof.
Throughout the proof, fix $1\leq i\leq d$. For $p=1$, the formula follows from (2.5). So, let $p\in(1,\infty)$.
Let us first assure ourselves that all three integrals in (3.3) are well defined.
Clearly,
$$\lvert{\partial}^{\mu}_{i}f\zeta\varphi\rvert^{p}\leq\left\lVert\zeta\right%
\rVert_{\infty}^{p}\lvert{\partial}^{\mu}_{i}f\rvert^{p}\varphi^{p}1_{%
\operatorname{supp}\zeta}\in L^{1}(dx),$$
and hence,
$$\lvert{\partial}^{\mu}_{i}f\zeta\varphi\rvert\in L^{1}(dx).$$
A similar argument works for the second integral. The third integral is well defined because, by $\varphi\in W^{1,p}_{\textup{loc}}(dx)$, we have that
$$\lvert f\zeta\partial_{i}\varphi\rvert^{p}\leq\left\lVert f\zeta\right\rVert_{%
\infty}^{p}\lvert\partial_{i}\varphi\rvert^{p}1_{\operatorname{supp}\zeta}\in L%
^{1}(dx)$$
and hence,
$$\lvert f\zeta\partial_{i}\varphi\rvert\in L^{1}(dx).$$
Let $M\in\mathbbm{N}$ and $\vartheta_{M}\in C_{0}^{\infty}(\mathbbm{R})$ with
$$\vartheta_{M}(t)=t\;\text{for}\;t\in[-M,M],\;\lvert\vartheta_{M}\rvert\leq M+1%
,\;\lvert\vartheta_{M}^{\prime}\rvert\leq 1$$
and
$$\operatorname{supp}(\vartheta_{M})\subset[-3M,3M].$$
Define
$$\varphi_{M}:=\vartheta_{M}\left(\frac{1}{\varphi^{p-1}}\right)1_{\{\varphi>0\}}.$$
Clearly, $\varphi_{M}\in L^{p}_{\textup{loc}}$. Furthermore, define
$$\Phi_{M}:=(1-p)\vartheta_{M}^{\prime}\left(\frac{1}{\varphi^{p-1}}\right)\frac%
{\partial_{i}\varphi}{\varphi^{p}}1_{\{\varphi>0\}}.$$
Since $\vartheta_{M}^{\prime}(1/\varphi^{p-1})\equiv 0$ on $\{\varphi^{p-1}\leq 1/(3M)\}$ and
$$\lvert\Phi_{M}\rvert\leq(p-1)\frac{\lvert\partial_{i}\varphi\rvert}{\varphi^{p}}1_{\{\varphi^{p-1}>1/(3M)\}}=(p-1)\frac{\lvert\partial_{i}\varphi\rvert}{\varphi^{p}}1_{\{\varphi^{p}>(1/(3M))^{q}\}},$$
it follows that $\Phi_{M}\in L^{p}_{\textup{loc}}$. We claim that $\varphi_{M}\in W^{1,p}_{\textup{loc}}(dx)$ and that $\partial_{i}\varphi_{M}=\Phi_{M}$.
Let ${\varepsilon}>0$ and define
$$\varphi_{M}^{\varepsilon}:=\vartheta_{M}\left(\frac{1}{(\varphi+{\varepsilon})%
^{p-1}}\right).$$
Clearly, $\varphi_{M}^{\varepsilon}\to\varphi_{M}$ in $L^{p}_{\textup{loc}}$ as ${\varepsilon}\searrow 0$. Also, by the chain rule for Sobolev functions
(see e.g. [47, Theorem 2.1.11]),
$$\partial_{i}\varphi_{M}^{\varepsilon}=(1-p)\vartheta_{M}^{\prime}\left(\frac{1%
}{(\varphi+{\varepsilon})^{p-1}}\right)\frac{\partial_{i}\varphi}{(\varphi+{%
\varepsilon})^{p}}1_{\{\varphi+{\varepsilon}>(3M)^{-1/(p-1)}\}}$$
and
$$\lvert\partial_{i}\varphi_{M}^{\varepsilon}\rvert\leq(p-1)\frac{\lvert\partial%
_{i}\varphi\rvert}{(\varphi+{\varepsilon})^{p}}1_{\{(\varphi+{\varepsilon})^{p%
}>(1/(3M))^{q}\}}\in L^{p}_{\textup{loc}}.$$
Hence $\varphi_{M}^{\varepsilon}\in W^{1,p}_{\textup{loc}}(dx)$ and $\partial_{i}\varphi_{M}^{\varepsilon}\to\Phi_{M}$ in $L^{p}_{\textup{loc}}$ as ${\varepsilon}\searrow 0$.
Since $\varphi\in W^{1,p}_{\textup{loc}}(dx)$ and since $\varphi_{M}$ is bounded, we have that $\varphi_{M}\partial_{i}\varphi\in L^{p}_{\textup{loc}}$. Also, $\varphi\partial_{i}\varphi_{M}\in L^{p}_{\textup{loc}}$, since
(3.4)
$$\lvert\varphi\partial_{i}\varphi_{M}\rvert\leq(p-1)\frac{\lvert\partial_{i}%
\varphi\rvert}{\varphi^{p-1}}1_{\{\varphi^{p-1}>1/(3M)\}}\leq(p-1)3M\lvert%
\partial_{i}\varphi\rvert.$$
Now by the usual Leibniz rule for weak derivatives
$$\varphi\varphi_{M}\in W^{1,p}_{\textup{loc}}(dx)\quad\text{and}\quad\partial_{%
i}(\varphi\varphi_{M})=\varphi_{M}\partial_{i}\varphi+(1-p)\vartheta_{M}^{%
\prime}\left(\frac{1}{\varphi^{p-1}}\right)\frac{\partial_{i}\varphi}{\varphi^%
{p-1}}$$
where by definition $\partial_{i}\varphi/\varphi^{p-1}\equiv 0$ on $\{\varphi=0\}$. Consider the term $\varphi_{M}\varphi^{p}$.
Recall that $\varphi^{p}\in W^{1,1}_{\textup{loc}}(dx)$ by Lemma 2.9. As already seen, $\varphi\varphi_{M}\in W^{1,p}_{\textup{loc}}(dx)$.
By Lemma 2.10, $\varphi^{p-1}\in W^{1,q}_{\textup{loc}}(dx)$ and $\partial_{i}(\varphi^{p-1})=(p-1)\varphi^{p-2}\partial_{i}\varphi\in L^{q}_{%
\textup{loc}}$.
Hence $\varphi\varphi_{M}(\partial_{i}(\varphi^{p-1}))\in L^{1}_{\textup{loc}}$ and $\partial_{i}(\varphi\varphi_{M})\varphi^{p-1}\in L^{1}_{\textup{loc}}$. It follows that $\varphi_{M}\varphi^{p}\in W^{1,1}_{\textup{loc}}(dx)$ and by
the Leibniz rule for weak derivatives
$$\partial_{i}(\varphi_{M}\varphi^{p})=p\varphi_{M}\varphi^{p-1}\partial_{i}%
\varphi+(1-p)\vartheta_{M}^{\prime}\left(\frac{1}{\varphi^{p-1}}\right)%
\partial_{i}\varphi\in L^{1}_{\textup{loc}}.$$
Let $\zeta\in C_{0}^{\infty}(\mathbbm{R}^{d})$. Applying integration by parts, we see that
(3.5)
$$\int\partial_{i}\zeta\varphi_{M}\varphi^{p}\,dx=-p\int\zeta\varphi_{M}\frac{%
\partial_{i}\varphi}{\varphi}\varphi^{p}\,dx+(p-1)\int\zeta\frac{\partial_{i}%
\varphi}{\varphi^{p}}\vartheta_{M}^{\prime}\left(\frac{1}{\varphi^{p-1}}\right%
)\varphi^{p}\,dx.$$
Moreover, by (3.4), $\partial_{i}\varphi_{M}\in L^{p}_{\textup{loc}}(\varphi^{p}\,dx)$. $\varphi_{M}\in L^{p}_{\textup{loc}}(\varphi^{p}\,dx)$ is clear. Therefore
$\varphi_{M}\in V^{1,p}_{\textup{loc}}(\mu)$ and
$${\partial}^{\mu}_{i}\varphi_{M}=(1-p)\frac{\partial_{i}\varphi}{\varphi^{p}}%
\vartheta_{M}^{\prime}\left(\frac{1}{\varphi^{p-1}}\right).$$
The Leibniz rule in Corollary 2.16 also holds in $V^{1,p}_{\textup{loc}}(\mu)$, and so we would like to give sense to the expression
${\partial}^{\mu}_{i}(f\varphi_{M})=\varphi_{M}{\partial}^{\mu}_{i}f+f{\partial}^{\mu}_{i}\varphi_{M}$. Indeed, $\varphi_{M}\in V^{1,p}_{\textup{loc}}(\mu)$ and $f\in V^{1,p}(\mu)$; moreover, $f{\partial}^{\mu}_{i}\varphi_{M}\in L^{p}_{\textup{loc}}(\mu)$ since $f$ is bounded, and $\varphi_{M}{\partial}^{\mu}_{i}f\in L^{p}_{\textup{loc}}(\mu)$ since $\varphi_{M}$ is bounded.
Hence $f\varphi_{M}\in V^{1,p}_{\textup{loc}}(\mu)$ and the Leibniz rule holds (locally). By definition of ${\partial}^{\mu}_{i}$, for $\zeta\in C_{0}^{\infty}(\mathbbm{R}^{d})$,
(3.6)
$$\begin{split}\displaystyle\int{\partial}^{\mu}_{i}f\zeta\varphi_{M}\varphi^{p}%
\,dx=&\displaystyle(p-1)\int f\zeta\frac{\partial_{i}\varphi}{\varphi^{p}}%
\vartheta_{M}^{\prime}\left(\frac{1}{\varphi^{p-1}}\right)\varphi^{p}\,dx\\
&\displaystyle-\int f\partial_{i}\zeta\varphi_{M}\varphi^{p}\,dx-p\int f\zeta%
\varphi_{M}\frac{\partial_{i}\varphi}{\varphi}\varphi^{p}\,dx\end{split}$$
Now let $M\to\infty$ in (3.6). Note that
$$\varphi_{M}\to(1/\varphi^{p-1})1_{\{\varphi>0\}}$$
$dx$-a.s. and
$$\vartheta_{M}^{\prime}(1/\varphi^{p-1})\to 1$$
$dx$-a.s.
In order to apply Lebesgue’s dominated convergence theorem, we verify
$$\lvert{\partial}^{\mu}_{i}f\zeta\varphi_{M}\varphi^{p}\rvert\leq 2\lvert{%
\partial}^{\mu}_{i}f\varphi\rvert\left\lVert\zeta\right\rVert_{\infty}1_{%
\operatorname{supp}\zeta}\in L^{1}(dx),$$
where we have used that
$$\lvert\varphi_{M}\varphi^{p-1}\rvert\leq 1,$$
because $\vartheta_{M}$ is Lipschitz and $\vartheta_{M}(0)=0$.
Furthermore,
$$\displaystyle\lvert f\zeta\partial_{i}\varphi\vartheta_{M}^{\prime}\left(1/%
\varphi^{p-1}\right)\rvert$$
$$\displaystyle\leq\lvert f\partial_{i}\varphi\rvert\left\lVert\zeta\right\rVert%
_{\infty}1_{\operatorname{supp}\zeta}\in L^{1}(dx),$$
$$\displaystyle\lvert f\partial_{i}\zeta\varphi_{M}\varphi^{p}\rvert$$
$$\displaystyle\leq 2\lvert f\varphi\rvert\left\lVert\partial_{i}\zeta\right%
\rVert_{\infty}1_{\operatorname{supp}\zeta}\in L^{1}(dx),$$
and
$$\displaystyle\lvert f\zeta\varphi_{M}\partial_{i}\varphi\varphi^{p-1}\rvert$$
$$\displaystyle\leq 2\lvert f\partial_{i}\varphi\rvert\left\lVert\zeta\right%
\rVert_{\infty}1_{\operatorname{supp}\zeta}\in L^{1}(dx).$$
Passing to the limit $M\to\infty$ in (3.6) yields exactly the desired formula (3.3).
∎
Below, we shall need a lemma on difference quotients. Compare with [17, Proof of Lemma 7.23] and [47, Theorem 2.1.6].
Lemma 3.2.
Let $z\in B(0,1)\subset\mathbbm{R}^{d}$ and $u\in W^{1,p}(dx)$.
Set for ${\varepsilon}>0$
$$\Delta_{\varepsilon}u(x):=\frac{u(x-{\varepsilon}z)-u(x)}{{\varepsilon}}$$
for some representative of $u$.
Then
$$\left\lVert\Delta_{\varepsilon}u+\left\langle\nabla u,{z}\right\rangle\right%
\rVert_{L^{p}(dx)}\to 0$$
as ${\varepsilon}\searrow 0$.
Proof.
Start with $u\in C^{1}\cap W^{1,p}(dx)$. By the fundamental theorem of calculus
$$\Delta_{\varepsilon}u(x)=-\frac{1}{{\varepsilon}}\int_{0}^{\varepsilon}\left%
\langle\nabla u(x-sz),{z}\right\rangle\,ds.$$
Use Fubini’s theorem to get
(3.7)
$$\int\left\lvert\Delta_{\varepsilon}u(x)+\left\langle\nabla u(x),{z}\right%
\rangle\right\rvert^{p}\,dx=\frac{1}{{\varepsilon}}\int_{0}^{\varepsilon}\int%
\left\lvert\left\langle\nabla u(x-sz),{z}\right\rangle-\left\langle\nabla u(x)%
,{z}\right\rangle\right\rvert^{p}\,dx\,ds.$$
By a well-known property of $L^{p}$-norms [42, p. 63], the map
$$s\mapsto\int\left\lvert\left\langle\nabla u(x-sz),{z}\right\rangle-\left%
\langle\nabla u(x),{z}\right\rangle\right\rvert^{p}\,dx$$
is continuous at zero. Hence $s=0$ is a Lebesgue point of this map. Therefore the right-hand side of
(3.7) tends to zero as ${\varepsilon}\searrow 0$. The claim can be extended to functions in
$W^{1,p}(dx)$ by an approximation by smooth functions as e.g. in [47, Theorem 2.3.2].
∎
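Lemma 3.2 can be sanity-checked numerically: for a smooth, rapidly decaying $u$, the $L^{p}(dx)$ gap between $\Delta_{\varepsilon}u$ and $-\left\langle\nabla u,{z}\right\rangle$ shrinks roughly linearly in ${\varepsilon}$. A sketch in $d=1$ follows; the choices of $u$, $z$ and $p$ are illustrative.

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 240001)
dx = x[1] - x[0]
u = np.exp(-x**2) * np.sin(3 * x)        # smooth, rapidly decaying
du = np.gradient(u, x)                   # numerical nabla u
z, p = 0.7, 2.0                          # |z| < 1, p in [1, infty)

def lp_gap(eps):
    # Delta_eps u(x) = (u(x - eps*z) - u(x)) / eps, via linear interpolation
    diff_quot = (np.interp(x - eps * z, x, u) - u) / eps
    return (np.sum(np.abs(diff_quot + du * z) ** p) * dx) ** (1 / p)

gaps = [lp_gap(e) for e in (0.1, 0.05, 0.025)]
print(gaps)  # decreasing towards zero as eps -> 0
```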
Proof of Theorem 1.1.
Let $f\in V^{1,p}(\mu)$ be (a class of) a function which is bounded and compactly supported. By Lemma 2.18,
we are done if we can approximate $f$ by $C_{0}^{\infty}$-functions. Let $\{\eta_{\varepsilon}\}_{{\varepsilon}>0}$ be a
standard mollifier. Since $f$ is bounded and compactly supported,
$\eta_{\varepsilon}\ast f\in C_{0}^{\infty}(\mathbbm{R}^{d})$ with $\operatorname{supp}(\eta_{\varepsilon}\ast f)\subset\operatorname{supp}f+{%
\varepsilon}B(0,1)$ and $\lvert\eta_{\varepsilon}\ast f\rvert\leq\left\lVert f\right\rVert_{\infty}$.
We claim that there exists a sequence ${\varepsilon}_{n}\searrow 0$ such that $\eta_{{\varepsilon}_{n}}\ast f$ converges to $f$ in $V^{1,p}(\mu)$.
The $L^{p}(\mu)$-part is easy. Since $\eta_{\varepsilon}\ast f,f\in L^{1}(dx)$, $\lim_{{\varepsilon}\searrow 0}\left\lVert\eta_{\varepsilon}\ast f-f\right%
\rVert_{L^{1}(dx)}=0$. Therefore
we can extract a subsequence $\{{\varepsilon}_{n}\}$ such that $\eta_{{\varepsilon}_{n}}\ast f\to f$ $dx$-a.s. For ${\varepsilon}_{n}\leq 1$
$$\lvert(\eta_{{\varepsilon}_{n}}\ast f)\varphi-f\varphi\rvert^{p}\leq 2^{p}%
\left\lVert f\right\rVert_{\infty}^{p}\lvert\varphi\rvert^{p}1_{\operatorname{%
supp}f+B(0,1)}\in L^{1}(dx).$$
By Lebesgue’s dominated convergence theorem, $\lim_{n}\left\lVert\eta_{{\varepsilon}_{n}}\ast f-f\right\rVert_{L^{p}(\mu)}=0$.
Fix $1\leq i\leq d$.
It remains to prove that $\partial_{i}(\eta_{{\varepsilon}_{n}}\ast f)\to{\partial}^{\mu}_{i}f$ in $L^{p}(\mu)$ for some sequence ${\varepsilon}_{n}\searrow 0$,
or, equivalently, that
$$\varphi\partial_{i}(\eta_{{\varepsilon}_{n}}\ast f)\to\varphi{\partial}^{\mu}_%
{i}f\quad\text{in}\;L^{p}(dx).$$
Write
(3.8)
$$\begin{split}&\displaystyle\int\lvert\varphi\partial_{i}(\eta_{\varepsilon}%
\ast f)-\varphi{\partial}^{\mu}_{i}f\rvert^{p}\,dx\\
\displaystyle\leq&\displaystyle 2^{p-1}\left[\int\lvert\varphi{\partial}^{\mu}%
_{i}f-(\eta_{\varepsilon}\ast(\varphi{\partial}^{\mu}_{i}f))\rvert^{p}\,dx+%
\int\lvert(\eta_{\varepsilon}\ast(\varphi{\partial}^{\mu}_{i}f))-\varphi%
\partial_{i}(\eta_{\varepsilon}\ast f)\rvert^{p}\,dx\right].\end{split}$$
The first term tends to zero as ${\varepsilon}\searrow 0$ by a well-known fact [42, Theorem III.2 (c), p. 62]. We continue by studying the second term.
Recall that $\eta_{\varepsilon}(x)=\eta_{\varepsilon}(\lvert x\rvert)$.
$$\displaystyle\int\lvert\varphi\partial_{i}(\eta_{\varepsilon}\ast f)-(\eta_{%
\varepsilon}\ast(\varphi{\partial}^{\mu}_{i}f))\rvert^{p}\,dx$$
$$\displaystyle=$$
$$\displaystyle\int\left\lvert\varphi(x)\int\partial_{i}\eta_{\varepsilon}(x-y)f%
(y)\,dy-\int\eta_{\varepsilon}(x-y)\varphi(y){\partial}^{\mu}_{i}f(y)\,dy%
\right\rvert^{p}\,dx$$
$$\displaystyle=$$
$$\displaystyle\int\bigg{\lvert}\int\partial_{i}\eta_{\varepsilon}(x-y)f(y)[%
\varphi(x)-\varphi(y)]\,dy$$
$$\displaystyle+\int\partial_{i}\eta_{\varepsilon}(x-y)f(y)\varphi(y)-\eta_{%
\varepsilon}(x-y)\varphi(y){\partial}^{\mu}_{i}f(y)\,dy\bigg{\rvert}^{p}\,dx$$
We apply Lemma 3.1 with $$\zeta(y):=\eta_{\varepsilon}(x-y),$$
noting that $$\partial_{i}\eta_{\varepsilon}(x-y)=\frac{\partial}{\partial x_{i}}\eta_{%
\varepsilon}(x-y)=-\frac{\partial}{\partial y_{i}}\eta_{\varepsilon}(x-y).$$
The expression above
$$\displaystyle=$$
$$\displaystyle\int\left\lvert\int\partial_{i}\eta_{\varepsilon}(x-y)f(y)[%
\varphi(x)-\varphi(y)]\,dy+\int\eta_{\varepsilon}(x-y)f(y)\partial_{i}\varphi(%
y)\,dy\right\rvert^{p}\,dx$$
$$\displaystyle\leq$$
$$\displaystyle 2^{p-1}\left[\int\left\lvert\int\partial_{i}\eta_{\varepsilon}(x%
-y)f(y)[\varphi(x)-\varphi(y)]\,dy\right\rvert^{p}\,dx+\int\left\lvert\eta_{%
\varepsilon}\ast(f\partial_{i}\varphi)\right\rvert^{p}\,dx\right]$$
$$\displaystyle\leq$$
$$\displaystyle 2^{p-1}\int\left\lvert\int\partial_{i}\eta_{\varepsilon}(x-y)f(y%
)[\varphi(x)-\varphi(y)]\,dy\right\rvert^{p}\,dx+2^{p-1}\left\lVert f\partial_%
{i}\varphi\right\rVert_{L^{p}(dx)}^{p}.$$
We would like to control the first term. Replace $\varphi$ by $\widehat{\varphi}\in W^{1,p}(dx)$ defined by:
$$\widehat{\varphi}=\varphi\xi\;\;\text{with}\;\;\xi\in C_{0}^{\infty}(\mathbbm{%
R}^{d})\;\;\text{and}\;\;1_{\operatorname{supp}f+B(0,2)}\leq\xi\leq 1_{%
\operatorname{supp}f+B(0,3)}.$$
Let $h_{\varepsilon}:\mathbbm{R}^{d}\to\mathbbm{R}^{d}$, $h_{\varepsilon}(x):=-{\varepsilon}x$. Then upon substituting $y=x+{\varepsilon}z$ (which leads to $dy={\varepsilon}^{d}\,dz$)
$$\displaystyle\int\left\lvert\int\partial_{i}\eta_{\varepsilon}(x-y)f(y)\left[%
\varphi(x)-\varphi(y)\right]\,dy\right\rvert^{p}\,dx$$
$$\displaystyle=$$
$$\displaystyle\int\left\lvert\int_{B(0,1)}\partial_{i}\eta_{\varepsilon}(-{%
\varepsilon}z)f(x+{\varepsilon}z)\left[\widehat{\varphi}(x)-\widehat{\varphi}(%
x+{\varepsilon}z)\right]{\varepsilon}^{d}\,dz\right\rvert^{p}\,dx$$
By the chain rule $-{\varepsilon}(\partial_{i}\eta_{\varepsilon})(-{\varepsilon}z)=\partial_{i}(%
\eta_{\varepsilon}\circ h_{\varepsilon})(z)=(1/{\varepsilon}^{d})\partial_{i}(%
\eta)(z)$
and hence the latter is equal to
$$\displaystyle\int\left\lvert\int_{B(0,1)}\partial_{i}\eta(z)f(x+{\varepsilon}z%
)\frac{\widehat{\varphi}(x)-\widehat{\varphi}(x+{\varepsilon}z)}{{\varepsilon}%
}\,dz\right\rvert^{p}\,dx$$
$$\displaystyle\leq$$
$$\displaystyle 2^{p-1}\int\left\lvert\int_{B(0,1)}\partial_{i}\eta(z)f(x+{%
\varepsilon}z)\left\langle-\nabla\widehat{\varphi}(x+{\varepsilon}z),{z}\right%
\rangle\,dz\right\rvert^{p}\,dx$$
$$\displaystyle+2^{p-1}$$
$$\displaystyle\times\int\left\lvert\int_{B(0,1)}\partial_{i}\eta(z)f(x+{%
\varepsilon}z)\left[\frac{\widehat{\varphi}(x)-\widehat{\varphi}(x+{%
\varepsilon}z)}{{\varepsilon}}+\left\langle\nabla\widehat{\varphi}(x+{%
\varepsilon}z),{z}\right\rangle\right]\,dz\right\rvert^{p}\,dx$$
By Jensen’s inequality and Fubini’s theorem, the first term is bounded by
$$C(p,d)\left\lVert\partial_{i}\eta\right\rVert_{\infty}^{p}\sum_{j=1}^{d}\left%
\lVert f\partial_{j}\varphi\right\rVert_{L^{p}(dx)}^{p},$$
where $C(p,d)$ is a positive constant depending only on $p$ and $d$.
Concerning the second term, we use again Jensen’s inequality and Fubini’s theorem to see that it is bounded by
(3.9)
$$C^{\prime}(p,d)\left\lVert\partial_{i}\eta\right\rVert_{\infty}^{p}\left\lVert
f%
\right\rVert_{\infty}^{p}\int_{B(0,1)}\int\left\lvert\frac{\widehat{\varphi}(x%
)-\widehat{\varphi}(x+{\varepsilon}z)}{{\varepsilon}}+\left\langle\nabla%
\widehat{\varphi}(x+{\varepsilon}z),{z}\right\rangle\right\rvert^{p}\,dx\,dz,$$
where $C^{\prime}(p,d)$ is a positive constant depending only on $p$ and $d$. Let us investigate the inner integral.
By variable substitution, we get that the inner integral in (3.9) is equal to
(3.10)
$$\int\left\lvert\frac{\widehat{\varphi}(x-{\varepsilon}z)-\widehat{\varphi}(x)}%
{{\varepsilon}}+\left\langle\nabla\widehat{\varphi}(x),{z}\right\rangle\right%
\rvert^{p}\,dx.$$
By Lemma 3.2, the term converges to zero pointwise as ${\varepsilon}\searrow 0$ for each fixed $z\in B(0,1)$.
By inequality (3.1),
for $dz$-a.a. $z\in B(0,1)$
$$\displaystyle\int\left\lvert\frac{\widehat{\varphi}(x-{\varepsilon}z)-\widehat%
{\varphi}(x)}{{\varepsilon}}+\left\langle\nabla\widehat{\varphi}(x),{z}\right%
\rangle\right\rvert^{p}\,dx\\
\displaystyle\leq C(p,d)\left\lVert M\lvert\nabla\widehat{\varphi}\rvert\right%
\rVert_{L^{p}(dx)}^{p}\lvert z\rvert^{p}1_{B(0,1)}(z),$$
where $M$ denotes the centered Hardy-Littlewood maximal function.
If $p\in(1,\infty)$, then $\widehat{\varphi}\in W^{1,p}(dx)$ and the right-hand side is in
$L^{1}(dz)$ by estimate (3.2).
If $p=1$, then $\nabla\widehat{\varphi}\in L^{\infty}(dx)$ by (1.4) and
$$\displaystyle\int\left\lvert\frac{\widehat{\varphi}(x-{\varepsilon}z)-\widehat%
{\varphi}(x)}{{\varepsilon}}+\left\langle\nabla\widehat{\varphi}(x),{z}\right%
\rangle\right\rvert\,dx\\
\displaystyle\leq C(d,\operatorname{supp}f)\left\lVert M\lvert\nabla\widehat{%
\varphi}\rvert\right\rVert_{L^{\infty}(dx)}\lvert z\rvert^{p}1_{B(0,1)}(z)$$
and the right-hand side is again in
$L^{1}(dz)$ by estimate (3.2).
The desired convergence to zero as ${\varepsilon}\searrow 0$ follows now by the preceding discussion and
Lebesgue’s dominated convergence theorem.
We have proved that
(3.11)
$$\begin{split}&\displaystyle\int\lvert\varphi\partial_{i}(\eta_{\varepsilon}%
\ast f)-(\eta_{\varepsilon}\ast(\varphi{\partial}^{\mu}_{i}f))\rvert^{p}\,dx\\
\displaystyle\leq&\displaystyle C(d,p,\operatorname{supp}f,\eta)\left[\sum_{j=%
1}^{d}\left\lVert f\partial_{j}\varphi\right\rVert_{L^{p}(dx)}^{p}+\left\lVert
f%
\right\rVert_{\infty}^{p}\theta({\varepsilon})\right]\end{split}$$
with $\theta({\varepsilon})\to 0$ as ${\varepsilon}\searrow 0$, and $\theta$ depends only on $\operatorname{supp}f$.
We return to the right-hand side of (3.8). Let $f_{\delta}:=\eta_{\delta}\ast f$ for $\delta>0$. By Lebesgue’s dominated convergence theorem again, there
is a subnet (also denoted by $\{f_{\delta}\}$) such that
(3.12)
$$\sum_{j=1}^{d}\left\lVert(f-f_{\delta})\partial_{j}\varphi\right\rVert_{L^{p}(%
dx)}^{p}\to 0$$
as $\delta\searrow 0$. Taking (3.11) into account, ($f$ replaced by $f-f_{\delta}$ therein), we get that
$$\displaystyle\left\lVert\varphi\partial_{i}(\eta_{\varepsilon}\ast f)-(\eta_{%
\varepsilon}\ast(\varphi{\partial}^{\mu}_{i}f))\right\rVert^{p}_{L^{p}(dx)}$$
$$\displaystyle\leq$$
$$\displaystyle 2^{p-1}\left\lVert\varphi\partial_{i}(\eta_{\varepsilon}\ast(f-f%
_{\delta}))-(\eta_{\varepsilon}\ast(\varphi{\partial}^{\mu}_{i}(f-f_{\delta}))%
)\right\rVert^{p}_{L^{p}(dx)}$$
$$\displaystyle+2^{p-1}\left\lVert\varphi\partial_{i}(\eta_{\varepsilon}\ast f_{%
\delta})-(\eta_{\varepsilon}\ast(\varphi{\partial}^{\mu}_{i}f_{\delta}))\right%
\rVert^{p}_{L^{p}(dx)}$$
$$\displaystyle\leq$$
$$\displaystyle C(d,p,\operatorname{supp}f)\left[\sum_{j=1}^{d}\left\lVert(f-f_{%
\delta})\partial_{j}\varphi\right\rVert_{L^{p}(dx)}^{p}+\left\lVert f-f_{%
\delta}\right\rVert_{\infty}^{p}\theta({\varepsilon})\right]$$
$$\displaystyle+2^{p-1}\left\lVert\varphi\partial_{i}(\eta_{\varepsilon}\ast f_{%
\delta})-(\eta_{\varepsilon}\ast(\varphi{\partial}^{\mu}_{i}f_{\delta}))\right%
\rVert^{p}_{L^{p}(dx)}.$$
The use of (3.11) is justified, since $\widehat{\varphi}=\varphi$ on $\operatorname{supp}f+B(0,2)$, thus on $\operatorname{supp}(f-f_{\delta})+B(0,1)$.
Taking (3.12) into account, by first choosing $\delta$ and then letting ${\varepsilon}\searrow 0$, we can control the first term above (since $\left\lVert f-f_{\delta}\right\rVert_{\infty}\leq 2\left\lVert f\right\rVert_{\infty}$).
If we can prove for any $\zeta\in C_{0}^{\infty}$
(3.13)
$$\left\lVert\varphi\partial_{i}(\eta_{\varepsilon}\ast\zeta)-(\eta_{\varepsilon%
}\ast(\varphi{\partial}^{\mu}_{i}\zeta))\right\rVert^{p}_{L^{p}(dx)}\to 0$$
as ${\varepsilon}\searrow 0$, we can control the second term above and hence are done. But
$$\displaystyle\left\lVert\varphi\partial_{i}(\eta_{\varepsilon}\ast\zeta)-(\eta%
_{\varepsilon}\ast(\varphi\partial_{i}\zeta))\right\rVert^{p}_{L^{p}(dx)}$$
$$\displaystyle\leq$$
$$\displaystyle\int\left\lvert\int\eta_{\varepsilon}(x-y)\partial_{i}\zeta(y)%
\left[\varphi(x)-\varphi(y)\right]\,dy\right\rvert^{p}\,dx.$$
Substituting $y=x+{\varepsilon}z$ ($dy={\varepsilon}^{d}\,dz$) and using Jensen’s inequality and Fubini’s theorem again,
the latter is dominated by
$$C(d,p)\left\lVert\eta\right\rVert_{\infty}^{p}\left\lVert\partial_{i}\zeta%
\right\rVert^{p}_{\infty}\int_{B(0,1)}\left\lVert(\varphi\xi_{\zeta})(\cdot)-(%
\varphi\xi_{\zeta})(\cdot+{\varepsilon}z)\right\rVert^{p}_{L^{p}(dx)}\,dz,$$
where $\xi_{\zeta}\in C_{0}^{\infty}(\mathbbm{R}^{d})$ with $\xi_{\zeta}\equiv 1$ on $\operatorname{supp}\zeta+B(0,1)$. The quantity
$$\left\lVert(\varphi\xi_{\zeta})(\cdot)-(\varphi\xi_{\zeta})(\cdot+{\varepsilon%
}z)\right\rVert^{p}_{L^{p}(dx)}$$
tends to zero as ${\varepsilon}\searrow 0$ again by [42, p. 63].
By inequalities (3.1) and (3.2) for $dz$-a.a. $z\in B(0,1)$
$$\left\lVert(\varphi\xi_{\zeta})(\cdot)-(\varphi\xi_{\zeta})(\cdot+{\varepsilon%
}z)\right\rVert^{p}_{L^{p}(dx)}\leq c(d,p)\left\lVert\nabla(\varphi\xi_{\zeta}%
)\right\rVert_{L^{p}(dx)}^{p}\lvert{\varepsilon}z\rvert^{p}1_{B(0,1)}\in L^{1}%
(dz),$$
for $p\in(1,\infty)$,
and together with (1.4), for $p=1$,
$$\displaystyle\left\lVert(\varphi\xi_{\zeta})(\cdot)-(\varphi\xi_{\zeta})(\cdot%
+{\varepsilon}z)\right\rVert_{L^{1}(dx)}\\
\displaystyle\leq c(d,p,\operatorname{supp}f)\left\lVert\nabla(\varphi\xi_{%
\zeta})\right\rVert_{L^{\infty}(dx)}\lvert{\varepsilon}z\rvert 1_{B(0,1)}\in L%
^{1}(dz).$$
Thus we can apply Lebesgue’s dominated convergence theorem.
The proof is complete.∎
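The two approximation facts driving the proof, namely $\eta_{\varepsilon}\ast f\to f$ in $L^{p}(dx)$ and $\partial_{i}(\eta_{\varepsilon}\ast f)\to\partial_{i}f$ for $f\in W^{1,p}(dx)$, can be observed numerically. The sketch below mollifies a hat function in $d=1$; the grid, the exponent $p$ and the test function are illustrative choices.

```python
import numpy as np

x = np.linspace(-4.0, 4.0, 20001)
dx = x[1] - x[0]
f = np.maximum(0.0, 1.0 - np.abs(x))            # hat function, in W^{1,p}(dx)
df = -np.sign(x) * (np.abs(x) < 1)              # its weak derivative
p = 2.0

def mollify(g, eps):
    n = int(round(eps / dx))
    s = np.arange(-n, n + 1) * dx
    # standard bump mollifier supported in [-eps, eps], normalized to unit mass
    k = np.exp(-1.0 / np.clip(1 - (s / eps) ** 2, 1e-12, None))
    k /= k.sum() * dx
    return np.convolve(g, k, mode="same") * dx

lp = lambda g: (np.sum(np.abs(g) ** p) * dx) ** (1 / p)

errs_f, errs_df = [], []
for eps in (0.4, 0.2, 0.1):
    fe = mollify(f, eps)
    errs_f.append(lp(fe - f))
    errs_df.append(lp(np.gradient(fe, x) - df))
print(errs_f, errs_df)   # both error sequences decrease as eps -> 0
```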
4. The Kufner-Sobolev space $W^{1,p}(\mu)$
We shall briefly deal with the Kufner-Sobolev space $W^{1,p}(\mu)$ first introduced in [28] and studied e.g. in [29, 30, 37].
Definition 4.1.
Assume (Reg). Let
$$W^{1,p}(\mu):=\left\{u\in L^{p}(\mu)\;:\;\textup{D}u\in L^{p}(\mu;\mathbbm{R}^{d})\right\}.$$
Note that in the above definition, by (Reg) and Lemma 2.4,
$u\in L^{1}_{\textup{loc}}$ and hence $\textup{D}u$ is well defined.
Proposition 4.2.
Assume (Reg). Then $W^{1,p}(\mu)$ is a Banach space with the obvious choice of a norm.
Also, by definition $H^{1,p}(\mu)\subset W^{1,p}(\mu)$. Moreover, for all $u\in H^{1,p}(\mu)$, $\nabla^{\mu}u=\textup{D}u$ $dx$-a.s.
Proof.
See [29, Theorem 1.11] and [23, §1.9].
∎
The following well known result demonstrates
the power of maximal functions. We include its proof for the sake of completeness.
Proposition 4.3.
Assume $1<p<\infty$. Assume that there is a global constant $K>0$ such that
(4.1)
$$\left(\fint_{B}\varphi^{p}\,dx\right)\cdot\left(\fint_{B}\varphi^{-q}\,dx%
\right)^{p-1}\leq K,$$
for all balls $B\subset\mathbbm{R}^{d}$. Then $H^{1,p}(\mu)=W^{1,p}(\mu)$.
Proof.
Let $f\in W^{1,p}(\mu)$, $f$ bounded and compactly supported. Let $\{\eta_{\varepsilon}\}_{{\varepsilon}>0}$
be a standard mollifier. That $\varphi$ satisfies condition (4.1) is equivalent to
saying that $\varphi^{p}=w\in A_{p}$, where $A_{p}$ is the so-called $p$-Muckenhoupt class.
Note that this implies (Reg).
Let
$$Mf(x):=\sup_{\rho>0}\fint_{B(x,\rho)}\lvert f(y)\rvert\,dy$$
be the centered Hardy-Littlewood maximal function of $f$. By [43, Ch. II, §2, p. 57] we
have the pointwise estimate
$$\lvert f\ast\eta_{\varepsilon}\rvert\leq Mf\quad\forall{\varepsilon}>0.$$
Also, by [43, Ch. V, §2, p. 198] and the sublinearity of $M$, it is easy to prove that
$$\lvert\nabla(f\ast\eta_{\varepsilon})\rvert\leq M\lvert\textup{D}f\rvert\quad%
\forall{\varepsilon}>0.$$
By [43, Ch. V, §3, p. 201, Theorem 1], $w\in A_{p}$ implies that there exists a
constant $C>0$ such that
$$\int(Mf(x))^{p}\,w(x)\,dx\leq C\int\lvert f(x)\rvert^{p}\,w(x)\,dx\quad\forall
f%
\in L^{p}(\mu).$$
Since $f$ was assumed bounded and compactly supported, by (Reg), $f\in L^{1}(dx)$ and $\{f\ast\eta_{\varepsilon}\}$ converges to $f$ in
$L^{1}(dx)$ as ${\varepsilon}\downarrow 0$. A similar statement holds for $\lvert\textup{D}f\rvert$. Hence a subsequence converges $dx$-a.e. Taking the above
estimates into account, we see that a subsequence of $\{f\ast\eta_{\varepsilon}\}$ converges in $W^{1,p}(\mu)$ to $f$ by Lebesgue’s dominated convergence theorem.
∎
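Condition (4.1) says precisely that $w=\varphi^{p}$ is a Muckenhoupt $A_{p}$ weight (note $\varphi^{-q}=w^{-1/(p-1)}$). As an illustration not taken from the text, $w(x)=\lvert x\rvert^{a}$ on $\mathbbm{R}$ belongs to $A_{p}$ if and only if $-1<a<p-1$; the quantity in (4.1) can be sampled numerically over a family of intervals:

```python
import numpy as np

p, a = 3.0, 1.0          # illustrative: |x|^a is A_p on R iff -1 < a < p - 1

def ap_ratio(c, r, n=20000):
    # midpoint sample of (avg_B w) * (avg_B w^{-1/(p-1)})^{p-1} on B = (c-r, c+r)
    h = 2.0 * r / n
    x = c - r + (np.arange(n) + 0.5) * h     # midpoints avoid x = 0 exactly
    w = np.abs(x) ** a
    return w.mean() * (w ** (-1.0 / (p - 1))).mean() ** (p - 1)

ratios = [ap_ratio(c, r) for c in (0.0, 0.5, 3.0) for r in (0.1, 1.0, 10.0)]
print(min(ratios), max(ratios))   # bounded; always >= 1 by Holder's inequality
```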
We arrive at our major contribution to the study of the “classical” weighted Sobolev space $W^{1,p}(\mu)$.
For $p=2$ it was proved in [4].
Proposition 4.4.
Assume (Reg), (Diff), and if $p=1$, assume also that (1.4) holds. Then
$$H^{1,p}(\mu)=V^{1,p}(\mu)=W^{1,p}(\mu).$$
Proof.
The first equality follows from Theorem 1.1. Therefore by Proposition 4.2, $V^{1,p}(\mu)\subset W^{1,p}(\mu)$
and for $u\in V^{1,p}(\mu)$, $\nabla^{\mu}u=\textup{D}u$ both $\mu$-a.e. and $dx$-a.e. (recall that (Reg) implies
that $dx$ and $\mu$ are equivalent measures).
Conversely, let $f\in W^{1,p}(\mu)\cap L^{\infty}(\mu)$.
Since by Lemma 2.9, $\varphi^{p}\in W^{1,1}_{\textup{loc}}(dx)$, we have for each $i\in\{1,\ldots,d\}$ and each $\eta\in C_{0}^{\infty}(\mathbbm{R}^{d})$
that
$$\int\textup{D}_{i}f\eta\varphi^{p}\,dx=-\int f\partial_{i}(\eta\varphi^{p})\,dx,$$
where $\partial_{i}$ is the usual weak derivative in $W^{1,1}_{\textup{loc}}(dx)$. But, again by Lemma 2.9, the right-hand side is equal to
$$-\int f\partial_{i}\eta\varphi^{p}\,dx-\int f\eta\beta_{i}\varphi^{p}\,dx.$$
Therefore $f\in V^{1,p}(\mu)$ and $\textup{D}f=\nabla^{\mu}f$ both $\mu$-a.e. and $dx$-a.e.
It is well known that, given (Reg), bounded functions in $W^{1,p}(\mu)$ are dense in $W^{1,p}(\mu)$ and hence
$W^{1,p}(\mu)\subset V^{1,p}(\mu)$.
∎
5. The weighted $p$-Laplacian evolution problem
Main result 1.1 can be used to investigate the evolution problem
related to the weighted $p$-Laplacian equation. We shall briefly illustrate the procedure
for the so-called degenerate case, that is, $p\in[2,\infty)$.
By a weak solution to equation (1.7) we mean a variational solution in the sense of [8, Ch. 4.1, Theorem 4.10].
Theorem 5.1.
Let $p\in[2,\infty)$.
Suppose also
that $\mu$ is a finite measure, so that $L^{p}(\mu)\subset L^{2}(\mu)$ densely and continuously.
Suppose that (Diff)
holds for $\varphi^{p}=w\geq 0$. Then the evolution problem (1.5) admits a unique (weak) solution.
Proof.
We represent the monotone operator
$$\begin{split}&\displaystyle A:V^{1,p}(\mu)\to(V^{1,p}(\mu))^{\ast},\\
&\displaystyle\sideset{{}_{(V^{1,p}(\mu))^{\ast}}}{{}_{V^{1,p}(\mu)}}{\mathop{%
\left\langle{A(u)},{v}\right\rangle}}=\int\lvert\nabla^{\mu}u\rvert^{p-2}\left%
\langle\nabla^{\mu}u,{\nabla^{\mu}v}\right\rangle\,w\,dx,\end{split}$$
as the Gâteaux derivative of
$$E_{0}^{\mu}(u):=\frac{1}{p}\int\lvert\nabla^{\mu}u\rvert^{p}\,w\,dx$$
in the triple of dense and continuous embeddings
$V^{1,p}(\mu)\subset L^{2}(\mu)\subset(V^{1,p}(\mu))^{\ast}$.
Since $p\geq 2$, the operator is demicontinuous, compare with [8, Ch. 2.4, Theorem 2.5]. Boundedness of the operator $A$
follows straightforwardly. See [8, Ch. 4.1, Theorem 4.10] for details and the terminology.
Existence follows now from [9, Theorem 4.4]. Uniqueness follows from
monotonicity.
∎
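The gradient-flow structure used in the proof (the operator $A$ is the Gâteaux derivative of the energy $E_{0}^{\mu}$) implies that the energy decreases along solutions. A crude explicit finite-difference sketch in $d=1$ illustrates this; the grid, the weight $w$, the initial datum and the step size are illustrative choices, and a production solver would use an implicit scheme.

```python
import numpy as np

p = 3.0
N = 100
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
w_mid = 1.0 + (x[:-1] + x[1:]) / 2.0     # illustrative weight at cell interfaces
u = np.sin(np.pi * x)                     # initial datum, u(0) = u(1) = 0
dt = 1e-6                                 # small enough for explicit stability

def energy(v):
    g = np.diff(v) / h
    return (np.abs(g) ** p * w_mid).sum() * h / p

E0 = energy(u)
for _ in range(2000):
    g = np.diff(u) / h                           # gradient on interfaces
    flux = w_mid * np.abs(g) ** (p - 2) * g      # w |u'|^{p-2} u'
    u[1:-1] += dt * np.diff(flux) / h            # explicit Euler; BC kept fixed
print(E0, energy(u))   # the energy decreases along the discrete flow
```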
For $p\in(2,\infty)$, consider the following additional condition on $w$:
(5.1)
$$w^{-1/(p-2)}\in L^{1}(dx).$$
Lemma 5.2.
Let $p\in(2,\infty)$.
Assume that condition (5.1) is satisfied for $w$.
Then $L^{p}(\mu)\subset L^{2}(dx)$ continuously and (Reg) is satisfied.
Proof.
Let $u\in L^{p}(\mathbbm{R}^{d},\mu)$. By Hölder’s inequality,
$$\left(\int\lvert u\rvert^{2}\,dx\right)^{1/2}\leq\left(\int\lvert u\rvert^{p}%
\,\varphi^{p}\,dx\right)^{1/p}\cdot\left(\int\left(\frac{1}{\varphi}\right)^{p%
/(p-2)}\,dx\right)^{(p-2)/(2p)},$$
which is finite by (5.1).
Since for any ball $B\subset\mathbbm{R}^{d}$, it holds that
$$1_{B}w^{-1/(p-1)}\leq\left(1_{B}w^{-1/(p-2)}+1_{B}\right)\in L^{1}_{\textup{%
loc}}(dx),$$
we see that (Reg) is satisfied.
∎
Consider the following evolution equation in $L^{2}(dx)$
(5.2)
$$\left.\begin{aligned} \displaystyle\partial_{t}u&\displaystyle=\operatorname{%
div}\left[w\lvert\nabla u\rvert^{p-2}\nabla u\right],&&\displaystyle\quad\text%
{in}\;\;(0,T)\times\mathbbm{R}^{d},\\
\displaystyle u(\cdot,0)&\displaystyle=u_{0}\in L^{2}(dx),&&\displaystyle\quad%
\text{in}\;\;\mathbbm{R}^{d}.\end{aligned}\right\}$$
The above equation differs by a “weight term”, owing to the dualization in $L^{2}(dx)$ rather
than in $L^{2}(\mu)$.
Theorem 5.3.
Let $p\in(2,\infty)$.
Assume that condition (5.1) is satisfied for $w$. Assume (Diff).
Then
(5.2) admits a unique (weak) solution.
Proof.
By Lemma 5.2, $L^{p}(\mu)\subset L^{2}(dx)$ continuously. Hence the proof follows again from [9, Theorem 4.4] and
monotonicity.
∎
6. A new class of $p$-admissible weights
We shall recall the definition of $p$-admissible weights from [23] by Heinonen, Kilpeläinen and Martio.
Note the similarities between (6.2) and (2.1) above.
Definition 6.1.
A weight $w\in L^{1}_{\textup{loc}}(\mathbbm{R}^{d})$, $w\geq 0$ is called $p$-admissible if the following four conditions are satisfied.
•
$0<w<\infty$ $dx$-a.e. and the weight is doubling, i.e. there is a constant $C_{1}>0$ such that
(6.1)
$$\int_{2B}w\,dx\leq C_{1}\int_{B}w\,dx\quad\forall\text{\;balls\;}B\subset%
\mathbbm{R}^{d}.$$
•
If $\Omega\subset\mathbbm{R}^{d}$ is open and $\{\eta_{k}\}\subset C^{\infty}(\Omega)$ is a sequence of functions such that
(6.2)
$$\int_{\Omega}\lvert\eta_{k}\rvert^{p}w\,dx\to 0\quad\text{and}\quad\int_{%
\Omega}\lvert\nabla\eta_{k}-v\rvert^{p}w\,dx\to 0$$
for some $v\in L^{p}(\Omega,w\,dx;\mathbbm{R}^{d})$, then $v\equiv 0\in\mathbbm{R}^{d}$.
•
There are constants $\kappa>1$ and $C_{3}>0$ such that
(6.3)
$$\left(\frac{1}{\int_{B}w\,dx}\int_{B}\lvert\eta\rvert^{\kappa p}w\,dx\right)^{%
1/(\kappa p)}\leq C_{3}\operatorname{diam}B\left(\frac{1}{\int_{B}w\,dx}\int_{%
B}\lvert\nabla\eta\rvert^{p}w\,dx\right)^{1/p},$$
whenever $B\subset\mathbbm{R}^{d}$ is a ball and $\eta\in C_{0}^{\infty}(B)$.
•
There is a constant $C_{4}>0$ such that
(6.4)
$$\int_{B}\lvert\eta-\eta_{B}\rvert^{p}w\,dx\leq C_{4}(\operatorname{diam}B)^{p}%
\int_{B}\lvert\nabla\eta\rvert^{p}w\,dx,$$
whenever $B\subset\mathbbm{R}^{d}$ is a ball and $\eta\in C^{\infty}_{b}(B)$. Here
$$\eta_{B}:=\frac{1}{\int_{B}w\,dx}\int_{B}\eta\,w\,dx.$$
The next results were basically proved by Hebisch and Zegarliński in [21, Section 2].
We include the proofs in order to make this paper self-contained and obtain
concrete bounds due to a more specific situation.
Lemma 6.2.
Let $1<q<\infty$, $\beta\in(0,\infty)$. Let $\mu(dx):=\exp(-\beta\lvert x\rvert^{q})\,dx$.
Then for any $C\geq(\beta q)^{-1}$, any ${\varepsilon}>0$ and any $D\geq(1+{\varepsilon})^{q-1}+({\varepsilon}^{-1}+d-1)C$, we have that
(6.5)
$$\int\lvert f\rvert\lvert x\rvert^{q-1}\,\mu(dx)\leq C\int\lvert\nabla f\rvert%
\,\mu(dx)+D\int\lvert f\rvert\,\mu(dx),$$
for all $f\in C^{1}_{0}(\mathbbm{R}^{d})$.
Proof.
Let $f\in C_{0}^{1}(\mathbbm{R}^{d})$ be such that $f\geq 0$ and $f$ vanishes on the unit ball.
By the Leibniz rule we get that
$$(\nabla f)e^{-\beta\lvert\cdot\rvert^{q}}=\nabla\left(fe^{-\beta\lvert\cdot%
\rvert^{q}}\right)+\beta qf\lvert\cdot\rvert^{q-1}\operatorname{sign}(\cdot)e^%
{-\beta\lvert\cdot\rvert^{q}}.$$
Plugging into the functional $g\mapsto\int\left\langle g(x),{\operatorname{sign}(x)}\right\rangle\,dx$ yields
(6.6)
$$\begin{split}&\displaystyle\int\left\langle\operatorname{sign}(x),{\nabla f(x)%
}\right\rangle e^{-\beta\lvert x\rvert^{q}}\,dx\\
\displaystyle=&\displaystyle\int\left\langle\operatorname{sign}(x),{\nabla%
\left(fe^{-\beta\lvert x\rvert^{q}}\right)}\right\rangle\,dx+\beta q\int f(x)%
\lvert x\rvert^{q-1}e^{-\beta\lvert x\rvert^{q}}\,dx.\end{split}$$
Clearly, for the left-hand side,
(6.7)
$$\int\left\langle\operatorname{sign}(x),{\nabla f(x)}\right\rangle e^{-\beta%
\lvert x\rvert^{q}}\,dx\leq\int\lvert\nabla f(x)\rvert e^{-\beta\lvert x\rvert%
^{q}}\,dx.$$
Recall that
(6.8)
$$\operatorname{div}(\operatorname{sign}(x))=\left\{\begin{aligned} %
\displaystyle 2\delta_{0},&\displaystyle\;\;\text{if}\;\;d=1,\\
\displaystyle\frac{d-1}{\lvert x\rvert},&\displaystyle\;\;\text{if}\;\;d\geq 2%
\end{aligned}\right.$$
(in the sense of distributions), where $\delta_{0}$ denotes the Dirac measure in $0$.
Hence after an approximation by mollifiers, for $d=1$, we get the formula
(6.9)
$$\int\left\langle\operatorname{sign}(x),{\nabla\left(fe^{-\beta\lvert x\rvert^{%
q}}\right)}\right\rangle\,dx=-2\int fe^{-\beta\lvert x\rvert^{q}}\,\delta_{0}(%
dx)=-2f(0)=0.$$
For $d\geq 2$, we get that
(6.10)
$$\begin{split}&\displaystyle\int\left\langle\operatorname{sign}(x),{\nabla\left%
(fe^{-\beta\lvert x\rvert^{q}}\right)}\right\rangle\,dx\\
\displaystyle=&\displaystyle(1-d)\int\frac{1}{\lvert x\rvert}fe^{-\beta\lvert x%
\rvert^{q}}\,dx\geq(1-d)\int fe^{-\beta\lvert x\rvert^{q}}\,dx.\end{split}$$
Gathering (6.6), (6.7), (6.9) and (6.10) gives
(6.11)
$$\beta q\int f\lvert x\rvert^{q-1}\,\mu(dx)\leq\int\lvert\nabla f\rvert\,\mu(dx%
)+(d-1)\int f\,\mu(dx).$$
Replacing $f$ by $\lvert f\rvert$ and noting that $\nabla(\lvert f\rvert)=\operatorname{sign}(f)\nabla f$, we can extend to arbitrary $f\in C_{0}^{1}$
such that $f\equiv 0$ on $B(0,1)$.
Now, let $f\in C_{0}^{1}$ be arbitrary.
Let ${\varepsilon}>0$. Let $\varphi(x):=1\wedge({\varepsilon}^{-1}((1+{\varepsilon})-\lvert x\rvert)\vee 0)$.
Then $f=g+h$, where $g:=\varphi f$ and $h:=(1-\varphi)f$. Also, $h\equiv 0$ on $B(0,1)$. Now,
(6.12)
$$\begin{split}\displaystyle\int\lvert f\rvert\lvert x\rvert^{q-1}\,\mu(dx)&%
\displaystyle=\int_{\lvert x\rvert\leq 1+{\varepsilon}}\lvert f\rvert\lvert x%
\rvert^{q-1}\,\mu(dx)+\int_{\lvert x\rvert>1+{\varepsilon}}\lvert f\rvert%
\lvert x\rvert^{q-1}\,\mu(dx)\\
&\displaystyle\leq(1+{\varepsilon})^{q-1}\int_{\lvert x\rvert\leq 1+{%
\varepsilon}}\lvert f\rvert\,\mu(dx)+\int_{\lvert x\rvert>1+{\varepsilon}}%
\lvert h\rvert\lvert x\rvert^{q-1}\,\mu(dx)\\
&\displaystyle\leq(1+{\varepsilon})^{q-1}\int\lvert f\rvert\,\mu(dx)+\int%
\lvert h\rvert\lvert x\rvert^{q-1}\,\mu(dx).\end{split}$$
Note that $\lvert\nabla h\rvert\leq\lvert\nabla f\rvert+{\varepsilon}^{-1}\lvert f\rvert$ $dx$-a.s. Let $C\geq(\beta q)^{-1}$.
By an approximation in $W^{1,\infty}$-norm, we see that (6.11) is also valid for $h$ and hence
$$\begin{split}&\displaystyle\int\lvert h\rvert\lvert x\rvert^{q-1}\,\mu(dx)\leq
C%
\int\lvert\nabla h\rvert\,\mu(dx)+C(d-1)\int\lvert h\rvert\,\mu(dx)\\
\displaystyle\leq&\displaystyle C\int\lvert\nabla f\rvert\,\mu(dx)+({%
\varepsilon}^{-1}+d-1)C\int\lvert f\rvert\,\mu(dx),\end{split}$$
which, combined with (6.12), yields inequality (6.5) with
$D\geq(1+{\varepsilon})^{q-1}+({\varepsilon}^{-1}+d-1)C$.
∎
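Inequality (6.5) is easy to sanity-check numerically in $d=1$. With $q=2$ and $\beta=1$ one may take $C=(\beta q)^{-1}=1/2$ and, choosing ${\varepsilon}=1$, $D=(1+{\varepsilon})^{q-1}+({\varepsilon}^{-1}+d-1)C=2.5$. The test function below is an illustrative $C_{0}^{1}$ choice:

```python
import numpy as np

beta, q, d = 1.0, 2.0, 1
C = 1.0 / (beta * q)                           # C >= (beta*q)^{-1}
eps = 1.0
D = (1 + eps) ** (q - 1) + (1 / eps + d - 1) * C   # = 2.5 here

x = np.linspace(-6.0, 6.0, 120001)
dx = x[1] - x[0]
mu = np.exp(-beta * np.abs(x) ** q)            # density of mu

f = np.maximum(0.0, 1.0 - (x / 2.0) ** 2) ** 2  # C^1, supported in [-2, 2]
df = np.gradient(f, x)

lhs = np.sum(np.abs(f) * np.abs(x) ** (q - 1) * mu) * dx
rhs = C * np.sum(np.abs(df) * mu) * dx + D * np.sum(np.abs(f) * mu) * dx
print(lhs, rhs)    # lhs <= rhs, as (6.5) predicts
```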
Lemma 6.3.
Let $1<p<\infty$, $q:=p/(p-1)$, $\beta\in(0,\infty)$. Let $\mu(dx):=\exp(-\beta\lvert x\rvert^{q})\,dx$. Let $C\geq(\beta q)^{-1}$.
Let $W\in C^{1}(\mathbbm{R}^{d})$ be a differentiable potential (in particular, bounded below) such that
(6.13)
$$\lvert\nabla W(x)\rvert\leq\delta\lvert x\rvert^{q-1}+\gamma$$
with some constants $0<\delta<C^{-1}$, $\gamma\in(0,\infty)$.
Let $V$ be measurable such that $\operatorname{osc}V:=\sup V-\inf V<\infty$. Let $d\nu:=\exp(-W-V)\,d\mu$.
Then for any ${\varepsilon}_{0}>0$, any
$$C^{\prime}\geq(1-C\delta)^{-1}{\varepsilon}_{0}pCe^{2\operatorname{osc}V},$$
any ${\varepsilon}_{1}>0$ and any
$$D^{\prime}\geq(1-C\delta)^{-1}e^{2\operatorname{osc}V}\left((1+{\varepsilon}_{%
1})^{q-1}+({\varepsilon}_{1}^{-1}+d-1)C+({\varepsilon}_{0}p)^{-q/p}Cpq^{-1}+%
\gamma\right)$$
it holds that
(6.14)
$$\int\lvert f\rvert^{p}\lvert x\rvert^{q-1}\,\nu(dx)\leq C^{\prime}\int\lvert%
\nabla f\rvert^{p}\,\nu(dx)+D^{\prime}\int\lvert f\rvert^{p}\,\nu(dx),$$
for any $f\in C_{0}^{1}$.
Proof.
Plug $\lvert f\rvert^{p}e^{-W}$ into (6.5). By Leibniz’s rule we get that
$$\displaystyle\int\lvert f\rvert^{p}\lvert x\rvert^{q-1}e^{-W}\,\mu(dx)$$
$$\displaystyle\leq$$
$$\displaystyle Cp\int\lvert f\rvert^{p-1}\lvert\nabla f\rvert e^{-W}\,\mu(dx)$$
$$\displaystyle+C\int\lvert f\rvert^{p}\lvert\nabla W\rvert e^{-W}\mu(dx)+D\int%
\lvert f\rvert^{p}e^{-W}\mu(dx).$$
For the first term,
$$\displaystyle Cp\int\lvert f\rvert^{p-1}\lvert\nabla f\rvert e^{-W}\,\mu(dx)$$
$$\displaystyle\leq$$
$$\displaystyle Cp\left(\int\lvert\nabla f\rvert^{p}e^{-W}\,\mu(dx)\right)^{1/p}%
\cdot\left(\int\lvert f\rvert^{p}e^{-W}\,\mu(dx)\right)^{1/q}$$
$$\displaystyle\leq$$
$$\displaystyle{\varepsilon}_{0}pC\int\lvert\nabla f\rvert^{p}e^{-W}\,\mu(dx)+({%
\varepsilon}_{0}p)^{-q/p}Cpq^{-1}\int\lvert f\rvert^{p}e^{-W}\,\mu(dx),$$
by the Hölder and Young inequalities, respectively. Since $\operatorname{osc}V<\infty$, the claim follows by an easy perturbation argument;
see e.g. [16, preuve du théorème 3.4.1].
∎
Usually, one would set ${\varepsilon}_{0}:=p^{-1}$ and ${\varepsilon}_{1}:=1$.
Theorem 6.4.
Let $1<p<\infty$ and let $w$ be a weight such that $w$ satisfies a local $p$-Poincaré inequality (6.4)
with constant $C_{4}>0$. Let $\beta$, $W$, $V$, $C^{\prime}>0$, $D^{\prime}>0$ be as in Lemma 6.3.
Let $L>D^{\prime}$. Let
$$a_{L}:=\operatorname{osc}\displaylimits_{B(0,L^{p-1})}\left[-\beta\lvert\cdot%
\rvert^{q}-W-V\right].$$
Let
$$c\geq 2^{q}\frac{e^{2a_{L}}C_{4}L^{p(p-1)}+\frac{C^{\prime}}{L}}{1-\frac{D^{%
\prime}}{L}}.$$
Suppose that
$d\nu_{w}:=\exp(-\beta\lvert\cdot\rvert^{q}-W-V)\,w\,dx$ is a finite measure.
Then $\nu_{w}$ satisfies the Poincaré inequality
$$\int\left\lvert f-\frac{\int f\,d\nu_{w}}{\int\,d\nu_{w}}\right\rvert^{p}\,d%
\nu_{w}\leq c\int\lvert\nabla f\rvert^{p}\,d\nu_{w},$$
for all $f\in C_{b}^{\infty}(\mathbbm{R}^{d})$.
Proof.
By the results of Lemma 6.3, we can apply [21, Theorem 3.1].
∎
Before we prove Theorem 1.3, let us note that, under our assumptions,
the results of Hebisch and Zegarliński (in this particular case) extend to $V^{1,p}(\mu)=W^{1,p}(\mu)$.
Of course, other Poincaré and Sobolev type inequalities for smooth functions extend similarly to $V^{1,p}(\mu)$ if
the weight satisfies (Diff).
Proof of Theorem 1.3.
Let us prove that $\exp(-\beta\lvert\cdot\rvert^{q}-W-V)$ is doubling.
Let $c_{1}^{W},c_{1}^{V}\geq 1$, $c_{2}^{W},c_{2}^{V}\in\mathbbm{R}$ be the constants from property (D).
Let $a:=\inf W$, $b:=\inf V$. Let $B\subset\mathbbm{R}^{d}$ be any ball. Then
$$\displaystyle\int_{2B}e^{-\beta\lvert x\rvert^{q}-W(x)-V(x)}\,dx=2\int_{B}e^{-%
2^{q}\beta\lvert x\rvert^{q}-W(2x)-V(2x)}\,dx\\
\displaystyle\leq 2e^{-(c^{W}_{1}-1)a+c_{2}^{W}-(c^{V}_{1}-1)b+c_{2}^{V}}\int_%
{B}e^{-\beta\lvert x\rvert^{q}-W(x)-V(x)}\,dx,$$
which proves the doubling property.
By arguments similar to those in the proof of Lemma 2.6, condition (6.2) is implied by condition (Reg), which is obviously satisfied here,
since $\beta\lvert\cdot\rvert^{q}$, $W$ and $V$ are locally bounded.
Alternatively, by a general result due to Semmes, (6.2) is implied by (6.1) and (6.4);
see [24, Lemma 5.6].
The weighted Poincaré inequality (6.4) follows from Theorem 6.4
by noting that $\exp(-\beta\lvert x\rvert^{q}-W-V)\,dx$ is a finite measure.
The weighted Sobolev inequality (6.3) follows from (6.1) and (6.4) by a general result of Hajłasz and Koskela [18].
Suppose now that $V\in W^{1,\infty}_{\textup{loc}}(dx)$. Since $W\in C^{1}$, also $W\in W^{1,\infty}_{\textup{loc}}(dx)$. A similar statement
holds for $-\beta\lvert\cdot\rvert^{q}$. Therefore, it is an easy exercise to check that the conditions (Reg) and (Diff) are satisfied.
∎
Acknowledgements
The author would like to thank Michael Röckner for his interest in the subject and several helpful discussions, Oleksandr Kutovyi for checking the proof of the main result, and the referees for their valuable remarks.
The author acknowledges that some results of this work can be found in Chapter 2.6 of the book [10] by Vladimir I. Bogachev. The research for this work, however, was carried out independently, and a preliminary form of the results was published in [44] at about the same time the book by Bogachev appeared.
References
[1]
D. Aalto and J. Kinnunen, Maximal functions in Sobolev spaces,
Sobolev spaces in mathematics I, Sobolev type inequalities (V. Maz’ya,
ed.), International Mathematical Series, vol. 8, Springer and Tamara
Rozhkovskaya Publisher, 2009, pp. 25–67.
[2]
S. Albeverio, S. Kusuoka, and M. Röckner, On partial integration in
infinite-dimensional space and applications to Dirichlet forms, J. London
Math. Soc. (2) 42 (1990), no. 1, 122–136.
[3]
S. Albeverio and M. Röckner, Classical Dirichlet forms on
topological vector spaces - closability and a Cameron-Martin formula, J.
Funct. Anal. 88 (1990), no. 2, 395–436.
[4]
S. Albeverio and M. Röckner, New developments in the theory and application of Dirichlet
forms, Stochastic processes, physics and geometry (Ascona and Locarno,
1988), World Sci. Publ., Teaneck, NJ, 1990, pp. 27–76.
[5]
S. Albeverio, M. Röckner, and T.-S. Zhang, Markov uniqueness and its
applications to martingale problems, stochastic differential equations and
stochastic quantization, C. R. Math. Rep. Acad. Sci. Canada 15
(1993), no. 1, 1–6.
[6]
S. Albeverio, M. Röckner, and T.-S. Zhang, Markov uniqueness for a class of infinite dimensional
Dirichlet operators, Stochastic processes and optimal control, Stochastic
Monographs 7, 1993, pp. 1–26.
[7]
F. Andreu, J. M. Mazón, J. D. Rossi, and J. Toledo, Local and nonlocal weighted $p$-Laplacian evolution equations with Neumann boundary conditions, Publ. Mat. 55 (2011), 27–66.
[8]
V. Barbu, Nonlinear differential equations of monotone types in
Banach spaces, Springer Monographs in Mathematics, Springer, 2010.
[9]
W. Bian and J. R. L. Webb, Solutions of nonlinear evolution
inclusions, Nonlinear Anal. 37 (1999), no. 7, Ser. A: Theory
Methods, 915–932.
[10]
V. I. Bogachev, Differentiable measures and the Malliavin calculus,
Mathematical Surveys and Monographs, vol. 164, American Mathematical Society,
Providence, RI, 2010.
[11]
P. Cattiaux and M. Fradon, Entropy, reversible diffusion processes, and
Markov uniqueness, J. Funct. Anal. 138 (1996), no. 1, 243–272.
[12]
V. Chiadò Piat and F. Serra Cassano, Some remarks about the density
of smooth functions in weighted Sobolev spaces, J. Convex Anal. 1
(1994), no. 2, 135–142.
[13]
A. Eberle, Uniqueness and non-uniqueness of semigroups generated by
singular diffusion operators, Lecture Notes in Mathematics, vol. 1718,
Springer, Berlin–Heidelberg–New York, 1999.
[14]
L. C. Evans and R. F. Gariepy, Measure theory and fine properties of
functions, Studies in Advanced Mathematics, CRC Press, 1992.
[15]
M. Fradon, Diffusions réfléchies réversibles dégénérées,
Potential Anal. 6 (1997), no. 4, 369–414.
[16]
I. Gentil, Tensorisation et perturbation de l’inégalité de Sobolev
logarithmique, Sur les inégalités de Sobolev logarithmiques,
Panoramas et Synthèses, no. 10, Société Mathématique de France, 2000.
[17]
D. Gilbarg and N. S. Trudinger, Elliptic partial differential equations
of second order, Grundlehren der mathematischen Wissenschaften, vol. 224,
Springer-Verlag, Berlin–Heidelberg–New York, 1977.
[18]
P. Hajłasz and P. Koskela, Sobolev meets Poincaré, C. R. Acad.
Sci. Paris, Série I Math. 320 (1995), no. 10, 1211–1215.
[19]
M. M. Hamza, Détermination des formes de Dirichlet sur $\mathbbm{R}^{n}$,
Ph.D. thesis, Université Paris-Sud 11, Orsay, 1975.
[20]
D. Hauer and A. Rhandi, New weighted Hardy’s inequalities with application to nonexistence of global solutions, to appear in Arch. Math. (2012), 19 pp., http://arxiv.org/abs/1207.3587.
[21]
W. Hebisch and B. Zegarliński, Coercive inequalities on metric
measure spaces, J. Funct. Anal. 258 (2010), no. 3, 814–851.
[22]
L. I. Hedberg, On certain convolution inequalities, Proc. Amer. Math.
Soc. 36 (1972), no. 2, 505–510.
[23]
J. Heinonen, T. Kilpeläinen, and O. Martio, Nonlinear potential
theory of degenerate elliptic equations, Clarendon Press, Oxford University
Press, 1993.
[24]
J. Heinonen and P. Koskela, Weighted Sobolev and Poincaré
inequalities and quasiregular mappings of polynomial type, Math. Scand.
77 (1995), no. 2, 251–271.
[25]
T. Kilpeläinen, Smooth approximation in weighted Sobolev spaces,
Comment. Math. Univ. Carolinæ 38 (1997), no. 1, 29–35.
[26]
D. Kinderlehrer and G. Stampacchia, An introduction to variational
inequalities and their applications, Pure and Applied Mathematics, vol. 88,
Academic Press Inc., New York, 1980.
[27]
A. V. Kolesnikov, Convergence of Dirichlet forms with changing speed
measures on $\mathbb{R}^{d}$, Forum Math. 17 (2005), no. 2, 225–259.
[28]
A. Kufner, Weighted Sobolev spaces, Teubner–Texte zur Mathematik,
vol. 31, Teubner, Stuttgart–Leipzig–Wiesbaden, 1980.
[29]
A. Kufner and B. Opic, How to define reasonably weighted Sobolev
spaces, Comment. Math. Univ. Carolinæ 25 (1984), no. 3,
537–554.
[30]
A. Kufner and A.-M. Sändig, Some applications of weighted
Sobolev spaces, Teubner–Texte zur Mathematik, vol. 100, Teubner,
Stuttgart–Leipzig–Wiesbaden, 1987.
[31]
M. Lavrent’ev, Sur quelques problèmes du calcul des variations, Ann.
Mat. Pura Appl. 4 (1926), 107–124.
[32]
L. Lorenzi and M. Bertoldi, Analytical methods for Markov
semigroups, Pure and Applied Mathematics (Boca Raton), vol. 283, Chapman &
Hall/CRC, Boca Raton, FL, 2007.
[33]
Z.-M. Ma and M. Röckner, Introduction to the theory of
(non-symmetric) Dirichlet forms, Universitext, Springer-Verlag,
Berlin–Heidelberg–New York, 1992.
[34]
N. Meyers and J. Serrin, $H=W$, Proc. Nat. Acad. Sci. U.S.A.
51 (1964), 1055–1056.
[35]
S. Mizohata, The theory of partial differential equations, Cambridge
University Press, London, New York, 1973.
[36]
B. Muckenhoupt, Weighted norm inequalities for the Hardy maximal
function, Trans. Amer. Math. Soc. 165 (1972), 207–226.
[37]
M. Ohtsuka, Extremal length and precise functions, GAKUTO
international series mathematical sciences and applications, vol. 19,
Gakkōtosho, 2003.
[38]
S. E. Pastukhova, Degenerate equations of monotone type: Lavrent’ev
phenomenon and attainability problems, Sb. Math. 198 (2007),
no. 10, 1465–1494, translated from Mat. Sb. 198 (2007), no. 10,
89–118.
[39]
R. T. Rockafellar, Convex analysis, Princeton University Press, 1970.
[40]
M. Röckner and T. S. Zhang, Uniqueness of generalized Schrödinger
operators and applications, J. Funct. Anal. 105 (1992), 187–231.
[41]
M. Röckner and T. S. Zhang, Uniqueness of generalized Schrödinger operators and
applications II, J. Funct. Anal. 119 (1994), 455–467.
[42]
E. M. Stein, Singular integrals and differentiability properties of
functions, Princeton mathematical series, vol. 30, Princeton University
Press, Princeton, New Jersey, 1970.
[43]
E. M. Stein, Harmonic analysis: real-variable methods, orthogonality, and
oscillatory integrals, Princeton mathematical series, vol. 43, Princeton
University Press, Princeton, New Jersey, 1993.
[44]
J. M. Tölle, Variational convergence of nonlinear partial
differential operators on varying Banach spaces, Ph.D. thesis,
Universität Bielefeld, 2010,
http://www.math.uni-bielefeld.de/~bibos/preprints/E10-09-360.pdf.
[45]
B. O. Turesson, Nonlinear potential theory and weighted Sobolev
spaces, Lecture Notes in Mathematics, vol. 1736, Springer,
Berlin–Heidelberg–New York, 2000.
[46]
V. V. Zhikov, On weighted Sobolev spaces, Sb. Math. 189
(1998), no. 8, 1139–1170, translated from Mat. Sb. 189 (1998), no. 8,
27–58.
[47]
W. P. Ziemer, Weakly differentiable functions: Sobolev spaces and
functions of bounded variation, Graduate texts in mathematics, vol. 120,
Springer-Verlag, Berlin–Heidelberg–New York, 1989. |
Towards attochemistry: Control of nuclear motion through conical intersections and electronic coherences
Caroline Arnold
[email protected]
Center for Free-Electron Laser Science, DESY, Notkestrasse 85, 22607 Hamburg, Germany
Department of Physics, University of Hamburg, Jungiusstrasse 9, 20355 Hamburg, Germany
The Hamburg Centre for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg, Germany
Oriol Vendrell
[email protected]
Center for Free-Electron Laser Science, DESY, Notkestrasse 85, 22607 Hamburg, Germany
The Hamburg Centre for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg, Germany
Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, 8000 Aarhus, Denmark
Ralph Welsch
[email protected]
Center for Free-Electron Laser Science, DESY, Notkestrasse 85, 22607 Hamburg, Germany
Robin Santra
Center for Free-Electron Laser Science, DESY, Notkestrasse 85, 22607 Hamburg, Germany
Department of Physics, University of Hamburg, Jungiusstrasse 9, 20355 Hamburg, Germany
The Hamburg Centre for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg, Germany
(December 2, 2020)
Abstract
The effect of nuclear dynamics and conical intersections on electronic coherences is investigated employing a two-state, two-mode linear vibronic coupling model. Exact quantum dynamical calculations are performed using the multi-configuration time-dependent Hartree method (MCTDH). It is found that the presence of a non-adiabatic coupling close to the Franck-Condon point can preserve electronic coherence to some extent. Additionally, the possibility of steering the nuclear wavepackets by imprinting a relative phase between the electronic states during the photoionization process is discussed. It is found that the steering of nuclear wavepackets is possible given that a coherent electronic wavepacket embodying the phase difference passes through a conical intersection. A conical intersection close to the Franck-Condon point is thus a necessary prerequisite for control, providing a clear path towards attochemistry.
Ultrashort laser pulses make it possible to resolve electronic and nuclear motion in molecules on their natural timescales Corkum and Krausz (2007); Krausz and Ivanov (2009); Cerullo et al. (2015); Calegari et al. (2016a). With the dawn of attosecond pulses, it is now possible to create coherent superpositions of excited electronic states of a photo-ionized molecule. Electronic coherences are believed to be important for a wide range of processes, e.g., electron hole oscillations Calegari et al. (2014) and efficient energy conversion in light-harvesting complexes Engel et al. (2007).
In theoretical descriptions of electronic coherence, the nuclei are often fixed as they are heavy compared to the electrons. Such calculations predict long-lived coherences and electron hole migrations driven by electron correlation Calegari et al. (2014); Kuleff and Cederbaum (2007); Golubev and Kuleff (2015); Lingerfelt et al. (2017). However, recent quantum-dynamical studies show that the motion of nuclei cannot be neglected and that nuclear motion can lead to electronic decoherence within few femtoseconds Halász et al. (2013); Hermann et al. (2014); Vacher et al. (2015a); Paulus et al. (2016); Vacher et al. (2017); Arnold et al. (2017).
The interplay of electronic and nuclear motion becomes especially relevant in the presence of strong non-adiabatic couplings, as the Born-Oppenheimer separation breaks down and the timescales of electronic and nuclear motion become comparable Stolow (2013). Non-adiabatic couplings are particularly strong at conical intersections (C.I.), which are abundant in the potential energy landscape of poly-atomic molecules Domcke et al. (2004); Kowalewski et al. (2015). First insight into the influence of C.I.s on electronic coherence was obtained recently with a quantum-dynamical treatment of paraxylene and BMA[5,5], but a systematic understanding remains elusive Vacher et al. (2017).
Non-adiabatic couplings and C.I.s are already exploited in control schemes employing femtosecond laser pulses. The underlying processes are typically well-understood and the nuclear wavepacket can be steered to desired reaction products Zewail (2000); Abe et al. (2005); von den Hoff et al. (2012a); Stolow (2013); Liekhus-Schmaltz et al. (2016). With attosecond pulses, due to their large width in the energy domain, it becomes feasible to control the electronic rather than the nuclear degrees of freedom. Through non-adiabatic couplings, the relative weight and phase between electronic states may affect the velocity as well as the direction of nuclear dynamics, as investigated in models of toluene and benzene employing approximate Ehrenfest dynamics Vacher et al. (2015b); Meisner et al. (2015). This might open the path towards attochemistry, where, by controlling the relative phase between electronic states, nuclear dynamics on a time scale of tens to hundreds of femtoseconds is influenced Salières et al. (2012); Lépine et al. (2014); Nisoli et al. (2017). Thus, attochemistry will allow for directing the system towards desired, but unlikely reaction products.
In this Letter, we present a systematic study of the influence of non-adiabatic couplings on electronic coherence and discuss possible pathways towards attochemistry by imprinting a relative phase between the electronic states forming a coherent superposition. To this end, we employ a two-state, two-mode model system and consider different positions of the C.I. relative to the Franck-Condon region Worth and Cederbaum (2004) as well as different coupling strengths and relative phases.
The linear vibronic coupling Hamiltonian Koppel et al. (1984) is employed to describe the potential energy surfaces of two electronically excited states in a local diabatic picture. Two coordinates forming a Jahn-Teller type C.I., the tuning mode $x$ and the coupling mode $y$, are considered in mass- and frequency-weighted ground-state normal modes. The corresponding excited-state Hamiltonian reads
$$H=\begin{pmatrix}T+V_{1}(x,y)&W_{12}\\
W_{12}&T+V_{2}(x,y)+\Delta E\end{pmatrix},$$
(1)
with the kinetic energy operator $T$ and the two diabatic states given as
$V_{1,2}(x,y)=\frac{\gamma}{2}\left(x^{2}+y^{2}\right)+\kappa_{1,2}^{(x)}x+\kappa_{1,2}^{(y)}y,$
where $\gamma$ refers to the vibrational frequencies of the excited state, $\kappa^{(x,y)}_{1,2}$ defines the slope at the C.I. along $x$ and $y$, and $\Delta E$ is the gap at the Franck-Condon point $(x_{\mathrm{C.I.}}=y_{\mathrm{C.I.}}=0)$. The non-adiabatic coupling is introduced by $W_{12}=\lambda y$. It is considered up to first order and its strength is varied between $\lambda=0.0\,\mathrm{a.u.}$ and $0.02\,\mathrm{a.u.}$ Throughout this letter, atomic units (a.u.) are used. The C.I. is moved to arbitrary positions $(x_{\mathrm{C.I.}},y_{\mathrm{C.I.}})$ by adjusting the model parameters. Details on the model and the numerical parameters can be found in the supplemental material (S.M.) sup .
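For concreteness, the potential part of the linear vibronic coupling Hamiltonian in Eq. (1) can be assembled and diagonalized numerically. The short Python sketch below uses placeholder parameter values (not the values used in this work, which are listed in the S.M.) and locates the C.I. along the tuning mode, where the adiabatic gap closes.

```python
import numpy as np

# Sketch of the potential part of the LVC Hamiltonian in Eq. (1).
# All parameter values are placeholders chosen for illustration only.
gamma = 0.01                      # excited-state vibrational frequency (a.u.)
kappa1 = (-0.005, 0.0)            # (kappa^(x), kappa^(y)) of state 1
kappa2 = (0.005, 0.0)             # (kappa^(x), kappa^(y)) of state 2
dE = 0.01                         # gap at the Franck-Condon point (a.u.)
lam = 0.01                        # non-adiabatic coupling strength (a.u.)

def potential_matrix(x, y):
    """2x2 diabatic potential matrix at the nuclear geometry (x, y)."""
    harm = 0.5 * gamma * (x ** 2 + y ** 2)
    V1 = harm + kappa1[0] * x + kappa1[1] * y
    V2 = harm + kappa2[0] * x + kappa2[1] * y + dE
    return np.array([[V1, lam * y], [lam * y, V2]])

def adiabatic_energies(x, y):
    """Adiabatic surfaces = eigenvalues of the diabatic matrix."""
    return np.linalg.eigvalsh(potential_matrix(x, y))

# The C.I. sits where the diabatic energies cross and the coupling vanishes:
# y_CI = 0 and kappa1^(x) * x_CI = kappa2^(x) * x_CI + dE.
x_ci = dE / (kappa1[0] - kappa2[0])
gap = np.diff(adiabatic_energies(x_ci, 0.0))[0]
print(x_ci, gap)    # the adiabatic gap closes at the intersection
```

Shifting the C.I. relative to the Franck-Condon point, as done in the paper, amounts to adjusting $\kappa_{1,2}^{(x,y)}$ and $\Delta E$ in such a sketch.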
The initial state is assumed to be an equally weighted coherent superposition of both electronic states, where the ground-state nuclear wavepacket is lifted vertically to the diabatic potential energy surfaces, thus modeling a short-time impulsive excitation from the common electronic ground state to the excited-state manifold:
$$\braket{x,y}{\Psi(t=0)}=c_{1}\chi_{1}(x,y)\ket{1}+c_{2}\chi_{2}(x,y)\mathrm{e}^{i\varphi}\ket{2},$$
(2)
where $c_{1}=c_{2}=1/\sqrt{2}$, $\chi_{1}=\chi_{2}=\chi$, and $\varphi$ is a relative phase between the electronic states. The ground-state nuclear wavepacket is given as a product of Gaussians,
$$\chi(x,y)=\frac{1}{\sqrt{\pi}}\mathrm{e}^{-(x^{2}+y^{2})/2}.$$
(3)
The wavepacket is propagated employing the Multi-Configuration Time-Dependent Hartree method (MCTDH) in its multiset implementation in the Heidelberg package Worth et al. (2015); Beck et al. (2000); Meyer et al. (1990). The numerical accuracy of the simulations is assured by adjusting the number of single-particle functions (SPF) used such that the natural weight of the highest SPF is below $10^{-4}$ Beck et al. (2000).
A basis-independent measure for the electronic coherence is given by the electronic purity $\mathrm{Tr}(\rho^{2})$ Arnold et al. (2017); Vacher et al. (2017), where $\rho$ is the reduced density matrix of the electronic subsystem expressed as
$$\rho_{\mu\nu}(t)=\int\mathrm{d}x\,\int\mathrm{d}y\,\braket{\mu}{\Psi(t)}\braket{\Psi(t)}{\nu}.$$
(4)
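As a minimal numerical illustration of this purity measure, the sketch below evaluates $\mathrm{Tr}(\rho^{2})$ for the two limiting cases of the two-state electronic subsystem; no nuclear propagation is involved, so it only makes the measure itself concrete.

```python
import numpy as np

# Limiting cases of the electronic purity Tr(rho^2) for a two-state system.

def purity(rho):
    """Electronic purity Tr(rho^2)."""
    return np.trace(rho @ rho).real

# Fully coherent superposition with c1 = c2 = 1/sqrt(2): a pure state.
c = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_pure = np.outer(c, c.conj())
print(purity(rho_pure))    # ≈ 1.0

# Incoherent mixture with equal populations rho11 = rho22 = 1/2.
rho_mixed = 0.5 * np.eye(2)
print(purity(rho_mixed))   # ≈ 0.5

# Two-state identity: Tr(rho^2) = rho11^2 + rho22^2 + 2 |rho12|^2.
r11, r22, r12 = rho_pure[0, 0], rho_pure[1, 1], rho_pure[0, 1]
assert np.isclose(purity(rho_pure), r11 ** 2 + r22 ** 2 + 2 * abs(r12) ** 2)
```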
For our two-state system, $\mathrm{Tr}(\rho^{2})=\rho_{11}^{2}+\rho_{22}^{2}+2|\rho_{12}|^{2}$. Note that $\mathrm{Tr}(\rho^{2})=1$ corresponds to a fully coherent electronic superposition, and, for $c_{1}=c_{2}$, $\mathrm{Tr}(\rho^{2})=0.5$ to an incoherent mixture. Electronic decoherence is caused by three mechanisms Fiete and Heller (2003); Vacher et al. (2017): (i) dephasing due to the width of the nuclear wavepacket, (ii) loss of overlap of nuclear wavepackets propagated on different potential energy surfaces, and (iii) transfer of nuclear density between electronic states. From an analytic expansion of the electronic density matrix elements up to second order in time, and considering the initial state given in Eq. (2), it can be shown that the diabatic populations $\rho_{11},\rho_{22}$ are constant up to second order in time, while the coherences are phase-dependent in the presence of a non-adiabatic coupling $\lambda$:
$$|\rho_{12}|^{2}=|c_{1}|^{2}|c_{2}|^{2}-t^{2}|c_{1}|^{2}|c_{2}|^{2}\braket{\chi}{(H_{1}+H_{2})^{2}}{\chi}-2t^{2}\lambda^{2}|c_{1}|^{2}|c_{2}|^{2}\braket{\chi}{y^{2}}{\chi}\sin^{2}\varphi+\mathcal{O}(t^{3}),$$
(5)
where $H_{\mu}=T+V_{\mu}+\Delta E_{\mu},\mu=1,2$. The second term in Eq. (5) is due to decoherence caused by the dephasing and loss of overlap (mechanisms (i) and (ii)) while the third term is due to the coupling of the electronic states. The latter is the only term carrying a phase dependence to second order. Hence, the influence of non-adiabatic coupling on decoherence can be controlled by the relative electronic phase. It vanishes in second order for the case of $\varphi=0$, i.e., no relative phase is imprinted on the electronic states. Details can be found in the S.M.
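The Gaussian matrix element $\braket{\chi}{y^{2}}{\chi}$ entering the phase-dependent term of Eq. (5) is easy to check numerically; the sketch below does so and then evaluates the coupling-induced coherence loss for $\varphi=0$ and $\varphi=\pi/2$, using placeholder values of $t$ and $\lambda$ (not the parameters of the paper).

```python
import numpy as np

# Check of the Gaussian matrix element <chi|y^2|chi> = 1/2 for chi of Eq. (3),
# followed by the coupling-induced coherence-loss term of Eq. (5).
# The values of t and lam are placeholders for illustration only.
y = np.linspace(-8.0, 8.0, 80001)
w = np.exp(-y ** 2) / np.sqrt(np.pi)          # marginal of |chi|^2 along y
y2_expect = np.sum(y ** 2 * w) * (y[1] - y[0])
print(y2_expect)                               # ≈ 0.5

lam, t = 0.01, 10.0
c1sq = c2sq = 0.5                              # |c1|^2 = |c2|^2 = 1/2
for phi in (0.0, np.pi / 2):
    loss = 2 * t ** 2 * lam ** 2 * c1sq * c2sq * y2_expect * np.sin(phi) ** 2
    print(phi, loss)
# The loss vanishes for phi = 0 and is maximal for phi = pi/2, as in Eq. (5).
```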
The electronic purity for different positions of the C.I. along the tuning mode, for different relative electronic phases and coupling strengths, is shown in Fig. 1. In the adiabatic case $(\lambda=0\,\mathrm{a.u.})$, dephasing and the loss of spatial overlap between the nuclear wavepackets evolving on the different potential energy surfaces lead to ultrafast electronic decoherence within a few femtoseconds, in accordance with the results obtained with adiabatic models Vacher et al. (2015a); Arnold et al. (2017). In the presence of non-adiabatic couplings, the relative electronic phase affects the electronic purity. For $\varphi=0$, if the C.I. is located within the Franck-Condon region, defined with respect to the initial extension of the ground-state nuclear wavepacket, the coupling region is reached before decoherence occurs. In these cases, the non-adiabatic coupling preserves coherence to some extent, see panels (i)–(iii). However, if the intersection is located far from the Franck-Condon point as in panel (iv), decoherence takes place before the intersection is reached. By imprinting a relative electronic phase of $\varphi=\frac{\pi}{2}$, electronic coherences are destroyed rapidly even for a strong non-adiabatic coupling and a C.I. within the Franck-Condon region. This is in accordance with the analytic result at short times given in Eq. (5). In all cases, we do not observe a substantial increase in coherence upon the passage of the wavepacket through the C.I. Liekhus-Schmaltz et al. (2016). The full electronic purity can be decomposed into different contributions related to the three decoherence mechanisms. For all cases considered here, dephasing (mechanism (i)) is the main cause of decoherence. Contributions from mechanism (iii) are small, but become stronger the further the Franck-Condon point is located from the C.I. The dynamical evolution of the wavepackets causes the small revivals seen in the full electronic purity. Details can be found in the S.M.
We also performed similar calculations for shifts of the C.I. along the coupling mode $y$. In this case, for strong non-adiabatic coupling the initial superposition reduces to the trivial case of a pure state involving only one adiabatic surface. The corresponding results can be found in the S.M.
In recent work, the electronic decoherence mechanisms in two specific molecules were studied in a non-adiabatic, quantum-dynamical framework Vacher et al. (2017). It was seen that the decoherence time in paraxylene with a C.I. near the Franck-Condon point amounts to $3\,\mathrm{fs}$, while in BMA[5,5] with a C.I. far from the Franck-Condon point, it amounts to $6\,\mathrm{fs}$. As was pointed out in Ref. Vacher et al. (2017), the decoherence time is due to a complex interplay of several mechanisms influenced by different molecular parameters. With our model system we can disentangle the contributions of the position of the C.I. or the non-adiabatic coupling, while keeping all other PES parameters the same, which is not possible if specific molecules are used. We find that, for $\varphi=0$, the further the C.I. is from the Franck-Condon point, the faster the decoherence, and the stronger the non-adiabatic coupling, the more coherence can be preserved.
The $\varphi$-dependence of time-dependent expectation values of nuclear coordinates is a potential path to attochemistry: a nuclear wavepacket could be steered in the desired direction by imprinting a relative phase between electronic states. For the system described by the Hamiltonian in Eq. (1) and the initial wavefunction given in Eq. (2), the expectation value of the coupling coordinate $y$ is expanded up to second order in time as
$$\langle y\rangle(t)=\langle y\rangle_{1}(t)+\langle y\rangle_{2}(t)-t^{2}\lambda c_{1}c_{2}\cos\varphi+\mathcal{O}(t^{3}),$$
(6)
where
$\langle y\rangle_{\mu}=|c_{\mu}|^{2}\braket{\chi_{\mu}}{y+t[H_{\mu},y]+t^{2}[[H_{\mu},y],H_{\mu}]}{\chi_{\mu}}$
is the motion of the nuclear wavepackets on the uncoupled diabatic state $\ket{\mu}$ and $H_{\mu}=T+V_{\mu}+\Delta E_{\mu}$. The uncoupled motion of the nuclear wavepackets on the diabatic states is modified by the non-adiabatic coupling at second order in time. This modification also carries a phase dependence which allows for the steering of the nuclear dynamics by controlling the electronic phase. Note that the relative electronic phase between electronic states that are not coupled is irrelevant for the motion of the nuclei. Within the model considered here, $\langle x\rangle(t)$ is independent of the relative phase. Details of the derivation and the expectation value of an arbitrary chemical observable can be found in the S.M.
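A short numerical illustration of the steering term in Eq. (6), again with placeholder values of $t$ and $\lambda$ not taken from the paper, makes the role of the relative phase explicit.

```python
import numpy as np

# Steering term of Eq. (6): delta<y>(t) = -t^2 * lambda * c1 * c2 * cos(phi).
# The values of t and lam are placeholders for illustration only.
lam = 0.01
c1 = c2 = 1.0 / np.sqrt(2.0)          # equal weights as in Eq. (2)
t = 20.0

for phi in (0.0, np.pi / 2, np.pi):
    shift = -t ** 2 * lam * c1 * c2 * np.cos(phi)
    print(phi, shift)
# phi = 0 and phi = pi displace <y> in opposite directions, while
# phi = pi/2 gives no second-order shift: the handle for steering.
```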
In Fig. 2, panels (i)–(iii), we present the time-evolution of the one-dimensional density along the coupling coordinate $y$ for non-adiabatic coupling $\lambda=0.01\,\mathrm{a.u.}$ and the different positions of the C.I. employed before. If the electronic coherence persists once the nuclear wavepacket reaches the region of non-adiabatic coupling, then it can be steered along $y$ by varying $\varphi$, see panels (i)–(ii). Once electronic coherence is lost, the wavepacket cannot be controlled, see panel (iii) for a C.I. far from the Franck-Condon region. At the same C.I. position and with strong non-adiabatic coupling $(\lambda=0.02\,\mathrm{a.u.})$, control can be achieved even in this setting, see panel (iv). This implies that nuclear controllability requires the possibility of interference, at the C.I., of the wavepackets initially created on different diabatic surfaces, carrying a phase difference, as indicated in Fig. 3. Electronic decoherence suppresses this. This view is further validated by considering the evolution of the part of the wavepacket that is projected on the upper adiabatic potential energy surface. In this case, control is still possible if the C.I. is close to the Franck-Condon point or for strong non-adiabatic coupling (see S.M.). If the C.I. is close to the Franck-Condon point, and thus the energy separation between the electronic states is small and the electron dynamics is on a femtosecond rather than an attosecond time scale Stolow (2013), coherent superpositions might be created by femtosecond pulses. For cases of strong non-adiabatic coupling and excitations far away from the C.I., the separation of the electronic states becomes larger and the use of broadband attosecond pulses is required for the excitations, leading to "true" attochemistry.
Creating a coherent initial state with an imprinted phase by ultrashort pulses is an experimental challenge. Beyond the limit of sudden ionization employed in this work, nuclear dynamics and entanglement with the photoelectron may decrease the degree of initial electronic coherence reached in the remaining cation Calegari et al. (2016b); Nisoli et al. (2017). Coherent two-color pulses can in principle be used to excite two electronic states with varying relative phases Krausz and Ivanov (2009). To find the optimal pulses, methods from coherent control of quantum phenomena or quantum optical control could then be adapted Shapiro and Brumer (2003); Rabitz et al. (2000); von den Hoff et al. (2012b); Kling et al. (2013). Light-induced C.I., created in molecules with the help of external laser fields, could be used to control the position of the intersection and the strength of the non-adiabatic coupling Moiseyev et al. (2008); Natan et al. (2016).
To conclude, we discussed the influence of non-adiabatic dynamics and relative electronic phases on electronic coherences created by ultrashort pulses. It is found that non-adiabatic coupling stabilizes electronic coherences if the C.I. is close to the Franck-Condon point. The further the C.I. is from the Franck-Condon point, the stronger the decoherence. Changing the relative electronic phase may enhance decoherence. If the wavepacket maintains electronic coherence in the region of the C.I., it can be steered in a desired direction by a relative phase imprinted initially between the electronic states. This steering of nuclear wavepackets opens a clear, but limited, path towards attochemistry. While attochemistry will not create new reaction pathways, it will provide steering possibilities along less likely paths. Novel schemes can then be developed to follow light-induced chemical reactions on an attosecond timescale and to control chemical observables by manipulating the electronic degrees of freedom.
Acknowledgements.
This work has been supported by the excellence cluster The Hamburg Centre for Ultrafast Imaging - Structure, Dynamics and Control of Matter at the Atomic Scale of the Deutsche Forschungsgemeinschaft.
References
Corkum and Krausz (2007)
P. B. Corkum and F. Krausz, Nature Physics 3, 381 (2007).
Krausz and Ivanov (2009)
F. Krausz and M. Ivanov, Reviews of Modern Physics 81, 163 (2009).
Cerullo et al. (2015)
G. Cerullo, S. D. Silvestri, and M. Nisoli, EPL (Europhysics Letters) 112, 24001 (2015).
Calegari et al. (2016a)
F. Calegari, G. Sansone,
S. Stagira, C. Vozzi, and M. Nisoli, Journal of Physics B: Atomic, Molecular and Optical Physics 49, 062001 (2016a).
Calegari et al. (2014)
F. Calegari, D. Ayuso,
A. Trabattoni, L. Belshaw, S. D. Camillis, S. Anumula, F. Frassetto, L. Poletto, A. Palacios, P. Decleva, J. B. Greenwood, F. Martín, and M. Nisoli, Science 346, 336 (2014).
Engel et al. (2007)
G. S. Engel, T. R. Calhoun,
E. L. Read, T.-K. Ahn, T. Mančal, Y.-C. Cheng, R. E. Blankenship, and G. R. Fleming, Nature 446, 782 (2007).
Kuleff and Cederbaum (2007)
A. I. Kuleff and L. S. Cederbaum, Chemical Physics 338, 320 (2007), special issue on Molecular Wave Packet Dynamics (in honour of Jörn Manz).
Golubev and Kuleff (2015)
N. V. Golubev and A. I. Kuleff, Physical Review A 91, 051401 (2015).
Lingerfelt et al. (2017)
D. B. Lingerfelt, P. J. Lestrange, J. J. Radler, S. E. Brown-Xu, P. Kim,
F. N. Castellano,
L. X. Chen, and X. Li, The
Journal of Physical Chemistry A 121, 1932 (2017).
Halász et al. (2013)
G. J. Halász, A. Perveaux, B. Lasorne,
M. A. Robb, F. Gatti, and Á. Vibók, Physical Review A 88, 023425 (2013).
Hermann et al. (2014)
G. Hermann, B. Paulus,
J. F. Pérez-Torres, and V. Pohl, Physical Review A 89, 052504 (2014).
Vacher et al. (2015a)
M. Vacher, L. Steinberg,
A. J. Jenkins, M. J. Bearpark, and M. A. Robb, Physical Review A 92, 040502 (2015a).
Paulus et al. (2016)
B. Paulus, J. F. Pérez-Torres, and C. Stemmle, Physical Review A 94, 053423 (2016).
Vacher et al. (2017)
M. Vacher, M. J. Bearpark, M. A. Robb,
and J. P. Malhado, Physical Review Letters 118, 083001 (2017).
Arnold et al. (2017)
C. Arnold, O. Vendrell, and R. Santra, Physical Review A 95, 033425 (2017).
Stolow (2013)
A. Stolow, Faraday Discussions 163, 9 (2013).
Domcke et al. (2004)
W. Domcke, D. R. Yarkony, and H. Köppel, eds., Conical Intersections: Electronic
Structure, Dynamics and Spectroscopy, Advanced Series in Physical Chemistry, Vol. 15 (World Scientific, 2004).
Kowalewski et al. (2015)
M. Kowalewski, K. Bennett,
K. E. Dorfman, and S. Mukamel, Physical Review Letters 115, 193003 (2015).
Zewail (2000)
A. H. Zewail, The Journal of Physical Chemistry A 104, 5660 (2000).
Abe et al. (2005)
M. Abe, Y. Ohtsuki,
Y. Fujimura, and W. Domcke, The
Journal of Chemical Physics 123, 144508 (2005).
von den Hoff et al. (2012a)
P. von den Hoff, R. Siemering, M. Kowalewski, and R. de Vivie-Riedle, IEEE Journal of Selected Topics in Quantum
Electronics 18, 119
(2012a).
Liekhus-Schmaltz et al. (2016)
C. Liekhus-Schmaltz, G. A. McCracken, A. Kaldun,
J. P. Cryan, and P. H. Bucksbaum, The
Journal of Chemical Physics 145, 144304 (2016).
Vacher et al. (2015b)
M. Vacher, J. Meisner,
D. Mendive-Tapia, M. J. Bearpark, and M. A. Robb, The Journal of
Physical Chemistry A 119, 5165 (2015b).
Meisner et al. (2015)
J. Meisner, M. Vacher,
M. J. Bearpark, and M. A. Robb, Journal of Chemical Theory and Computation 11, 3115 (2015).
Salières et al. (2012)
P. Salières, A. Maquet, S. Haessler,
J. Caillat, and R. Taïeb, Reports on Progress in Physics 75, 062401 (2012).
Lépine et al. (2014)
F. Lépine, M. Y. Ivanov, and M. J. J. Vrakking, Nature Photonics 8, 195 (2014).
Nisoli et al. (2017)
M. Nisoli, P. Decleva,
F. Calegari, A. Palacios, and F. Martín, Chemical Reviews 117, 10760 (2017).
Worth and Cederbaum (2004)
G. A. Worth and L. S. Cederbaum, Annual Review of Physical
Chemistry 55, 127
(2004).
Koppel et al. (1984)
H. Köppel, W. Domcke, and L. S. Cederbaum, Advances in
Chemical Physics 57, 59
(1984).
Cattarius et al. (2001)
C. Cattarius, G. A. Worth, H.-D. Meyer, and L. S. Cederbaum, The Journal of Chemical Physics 115, 2088 (2001).
Mahapatra et al. (2001)
S. Mahapatra, G. A. Worth, H.-D. Meyer,
L. S. Cederbaum, and H. Köppel, The
Journal of Physical Chemistry A 105, 5567 (2001).
Döscher and Köppel (1997)
M. Döscher and H. Köppel, Chemical Physics 225, 93 (1997).
Worth et al. (2015)
G. A. Worth, M. H. Beck,
A. Jäckle, O. Vendrell, and H.-D. Meyer, “The MCTDH Package,” (2015).
Beck et al. (2000)
M. H. Beck, A. Jäckle,
G. A. Worth, and H. D. Meyer, Physics Reports 324, 1 (2000).
Meyer et al. (1990)
H. D. Meyer, U. Manthe, and L. S. Cederbaum, Chemical Physics Letters 165, 73 (1990).
Fiete and Heller (2003)
G. A. Fiete and E. J. Heller, Physical Review A 68, 022112 (2003).
(37)
See Supplemental Material at … for the
linear vibronic coupling model parameters, analytic expansion of expectation values and electronic density matrix
elements, electronic coherence for shifts of the C.I. along $y$,
attochemistry with very strong coupling strengths, and time evolution on
adiabatic surfaces.
Calegari et al. (2016b)
F. Calegari, A. Trabattoni, A. Palacios, D. Ayuso,
M. C. Castrovilli,
J. B. Greenwood, P. Decleva, F. Martín, and M. Nisoli, Journal of Physics B: Atomic, Molecular and Optical Physics 49, 142001 (2016b).
Shapiro and Brumer (2003)
M. Shapiro and P. Brumer, Reports on Progress in Physics 66, 859 (2003).
Rabitz et al. (2000)
H. Rabitz, R. de Vivie-Riedle, M. Motzkus, and K. Kompa, Science 288, 824 (2000).
von den Hoff et al. (2012b)
P. von den Hoff, S. Thallmair, M. Kowalewski, R. Siemering, and R. de Vivie-Riedle, Physical Chemistry Chemical Physics 14, 14460 (2012b).
Kling et al. (2013)
M. F. Kling, P. von den Hoff,
I. Znakovskaya, and R. de Vivie-Riedle, Physical Chemistry Chemical Physics 15, 9448 (2013).
Moiseyev et al. (2008)
N. Moiseyev, M. Šindelka, and L. S. Cederbaum, Journal of Physics B: Atomic, Molecular and
Optical Physics 41, 221001 (2008).
Natan et al. (2016)
A. Natan, M. R. Ware,
V. S. Prabhudesai,
U. Lev, B. D. Bruner, O. Heber, and P. H. Bucksbaum, Physical Review Letters 116, 143004 (2016). |
Two-Loop Corrections to Top-Antitop Production at Hadron Colliders
Laboratoire de Physique Subatomique et de Cosmologie,
Université Joseph Fourier/CNRS-IN2P3/INPG,
F-38026 Grenoble, France
E-mail:
A. Ferroglia
New York City College of Technology
300 Jay Street, NY 11201 Brooklyn, USA
Email:
[email protected]
T. Gehrmann
Institut für Theoretische Physik,
Universität Zürich,
CH-8057 Zürich, Switzerland
Email:
[email protected]
A. von Manteuffel
Institut für Theoretische Physik,
Universität Zürich,
CH-8057 Zürich, Switzerland
Email:
[email protected]
C. Studerus
Fakultät für Physik
Universität Bielefeld,
D-33501 Bielefeld, Germany
Email:
[email protected]
Abstract:
The status of the theoretical predictions for top-antitop
production in hadronic collisions is briefly reviewed, with particular
attention to the analytic calculation of the two-loop
QCD corrections to the parton-level matrix elements.
Since its discovery in 1995, the properties of the top quark have been extensively studied
at the Fermilab Tevatron. In the last 15 years, many observables
concerning top-quark physics have been measured with remarkable accuracy. Among
others, the $t\bar{t}$ total cross section, $\sigma_{t\bar{t}}$, was
measured with an accuracy of $\Delta\sigma_{t\bar{t}}/\sigma_{t\bar{t}}\sim 9$%.
Data from the LHC are expected to significantly improve the measurement
of several observables related to the top quark. Already within a couple of
years of data taking in the low-luminosity, low-energy phase (${\mathcal{L}}\sim 100\,\mbox{pb}^{-1}/\mbox{year}$ at $7\,\mbox{TeV}$ of center of mass
energy), tens of thousands of $t\bar{t}$ events before selection will be
available. Consequently, already in this first phase, the accuracy of the
cross-section measurement is expected to match that reached to date at the
Tevatron. In the high-luminosity phase (
${\mathcal{L}}\sim 100\,\mbox{fb}^{-1}/\mbox{year}$ at $14\,\mbox{TeV}$ of
center of mass energy) it will be possible to reach an accuracy $\Delta\sigma_{t\bar{t}}/\sigma_{t\bar{t}}\sim 5$% [1].
This accuracy in the experimental measurements motivates theorists to refine
the existing predictions, both for the total top-quark pair production cross
section and for the related differential distributions. In the following, we
briefly outline the research program aiming at the full calculation of the
next-to-next-to-leading-order (NNLO) corrections to the top-pair production cross
section.
The full NLO QCD corrections to the total cross section were calculated in
[2] in the case of “stable” on-shell top-quarks. In
[3] several differential distributions were calculated at the
same accuracy level. In [4] the NLO corrections to
top-pair production were evaluated, taking into account the top-quark decay in
the narrow-width approximation. The resummation of soft-gluon-enhanced terms near the
$t\bar{t}$ production threshold is implemented at the leading [5],
next-to-leading [6], and next-to-next-to-leading [7]
logarithmic accuracy. Approximate NNLO formulas for the total cross section were
recently obtained by several groups [8].
The calculation of the corrections to the partonic cross section beyond leading
order can be split into the calculation of the real corrections, in which
the final state includes extra partons in addition to the top-quark pair, and
the calculation of virtual (loop) corrections to the partonic processes
already present at the tree level.
A complete calculation of the NNLO corrections requires the knowledge of the
two-loop matrix elements for the processes $q\bar{q}\to t\bar{t}$
(quark-annihilation channel) and $gg\to t\bar{t}$ (gluon-fusion channel), as
well as the $2\to 3$ matrix elements at the one-loop level and the $2\to 4$
matrix elements at the tree level [9]. Moreover, in order to be able to
deal with IR singularities, an NNLO subtraction method has to be implemented
[10].
From the technical point of view, the calculation of the two-loop (virtual)
corrections is particularly challenging. The squared matrix element for the
processes $q(p_{1})+\overline{q}(p_{2})\to t(p_{3})+\overline{t}(p_{4})$ and
$g(p_{1})+g(p_{2})\to t(p_{3})+\overline{t}(p_{4})$, summed over spin and color,
can be expanded in powers of the strong coupling constant $\alpha_{S}$ as follows:
$$|\mathcal{M}|^{2}(s,t,m,\varepsilon)=16\pi^{2}\alpha_{S}^{2}\left[{\mathcal{A}}_{0}+\left(\frac{\alpha_{S}}{\pi}\right){\mathcal{A}}_{1}+\left(\frac{\alpha_{S}}{\pi}\right)^{2}{\mathcal{A}}_{2}+{\mathcal{O}}\left(\alpha_{S}^{3}\right)\right]\,,$$
(1)
where $p_{i}^{2}=0$ for $i=1,2$ and $p_{j}^{2}=-m_{t}^{2}$ for $j=3,4$. The Mandelstam
variables are defined in the usual way:
$s=-\left(p_{1}+p_{2}\right)^{2}$, $t=-\left(p_{1}-p_{3}\right)^{2}$, $u=-\left(p_{1}-p_{4}\right)^{2}$. Conservation of momentum implies that $s+t+u=2m_{t}^{2}$.
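The kinematic identity $s+t+u=2m_{t}^{2}$ is easy to verify numerically in the all-incoming sign conventions used above (mostly-plus metric, so that $p^{2}=-m^{2}$ on-shell). A minimal sketch at an arbitrary center-of-mass phase-space point — the numerical values of the mass, beam energy, and scattering angle are illustrative choices, not taken from the text:

```python
import math

def msq(p):
    """Minkowski square with mostly-plus metric (-,+,+,+), so p^2 = -m^2 on-shell."""
    E, px, py, pz = p
    return -E*E + px*px + py*py + pz*pz

def add(p, q, sign=1):
    return tuple(a + sign*b for a, b in zip(p, q))

# Arbitrary illustrative phase-space point (GeV):
m  = 173.0            # top mass
E  = 250.0            # beam energy, sqrt(s)/2
th = 0.7              # scattering angle
pm = math.sqrt(E*E - m*m)

p1 = (E, 0.0, 0.0,  E)                                # massless incoming parton
p2 = (E, 0.0, 0.0, -E)
p3 = (E,  pm*math.sin(th), 0.0,  pm*math.cos(th))     # top
p4 = (E, -pm*math.sin(th), 0.0, -pm*math.cos(th))     # antitop

s = -msq(add(p1, p2))
t = -msq(add(p1, p3, -1))
u = -msq(add(p1, p4, -1))

assert abs(msq(p3) + m*m) < 1e-6       # p3^2 = -m_t^2
assert abs(s + t + u - 2*m*m) < 1e-6   # s + t + u = 2 m_t^2
```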
The tree-level term ${\mathcal{A}}_{0}$ on the r.h.s. of Eq. (1) is
well known in both production channels. The ${\mathcal{O}}(\alpha_{S})$ term
${\mathcal{A}}_{1}$ arises from the interference of one-loop diagrams with the
tree-level amplitude.
The ${\mathcal{O}}(\alpha_{S}^{2})$ term ${\mathcal{A}}_{2}$ consists of two parts: the
interference of two-loop diagrams with the Born amplitude and the interference of
one-loop diagrams among themselves, ${\mathcal{A}}_{2}={\mathcal{A}}_{2}^{(2\times 0)}+{\mathcal{A}}_{2}^{(1\times 1)}$.
The latter term, ${\mathcal{A}}_{2}^{(1\times 1)}$, was calculated for both channels
in [11].
The first term, ${\mathcal{A}}_{2}^{(2\times 0)}$, originating from the two-loop
diagrams, can be decomposed according to the color and flavor structures as
follows:
$$\displaystyle{\mathcal{A}}_{2}^{(2\times 0)\,q\bar{q}}=(N_{c}^{2}-1)\left[N_{c}^{2}A^{q\bar{q}}+B^{q\bar{q}}+\frac{C^{q\bar{q}}}{N_{c}^{2}}+N_{c}N_{l}D^{q\bar{q}}_{l}+\frac{N_{l}}{N_{c}}E^{q\bar{q}}_{l}+N_{c}N_{h}D^{q\bar{q}}_{h}+\frac{N_{h}}{N_{c}}E^{q\bar{q}}_{h}+N_{l}^{2}F^{q\bar{q}}_{l}+N_{l}N_{h}F^{q\bar{q}}_{lh}+N_{h}^{2}F^{q\bar{q}}_{h}\right],$$
$$\displaystyle{\mathcal{A}}_{2}^{(2\times 0)\,gg}=(N_{c}^{2}-1)\left[N_{c}^{3}A^{gg}+N_{c}B^{gg}+\frac{1}{N_{c}}C^{gg}+\frac{1}{N_{c}^{3}}D^{gg}+N_{c}^{2}N_{l}E^{gg}_{l}+N_{c}^{2}N_{h}E^{gg}_{h}+N_{l}F^{gg}_{l}+N_{h}F^{gg}_{h}+\frac{N_{l}}{N_{c}^{2}}G^{gg}_{l}+\frac{N_{h}}{N_{c}^{2}}G^{gg}_{h}+N_{c}\left(N_{l}^{2}H^{gg}_{l}+N_{h}^{2}H^{gg}_{h}+N_{l}N_{h}H^{gg}_{lh}\right)+\frac{N_{l}^{2}}{N_{c}}I^{gg}_{l}+\frac{N_{h}^{2}}{N_{c}}I^{gg}_{h}+\frac{N_{l}N_{h}}{N_{c}}I^{gg}_{lh}\right]\,,$$
(2)
where $N_{h}$ and $N_{l}$ are the number of heavy-quark flavors (in our case, only
the top quark is considered heavy) and light-quark flavors, respectively. The
coefficients $A,B,\ldots,I_{lh}$ in both channels are functions of $s$, $t$,
and $m_{t}$, as well as of the dimensional regulator $\varepsilon$.
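Once the color coefficients are known, assembling the decomposition is straightforward bookkeeping. A sketch for the quark-annihilation channel, with $N_{c}=3$, $N_{l}=5$, $N_{h}=1$; the coefficient values in `coeffs` are pure placeholders for illustration (the true coefficients are functions of $s$, $t$, $m_{t}$, and $\varepsilon$ and are not reproduced here):

```python
def A2_qqbar(c, Nc=3, Nl=5, Nh=1):
    """Assemble the q-qbar color decomposition of Eq. (2) from its ten
    color coefficients, passed in the dict c."""
    return (Nc**2 - 1) * (
        Nc**2 * c["A"] + c["B"] + c["C"] / Nc**2
        + Nc*Nl * c["Dl"] + Nl/Nc * c["El"]
        + Nc*Nh * c["Dh"] + Nh/Nc * c["Eh"]
        + Nl**2 * c["Fl"] + Nl*Nh * c["Flh"] + Nh**2 * c["Fh"]
    )

# PLACEHOLDER coefficient values (all set to 1 purely to exercise the formula):
coeffs = {k: 1.0 for k in ("A", "B", "C", "Dl", "El", "Dh", "Eh", "Fl", "Flh", "Fh")}
val = A2_qqbar(coeffs)
```

With all coefficients set to one, the color and flavor factors alone give $8\cdot 550/9$, which serves as a cheap cross-check of the bookkeeping.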
These quantities were calculated in [12] in the approximation
$s,|t|,|u|\gg m_{t}^{2}$. However, the results in [12] are not
sufficient to obtain accurate NNLO predictions, since a large fraction of the
events is characterized by values of the partonic center-of-mass energy which
do not satisfy the ultra-relativistic limit. The complete top-mass dependence
of the color coefficients $A,B,\ldots$ is required. A numerical calculation
of ${\mathcal{A}}_{2}^{(2\times 0)\,q\bar{q}}$, exact in $s$, $t$, and $m_{t}$,
was presented in [13]. Analytic expressions for all of the IR
poles in ${\mathcal{A}}_{2}^{(2\times 0)\,q\bar{q}}$ and ${\mathcal{A}}_{2}^{(2\times 0)\,gg}$ are also available [15]; they
were calculated by employing the expression of the IR divergences in a generic
two-loop QCD amplitude with both massive and massless particles, derived in
[14].
The analytic calculation of the coefficients $A^{q\bar{q}}$, $D^{q\bar{q}}_{l},...,F^{q\bar{q}}_{h}$ which appear in the quark annihilation
channel, as well as of the coefficient $A^{gg}$ for the gluon fusion
channel, was carried out in [16]. It must be observed that
the coefficients with an $l$ or $h$ subscript receive contributions only from
diagrams involving a closed light or heavy fermion loop, respectively. The
leading color coefficients in the two production channels, $A^{q\bar{q}}$ and
$A^{gg}$, involve planar diagrams only. The results reported in
[16] were obtained by employing the Laporta Algorithm
[17], implemented in the C++ code REDUZE 2
[18, 19], for the reduction to the Master Integrals
[20, 16]. Subsequently, the master integrals were evaluated
by means of the differential equation method [21]. The analytic
expression of the master integrals can be written in terms of one- and
two-dimensional harmonic polylogarithms [22]. The analytic results
were evaluated numerically by employing codes which make use of a GiNaC
package for the evaluation of generalized polylogarithms
[23]. In [16] the analytic results were
also expanded in the $s\gg m_{t}^{2}$ limit, in order to reproduce the results
already obtained in [12]. Starting from the exact results of
[16], it was also possible to obtain new analytic formulas
which are valid in the production threshold limit $s\to 4m_{t}^{2}$.
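The simplest harmonic polylogarithms mentioned above reduce to classical polylogarithms: in the standard (Remiddi–Vermaseren) notation, $H(0,1;x)=\mathrm{Li}_{2}(x)$. A dependency-free sketch evaluating this lowest non-trivial case by its defining series, cross-checked against the known value $\mathrm{Li}_{2}(1/2)=\pi^{2}/12-\ln^{2}2/2$ (this is only a toy illustration; production codes such as the GiNaC-based ones cited in the text handle general weights and arguments):

```python
import math

def li2(x, terms=200):
    """Dilogarithm Li_2(x) = sum_{k>=1} x^k / k^2 via truncated series
    (adequate for |x| <= 1/2). In HPL notation this is H(0,1; x)."""
    return sum(x**k / k**2 for k in range(1, terms + 1))

exact = math.pi**2 / 12 - math.log(2)**2 / 2
assert abs(li2(0.5) - exact) < 1e-12
```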
A complete numerical result for the two-loop corrections in the gluon fusion
channel is still missing, and to date only the coefficient $A^{gg}$ was
evaluated. Among the remaining color coefficients in Eq. (2),
$E^{gg}_{l}$, $F^{gg}_{l}$, $G^{gg}_{l}$, $H^{gg}_{l}$, $H^{gg}_{h}$, $H^{gg}_{lh}$,
$I^{gg}_{l}$, $I^{gg}_{h}$, $I^{gg}_{lh}$ can be calculated using the same
technique already employed in [16]; their evaluation is in
progress [24]. The remaining color coefficients both in the
quark-antiquark channel ($B^{q\bar{q}}$ and $C^{q\bar{q}}$) and in the
gluon fusion channel ($B^{gg}$, $C^{gg}$, $D^{gg}$, $E_{h}^{gg}$, $F_{h}^{gg}$, and
$G_{h}^{gg}$) contain either crossed box topologies, or complicated massive
sub-topologies. In the first case, the calculation of the color coefficients in
a closed functional form using the differential equation method is very
difficult, because of the large number of master integrals that occur for some
specific topologies. In the second case, the problems arise from the fact that
already the sunrise diagram with three virtual propagators of
mass $m$ and an external momentum $p$ such that $p^{2}\neq m^{2}$ is known to be
expressible analytically only in terms of elliptic integrals
[25]. This three-denominator diagram appears as a sub-topology
in the color coefficients with subscript $h$. A viable solution for both issues
is the semi-numerical
approach adopted, for instance, for the two-loop equal-mass sunrise diagram
[26].
To conclude, the calculation of the two-loop corrections to the top-quark pair
production is an essential step needed for the full evaluation of the NNLO
corrections to the top-quark production cross section and differential
distributions. In the last few years several results were obtained for the
quark annihilation channel. The calculation of the two-loop corrections in the
gluon fusion channel is technically more complicated because of the larger
number of diagrams involved and because the functional basis of the harmonic
polylogarithms is known to be insufficient to obtain a full analytic result.
However, the calculation of a large number of the color coefficients can be
carried out with available methods and is in progress. The calculation of
the color coefficients for which standard analytic techniques are insufficient
can in principle be carried out with semi-analytic methods already applied to
other related problems.
Acknowledgments.
The work of R. B. is supported by the Theory-LHC-France initiative of
CNRS/IN2P3, T. G. and A. v.M. are supported by the Schweizerischer Nationalfonds
(grants 200020-126691 and 200020-124773).
The work of C. S. was supported by the Deutsche Forschungsgemeinschaft
(DFG SCHR 993/2-1).
References
[1]
W. Bernreuther,
J. Phys. G 35 (2008) 083001;
arXiv:1008.3819;
R. Frederix,
arXiv:1009.6199.
[2]
P. Nason et al.
Nucl. Phys. B 303 (1988) 607;
W. Beenakker et al.
Phys. Rev. D 40 (1989) 54;
Nucl. Phys. B 351 (1991) 507;
M. Czakon and A. Mitov,
Nucl. Phys. B 824 (2010) 111.
[3]
P. Nason et al.
Nucl. Phys. B 327 (1989) 49
[Erratum-ibid. B 335 (1990) 260];
M. L. Mangano et al.
Nucl. Phys. B 373 (1992) 295;
S. Frixione et al.
Phys. Lett. B351 (1995) 555.
[4]
W. Bernreuther et al.
Nucl. Phys. B 690 (2004) 81;
K. Melnikov and M. Schulze,
JHEP 0908 (2009) 049;
W. Bernreuther and Z. G. Si,
Nucl. Phys. B 837 (2010) 90.
[5]
E. Laenen et al.
Nucl. Phys. B 369 (1992) 543;
Phys. Lett. B 321 (1994) 254;
E. L. Berger and H. Contopanagos,
Phys. Lett. B 361 (1995) 115;
Phys. Rev. D 54 (1996) 3085;
Phys. Rev. D 57 (1998) 253;
S. Catani et al.
Phys. Lett. B 378 (1996) 329;
Nucl. Phys. B 478 (1996) 273.
[6]
N. Kidonakis and G. Sterman,
Phys. Lett. B 387 (1996) 867;
Nucl. Phys. B 505 (1997) 321;
R. Bonciani et al.
Nucl. Phys. B 529 (1998) 424
[Erratum-ibid. B 803 (2008) 234];
Phys. Lett. B 575 (2003) 268.
[7]
M. Beneke et al.
Nucl. Phys. B 828 (2010) 69;
M. Czakon et al.
Phys. Rev. D 80 (2009) 074017;
V. Ahrens et al.
JHEP 1009 (2010) 097;
arXiv:1006.4682;
N. Kidonakis,
arXiv:1009.4935.
[8]
U. Langenfeld et al.
Phys. Rev. D 80 (2009) 054009;
M. Cacciari et al.
JHEP 0809 (2008) 127;
N. Kidonakis and R. Vogt,
Phys. Rev. D 78 (2008) 074005.
[9]
S. Dittmaier et al.
Phys. Rev. Lett. 98 (2007) 262002;
Eur. Phys. J. C 59 (2009) 625;
G. Bevilacqua et al.
Phys. Rev. Lett. 104 (2010) 162002;
K. Melnikov, M. Schulze,
Nucl. Phys. B840 (2010) 129.
[10]
D.A. Kosower,
Phys. Rev. D 67 (2003) 116003;
A. Daleo, T. Gehrmann and D. Maître,
JHEP 0704 (2007) 016;
A. Gehrmann-De Ridder and M. Ritzmann,
JHEP 0907 (2009) 041;
A. Gehrmann-De Ridder et al.
JHEP 0509 (2005) 056;
E. W. N. Glover and J. Pires,
JHEP 1006 (2010) 096;
R. Boughezal et al.
PoS RADCOR2009 (2010) 052;
S. Catani and M. Grazzini,
Phys. Rev. Lett. 98 (2007) 222002;
M. Czakon,
Phys. Lett. B693 (2010) 259;
C. Anastasiou, F. Herzog and A. Lazopoulos,
arXiv:1011.4867 [hep-ph].
[11]
J. G. Korner et al.
Phys. Rev. D 77 (2008) 094011;
C. Anastasiou and S. M. Aybat,
Phys. Rev. D 78 (2008) 114006;
B. Kniehl et al.
Phys. Rev. D 78 (2008) 094013.
[12]
M. Czakon et al.
Phys. Lett. B 651 (2007) 147;
Nucl. Phys. B 798 (2008) 210.
[13]
M. Czakon,
Phys. Lett. B 664 (2008) 307.
[14]
T. Becher and M. Neubert,
Phys. Rev. Lett. 102 (2009) 162001;
JHEP 0906 (2009) 081;
Phys. Rev. D 79 (2009) 125004;
A. Mitov et al.
Phys. Rev. D 79 (2009) 094015;
A. Ferroglia et al.
Phys. Rev. Lett. 103 (2009) 201601;
E. Gardi and L. Magnea,
Nuovo Cim. C 32N5-6 (2009) 137.
[15]
A. Ferroglia et al.
JHEP 0911 (2009) 062.
[16]
R. Bonciani et al.
JHEP 0807 (2008) 129;
R. Bonciani et al.
JHEP 0908 (2009) 067;
R. Bonciani et al.
arXiv:1011.6661 [hep-ph].
[17]
S. Laporta,
Int. J. Mod. Phys. A 15 (2000) 5087;
F.V. Tkachov,
Phys. Lett. B 100 (1981) 65;
K.G. Chetyrkin and F.V. Tkachov,
Nucl. Phys. B 192 (1981) 159.
[18]
A. von Manteuffel and C. Studerus, Reduze 2, to be published.
[19]
C. Studerus,
Comput. Phys. Commun. 181 (2010) 1293.
[20]
M. Argeri et al.
Nucl. Phys. B 631 (2002) 388;
R. Bonciani et al.
Nucl. Phys. B 661 (2003) 289
[Erratum-ibid. B 702 (2004) 359];
Nucl. Phys. B 690 (2004) 138;
Nucl. Phys. B 676 (2004) 399;
J. Fleischer et al.
Nucl. Phys. B 547 (1999) 343;
U. Aglietti and R. Bonciani,
Nucl. Phys. B 668 (2003) 3;
Nucl. Phys. B 698 (2004) 277;
A.I. Davydychev and M.Y. Kalmykov,
Nucl. Phys. B 699 (2004) 3;
M. Czakon et al.
Phys. Rev. D 71 (2005) 073009;
G. Bell,
arXiv:0705.3133;
R. Bonciani and A. Ferroglia,
JHEP 0811 (2008) 065.
[21]
A.V. Kotikov,
Phys. Lett. B 254 (1991) 158;
Phys. Lett. B 259 (1991) 314;
Phys. Lett. B 267 (1991) 123;
E. Remiddi,
Nuovo Cim. A 110 (1997) 1435.
[22]
A. B. Goncharov,
Math. Res. Lett. 5 (1998), 497;
E. Remiddi and J.A.M. Vermaseren,
Int. J. Mod. Phys. A 15 (2000) 725;
T. Gehrmann and E. Remiddi,
Nucl. Phys. B 601 (2001) 248;
Nucl. Phys. B 601 (2001) 287;
Comput. Phys. Commun. 141 (2001) 296;
Comput. Phys. Commun. 144 (2002) 200;
D. Maître,
Comput. Phys. Commun. 174 (2006) 222;
hep-ph/0703052.
[23]
J. Vollinga and S. Weinzierl,
Comput. Phys. Commun. 167 (2005) 177.
[24]
R. Bonciani, A. Ferroglia, T. Gehrmann, A. von Manteuffel, and C. Studerus,
in preparation.
[25]
S. Laporta, E. Remiddi,
Nucl. Phys. B704, 349-386 (2005).
[26]
S. Pozzorini and E. Remiddi,
Comput. Phys. Commun. 175 (2006) 381;
U. Aglietti et al.
Nucl. Phys. B 789 (2008) 45.
Phase Splitting for Periodic Lie Systems
R. Flores-Espinoza${}^{\ast}$, J. de Lucas${}^{\dagger}$ and Yu. M. Vorobiev${}^{\ast}$
(${}^{\ast}$Departamento de Matemáticas, Universidad de Sonora, México
${}^{\dagger}$Departamento de Física Teórica, Universidad de Zaragoza, Spain
and
Institute of Mathematics of the Polish Academy of Sciences, Warszawa, Poland)
Abstract
In the context of Floquet theory, using a variation-of-parameters
argument, we show that the logarithm of the monodromy of a real periodic Lie
system with appropriate properties admits a splitting into two parts, called
dynamic and geometric phases. The dynamic phase is intrinsic and linked to
the Hamiltonian of a periodic linear Euler system on the co-algebra. The
geometric phase is represented as a surface integral of the symplectic form
of a co-adjoint orbit.
PACS: 02.20.Sv, 02.20.Qs, 02.40.Hw, 02.40.Yy.
1 Introduction
The so-called Lie systems, representing a special sort of (generally nonlinear) time-dependent dynamics on
$G$-spaces, are of great interest in the integrability theory of non-autonomous systems of ODEs as well as in
various physical applications (see, for example, [3, 4, 5, 6, 7] and references therein). In
this paper, we are interested in periodic Lie systems in the context of Floquet theory and
the phase splitting problem. The original motivation for this problem came from quantum mechanics [1] and then, for classical integrable systems, phase phenomena were studied in [11]. A general geometric approach for computing the geometric and dynamic
phases for Hamiltonian systems with symmetries was developed in [14]. Our goal is to give a
version of phase splitting for periodic Lie systems which is based on a variation-of-parameters
argument [10, 18]. First, we observe that the geometric and dynamic phases (as elements of a
Lie algebra) are naturally defined for every smooth one-parameter family of uniformly reducible
periodic Lie systems containing the trivial system. Then, we show that an individual periodic
Lie system which is reducible and contractible is included in such a family, and hence the
logarithm of its monodromy is the sum of two parts, called the dynamic and geometric phases. The
dynamic phase is defined in an intrinsic way and linked to the time-dependent Hamiltonian of a
periodic linear Euler system on the co-algebra. The geometric phase is given as a surface integral
of the symplectic form of a co-adjoint orbit. This result can be viewed as a generalization of
“scalar” formulas for the dynamic and geometric phases of cyclic solutions to periodic linear
Euler systems [10, 18]. Moreover, in the case when a Lie system comes from a simple
mechanical system, we show that our result agrees with known phase formulas in the reconstruction
theory for Hamiltonian systems with symmetries [2, 14].
The question of phase splitting naturally appears in spectral problems for quantum systems in the semiclassical approximation [12, 13]. Here, the classical geometric phase defines a correction to the Bohr-Sommerfeld quantization rule and the dynamic phase gives an excitation of the classical energy level. In this context, one of the possible applications of our results is related to the computation of semiclassical spectra of spin-like quantum systems whose classical limits are just periodic Lie systems.
2 The Floquet theory for periodic Lie systems
The classical Floquet theory for linear periodic systems [19] can be naturally extended to
the nonlinear case. Suppose we start with a time-periodic dynamical system on a smooth manifold $M$:
$$\frac{\mathrm{d}x}{\mathrm{d}t}=X_{t}(x),\qquad x\in M.$$
(2.1)
Here $X_{t}$ is a smooth time-dependent vector field on $M$ which is $2\pi$-periodic in $t$,
$X_{t+2\pi}(x)=X_{t}(x)$. We assume that $X_{t}$ is complete. Let $\mathrm{Fl}^{t}:M\rightarrow M$
be the flow of $X_{t}$,
$$\frac{\mathrm{d}\mathrm{Fl}^{t}(x)}{\mathrm{d}t}=X_{t}\circ\mathrm{Fl}^{t}(x),\qquad\mathrm{Fl}^{0}=\mathrm{id}_{M}.$$
From the periodicity condition and standard arguments, it follows that
$$\mathrm{Fl}^{t+2\pi}=\mathrm{Fl}^{t}\circ\mathrm{Fl}^{2\pi}\qquad\forall\,t\in\mathbb{R}.$$
(2.2)
The diffeomorphism $\mathrm{Fl}^{2\pi}\in\mathrm{Diff}(M)$ is called the monodromy of
(2.1). System (2.1) is said to be reducible if there exists a time-dependent
diffeomorphism on $M$, $2\pi$-periodic in $t$, which transforms the system into an autonomous one.
Observe that the reducibility property is equivalent to the Floquet representation for
the flow of $X_{t}$:
$$\mathrm{Fl}^{t}=P^{t}\circ\Xi^{t},$$
(2.3)
where $\Xi^{t}:M\rightarrow M$ is a one-parameter group of diffeomorphisms,
$\Xi^{t+\tau}=\Xi^{t}\circ\Xi^{\tau}$ and $P^{t}:M\rightarrow M$ is a time-dependent
diffeomorphism such that $P^{t+2\pi}=P^{t}$. Indeed, if there exists a $2\pi$-periodic change
of variables
$$x\mapsto y=(P^{t})^{-1}(x),\qquad P^{0}=\mathrm{id}_{M},$$
(2.4)
transforming (2.1) to the system
$$\frac{\mathrm{d}y}{\mathrm{d}t}=Y(y),$$
(2.5)
with time-independent vector field
$$Y=\frac{\mathrm{d}(P^{t})^{-1}}{\mathrm{d}t}\circ P^{t}+(P^{t})^{-1}_{*}X_{t},$$
(2.6)
then (2.3) holds, where the one-parameter group $\Xi^{t}$ is defined as the flow of $Y$.
Conversely, if the flow admits decomposition (2.3) for some $\Xi^{t}$ and $P^{t}$, then the
$2\pi$-periodic transformation (2.4) reduces system (2.1) to the form (2.5) with
$\displaystyle Y(y)=\frac{\mathrm{d}}{\mathrm{d}t}\Big{|}_{t=0}\,\Xi^{t}(y)$. We get the following reducibility
criterion: time-periodic system (2.1) is reducible if and only if there exists a one-parameter
group of diffeomorphisms $\Xi^{t}:M\rightarrow M$ such that
$$\mathrm{Fl}^{2\pi}=\Xi^{2\pi}.$$
(2.7)
Therefore, as in the linear case, the reducibility of system (2.1) is completely controlled by
its monodromy.
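In the linear case, where $\mathrm{Fl}^{t}$ is the fundamental matrix of a $2\pi$-periodic linear system, property (2.2) can be checked numerically. A minimal sketch with an arbitrarily chosen periodic coefficient matrix $A(t)$ and a hand-rolled RK4 integrator (both are assumptions of this illustration, not constructions from the text):

```python
import math

def A(t):
    # arbitrary 2x2, 2*pi-periodic coefficient matrix (illustrative choice)
    return [[0.0, 1.0 + 0.3*math.cos(t)], [-1.0, 0.0]]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def matmul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def fundamental(t1, n=4000):
    """Fundamental matrix Phi(t1) of x' = A(t)x with Phi(0) = I, via RK4."""
    h = t1 / n
    cols = [[1.0, 0.0], [0.0, 1.0]]          # columns start as e_1, e_2
    for j in range(2):
        v, t = cols[j], 0.0
        for _ in range(n):
            k1 = matvec(A(t), v)
            k2 = matvec(A(t + h/2), [v[i] + h/2*k1[i] for i in range(2)])
            k3 = matvec(A(t + h/2), [v[i] + h/2*k2[i] for i in range(2)])
            k4 = matvec(A(t + h), [v[i] + h*k3[i] for i in range(2)])
            v = [v[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
            t += h
        cols[j] = v
    return [[cols[j][i] for j in range(2)] for i in range(2)]

tau = 1.0
M   = fundamental(2*math.pi)                 # monodromy Fl^{2 pi}
lhs = fundamental(tau + 2*math.pi)           # Fl^{tau + 2 pi}
rhs = matmul(fundamental(tau), M)            # Fl^{tau} . Fl^{2 pi}
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-6 for i in range(2) for j in range(2))
```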
It is also useful to introduce the notion of relative reducibility. Let
$\mathcal{G}\subset\mathrm{Diff}(M)$ be a subgroup of diffeomorphisms on $M$. Assume that the
flow $\mathrm{Fl}^{t}$ takes values in $\mathcal{G}$, that is, $\mathrm{Fl}^{t}\in\mathcal{G}$
for all $t\in\mathbb{R}$. Then, we say that system (2.1) is reducible relative to
$\mathcal{G}$ (or, shortly, $\mathcal{G}$-reducible), if one can choose the reducing
time-dependent diffeomorphism $P^{t}$ in (2.4), (2.5) to be a $2\pi$-periodic curve in
the subgroup $\mathcal{G}$. The criterion above is modified as follows: the $\mathcal{G}$-reducibility
of system (2.1) is equivalent to the existence of a one-parameter subgroup $\{\Xi^{t}\}$ in
$\mathcal{G}$ which passes through the monodromy at $t=2\pi$.
In general, embedding the monodromy into a one-parameter group of diffeomorphisms
is a difficult problem. Our aim is to discuss this problem for a special class of time-dependent dynamical
systems on $G$-spaces, namely, the Lie systems [3].
Suppose that the manifold $M$ is endowed with a smooth left action $\Phi:G\times M\rightarrow M$ of
a real connected Lie group $G$. For every $g\in G$, denote by $\Phi_{g}:M\rightarrow M$ the
diffeomorphism given by $\Phi_{g}(x)=\Phi(g,x)$ for all $x\in M$. Fixing $x\in M$, we define also
the smooth mapping $\Phi^{x}:G\rightarrow M$ letting $\Phi^{x}(g)=\Phi(g,x)$. Let $\mathfrak{g}$
be the Lie algebra of $G$. By a periodic Lie system on $M$ associated with the $G$-action
we mean the following non-autonomous system
$$\frac{\mathrm{d}x}{\mathrm{d}t}=T_{e}\Phi^{x}(\phi(t)),\qquad x\in M,$$
(2.8)
where $T_{e}\Phi^{x}:\mathfrak{g}\rightarrow T_{x}M$ is the tangent map of $\Phi^{x}$ at the identity
element $e$ and $\mathbb{R}\ni t\mapsto\phi(t)\in\mathfrak{g}$ is a smooth $2\pi$-periodic curve in
the Lie algebra, i.e. $\phi(t+2\pi)=\phi(t)$. The vector field $X_{t}(x)=T_{e}\Phi^{x}(\phi(t))$
of this system is represented as a linear combination of the infinitesimal generators of the $G$-action with
time-periodic coefficients. In general, $X_{t}$ is not $G$-invariant but the trajectories of (2.8)
belong to the orbits of the $G$-action. Let $L_{g}:G\rightarrow G$ and $R_{g}:G\rightarrow G$
denote the left and right translations by an element $g\in G$, respectively. One can associate to (2.8)
the following non-autonomous system in $G$
$$\frac{\mathrm{d}g}{\mathrm{d}t}=T_{e}R_{g}(\phi(t)),\qquad g\in G.$$
(2.9)
System (2.9) is complete because its vector field is time-periodic and right invariant. Since the
right invariant vector fields are the infinitesimal generators of the action of $G$ on itself by the left
translations, system (2.9) can also be viewed as a periodic Lie system associated with this
left $G$-action. Consider the solution $\mathbb{R}\ni t\mapsto f(t)\in G$ to the initial value problem
$$\frac{\mathrm{d}f}{\mathrm{d}t}=T_{e}R_{f}(\phi(t)),\qquad f(0)=e.$$
(2.10)
Then, we have the following relationship between the flow $\mathrm{Fl}^{t}$ of system (2.8) and
the solution $f(t)$ of system (2.10):
$$\mathrm{Fl}^{t}=\Phi_{f(t)}.$$
(2.11)
In particular, the flow of system (2.9) is given by $g\mapsto L_{f(t)}g$. In analogy with the
linear case, we will call $f(t)$ the fundamental solution of the periodic Lie system (2.9)
on $G$. In terms of the fundamental solution, the property (2.2) for the flow of (2.9)
reads $f(t+2\pi)=f(t)\cdot m$. Here $m:=f(2\pi)\in G$ is said to be the
monodromy element of (2.9). It follows from (2.11) that the monodromy of system
(2.8) is represented as $\mathrm{Fl}^{2\pi}=\Phi_{m}$. Let
$\mathcal{G}_{G,\Phi}=\{\Phi_{g}$, $g\in G\}$ be the subgroup of diffeomorphisms on $M$ generated
by the $G$-action. Then, formula (2.11) says that $\mathrm{Fl}^{t}\in\mathcal{G}_{G,\Phi}$ and
hence one can speak of the reducibility of (2.8) relative to the group $\mathcal{G}_{G,\Phi}$.
Assume that the monodromy element $m$ lies in the image of the exponential map
$\exp:\mathfrak{g}\rightarrow G$,
$$m=\exp k,$$
(2.12)
for a certain $k\in\mathfrak{g}$. This implies that the Floquet representation for the fundamental solution reads
$$f(t)=L_{p(t)}\left(\exp\left(\frac{tk}{2\pi}\right)\right),$$
(2.13)
where $t\mapsto p(t)$ is a $2\pi$-periodic curve in $G$ with $p(0)=e$. It follows from here and
(2.11) that the monodromy $\mathrm{Fl}^{2\pi}$ satisfies condition (2.3) for the one-parameter
group of diffeomorphisms $\Xi^{t}=\Phi_{\exp(\frac{tk}{2\pi})}$ and hence system (2.8) is
$\mathcal{G}_{G,\Phi}$-reducible. Therefore, under the $2\pi$-periodic change of variables (2.4)
with
$$P^{t}=\mathrm{Fl}^{t}\circ\Phi_{\exp(-\frac{tk}{2\pi})}=\Phi_{p(t)},$$
system (2.8) is transformed to the autonomous Lie system of the form
$\mathrm{d}y/\mathrm{d}t=T_{e}\Phi^{y}(\frac{k}{2\pi})$. Condition (2.12) also becomes necessary
for the reducibility under a natural assumption on the $G$-action. We have the following
criterion: if the action $\Phi$ of $G$ on $M$ is faithful, then property (2.12) is a
necessary and sufficient condition for the $\mathcal{G}_{G,\Phi}$-reducibility of system (2.8).
In particular, periodic Lie system (2.9) is reducible relative to the group of left translations on
$G$ if and only if (2.12) holds.
One can also show that if the $G$-action is not faithful, then the criterion of the
$\mathcal{G}_{G,\Phi}$-reducibility for system (2.8) leads to the following representation for
the monodromy element:
$$m=m_{0}\cdot\exp k,$$
(2.14)
for a certain element $m_{0}$ in the kernel of the homomorphism $g\mapsto\Phi_{g}$. In this case,
system (2.9) in $G$ is not necessarily reducible. In terms of the monodromy of (2.8),
reducibility condition (2.14) reads $\mathrm{Fl}^{2\pi}=\Phi_{\exp k}$.
Consider the following important class of periodic Lie systems associated with linear
representations of $G$. Let $\mathrm{Ad}:G\times\mathfrak{g}\rightarrow\mathfrak{g}$ be the
adjoint action of the Lie group $G$ on its Lie algebra,
$\mathrm{Ad}_{g}=T_{g}R_{g^{-1}}\circ T_{e}L_{g}$. Taking
$\Phi=\mathrm{Ad}$ and $M=\mathfrak{g}$ for (2.8), we get the following Lie system
$$\frac{\mathrm{d}x}{\mathrm{d}t}=\mathrm{ad}_{\phi(t)}x,\qquad x\in\mathfrak{g},$$
(2.15)
which is called a periodic linear Euler system on $\mathfrak{g}$. Here
$\mathrm{ad}_{\phi}:\mathfrak{g}\rightarrow\mathfrak{g}$ is the adjoint operator,
$\mathrm{ad}_{\phi}y=[\phi,y]$. It follows from (2.11) that the flow of (2.15) is
$\mathrm{Fl}^{t}=\mathrm{Ad}_{f(t)}$, where $f(t)$ is the fundamental solution in (2.10).
Therefore, $\operatorname*{Fl}{}^{t}$ takes values in the adjoint group $\mathrm{Ad}\,\mathfrak{g}$ of
the Lie algebra $\mathfrak{g}$, which is generated by the elements $\exp(\mathrm{ad}_{z})$, for
$z\in\mathfrak{g}$. Since $G$ is connected, the kernel of the adjoint representation
$g\mapsto\mathrm{Ad}_{g}$ coincides with the center $Z(G)$ of the Lie group. Then, the
$\mathrm{Ad}\,\mathfrak{g}$-reducibility of linear Euler equation (2.15) is equivalent to
representation (2.14) for some $m_{0}\in Z(G)$ and $k\in\mathfrak{g}$. In terms of the
monodromy $\mathfrak{M}=\mathrm{Ad}_{m}$ of (2.15), the reducibility criterion says that
$\mathfrak{M}=\exp(\mathrm{ad}_{k})$.
Note that reducibility condition (2.12) automatically holds in the case when $G$ belongs
to the class of exponential Lie groups which includes, for example, the Lie groups of
compact type [8, 9, 17]. If the Lie group $G$ is not exponential, then one can
apply the following criterion. Assume that the monodromy element $m$ is regular and the
isotropy subgroup $G_{m}=\{\alpha\in G\;\big{|}\;\alpha\cdot m=m\cdot\alpha\}$ is
connected. Then, it follows that $G_{m}$ is abelian [9] and hence,
(2.12) holds.
Example 2.1
Let $G={SO}(3)$ be the compact Lie group of all orthogonal $3\times 3$ matrices $g$ with $\det g=1$
and $\mathfrak{g}=\mathfrak{so}(3)$ its Lie algebra of skew-symmetric matrices. A periodic Lie system
on ${SO}(3)$ is written as
$$\frac{\mathrm{d}g}{\mathrm{d}t}=(\Lambda\circ w(t))\cdot g,$$
(2.16)
where $t\mapsto w(t)\in\mathbb{R}^{3}$ is a $2\pi$-periodic vector function and $\Lambda\circ x$
denotes the matrix of the cross product in $\mathbb{R}^{3}$, $(\Lambda\circ x)y=x\times y$. Under
the identification of $\mathfrak{so}(3)$ with $\mathbb{R}^{3}$, the corresponding periodic linear Euler
system takes the form
$$\frac{\mathrm{d}x}{\mathrm{d}t}=w(t)\times x,\qquad x\in\mathbb{R}^{3}.$$
(2.17)
Since the monodromy element $m\in{SO}(3)$ of (2.16) is a rotation in $\mathbb{R}^{3}$,
we have $m=\exp(\Lambda\circ\nu)$ for some $\nu\in\mathbb{R}^{3}$. Therefore, systems
(2.16) and (2.17) are reducible.
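The fact that the monodromy of (2.16) lands in ${SO}(3)$ — and hence equals $\exp(\Lambda\circ\nu)$ for some $\nu$ — can be checked numerically. A sketch with an arbitrarily chosen $2\pi$-periodic $w(t)$ and a hand-rolled RK4 integrator (both illustrative assumptions): the computed monodromy is verified to be orthogonal with determinant one.

```python
import math

def Lam(w):
    """Matrix of the cross product: Lam(w) y = w x y."""
    x, y, z = w
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def w(t):
    # arbitrary 2*pi-periodic angular-velocity curve (illustrative choice)
    return (0.3*math.cos(t), 0.5, 0.2*math.sin(t))

def mat3mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def monodromy(n=4000):
    """Monodromy m = g(2*pi) of dg/dt = Lam(w(t)) g, g(0) = I, via RK4."""
    h = 2*math.pi / n
    g = [[float(i == j) for j in range(3)] for i in range(3)]
    t = 0.0
    for _ in range(n):
        f = lambda tt, gg: mat3mul(Lam(w(tt)), gg)
        ax = lambda a, K: [[g[i][j] + a*K[i][j] for j in range(3)] for i in range(3)]
        k1 = f(t, g)
        k2 = f(t + h/2, ax(h/2, k1))
        k3 = f(t + h/2, ax(h/2, k2))
        k4 = f(t + h, ax(h, k3))
        g = [[g[i][j] + h/6*(k1[i][j] + 2*k2[i][j] + 2*k3[i][j] + k4[i][j])
              for j in range(3)] for i in range(3)]
        t += h
    return g

m = monodromy()
mT = [[m[j][i] for j in range(3)] for i in range(3)]
mtm = mat3mul(mT, m)                           # should be the identity
assert all(abs(mtm[i][j] - (i == j)) < 1e-6 for i in range(3) for j in range(3))
det = (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
     - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
     + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
assert abs(det - 1.0) < 1e-6
```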
The next example is related to the reducibility of periodic linear Hamiltonian systems on $\mathbb{R}^{2}$
[19].
Example 2.2
Consider the special linear group $G={SL}(2;\mathbb{R})$ of all real $2\times 2$ matrices with
determinant one. The corresponding Lie algebra $\mathfrak{g}=\mathfrak{sl}(2;\mathbb{R})$
consists of traceless $2\times 2$ matrices. A periodic Lie system in ${SL}(2;\mathbb{R})$ is of the
form
$$\frac{\mathrm{d}g}{\mathrm{d}t}=\left[\begin{array}[]{cc}a_{1}(t)&a_{2}(t)\\
a_{3}(t)&-a_{1}(t)\end{array}\right]\cdot g,$$
(2.18)
where $a_{i}(t)$ $(i=1,2,3)$ are $2\pi$-periodic real functions. It is well known [8, 9]
that the exponential map for ${SL}(2;\mathbb{R})$ is not surjective. The monodromy element $m\in{SL}(2;\mathbb{R})$ of (2.18) has the representation $m=\pm\exp k$ for some $k=\left[\begin{array}[]{cc}k_{1}&k_{2}\\
k_{3}&-k_{1}\end{array}\right]$. Moreover, $m$ is in the image of the exponential map if and only if $\mathrm{tr}\;m>-2$ or
$m=-I$; otherwise, system (2.18) is not reducible. Identifying
$\mathfrak{sl}(2,\mathbb{R})$ with $\mathbb{R}^{3}$, we can write the periodic linear Euler system
associated with (2.18) in the form
$$\frac{\mathrm{d}x}{\mathrm{d}t}=\mathcal{I}(w(t)\times x),\qquad x\in\mathbb{R%
}^{3},$$
(2.19)
where $\mathcal{I}=\mathrm{diag}(1,1,-1)$ and $w(t)=(2a_{1}(t),-a_{2}(t)-a_{3}(t),a_{2}(t)-a_{3}(t))$.
The kernel of the adjoint representation of ${SL}(2;\mathbb{R})$ is the two-element group $\{I,-I\}$.
The adjoint group of $\mathfrak{sl}(2;\mathbb{R})$ is isomorphic to the Lorentz group ${SO}^{+}(2,1)$
which is exponential [8]. Therefore, linear Euler system (2.19) is
reducible since for its monodromy $\mathfrak{M}\in{SO}^{+}(2,1)$ we have
$\mathfrak{M}=\exp\mathcal{I}(\Lambda\circ v)$, for
$v=(2k_{1},-k_{2}-k_{3},k_{2}-k_{3})\in\mathbb{R}^{3}$.
3 The Mapping D
Denote by $C_{e}^{\infty}(\mathbb{R},G)$ the set of all smooth curves
$\alpha:\mathbb{R}\rightarrow G$ in the Lie group with $\alpha(0)=e$, and by
$C^{\infty}(\mathbb{R},\mathfrak{g})$ the set of all smooth curves in the Lie algebra $\mathfrak{g}$.
Introduce the mapping $\mathrm{D}:C_{e}^{\infty}(\mathbb{R},G)\rightarrow C^{\infty}(\mathbb{R},\mathfrak{g})$
given by
$$\mathrm{D}\alpha(t):=T_{\alpha(t)}R_{\alpha(t)^{-1}}\left(\frac{\mathrm{d}%
\alpha}{\mathrm{d}t}(t)\right)\in\mathfrak{g}.$$
Then, in terms of $\mathrm{D}$, equation (2.10) for the fundamental solution $f(t)$ is rewritten
as follows
$$\mathrm{D}f=\phi.$$
(3.1)
Moreover, for any $a\in\mathfrak{g}$ and $\alpha,\beta\in C_{e}^{\infty}(\mathbb{R},G)$, the following identities hold [9]:
$$\displaystyle\mathrm{D}(\exp ta)$$
$$\displaystyle=$$
$$\displaystyle a,$$
(3.2)
$$\displaystyle\mathrm{D}\alpha^{-1}(t)$$
$$\displaystyle=$$
$$\displaystyle-\mathrm{Ad}_{\alpha^{-1}(t)}\mathrm{D}\alpha(t),$$
(3.3)
$$\displaystyle\mathrm{D}(L_{\alpha}\beta)(t)$$
$$\displaystyle=$$
$$\displaystyle\mathrm{D}\alpha(t)+\mathrm{Ad}_{\alpha(t)}\mathrm{D}\beta(t),$$
(3.4)
$$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Ad}_{\alpha(t)}$$
$$\displaystyle=$$
$$\displaystyle\mathrm{ad}_{\mathrm{D}\alpha(t)}\circ\mathrm{Ad}_{\alpha(t)}.$$
(3.5)
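For matrix Lie groups, where $\mathrm{D}\alpha(t)=\dot{\alpha}(t)\,\alpha(t)^{-1}$ and $\mathrm{Ad}_{\alpha}x=\alpha x\alpha^{-1}$, identities (3.2) and (3.4) can be tested numerically. A minimal sketch with $2\times 2$ matrices and central differences; the curves $\alpha,\beta$ below are arbitrary illustrative choices.

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def inv(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def expm(A, terms=40):
    # truncated exponential series; adequate for the small matrices used here
    R = [[1.0, 0.0], [0.0, 1.0]]
    T = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        T = scal(1.0 / n, mul(T, A))
        R = madd(R, T)
    return R

def D(c, t, h=1e-5):
    # D c(t) = c'(t) c(t)^{-1}, with a central-difference derivative
    cp = scal(1.0 / (2 * h), madd(c(t + h), scal(-1.0, c(t - h))))
    return mul(cp, inv(c(t)))

A = [[0.0, 1.0], [-1.0, 0.0]]
B = [[0.2, 0.3], [0.1, -0.2]]
alpha = lambda t: expm(scal(t, A))
beta = lambda t: expm(scal(t, B))

t0 = 0.7
# identity (3.4): D(alpha beta) = D alpha + Ad_alpha D beta
lhs = D(lambda t: mul(alpha(t), beta(t)), t0)
rhs = madd(D(alpha, t0), mul(mul(alpha(t0), D(beta, t0)), inv(alpha(t0))))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-6
           for i in range(2) for j in range(2))
```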
Let $\sigma$ be a parametrized surface in $G$ given by a smooth map
$\mathbb{R}^{2}\ni(s,t)\mapsto\sigma(s,t)\in G$. Denote by $\mathrm{D}_{s}$ and $\mathrm{D}_{t}$ the
mappings which act on the $s$-parameter and $t$-parameter families of curves $\sigma_{t}$ and $\sigma_{s}$
associated with $\sigma$, respectively,
$$\mathrm{D}_{s}\sigma(s,t):=\mathrm{D}\sigma_{t}(s)\equiv T_{\sigma(s,t)}R_{%
\sigma(s,t)^{-1}}\left(\frac{\partial\sigma(s,t)}{\partial s}\right),$$
$$\mathrm{D}_{t}\sigma(s,t):=\mathrm{D}\sigma_{s}(t)\equiv T_{\sigma(s,t)}R_{%
\sigma(s,t)^{-1}}\left(\frac{\partial\sigma(s,t)}{\partial t}\right).$$
One can show that the relationship between these two mappings is given by the
“zero curvature” type equation:
$$\frac{\partial\mathrm{D}_{t}\sigma}{\partial s}-\frac{\partial\mathrm{D}_{s}%
\sigma}{\partial t}+[\mathrm{D}_{t}\sigma,\mathrm{D}_{s}\sigma]=0.$$
(3.6)
4 Dynamic and Geometric Phases
Suppose we start with a family of periodic Lie systems in $G$ of the form (2.9) associated
with an $s$-parameter family $\{\phi_{s}\}$ of closed curves in $\mathfrak{g}$ given by a
$C^{\infty}$ mapping $[0,1]\times\mathbb{R}\ni(s,t)\mapsto\phi(s,t)\in\mathfrak{g}$ with
$$\displaystyle\phi(s,t+2\pi)$$
$$\displaystyle=$$
$$\displaystyle\phi(s,t),$$
(4.1)
$$\displaystyle\phi(0,t)$$
$$\displaystyle=$$
$$\displaystyle 0.$$
(4.2)
Let $f(s,t)$ be the parameter dependent fundamental solution,
$$\frac{\mathrm{d}f(s,t)}{\mathrm{d}t}=T_{e}R_{f(s,t)}(\phi(s,t)),\qquad f(s,0)=e.$$
(4.3)
It is clear that $f(s,t)$ is smooth in both variables $s$ and $t$. Moreover, $f(0,t)=e$ because
of (4.2). Assume that the family of periodic Lie systems is uniformly reducible, that is,
for every $s\in[0,1]$, the monodromy element has the representation
$$m(s)=f(s,2\pi)=\exp k(s),$$
(4.4)
for a certain $k(s)\in\mathfrak{g}$, smoothly varying in $s$ and such that $k(0)=0$. Consider
the $G$-valued function
$$p(s,t)=L_{f(s,t)}\exp\left(-\frac{tk(s)}{2\pi}\right),$$
(4.5)
with properties
$$\displaystyle p(s,t+2\pi)$$
$$\displaystyle=$$
$$\displaystyle p(s,t),$$
(4.6)
$$\displaystyle p(s,0)$$
$$\displaystyle=$$
$$\displaystyle e,$$
(4.7)
$$\displaystyle p(0,t)$$
$$\displaystyle=$$
$$\displaystyle e.$$
(4.8)
Applying the mapping $\mathrm{D}_{t}$ to both sides of (4.5) and using
(3.1), (3.2) and (3.4), we derive the identity
$$\frac{k(s)}{2\pi}=\mathrm{Ad}_{p(s,t)^{-1}}\bigl{(}\phi(s,t)-\mathrm{D}_{t}p(s%
,t)\bigr{)}.$$
Integrating this equality in $t$ over $[0,2\pi]$ gives
$$k(s)=\int_{0}^{2\pi}\mathrm{Ad}_{p(s,t)^{-1}}\bigl{(}\phi(s,t)\bigr{)}\mathrm{%
d}t-\int_{0}^{2\pi}\mathrm{Ad}_{p(s,t)^{-1}}\bigl{(}\mathrm{D}_{t}p(s,t)\bigr{%
)}\mathrm{d}t.$$
(4.9)
By using (3.3), (3.6) and (4.8), we compute the second term in (4.9):
$$\displaystyle\int_{0}^{2\pi}\mathrm{Ad}_{p(s,t)^{-1}}$$
$$\displaystyle\bigl{(}\mathrm{D}_{t}p(s,t)\bigr{)}\mathrm{d}{t}=-\int_{0}^{2\pi%
}\mathrm{D}_{t}p^{-1}(s,t)\mathrm{d}{t}$$
$$\displaystyle=-\int_{0}^{2\pi}\int_{0}^{s}\frac{\partial}{\partial u}(\mathrm{%
D}_{t}p^{-1}(u,t))\mathrm{d}{u}\mathrm{d}{t}+\int_{0}^{2\pi}\mathrm{D}_{t}p^{-%
1}(0,t)\mathrm{d}{t}$$
$$\displaystyle=\int_{0}^{s}(-\mathrm{D}_{u}p^{-1}(u,2\pi)+\mathrm{D}_{u}p^{-1}(%
u,0))\mathrm{d}{u}$$
$$\displaystyle\qquad-\int_{0}^{2\pi}\int_{0}^{s}[\mathrm{D}_{u}p^{-1}(u,t),%
\mathrm{D}_{t}p^{-1}(u,t)]\mathrm{d}{u}\mathrm{d}{t}.$$
The first summand in the last formula vanishes because of (4.6), (4.7). Summarizing, we get
the following result.
Theorem 4.1
For every $s\in[0,1]$, the $\log$ phase $k(s)$ in (4.4) has the decomposition
$$k(s)=k_{\mathrm{dyn}}(s)+k_{\mathrm{geom}}(s),$$
(4.10)
where
$$\displaystyle k_{\mathrm{dyn}}(s)$$
$$\displaystyle=\int_{0}^{2\pi}\mathrm{Ad}_{p^{-1}(s,t)}\phi(s,t)\mathrm{d}{t},$$
(4.11)
$$\displaystyle k_{\mathrm{geom}}(s)$$
$$\displaystyle=\int_{0}^{2\pi}\int_{0}^{s}[\mathrm{D}_{u}p^{-1}(u,t),\mathrm{D}%
_{t}p^{-1}(u,t)]\mathrm{d}{t}\mathrm{d}{u}.$$
(4.12)
The components $k_{\mathrm{dyn}}(s)$ and $k_{\mathrm{geom}}(s)$ will be called the dynamic
and geometric phases of the family of periodic Lie systems, respectively. Now,
let us give interpretations of $k_{\mathrm{dyn}}(s)$ and $k_{\mathrm{geom}}(s)$ in terms of the
Poisson geometry and Hamiltonian dynamics on the dual space (the co-algebra) $\mathfrak{g}^{\ast}$
of $\mathfrak{g}$. Let $\Phi:G\times\mathfrak{g}^{\ast}\rightarrow\mathfrak{g}^{\ast}$ be the
left action of $G$ on the co-algebra $\mathfrak{g}^{\ast}$ given by
$\Phi_{g}=\mathrm{Ad}_{g^{-1}}^{\ast}$, where
$\mathrm{Ad}_{\alpha}^{\ast}:\mathfrak{g}^{\ast}\rightarrow\mathfrak{g}^{\ast}$ is the co-adjoint
action of the Lie group. Then, the Lie system (2.8) associated with this action and the function
$\phi=\phi(s,t)$ gives the family of periodic linear Euler systems on $\mathfrak{g}^{\ast}$:
$$\frac{\mathrm{d}\xi}{\mathrm{d}{t}}=-\mathrm{ad}_{\phi(s,t)}^{\ast}\xi,\qquad%
\xi\in\mathfrak{g}^{\ast}.$$
(4.13)
It follows from (2.11) that the flow (the fundamental solution) of (4.13) is given by
$\mathrm{Fl}^{t}=\mathrm{Ad}_{f^{-1}(s,t)}^{\ast}$, where $f(s,t)$ is the $G$-valued fundamental
solution in (2.10). Taking into account (2.11), we get that the monodromy of (4.13)
is of the form $\mathrm{Fl}^{2\pi}=\exp(-\mathrm{ad}_{k(s)}^{\ast})$. For every $s$, system
(4.13) represents a time-dependent Hamiltonian system relative to the “plus” Lie-Poisson
bracket on $\mathfrak{g}^{\ast}$ [15] and the function $H_{t}(\xi)=-\langle\xi,\phi(s,t)\rangle$. Here, we denote by $\langle\,{,}\,\rangle$ the pairing between vectors and covectors.
Pick a point $\mu\in\mathfrak{g}^{\ast}$. Then, in terms of the time-dependent Hamiltonian $H_{t}:\mathfrak{g}^{\ast}\rightarrow\mathbb{R}$, we have the following representation for the
dynamical phase
$$\langle\mu,k_{\mathrm{dyn}}(s)\rangle=-\int_{0}^{2\pi}H_{t}(\mathrm{Ad}_{p^{-1%
}(s,t)}^{\ast}\mu)\mathrm{d}{t}.$$
(4.14)
Here $p(s,t)$ is defined by (4.5). Let $\mathcal{O}\subset\mathfrak{g}^{\ast}$ be the
co-adjoint orbit passing through the point $\mu$. Fix $s\in[0,1]$ and consider the oriented
cylinder $\mathcal{C}_{s}^{2}=[0,s]\times\mathbb{S}^{1}$ with coordinates $(u,t\;\mathrm{mod}\,2\pi)$. Define the $C^{\infty}$ mapping $F:\mathcal{C}_{s}^{2}\rightarrow\mathfrak{g}^{\ast}$ by
$F(u,t)=\mathrm{Ad}_{p^{-1}(u,t)}^{\ast}\mu$. It is clear that the image of $\mathcal{C}_{s}^{2}$
under $F$ lies in the co-adjoint orbit, $F(\mathcal{C}_{s}^{2})\subset\mathcal{O}$. Moreover,
$F(\{0\}\times\mathbb{S}^{1})=\mu$. Let $\omega_{\mathcal{O}}$ be the symplectic form
(the Kirillov form) on $\mathcal{O}$ which is given by
$$\omega_{\mathcal{O}}(\mathrm{ad}_{x}^{\ast}\eta,\mathrm{ad}_{y}^{\ast}\eta)=%
\langle\eta,[x,y]\rangle,$$
(4.15)
for $\eta\in\mathcal{O}$ and $x,y\in\mathfrak{g}$. Taking into account properties
(3.3), (3.5), we compute
$$\frac{\partial F(u,t)}{\partial t}=-\mathrm{ad}_{\mathrm{D}_{t}p(u,t)}^{\ast}F%
(u,t),$$
$$\frac{\partial F(u,t)}{\partial u}=-\mathrm{ad}_{\mathrm{D}_{u}p(u,t)}^{\ast}F%
(u,t).$$
Putting these formulas into (4.15) and using again (3.3), we get
$$\displaystyle\omega_{\mathcal{O}}\left(\frac{\partial F}{\partial u},\frac{%
\partial F}{\partial t}\right)$$
$$\displaystyle=\langle F(u,t),[\mathrm{D}_{u}p(u,t),\mathrm{D}_{t}p(u,t)]\rangle$$
$$\displaystyle=\langle\mathrm{Ad}_{p^{-1}(u,t)}^{\ast}\mu,[\mathrm{D}_{u}p(u,t)%
,\mathrm{D}_{t}p(u,t)]\rangle$$
$$\displaystyle=\langle\mu,[\mathrm{D}_{u}p^{-1}(u,t),\mathrm{D}_{t}p^{-1}(u,t)]\rangle.$$
Comparing this with (4.12) leads to the following representation for the geometric phase
$$\langle\mu,k_{\mathrm{geom}}(s)\rangle=\int_{\mathcal{C}_{s}^{2}}F^{\ast}%
\omega_{\mathcal{O}}.$$
(4.16)
If the mapping $F$ is regular, then the right-hand side of (4.16) is the symplectic area of
the oriented surface $\Sigma_{s}=F(\mathcal{C}_{s}^{2})$ in $\mathcal{O}$ whose boundary is the
loop $\gamma_{s}=\{\xi=\mathrm{Ad}_{p^{-1}(s,t)}^{\ast}\mu\}$. Notice that formulas (4.14)
and (4.16) remain valid also if $\mu=\mu(s)$ varies smoothly with $s$ and lies on a
co-adjoint orbit $\mathcal{O}$ for all $s\in[0,1]$. In the case when
$\mathrm{ad}_{k(s)}^{\ast}\mu(s)=0$, the parametrized curves $\gamma_{s}$ are periodic solutions
of linear Euler system (4.13) and the values $\langle\mu(s),k_{\mathrm{dyn}}(s)\rangle$ and
$\langle\mu(s),k_{\mathrm{geom}}(s)\rangle$ correspond to the dynamic and geometric parts in the
splitting of Floquet exponents of cyclic solutions of (4.13) (for more details, see [10, 18]).
Remark 4.2
If instead of (4.1) we have
$$\phi(s,t+T(s))=\phi(s,t)$$
for a certain smooth positive function $T(s)$, then the geometric phase of the corresponding
family of periodic Lie systems is given by (4.16) and the formula for the dynamic
phase is modified as follows
$$\langle\mu,k_{\mathrm{dyn}}(s)\rangle=-\int_{0}^{T(s)}H_{t}(\mathrm{Ad}_{p^{-1%
}(s,t)}^{\ast}\mu)\mathrm{d}{t}.$$
(4.17)
Here $p(s,t)=L_{f(s,t)}\exp\left(-\frac{tk(s)}{T(s)}\right)$ is $T(s)$-periodic in $t$
for each $s$.
Now, using the above results, we introduce dynamical and geometric phases for an
individual periodic Lie system (2.9) satisfying reducibility condition (2.12)
for a certain element $k\in\mathfrak{g}$. Consider the loop $\Gamma:t\mapsto p(t)$ in $G$,
based at $e$, where $p(t)=f(t)\exp(-{tk}/{2\pi})$ is the $2\pi$-periodic, $G$-valued function
corresponding to the periodic factor in the Floquet representation. Then, $\Gamma$ depends on the
choice of $k$ in (2.12), but it is easy to see that the homotopy class $[\Gamma]$ of $\Gamma$
in $\pi_{1}(G)$ is independent of any such choice. Assume that $[\Gamma]$ is trivial. In
this case, we say that the reducible periodic Lie system is contractible. This condition
means that we can fix a smooth homotopy in $G$ of the loop $\Gamma$ to the identity $e$ which is
given by a $C^{\infty}$ function $p(s,t)$ satisfying (4.8) and $p(1,t)=p(t)$. Pick an
arbitrary $C^{\infty}$ function $k(s)$ on $[0,1]$ with $k(0)=0$ and $k(1)=k$. For example, one
can put $k(s)=sk$. Then, we define
$$\phi(s,t):=\frac{1}{2\pi}\mathrm{Ad}_{p(s,t)}k(s)+\mathrm{D}_{t}p(s,t).$$
(4.18)
Clearly, this function satisfies properties (4.1), (4.2) and $\phi(1,t)=\phi(t)$.
Therefore, we have proved that the original Lie system is included in a smooth family of reducible
periodic Lie systems on $G$ associated with $\phi$ in (4.18) which is contractible to the trivial
system $\dot{g}=0$. Applying formulas (4.14), (4.16) to this family, we arrive at the
final result.
Theorem 4.3
Assume that a periodic Lie system
$$\frac{\mathrm{d}{g}}{\mathrm{d}{t}}=T_{e}R_{g}(\phi(t)),\qquad g\in G$$
is reducible, that is, $m\in\exp(\mathfrak{g})$, and contractible. Let $p(s,t)$ be an arbitrary smooth
homotopy of $\Gamma$ to the identity $e$. Then, the monodromy element has the representation
$$m=\exp(k_{\mathrm{dyn}}+k_{\mathrm{geom}}),$$
where the dynamic and geometric phases $k_{\mathrm{dyn}},k_{\mathrm{geom}}\in\mathfrak{g}$
are given by
$$\displaystyle\langle\mu,k_{\mathrm{dyn}}\rangle$$
$$\displaystyle=-\int_{0}^{2\pi}H_{t}(F(1,t))\mathrm{d}{t},$$
(4.19)
$$\displaystyle\langle\mu,k_{\mathrm{geom}}\rangle$$
$$\displaystyle=\int_{[0,1]\times\mathbb{S}^{1}}F^{\ast}\omega_{\mathcal{O}}$$
(4.20)
for any $\mu\in\mathfrak{g}^{\ast}$. Here $F(s,t)=\mathrm{Ad}_{p^{-1}(s,t)}^{\ast}\mu$,
$H_{t}(\xi)=-\langle\xi,\phi(s,t)\rangle$ and $\mathcal{O}$ is the co-adjoint orbit through $\mu$.
Remark that the dynamic phase (4.19) is independent of the choice of a homotopy $p(s,t)$. The
elements $k_{\mathrm{dyn}}$ and $k_{\mathrm{geom}}$ can be also called the dynamical and geometric
phases of the periodic Lie system (2.8) in the $G$-space $(M,G,\Phi)$.
5 Some Applications
Consider the following dynamical system in $\mathfrak{g}^{\ast}\times G$
$$\displaystyle\frac{\mathrm{d}\xi}{\mathrm{d}{t}}$$
$$\displaystyle=$$
$$\displaystyle-\mathrm{ad}_{\frac{\delta h}{\delta\xi}}^{\ast}\xi,$$
(5.1)
$$\displaystyle\frac{\mathrm{d}{g}}{\mathrm{d}{t}}$$
$$\displaystyle=$$
$$\displaystyle T_{e}R_{g}\left(\frac{\delta h}{\delta\xi}\right),$$
(5.2)
where $\xi\in\mathfrak{g}^{\ast}$, $g\in G$ and $h:\mathfrak{g}^{\ast}\rightarrow\mathbb{R}$ is
a $C^{\infty}$ function and the element $\frac{\delta h}{\delta\xi}\in\mathfrak{g}$ is defined by the
equality $\langle\mu,\frac{\delta h}{\delta\xi}\rangle=d_{\xi}h(\mu)$ [15]. This system comes
from a $G$-invariant Hamiltonian system in $T^{\ast}G$ under the identification of
$T^{\ast}G$ with $\mathfrak{g}^{\ast}\times G$ by the right translations [15]. Suppose we are
given a one-parameter family of periodic trajectories $\gamma_{s}:t\mapsto\xi_{s}(t)$ of nonlinear
Euler system (5.1), $\xi_{s}(t+T(s))=\xi_{s}(t)$ which are smoothly contractible to a rest
point $\xi_{0}(t)=\eta_{0}$, $d_{\eta_{0}}h=0$. Putting these periodic solutions in second
equation (5.2), we get a family of periodic Lie systems on $G$ associated with the
$\mathfrak{g}$-valued function
$$\phi(s,t)=\frac{\delta h(\xi_{s}(t))}{\delta\xi},$$
(5.3)
which satisfies conditions (4.1) and (4.2). Let $f(s,t)$ be the fundamental solution of
(5.2), (5.3) and $m(s)=f(s,T(s))$ the monodromy element. Assume that the periodic
orbits belong to one and the same co-adjoint orbit $\mathcal{O}$,
$$\xi_{s}^{0}:=\xi_{s}(0)\in\mathcal{O},\qquad\forall\,s\in[0,1].$$
Remark that $\xi_{s}(t)$ is also a periodic solution of periodic linear Euler system
(4.13) with $\phi$ given by (5.3). This implies that for every $s\in[0,1]$, the
monodromy element $m(s)$ lies in the isotropy subgroup of the co-adjoint representation at
$\xi_{s}^{0}$,
$$m(s)\in G_{\xi_{s}^{0}}=\{\alpha\in G\;|\;\mathrm{Ad}_{\alpha}^{\ast}\xi_{s}^{%
0}=\xi_{s}^{0}\}.$$
The Lie algebra of $G_{\xi_{s}^{0}}$ is the isotropy
$\mathfrak{g}_{\xi_{s}^{0}}=\{a\in\mathfrak{g}\;|\;\mathrm{ad}_{a}^{\ast}\xi_{s%
}^{0}=0\}$.
Moreover, we assume that $G_{\xi_{s}^{0}}$ is connected and the co-adjoint orbit $\mathcal{O}$
is regular. Then, $G_{\xi_{s}^{0}}$ is Abelian and exponential [9]. It follows that
there exists a unique $C^{\infty}$ curve $[0,1]\ni s\mapsto k(s)\in\mathfrak{g}$ such that
$k(s)\in\mathfrak{g}_{\xi_{s}^{0}}$, $k(0)=0$ and $m(s)=\exp k(s)$. Applying the results
of Section 4, we get that the total $\log$ phase $k(s)$ has decomposition (4.10),
where $k_{\mathrm{geom}}(s)$ and $k_{\mathrm{dyn}}(s)$ are given by formulae (4.16) and
(4.14), respectively. Taking into account that
$\xi_{s}(t)=\mathrm{Ad}_{p^{-1}(s,t)}^{\ast}\xi_{s}^{0}$ and evaluating $k_{\mathrm{geom}}(s)$ at
$\mu=\xi_{s}^{0}$, we get
$$\displaystyle\langle\xi_{s}^{0},k_{\mathrm{dyn}}(s)\rangle$$
$$\displaystyle=\int_{0}^{T(s)}\langle\xi,\frac{\delta h}{\delta\xi}\rangle\big{%
|}_{\xi=\xi_{s}(t)}\mathrm{d}{t},$$
(5.4)
$$\displaystyle\langle\xi_{s}^{0},k_{\mathrm{geom}}(s)\rangle$$
$$\displaystyle=\int_{\Sigma_{s}}\omega_{\mathcal{O}},$$
(5.5)
where $\displaystyle\Sigma_{s}=\bigcup_{0\leq s^{\prime}\leq s}\gamma_{s^{\prime}}$ is the oriented
surface in $\mathcal{O}$ spanned by the periodic trajectories. In particular, if $h$ is a quadratic
form, then $\langle\xi,\frac{\delta h}{\delta\xi}\rangle=2h(\xi)$ and
$$\langle\xi_{s}^{0},k_{\mathrm{dyn}}(s)\rangle=2T(s)h(\xi_{s}^{0}).$$
(5.6)
Here, we use the property that $h$ is constant along $\gamma_{s}$. In the context of the theory of
reconstruction phases for simple mechanical systems, formulas like (5.5) and (5.6)
were derived in [2, 14]. In the case when $G={SO}(3)$, these formulas lead to the
well-known
representations for the rigid body phases [16].
Acknowledgements. The authors thank G. Dávila-Rascón for fruitful discussions
and comments. This research was partially supported by CONACYT under the grant no. 55463. JdL acknowledges a FPU grant
from Ministerio de Educacion y Ciencia and partial support by the research projects MTM2006-10531 and E24/1 (DGA).
References
[1]
Berry M.V.
Quantal phase factors accompanying adiabatic changes,
Proc. R. Soc. London Ser. A, Vol. 392, (1984) 45-57.
[2]
Blaom A.D. Reconstruction phases via Poisson reduction.
Different. Geom. Appl. Vol.12 (3) (2000), 231-252.
[3]
Cariñena J.F., Grabowski J. and Marmo G.
Lie-Scheffers systems: a geometric approach, Bibliopolis, Napoli, 2000.
[4]
Cariñena J.F., Grabowski J. and Marmo G.
Superposition rules, Lie theorem, and partial differential equations.
Rep. Math. Phys. Vol. 60 (2) (2007), 237-258.
[5]
Cariñena J.F. and de Lucas J. Integrability of Lie systems
and some of its applications in Physics. J.Phys. A: Math. Theor. Vol. 41 (2008), 304029.
[6]
Cariñena J.F. and de Lucas J. Quantum Lie systems and integrability conditions, Int. J. Geom. Meth. Mod. Phys. (IJGMMP), Vol. 6 (8) (2009), 1235-1252.
[7]
Cariñena J.F., de Lucas J. and Ramos A. A geometric approach to time evolution operators of Lie quantum systems, Int. J. Theor. Phys. Vol. 48 (5) (2009), 1379-1404.
[8]
Dokovic D.Z. and Hofmann K.H. The surjectivity question
for the exponential function of real Lie groups: A status report. J. Lie
Theory, Vol. 7, (1997), 171-199.
[9]
Duistermaat J.J. and Kolk J.A.C. Lie Groups,
Springer-Verlag, Berlin, Heidelberg, New York, 1999.
[10]
Flores R. and Vorobiev Y. On dynamical and geometric
phases of time-periodic linear Euler equations, Russ. Jour. Math. Phys., Vol. 12 (3) (2005), 326-349.
[11]
Hannay, J., Angle variable holonomy in adiabatic excursion of an integrable Hamiltonian, J. Phys. A: Math. Gen. 18 (1985), 221-230.
[12]
Karasev, M., Vorobjev, Yu., “Adapted Connections, Hamiltonian Dynamics, Geometric Phases and Quantization over Isotropic Submanifolds”, Amer. Math. Soc. Transl. 187 (2), 203-326 (1998).
[13]
Littlejohn, R. G., Cyclic Evolution in Quantum Mechanics and the Phase of Bohr-Sommerfeld and Maslov, Phys. Rev. Lett., 61 (1988), 2159-2162.
[14]
Marsden J.E, Montgomery R. and Ratiu T.S. Reduction,
symmetry, and phases in mechanics. Memoirs AMS, Vol 88, Providence R.I. USA 1990.
[15]
Marsden J.E. and Ratiu T.S. Introduction to Mechanics
and Symmetry, Springer-Verlag, New York, 1999.
[16]
Montgomery R. How much does the rigid body rotate? A Berry’s
phase from the 18th century. Amer. J. Phys. Vol. 59 (5) (1991), 394-398.
[17]
Moskowitz M. and Sacksteder R. The exponential map and
differential equations on real Lie groups. J. Lie Theory, Vol. 13 (2003), 291-306.
[18]
Vorobjev Y.M. Poisson Structures and Linear Euler
Systems over Symplectic Manifolds. Amer. Math. Soc. Transl., Ser. 2, AMS,
Providence, Vol. 216 (2005), 137-239.
[19]
Yakubovich V.A. and Starzhinskii V.M. Linear
Differential Equations with Periodic Coefficients, John Wiley & Sons, New York-Toronto, 1975.
Rubén Flores Espinoza, [email protected]
Javier de Lucas, [email protected]
Yuri M. Vorobiev, [email protected]
A Note on Generalized $q$-Difference Equations
and Their Applications
Involving $q$-Hypergeometric Functions
H. M. Srivastava${}^{1,2,3,\ast}$, Jian Cao${}^{4}$
and Sama Arjika${}^{5,6}$
${}^{1}$Department of Mathematics and Statistics, University of Victoria,
Victoria, British Columbia V8W 3R4, Canada
${}^{2}$Department of Medical Research,
China Medical University Hospital,
China Medical University,
Taichung 40402, Taiwan, Republic of China
${}^{3}$Department of Mathematics and Informatics, Azerbaijan University,
71 Jeyhun Hajibeyli Street, AZ1007 Baku, Azerbaijan
E-Mail: [email protected]
${}^{\ast}$Corresponding Author
${}^{4}$Department of Mathematics, Hangzhou Normal University,
Hangzhou City 311121, Zhejiang Province, People’s Republic of China
E-Mail: [email protected]
${}^{5}$Department of Mathematics and Informatics, University of Agadez, Niger
${}^{6}$International Chair of Mathematical Physics and Applications (ICMPA-UNESCO Chair),
University of Abomey-Calavi, Post Box 072, Cotonou 50, Republic of Benin
E-Mail: [email protected]
Abstract
In this paper, we use two $q$-operators $\mathbb{T}(a,b,c,d,e,yD_{x})$ and $\mathbb{E}(a,b,c,d,e,y\theta_{x})$ to derive two potentially useful
generalizations of the $q$-binomial theorem, a set of two extensions of
the $q$-Chu-Vandermonde summation formula and two new generalizations
of the Andrews-Askey integral by means of the $q$-difference equations.
We also briefly describe relevant connections of various special cases
and consequences of our main results with a number of known results.
2020 Mathematics Subject Classification. Primary 05A30,
11B65, 33D15, 33D45; Secondary 33D60, 39A13, 39B32.
Key Words and Phrases. $q$-Difference operator;
$q$-Binomial theorem; $q$-Hypergeometric functions;
$q$-Chu-Vandermonde summation formula; Andrews-Askey integral; $q$-Series and $q$-integral identities;
$q$-Difference equations; Sears transformation.
1. Introduction, Definitions and Preliminaries
Throughout this paper, we refer to [12] for definitions
and notations. We also suppose that $0<q<1$. For a complex number $a$,
the $q$-shifted factorials are defined by
$$\displaystyle(a;q)_{0}:=1,\quad(a;q)_{n}=\prod_{k=0}^{n-1}(1-aq^{k})\quad\text%
{and}\quad(a;q)_{\infty}:=\prod_{k=0}^{\infty}(1-aq^{k}),$$
(1.1)
where (see, for example, [12] and [19])
$$(a;q)_{n}=\frac{(a;q)_{\infty}}{(aq^{n};q)_{\infty}}\qquad\text{and}\qquad(a;q%
)_{n+m}=(a;q)_{n}(aq^{n};q)_{m}$$
and
$$\left(\frac{q}{a};q\right)_{n}=(-a)^{-n}\;q^{\binom{n+1}{2}}\;\frac{(aq^{-n};q)_{\infty}}{(a;q)_{\infty}}.$$
We adopt the following notation:
$$(a_{1},a_{2},\cdots,a_{r};q)_{m}=(a_{1};q)_{m}(a_{2};q)_{m}\cdots(a_{r};q)_{m}%
\qquad(m\in\mathbb{N}:=\{1,2,3,\cdots\}).$$
Similarly, for the infinite products, we write
$$(a_{1},a_{2},\cdots,a_{r};q)_{\infty}=(a_{1};q)_{\infty}(a_{2};q)_{\infty}%
\cdots(a_{r};q)_{\infty}.$$
The $q$-binomial coefficient is defined by
$$\begin{bmatrix}n\\
k\\
\end{bmatrix}_{q}:=\frac{(q;q)_{n}}{(q;q)_{k}(q;q)_{n-k}}.$$
(1.2)
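These definitions are easy to exercise computationally. The sketch below (exact rational arithmetic; the values of $q$ and $a$ are arbitrary) implements $(a;q)_{n}$ and the $q$-binomial coefficient and verifies the splitting identity $(a;q)_{n+m}=(a;q)_{n}\,(aq^{n};q)_{m}$:

```python
from fractions import Fraction

def qpoch(a, q, n):
    # the q-shifted factorial (a; q)_n of (1.1)
    p = Fraction(1)
    for k in range(n):
        p *= 1 - a * q**k
    return p

def qbinom(n, k, q):
    # the q-binomial coefficient (1.2)
    return qpoch(q, q, n) / (qpoch(q, q, k) * qpoch(q, q, n - k))

q = Fraction(1, 3)
a = Fraction(2, 5)

# the splitting identity (a;q)_{n+m} = (a;q)_n (a q^n; q)_m
assert qpoch(a, q, 5) == qpoch(a, q, 2) * qpoch(a * q**2, q, 3)

# [4, 2]_q is the Gaussian polynomial 1 + q + 2q^2 + q^3 + q^4
assert qbinom(4, 2, q) == 1 + q + 2 * q**2 + q**3 + q**4
```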
The basic (or $q$-) hypergeometric function
of the variable $z$ and with $\mathfrak{r}$ numerator
and $\mathfrak{s}$ denominator parameters
is defined as follows (see, for details, the monographs by
Slater [19, Chapter 3]
and by Srivastava and Karlsson
[26, p. 347, Eq. (272)];
see also [20] and [13]):
$${}_{\mathfrak{r}}\Phi_{\mathfrak{s}}\left[\begin{array}[]{rr}a_{1},a_{2},%
\cdots,a_{\mathfrak{r}};\\
\\
b_{1},b_{2},\cdots,b_{\mathfrak{s}};\end{array}\,q;z\right]:=\sum_{n=0}^{%
\infty}\Big{[}(-1)^{n}\;q^{\binom{n}{2}}\Big{]}^{1+{\mathfrak{s}}-{\mathfrak{r%
}}}\,\frac{(a_{1},a_{2},\cdots,a_{\mathfrak{r}};q)_{n}}{(b_{1},b_{2},\cdots,b_%
{\mathfrak{s}};q)_{n}}\;\frac{z^{n}}{(q;q)_{n}},$$
where $q\neq 0$ when ${\mathfrak{r}}>{\mathfrak{s}}+1$.
We also note that
$${}_{\mathfrak{r}+1}\Phi_{\mathfrak{r}}\left[\begin{array}[]{rr}a_{1},a_{2},%
\cdots,a_{\mathfrak{r}+1}\\
\\
b_{1},b_{2},\cdots,b_{\mathfrak{r}};\end{array}\,q;z\right]=\sum_{n=0}^{\infty%
}\frac{(a_{1},a_{2},\cdots,a_{\mathfrak{r}+1};q)_{n}}{(b_{1},b_{2},\cdots,b_{%
\mathfrak{r}};q)_{n}}\;\frac{z^{n}}{(q;q)_{n}}.$$
We remark in passing that, in a recently-published
survey-cum-expository review article, the so-called $(p,q)$-calculus
was shown to be a rather trivial and inconsequential variation of
the classical $q$-calculus, the additional parameter $p$ being redundant
or superfluous (see, for details, [21, p. 340]).
Basic (or $q$-)
series and basic (or $q$-) polynomials, especially
the basic (or $q$-) hypergeometric functions and basic
(or $q$-) hypergeometric polynomials, are
applicable particularly in several areas of Number Theory
such as the Theory of Partitions and
are useful also in a wide variety
of fields including, for example, Combinatorial Analysis,
Finite Vector Spaces, Lie Theory, Particle Physics, Non-Linear
Electric Circuit Theory, Mechanical Engineering, Theory of
Heat Conduction, Quantum Mechanics, Cosmology, and Statistics
(see also [26, pp. 350–351] and the references cited
therein). Here, in our present investigation, we are mainly concerned
with the Cauchy polynomials $p_{n}(x,y)$ as given
below (see [8] and [12]):
$$p_{n}(x,y):=(x-y)(x-qy)\cdots(x-q^{n-1}y)=\left(\frac{y}{x};q\right)_{n}\,x^{n},$$
(1.3)
together with the following Srivastava-Agarwal
type generating function
(see also [5]):
$$\sum_{n=0}^{\infty}p_{n}(x,y)\;\frac{(\lambda;q)_{n}\,t^{n}}{(q;q)_{n}}={}_{2}%
\Phi_{1}\left[\begin{array}[]{rr}\lambda,\frac{y}{x};\\
\\
0;\end{array}\,q;xt\right].$$
(1.4)
In particular, for $\lambda=0$ in (1.4), we get the
following simpler generating function [8]:
$$\sum_{n=0}^{\infty}p_{n}(x,y)\;\frac{t^{n}}{(q;q)_{n}}=\frac{(yt;q)_{\infty}}{%
(xt;q)_{\infty}}.$$
(1.5)
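Both the closed form (1.3) and the generating function (1.5) lend themselves to direct verification. A small sketch, with arbitrary parameter values and the infinite products truncated:

```python
from fractions import Fraction

def qpoch(a, q, n):
    p = Fraction(1)
    for k in range(n):
        p *= 1 - a * q**k
    return p

def cauchy_p(n, x, y, q):
    # p_n(x, y) = (x - y)(x - qy) ... (x - q^{n-1} y)
    p = Fraction(1)
    for k in range(n):
        p *= x - q**k * y
    return p

# exact check of the closed form in (1.3)
q, x, y = Fraction(1, 2), Fraction(3), Fraction(1, 4)
for n in range(6):
    assert cauchy_p(n, x, y, q) == qpoch(y / x, q, n) * x**n

# numerical check of the generating function (1.5), valid for |xt| < 1
def qpoch_f(a, q, n):
    p = 1.0
    for k in range(n):
        p *= 1.0 - a * q**k
    return p

def p_f(n, x, y, q):
    p = 1.0
    for k in range(n):
        p *= x - q**k * y
    return p

qf, xf, yf, tf = 0.5, 0.9, 0.3, 0.2
lhs = sum(p_f(n, xf, yf, qf) * tf**n / qpoch_f(qf, qf, n) for n in range(80))
rhs = qpoch_f(yf * tf, qf, 200) / qpoch_f(xf * tf, qf, 200)
assert abs(lhs - rhs) < 1e-10
```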
The generating function (1.5)
is also the homogeneous version
of the Cauchy identity or the following
$q$-binomial theorem (see, for example, [12],
[19] and [26]):
$$\sum_{k=0}^{\infty}\frac{(a;q)_{k}}{(q;q)_{k}}\;z^{k}={}_{1}\Phi_{0}\left[%
\begin{array}[]{rr}a;\\
\\
\overline{\hskip 8.535827pt}\,;\end{array}\,q;z\right]=\frac{(az;q)_{\infty}}{%
(z;q)_{\infty}}\qquad(|z|<1).$$
(1.6)
Upon further setting $a=0$, this last relation (1.6)
becomes Euler’s identity
(see, for example, [12]):
$$\sum_{k=0}^{\infty}\frac{z^{k}}{(q;q)_{k}}=\frac{1}{(z;q)_{\infty}}\qquad(|z|<1)$$
(1.7)
and its inverse relation given below [12]:
$$\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(q;q)_{k}}\;q^{\binom{k}{2}}\,z^{k}=(z;q)_{%
\infty}.$$
(1.8)
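The three classical summations (1.6)-(1.8) can likewise be checked numerically by truncating the series and the infinite products; a sketch with arbitrary parameter values satisfying $|z|<1$ and $0<q<1$:

```python
def qpoch(a, q, n):
    p = 1.0
    for k in range(n):
        p *= 1.0 - a * q**k
    return p

N = 300  # truncation of the "infinite" products; the tails are O(q^N)

def binom_sum(a, q, z, terms=150):
    # left-hand side of the q-binomial theorem (1.6)
    return sum(qpoch(a, q, k) / qpoch(q, q, k) * z**k for k in range(terms))

a, q, z = 0.3, 0.5, 0.6

# the q-binomial theorem (1.6)
assert abs(binom_sum(a, q, z) - qpoch(a * z, q, N) / qpoch(z, q, N)) < 1e-10

# Euler's identity (1.7): the a = 0 case
assert abs(binom_sum(0.0, q, z) - 1.0 / qpoch(z, q, N)) < 1e-10

# its inverse relation (1.8)
inv = sum((-1)**k * q**(k * (k - 1) // 2) * z**k / qpoch(q, q, k)
          for k in range(150))
assert abs(inv - qpoch(z, q, N)) < 1e-12
```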
Based upon the $q$-binomial theorem (1.6) and Heine’s
transformations, Srivastava et al. [25] established
a set of two presumably new theta-function identities
(see, for details, [25]).
The following usual $q$-difference operators are defined by
[9, 22, 18]
$${D}_{a}\big{\{}f(a)\big{\}}:=\frac{f(a)-f(qa)}{a},\qquad{\theta}_{a}\big{\{}f(a)\big{\}}:=\frac{f(q^{-1}a)-f(a)}{q^{-1}a},$$
(1.9)
and their Leibniz rules are given by (see [17])
$$\displaystyle D_{a}^{n}\left\{f(a)g(a)\right\}=\sum_{k=0}^{n}\mbox{$\biggl{[}%
\begin{array}[]{c}n\\
k\end{array}\biggr{]}_{\!{q}}$}q^{k(k-n)}D_{a}^{k}\left\{f(a)\right\}D_{a}^{n-%
k}\left\{g(q^{k}a)\right\}$$
(1.10)
and
$$\displaystyle\theta_{a}^{n}\left\{f(a)g(a)\right\}=\sum_{k=0}^{n}\mbox{$\biggl%
{[}\begin{array}[]{c}n\\
k\end{array}\biggr{]}_{\!{q}}$}\theta_{a}^{k}\left\{f(a)\right\}\theta_{a}^{n-%
k}\left\{g(q^{-k}a)\right\},$$
(1.11)
respectively. Here, and in what follows,
$D_{a}^{0}$ and $\theta_{a}^{0}$ are understood as the identity operators.
Recently, Chen and Liu [9, 10] constructed the following
pair of augmentation
operators, which are of great significance for deriving identities
by applying their various special cases:
$$\mathbb{T}(bD_{x})=\sum_{n=0}^{\infty}\frac{(bD_{x})^{n}}{(q;q)_{n}}\qquad\text{and}\qquad\mathbb{E}(b\theta_{x})=\sum_{n=0}^{\infty}\frac{(b\theta_{x})^{n}}{(q;q)_{n}}.$$
(1.12)
Subsequently, Chen and Gu [7] defined
the Cauchy augmentation operators as follows:
$$\mathbb{T}(a,bD_{x})=\sum_{n=0}^{\infty}\frac{(a;q)_{n}}{(q;q)_{n}}\,(bD_{x})^%
{n}$$
(1.13)
and
$$\mathbb{E}(a,b\theta_{x})=\sum_{n=0}^{\infty}\;\frac{(a;q)_{n}}{(q;q)_{n}}\;(-b\theta_{x})^{n}.$$
(1.14)
On the other hand, Fang [11]
and Zhang and Wang [27]
considered the following finite generalized $q$-exponential
operators with two parameters:
$$\mathbb{T}\left[\begin{array}[]{c}q^{-N},w\\
v\end{array}\Big{|}q;tD_{x}\right]=\sum_{n=0}^{N}\;\frac{(q^{-N},w;q)_{n}}{(v,%
q;q)_{n}}\,(tD_{x})^{n}$$
(1.15)
and
$$\mathbb{E}\left[\begin{array}[]{c}q^{-N},w\\
v\end{array}\Big{|}q;t\theta_{x}\right]=\sum_{n=0}^{N}\;\frac{(q^{-N},w;q)_{n}%
}{(v,q;q)_{n}}\,(t\theta_{x})^{n}.$$
(1.16)
Moreover, Li and Tan [14] constructed two generalized
$q$-exponential operators with three parameters as follows:
$$\mathbb{T}\left[\begin{array}[]{c}u,v\\
w\end{array}\Big{|}q;tD_{x}\right]=\sum_{n=0}^{\infty}\;\frac{(u,v;q)_{n}}{(w,%
q;q)_{n}}\,(tD_{x})^{n}$$
(1.17)
and
$$\mathbb{E}\left[\begin{array}[]{c}u,v\\
w\end{array}\Big{|}q;t\theta_{x}\right]=\sum_{n=0}^{\infty}\;\frac{(u,v;q)_{n}%
}{(w,q;q)_{n}}\,(t\theta_{x})^{n}.$$
(1.18)
Finally, we recall that Cao et al. [6] constructed
the following $q$-operators:
$$\displaystyle\mathbb{T}(a,b,c,d,e,yD_{x})=\sum_{n=0}^{\infty}\;\frac{(a,b,c;q)%
_{n}}{(q,d,e;q)_{n}}\;(yD_{x})^{n}$$
(1.19)
and
$$\displaystyle\mathbb{E}(a,b,c,d,e,y\theta_{x})=\sum_{n=0}^{\infty}\;\frac{(-1)%
^{n}q^{n\choose 2}(a,b,c;q)_{n}}{(q,d,e;q)_{n}}\;(y\theta_{x})^{n}$$
(1.20)
and thereby generalized Arjika’s results in [3] by
using the $q$-difference equations (see, for details, [6]).
We remark that the $q$-operator (1.19) is a particular case of
the homogeneous $q$-difference operator $\mathbb{T}({\bf a},{\bf b},cD_{x})$
(see [23]) by taking
$${\bf a}=(a,b,c),\quad{\bf b}=(d,e)\qquad\text{and}\qquad c=y.$$
Furthermore, for $b=c=d=e=0$, the $q$-operator (1.20) reduces to the operator
$\widetilde{L}(a,y;\theta_{x})$ which was investigated by Srivastava
et al. [24].
Proposition 1.
(see [6, Theorem 3])
Let $f(a,b,c,d,e,x,y)$ be a seven-variable analytic function in a
neighborhood of $(a,b,c,d,e,x,y)=(0,0,0,0,0,0,0)\in\mathbb{C}^{7}$.
(I) If $f(a,b,c,d,e,x,y)$ satisfies
the following difference equation$:$
$$\displaystyle x\big{\{}f(a,b,c,d,e,x,y)-f(a,b,c,d,e,x,yq)$$
$$\displaystyle\qquad\qquad\quad\quad-(d+e)q^{-1}\;[f(a,b,c,d,e,x,yq)-f(a,b,c,d,%
e,x,yq^{2})]$$
$$\displaystyle\qquad\qquad\quad\quad+deq^{-2}\;[f(a,b,c,d,e,x,yq^{2})-f(a,b,c,d%
,e,x,yq^{3})]\big{\}}$$
$$\displaystyle\qquad\quad=y\big{\{}[f(a,b,c,d,e,x,y)-f(a,b,c,d,e,xq,y)]$$
$$\displaystyle\qquad\qquad\quad\quad-(a+b+c)[f(a,b,c,d,e,x,yq)-f(a,b,c,d,e,xq,%
yq)]$$
$$\displaystyle\qquad\qquad\quad\quad+(ab+ac+bc)[f(a,b,c,d,e,x,yq^{2})-f(a,b,c,d%
,e,xq,yq^{2})]$$
$$\displaystyle\qquad\qquad\quad\quad-abc[f(a,b,c,d,e,x,yq^{3})-f(a,b,c,d,e,xq,%
yq^{3})]\big{\}},$$
(1.21)
then
$$\displaystyle f(a,b,c,d,e,x,y)=\mathbb{T}(a,b,c,d,e,yD_{x})\{f(a,b,c,d,e,x,0)\}.$$
(1.22)
(II) If $f(a,b,c,d,e,x,y)$ satisfies the following difference equation$:$
$$\displaystyle x\big{\{}f(a,b,c,d,e,xq,y)-f(a,b,c,d,e,xq,yq)$$
$$\displaystyle\qquad\qquad\quad\quad-(d+e)q^{-1}\;[f(a,b,c,d,e,xq,yq)-f(a,b,c,d%
,e,xq,yq^{2})]$$
$$\displaystyle\quad\quad\quad\quad+deq^{-2}\;[f(a,b,c,d,e,xq,yq^{2})-f(a,b,c,d,%
e,xq,yq^{3})]\big{\}}$$
$$\displaystyle\qquad\quad=y\big{\{}[f(a,b,c,d,e,xq,yq)-f(a,b,c,d,e,x,yq)]$$
$$\displaystyle\qquad\qquad\quad\quad-(a+b+c)[f(a,b,c,d,e,xq,yq^{2})-f(a,b,c,d,e%
,x,yq^{2})]$$
$$\displaystyle\qquad\qquad\quad\quad+(ab+ac+bc)[f(a,b,c,d,e,xq,yq^{3})-f(a,b,c,%
d,e,x,yq^{3})]$$
$$\displaystyle\qquad\qquad\quad\quad-abc[f(a,b,c,d,e,xq,yq^{4})-f(a,b,c,d,e,x,%
yq^{4})]\big{\}},$$
(1.23)
then
$$\displaystyle f(a,b,c,d,e,x,y)=\mathbb{E}(a,b,c,d,e,y\theta_{x})\{f(a,b,c,d,e,%
x,0)\}.$$
(1.24)
Liu [15, 16] initiated the method based upon
$q$-difference equations and deduced several results involving
Bailey's ${}_{6}\psi_{6}$ series, the $q$-Mehler formulas
for the Rogers-Szegö polynomials
and a $q$-integral version of the Sears transformation.
Lemma 1.
Each of the following $q$-identities holds true$:$
$$\displaystyle{D}_{a}^{k}\left\{\frac{1}{(as;q)_{\infty}}\right\}=\frac{s^{k}}{%
(as;q)_{\infty}},$$
(1.25)
$$\displaystyle\theta_{a}^{k}\left\{\frac{1}{(as;q)_{\infty}}\right\}=\frac{s^{k%
}q^{-\binom{k}{2}}}{(asq^{-k};q)_{\infty}},$$
(1.26)
$$\displaystyle{D}_{a}^{k}\left\{{(as;q)_{\infty}}\right\}=(-s)^{k}\;q^{\binom{k%
}{2}}\;(asq^{k};q)_{\infty},$$
(1.27)
$$\displaystyle\theta_{a}^{k}\left\{{(as;q)_{\infty}}\right\}=(-s)^{k}\;(as;q)_{%
\infty},$$
(1.28)
$$\displaystyle D_{a}^{n}\left\{\frac{(as;q)_{\infty}}{(a\omega;q)_{\infty}}%
\right\}=\omega^{n}\;\frac{\left(\frac{s}{\omega};q\right)_{n}}{(as;q)_{n}}\;%
\frac{(as;q)_{\infty}}{(a\omega;q)_{\infty}}$$
(1.29)
and
$$\displaystyle\theta_{a}^{n}\left\{\frac{(as;q)_{\infty}}{(a\omega;q)_{\infty}}%
\right\}=\left(-\frac{q}{a}\right)^{n}\;\frac{\left(\frac{s}{\omega};q\right)_%
{n}}{\left(\frac{q}{a\omega};q\right)_{n}}\;\frac{(as;q)_{\infty}}{(a\omega;q)%
_{\infty}}.$$
(1.30)
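The identities of Lemma 1 are easy to spot-check numerically. The sketch below is not part of the derivation; it assumes the standard definitions of the two operators used in this literature, namely $D_{a}f(a)=\frac{f(a)-f(qa)}{a}$ and $\theta_{a}f(a)=\frac{f(q^{-1}a)-f(a)}{q^{-1}a}$ (the paper's own definitions appear before this excerpt, so these are assumptions), and it truncates the infinite products at 200 factors:

```python
from math import prod

q, s, a = 0.5, 0.4, 0.3  # arbitrary test values

def qpoch(x, n=200):
    # (x; q)_n; the default n = 200 approximates (x; q)_infinity
    return prod(1 - x * q**i for i in range(n))

def D(f):
    # q-derivative operator: D_a f(a) = (f(a) - f(qa)) / a   (assumed definition)
    return lambda t: (f(t) - f(q * t)) / t

def theta(f):
    # theta operator: theta_a f(a) = (f(a/q) - f(a)) / (a/q)   (assumed definition)
    return lambda t: (f(t / q) - f(t)) / (t / q)

def iterate(op, f, k):
    for _ in range(k):
        f = op(f)
    return f

f_inv = lambda t: 1 / qpoch(t * s)   # 1/(as; q)_infinity as a function of a
f_dir = lambda t: qpoch(t * s)       # (as; q)_infinity as a function of a

for k in range(1, 5):
    binom2 = k * (k - 1) // 2        # binomial coefficient C(k, 2)
    # (1.25): D_a^k {1/(as;q)_inf} = s^k / (as;q)_inf
    assert abs(iterate(D, f_inv, k)(a) - s**k / qpoch(a * s)) < 1e-8
    # (1.26): theta_a^k {1/(as;q)_inf} = s^k q^{-C(k,2)} / (as q^{-k}; q)_inf
    assert abs(iterate(theta, f_inv, k)(a)
               - s**k * q**(-binom2) / qpoch(a * s * q**-k)) < 1e-8
    # (1.27): D_a^k {(as;q)_inf} = (-s)^k q^{C(k,2)} (as q^k; q)_inf
    assert abs(iterate(D, f_dir, k)(a)
               - (-s)**k * q**binom2 * qpoch(a * s * q**k)) < 1e-8
    # (1.28): theta_a^k {(as;q)_inf} = (-s)^k (as;q)_inf
    assert abs(iterate(theta, f_dir, k)(a) - (-s)**k * qpoch(a * s)) < 1e-8
```

Note that in (1.26) the argument $asq^{-k}$ may exceed $1$ in modulus, but the infinite product still converges since $|q|<1$.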
We now state and prove the $q$-difference formulas as
Theorem 1 below.
Theorem 1.
Each of the following assertions holds true$:$
$$\displaystyle\mathbb{T}(r,f,g,v,w,uD_{a})\left\{\frac{(as;q)_{\infty}}{(az,at;%
q)_{\infty}}\right\}$$
$$\displaystyle\qquad\quad=\frac{(as;q)_{\infty}}{(az,at;q)_{\infty}}\;\sum_{k=0%
}^{\infty}\;\frac{\left(r,f,g,\frac{s}{z},at;q\right)_{k}(zu)^{k}}{(v,w,as,q;q%
)_{k}}\;{}_{3}\Phi_{2}\left[\begin{matrix}\begin{array}[]{rrr}rq^{k},fq^{k},gq%
^{k};\\
\\
vq^{k},wq^{k};\end{array}\end{matrix}q;ut\right]$$
(1.31)
and
$$\displaystyle\mathbb{E}(r,f,g,v,w,u\theta_{a})\left\{\frac{(az,at;q)_{\infty}}%
{(as;q)_{\infty}}\right\}$$
$$\displaystyle\qquad=\frac{(az,at;q)_{\infty}}{(as;q)_{\infty}}\;\sum_{k=0}^{%
\infty}\;\frac{\left(r,f,g,\frac{z}{s},\frac{q}{at};q\right)_{k}\,(-ut)^{k}}{%
\left(v,w,\frac{q}{as},q;q\right)_{k}}\;{}_{3}\Phi_{3}\left[\begin{matrix}%
\begin{array}[]{rrr}rq^{k},fq^{k},gq^{k};\\
\\
vq^{k},wq^{k},0;\end{array}\end{matrix}q;-ut\right],$$
(1.32)
provided that $\max\left\{|az|,|as|,|at|,|ut|\right\}<1.$
Proof.
By means of the definitions (1.19) and (1.20) of the
operators $\mathbb{T}(r,f,g,v,w,uD_{a})$
and $\mathbb{E}(r,f,g,v,w,u\theta_{a})$
and the Leibniz rules (1.10) and (1.11), we observe that
$$\displaystyle\mathbb{T}(r,f,g,v,w,uD_{a})\left\{\frac{(as;q)_{\infty}}{(az,at;%
q)_{\infty}}\right\}$$
$$\displaystyle\qquad=\sum_{n=0}^{\infty}\frac{(r,f,g;q)_{n}u^{n}}{(v,w,q;q)_{n}%
}D_{a}^{n}\left\{\frac{(as;q)_{\infty}}{(az,at;q)_{\infty}}\right\}$$
$$\displaystyle\qquad=\sum_{n=0}^{\infty}\frac{(r,f,g;q)_{n}u^{n}}{(v,w,q;q)_{n}%
}\;\sum_{k=0}^{n}\mbox{$\biggl{[}\begin{array}[]{c}n\\
k\end{array}\biggr{]}_{\!{q}}$}\;q^{k(k-n)}\;D_{a}^{k}\left\{\frac{(as;q)_{%
\infty}}{(az;q)_{\infty}}\right\}D_{a}^{n-k}\left\{\frac{1}{(atq^{k};q)_{%
\infty}}\right\}$$
$$\displaystyle\qquad=\sum_{n=0}^{\infty}\frac{(r,f,g;q)_{n}u^{n}}{(v,w,q;q)_{n}%
}\;\sum_{k=0}^{n}\mbox{$\biggl{[}\begin{array}[]{c}n\\
k\end{array}\biggr{]}_{\!{q}}$}\;q^{k(k-n)}\;\frac{\left(\frac{s}{z};q\right)_%
{k}\,z^{k}}{(as;q)_{k}}\;\frac{(as;q)_{\infty}}{(az;q)_{\infty}}\;\frac{(tq^{k%
})^{n-k}}{(atq^{k};q)_{\infty}}$$
$$\displaystyle\qquad=\frac{(as;q)_{\infty}}{(az,at;q)_{\infty}}\sum_{k=0}^{%
\infty}\;\frac{\left(\frac{s}{z},at;q\right)_{k}\,z^{k}}{(as,q;q)_{k}}\sum_{n=%
k}^{\infty}\frac{(r,f,g;q)_{n}\,u^{n}\,t^{n-k}}{(v,w,q;q)_{n}}$$
$$\displaystyle\qquad=\frac{(as;q)_{\infty}}{(az,at;q)_{\infty}}\sum_{k=0}^{%
\infty}\;\frac{\left(r,f,g,\frac{s}{z},at;q\right)_{k}\,(uz)^{k}}{(v,w,as,q;q)%
_{k}}\sum_{n=0}^{\infty}\frac{(rq^{k},fq^{k},gq^{k};q)_{n}(ut)^{n}}{(vq^{k},wq%
^{k},q;q)_{n}}.$$
(1.33)
Similarly, we have
$$\displaystyle\mathbb{E}(r,f,g,v,w,u\theta_{a})\left\{\frac{(az,at;q)_{\infty}}%
{(as;q)_{\infty}}\right\}$$
$$\displaystyle\quad=\sum_{n=0}^{\infty}\frac{(-1)^{n}\;q^{n\choose 2}(r,f,g;q)_%
{n}u^{n}}{(v,w,q;q)_{n}}\theta_{a}^{n}\left\{\frac{(az,at;q)_{\infty}}{(as;q)_%
{\infty}}\right\}$$
$$\displaystyle\quad=\sum_{n=0}^{\infty}\frac{(-1)^{n}\;q^{n\choose 2}(r,f,g;q)_%
{n}u^{n}}{(v,w,q;q)_{n}}\;\sum_{k=0}^{n}\mbox{$\biggl{[}\begin{array}[]{c}n\\
k\end{array}\biggr{]}_{\!{q}}$}\theta_{a}^{k}\left\{\frac{(az;q)_{\infty}}{(as%
;q)_{\infty}}\right\}\theta_{a}^{n-k}\left\{(atq^{-k};q)_{\infty}\right\}$$
$$\displaystyle\quad=\sum_{n=0}^{\infty}\frac{(-1)^{n}\;q^{n\choose 2}(r,f,g;q)_%
{n}u^{n}}{(v,w,q;q)_{n}}\;\sum_{k=0}^{n}\mbox{$\biggl{[}\begin{array}[]{c}n\\
k\end{array}\biggr{]}_{\!{q}}$}\frac{\left(-\frac{q}{a}\right)^{k}\;\left(%
\frac{z}{s};q\right)_{k}}{\left(\frac{q}{as};q\right)_{k}}$$
$$\displaystyle\qquad\quad\cdot\frac{(az;q)_{\infty}}{(as;q)_{\infty}}(atq^{-k};%
q)_{\infty}\;(-tq^{-k})^{n-k}$$
$$\displaystyle\quad=\frac{(az,at;q)_{\infty}}{(as;q)_{\infty}}\sum_{k=0}^{%
\infty}\;\frac{(-1)^{k}q^{-{k\choose 2}}\left(\frac{z}{s},\frac{q}{at};q\right%
)_{k}\,t^{k}}{\left(\frac{q}{as},q;q\right)_{k}}\sum_{n=k}^{\infty}\frac{q^{{n%
\choose 2}-k(n-k)}(r,f,g;q)_{n}\,u^{n}\,t^{n-k}}{(v,w,q;q)_{n}}$$
$$\displaystyle\quad=\frac{(az,at;q)_{\infty}}{(as;q)_{\infty}}\sum_{k=0}^{%
\infty}\;\frac{\left(r,f,g,\frac{z}{s},\frac{q}{at};q\right)_{k}\,(-ut)^{k}}{%
\left(v,w,\frac{q}{as},q;q\right)_{k}}\sum_{n=0}^{\infty}\frac{q^{n\choose 2}(%
rq^{k},fq^{k},gq^{k};q)_{n}\,(ut)^{n}}{(vq^{k},wq^{k},q;q)_{n}},$$
(1.34)
which evidently completes the proof of Theorem 1.
We remark that, when $g=w=0$, Theorem 1 reduces to
the concluding result of Li and Tan [14].
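The first assertion (1.31) can also be checked by brute force: apply the truncated operator series (1.19), with the $q$-derivative $D_{a}f(a)=\frac{f(a)-f(qa)}{a}$ (an assumed definition, stated before this excerpt) computed by iterated finite differences, and compare against the right-hand side. All parameter values below are arbitrary test choices; the operator series is truncated at $n=8$ because iterated $q$-differences lose roughly $q^{-\binom{n}{2}}$ digits of precision in floating point:

```python
from math import prod

q = 0.5
r, f, g, v, w = 0.3, 0.2, 0.1, 0.6, 0.7   # operator parameters (arbitrary)
a, s, z, t, u = 0.8, 0.25, 0.3, 0.2, 0.2  # max{|az|, |as|, |at|, |ut|} < 1

def qpoch(x, n=200):
    # (x; q)_n; the default n = 200 approximates (x; q)_infinity
    return prod(1 - x * q**i for i in range(n))

def h(av):
    # (as; q)_inf / (az, at; q)_inf as a function of a
    return qpoch(av * s) / (qpoch(av * z) * qpoch(av * t))

def qdiffs(func, a0, nmax):
    # [func, D_a func, ..., D_a^nmax func] at a0, D_a func(a) = (func(a) - func(qa))/a
    grid = [func(a0 * q**j) for j in range(nmax + 1)]
    out = [grid[0]]
    while len(grid) > 1:
        grid = [(grid[j] - grid[j + 1]) / (a0 * q**j) for j in range(len(grid) - 1)]
        out.append(grid[0])
    return out

# Left-hand side of (1.31): the operator series (1.19), truncated at n = 8
d = qdiffs(h, a, 8)
lhs = sum(qpoch(r, n) * qpoch(f, n) * qpoch(g, n) * u**n * d[n]
          / (qpoch(q, n) * qpoch(v, n) * qpoch(w, n)) for n in range(9))

def phi32(A, B, C, D, E, x, terms=60):
    # 3Phi2[A, B, C; D, E; q; x]
    return sum(qpoch(A, k) * qpoch(B, k) * qpoch(C, k) * x**k
               / (qpoch(q, k) * qpoch(D, k) * qpoch(E, k)) for k in range(terms))

# Right-hand side of (1.31)
rhs = h(a) * sum(qpoch(r, k) * qpoch(f, k) * qpoch(g, k)
                 * qpoch(s / z, k) * qpoch(a * t, k) * (z * u)**k
                 / (qpoch(v, k) * qpoch(w, k) * qpoch(a * s, k) * qpoch(q, k))
                 * phi32(r * q**k, f * q**k, g * q**k, v * q**k, w * q**k, u * t)
                 for k in range(40))

assert abs(lhs - rhs) < 1e-6
```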
Corollary 1.
It is asserted that
$$\displaystyle\mathbb{T}(r,f,g,v,w,uD_{s})\left\{\frac{1}{(xs;q)_{\infty}}%
\right\}=\frac{1}{(xs;q)_{\infty}}{}_{3}\Phi_{2}\left[\begin{matrix}\begin{%
array}[]{rrr}r,f,g;\\
\\
v,w;\end{array}\end{matrix}q;xu\right]$$
(1.35)
and
$$\displaystyle\mathbb{E}(r,f,g,v,w,-u\theta_{s})\left\{(xs;q)_{\infty}\right\}=%
(xs;q)_{\infty}\,{}_{3}\Phi_{3}\left[\begin{matrix}\begin{array}[]{rrr}r,f,g;%
\\
\\
v,w,0;\end{array}\end{matrix}q;xu\right],$$
(1.36)
provided that $\max\left\{|xs|,|xu|\right\}<1$.
The goal in this paper is to give potentially
useful generalizations of a number of
$q$-series and $q$-integral identities,
such as the $q$-binomial
theorem (or the $q$-Gauss sum),
the $q$-Chu-Vandermonde summation formula
and the Andrews-Askey integral.
Our paper is organized as follows.
In Section 2, we give two formal generalizations
of the $q$-binomial theorem (or the $q$-Gauss sum)
by applying the $q$-difference equations.
In Section 3, we derive a set of two extensions
of the $q$-Chu-Vandermonde summation formula by making use of the
$q$-difference equations. Next, in Section 4,
we derive two new generalizations
of the Andrews-Askey integral by means of the $q$-difference equations.
Finally, in our last section (Section 5), we present
a number of concluding remarks and observations concerning the various
results which we have considered in this investigation.
2. A Set of Formal Generalizations of the $q$-Binomial Theorem
We begin this section by recalling the following $q$-binomial theorem
(see, for example, [12], [19]
and [26]):
$$\sum_{n=0}^{\infty}\frac{(a;q)_{n}\,x^{n}}{(q;q)_{n}}=\frac{(ax;q)_{\infty}}{(%
x;q)_{\infty}}\qquad(|x|<1).$$
(2.1)
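As a quick numerical illustration (with arbitrarily chosen test values $q=\tfrac12$, $a=0.3$, $x=0.4$), the two sides of (2.1) can be compared directly, truncating the infinite products at 200 factors:

```python
from math import prod

q, a, x = 0.5, 0.3, 0.4   # arbitrary test values with |x| < 1

def qpoch(t, n=200):
    # (t; q)_n; the default n = 200 approximates (t; q)_infinity
    return prod(1 - t * q**i for i in range(n))

# q-binomial theorem (2.1): sum_n (a;q)_n x^n / (q;q)_n = (ax;q)_inf / (x;q)_inf
lhs = sum(qpoch(a, n) * x**n / qpoch(q, n) for n in range(120))
rhs = qpoch(a * x) / qpoch(x)
assert abs(lhs - rhs) < 1e-12
```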
In Theorem 2 below, we give two
generalizations of the $q$-binomial theorem
(2.1) by applying the $q$-difference equations.
Theorem 2.
Each of the following assertions holds true$:$
$$\displaystyle\sum_{n=0}^{\infty}\frac{(a;q)_{n}\;a^{-n}}{(q;q)_{n}}\;\sum_{k=0%
}^{\infty}\;\frac{(q^{-n},ax;q)_{k}\,q^{k}}{(q;q)_{k}}\;\sum_{j,i\geqq 0}\;%
\frac{(r,f,g;q)_{j+i}\;u^{j+i}}{(q;q)_{i}\;(v,w;q)_{j+i}}\;\frac{\left(\frac{c%
}{b},axq^{k};q\right)_{j}}{(cx,q;q)_{j}}\;(aq^{k})^{i}\;b^{j}$$
$$\displaystyle\qquad=\frac{(ax;q)_{\infty}}{(x;q)_{\infty}}\;\sum_{j,i\geqq 0}%
\frac{(r,f,g;q)_{j+i}\;u^{j+i}}{(q;q)_{i}\;(v,w;q)_{j+i}}\;\frac{\left(\frac{c%
}{b},x;q\right)_{j}}{(cx,q;q)_{j}}\;b^{j}\qquad(|x|<1)$$
(2.2)
and
$$\displaystyle\sum_{n=0}^{\infty}\;\frac{(a;q)_{n}\;a^{-n}}{(q;q)_{n}}\;\sum_{k%
=0}^{\infty}\;\frac{(q^{-n},ax;q)_{k}\,q^{k}}{(q;q)_{k}}\;\sum_{i,j\geqq 0}\;%
\frac{(-1)^{j+i}\;q^{\left({}^{i}_{2}\right)}(r,f,g;q)_{j+i}}{(q;q)_{i}\;(v,w;%
q)_{j+i}}$$
$$\displaystyle\qquad\qquad\qquad\cdot\frac{\left(\frac{bq^{1-k}}{a},\frac{q}{cx%
};q\right)_{j}}{\left(\frac{q^{1-k}}{ax},q;q\right)_{j}}\,(uc)^{j+i}$$
$$\displaystyle\qquad=\frac{(ax;q)_{\infty}}{(x;q)_{\infty}}\;\sum_{i,j=0}^{%
\infty}\;\frac{(-1)^{j+i}\;q^{\left({}^{i}_{2}\right)}\;(r,f,g;q)_{j+i}}{(q;q)%
_{i}(v,w;q)_{j+i}}\;\frac{\left(\frac{q}{bx};q\right)_{j}}{\left(\frac{q}{x},q%
;q\right)_{j}}\,(bu)^{j+i},$$
(2.3)
provided that both sides of (2.2) and (2.3) exist.
Remark 1.
For $u=0$ and by using the fact that
$$\sum_{k=0}^{\infty}\frac{(q^{-n},ax;q)_{k}}{(q;q)_{k}}\;q^{k}={}_{2}\Phi_{1}%
\left[\begin{matrix}\begin{array}[]{rrr}q^{-n},ax;\\
\\
0;\end{array}\end{matrix}q;q\right]=(ax)^{n},$$
the assertions (2.2) and (2.3) reduce
to the $q$-binomial theorem (2.1).
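The terminating ${}_{2}\Phi_{1}$ evaluation used in this remark is easy to confirm numerically, since $(q^{-n};q)_{k}=0$ for $k>n$ truncates the series (the test values below are arbitrary):

```python
from math import prod

q, a, x = 0.5, 0.6, 0.7   # arbitrary test values

def qpoch(t, n):
    # (t; q)_n
    return prod(1 - t * q**i for i in range(n))

for n in range(6):
    # the series terminates at k = n because (q^{-n}; q)_k = 0 for k > n
    lhs = sum(qpoch(q**-n, k) * qpoch(a * x, k) * q**k / qpoch(q, k)
              for k in range(n + 1))
    assert abs(lhs - (a * x)**n) < 1e-10
```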
In our proof of Theorem 2, we shall need Theorem 3
and Corollary 2 below.
Theorem 3.
Each of the following assertions holds true$:$
$$\displaystyle\mathbb{T}(r,f,g,v,w,uD_{x})\left\{\frac{p_{n}\left(x,\frac{y}{a}\right)(cx;q)_{\infty}}{(ax,bx;q)_{\infty}}\right\}$$
$$\displaystyle\qquad=\frac{(y;q)_{n}}{a^{n}}\;\frac{(cx;q)_{\infty}}{(ax,bx;q)_%
{\infty}}\sum_{k=0}^{\infty}\frac{(q^{-n},ax;q)_{k}\,q^{k}}{(y,q;q)_{k}}$$
$$\displaystyle\qquad\qquad\cdot\sum_{j,i\geqq 0}\frac{(r,f,g;q)_{j+i}\;u^{j+i}}%
{(q;q)_{i}(v,w;q)_{j+i}}\;\frac{\left(\frac{c}{b},axq^{k};q\right)_{j}}{(cx,q;%
q)_{j}}\;(aq^{k})^{i}\;b^{j}$$
(2.4)
and
$$\displaystyle\mathbb{E}(r,f,g,v,w,u\theta_{x})\left\{\frac{p_{n}\left(x,\frac{%
y}{a}\right)(bx,cx;q)_{\infty}}{(ax;q)_{\infty}}\right\}$$
$$\displaystyle\qquad=\frac{(y;q)_{n}}{a^{n}}\frac{(bx,cx;q)_{\infty}}{(ax;q)_{\infty}}\;\sum_{k=0}^{\infty}\frac{(q^{-n},ax;q)_{k}\,q^{k}}{(y,q;q)_{k}}\;\sum_{i,j\geqq 0}\frac{(-1)^{j+i}\;q^{({}^{i}_{2})}\;(r,f,g;q)_{j+i}}{(q;q)_{i}\;(v,w;q)_{j+i}}$$
$$\displaystyle\qquad\qquad\cdot\frac{\left(\frac{bq^{1-k}}{a},\frac{q}{cx};q%
\right)_{j}}{\left(\frac{q^{1-k}}{ax},q;q\right)_{j}}\,(uc)^{j+i},$$
(2.5)
provided that $\max\left\{|ax|,|bx|,|cx|\right\}<1$.
Corollary 2.
Each of the following assertions holds true$:$
$$\displaystyle\mathbb{T}(r,f,g,v,w,uD_{x})\left\{\frac{x^{n}(cx;q)_{\infty}}{(%
ax,bx;q)_{\infty}}\right\}$$
$$\displaystyle\qquad=\frac{1}{a^{n}}\;\frac{(cx;q)_{\infty}}{(ax,bx;q)_{\infty}%
}\;\sum_{k=0}^{\infty}\;\frac{(q^{-n},ax;q)_{k}\,q^{k}}{(q;q)_{k}}$$
$$\displaystyle\qquad\qquad\cdot\sum_{j,i\geqq 0}\;\frac{(r,f,g;q)_{j+i}\;u^{j+i}}{(q;q)_{i}\;(v,w;q)_{j+i}}\;\frac{\left(\frac{c}{b},axq^{k};q\right)_{j}}{(cx,q;q)_{j}}\;(aq^{k})^{i}\;b^{j}$$
(2.6)
and
$$\displaystyle\mathbb{E}(r,f,g,v,w,u\theta_{x})\left\{\frac{x^{n}(cx,bx;q)_{%
\infty}}{(ax;q)_{\infty}}\right\}$$
$$\displaystyle\qquad=\frac{1}{a^{n}}\frac{(bx,cx;q)_{\infty}}{(ax;q)_{\infty}}%
\sum_{k=0}^{\infty}\;\frac{(q^{-n},ax;q)_{k}\,q^{k}}{(q;q)_{k}}\sum_{i,j\geqq 0%
}\frac{(-1)^{j+i}q^{({}^{i}_{2})}\;(r,f,g;q)_{j+i}}{(q;q)_{i}\;(v,w;q)_{j+i}}$$
$$\displaystyle\qquad\qquad\cdot\frac{\left(\frac{bq^{1-k}}{a},\frac{q}{cx};q%
\right)_{j}}{\left(\frac{q^{1-k}}{ax},q;q\right)_{j}}\;(cu)^{j+i},$$
(2.7)
provided that $\max\left\{|ax|,|bx|,|cx|,|cu|\right\}<1$.
Remark 2.
For $y=0,$ the assertions (2.4) and (2.5)
reduce to (2.6) and (2.7), respectively.
Proof of Theorem 3.
Upon first setting $x\to ax$ in (3.1) and then multiplying
both sides of the resulting equation by
$\frac{(cx;q)_{\infty}}{(bx;q)_{\infty}},$ we get
$$\sum_{k=0}^{\infty}\;\frac{(q^{-n};q)_{k}\,q^{k}}{(y,q;q)_{k}}\;\frac{(cx;q)_{%
\infty}}{(axq^{k},bx;q)_{\infty}}=\frac{(ax)^{n}\left(\frac{y}{ax};q\right)_{n%
}\;(cx;q)_{\infty}}{(y;q)_{n}(ax,bx;q)_{\infty}}.$$
(2.8)
Now, by applying the operator $\mathbb{T}(r,f,g,v,w,uD_{x})$
to both sides of (2.8), it is easy to see that
$$\displaystyle\sum_{k=0}^{\infty}\frac{(q^{-n};q)_{k}\,q^{k}}{(y,q;q)_{k}}\;%
\mathbb{T}(r,f,g,v,w,uD_{x})\left\{\frac{(cx;q)_{\infty}}{(axq^{k},bx;q)_{%
\infty}}\right\}$$
$$\displaystyle\qquad=\frac{a^{n}}{(y;q)_{n}}\;\mathbb{T}(r,f,g,v,w,uD_{x})\left%
\{\frac{x^{n}\left(\frac{y}{ax};q\right)_{n}(cx;q)_{\infty}}{(ax,bx;q)_{\infty%
}}\right\}$$
$$\displaystyle\qquad=\frac{a^{n}}{(y;q)_{n}}\;\mathbb{T}(r,f,g,v,w,uD_{x})\left%
\{\frac{p_{n}\left(x,\frac{y}{a}\right)(cx;q)_{\infty}}{(ax,bx;q)_{\infty}}%
\right\}.$$
(2.9)
The proof of the first assertion (2.4) of Theorem 3
is completed by using the relation (1.31)
in the left-hand side of (2.9).
The proof of the second assertion (2.5) of Theorem 3
is much akin to that of the first assertion (2.4). The details
involved are, therefore, being omitted here.
Proof of Theorem 2.
Multiplying both sides of (2.1) by
$\frac{(cx;q)_{\infty}}{(bx;q)_{\infty}}$, we find that
$$\sum_{n=0}^{\infty}\;\frac{(a;q)_{n}}{(q;q)_{n}}\;\frac{x^{n}\,(cx;q)_{\infty}%
}{(ax,bx;q)_{\infty}}=\frac{(cx;q)_{\infty}}{(bx,x;q)_{\infty}}.$$
(2.10)
Eq. (2.2) can be written equivalently as follows:
$$\displaystyle\sum_{n=0}^{\infty}\;\frac{(a;q)_{n}}{(q;q)_{n}}\cdot\frac{a^{-n}%
(cx;q)_{\infty}}{(bx,ax;q)_{\infty}}\;\sum_{k=0}^{\infty}\;\frac{(q^{-n},ax;q)%
_{k}\,q^{k}}{(q;q)_{k}}$$
$$\displaystyle\qquad\qquad\qquad\cdot\sum_{j,i\geqq 0}\;\frac{(r,f,g;q)_{j+i}\;%
u^{j+i}}{(q;q)_{i}(v,w;q)_{j+i}}\frac{\left(\frac{c}{b},axq^{k};q\right)_{j}}{%
(cx,q;q)_{j}}\;(aq^{k})^{i}\;b^{j}$$
$$\displaystyle\qquad=\frac{(cx;q)_{\infty}}{(bx,x;q)_{\infty}}\;\sum_{j,i=0}%
\frac{(r,f,g;q)_{j+i}\;u^{j+i}}{(q;q)_{i}(v,w;q)_{j+i}}\cdot\frac{\left(\frac{%
c}{b},x;q\right)_{j}}{(cx,q;q)_{j}}\;b^{j}.$$
(2.11)
If we use $F(r,f,g,v,w,x,u)$ to denote the right-hand side
of (2.11),
it is easy to verify that $F(r,f,g,v,w,x,u)$ satisfies
the $q$-difference equation (1.21). By applying (1.22), we thus find that
$$\displaystyle F(r,f,g,v,w,x,u)$$
$$\displaystyle=\mathbb{T}(r,f,g,v,w,uD_{x})\Big{\{}F(r,f,g,v,w,x,0)\Big{\}}$$
$$\displaystyle=\mathbb{T}(r,f,g,v,w,uD_{x})\left\{\frac{(cx;q)_{\infty}}{(bx,x;%
q)_{\infty}}\right\}$$
$$\displaystyle=\mathbb{T}(r,f,g,v,w,uD_{x})\left\{\sum_{n=0}^{\infty}\;\frac{(a%
;q)_{n}}{(q;q)_{n}}\;\frac{x^{n}\,(cx;q)_{\infty}}{(ax,bx;q)_{\infty}}\right\}$$
$$\displaystyle=\sum_{n=0}^{\infty}\;\frac{(a;q)_{n}}{(q;q)_{n}}\;\mathbb{T}(r,f%
,g,v,w,uD_{x})\left\{\frac{x^{n}\,(cx;q)_{\infty}}{(ax,bx;q)_{\infty}}\right\}.$$
(2.12)
The proof of the first assertion (2.2) of Theorem 2
can now be completed by making use of the relation (2.6).
The proof of the second assertion (2.3) of Theorem 2
is much akin to that of the first assertion (2.2). The details
involved are, therefore, being omitted here.
3. Two Generalizations of the $q$-Chu-Vandermonde Summation Formula
The $q$-Chu-Vandermonde summation formula is recalled here as follows
(see, for example, [1] and [12]):
$${}_{2}\Phi_{1}\left[\begin{matrix}\begin{array}[]{rrr}q^{-n},x;\\
\\
y;\end{array}\end{matrix}q;q\right]=\frac{\left(\frac{y}{x};q\right)_{n}}{(y;q%
)_{n}}\;x^{n}\qquad\big{(}n\in\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}\big{)}.$$
(3.1)
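The finite identity (3.1) can be verified directly in a few lines (the test values are arbitrary):

```python
from math import prod

q, x, y = 0.5, 0.3, 0.7   # arbitrary test values

def qpoch(t, n):
    # (t; q)_n
    return prod(1 - t * q**i for i in range(n))

for n in range(7):
    # LHS of (3.1): 2Phi1[q^{-n}, x; y; q; q], terminating at k = n
    lhs = sum(qpoch(q**-n, k) * qpoch(x, k) * q**k
              / (qpoch(q, k) * qpoch(y, k)) for k in range(n + 1))
    # RHS of (3.1)
    rhs = qpoch(y / x, n) * x**n / qpoch(y, n)
    assert abs(lhs - rhs) < 1e-10
```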
In this section, we give two generalizations of the
$q$-Chu-Vandermonde summation formula (3.1)
by applying $q$-difference equations.
Theorem 4.
The following assertion holds true for $y\neq 0$$:$
$$\displaystyle\sum_{k=0}^{n}\frac{(q^{-n},x;q)_{k}\,q^{k}}{(q,y;q)_{k}}\;{}_{3}%
\Phi_{2}\left[\begin{matrix}\begin{array}[]{rrr}r,f,g;\\
\\
v,w;\end{array}\end{matrix}q;uq^{k}\right]$$
$$\displaystyle\qquad=\frac{x^{n}\left(\frac{y}{x};q\right)_{n}}{(y;q)_{n}}\;%
\sum_{k,j\geqq 0}\frac{(r,f,g;q)_{k+j}}{(q;q)_{j}\;(v,w;q)_{k+j}}\;\frac{\left%
(\frac{q^{1-n}}{y},\frac{qx}{y};q\right)_{k}}{\left(\frac{xq^{1-n}}{y},q;q%
\right)_{k}}\;u^{k+j}\left(\frac{q}{y}\right)^{j}.$$
(3.2)
We next derive another generalization of the $q$-Chu-Vandermonde
summation formula (3.1) as follows.
Theorem 5.
For $m\in\mathbb{N}_{0}$ and $y\neq 0$$,$
it is asserted that
$$\displaystyle{}_{2}\Phi_{1}\left[\begin{matrix}\begin{array}[]{rrr}q^{-n},x;\\
\\
y;\end{array}\end{matrix}q;q^{1+m}\right]=\frac{x^{n}\left(\frac{y}{x};q\right%
)_{n}}{(y;q)_{n}}\;\sum_{j=0}^{m}\begin{bmatrix}m\\
j\\
\end{bmatrix}_{q}\;\frac{\left(\frac{q^{1-n}}{y},\frac{qx}{y};q\right)_{m-j}}{%
\left(\frac{xq^{1-n}}{y};q\right)_{m-j}}\;\left(\frac{q}{y}\right)^{j}.$$
(3.3)
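Since both sides of (3.3) are finite sums, Theorem 5 can be confirmed numerically over a range of $n$ and $m$. The sketch below uses arbitrary test values and the $q$-binomial coefficient $\begin{bmatrix}m\\ j\end{bmatrix}_{q}=\frac{(q;q)_{m}}{(q;q)_{j}(q;q)_{m-j}}$:

```python
from math import prod

q, x, y = 0.5, 0.3, 0.7   # arbitrary test values, y != 0

def qpoch(t, n):
    # (t; q)_n
    return prod(1 - t * q**i for i in range(n))

def qbinom(m, j):
    # q-binomial coefficient [m, j]_q = (q;q)_m / ((q;q)_j (q;q)_{m-j})
    return qpoch(q, m) / (qpoch(q, j) * qpoch(q, m - j))

for n in range(5):
    for m in range(5):
        # LHS of (3.3): 2Phi1[q^{-n}, x; y; q; q^{1+m}], terminating at k = n
        lhs = sum(qpoch(q**-n, k) * qpoch(x, k) * q**((1 + m) * k)
                  / (qpoch(q, k) * qpoch(y, k)) for k in range(n + 1))
        # RHS of (3.3)
        rhs = (x**n * qpoch(y / x, n) / qpoch(y, n)
               * sum(qbinom(m, j)
                     * qpoch(q**(1 - n) / y, m - j) * qpoch(q * x / y, m - j)
                     / qpoch(x * q**(1 - n) / y, m - j)
                     * (q / y)**j for j in range(m + 1)))
        assert abs(lhs - rhs) < 1e-9
```

The case $n=0$ of this loop is exactly the identity (3.4) of Remark 3 below.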
Remark 3.
For $u=0$ or $m=0,$ the assertion (3.2) or (3.3) reduces to
the $q$-Chu-Vandermonde summation formula (3.1).
Furthermore, if we first set $k+j=m$ and then extract the coefficients
of $\displaystyle\frac{(r,f,g;q)_{m}}{(v,w;q)_{m}}u^{m}$
from the two members of the assertion (3.2) of Theorem 4$,$
we obtain the transformation formula (3.3), which leads us to
the $q$-Chu-Vandermonde summation formula (3.1) when $m=0$.
Also, upon putting $n=0,$ the assertion (3.3)
reduces to the following identity$:$
$$\displaystyle\sum_{j=0}^{m}\begin{bmatrix}m\\
j\\
\end{bmatrix}_{q}\;\left(\frac{q}{y};q\right)_{m-j}\;\left(\frac{q}{y}\right)^%
{j}=1\qquad(y\neq 0).$$
(3.4)
Proof of Theorem 4.
We first write (3.1) in the following form:
$$\sum_{k=0}^{n}\;\frac{(q^{-n};q)_{k}\,q^{k}}{(y,q;q)_{k}}\;\frac{1}{(xq^{k};q)%
_{\infty}}=\frac{(-1)^{n}\;y^{n}\;q^{\left({}^{n}_{2}\right)}}{(y;q)_{n}}\frac%
{\left(\frac{xq^{1-n}}{y};q\right)_{\infty}}{\left(x,\frac{qx}{y};q\right)_{%
\infty}}.$$
(3.5)
Eq. (3.2) can be written equivalently as follows:
$$\displaystyle\sum_{k=0}^{\infty}\;\frac{(q^{-n};q)_{k}\,q^{k}}{(y,q;q)_{k}}%
\cdot\frac{1}{(xq^{k};q)_{\infty}}\;{}_{3}\Phi_{2}\left[\begin{matrix}\begin{%
array}[]{rrr}r,f,g;\\
\\
v,w;\end{array}\end{matrix}q;uq^{k}\right]$$
$$\displaystyle\qquad=\frac{(-1)^{n}\;y^{n}\;q^{\left({}^{n}_{2}\right)}}{(y;q)_%
{n}}\;\frac{\left(\frac{xq^{1-n}}{y};q\right)_{\infty}}{\left(x,\frac{qx}{y};q%
\right)_{\infty}}\cdot\sum_{i,j\geqq 0}\;\frac{(r,f,g;q)_{j+i}\;u^{j+i}}{(q;q)%
_{i}\;(v,w;q)_{j+i}}\;\frac{\left(\frac{q^{1-n}}{y},\frac{qx}{y};q\right)_{j}}%
{\left(\frac{xq^{1-n}}{y},q;q\right)_{j}}\;\left(\frac{q}{y}\right)^{i}.$$
(3.6)
If we use $G(r,f,g,v,w,x,u)$ to denote the right-hand side of (3.6),
it is easy to observe that $G(r,f,g,v,w,x,u)$ satisfies
the $q$-difference equation (1.21). By using (1.22), we obtain
$$\displaystyle G(r,f,g,v,w,x,u)$$
$$\displaystyle=\mathbb{T}(r,f,g,v,w,uD_{x})\Big{\{}G(r,f,g,v,w,x,0)\Big{\}}$$
$$\displaystyle=\mathbb{T}(r,f,g,v,w,uD_{x})\left\{\frac{(-1)^{n}\;y^{n}\;q^{%
\left({}^{n}_{2}\right)}}{(y;q)_{n}}\;\frac{\left(\frac{xq^{1-n}}{y};q\right)_%
{\infty}}{\left(x,\frac{qx}{y};q\right)_{\infty}}\right\}$$
$$\displaystyle=\mathbb{T}(r,f,g,v,w,uD_{x})\left\{\sum_{k=0}^{\infty}\frac{(q^{%
-n};q)_{k}\,q^{k}}{(y,q;q)_{k}}\;\frac{1}{(xq^{k};q)_{\infty}}\right\}$$
$$\displaystyle=\sum_{k=0}^{n}\frac{(q^{-n};q)_{k}\,q^{k}}{(y,q;q)_{k}}\;\mathbb%
{T}(r,f,g,v,w,uD_{x})\left\{\frac{1}{(xq^{k};q)_{\infty}}\right\}.$$
(3.7)
Finally, by using the fact that
$$\mathbb{T}(r,f,g,v,w,uD_{x})\left\{\frac{1}{(xq^{k};q)_{\infty}}\right\}=\frac%
{1}{(xq^{k};q)_{\infty}}\;{}_{3}\Phi_{2}\left[\begin{matrix}\begin{array}[]{%
rrr}r,f,g;\\
\\
v,w;\end{array}\end{matrix}q;uq^{k}\right],$$
(3.8)
and after some simplification involving $\frac{1}{(x;q)_{\infty}}$,
we arrive at the left-hand side of (3.2). This completes the proof of Theorem 4.
4. New Generalizations of the Andrews-Askey Integral
The following famous formula is known as the
Andrews-Askey integral (see, for details, [2]).
It was derived from Ramanujan’s celebrated
${}_{1}\Psi_{1}$-summation formula.
Proposition 2.
(see [2, Eq. (2.1)]).
For $\max\left\{|ac|,|ad|,|bc|,|bd|\right\}<1,$
it is asserted that
$$\displaystyle\int_{c}^{d}\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_{%
\infty}}{(at,bt;q)_{\infty}}\;{\rm d}_{q}t=\frac{d(1-q)\left(q,\frac{dq}{c},%
\frac{c}{d},abcd;q\right)_{\infty}}{(ac,ad,bc,bd;q)_{\infty}}.$$
(4.1)
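With the standard Jackson $q$-integral $\int_{c}^{d}f(t)\,{\rm d}_{q}t=\int_{0}^{d}f(t)\,{\rm d}_{q}t-\int_{0}^{c}f(t)\,{\rm d}_{q}t$, where $\int_{0}^{d}f(t)\,{\rm d}_{q}t=d(1-q)\sum_{n\geqq 0}q^{n}f(dq^{n})$ (the definition is assumed here, as it appears before this excerpt), the identity (4.1) can be spot-checked numerically with arbitrary admissible parameters:

```python
from math import prod

q = 0.5
a, b, c, d = 0.3, 0.2, 0.4, 0.6   # arbitrary values with max{|ac|,|ad|,|bc|,|bd|} < 1

def qpoch(t, n=200):
    # (t; q)_n; the default n = 200 approximates (t; q)_infinity
    return prod(1 - t * q**i for i in range(n))

def qint(f, lo, hi, terms=200):
    # Jackson q-integral on [lo, hi]: int_lo^hi = int_0^hi - int_0^lo,
    # with int_0^e f(t) d_q t = e (1 - q) sum_{n>=0} q^n f(e q^n)
    def from0(e):
        return e * (1 - q) * sum(q**n * f(e * q**n) for n in range(terms))
    return from0(hi) - from0(lo)

integrand = lambda t: (qpoch(q * t / c) * qpoch(q * t / d)
                       / (qpoch(a * t) * qpoch(b * t)))
lhs = qint(integrand, c, d)
rhs = (d * (1 - q) * qpoch(q) * qpoch(d * q / c) * qpoch(c / d)
       * qpoch(a * b * c * d)
       / (qpoch(a * c) * qpoch(a * d) * qpoch(b * c) * qpoch(b * d)))
assert abs(lhs - rhs) < 1e-10
```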
The Andrews-Askey integral (4.1) is indeed an important
formula in the theory of $q$-series (see [9]).
Recently, Cao [4] gave the following two generalizations of
the Andrews-Askey integral (4.1) by the method based upon
$q$-difference equations.
Proposition 3.
(see [4, Theorems 14 and 15])
For $N\in\mathbb{N}$ and $r=q^{-N},$ suppose that
$$\max\left\{|ac|,|ad|,|bc|,|bd|,\left|\frac{qwr}{v}\right|,\left|\frac{q}{v}%
\right|\right\}<1.$$
Then
$$\displaystyle\int_{c}^{d}\;\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_{%
\infty}}{(at,bt;q)_{\infty}}\,{}_{4}\Phi_{2}\left[\begin{matrix}\begin{array}[%
]{rrr}r,w,\frac{c}{t},abcd;\\
\\
ac,\frac{qwr}{v};\end{array}\end{matrix}q;\frac{qt}{vbcd}\right]\;{\rm d}_{q}t$$
$$\displaystyle\qquad\quad=\frac{d(1-q)\left(q,\frac{dq}{c},\frac{c}{d},abcd,%
\frac{qw}{v},\frac{qr}{v};q\right)_{\infty}}{\left(ac,ad,bc,bd,\frac{qwr}{v},%
\frac{q}{v};q\right)_{\infty}}\;{}_{2}\Phi_{1}\left[\begin{matrix}\begin{array%
}[]{rrr}w,r;\\
\\
v;\end{array}\end{matrix}q;\frac{q}{bc}\right].$$
(4.2)
Furthermore$,$ for $N\in\mathbb{N}$ and $r=q^{-N},$ suppose that
$$\max\left\{|ac|,|ad|,|bc|,|bd|,\left|\frac{v}{w}\right|,\left|\frac{v}{r}%
\right|\right\}<1.$$
Then
$$\displaystyle\int_{c}^{d}\;\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_{%
\infty}}{(at,bt;q)_{\infty}}\,{}_{4}\Phi_{2}\left[\begin{matrix}\begin{array}[%
]{rrr}r,w,\frac{c}{t},\frac{q}{ad};\\
\\
\frac{q}{at},\frac{qrw}{v};\end{array}\end{matrix}q;q\right]\;d_{q}t$$
$$\displaystyle\qquad\quad=\frac{d(1-q)\left(q,\frac{dq}{c},\frac{c}{d},abcd,%
\frac{v}{wr},v;q\right)_{\infty}}{\left(ac,ad,bc,bd,\frac{v}{w},\frac{v}{r};q%
\right)_{\infty}}\;{}_{2}\Phi_{1}\left[\begin{matrix}\begin{array}[]{rrr}w,r;%
\\
\\
v;\end{array}\end{matrix}q;\frac{vbc}{wr}\right].$$
(4.3)
In this section, we give the following two generalizations of the
Andrews-Askey integral (4.1) by using the method
of $q$-difference equations.
Theorem 6.
For $M\in\mathbb{N}$ and $r=q^{-M},$ suppose that
$$\max\left\{|ac|,|ad|,|bc|,|bd|,\left|\frac{q}{bc}\right|\right\}<1.$$
Then
$$\displaystyle\int_{c}^{d}\;\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_{%
\infty}}{(at,bt;q)_{\infty}}\;\sum_{k=0}^{\infty}\;\frac{\left(r,f,g,\frac{c}{%
t},abcd;q\right)_{k}\;\left(\frac{qt}{bcd}\right)^{k}}{(v,w,ac,q;q)_{k}}\;{}_{%
3}\Phi_{2}\left[\begin{matrix}\begin{array}[]{rrr}rq^{k},fq^{k},gq^{k};\\
\\
vq^{k},wq^{k};\end{array}\end{matrix}q;q\right]\;{\rm d}_{q}t$$
$$\displaystyle\qquad\quad=\frac{d(1-q)\left(q,\frac{dq}{c},\frac{c}{d},abcd;q%
\right)_{\infty}}{(ac,ad,bc,bd;q)_{\infty}}\;{}_{3}\Phi_{2}\left[\begin{matrix%
}\begin{array}[]{rrr}r,f,g;\\
\\
v,w;\end{array}\end{matrix}q;\frac{q}{bc}\right].$$
(4.4)
Theorem 7.
For $M\in\mathbb{N}$ and $r=q^{-M},$ suppose that
$\max\left\{|ac|,|ad|,|bc|,|bd|\right\}<1$.
Then
$$\displaystyle\int_{c}^{d}\;\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_{%
\infty}}{(at,bt;q)_{\infty}}\;\sum_{k=0}^{\infty}\;\frac{\left(r,f,g,\frac{c}{%
t},\frac{q}{ad};q\right)_{k}\,\left(\frac{vw}{rfg}\right)^{k}}{\left(v,w,\frac%
{q}{at},q;q\right)_{k}}\;{}_{3}\Phi_{3}\left[\begin{matrix}\begin{array}[]{rrr%
}rq^{k},fq^{k},gq^{k};\\
\\
vq^{k},wq^{k},0;\end{array}\end{matrix}q;-\frac{vw}{rfg}\right]\;{\rm d}_{q}t$$
$$\displaystyle\qquad\quad=\frac{d(1-q)\left(q,\frac{dq}{c},\frac{c}{d},abcd;q%
\right)_{\infty}}{(ac,ad,bc,bd;q)_{\infty}}\;{}_{3}\Phi_{3}\left[\begin{matrix%
}\begin{array}[]{rrr}r,f,g;\\
\\
v,w,0;\end{array}\end{matrix}q;\frac{vwbc}{rfg}\right].$$
(4.5)
Remark 4.
For $r=1,$ both (4.4) and (4.5)
reduce to the Andrews-Askey integral (4.1).
Moreover$,$ for $r=q^{-N},$ $g=w=0$ and $u=\frac{q}{bcd},$
the assertion (4.4) of
Theorem 6 reduces to (4.2).
For $r=q^{-N},$ $g=w=0$ and $u=\frac{v}{rfbcd},$
the assertion (4.5) of
Theorem 7 reduces to (4.3).
Proof of Theorems 6 and 7.
Eq. (4.4) can be written equivalently as follows:
$$\displaystyle\int_{c}^{d}\;\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_{%
\infty}}{(bt;q)_{\infty}}\cdot\frac{(ac;q)_{\infty}}{(at,abcd;q)_{\infty}}\;%
\sum_{k=0}^{\infty}\;\frac{\left(r,f,g,\frac{c}{t},abcd;q\right)_{k}\left(%
\frac{qt}{bcd}\right)^{k}}{(v,w,ac,q;q)_{k}}$$
$$\displaystyle\qquad\qquad\qquad\cdot{}_{3}\Phi_{2}\left[\begin{matrix}\begin{%
array}[]{rrr}rq^{k},fq^{k},gq^{k};\\
\\
vq^{k},wq^{k};\end{array}\end{matrix}q;q\right]\;{\rm d}_{q}t$$
$$\displaystyle\qquad\quad=\frac{d(1-q)\left(q,\frac{dq}{c},\frac{c}{d};q\right)%
_{\infty}}{(bc,bd;q)_{\infty}}\cdot\frac{1}{(ad;q)_{\infty}}\;{}_{3}\Phi_{2}%
\left[\begin{matrix}\begin{array}[]{rrr}r,f,g;\\
\\
v,w;\end{array}\end{matrix}q;\frac{q}{bc}\right].$$
(4.6)
If we use $H(r,f,g,v,w,a,u)$ to denote the right-hand side of (4.6),
it is easy to see that $H(r,f,g,v,w,a,u)$ satisfies
the $q$-difference equation (1.21)
with $u=\frac{q}{bcd}$. By making use of
(1.22), we thus find that
$$\displaystyle H(r,f,g,v,w,a,u)$$
$$\displaystyle=\mathbb{T}(r,f,g,v,w,uD_{a})\Big{\{}H(r,f,g,v,w,a,0)\Big{\}}$$
$$\displaystyle=\mathbb{T}\left(r,f,g,v,w,\frac{q}{bcd}\;D_{a}\right)\Big{\{}H(1%
,f,g,v,w,a,u)\Big{\}}$$
$$\displaystyle=\mathbb{T}\left(r,f,g,v,w,\frac{q}{bcd}\;D_{a}\right)\left\{%
\frac{d(1-q)\left(q,\frac{dq}{c},\frac{c}{d};q\right)_{\infty}}{(bc,bd;q)_{%
\infty}}\cdot\frac{1}{(ad;q)_{\infty}}\right\}$$
$$\displaystyle=\mathbb{T}\left(r,f,g,v,w,\frac{q}{bcd}\;D_{a}\right)\left\{\int%
_{c}^{d}\;\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_{\infty}}{(bt;q)_{%
\infty}}\cdot\frac{(ac;q)_{\infty}}{(at,abcd;q)_{\infty}}\;{\rm d}_{q}t\right\}$$
$$\displaystyle=\int_{c}^{d}\;\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_{%
\infty}}{(bt;q)_{\infty}}\cdot\mathbb{T}\left(r,f,g,v,w,\frac{q}{bcd}\;D_{a}%
\right)\left\{\frac{(ac;q)_{\infty}}{(at,abcd;q)_{\infty}}\right\}{\rm d}_{q}t.$$
Now, by applying the fact that
$$\displaystyle\mathbb{T}\left(r,f,g,v,w,\frac{q}{bcd}\;D_{a}\right)\left\{\frac%
{(ac;q)_{\infty}}{(at,abcd;q)_{\infty}}\right\}$$
$$\displaystyle\qquad\quad=\frac{(ac;q)_{\infty}}{(at,abcd;q)_{\infty}}\;\sum_{k%
=0}^{\infty}\;\frac{\left(r,f,g,\frac{c}{t},abcd;q\right)_{k}\;\left(\frac{qt}%
{bcd}\right)^{k}}{(v,w,ac,q;q)_{k}}\;{}_{3}\Phi_{2}\left[\begin{matrix}\begin{%
array}[]{rrr}rq^{k},fq^{k},gq^{k};\\
\\
vq^{k},wq^{k};\end{array}\end{matrix}q;q\right],$$
we get the left-hand side of (4.4).
The proof of the assertion (4.5) of Theorem 7
is much akin to that of the assertion (4.4)
of Theorem 6. The details involved are,
therefore, being omitted here.
The proofs of Theorems 6 and 7
are thus completed.
5. Concluding Remarks and Observations
In our present investigation, we have introduced a set of
two $q$-operators $\mathbb{T}(a,b,c,d,e,yD_{x})$ and
$\mathbb{E}(a,b,c,d,e,y\theta_{x})$ and have applied them
to derive two potentially useful
generalizations of the $q$-binomial theorem, two extensions of
the $q$-Chu-Vandermonde summation formula and two new generalizations
of the Andrews-Askey integral by means of the $q$-difference equations.
We have also briefly described relevant
connections of various special cases
and consequences of our main results
with several known results.
It is believed that the $q$-series and $q$-integral identities which we
have presented in this paper, as well as the various related recent works
cited here, will provide encouragement and motivation for further research
on the topics that are dealt with and investigated in this paper.
Conflicts of Interest: The authors declare that they
have no conflicts of interest.
References
[1]
G. E. Andrews, $q$-Series$:$ Their Development and
Applications in Analysis$,$ Number
Theory$,$ Combinatorics$,$ Physics and Computer Algebra,
CBMS Regional Conference Lecture Series, Vol. 66,
American Mathematical Society, Providence, Rhode Island,
1986.
[2]
G. E. Andrews and R. Askey, Another
$q$-extension of the beta function,
Proc. Amer. Math. Soc. 81 (1981), 97–100.
[3]
S. Arjika, $q$-difference equations for homogeneous
$q$-difference operators and their applications,
J. Differ. Equ. Appl. 26 (2020), 987–999.
[4]
J. Cao, A note on generalized $q$-difference equations
for $q$-beta and Andrews-Askey integral, J. Math. Anal. Appl.
412 (2014), 841–851.
[5]
J. Cao and H. M. Srivastava, Some $q$-generating functions of
the Carlitz and Srivastava-Agarwal types associated
with the generalized Hahn polynomials and the
generalized Rogers-Szegö polynomials,
Appl. Math. Comput. 219 (2013), 8398–8406.
[6]
J. Cao, B. Xu and S. Arjika, A note on generalized
$q$-difference equations for general Al-Salam-Carlitz polynomials,
J. Differ. Equ. Appl. (2020) (submitted).
[7]
V. Y. B. Chen and N. S. S. Gu, The Cauchy operator
for basic hypergeometric series, Adv. Appl. Math.
41 (2008), 177–196.
[8]
W. Y. C. Chen, A. M. Fu and B. Zhang, The homogeneous
$q$-difference operator, Adv. Appl. Math. 31 (2003), 659–668.
[9]
W. Y. C. Chen and Z.-G. Liu, Parameter augmenting for
basic hypergeometric series. II, J. Combin. Theory Ser. A
80 (1997), 175–195.
[10]
W. Y. C. Chen and Z.-G. Liu, Parameter augmentation for
basic hypergeometric series. I,
In: B. E. Sagan and R. P. Stanley (Editors),
Mathematical Essays in Honor of
Gian-Carlo Rota, Birkhäuser, Basel and New York,
1998, pp. 111–129.
[11]
J. P. Fang, Some applications of $q$-differential operator,
J. Korean Math. Soc. 47 (2010), 223–233.
[12]
G. Gasper and M. Rahman,
Basic Hypergeometric Series
(with a Foreword by Richard Askey), Encyclopedia of
Mathematics and Its Applications, Vol. 35,
Cambridge University Press, Cambridge, New York,
Port Chester, Melbourne and Sydney, 1990;
Second edition, Encyclopedia of Mathematics and
Its Applications, Vol. 96, Cambridge
University Press, Cambridge, London and New York, 2004.
[13]
R. Koekoek and R. F. Swarttouw, The Askey-scheme of hypergeometric
orthogonal polynomials and its $q$-analogue, Report No. 98-17,
Delft University of Technology, Delft, The Netherlands, 1998.
[14]
N. N. Li and W. Tan, Two generalized $q$-exponential operators
and their applications, Adv. Differ. Equ. 2016 (2016), Article ID 53,
1–14.
[15]
Z.-G. Liu, Two $q$-difference equations and $q$-operator identities,
J. Differ. Equ. Appl. 16 (2010), 1293–1307.
[16]
Z.-G. Liu, An extension of the non-terminating ${}_{6}\phi_{5}$-summation
and the Askey-Wilson polynomials, J. Differ. Equ. Appl.
17 (2011), 1401–1411.
[17]
S. Roman, The theory of the umbral calculus. I,
J. Math. Anal. Appl. 87 (1982), 58–115.
[18]
H. L. Saad and A. A. Sukhi, Another homogeneous
$q$-difference operator,
Appl. Math. Comput. 215 (2010), 4332–4339.
[19]
L. J. Slater, Generalized Hypergeometric Functions,
Cambridge University Press, Cambridge, London and New York, 1966.
[20]
H. M. Srivastava, Certain $q$-polynomial expansions for functions of
several variables. I and II, IMA J. Appl. Math. 30 (1983), 315–323;
ibid. 33 (1984), 205–209.
[21]
H. M. Srivastava, Operators of basic $($or $q$-$)$ calculus and
fractional $q$-calculus and their applications in geometric function
theory of complex analysis, Iran. J. Sci. Technol. Trans. A$:$ Sci.
44 (2020), 327–344.
[22]
H. M. Srivastava and M. A. Abdlhusein, New forms of the Cauchy
operator and some of their applications, Russian J. Math. Phys.
23 (2016), 124–134.
[23]
H. M. Srivastava and S. Arjika, Generating functions for some
families of the generalized Al-Salam-Carlitz $q$-polynomials,
Adv. Differ. Equ. 2020 (2020), Article ID 498, 1–17.
[24]
H. M. Srivastava, S. Arjika and A. Sherif Kelil,
Some homogeneous $q$-difference operators and the associated
generalized Hahn polynomials, Appl. Set-Valued Anal. Optim.
1 (2019), 187–201.
[25]
H. M. Srivastava, M. P. Chaudhary and F. K. Wakene, A family of
theta-function identities based upon $q$-binomial theorem and
Heine’s transformations, Montes Taurus J. Pure Appl. Math.
2 (2020), 1-6.
[26]
H. M. Srivastava and P. W. Karlsson, Multiple Gaussian
Hypergeometric Series, Halsted Press (Ellis Horwood Limited, Chichester),
John Wiley and Sons, New York, Chichester, Brisbane and Toronto, 1985.
[27]
Z. Z. Zhang and J. Z. Wang, Finite $q$-exponential operators
with two parameters and their applications, Acta Math. Sinica
53 (2010), 1007–1018 (in Chinese). |
On Okounkov’s conjecture connecting Hilbert schemes of points
and multiple $q$-zeta values
Zhenbo Qin${}^{1}$
Department of Mathematics, University of Missouri, Columbia, MO
65211, USA
[email protected]
and
Fei Yu${}^{2}$
Department of Mathematics, Xiamen University, Xiamen, China
[email protected]
Abstract.
We compute the generating series
for the intersection pairings between the total Chern classes of the tangent bundles
of the Hilbert schemes of points on a smooth projective surface
and the Chern characters of tautological bundles over these Hilbert schemes.
Modulo the lower weight term, we verify Okounkov’s conjecture [Oko]
connecting these Hilbert schemes and multiple $q$-zeta values. In addition,
this conjecture is completely proved when the surface is abelian.
We also determine some universal constants
in the sense of Boissière and Nieper-Wisskirchen [Boi, BN] regarding
the total Chern classes of the tangent bundles of these Hilbert schemes.
The main approach of this paper is to use the set-up of Carlsson and Okounkov
outlined in [Car, CO] and the structure of the Chern character operators
proved in [LQW2].
Key words and phrases: Hilbert schemes of points on surfaces, multiple $q$-zeta values.
2000 Mathematics Subject Classification: Primary 14C05; Secondary 11B65, 17B69.
${}^{1}$Partially supported by a grant from the Simons Foundation
${}^{2}$Partially supported by the Fundamental Research Funds for
the Central Universities (No. 20720140526)
1. Introduction
In the region ${\rm Re}\,s>1$, the Riemann zeta function is defined by
$$\zeta(s)=\sum_{n=1}^{\infty}{1\over n^{s}}.$$
The integers $s>1$ give rise to a sequence of special values of the Riemann zeta function.
Multiple zeta values are series of the form
$$\zeta(s_{1},\ldots,s_{k})=\sum_{n_{1}>\cdots>n_{k}}{1\over n_{1}^{s_{1}}\cdots n_{k}^{s_{k}}}$$
where the sum runs over positive integers $n_{1}>\cdots>n_{k}$, and $s_{1},\ldots,s_{k}$ are
positive integers with $s_{1}>1$.
Multiple $q$-zeta values are $q$-deformations of $\zeta(s_{1},\ldots,s_{k})$,
which may take different forms (see [Bra1, Bra2, OT, Zud] for details).
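Although not needed for the sequel, such nested sums are easy to approximate numerically. The following Python sketch (with an ad hoc truncation level $N$, chosen only for this illustration) computes truncated single and double zeta values and checks Euler's classical identity $\zeta(2,1)=\zeta(3)$ to within the truncation error:

```python
def zeta_trunc(s, N=3000):
    # Truncated Riemann zeta: sum_{n=1}^{N} 1/n^s
    return sum(1.0 / n**s for n in range(1, N + 1))

def mzv2_trunc(s1, s2, N=3000):
    # Truncated double zeta zeta(s1, s2): sum over n1 > n2 >= 1 of
    # 1/(n1^s1 * n2^s2); `inner` accumulates sum_{n2 < n1} 1/n2^s2.
    inner, total = 0.0, 0.0
    for n in range(1, N + 1):
        total += inner / n**s1
        inner += 1.0 / n**s2
    return total

print(zeta_trunc(2))                          # close to pi^2/6 = 1.6449...
print(abs(mzv2_trunc(2, 1) - zeta_trunc(3)))  # small: Euler's zeta(2,1) = zeta(3)
```

The gap of order $(\log N)/N$ in the last line is pure truncation error; the exact values agree.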
In [Oko], Okounkov proposed several interesting conjectures regarding
multiple $q$-zeta values and Hilbert schemes of points. Motivated by these conjectures,
we compute in this paper the generating series
for the intersection pairings between the total Chern classes of the tangent bundles
of the Hilbert schemes of points on a smooth projective surface
and the Chern characters of tautological bundles over these Hilbert schemes.
Let $X$ be a smooth projective complex surface,
and let ${X^{[n]}}$ be the Hilbert scheme of $n$ points in $X$.
A line bundle $L$ on $X$ induces a tautological rank-$n$ bundle $L^{[n]}$ on ${X^{[n]}}$.
Let ${\rm ch}_{k}(L^{[n]})$ be the $k$-th Chern character of $L^{[n]}$.
Following Okounkov [Oko], we introduce the two generating series:
$$\big{\langle}{\rm ch}_{k_{1}}^{L_{1}}\cdots{\rm ch}_{k_{N}}^{L_{N}}\big{\rangle}=\sum_{n\geq 0}q^{n}\,\int_{X^{[n]}}{\rm ch}_{k_{1}}(L_{1}^{[n]})\cdots{\rm ch}_{k_{N}}(L_{N}^{[n]})\cdot c(T_{X^{[n]}})$$
(1.1)
$$\big{\langle}{\rm ch}_{k_{1}}^{L_{1}}\cdots{\rm ch}_{k_{N}}^{L_{N}}\big{\rangle}^{\prime}=\big{\langle}{\rm ch}_{k_{1}}^{L_{1}}\cdots{\rm ch}_{k_{N}}^{L_{N}}\big{\rangle}/\langle 1\rangle=(q;q)_{\infty}^{\chi(X)}\cdot\big{\langle}{\rm ch}_{k_{1}}^{L_{1}}\cdots{\rm ch}_{k_{N}}^{L_{N}}\big{\rangle}$$
(1.2)
where $0<q<1$, $c\big{(}T_{X^{[n]}}\big{)}$ is the total Chern class of
the tangent bundle $T_{X^{[n]}}$, $\chi(X)$ is the Euler characteristic of $X$,
and $(a;q)_{n}=\prod_{i=0}^{n-1}(1-aq^{i})$.
In [Car], for $X=\mathbb{C}^{2}$ with a suitable $\mathbb{C}^{*}$-action and $L=\mathcal{O}_{X}$,
the series $\big{\langle}{\rm ch}_{k_{1}}^{L}\cdots{\rm ch}_{k_{\ell}}^{L}\big{\rangle}$
was studied in the equivariant setting.
In [Oko], Okounkov proposed the following conjecture.
Conjecture 1.1.
$\big{\langle}{\rm ch}_{k_{1}}^{L}\cdots{\rm ch}_{k_{N}}^{L}\big{\rangle}^{\prime}$ is a multiple $q$-zeta value
of weight $\sum_{i=1}^{N}(k_{i}+2)$.
In this paper, we study Conjecture 1.1. To state our result,
we introduce some definitions. For integers $n_{i}>0,w_{i}>0$ and $p_{i}\geq 0$
with $1\leq i\leq v$, define the weight of
$\prod_{i=1}^{v}{q^{n_{i}w_{i}p_{i}}\over(1-q^{n_{i}})^{w_{i}}}$ to be
$\sum_{i=1}^{v}w_{i}$. For $k\geq 0$ and $\alpha\in H^{*}(X)$,
define $\Theta^{\alpha}_{k}(q,z)$ to be the weight-$(k+2)$ multiple $q$-zeta value
(with an additional variable $z$ inserted):
$$-\sum_{a,s_{1},\ldots,s_{a},b,t_{1},\ldots,t_{b}\geq 1\atop\sum_{i=1}^{a}s_{i}+\sum_{j=1}^{b}t_{j}=k+2}\big{\langle}(1_{X}-K_{X})^{\sum_{i=1}^{a}s_{i}},\alpha\big{\rangle}\prod_{i=1}^{a}{(-1)^{s_{i}}\over s_{i}!}\cdot\prod_{j=1}^{b}{1\over t_{j}!}\cdot\sum_{n_{1}>\cdots>n_{a}}\prod_{i=1}^{a}{(qz)^{n_{i}s_{i}}\over(1-q^{n_{i}})^{s_{i}}}\cdot\sum_{m_{1}>\cdots>m_{b}}\prod_{j=1}^{b}{z^{-m_{j}t_{j}}\over(1-q^{m_{j}})^{t_{j}}}$$
where $K_{X}$ and $1_{X}$ are the canonical class and fundamental class of $X$ respectively.
Let ${\rm Coeff}_{z_{1}^{0}\cdots z_{N}^{0}}(\cdot)$ denote
the coefficient of $z_{1}^{0}\cdots z_{N}^{0}$, let $L$ also denote
the first Chern class of the line bundle $L$, and let $e_{X}$ be the Euler class of $X$.
Theorem 1.2.
Let $L_{1},\ldots,L_{N}$ be line bundles over $X$, and $k_{1},\ldots,k_{N}\geq 0$. Then,
$$\big{\langle}{\rm ch}_{k_{1}}^{L_{1}}\cdots{\rm ch}_{k_{N}}^{L_{N}}\big{\rangle}^{\prime}={\rm Coeff}_{z_{1}^{0}\cdots z_{N}^{0}}\left(\prod_{i=1}^{N}\Theta^{1_{X}}_{k_{i}}(q,z_{i})\right)+W,$$
(1.3)
where the lower weight term $W$ is an infinite linear combination of the expressions:
$$\prod_{i=1}^{u}\left\langle K_{X}^{r_{i}}e_{X}^{r_{i}^{\prime}},L_{1}^{\ell_{i,1}}\cdots L_{N}^{\ell_{i,N}}\right\rangle\cdot\prod_{i=1}^{v}{q^{n_{i}w_{i}p_{i}}\over(1-q^{n_{i}})^{w_{i}}}$$
where $\sum_{i=1}^{v}w_{i}<\sum_{i=1}^{N}(k_{i}+2)$,
and the integers $u,v$, $r_{i},r_{i}^{\prime},\ell_{i,j}\geq 0,n_{i}>0,w_{i}>0,p_{i}\in\{0,1\}$ depend only on $k_{1},\ldots,k_{N}$.
Furthermore, all the coefficients of this linear combination are
independent of $q,L_{1},\ldots,L_{N}$ and $X$.
Theorem 1.2 proves Conjecture 1.1,
modulo the lower weight term $W$. Note that the leading term
${\rm Coeff}_{z_{1}^{0}\cdots z_{N}^{0}}\left(\prod_{i=1}^{N}\Theta^{1_{X}}_{k_{i}}(q,z_{i})\right)$
in $\big{\langle}{\rm ch}_{k_{1}}^{L_{1}}\cdots{\rm ch}_{k_{N}}^{L_{N}}\big{\rangle}^{\prime}$ has weight
$\sum_{i=1}^{N}(k_{i}+2)$, and is a multiple of
$\langle K_{X},K_{X}\rangle^{N}$ whose coefficient depends only on $k_{1},\ldots,k_{N}$
and is independent of the line bundles $L_{1},\ldots,L_{N}$ and the surface $X$.
In general, it is unclear how to organize the lower weight term $W$ in
Theorem 1.2 into multiple $q$-zeta values. On the other hand,
we have the following result which together with Theorem 1.2
verifies Conjecture 1.1 when $X$ is an abelian surface.
Theorem 1.3.
Let $L_{1},\ldots,L_{N}$ be line bundles over an abelian surface $X$,
and $k_{1},\ldots,k_{N}\geq 0$. Then, the lower weight term $W$ in (1.3)
is a linear combination of the coefficients of $z_{1}^{0}\cdots z_{N}^{0}$ in
some multiple $q$-zeta values (with additional variables $z_{1},\ldots,z_{N}$ inserted)
of weights $<\sum_{i=1}^{N}(k_{i}+2)$.
Moreover, the coefficients in this linear combination are independent of $q$.
We remark that some of the multiple $q$-zeta values mentioned in Theorem 1.3
are in the generalized sense, i.e., in the following form:
$$\sum_{n_{1}>\cdots>n_{\ell}}\prod_{i=1}^{\ell}{(-n_{i})^{w_{i}}q^{n_{i}p_{i}}f_{i}(z_{1},\ldots,z_{N})^{n_{i}}\over(1-q^{n_{i}})^{w_{i}}}$$
where $0\leq p_{i}\leq w_{i}$, and each $f_{i}(z_{1},\ldots,z_{N})$ is a monomial of
$z_{1}^{\pm 1},\ldots,z_{N}^{\pm 1}$.
We refer to (4.34) in the proof of Theorem 4.10 for more details.
As indicated in [Oko], the factors $(-n_{i})^{w_{i}}$ in the above expression
may be related to the operator $\displaystyle{q{{\rm d}\over{\rm d}q}}$.
The main idea in our proofs of Theorem 1.2 and Theorem 1.3
is to use the structure of the Chern character operators proved in [LQW2]
and the set-up of Carlsson and Okounkov in [Car, CO].
Let $G_{k}(\alpha,n)$ be the degree-$(2k+|\alpha|)$ component of (2.1),
and $\mathfrak{G}_{k}(\alpha)$ be the Chern character operator acting on the Fock space
${\mathbb{H}}_{X}=\bigoplus_{n=0}^{\infty}H^{*}({X^{[n]}})$ via cup product by
$\bigoplus_{n=0}^{\infty}G_{k}(\alpha,n)$. Then,
$${\rm ch}_{k}(L^{[n]})=G_{k}(1_{X},n)+G_{k-1}(L,n)+G_{k-2}(L^{2}/2,n)$$
by the Grothendieck-Riemann-Roch Theorem.
So $\big{\langle}{\rm ch}_{k_{1}}^{L_{1}}\cdots{\rm ch}_{k_{N}}^{L_{N}}\big{\rangle}^{\prime}$ is reduced to
the series $F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)$
defined by (2.2).
Let $\mathfrak{L}_{1}$ be the trivial line bundle on $X$ with a scaling action of $\mathbb{C}^{*}$ of
character $1$. (Throughout the paper, we implicitly set $t=1$
for the generator $t$ of the equivariant cohomology $H^{*}_{\mathbb{C}^{*}}({\rm pt})$ of a point.)
Using the set-up in [Car, CO], we get
$$F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)={\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(\alpha_{i})$$
where $W(\mathfrak{L}_{1},z)$ is the vertex operator constructed in [Car, CO],
and $\mathfrak{d}$ is the number-of-points operator (i.e., $\mathfrak{d}|_{H^{*}({X^{[n]}})}=n\,{\rm Id}$).
The structure of the Chern character operators $\mathfrak{G}_{k_{i}}(\alpha_{i})$ is given by
Theorem 2.3 which is proved in [LQW2]. It implies that
the computation of $F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)$ can be
further reduced to
$${\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}$$
(1.4)
where ${\lambda}^{(i)}$ denotes a generalized partition which may also contain negative parts,
and ${\lambda}^{(i)}!$ and $\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})$ are defined in
Definition 2.2 (ii). The trace (1.4) is investigated via
some standard but rather lengthy calculations.
As an application, our results enable us to determine some of the universal constants in
$\displaystyle{\sum_{n}c\big{(}T_{X^{[n]}}\big{)}}\,q^{n}$.
Let $C_{i}={2i\choose i}/(i+1)$ be the $i$-th Catalan number, and
$\sigma_{1}(i)=\sum_{j|i}j$. By [Boi, BN],
there exist unique rational numbers $b_{\mu},f_{\mu},g_{\mu},h_{\mu}$
depending only on the (usual) partitions $\mu$ such that
$\displaystyle{\sum_{n}c\big{(}T_{X^{[n]}}\big{)}}\,q^{n}$ is equal to
$$\exp\left(\sum_{\mu}q^{|\mu|}\Big{(}b_{\mu}\mathfrak{a}_{-\mu}(1_{X})+f_{\mu}\mathfrak{a}_{-\mu}(e_{X})+g_{\mu}\mathfrak{a}_{-\mu}(K_{X})+h_{\mu}\mathfrak{a}_{-\mu}(K_{X}^{2})\Big{)}\right)|0\rangle;$$
in addition, $b_{2i}=0$,
$b_{2i-1}=(-1)^{i-1}C_{i-1}/(2i-1)$,
$b_{(1^{i})}=f_{(1^{i})}=-g_{(1^{i})}=\sigma_{1}(i)/i$, and $h_{(1^{i})}=0$.
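For illustration, these closed-form constants are easy to tabulate; the small Python sketch below uses only the formulas for $C_{i}$, $\sigma_{1}(i)$, and $b_{(2i-1)}$ quoted above (the function names are ad hoc):

```python
from fractions import Fraction
from math import comb

def catalan(i):
    # Catalan number C_i = binom(2i, i) / (i + 1)
    return Fraction(comb(2 * i, i), i + 1)

def sigma1(i):
    # Sum of the positive divisors of i
    return sum(j for j in range(1, i + 1) if i % j == 0)

# One-part partitions: b_{(2i)} = 0 and b_{(2i-1)} = (-1)^{i-1} C_{i-1} / (2i-1)
b_odd = [Fraction(-1) ** (i - 1) * catalan(i - 1) / (2 * i - 1)
         for i in range(1, 5)]
print([str(b) for b in b_odd])                 # ['1', '-1/3', '2/5', '-5/7']
print(sigma1(6), str(Fraction(sigma1(6), 6)))  # b_{(1^6)} = sigma1(6)/6 = 2
```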
In Theorem 6.4, we determine $b_{(i,1^{j})}$ for $i\geq 2$ and $j\geq 0$.
The paper is organized as follows. In Sect. 2,
we review the Heisenberg operators of Grojnowski and Nakajima,
and the structure of the Chern character operators.
In Sect. 3, we recall the vertex operator of Carlsson and Okounkov.
In Sect. 4, we compute the trace (1.4).
Theorem 1.2 and Theorem 1.3 are proved
in Sect. 5. In Sect. 6, we determine the universal constants
$b_{(i,1^{j})}$ for $i\geq 2$ and $j\geq 0$.
Convention. All the (co)homology groups are in $\mathbb{C}$-coefficients unless
otherwise specified.
For $\alpha,\beta\in H^{*}(Y)$ where $Y$ is a smooth projective variety,
$\alpha\beta$ and $\alpha\cdot\beta$ denote the cup product $\alpha\cup\beta$,
and $\langle\alpha,\beta\rangle$ denotes
$\displaystyle{\int_{Y}\alpha\beta}$.
Acknowledgment. The authors thank Professors Dan Edidin, Wei-ping Li and Weiqiang Wang
for stimulating discussions and valuable help. The second author also thanks the Mathematics
Department of the University of Missouri for its hospitality during his visit in
August 2015 as a Miller’s Scholar.
2. Basics on Hilbert schemes of points on surfaces
In this section, we will review some basic aspects of the Hilbert schemes of points on surfaces.
We will recall the definition of the Heisenberg operators of Grojnowski and Nakajima,
and the structure of the Chern character operators.
Let $X$ be a smooth projective complex surface,
and ${X^{[n]}}$ be the Hilbert scheme of $n$ points in $X$.
An element in ${X^{[n]}}$ is represented by a
length-$n$ $0$-dimensional closed subscheme $\xi$ of $X$. For $\xi\in{X^{[n]}}$, let $I_{\xi}$ be the corresponding sheaf of ideals. It
is well known that ${X^{[n]}}$ is smooth.
Define the universal codimension-$2$ subscheme:
$${\mathcal{Z}}_{n}=\{(\xi,x)\in{X^{[n]}}\times X\,|\,x\in{\rm Supp}{(\xi)}\}\subset{X^{[n]}}\times X.$$
Denote by $p_{1}$ and $p_{2}$ the projections of ${X^{[n]}}\times X$ to
${X^{[n]}}$ and $X$ respectively. Let
$$\displaystyle{\mathbb{H}}_{X}=\bigoplus_{n=0}^{\infty}H^{*}({X^{[n]}})$$
be the direct sum of total cohomology groups of the Hilbert schemes ${X^{[n]}}$.
For $m\geq 0$ and $n>0$, let $Q^{[m,m]}=\emptyset$ and define
$Q^{[m+n,m]}$ to be the closed subset:
$$\{(\xi,x,\eta)\in X^{[m+n]}\times X\times X^{[m]}\,|\,\xi\supset\eta\text{ and }\mbox{Supp}(I_{\eta}/I_{\xi})=\{x\}\}.$$
We recall Nakajima’s definition of the Heisenberg operators [Nak].
Let $\alpha\in H^{*}(X)$. Set $\mathfrak{a}_{0}(\alpha)=0$.
For $n>0$, the operator $\mathfrak{a}_{-n}(\alpha)\in{\rm End}({\mathbb{H}}_{X})$ is
defined by
$$\mathfrak{a}_{-n}(\alpha)(a)=\tilde{p}_{1*}([Q^{[m+n,m]}]\cdot\tilde{\rho}^{*}\alpha\cdot\tilde{p}_{2}^{*}a)$$
for $a\in H^{*}(X^{[m]})$, where $\tilde{p}_{1},\tilde{\rho},\tilde{p}_{2}$ are the projections of $X^{[m+n]}\times X\times X^{[m]}$ to $X^{[m+n]},X,X^{[m]}$ respectively. Define
$\mathfrak{a}_{n}(\alpha)\in{\rm End}({\mathbb{H}}_{X})$ to be $(-1)^{n}$ times the
operator obtained from the definition of $\mathfrak{a}_{-n}(\alpha)$ by switching the roles of $\tilde{p}_{1}$ and $\tilde{p}_{2}$.
We often refer to $\mathfrak{a}_{-n}(\alpha)$ (resp. $\mathfrak{a}_{n}(\alpha)$)
as the creation (resp. annihilation) operator.
The following is from [Nak, Gro]. Our convention of the sign follows [LQW2].
Theorem 2.1.
The operators $\mathfrak{a}_{n}(\alpha)$ satisfy
the commutation relation:
$$[\mathfrak{a}_{m}(\alpha),\mathfrak{a}_{n}(\beta)]=-m\;\delta_{m,-n}\cdot\langle\alpha,\beta\rangle\cdot{\rm Id}_{{\mathbb{H}}_{X}}.$$
The space ${\mathbb{H}}_{X}$ is an irreducible module over the Heisenberg
algebra generated by the operators $\mathfrak{a}_{n}(\alpha)$ with a
highest weight vector $|0\rangle=1\in H^{0}(X^{[0]})\cong\mathbb{C}$.
The Lie bracket in the above theorem is understood in the super
sense according to the parity of the cohomology degrees of the
cohomology classes involved. It follows from
Theorem 2.1 that the space ${\mathbb{H}}_{X}$ is linearly
spanned by all the Heisenberg monomials $\mathfrak{a}_{n_{1}}(\alpha_{1})\cdots\mathfrak{a}_{n_{k}}(\alpha_{k})|0\rangle$
where $k\geq 0$ and $n_{1},\ldots,n_{k}<0$.
Definition 2.2.
(i)
Let $\alpha\in H^{*}(X)$ and $k\geq 1$. Define $\tau_{k*}:H^{*}(X)\to H^{*}(X^{k})$
to be the linear map induced by the diagonal embedding $\tau_{k}:X\to X^{k}$, and
$$(\mathfrak{a}_{m_{1}}\cdots\mathfrak{a}_{m_{k}})(\alpha)=\mathfrak{a}_{m_{1}}\cdots\mathfrak{a}_{m_{k}}(\tau_{k*}\alpha)=\sum_{j}\mathfrak{a}_{m_{1}}(\alpha_{j,1})\cdots\mathfrak{a}_{m_{k}}(\alpha_{j,k})$$
where $\tau_{k*}\alpha=\sum_{j}\alpha_{j,1}\otimes\cdots\otimes\alpha_{j,k}$ via the Künneth decomposition of $H^{*}(X^{k})$.
(ii)
Let $\lambda=\big{(}\cdots(-2)^{m_{-2}}(-1)^{m_{-1}}1^{m_{1}}2^{m_{2}}\cdots\big{)}$ be a generalized partition of the integer $n=\sum_{i}im_{i}$ whose
part $i\in\mathbb{Z}$ has multiplicity $m_{i}$. Define $\ell(\lambda)=\sum_{i}m_{i}$, $|\lambda|=\sum_{i}im_{i}=n$, $s(\lambda)=\sum_{i}i^{2}m_{i}$, $\lambda!=\prod_{i}m_{i}!$, and
$$\mathfrak{a}_{\lambda}(\alpha)=\left(\prod_{i}(\mathfrak{a}_{i})^{m_{i}}\right)(\alpha)$$
where the product $\prod_{i}(\mathfrak{a}_{i})^{m_{i}}$ is understood to be
$\cdots\mathfrak{a}_{-2}^{m_{-2}}\mathfrak{a}_{-1}^{m_{-1}}\mathfrak{a}_{1}^{m_{1}}\mathfrak{a}_{2}^{m_{2}}\cdots$.
The set of all generalized partitions is denoted by $\widetilde{\mathcal{P}}$.
(iii)
A generalized partition becomes a partition
in the usual sense if the multiplicity $m_{i}=0$ for all $i<0$.
The set of all partitions is denoted by $\mathcal{P}$.
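The quantities $\ell(\lambda)$, $|\lambda|$, $s(\lambda)$, and $\lambda!$ in Definition 2.2 (ii) are purely combinatorial. Storing a generalized partition as a map from nonzero parts $i$ to multiplicities $m_{i}$ (a hypothetical encoding chosen only for this illustration), they can be computed in a few lines of Python:

```python
from math import factorial

def gp_stats(m):
    # m: dict sending a nonzero part i (possibly negative) to its multiplicity m_i
    length = sum(m.values())                    # ell(lambda) = sum_i m_i
    size = sum(i * mi for i, mi in m.items())   # |lambda| = sum_i i * m_i
    s = sum(i * i * mi for i, mi in m.items())  # s(lambda) = sum_i i^2 * m_i
    fact = 1
    for mi in m.values():                       # lambda! = prod_i m_i!
        fact *= factorial(mi)
    return length, size, s, fact

# lambda = ((-2)^1 (-1)^2 1^1 3^1), a generalized partition of 0:
print(gp_stats({-2: 1, -1: 2, 1: 1, 3: 1}))    # (5, 0, 16, 2)
```

Partitions with $|\lambda|=0$ and prescribed $\ell(\lambda)$, as in Theorem 2.3 below, are exactly the keys over which such maps are enumerated.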
For $n>0$ and a homogeneous class $\alpha\in H^{*}(X)$, let
$|\alpha|=s$ if $\alpha\in H^{s}(X)$, and let $G_{k}(\alpha,n)$ be
the homogeneous component in $H^{|\alpha|+2k}({X^{[n]}})$ of
$$G(\alpha,n)=p_{1*}({\rm ch}({\mathcal{O}}_{{\mathcal{Z}}_{n}})\cdot p_{2}^{*}\alpha\cdot p_{2}^{*}{\rm td}(X))\in H^{*}({X^{[n]}})$$
(2.1)
where ${\rm ch}({\mathcal{O}}_{{\mathcal{Z}}_{n}})$ denotes the Chern
character of ${\mathcal{O}}_{{\mathcal{Z}}_{n}}$
and ${\rm td}(X)$ denotes the Todd class. We extend the notation $G_{k}(\alpha,n)$ linearly to an arbitrary class $\alpha\in H^{*}(X)$, and set $G(\alpha,0)=0$.
It was proved in [LQW1] that the cohomology ring of ${X^{[n]}}$ is
generated by the classes $G_{k}(\alpha,n)$ where $0\leq k<n$
and $\alpha$ runs over a linear basis of $H^{*}(X)$.
The Chern character operator ${\mathfrak{G}}_{k}(\alpha)\in{\rm End}({{\mathbb{H}}_{X}})$ is the operator acting on $H^{*}({X^{[n]}})$ by the cup product with $G_{k}(\alpha,n)$.
The following is from [LQW2].
Theorem 2.3.
Let $k\geq 0$ and $\alpha\in H^{*}(X)$. Then, $\mathfrak{G}_{k}(\alpha)$ is equal to
$$-\sum_{\ell(\lambda)=k+2,|\lambda|=0}{1\over\lambda!}\mathfrak{a}_{\lambda}(\alpha)+\sum_{\ell(\lambda)=k,|\lambda|=0}{s(\lambda)-2\over 24\lambda!}\mathfrak{a}_{\lambda}(e_{X}\alpha)+\sum_{\ell(\lambda)=k+1,|\lambda|=0}{g_{1,\lambda}\over{\lambda}!}\mathfrak{a}_{\lambda}(K_{X}\alpha)+\sum_{\ell(\lambda)=k,|\lambda|=0}{g_{2,\lambda}\over{\lambda}!}\mathfrak{a}_{\lambda}(K_{X}^{2}\alpha)$$
where all the numbers $g_{1,\lambda}$ and $g_{2,\lambda}$ are independent of
$X$ and $\alpha$.
For $\alpha_{1},\ldots,\alpha_{N}\in H^{*}(X)$ and integers $k_{1},\ldots,k_{N}\geq 0$,
define the series
$$F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)=\sum_{n}q^{n}\int_{X^{[n]}}\left(\prod_{i=1}^{N}G_{k_{i}}(\alpha_{i},n)\right)c\big{(}T_{X^{[n]}}\big{)}.$$
(2.2)
In view of Göttsche's Theorem in [Got], the $N=0$ case of (2.2) satisfies
$F(q)=\langle 1\rangle=(q;q)_{\infty}^{-\chi(X)}$.
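When $\chi(X)=1$ the right-hand side is Euler's generating function for partitions. The following Python sketch (a truncated power-series computation, independent of the geometry and added here only as a sanity check) confirms that the $q$-expansion of $(q;q)_{\infty}^{-1}$ has the partition numbers as coefficients:

```python
def inv_qpoch_coeffs(n):
    # Coefficients of 1/(q;q)_infty = prod_{m>=1} (1-q^m)^{-1} up to q^n.
    # Multiplying by 1/(1-q^m) = 1 + q^m + q^{2m} + ... is the in-place DP below.
    coeffs = [1] + [0] * n
    for m in range(1, n + 1):
        for k in range(m, n + 1):
            coeffs[k] += coeffs[k - m]
    return coeffs

# Coefficients p(0), ..., p(10): the partition numbers
print(inv_qpoch_coeffs(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```

Only parts $m\leq n$ contribute to the coefficient of $q^{n}$, so the truncation of the infinite product is exact in the printed range.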
The following is from [LQW3] and will be used throughout the paper.
Lemma 2.4.
Let $k,s\geq 1$, $n_{1},\ldots,n_{k},m_{1},\ldots,m_{s}\in\mathbb{Z}$, and
$\alpha,\beta\in H^{*}(X)$.
(i)
The commutator $[(\mathfrak{a}_{n_{1}}\cdots\mathfrak{a}_{n_{k}})(\alpha),(\mathfrak{a}_{m_{1}}\cdots\mathfrak{a}_{m_{s}})(\beta)]$ is
equal to
$$-\sum_{t=1}^{k}\sum_{j=1}^{s}n_{t}\delta_{n_{t},-m_{j}}\cdot\left(\prod_{\ell=1}^{j-1}\mathfrak{a}_{m_{\ell}}\prod_{1\leq u\leq k,u\neq t}\mathfrak{a}_{n_{u}}\prod_{\ell=j+1}^{s}\mathfrak{a}_{m_{\ell}}\right)(\alpha\beta).$$
(ii)
Let $j$ satisfy $1\leq j<k$. Then,
$(\mathfrak{a}_{n_{1}}\cdots\mathfrak{a}_{n_{k}})(\alpha)$ is equal to
$$\left(\prod_{1\leq s<j}\mathfrak{a}_{n_{s}}\cdot\mathfrak{a}_{n_{j+1}}\mathfrak{a}_{n_{j}}\cdot\prod_{j+1<s\leq k}\mathfrak{a}_{n_{s}}\right)(\alpha)-n_{j}\delta_{n_{j},-n_{j+1}}\left(\prod_{1\leq s\leq k\atop s\neq j,j+1}\mathfrak{a}_{n_{s}}\right)(e_{X}\alpha).$$
3. The vertex operators of Carlsson and Okounkov
In this section, we will recall the vertex operators constructed in [CO, Car],
and use them to rewrite the generating
series $F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)$
defined in (2.2).
Let $L$ be a line bundle over the smooth projective surface $X$.
Let $\mathbb{E}_{L}$ be the virtual vector bundle on $X^{[k]}\times X^{[\ell]}$
whose fiber at $(I,J)\in X^{[k]}\times X^{[\ell]}$ is given by
$$\mathbb{E}_{L}|_{(I,J)}=\chi(\mathcal{O},L)-\chi(J,I\otimes L).$$
Let $\mathfrak{L}_{m}$ be the trivial line bundle on $X$ with a scaling action of $\mathbb{C}^{*}$ of
character $m$, and let $\Delta_{n}$ be the diagonal in ${X^{[n]}}\times{X^{[n]}}$. Then,
$$\displaystyle\mathbb{E}_{\mathfrak{L}_{m}}|_{\Delta_{n}}=T_{{X^{[n]}},m},$$
(3.1)
the tangent bundle $T_{X^{[n]}}$ with a scaling action of $\mathbb{C}^{*}$ of character $m$.
By abusing notations, we also use $L$ to denote its first Chern class. Put
$$\Gamma_{\pm}(L,z)=\exp\left(\sum_{n>0}{z^{\mp n}\over n}\mathfrak{a}_{\pm n}(L)\right).$$
(3.2)
Remark 3.1.
There is a sign difference between the Heisenberg commutation relations used in
[Car] (see p.3 there) and in this paper (see Theorem 2.1).
So for $n>0$, our Heisenberg operators $\mathfrak{a}_{-n}(L)$ and $\mathfrak{a}_{n}(-L)$
are equal to the Heisenberg operators $\mathfrak{a}_{-n}(L)$ and $\mathfrak{a}_{n}(L)$ in [Car].
Accordingly, our operators $\Gamma_{-}(L,z)$ and $\Gamma_{+}(-L,z)$ are equal to
the operators $\Gamma_{-}(L,z)$ and $\Gamma_{+}(L,z)$ in [Car].
The following commutation relations can be found in [Car] (see Remark 3.1):
$$[\Gamma_{+}(L,x),\Gamma_{+}(L^{\prime},y)]=[\Gamma_{-}(L,x),\Gamma_{-}(L^{\prime},y)]=0,$$
(3.3)
$$\Gamma_{+}(L,x)\Gamma_{-}(L^{\prime},y)=(1-y/x)^{\langle L,L^{\prime}\rangle}\,\Gamma_{-}(L^{\prime},y)\Gamma_{+}(L,x).$$
(3.4)
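The relation (3.4) can be seen directly from the Baker–Campbell–Hausdorff identity $e^{A}e^{B}=e^{[A,B]}\,e^{B}e^{A}$, valid when $[A,B]$ is central. The following sketch (our check, not quoted from [Car]) uses only the commutation relation of Theorem 2.1:

```latex
% Write \Gamma_+(L,x) = e^A and \Gamma_-(L',y) = e^B with
%   A = \sum_{n>0} x^{-n}\,\mathfrak{a}_n(L)/n ,
%   B = \sum_{m>0} y^{m}\,\mathfrak{a}_{-m}(L')/m .
% By Theorem 2.1, [\mathfrak{a}_n(L),\mathfrak{a}_{-m}(L')]
%   = -n\,\delta_{n,m}\,\langle L,L'\rangle\,\mathrm{Id}, so
[A,B] \;=\; \sum_{n>0} \frac{(y/x)^{n}}{n^{2}}\,(-n)\,\langle L,L'\rangle
      \;=\; \langle L,L'\rangle\,\log(1-y/x),
% which is a scalar; hence
\Gamma_+(L,x)\,\Gamma_-(L',y) \;=\; e^{[A,B]}\,e^{B}e^{A}
      \;=\; (1-y/x)^{\langle L,L'\rangle}\,\Gamma_-(L',y)\,\Gamma_+(L,x).
```

In particular, the exponent vanishes when $\langle L,L^{\prime}\rangle=0$, which is the case used in the proof of Lemma 4.2 below.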
Let $W(L,z):{\mathbb{H}}_{X}\to{\mathbb{H}}_{X}$ be the vertex operator constructed in [CO, Car]
where $z$ is a formal variable. By [Car], $W(L,z)$ is defined via the pairing
$$\langle W(L,z)\eta,\xi\rangle=\int_{X^{[k]}\times X^{[\ell]}}(\eta\otimes\xi)\,c_{k+\ell}(\mathbb{E}_{L})$$
(3.5)
for $\eta\in H^{*}(X^{[k]})$ and $\xi\in H^{*}(X^{[\ell]})$.
The main result in [Car] is (see Remark 3.1):
$$\displaystyle W(L,z)=\Gamma_{-}(L-K_{X},z)\,\Gamma_{+}(-L,z).$$
(3.6)
Lemma 3.2.
Let $\mathfrak{d}$ be the number-of-points operator, i.e., $\mathfrak{d}|_{H^{*}({X^{[n]}})}=n\,{\rm Id}$. Then,
$$F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)={\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(\alpha_{i}).$$
(3.7)
Proof.
We will show that the coefficients of $q^{n}$ on both sides of (3.7) are equal.
Let $\{e_{j}\}_{j}$ be a linear basis of $H^{*}({X^{[n]}})$.
Then the fundamental class of the diagonal $\Delta_{n}$ in ${X^{[n]}}\times{X^{[n]}}$ is given by
$[\Delta_{n}]=\sum_{j}(-1)^{|e_{j}|}\,e_{j}\otimes e_{j}^{*}$
where $\{e_{j}^{*}\}_{j}$ is the linear basis of $H^{*}({X^{[n]}})$ dual to $\{e_{j}\}_{j}$ in the sense
that $\langle e_{j},e_{j^{\prime}}^{*}\rangle=\delta_{j,j^{\prime}}$.
By the definitions of $W(L,z)$ and $\mathfrak{G}_{k}(\alpha)$,
$\displaystyle{{\rm Tr}\,q^{n}W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(\alpha_{i})}$ is equal to
$$q^{n}\sum_{j}(-1)^{|e_{j}|}\,\left\langle W(\mathfrak{L}_{1},z)\left(\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(\alpha_{i})\right)e_{j},e_{j}^{*}\right\rangle$$
$$=q^{n}\sum_{j}(-1)^{|e_{j}|}\,\int_{{X^{[n]}}\times{X^{[n]}}}\left(\left(\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(\alpha_{i})\right)e_{j}\otimes e_{j}^{*}\right)\,c_{2n}(\mathbb{E}_{\mathfrak{L}_{1}})$$
$$=q^{n}\sum_{j}(-1)^{|e_{j}|}\,\int_{{X^{[n]}}\times{X^{[n]}}}\left(\left(\prod_{i=1}^{N}G_{k_{i}}(\alpha_{i},n)\right)e_{j}\otimes e_{j}^{*}\right)\,c_{2n}(\mathbb{E}_{\mathfrak{L}_{1}})$$
$$=q^{n}\sum_{j}(-1)^{|e_{j}|}\,\int_{{X^{[n]}}\times{X^{[n]}}}(e_{j}\otimes e_{j}^{*})\,p_{1}^{*}\left(\prod_{i=1}^{N}G_{k_{i}}(\alpha_{i},n)\right)\,c_{2n}(\mathbb{E}_{\mathfrak{L}_{1}})$$
$$=q^{n}\int_{{X^{[n]}}\times{X^{[n]}}}[\Delta_{n}]\,p_{1}^{*}\left(\prod_{i=1}^{N}G_{k_{i}}(\alpha_{i},n)\right)\,c_{2n}(\mathbb{E}_{\mathfrak{L}_{1}})$$
where $p_{1}:{X^{[n]}}\times{X^{[n]}}\to{X^{[n]}}$ denotes the first projection.
By (3.1), we have $c_{2n}\big{(}\mathbb{E}_{\mathfrak{L}_{1}}\big{)}|_{\Delta_{n}}=c\big{(}T_{X^{[n]}}\big{)}$.
Here and below, we implicitly set $t=1$ for the generator $t$ of
the equivariant cohomology $H^{*}_{\mathbb{C}^{*}}({\rm pt})$ of a point. Therefore,
$${\rm Tr}\,q^{n}W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(\alpha_{i})=q^{n}\int_{{X^{[n]}}}\left(\prod_{i=1}^{N}G_{k_{i}}(\alpha_{i},n)\right)\,c\big{(}T_{X^{[n]}}\big{)}.$$
∎
4. The trace $\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}}$ and the series $F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)$
In this section, we will first determine the structure of the trace
$\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}}$.
The structure of the generating series
$F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)$ will then
follow from Lemma 3.2, Theorem 2.3, and the structure of this trace.
We begin with four technical lemmas. To explain the ideas behind these lemmas,
note from (3.6) that $\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}}$ is equal to
$${\rm Tr}\,q^{\mathfrak{d}}\,\Gamma_{-}(\mathfrak{L}_{1}-K_{X},z)\,\Gamma_{+}(-\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}.$$
(4.1)
Lemma 4.1 deals with the commutator between
$\displaystyle{{\mathfrak{a}_{\lambda}(\alpha)\over{\lambda}!}}$ and
$\displaystyle{\exp\left({z^{n}\over n}\mathfrak{a}_{-n}(\gamma)\right)}$.
It enables us in Lemma 4.2
to eliminate $\Gamma_{-}(\mathfrak{L}_{1}-K_{X},z)$ from (4.1),
and allows us in Lemma 4.3 to eliminate $\Gamma_{+}(-\mathfrak{L}_{1},z)$
from (4.1). Lemma 4.4 determines the structure
of $\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}}$.
The proofs of these lemmas are standard but lengthy.
Recall from Definition 2.2 (ii) that $\widetilde{\mathcal{P}}$ denotes
the set of generalized partitions.
If $\lambda=\big{(}\cdots(-2)^{s_{-2}}(-1)^{s_{-1}}1^{s_{1}}2^{s_{2}}\cdots\big{)}$
and $\mu=\big{(}\cdots(-2)^{t_{-2}}(-1)^{t_{-1}}1^{t_{1}}2^{t_{2}}\cdots\big{)}$, let
$${\lambda}-\mu=\big{(}\cdots(-2)^{s_{-2}-t_{-2}}(-1)^{s_{-1}-t_{-1}}1^{s_{1}-t_{1}}2^{s_{2}-t_{2}}\cdots\big{)}$$
with the convention that ${\lambda}-\mu=\emptyset$ if $s_{i}<t_{i}$ for some $i$.
Lemma 4.1.
Let ${\lambda}\in\widetilde{\mathcal{P}}$. Assume that $\gamma\in H^{\rm even}(X)$. Then,
$${\mathfrak{a}_{\lambda}(\alpha)\over{\lambda}!}\exp\left({z^{n}\over n}\mathfrak{a}_{-n}(\gamma)\right)=\exp\left({z^{n}\over n}\mathfrak{a}_{-n}(\gamma)\right)\cdot\sum_{i\geq 0}{(-z^{n})^{i}\over i!}{\mathfrak{a}_{{\lambda}-(n^{i})}(\gamma^{i}\alpha)\over({{\lambda}-(n^{i})})!},$$
(4.2)
$$\exp\left({z^{n}\over n}\mathfrak{a}_{n}(\gamma)\right)\cdot{\mathfrak{a}_{\lambda}(\alpha)\over{\lambda}!}=\sum_{i\geq 0}{(-z^{n})^{i}\over i!}{\mathfrak{a}_{{\lambda}-((-n)^{i})}(\gamma^{i}\alpha)\over({{\lambda}-((-n)^{i})})!}\exp\left({z^{n}\over n}\mathfrak{a}_{n}(\gamma)\right).$$
(4.3)
Proof.
Note that the adjoint of $\mathfrak{a}_{m}(\beta)$ is equal to $(-1)^{m}\mathfrak{a}_{-m}(\beta)$.
So (4.3) follows from (4.2) by taking adjoint on both sides of
(4.2) and by making suitable adjustments.
To prove (4.2), put $A=\displaystyle{{\mathfrak{a}_{\lambda}(\alpha)\over{\lambda}!}\exp\left({z^{n}\over n}\mathfrak{a}_{-n}(\gamma)\right)}$. Then,
$$A={\mathfrak{a}_{\lambda}(\alpha)\over{\lambda}!}\sum_{t\geq 0}{1\over t!}\left({z^{n}\over n}\mathfrak{a}_{-n}(\gamma)\right)^{t}={1\over{\lambda}!}\sum_{t\geq 0}{1\over t!}\left({z^{n}\over n}\right)^{t}\sum_{i=0}^{t}{t\choose i}\big{(}\mathfrak{a}_{-n}(\gamma)\big{)}^{t-i}\big{[}\cdots[\mathfrak{a}_{\lambda}(\alpha),\underbrace{\mathfrak{a}_{-n}(\gamma)],\ldots,\mathfrak{a}_{-n}(\gamma)}_{i\,\,{\rm times}}\big{]}.$$
Let $\lambda=\big{(}\cdots(-2)^{s_{-2}}(-1)^{s_{-1}}1^{s_{1}}2^{s_{2}}\cdots\big{)}$.
We conclude from Lemma 2.4 (i) that the commutator
$\big{[}\cdots[\mathfrak{a}_{\lambda}(\alpha),\underbrace{\mathfrak{a}_{-n}(\gamma)],\ldots,\mathfrak{a}_{-n}(\gamma)}_{i\,\,{\rm times}}\big{]}$ is equal to
$$s_{n}(s_{n}-1)\cdots(s_{n}+1-i)\,(-n)^{i}\,\mathfrak{a}_{{\lambda}-(n^{i})}(\gamma^{i}\alpha)$$
where by our convention, ${\lambda}-(n^{i})=\emptyset$ if $s_{n}<i$. So $A$ is equal to
$${1\over{\lambda}!}\sum_{t\geq 0}{1\over t!}\left({z^{n}\over n}\right)^{t}\sum_{i=0}^{t}{t\choose i}\big{(}\mathfrak{a}_{-n}(\gamma)\big{)}^{t-i}\cdot s_{n}(s_{n}-1)\cdots(s_{n}+1-i)\,(-n)^{i}\,\mathfrak{a}_{{\lambda}-(n^{i})}(\gamma^{i}\alpha).$$
Simplifying this, we complete the proof of our formula (4.2).
∎
Let $\widetilde{\mathcal{P}}_{+}=\mathcal{P}$ be the subset of $\widetilde{\mathcal{P}}$ consisting of the usual partitions,
and $\widetilde{\mathcal{P}}_{-}$ be the subset of $\widetilde{\mathcal{P}}$ consisting of generalized partitions
of the form $\big{(}\cdots(-2)^{s_{-2}}(-1)^{s_{-1}}\big{)}$.
Lemma 4.2.
Let ${\lambda}^{(1)},\ldots,{\lambda}^{(N)}\in\widetilde{\mathcal{P}}$ be generalized partitions,
and $\alpha_{1},\ldots,\alpha_{N}\in H^{*}(X)$.
Then, the trace $\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}}$ is equal to
$$\sum_{\mu^{(i,s)}\in\widetilde{\mathcal{P}}_{+}\atop 1\leq i\leq N,s\geq 1}\,\prod_{1\leq i\leq N\atop s,n\geq 1}{(-(zq^{s})^{n})^{m^{(i,s)}_{n}}\over m^{(i,s)}_{n}!}\cdot{\rm Tr}\,q^{\mathfrak{d}}\,\Gamma_{+}(-\mathfrak{L}_{1},z)\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}-\sum_{s\geq 1}\mu^{(i,s)}}\big{(}(1_{X}-K_{X})^{\sum_{s,n\geq 1}m^{(i,s)}_{n}}\alpha_{i}\big{)}\over\big{(}{\lambda}^{(i)}-\sum_{s\geq 1}\mu^{(i,s)}\big{)}!}$$
(4.4)
where $\mu^{(i,s)}=\big{(}1^{m^{(i,s)}_{1}}\cdots n^{m^{(i,s)}_{n}}\cdots\big{)}\in\widetilde{\mathcal{P}}_{+}$ for $1\leq i\leq N$ and $s\geq 1$.
Proof.
For simplicity, put $Q_{1}=\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}}$. By (3.6),
$$Q_{1}={\rm Tr}\,q^{\mathfrak{d}}\,\Gamma_{-}(\mathfrak{L}_{1}-K_{X},z)\,\Gamma_{+}(-\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}={\rm Tr}\,\Gamma_{-}(\mathfrak{L}_{1}-K_{X},zq)\,q^{\mathfrak{d}}\,\Gamma_{+}(-\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}={\rm Tr}\,q^{\mathfrak{d}}\,\Gamma_{+}(-\mathfrak{L}_{1},z)\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}\cdot\Gamma_{-}(\mathfrak{L}_{1}-K_{X},zq).$$
By (3.2) and applying (4.2) repeatedly, we obtain
$$\displaystyle\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{%
\lambda}^{(i)}!}\cdot\Gamma_{-}(\mathfrak{L}_{1}-K_{X},zq)$$
$$\displaystyle=$$
$$\displaystyle\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{%
\lambda}^{(i)}!}\cdot\exp\left(\sum_{n>0}{(zq)^{n}\over n}\mathfrak{a}_{-n}(%
\mathfrak{L}_{1}-K_{X})\right)$$
$$\displaystyle=$$
$$\displaystyle\Gamma_{-}(\mathfrak{L}_{1}-K_{X},zq)\sum_{\mu^{(i,1)}\in%
\widetilde{\mathcal{P}}_{+}\atop 1\leq i\leq N}\prod_{1\leq i\leq N\atop n\geq
1%
}{(-(zq)^{n})^{m_{n}^{(i,1)}}\over m_{n}^{(i,1)}!}\cdot$$
$$\displaystyle\quad\quad\cdot\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}-\mu^%
{(i,1)}}\big{(}(1_{X}-K_{X})^{\sum_{n\geq 1}m^{(i,1)}_{n}}\alpha_{i}\big{)}%
\over\big{(}{\lambda}^{(i)}-\mu^{(i,1)}\big{)}!},$$
where $\mu^{(i,1)}=\big{(}1^{m^{(i,1)}_{1}}\cdots n^{m^{(i,1)}_{n}}\cdots\big{)}$.
Therefore, $Q_{1}$ is equal to
$${\rm Tr}\,q^{\mathfrak{d}}\,\Gamma_{+}(-\mathfrak{L}_{1},z)\Gamma_{-}(%
\mathfrak{L}_{1}-K_{X},zq)\sum_{\mu^{(i,1)}\in\widetilde{\mathcal{P}}_{+}\atop
1%
\leq i\leq N}\prod_{1\leq i\leq N\atop n\geq 1}{(-(zq)^{n})^{m_{n}^{(i,1)}}%
\over m_{n}^{(i,1)}!}\cdot$$
$$\cdot\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}-\mu^{(i,1)}}\big{(}(1_{X}-K%
_{X})^{\sum_{n\geq 1}m^{(i,1)}_{n}}\alpha_{i}\big{)}\over\big{(}{\lambda}^{(i)%
}-\mu^{(i,1)}\big{)}!}.$$
Since $\langle\mathfrak{L}_{1},\mathfrak{L}_{1}-K_{X}\rangle=0$, we see from (3.4) that
$Q_{1}$ is equal to
$${\rm Tr}\,q^{\mathfrak{d}}\,\Gamma_{-}(\mathfrak{L}_{1}-K_{X},zq)\Gamma_{+}(-%
\mathfrak{L}_{1},z)\cdot\sum_{\mu^{(i,1)}\in\widetilde{\mathcal{P}}_{+}\atop 1%
\leq i\leq N}\prod_{1\leq i\leq N\atop n\geq 1}{(-(zq)^{n})^{m_{n}^{(i,1)}}%
\over m_{n}^{(i,1)}!}\cdot$$
$$\cdot\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}-\mu^{(i,1)}}\big{(}(1_{X}-K%
_{X})^{\sum_{n\geq 1}m^{(i,1)}_{n}}\alpha_{i}\big{)}\over\big{(}{\lambda}^{(i)%
}-\mu^{(i,1)}\big{)}!}.$$
Repeating the above process, beginning at line (4), $s$ times, we see that
$Q_{1}$ is equal to
$${\rm Tr}\,q^{\mathfrak{d}}\,\Gamma_{-}(\mathfrak{L}_{1}-K_{X},zq^{s})\Gamma_{+%
}(-\mathfrak{L}_{1},z)\cdot\sum_{\mu^{(i,r)}\in\widetilde{\mathcal{P}}_{+}%
\atop 1\leq i\leq N,1\leq r\leq s}\prod_{1\leq i\leq N\atop 1\leq r\leq s,n%
\geq 1}{(-(zq^{r})^{n})^{m_{n}^{(i,r)}}\over m_{n}^{(i,r)}!}\cdot$$
$$\cdot\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}-\sum_{r=1}^{s}\mu^{(i,r)}}%
\big{(}(1_{X}-K_{X})^{\sum_{r=1}^{s}\sum_{n\geq 1}m^{(i,r)}_{n}}\alpha_{i}\big%
{)}\over\big{(}{\lambda}^{(i)}-\sum_{r=1}^{s}\mu^{(i,r)}\big{)}!}$$
where $\mu^{(i,r)}=\big{(}1^{m^{(i,r)}_{1}}\cdots n^{m^{(i,r)}_{n}}\cdots\big{)}$.
Letting $s\to+\infty$ proves our lemma.
∎
Lemma 4.3.
Let $\tilde{\lambda}^{(1)},\ldots,\tilde{\lambda}^{(N)}\in\widetilde{\mathcal{P}}$ be generalized partitions,
and $\tilde{\alpha}_{1},\ldots,\tilde{\alpha}_{N}\in H^{*}(X)$.
Then, the trace $\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,\Gamma_{+}(-\mathfrak{L}_{1},z)\prod%
_{i=1}^{N}{\mathfrak{a}_{\tilde{\lambda}^{(i)}}(\tilde{\alpha}_{i})\over\tilde%
{\lambda}^{(i)}!}}$ is equal to
$$\displaystyle\sum_{\sum_{i=1}^{N}(|\tilde{\lambda}^{(i)}|-\sum_{t\geq 1}|%
\tilde{\mu}^{(i,t)}|)=0}\prod_{1\leq i\leq N\atop t,n\geq 1}{(z^{-1}q^{t-1})^{%
n\tilde{m}^{(i,t)}_{n}}\over{\tilde{m}^{(i,t)}_{n}}!}\cdot{\rm Tr}\,q^{%
\mathfrak{d}}\prod_{i=1}^{N}{\mathfrak{a}_{\tilde{\lambda}^{(i)}-\sum_{t\geq 1%
}\tilde{\mu}^{(i,t)}}(\tilde{\alpha}_{i})\over\big{(}\tilde{\lambda}^{(i)}-%
\sum_{t\geq 1}\tilde{\mu}^{(i,t)}\big{)}!}$$
where $\tilde{\mu}^{(i,t)}=\big{(}\cdots(-n)^{\tilde{m}^{(i,t)}_{n}}\cdots(-1)^{%
\tilde{m}^{(i,t)}_{1}}\big{)}\in\widetilde{\mathcal{P}}_{-}$ for $1\leq i\leq N$ and $t\geq 1$.
Proof.
For simplicity, put $Q_{2}=\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,\Gamma_{+}(-\mathfrak{L}_{1},z%
)\prod_{i=1}^{N}{\mathfrak{a}_{\tilde{\lambda}^{(i)}}(\tilde{\alpha}_{i})\over%
\tilde{\lambda}^{(i)}!}}$. By (3.2),
$$Q_{2}={\rm Tr}\,q^{\mathfrak{d}}\,\exp\left(\sum_{n>0}{z^{-n}\over n}\mathfrak%
{a}_{n}(-\mathfrak{L}_{1})\right)\,\prod_{i=1}^{N}{\mathfrak{a}_{\tilde{%
\lambda}^{(i)}}(\tilde{\alpha}_{i})\over\tilde{\lambda}^{(i)}!}.$$
Applying (4.3) repeatedly, we see that $Q_{2}$ is equal to
$$\sum_{\tilde{\mu}^{(i,1)}\in\widetilde{\mathcal{P}}_{-}\atop 1\leq i\leq N}%
\prod_{1\leq i\leq N\atop n\geq 1}{z^{-n\tilde{m}^{(i,1)}_{n}}\over{\tilde{m}^%
{(i,1)}_{n}}!}\cdot{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=1}^{N}{\mathfrak{a}_{%
\tilde{\lambda}^{(i)}-\tilde{\mu}^{(i,1)}}(\tilde{\alpha}_{i})\over\big{(}%
\tilde{\lambda}^{(i)}-\tilde{\mu}^{(i,1)}\big{)}!}\cdot\Gamma_{+}(-\mathfrak{L%
}_{1},z)$$
where $\tilde{\mu}^{(i,1)}=\big{(}\cdots(-n)^{\tilde{m}^{(i,1)}_{n}}\cdots(-1)^{%
\tilde{m}^{(i,1)}_{1}}\big{)}\in\widetilde{\mathcal{P}}_{-}$. Now $Q_{2}$ is equal to
$$\displaystyle\sum_{\tilde{\mu}^{(i,1)}\in\widetilde{\mathcal{P}}_{-}\atop 1%
\leq i\leq N}\prod_{1\leq i\leq N\atop n\geq 1}{z^{-n\tilde{m}^{(i,1)}_{n}}%
\over{\tilde{m}^{(i,1)}_{n}}!}\cdot q^{\sum_{i=1}^{N}(|\tilde{\mu}^{(i,1)}|-|%
\tilde{\lambda}^{(i)}|)}{\rm Tr}\,\prod_{i=1}^{N}{\mathfrak{a}_{\tilde{\lambda%
}^{(i)}-\tilde{\mu}^{(i,1)}}(\tilde{\alpha}_{i})\over\big{(}\tilde{\lambda}^{(%
i)}-\tilde{\mu}^{(i,1)}\big{)}!}\cdot q^{\mathfrak{d}}\Gamma_{+}(-\mathfrak{L}%
_{1},z)$$
$$\displaystyle=$$
$$\displaystyle\sum_{\tilde{\mu}^{(i,1)}\in\widetilde{\mathcal{P}}_{-}\atop 1%
\leq i\leq N}\prod_{1\leq i\leq N\atop n\geq 1}{z^{-n\tilde{m}^{(i,1)}_{n}}%
\over{\tilde{m}^{(i,1)}_{n}}!}\cdot q^{\sum_{i=1}^{N}(|\tilde{\mu}^{(i,1)}|-|%
\tilde{\lambda}^{(i)}|)}{\rm Tr}\,q^{\mathfrak{d}}\Gamma_{+}(-\mathfrak{L}_{1}%
,z)\prod_{i=1}^{N}{\mathfrak{a}_{\tilde{\lambda}^{(i)}-\tilde{\mu}^{(i,1)}}(%
\tilde{\alpha}_{i})\over\big{(}\tilde{\lambda}^{(i)}-\tilde{\mu}^{(i,1)}\big{)%
}!}.$$
For degree reasons, ${\rm Tr}\,q^{\mathfrak{d}}\Gamma_{+}(-\mathfrak{L}_{1},z)\mathfrak{a}_{\mu}(\beta)=0$ if $|\mu|>0$.
If $|\mu|=0$, then we have ${\rm Tr}\,q^{\mathfrak{d}}\Gamma_{+}(-\mathfrak{L}_{1},z)\mathfrak{a}_{\mu}(%
\beta)={\rm Tr}\,q^{\mathfrak{d}}\mathfrak{a}_{\mu}(\beta)$. So $Q_{2}$ is equal to
$$\displaystyle\sum_{\sum_{i=1}^{N}(|\tilde{\lambda}^{(i)}|-|\tilde{\mu}^{(i,1)}%
|)<0}\prod_{1\leq i\leq N\atop n\geq 1}{z^{-n\tilde{m}^{(i,1)}_{n}}\over{%
\tilde{m}^{(i,1)}_{n}}!}\cdot q^{\sum_{i=1}^{N}(|\tilde{\mu}^{(i,1)}|-|\tilde{%
\lambda}^{(i)}|)}{\rm Tr}\,q^{\mathfrak{d}}\Gamma_{+}(-\mathfrak{L}_{1},z)%
\prod_{i=1}^{N}{\mathfrak{a}_{\tilde{\lambda}^{(i)}-\tilde{\mu}^{(i,1)}}(%
\tilde{\alpha}_{i})\over\big{(}\tilde{\lambda}^{(i)}-\tilde{\mu}^{(i,1)}\big{)%
}!}$$
$$\displaystyle+\sum_{\sum_{i=1}^{N}(|\tilde{\lambda}^{(i)}|-|\tilde{\mu}^{(i,1)%
}|)=0}\prod_{1\leq i\leq N\atop n\geq 1}{z^{-n\tilde{m}^{(i,1)}_{n}}\over{%
\tilde{m}^{(i,1)}_{n}}!}\cdot{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=1}^{N}{%
\mathfrak{a}_{\tilde{\lambda}^{(i)}-\tilde{\mu}^{(i,1)}}(\tilde{\alpha}_{i})%
\over\big{(}\tilde{\lambda}^{(i)}-\tilde{\mu}^{(i,1)}\big{)}!}.$$
Repeating the process in the previous paragraph $t$ times, we conclude that
$$Q_{2}=U(t)+V(t)$$
where $U(t)$ is given by
$$\displaystyle\sum_{\sum_{i=1}^{N}(|\tilde{\lambda}^{(i)}|-\sum_{r=1}^{t}|%
\tilde{\mu}^{(i,r)}|)<0}\prod_{r=1}^{t}\left(\prod_{1\leq i\leq N\atop n\geq 1%
}{z^{-n\tilde{m}^{(i,r)}_{n}}\over{\tilde{m}^{(i,r)}_{n}}!}\cdot q^{\sum_{i=1}%
^{N}(\sum_{\ell=1}^{r}|\tilde{\mu}^{(i,\ell)}|-|\tilde{\lambda}^{(i)}|)}\right)\cdot$$
(4.6)
$$\displaystyle\cdot{\rm Tr}\,q^{\mathfrak{d}}\Gamma_{+}(-\mathfrak{L}_{1},z)%
\prod_{i=1}^{N}{\mathfrak{a}_{\tilde{\lambda}^{(i)}-\sum_{r=1}^{t}\tilde{\mu}^%
{(i,r)}}(\tilde{\alpha}_{i})\over\big{(}\tilde{\lambda}^{(i)}-\sum_{r=1}^{t}%
\tilde{\mu}^{(i,r)}\big{)}!}$$
(4.7)
with $\tilde{\mu}^{(i,r)}=\big{(}\cdots(-n)^{\tilde{m}^{(i,r)}_{n}}\cdots(-1)^{%
\tilde{m}^{(i,r)}_{1}}\big{)}\in\widetilde{\mathcal{P}}_{-}$, and $V(t)$ is given by
$$\displaystyle\sum_{\sum_{i=1}^{N}(|\tilde{\lambda}^{(i)}|-\sum_{r=1}^{t}|%
\tilde{\mu}^{(i,r)}|)=0}\prod_{r=1}^{t}\left(\prod_{1\leq i\leq N\atop n\geq 1%
}{z^{-n\tilde{m}^{(i,r)}_{n}}\over{\tilde{m}^{(i,r)}_{n}}!}\cdot q^{\sum_{i=1}%
^{N}(\sum_{\ell=1}^{r}|\tilde{\mu}^{(i,\ell)}|-|\tilde{\lambda}^{(i)}|)}\right)\cdot$$
$$\displaystyle\cdot{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=1}^{N}{\mathfrak{a}_{%
\tilde{\lambda}^{(i)}-\sum_{r=1}^{t}\tilde{\mu}^{(i,r)}}(\tilde{\alpha}_{i})%
\over\big{(}\tilde{\lambda}^{(i)}-\sum_{r=1}^{t}\tilde{\mu}^{(i,r)}\big{)}!}.$$
Denote line (4.6) by $\widetilde{U}(t)$.
Since $\sum_{i=1}^{N}(|\tilde{\lambda}^{(i)}|-\sum_{r=1}^{t}|\tilde{\mu}^{(i,r)}|)<0$
and every $|\tilde{\mu}^{(i,r)}|<0$, $\widetilde{U}(t)$ is a polynomial in $q$ whose coefficients
are bounded in terms of $-\sum_{i=1}^{N}|\tilde{\lambda}^{(i)}|$; moreover, $q^{t}$ divides $\widetilde{U}(t)$.
Line (4.7) is bounded in terms of the generalized partitions $\tilde{\lambda}^{(i)}$.
Since $0<q<1$, it follows that $U(t)\to 0$ as $t\to+\infty$.
Letting $t\to+\infty$, we see that $Q_{2}$ equals
$$\displaystyle\sum_{\sum_{i=1}^{N}(|\tilde{\lambda}^{(i)}|-\sum_{t\geq 1}|%
\tilde{\mu}^{(i,t)}|)=0}\prod_{t\geq 1}\left(\prod_{1\leq i\leq N\atop n\geq 1%
}{z^{-n\tilde{m}^{(i,t)}_{n}}\over{\tilde{m}^{(i,t)}_{n}}!}\cdot q^{\sum_{i=1}%
^{N}(\sum_{\ell=1}^{t}|\tilde{\mu}^{(i,\ell)}|-|\tilde{\lambda}^{(i)}|)}\right)\cdot$$
$$\displaystyle\cdot{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=1}^{N}{\mathfrak{a}_{%
\tilde{\lambda}^{(i)}-\sum_{t\geq 1}\tilde{\mu}^{(i,t)}}(\tilde{\alpha}_{i})%
\over\big{(}\tilde{\lambda}^{(i)}-\sum_{t\geq 1}\tilde{\mu}^{(i,t)}\big{)}!}.$$
Replacing $q^{\sum_{i=1}^{N}(\sum_{\ell=1}^{t}|\tilde{\mu}^{(i,\ell)}|-|\tilde{\lambda}^{%
(i)}|)}$
by $q^{-\sum_{i=1}^{N}\sum_{\ell\geq t+1}|\tilde{\mu}^{(i,\ell)}|}$ proves our lemma.
∎
Lemma 4.4.
Let ${\lambda}^{(1)},\ldots,{\lambda}^{(N)}\in\widetilde{\mathcal{P}}$ be generalized partitions,
and $\alpha_{1},\ldots,\alpha_{N}\in H^{*}(X)$ be homogeneous. Then,
$\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,\prod_{i=1}^{N}{\mathfrak{a}_{{%
\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}}$
can be computed by induction on $N$, and is a linear combination of expressions of the form:
$$\displaystyle(q;q)_{\infty}^{-\chi(X)}\cdot{\rm Sign}(\pi)\cdot\prod_{i=1}^{u}%
\left\langle e_{X}^{m_{i}},\prod_{j\in\pi_{i}}\alpha_{j}\right\rangle\cdot%
\prod_{i=1}^{v}{n_{i}^{k_{i}}q^{n_{i}}\over 1-q^{n_{i}}}$$
(4.8)
where $0\leq v\leq\sum_{i=1}^{N}\ell({\lambda}^{(i)})/2$, $m_{i}\geq 0$, $n_{i}>0$,
all the integers involved and the partition $\{\pi_{1},\ldots,\pi_{u}\}$
of $\{1,\ldots,N\}$ depend only on ${\lambda}^{(1)},\ldots,{\lambda}^{(N)}$,
and ${\rm Sign}(\pi)$ is the sign compensating the formal difference between
$\prod_{i=1}^{u}\prod_{j\in\pi_{i}}\alpha_{j}$ and $\alpha_{1}\cdots\alpha_{N}$.
Moreover, the coefficients of this linear combination are independent of
$q,\alpha_{i},n_{i},X$.
Proof.
For simplicity, put $A_{N}=\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,\prod_{i=1}^{N}{\mathfrak{a}_{%
{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}}$. Since $\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})$ has conformal weight $|{\lambda}^{(i)}|$,
$A_{N}=0$ unless $\sum_{i=1}^{N}|{\lambda}^{(i)}|=0$.
In the rest of the proof, we will assume $\sum_{i=1}^{N}|{\lambda}^{(i)}|=0$.
We will divide the proof into two cases.
Case 1: $|{\lambda}^{(i)}|=0$ for every $1\leq i\leq N$.
Then, $\ell({\lambda}^{(i)})\geq 2$ for every $i$.
Since $\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})$ has
degree $2(\ell({\lambda}^{(i)})-2)+|\alpha_{i}|$, $A_{N}=0$ unless $\ell({\lambda}^{(i)})=2$
and $|\alpha_{i}|=0$ for all $1\leq i\leq N$. Assume that $\ell({\lambda}^{(i)})=2$
and $|\alpha_{i}|=0$ for all $1\leq i\leq N$. Then for every $1\leq i\leq N$,
we have ${\lambda}^{(i)}=((-n_{i})n_{i})$ for some $n_{i}>0$.
We further assume that $n_{1}=\ldots=n_{r}$ for some $1\leq r\leq N$
and $n_{i}\neq n_{1}$ if $r<i\leq N$. Let $\alpha_{1}=a1_{X}$
and $\tau_{2*}1_{X}=\sum_{j}(-1)^{|\beta_{j}|}\beta_{j}\otimes\gamma_{j}$
with $\langle\beta_{j},\gamma_{j^{\prime}}\rangle=\delta_{j,j^{\prime}}$.
Then, $A_{N}$ is equal to
$$\displaystyle a\sum_{j}(-1)^{|\beta_{j}|}{\rm Tr}\,q^{\mathfrak{d}}\,\mathfrak%
{a}_{-n_{1}}(\beta_{j})\,\mathfrak{a}_{n_{1}}(\gamma_{j})\prod_{i=2}^{N}%
\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})$$
$$\displaystyle=$$
$$\displaystyle aq^{n_{1}}\sum_{j}(-1)^{|\beta_{j}|}{\rm Tr}\,\mathfrak{a}_{-n_{%
1}}(\beta_{j})\,q^{\mathfrak{d}}\,\mathfrak{a}_{n_{1}}(\gamma_{j})\prod_{i=2}^%
{N}\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})$$
$$\displaystyle=$$
$$\displaystyle aq^{n_{1}}\sum_{j}{\rm Tr}\,q^{\mathfrak{d}}\,\mathfrak{a}_{n_{1%
}}(\gamma_{j})\prod_{i=2}^{N}\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\cdot%
\mathfrak{a}_{-n_{1}}(\beta_{j})$$
$$\displaystyle=$$
$$\displaystyle aq^{n_{1}}\sum_{j}{\rm Tr}\,q^{\mathfrak{d}}\,\mathfrak{a}_{n_{1%
}}(\gamma_{j})\mathfrak{a}_{-n_{1}}(\beta_{j})\prod_{i=2}^{N}\mathfrak{a}_{{%
\lambda}^{(i)}}(\alpha_{i})$$
$$\displaystyle\quad+aq^{n_{1}}\sum_{j}\sum_{i=2}^{r}{\rm Tr}\,q^{\mathfrak{d}}%
\,\mathfrak{a}_{n_{1}}(\gamma_{j})\prod_{k=2}^{i-1}\mathfrak{a}_{{\lambda}^{(k%
)}}(\alpha_{k})\cdot[\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i}),\mathfrak{a}_{%
-n_{1}}(\beta_{j})]\cdot\prod_{k=i+1}^{N}\mathfrak{a}_{{\lambda}^{(k)}}(\alpha%
_{k}).$$
By Lemma 2.4 (i), $A_{N}$ is equal to a linear combination of the expressions
$$\displaystyle\left\langle e_{X},\alpha_{1}\prod_{i=1}^{k_{1}}\alpha_{j_{i}}%
\right\rangle\cdot{(-n_{1})^{k_{1}}q^{n_{1}}\over 1-q^{n_{1}}}\cdot{\rm Tr}\,q%
^{\mathfrak{d}}\,\prod_{i\in\{2,\ldots,N\}-\{j_{1},\ldots,j_{k_{1}}\}}%
\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})$$
(4.9)
where $0\leq k_{1}\leq r-1$, $\{j_{1},\ldots,j_{k_{1}}\}\subset\{2,\ldots,r\}$,
every factor in $(-n_{1})^{k_{1}}$ comes from a commutator of type
$[\mathfrak{a}_{n_{1}}(\cdot),\mathfrak{a}_{-n_{1}}(\cdot)]$, and the coefficients
of this linear combination depend only on ${\lambda}^{(1)},\ldots,{\lambda}^{(N)}$.
In particular, we have
$$\displaystyle A_{1}={\rm Tr}\,q^{\mathfrak{d}}\,\mathfrak{a}_{{\lambda}^{(1)}}%
(\alpha_{1})=(q;q)_{\infty}^{-\chi(X)}\cdot\langle e_{X},\alpha_{1}\rangle%
\cdot{(-n_{1})q^{n_{1}}\over 1-q^{n_{1}}}.$$
(4.10)
Combining with (4.9), we see that our lemma holds in this case.
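The closed form (4.10) has the familiar shape of an oscillator trace times the Euler factor. In a toy model with a single Heisenberg oscillator satisfying $[\mathfrak{a}_{m},\mathfrak{a}_{n}]=m\,\delta_{m+n,0}$ on a Fock space whose basis is indexed by ordinary partitions (a simplification that suppresses the cohomological pairing and signs of the text), one has $\mathrm{Tr}\,q^{\mathfrak{d}}\,\mathfrak{a}_{-n}\mathfrak{a}_{n}=\frac{nq^{n}}{1-q^{n}}\prod_{m\geq 1}(1-q^{m})^{-1}$, matching the factor $\frac{n_{1}q^{n_{1}}}{1-q^{n_{1}}}\,(q;q)_{\infty}^{-1}$ above up to sign conventions. The sketch below checks this series identity order by order; it is a consistency check under the stated toy assumptions, not the paper's computation.

```python
def partitions(k, max_part=None):
    """Generate the partitions of k as dicts {part: multiplicity}."""
    if max_part is None:
        max_part = k
    if k == 0:
        yield {}
        return
    for p in range(min(k, max_part), 0, -1):
        for rest in partitions(k - p, p):
            d = dict(rest)
            d[p] = d.get(p, 0) + 1
            yield d

K = 12  # truncation order in q
n = 2   # oscillator index being tested

# LHS: on the basis vector labeled by a partition mu, the operator
# a_{-n} a_n acts with eigenvalue n * m_n(mu), so the graded trace is
# sum over mu of n * m_n(mu) * q^{|mu|}.
lhs = [0] * (K + 1)
for k in range(K + 1):
    for mu in partitions(k):
        lhs[k] += n * mu.get(n, 0)

# RHS: coefficients of n q^n/(1-q^n) * prod_{m>=1} (1-q^m)^{-1}.
def mul(a, b):
    # multiply truncated power series (coefficient lists up to order K)
    c = [0] * (K + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= K:
                c[i + j] += ai * bj
    return c

geom = [0] * (K + 1)
for s in range(1, K // n + 1):
    geom[s * n] = n                      # n q^n/(1-q^n) = sum_{s>=1} n q^{sn}
series = [0] * (K + 1)
series[0] = 1
for m in range(1, K + 1):                # multiply by 1/(1-q^m)
    series = mul(series, [1 if i % m == 0 else 0 for i in range(K + 1)])
rhs = mul(geom, series)

print(lhs == rhs)  # -> True
```

The check works for any oscillator index `n`; the left-hand side is computed state by state, the right-hand side by truncated power-series arithmetic.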
Case 2: $\sum_{i=1}^{N}|{\lambda}^{(i)}|=0$ but $|{\lambda}^{(i_{0})}|\neq 0$
for some $i_{0}$. Then, $N\geq 2$, and we may assume that $|{\lambda}^{(i_{0})}|<0$.
To simplify the expressions, we further assume that
every $\alpha_{i}$ has an even degree. Note that $A_{N}$ can be rewritten as
$$\displaystyle{\rm Tr}\,q^{\mathfrak{d}}{\mathfrak{a}_{{\lambda}^{(i_{0})}}(%
\alpha_{i_{0}})\over{\lambda}^{({i_{0}})}!}\prod_{1\leq i\leq N,i\neq i_{0}}{%
\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}$$
$$\displaystyle+$$
$$\displaystyle\sum_{r=1}^{i_{0}-1}{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=1}^{r-1}{%
\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}\cdot\left[{%
\mathfrak{a}_{{\lambda}^{(r)}}(\alpha_{r})\over{\lambda}^{(r)}!},{\mathfrak{a}%
_{{\lambda}^{(i_{0})}}(\alpha_{i_{0}})\over{\lambda}^{({i_{0}})}!}\right]\cdot%
\prod_{r+1\leq i\leq N,i\neq i_{0}}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})%
\over{\lambda}^{(i)}!}.$$
Since $q^{\mathfrak{d}}\mathfrak{a}_{{\lambda}^{(i_{0})}}(\alpha_{i_{0}})=q^{-|{%
\lambda}^{(i_{0})}|}\mathfrak{a}_{{\lambda}^{(i_{0})}}(\alpha_{i_{0}})q^{%
\mathfrak{d}}$, we see that $A_{N}$ is equal to
$$\displaystyle q^{-|{\lambda}^{(i_{0})}|}\,{\rm Tr}\,{\mathfrak{a}_{{\lambda}^{%
(i_{0})}}(\alpha_{i_{0}})\over{\lambda}^{({i_{0}})}!}q^{\mathfrak{d}}\prod_{1%
\leq i\leq N,i\neq i_{0}}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{%
\lambda}^{(i)}!}$$
$$\displaystyle+\sum_{r=1}^{i_{0}-1}{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=1}^{r-1}{%
\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}\cdot\left[{%
\mathfrak{a}_{{\lambda}^{(r)}}(\alpha_{r})\over{\lambda}^{(r)}!},{\mathfrak{a}%
_{{\lambda}^{(i_{0})}}(\alpha_{i_{0}})\over{\lambda}^{({i_{0}})}!}\right]\cdot%
\prod_{r+1\leq i\leq N,i\neq i_{0}}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})%
\over{\lambda}^{(i)}!}$$
$$\displaystyle=$$
$$\displaystyle q^{-|{\lambda}^{(i_{0})}|}\,{\rm Tr}\,q^{\mathfrak{d}}\prod_{1%
\leq i\leq N,i\neq i_{0}}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{%
\lambda}^{(i)}!}\cdot{\mathfrak{a}_{{\lambda}^{(i_{0})}}(\alpha_{i_{0}})\over{%
\lambda}^{({i_{0}})}!}$$
$$\displaystyle+\sum_{r=1}^{i_{0}-1}{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=1}^{r-1}{%
\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}\cdot\left[{%
\mathfrak{a}_{{\lambda}^{(r)}}(\alpha_{r})\over{\lambda}^{(r)}!},{\mathfrak{a}%
_{{\lambda}^{(i_{0})}}(\alpha_{i_{0}})\over{\lambda}^{({i_{0}})}!}\right]\cdot%
\prod_{r+1\leq i\leq N,i\neq i_{0}}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})%
\over{\lambda}^{(i)}!}.$$
Note that ${\rm Tr}\,q^{\mathfrak{d}}\prod_{1\leq i\leq N,i\neq i_{0}}{\mathfrak{a}_{{%
\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}\cdot{\mathfrak{a}_{{\lambda}%
^{(i_{0})}}(\alpha_{i_{0}})\over{\lambda}^{({i_{0}})}!}$ is equal to
$$A_{N}+\sum_{r=i_{0}+1}^{N}{\rm Tr}\,q^{\mathfrak{d}}\prod_{1\leq i\leq r-1,i%
\neq i_{0}}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}%
\cdot\left[{\mathfrak{a}_{{\lambda}^{(r)}}(\alpha_{r})\over{\lambda}^{(r)}!},{%
\mathfrak{a}_{{\lambda}^{(i_{0})}}(\alpha_{i_{0}})\over{\lambda}^{({i_{0}})}!}%
\right]\cdot\prod_{i=r+1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{%
\lambda}^{(i)}!}.$$
Therefore, we conclude that $(1-q^{-|{\lambda}^{(i_{0})}|})A_{N}$ is equal to
$$q^{-|{\lambda}^{(i_{0})}|}\,\sum_{r=i_{0}+1}^{N}{\rm Tr}\,q^{\mathfrak{d}}%
\prod_{1\leq i\leq r-1,i\neq i_{0}}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})%
\over{\lambda}^{(i)}!}\cdot\left[{\mathfrak{a}_{{\lambda}^{(r)}}(\alpha_{r})%
\over{\lambda}^{(r)}!},{\mathfrak{a}_{{\lambda}^{(i_{0})}}(\alpha_{i_{0}})%
\over{\lambda}^{({i_{0}})}!}\right]\cdot\prod_{i=r+1}^{N}{\mathfrak{a}_{{%
\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}$$
$$+\sum_{r=1}^{i_{0}-1}{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=1}^{r-1}{\mathfrak{a}_%
{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}\cdot\left[{\mathfrak{a}_{{%
\lambda}^{(r)}}(\alpha_{r})\over{\lambda}^{(r)}!},{\mathfrak{a}_{{\lambda}^{(i%
_{0})}}(\alpha_{i_{0}})\over{\lambda}^{({i_{0}})}!}\right]\cdot\prod_{r+1\leq i%
\leq N,i\neq i_{0}}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{%
(i)}!}.$$
Put $n_{0}=-|{\lambda}^{(i_{0})}|>0$. It follows that $A_{N}$ is equal to
$${q^{n_{0}}\over 1-q^{n_{0}}}\,\sum_{r=i_{0}+1}^{N}{\rm Tr}\,q^{\mathfrak{d}}%
\prod_{1\leq i\leq r-1,i\neq i_{0}}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})%
\over{\lambda}^{(i)}!}\cdot\left[{\mathfrak{a}_{{\lambda}^{(r)}}(\alpha_{r})%
\over{\lambda}^{(r)}!},{\mathfrak{a}_{{\lambda}^{(i_{0})}}(\alpha_{i_{0}})%
\over{\lambda}^{({i_{0}})}!}\right]\cdot\prod_{i=r+1}^{N}{\mathfrak{a}_{{%
\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}$$
$$\displaystyle+{1\over 1-q^{n_{0}}}\,\sum_{r=1}^{i_{0}-1}{\rm Tr}\,q^{\mathfrak%
{d}}\prod_{i=1}^{r-1}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}%
^{(i)}!}\cdot\left[{\mathfrak{a}_{{\lambda}^{(r)}}(\alpha_{r})\over{\lambda}^{%
(r)}!},{\mathfrak{a}_{{\lambda}^{(i_{0})}}(\alpha_{i_{0}})\over{\lambda}^{({i_%
{0}})}!}\right]\cdot\prod_{r+1\leq i\leq N,i\neq i_{0}}{\mathfrak{a}_{{\lambda%
}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}.$$
(4.11)
By Lemma 2.4 (i) and (ii), our lemma holds in this case as well.
∎
The following theorem provides the structure of the trace
$${\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}{\mathfrak{a%
}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}.$$
Theorem 4.5.
For $1\leq i\leq N$, let ${\lambda}^{(i)}=\big{(}\cdots(-n)^{\tilde{m}^{(i)}_{n}}\cdots(-1)^{\tilde{m}^{(i)}_{1}}1^{m^{(i)}_{1}}\cdots n^{m^{(i)}_{n}}\cdots\big{)}$ and $\alpha_{i}\in H^{*}(X)$ be homogeneous.
Then, $\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{%
N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}}$ is equal to
$$z^{\sum_{i=1}^{N}|{\lambda}^{(i)}|}\cdot(q;q)_{\infty}^{-\chi(X)}\cdot\prod_{i%
=1}^{N}\big{\langle}(1_{X}-K_{X})^{\sum_{n\geq 1}m^{(i)}_{n}},\alpha_{i}\big{%
\rangle}\cdot$$
$$\cdot\prod_{1\leq i\leq N,n\geq 1}\left({(-1)^{m^{(i)}_{n}}\over m^{(i)}_{n}!}%
{q^{nm^{(i)}_{n}}\over(1-q^{n})^{m^{(i)}_{n}}}{1\over\tilde{m}^{(i)}_{n}!}{1%
\over(1-q^{n})^{\tilde{m}^{(i)}_{n}}}\right)+\widetilde{W},$$
and the lower weight term $\widetilde{W}$ is a linear combination of expressions of the form:
$$\displaystyle z^{\sum_{i=1}^{N}|{\lambda}^{(i)}|}\cdot(q;q)_{\infty}^{-\chi(X)%
}\cdot{\rm Sign}(\pi)\cdot\prod_{i=1}^{u}\left\langle K_{X}^{r_{i}}e_{X}^{r_{i%
}^{\prime}},\prod_{j\in\pi_{i}}\alpha_{j}\right\rangle\cdot\prod_{i=1}^{v}{q^{%
n_{i}w_{i}p_{i}}\over(1-q^{n_{i}})^{w_{i}}}$$
(4.12)
where $\sum_{i=1}^{v}w_{i}<\sum_{i=1}^{N}\ell({\lambda}^{(i)})$,
the integers $u,v$, $r_{i},r_{i}^{\prime}\geq 0$, $n_{i}>0$, $w_{i}>0$, $p_{i}\in\{0,1\}$ and the partition $\pi=\{\pi_{1},\ldots,\pi_{u}\}$
of $\{1,\ldots,N\}$ depend only on the generalized partitions ${\lambda}^{(1)},\ldots,{\lambda}^{(N)}$, and
${\rm Sign}(\pi)$ is the sign compensating the formal difference between
$\prod_{i=1}^{u}\prod_{j\in\pi_{i}}\alpha_{j}$ and $\alpha_{1}\cdots\alpha_{N}$.
Moreover, the coefficients of this linear combination are independent of
$q,\alpha_{1},\ldots,\alpha_{N}$ and $X$.
Proof.
For simplicity, put ${\rm Tr}_{\lambda}=\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1%
},z)\,\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}%
^{(i)}!}}$.
Combining Lemma 4.2 and Lemma 4.3,
we conclude that ${\rm Tr}_{\lambda}$ is equal to
$$\sum_{\sum_{i=1}^{N}(|{\lambda}^{(i)}|-\sum_{s\geq 1}|\mu^{(i,s)}|-\sum_{t\geq
1%
}|\tilde{\mu}^{(i,t)}|)=0\atop\mu^{(i,s)}\in\widetilde{\mathcal{P}}_{+},\,%
\tilde{\mu}^{(i,t)}\in\widetilde{\mathcal{P}}_{-}}\,\prod_{1\leq i\leq N\atop s%
,n\geq 1}{(-(zq^{s})^{n})^{m^{(i,s)}_{n}}\over m^{(i,s)}_{n}!}\cdot\prod_{1%
\leq i\leq N\atop t,n\geq 1}{(z^{-1}q^{t-1})^{n\tilde{m}^{(i,t)}_{n}}\over{%
\tilde{m}^{(i,t)}_{n}}!}\cdot$$
$$\displaystyle\cdot{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=1}^{N}{\mathfrak{a}_{{%
\lambda}^{(i)}-\sum_{s\geq 1}\mu^{(i,s)}-\sum_{t\geq 1}\tilde{\mu}^{(i,t)}}%
\big{(}(1_{X}-K_{X})^{\sum_{s,n\geq 1}m^{(i,s)}_{n}}\alpha_{i}\big{)}\over\big%
{(}{\lambda}^{(i)}-\sum_{s\geq 1}\mu^{(i,s)}-\sum_{t\geq 1}\tilde{\mu}^{(i,t)}%
\big{)}!}$$
where $\mu^{(i,s)}=\big{(}1^{m^{(i,s)}_{1}}\cdots n^{m^{(i,s)}_{n}}\cdots\big{)}$
and $\tilde{\mu}^{(i,t)}=\big{(}\cdots(-n)^{\tilde{m}^{(i,t)}_{n}}\cdots(-1)^{%
\tilde{m}^{(i,t)}_{1}}\big{)}$.
The sum of all the exponents of $z$ is $\sum_{i=1}^{N}|{\lambda}^{(i)}|$. So ${\rm Tr}_{\lambda}$ is equal to
$$z^{\sum_{i=1}^{N}|{\lambda}^{(i)}|}\cdot\sum_{\sum_{i=1}^{N}(|{\lambda}^{(i)}|%
-\sum_{s\geq 1}|\mu^{(i,s)}|-\sum_{t\geq 1}|\tilde{\mu}^{(i,t)}|)=0\atop\mu^{(%
i,s)}\in\widetilde{\mathcal{P}}_{+},\,\tilde{\mu}^{(i,t)}\in\widetilde{%
\mathcal{P}}_{-}}\,\prod_{1\leq i\leq N\atop s,n\geq 1}{(-q^{sn})^{m^{(i,s)}_{%
n}}\over m^{(i,s)}_{n}!}\cdot\prod_{1\leq i\leq N\atop t,n\geq 1}{q^{(t-1)n%
\tilde{m}^{(i,t)}_{n}}\over{\tilde{m}^{(i,t)}_{n}}!}\cdot$$
$$\displaystyle\cdot{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=1}^{N}{\mathfrak{a}_{{%
\lambda}^{(i)}-\sum_{s\geq 1}\mu^{(i,s)}-\sum_{t\geq 1}\tilde{\mu}^{(i,t)}}%
\big{(}(1_{X}-K_{X})^{\sum_{s,n\geq 1}m^{(i,s)}_{n}}\alpha_{i}\big{)}\over\big%
{(}{\lambda}^{(i)}-\sum_{s\geq 1}\mu^{(i,s)}-\sum_{t\geq 1}\tilde{\mu}^{(i,t)}%
\big{)}!}.$$
(4.13)
By our convention, $\sum_{s\geq 1}\mu^{(i,s)}+\sum_{t\geq 1}\tilde{\mu}^{(i,t)}\leq{\lambda}^{(i)}$ for every $1\leq i\leq N$. We now divide the rest of the proof into
Case A and Case B.
Case A:
$\sum_{s\geq 1}\mu^{(i,s)}+\sum_{t\geq 1}\tilde{\mu}^{(i,t)}={\lambda}^{(i)}$ for every $1\leq i\leq N$. Then line (4.13) is
$${\rm Tr}\,q^{\mathfrak{d}}\cdot\prod_{i=1}^{N}\big{\langle}(1_{X}-K_{X})^{\sum%
_{s,n\geq 1}m^{(i,s)}_{n}},\alpha_{i}\big{\rangle}=(q;q)_{\infty}^{-\chi(X)}%
\prod_{i=1}^{N}\big{\langle}(1_{X}-K_{X})^{\sum_{s,n\geq 1}m^{(i,s)}_{n}},%
\alpha_{i}\big{\rangle}.$$
Therefore, the contribution $C_{1}$ of this case to ${\rm Tr}_{\lambda}$ is equal to
$$z^{\sum_{i=1}^{N}|{\lambda}^{(i)}|}\cdot(q;q)_{\infty}^{-\chi(X)}\prod_{i=1}^{%
N}\big{\langle}(1_{X}-K_{X})^{\sum_{n\geq 1}m^{(i)}_{n}},\alpha_{i}\big{%
\rangle}\cdot$$
$$\cdot\sum_{\sum_{s\geq 1}m^{(i,s)}_{n}=m^{(i)}_{n}\atop 1\leq i\leq N,\,n\geq 1%
}\,\prod_{1\leq i\leq N\atop s,n\geq 1}{(-q^{sn})^{m^{(i,s)}_{n}}\over m^{(i,s%
)}_{n}!}\cdot\sum_{\sum_{t\geq 1}\tilde{m}^{(i,t)}_{n}=\tilde{m}^{(i)}_{n}%
\atop 1\leq i\leq N,\,n\geq 1}\,\prod_{1\leq i\leq N\atop t,n\geq 1}{q^{(t-1)n%
\tilde{m}^{(i,t)}_{n}}\over{\tilde{m}^{(i,t)}_{n}}!}.$$
Rewrite $q^{sn}$ as $q^{(s-1)n}q^{n}$. Then the contribution $C_{1}$ is equal to
$$z^{\sum_{i=1}^{N}|{\lambda}^{(i)}|}\cdot(q;q)_{\infty}^{-\chi(X)}\prod_{i=1}^{%
N}\big{\langle}(1_{X}-K_{X})^{\sum_{n\geq 1}m^{(i)}_{n}},\alpha_{i}\big{%
\rangle}\cdot\prod_{1\leq i\leq N,n\geq 1}(-q^{n})^{m^{(i)}_{n}}\cdot$$
$$\cdot\sum_{\sum_{s\geq 1}m^{(i,s)}_{n}=m^{(i)}_{n}\atop 1\leq i\leq N,\,n\geq 1%
}\,\prod_{1\leq i\leq N\atop s,n\geq 1}{q^{(s-1)nm^{(i,s)}_{n}}\over m^{(i,s)}%
_{n}!}\cdot\sum_{\sum_{t\geq 1}\tilde{m}^{(i,t)}_{n}=\tilde{m}^{(i)}_{n}\atop 1%
\leq i\leq N,\,n\geq 1}\,\prod_{1\leq i\leq N\atop t,n\geq 1}{q^{(t-1)n\tilde{%
m}^{(i,t)}_{n}}\over{\tilde{m}^{(i,t)}_{n}}!}.$$
Since, by the multinomial theorem together with the geometric series $\sum_{s\geq 1}q^{(s-1)n}=(1-q^{n})^{-1}$, we have $\displaystyle{\sum_{\sum_{s\geq 1}i_{s,n}=i_{n},\,n\geq 1}\,\prod_{s,n\geq 1}{(q^{(s-1)n})^{i_{s,n}}\over i_{s,n}!}=\prod_{n\geq 1}\left({1\over i_{n}!}{1\over(1-q^{n})^{i_{n}}}\right)}$,
$C_{1}$ is equal to
$$z^{\sum_{i=1}^{N}|{\lambda}^{(i)}|}\cdot(q;q)_{\infty}^{-\chi(X)}\prod_{i=1}^{%
N}\big{\langle}(1_{X}-K_{X})^{\sum_{n\geq 1}m^{(i)}_{n}},\alpha_{i}\big{%
\rangle}\cdot$$
$$\displaystyle\cdot\prod_{1\leq i\leq N,n\geq 1}\left({(-1)^{m^{(i)}_{n}}\over m%
^{(i)}_{n}!}{q^{nm^{(i)}_{n}}\over(1-q^{n})^{m^{(i)}_{n}}}\right)\cdot\prod_{1%
\leq i\leq N,n\geq 1}\left({1\over\tilde{m}^{(i)}_{n}!}{1\over(1-q^{n})^{%
\tilde{m}^{(i)}_{n}}}\right).$$
(4.14)
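The resummation used to obtain (4.14), namely $\sum_{\sum_{s}i_{s,n}=i_{n}}\prod_{s,n}(q^{(s-1)n})^{i_{s,n}}/i_{s,n}!=\prod_{n}\big(i_{n}!\,(1-q^{n})^{i_{n}}\big)^{-1}$, follows from the multinomial theorem and the geometric series, and can be checked numerically. The sketch below is an illustration with hypothetical data: the exponents `i_counts`, the value of `q`, and the truncation `S` of the range of $s$ are our choices, not quantities from the paper.

```python
from math import factorial, isclose

def weak_compositions(k, parts):
    # all ways to write k as an ordered sum of `parts` nonnegative integers
    if parts == 1:
        yield (k,)
        return
    for first in range(k + 1):
        for rest in weak_compositions(k - first, parts - 1):
            yield (first,) + rest

def lhs(i_counts, q, S=60):
    # sum over {i_{s,n}} with sum_s i_{s,n} = i_n of
    # prod_{s,n} (q^{(s-1)n})^{i_{s,n}} / i_{s,n}!  (s truncated at S)
    total = 1.0
    for n, i_n in i_counts.items():   # the sum factorizes over n
        acc = 0.0
        for comp in weak_compositions(i_n, S):
            term = 1.0
            for s, c in enumerate(comp, start=1):
                term *= q ** ((s - 1) * n * c) / factorial(c)
            acc += term
        total *= acc
    return total

def rhs(i_counts, q):
    # closed form: prod_n 1 / (i_n! * (1 - q^n)^{i_n})
    total = 1.0
    for n, i_n in i_counts.items():
        total *= 1.0 / (factorial(i_n) * (1.0 - q ** n) ** i_n)
    return total

i_counts = {1: 2, 3: 1}   # hypothetical exponents: i_1 = 2, i_3 = 1
q = 0.3
print(isclose(lhs(i_counts, q), rhs(i_counts, q), rel_tol=1e-9))  # -> True
```

Since $0<q<1$, truncating the geometric range of $s$ at `S` introduces an error of order $q^{(S-1)n}$, which is negligible at the chosen tolerance.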
Case B: $\sum_{s\geq 1}\mu^{(i,s)}+\sum_{t\geq 1}\tilde{\mu}^{(i,t)}<{\lambda}^{(i)}$ for some $1\leq i\leq N$. Without loss of generality, we may
assume that $\sum_{s\geq 1}\mu^{(i,s)}+\sum_{t\geq 1}\tilde{\mu}^{(i,t)}={\lambda}^{(i)}$ for every $1\leq i\leq N_{1}$ where $N_{1}<N$,
and $\sum_{s\geq 1}\mu^{(i,s)}+\sum_{t\geq 1}\tilde{\mu}^{(i,t)}<{\lambda}^{(i)}$ for every $N_{1}+1\leq i\leq N$. For $N_{1}+1\leq i\leq N$, put
$\sum_{s\geq 1}\mu^{(i,s)}+\sum_{t\geq 1}\tilde{\mu}^{(i,t)}=\tilde{\lambda}^{(%
i)}=\big{(}\cdots(-n)^{\tilde{p}^{(i)}_{n}}\cdots(-1)^{\tilde{p}^{(i)}_{1}}1^{%
p^{(i)}_{1}}\cdots n^{p^{(i)}_{n}}\cdots\big{)}$.
An argument similar to that in the previous paragraph shows that
for the fixed generalized partitions $\tilde{\lambda}^{(i)}$ with $N_{1}+1\leq i\leq N$,
the contribution $C_{2}$ of this case to ${\rm Tr}_{\lambda}$ is equal to
$$z^{\sum_{i=1}^{N}|{\lambda}^{(i)}|}\cdot\prod_{i=1}^{N_{1}}\big{\langle}(1_{X}%
-K_{X})^{\sum_{n\geq 1}m^{(i)}_{n}},\alpha_{i}\big{\rangle}\cdot\prod_{1\leq i%
\leq N_{1}\atop n\geq 1}\left({(-1)^{m^{(i)}_{n}}\over m^{(i)}_{n}!}{q^{nm^{(i%
)}_{n}}\over(1-q^{n})^{m^{(i)}_{n}}}{1\over\tilde{m}^{(i)}_{n}!}{1\over(1-q^{n%
})^{\tilde{m}^{(i)}_{n}}}\right)$$
$$\cdot\prod_{N_{1}+1\leq i\leq N\atop n\geq 1}\left({(-1)^{p^{(i)}_{n}}\over p^%
{(i)}_{n}!}{q^{np^{(i)}_{n}}\over(1-q^{n})^{p^{(i)}_{n}}}{1\over\tilde{p}^{(i)%
}_{n}!}{1\over(1-q^{n})^{\tilde{p}^{(i)}_{n}}}\right)$$
$$\displaystyle\cdot{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=N_{1}+1}^{N}{\mathfrak{a}%
_{{\lambda}^{(i)}-\tilde{\lambda}^{(i)}}\big{(}(1_{X}-K_{X})^{\sum_{n\geq 1}p^%
{(i)}_{n}}\alpha_{i}\big{)}\over\big{(}{\lambda}^{(i)}-\tilde{\lambda}^{(i)}%
\big{)}!}.$$
(4.15)
By Lemma 4.4, $C_{2}$ is a linear combination of expressions of the form:
$$z^{\sum_{i=1}^{N}|{\lambda}^{(i)}|}\cdot\prod_{i=1}^{N_{1}}\big{\langle}(1_{X}%
-K_{X})^{\sum_{n\geq 1}m^{(i)}_{n}},\alpha_{i}\big{\rangle}\cdot\prod_{1\leq i%
\leq N_{1}\atop n\geq 1}\left({(-1)^{m^{(i)}_{n}}\over m^{(i)}_{n}!}{q^{nm^{(i%
)}_{n}}\over(1-q^{n})^{m^{(i)}_{n}}}{1\over\tilde{m}^{(i)}_{n}!}{1\over(1-q^{n%
})^{\tilde{m}^{(i)}_{n}}}\right)$$
$$\displaystyle\cdot\prod_{N_{1}+1\leq i\leq N\atop n\geq 1}\left({(-1)^{p^{(i)}%
_{n}}\over p^{(i)}_{n}!}{q^{np^{(i)}_{n}}\over(1-q^{n})^{p^{(i)}_{n}}}{1\over%
\tilde{p}^{(i)}_{n}!}{1\over(1-q^{n})^{\tilde{p}^{(i)}_{n}}}\right)\cdot$$
$$\displaystyle(q;q)_{\infty}^{-\chi(X)}\cdot{\rm Sign}(\pi)\cdot\prod_{i=1}^{u}%
\left\langle e_{X}^{m_{i}},\prod_{j\in\pi_{i}}\big{(}(1_{X}-K_{X})^{\sum_{n%
\geq 1}p^{(j)}_{n}}\alpha_{j}\big{)}\right\rangle\cdot\prod_{i=1}^{v}{q^{n_{i}%
}\over 1-q^{n_{i}}}$$
where $v<\sum_{i=N_{1}+1}^{N}\ell({\lambda}^{(i)}-\tilde{\lambda}^{(i)})$, $n_{i}>0$, $m_{i}\geq 0$,
$\{\pi_{1},\ldots,\pi_{u}\}$ is a partition of $\{N_{1}+1,\ldots,N\}$,
and ${\rm Sign}(\pi)$ compensates the formal difference between
$\prod_{i=1}^{u}\prod_{j\in\pi_{i}}\alpha_{j}$ and $\alpha_{N_{1}+1}\cdots\alpha_{N}$.
The coefficients of this linear combination are independent of
$q,\alpha_{1},\ldots,\alpha_{N}$ and $X$, and depend only on
the partitions ${\lambda}^{(i)}-\tilde{\lambda}^{(i)}$. Note that
for nonnegative integers $a$ and $b$, the pairing
$\langle e_{X}^{a},(1_{X}-K_{X})^{b}\beta\rangle=\langle e_{X}^{a}(1_{X}-K_{X})%
^{b},\beta\rangle$
is a linear combination of $\langle e_{X}^{a}K_{X}^{c},\beta\rangle,0\leq c\leq b$.
In addition, comparing the weights appearing in $C_{2}$, we have
$$\sum_{1\leq i\leq N_{1},n\geq 1}\big{(}m^{(i)}_{n}+\tilde{m}^{(i)}_{n}\big{)}+\sum_{N_{1}+1\leq i\leq N,n\geq 1}\big{(}p^{(i)}_{n}+\tilde{p}^{(i)}_{n}\big{)}+v<\sum_{i=1}^{N}\ell({\lambda}^{(i)}).$$
It follows that $C_{2}$ is a linear combination of the expressions (4.12).
Combining with (4.14) completes the proof of our theorem.
∎
Remark 4.6.
When $N=1$, we can work out the lower weight term $\widetilde{W}$ in Theorem 4.5
by examining its proof more carefully and by using (4.10). To state the result,
let ${\lambda}=(\cdots(-n)^{\tilde{m}_{n}}\cdots(-1)^{\tilde{m}_{1}}1^{m_{1}}\cdots n^{m_{n}}\cdots)\in\widetilde{\mathcal{P}}$.
For $n_{1}\geq 1$ with $m_{n_{1}}\cdot\tilde{m}_{n_{1}}\geq 1$, define
$m_{n_{1}}(n_{1})=m_{n_{1}}-1$, $\tilde{m}_{n_{1}}(n_{1})=\tilde{m}_{n_{1}}-1$,
and $m_{n}(n_{1})=m_{n}$ and $\tilde{m}_{n}(n_{1})=\tilde{m}_{n}$ if $n\neq n_{1}$. Then,
$\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,{\mathfrak{a}%
_{\lambda}(\alpha)\over{\lambda}!}}$
is equal to the sum
$$\displaystyle z^{|{\lambda}|}\cdot(q;q)_{\infty}^{-\chi(X)}\cdot\langle(1_{X}-%
K_{X})^{\sum_{n\geq 1}m_{n}},\alpha\rangle\cdot$$
$$\displaystyle\quad\cdot\prod_{n\geq 1}\left({(-1)^{m_{n}}\over m_{n}!}{q^{nm_{%
n}}\over(1-q^{n})^{m_{n}}}{1\over\tilde{m}_{n}!}{1\over(1-q^{n})^{\tilde{m}_{n%
}}}\right)$$
$$\displaystyle+$$
$$\displaystyle z^{|{\lambda}|}\cdot(q;q)_{\infty}^{-\chi(X)}\cdot\langle e_{X},%
\alpha\rangle\cdot\sum_{n_{1}\geq 1\,\text{\rm with }m_{n_{1}}\cdot\tilde{m}_{%
n_{1}}\geq 1}{n_{1}q^{n_{1}}\over 1-q^{n_{1}}}\cdot$$
$$\displaystyle\quad\cdot\prod_{n\geq 1}\left({(-1)^{m_{n}(n_{1})}\over m_{n}(n_%
{1})!}{q^{nm_{n}(n_{1})}\over(1-q^{n})^{m_{n}(n_{1})}}{1\over\tilde{m}_{n}(n_{%
1})!}{1\over(1-q^{n})^{\tilde{m}_{n}(n_{1})}}\right).$$
The next lemma is used to organize the leading term in Theorem 4.5.
Lemma 4.7.
For $\alpha\in H^{*}(X)$ and $k\geq 0$, define $\Theta^{\alpha}_{k}(q)$ to be
$$\displaystyle-\sum_{\ell({\lambda})=k+2,|{\lambda}|=0}\big{\langle}(1_{X}-K_{X%
})^{\sum_{n\geq 1}i_{n}},\alpha\big{\rangle}\cdot\prod_{n\geq 1}\left({(-1)^{i%
_{n}}\over i_{n}!}{q^{ni_{n}}\over(1-q^{n})^{i_{n}}}{1\over\tilde{i}_{n}!}{1%
\over(1-q^{n})^{\tilde{i}_{n}}}\right)$$
(4.16)
where ${\lambda}=(\cdots(-n)^{\tilde{i}_{n}}\cdots(-1)^{\tilde{i}_{1}}1^{i_{1}}\cdots n^{i_{n}}\cdots)$.
Then, $\Theta^{\alpha}_{k}(q)={\rm Coeff}_{z^{0}}\Theta^{\alpha}_{k}(q,z)$,
the coefficient of $z^{0}$ in the series $\Theta^{\alpha}_{k}(q,z)$ defined by
$$-\sum_{a,s_{1},\ldots,s_{a},b,t_{1},\ldots,t_{b}\geq 1\atop\sum_{i=1}^{a}s_{i}%
+\sum_{j=1}^{b}t_{j}=k+2}\big{\langle}(1_{X}-K_{X})^{\sum_{i=1}^{a}s_{i}},%
\alpha\big{\rangle}\prod_{i=1}^{a}{(-1)^{s_{i}}\over s_{i}!}\cdot\prod_{j=1}^{%
b}{1\over t_{j}!}$$
$$\displaystyle\cdot\sum_{n_{1}>\cdots>n_{a}}\prod_{i=1}^{a}{(qz)^{n_{i}s_{i}}%
\over(1-q^{n_{i}})^{s_{i}}}\cdot\sum_{m_{1}>\cdots>m_{b}}\prod_{j=1}^{b}{z^{-m%
_{j}t_{j}}\over(1-q^{m_{j}})^{t_{j}}}.$$
(4.17)
Proof.
Put $A=\big{\langle}(1_{X}-K_{X})^{\sum_{n\geq 1}i_{n}},\alpha\big{\rangle}$
which implicitly depends on $\sum_{n\geq 1}i_{n}$. Rewrite $|{\lambda}|$ and $\ell({\lambda})$ in terms of
the integers $i_{n}$ and $\tilde{i}_{n}$. Then, $\Theta^{\alpha}_{k}(q)$ is equal to
$$\displaystyle-\sum_{\sum_{n\geq 1}i_{n}+\sum_{n\geq 1}\tilde{i}_{n}=k+2\atop%
\sum_{n\geq 1}ni_{n}=\sum_{n\geq 1}n\tilde{i}_{n}>0}A\,\prod_{n\geq 1}\left({(%
-1)^{i_{n}}\over i_{n}!}{q^{ni_{n}}\over(1-q^{n})^{i_{n}}}{1\over\tilde{i}_{n}%
!}{1\over(1-q^{n})^{\tilde{i}_{n}}}\right).$$
(4.18)
Denote the positive integers in the ordered list $\{i_{1},\ldots,i_{n},\ldots\}$
by $s_{a},\ldots,s_{1}$ respectively (e.g., if the ordered list
$\{i_{1},\ldots,i_{n},\ldots\}$ is $\{2,0,5,4,0,\ldots\}$, then $a=3$ with
$s_{3}=2,s_{2}=5,s_{1}=4$). We have $a\geq 1$ since $\sum_{n\geq 1}ni_{n}>0$. Similarly,
denote the positive integers in the ordered list
$\{\tilde{i}_{1},\ldots,\tilde{i}_{n},\ldots\}$ by $t_{b},\ldots,t_{1}$ respectively; then $b\geq 1$ since $\sum_{n\geq 1}n\tilde{i}_{n}>0$.
Since $\sum_{n\geq 1}i_{n}=\sum_{i=1}^{a}s_{i}$, we get
$A=\big{\langle}(1_{X}-K_{X})^{\sum_{i=1}^{a}s_{i}},\alpha\big{\rangle}$.
Rewriting (4.18) in terms of $s_{a},\ldots,s_{1}$ and $t_{b},\ldots,t_{1}$,
we see that $\Theta^{\alpha}_{k}(q)={\rm Coeff}_{z^{0}}\Theta^{\alpha}_{k}(q,z)$.
∎
We remark that the multiple $q$-zeta value $\Theta^{\alpha}_{k}(q,z)$ has weight $(k+2)$.
Theorem 4.8.
For $1\leq i\leq N$, let $k_{i}\geq 0$ and $\alpha_{i}\in H^{*}(X)$ be homogeneous. Then,
$$\displaystyle F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)=(q;q)_{%
\infty}^{-\chi(X)}\cdot{\rm Coeff}_{z_{1}^{0}\cdots z_{N}^{0}}\left(\prod_{i=1%
}^{N}\Theta^{\alpha_{i}}_{k_{i}}(q,z_{i})\right)+W_{1},$$
(4.19)
and the lower weight term $W_{1}$ is an infinite linear combination of the expressions:
$$\displaystyle(q;q)_{\infty}^{-\chi(X)}\cdot{\rm Sign}(\pi)\cdot\prod_{i=1}^{u}%
\left\langle K_{X}^{r_{i}}e_{X}^{r_{i}^{\prime}},\prod_{j\in\pi_{i}}\alpha_{j}%
\right\rangle\cdot\prod_{i=1}^{v}{q^{n_{i}w_{i}p_{i}}\over(1-q^{n_{i}})^{w_{i}}}$$
(4.20)
where $\sum_{i=1}^{v}w_{i}<\sum_{i=1}^{N}(k_{i}+2)$,
and the integers $u,v,r_{i},r_{i}^{\prime}\geq 0,n_{i}>0,w_{i}>0,p_{i}\in\{0,1\}$ and the partition $\pi=\{\pi_{1},\ldots,\pi_{u}\}$
of $\{1,\ldots,N\}$ depend only on the integers $k_{i}$.
Moreover, the coefficients of this linear combination are independent of
$q,\alpha_{i},X$.
Proof.
By Lemma 3.2, $\displaystyle{F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)={\rm Tr%
}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}\mathfrak{G}_{k_{i}%
}(\alpha_{i})}$.
Combining with Theorem 2.3 and Theorem 4.5, we conclude that
$$\displaystyle F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)=%
\widetilde{F}^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)+W_{1,1}$$
(4.21)
where $W_{1,1}$ is an infinite linear combination of
the expressions (4.20), and
$$\displaystyle\widetilde{F}^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q):=(-1)^{N}\cdot\sum_{\ell({\lambda}^{(i)})=k_{i}+2,|{\lambda}^{(i)}|=0\atop 1\leq i\leq N}{\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}.$$
(4.22)
Applying Theorem 4.5 again,
we see that $\widetilde{F}^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)$ is equal to
$$(-1)^{N}(q;q)_{\infty}^{-\chi(X)}\cdot\sum_{\ell({\lambda}^{(i)})=k_{i}+2,|{%
\lambda}^{(i)}|=0\atop 1\leq i\leq N}\prod_{i=1}^{N}\big{\langle}(1_{X}-K_{X})%
^{\sum_{n\geq 1}m^{(i)}_{n}},\alpha_{i}\big{\rangle}\cdot$$
$$\displaystyle\cdot\prod_{1\leq i\leq N,n\geq 1}\left({(-1)^{m^{(i)}_{n}}\over m%
^{(i)}_{n}!}{q^{nm^{(i)}_{n}}\over(1-q^{n})^{m^{(i)}_{n}}}{1\over\tilde{m}^{(i%
)}_{n}!}{1\over(1-q^{n})^{\tilde{m}^{(i)}_{n}}}\right)+W_{1,2}$$
(4.23)
where the lower weight term $W_{1,2}$ is an infinite linear combination of
the expressions (4.20), and we have put
${\lambda}^{(i)}=\big{(}\cdots(-n)^{\tilde{m}^{(i)}_{n}}\cdots(-1)^{\tilde{m}^{(i)}_{1}}1^{m^{(i)}_{1}}\cdots n^{m^{(i)}_{n}}\cdots\big{)}$. So
$$\displaystyle\widetilde{F}^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}%
(q)$$
$$\displaystyle=$$
$$\displaystyle(q;q)_{\infty}^{-\chi(X)}\cdot\prod_{i=1}^{N}\Theta^{\alpha_{i}}_%
{k_{i}}(q)+W_{1,2}$$
(4.24)
$$\displaystyle=$$
$$\displaystyle(q;q)_{\infty}^{-\chi(X)}\cdot{\rm Coeff}_{z_{1}^{0}\cdots z_{N}^%
{0}}\left(\prod_{i=1}^{N}\Theta^{\alpha_{i}}_{k_{i}}(q,z_{i})\right)+W_{1,2}$$
by Lemma 4.7. Putting $W_{1}=W_{1,1}+W_{1,2}$ completes the proof of
(4.19).
∎
Our next goal is to relate the lower weight term $W_{1,2}$ in (4.23)
and (4.24)
to multiple $q$-zeta values (with additional variables $z_{1},\ldots,z_{N}$ inserted).
We will assume $e_{X}\alpha_{i}=0$ for all $1\leq i\leq N$.
We begin with a lemma strengthening Lemma 4.4.
Lemma 4.9.
Let ${\lambda}^{(1)},\ldots,{\lambda}^{(N)}\in\widetilde{\mathcal{P}}$,
and $\alpha_{1},\ldots,\alpha_{N}\in H^{*}(X)$ be homogeneous. Assume that
$e_{X}\alpha_{i}=0$ for every $1\leq i\leq N$, and $\sum_{i=1}^{N}|{\lambda}^{(i)}|=0$. Put
$$A_{N}={\rm Tr}\,q^{\mathfrak{d}}\,\prod_{i=1}^{N}{\mathfrak{a}_{{\lambda}^{(i)%
}}(\alpha_{i})\over{\lambda}^{(i)}!}.$$
(i)
If $\ell({\lambda}^{(i)})\geq 2$ for every $1\leq i\leq N$, then $A_{N}=0$.
(ii)
If $A_{N}\neq 0$, then $A_{N}$ is a linear combination of the expressions:
$$\displaystyle(q;q)_{\infty}^{-\chi(X)}\cdot{\rm Sign}(\pi)\cdot\prod_{i=1}^{u}%
\left\langle 1_{X},\prod_{j\in\pi_{i}}\alpha_{j}\right\rangle\cdot\prod_{i=1}^%
{\tilde{\ell}}{(-\tilde{n}_{i})q^{\tilde{n}_{i}\tilde{p}_{i}}\over 1-q^{\tilde%
{n}_{i}}}$$
(4.25)
$$\displaystyle=$$
$$\displaystyle(q;q)_{\infty}^{-\chi(X)}\cdot{\rm Sign}(\pi)\cdot\prod_{i=1}^{u}%
\left\langle 1_{X},\prod_{j\in\pi_{i}}\alpha_{j}\right\rangle\cdot\prod_{i=1}^%
{\ell}{(-n_{i}^{\prime})^{w_{i}}q^{n_{i}^{\prime}p_{i}}\over(1-q^{n_{i}^{%
\prime}})^{w_{i}}}$$
(4.26)
where $\tilde{\ell}=\sum_{i=1}^{N}\ell({\lambda}^{(i)})/2=\sum_{i=1}^{\ell}w_{i}$;
the integers $\tilde{p}_{i}\in\{0,1\}$, $0\leq p_{i}\leq w_{i}$,
and the partition $\pi=\{\pi_{1},\ldots,\pi_{u}\}$ of $\{1,\ldots,N\}$ depend only on
${\lambda}^{(1)},\ldots,{\lambda}^{(N)}$; the integers
$\tilde{n}_{1},\ldots,\tilde{n}_{\tilde{\ell}}$ are the positive parts (repeated with multiplicities)
in ${\lambda}^{(1)},\ldots,{\lambda}^{(N)}$; the integers $n_{1}^{\prime},\ldots,n_{\ell}^{\prime}$ denote
the distinct integers among $\tilde{n}_{1},\ldots,\tilde{n}_{\tilde{\ell}}$;
and each $n_{i}^{\prime}$ appears $w_{i}$ times in $\tilde{n}_{1},\ldots,\tilde{n}_{\tilde{\ell}}$.
Proof.
(i) As in the proof of Lemma 4.4,
$A_{N}=0$ unless $\ell({\lambda}^{(i)})=2$ and $|\alpha_{i}|=0$ for every $1\leq i\leq N$.
Assume $\ell({\lambda}^{(i)})=2$ and $|\alpha_{i}|=0$ for every $1\leq i\leq N$.
To prove $A_{N}=0$, we will use induction on $N$.
If $N=1$, then $A_{1}=0$ by (4.10). Let $N\geq 2$.
If $|{\lambda}^{(i)}|=0$ for every $1\leq i\leq N$, then $A_{N}=0$ by (4.9).
Assume $|{\lambda}^{(i_{0})}|\neq 0$ for some $1\leq i_{0}\leq N$.
Since $\sum_{i=1}^{N}|{\lambda}^{(i)}|=0$, we may further assume that $|{\lambda}^{(i_{0})}|<0$.
By (4.11), Lemma 2.4 (i) and (ii), and induction,
we conclude that $A_{N}=0$.
(ii) Note that (4.26) follows from (4.25) since
each integer $n_{i}^{\prime}$ appears $w_{i}$ times among the integers $\tilde{n}_{1},\ldots,\tilde{n}_{\tilde{\ell}}$.
In the following, we will prove (4.25).
To simplify the signs, we will assume that $|\alpha_{i}|$ is even for every $i$.
Since $A_{N}\neq 0$, we conclude from (i) that $\ell({\lambda}^{(i_{0})})=1$ for
some $1\leq i_{0}\leq N$. If ${\lambda}^{(i_{0})}=(-n_{0})$ for some $n_{0}>0$,
then by (4.11), $A_{N}$ is equal to
$${1\over 1-q^{n_{0}}}\,\sum_{r=1}^{i_{0}-1}{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=1%
}^{r-1}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}\cdot%
\left[{\mathfrak{a}_{{\lambda}^{(r)}}(\alpha_{r})\over{\lambda}^{(r)}!},%
\mathfrak{a}_{-n_{0}}(\alpha_{i_{0}})\right]\cdot\prod_{r+1\leq i\leq N,i\neq i%
_{0}}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}$$
$$+{q^{n_{0}}\over 1-q^{n_{0}}}\,\sum_{r=i_{0}+1}^{N}{\rm Tr}\,q^{\mathfrak{d}}%
\prod_{1\leq i\leq r-1,i\neq i_{0}}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})%
\over{\lambda}^{(i)}!}\cdot\left[{\mathfrak{a}_{{\lambda}^{(r)}}(\alpha_{r})%
\over{\lambda}^{(r)}!},\mathfrak{a}_{-n_{0}}(\alpha_{i_{0}})\right]\cdot\prod_%
{i=r+1}^{N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}.$$
Similarly, if ${\lambda}^{(i_{0})}=(n_{0})$ for some $n_{0}>0$, then $A_{N}$ is equal to
$${q^{n_{0}}\over 1-q^{n_{0}}}\,\sum_{r=1}^{i_{0}-1}{\rm Tr}\,q^{\mathfrak{d}}%
\prod_{i=1}^{r-1}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i%
)}!}\cdot\left[\mathfrak{a}_{n_{0}}(\alpha_{i_{0}}),{\mathfrak{a}_{{\lambda}^{%
(r)}}(\alpha_{r})\over{\lambda}^{(r)}!}\right]\cdot\prod_{r+1\leq i\leq N,i%
\neq i_{0}}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}$$
$$+{1\over 1-q^{n_{0}}}\,\sum_{r=i_{0}+1}^{N}{\rm Tr}\,q^{\mathfrak{d}}\prod_{1%
\leq i\leq r-1,i\neq i_{0}}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{%
\lambda}^{(i)}!}\cdot\left[\mathfrak{a}_{n_{0}}(\alpha_{i_{0}}),{\mathfrak{a}_%
{{\lambda}^{(r)}}(\alpha_{r})\over{\lambda}^{(r)}!}\right]\cdot\prod_{i=r+1}^{%
N}{\mathfrak{a}_{{\lambda}^{(i)}}(\alpha_{i})\over{\lambda}^{(i)}!}.$$
Note that $[{\mathfrak{a}_{{\lambda}^{(r)}}(\alpha_{r})/{\lambda}^{(r)}!},\mathfrak{a}_{-%
n_{0}}(\alpha_{i_{0}})]=(-n_{0})\mathfrak{a}_{{\lambda}^{(r)}-(n_{0})}(\alpha_%
{r}\alpha_{i_{0}})/\big{(}{\lambda}^{(r)}-(n_{0})\big{)}!$, and
$[\mathfrak{a}_{n_{0}}(\alpha_{i_{0}}),{\mathfrak{a}_{{\lambda}^{(r)}}(\alpha_{%
r})/{\lambda}^{(r)}!}]=(-n_{0})\mathfrak{a}_{{\lambda}^{(r)}-(-n_{0})}(\alpha_%
{i_{0}}\alpha_{r})/\big{(}{\lambda}^{(r)}-(-n_{0})\big{)}!$.
Therefore, by induction, $A_{N}$ is a linear combination of the expressions
(4.25). We remark that the negative parts (repeated with multiplicities)
in ${\lambda}^{(1)},\ldots,{\lambda}^{(N)}$ are $-\tilde{n}_{1},\ldots,-\tilde{n}_{\tilde{\ell}}$.
∎
Theorem 4.10.
For $1\leq i\leq N$, let $k_{i}\geq 0$ and $\alpha_{i}\in H^{*}(X)$ be homogeneous.
Assume that $e_{X}\alpha_{i}=0$ for every $1\leq i\leq N$. Then,
$$\displaystyle\widetilde{F}^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}%
(q)=(q;q)_{\infty}^{-\chi(X)}\cdot{\rm Coeff}_{z_{1}^{0}\cdots z_{N}^{0}}\left%
(\prod_{i=1}^{N}\Theta^{\alpha_{i}}_{k_{i}}(q,z_{i})\right)+W_{1,2},$$
(4.27)
and $(q;q)_{\infty}^{\chi(X)}\cdot W_{1,2}$ is a linear combination of the coefficients
of $z_{1}^{0}\cdots z_{N}^{0}$ in some multiple $q$-zeta values
(with variables $z_{1},\ldots,z_{N}$ inserted) of weights
$<\sum_{i=1}^{N}(k_{i}+2)$. Moreover, the coefficients in this linear combination
are independent of $q$.
Proof.
To simplify the signs, we will assume that $|\alpha_{i}|$ is even for every $i$.
Recall that $\widetilde{F}^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)$
is defined in (4.22), and that (4.27) is
just (4.24). From the proofs of (4.24)
and Theorem 4.5, we see that the lower weight term $W_{1,2}$
in (4.27) consists of the contributions of Case B
in the proof of Theorem 4.5 to the right-hand side of (4.22).
By (4.15) and Lemma 4.7,
up to a re-ordering of the set $\{1,\ldots,N\}$,
these contributions are of the following form, denoted by $C_{2,N-N_{1}}$:
$${\rm Coeff}_{z_{1}^{0}\cdots z_{N_{1}}^{0}}\left(\prod_{i=1}^{N_{1}}\Theta^{%
\alpha_{i}}_{k_{i}}(q,z_{i})\right)$$
$$\cdot(-1)^{N-N_{1}}\cdot\sum_{\ell({\lambda}^{(i)})=k_{i}+2,|{\lambda}^{(i)}|=%
0\atop N_{1}+1\leq i\leq N}\sum_{\tilde{\lambda}^{(i)}<{\lambda}^{(i)}\atop N_%
{1}+1\leq i\leq N}\prod_{N_{1}+1\leq i\leq N\atop n\geq 1}\left({(-1)^{p^{(i)}%
_{n}}\over p^{(i)}_{n}!}{q^{np^{(i)}_{n}}\over(1-q^{n})^{p^{(i)}_{n}}}{1\over%
\tilde{p}^{(i)}_{n}!}{1\over(1-q^{n})^{\tilde{p}^{(i)}_{n}}}\right)$$
$$\displaystyle\cdot{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=N_{1}+1}^{N}{\mathfrak{a}%
_{{\lambda}^{(i)}-\tilde{\lambda}^{(i)}}\big{(}(1_{X}-K_{X})^{\sum_{n\geq 1}p^%
{(i)}_{n}}\alpha_{i}\big{)}\over\big{(}{\lambda}^{(i)}-\tilde{\lambda}^{(i)}%
\big{)}!}$$
where $0\leq N_{1}<N$, we write $\tilde{\lambda}^{(i)}=\big{(}\cdots(-n)^{\tilde{p}^{(i)}_{n}}\cdots(-1)^{\tilde{p}^{(i)}_{1}}1^{p^{(i)}_{1}}\cdots n^{p^{(i)}_{n}}\cdots\big{)}$,
and $\sum_{i=N_{1}+1}^{N}|{\lambda}^{(i)}-\tilde{\lambda}^{(i)}|=0$.
To simplify notation, we may assume $N_{1}=0$. Put $\mu^{(i)}={\lambda}^{(i)}-\tilde{\lambda}^{(i)}$. Then $C_{2,N}$ is
$$\displaystyle(-1)^{N}\cdot\sum_{\ell(\tilde{\lambda}^{(i)})+\ell(\mu^{(i)})=k_%
{i}+2\atop{|\tilde{\lambda}^{(i)}|+|\mu^{(i)}|=0\atop 1\leq i\leq N}}\prod_{1%
\leq i\leq N\atop n\geq 1}\left({(-1)^{p^{(i)}_{n}}\over p^{(i)}_{n}!}{q^{np^{%
(i)}_{n}}\over(1-q^{n})^{p^{(i)}_{n}}}{1\over\tilde{p}^{(i)}_{n}!}{1\over(1-q^%
{n})^{\tilde{p}^{(i)}_{n}}}\right)$$
(4.28)
$$\displaystyle\cdot{\rm Tr}\,q^{\mathfrak{d}}\prod_{i=1}^{N}{\mathfrak{a}_{\mu^%
{(i)}}\big{(}(1_{X}-K_{X})^{\sum_{n\geq 1}p^{(i)}_{n}}\alpha_{i}\big{)}\over%
\mu^{(i)}!}$$
(4.29)
where $\mu^{(i)}\neq\emptyset$ for every $1\leq i\leq N$, and $\sum_{i=1}^{N}|\mu^{(i)}|=0$.
By Lemma 4.9 (ii), the trace on line (4.29)
is a linear combination of the expressions:
$$\displaystyle(q;q)_{\infty}^{-\chi(X)}\cdot\prod_{i=1}^{u}\left\langle 1_{X},%
\prod_{j\in\pi_{i}}\big{(}(1_{X}-K_{X})^{\sum_{n\geq 1}p^{(j)}_{n}}\alpha_{j}%
\big{)}\right\rangle\cdot\prod_{i=1}^{\ell}{(-n_{i}^{\prime})^{w_{i}}q^{n_{i}^%
{\prime}p_{i}}\over(1-q^{n_{i}^{\prime}})^{w_{i}}}$$
(4.30)
where $\sum_{i=1}^{\ell}w_{i}=\sum_{i=1}^{N}\ell(\mu^{(i)})/2$, $0\leq p_{i}\leq w_{i}$,
and the mutually distinct integers $n_{1}^{\prime},\ldots,n_{\ell}^{\prime}$ appear
$w_{1},\ldots,w_{\ell}$ times respectively as the positive parts
(repeated with multiplicities) of $\mu^{(1)},\ldots,\mu^{(N)}$
(so the negative parts, repeated with multiplicities,
of $\mu^{(1)},\ldots,\mu^{(N)}$ are $-n_{1}^{\prime},\ldots,-n_{\ell}^{\prime}$ with multiplicities
$w_{1},\ldots,w_{\ell}$ respectively).
We now fix the type of the $N$-tuple
$(\mu^{(1)},\ldots,\mu^{(N)})$. Define $\mathfrak{T}$ to be the set consisting of
all the $N$-tuples $(\tilde{\mu}^{(1)},\ldots,\tilde{\mu}^{(N)})$ obtained from
$(\mu^{(1)},\ldots,\mu^{(N)})$ as follows: take $\ell$ mutually distinct positive integers
$n_{1},\ldots,n_{\ell}$, and obtain $\tilde{\mu}^{(i)}$, $1\leq i\leq N$, from
$\mu^{(i)}$, $1\leq i\leq N$, by replacing every part $\pm n_{j}^{\prime}$ in $\mu^{(i)}$ by $\pm n_{j}$.
Denote the contribution of the type $\mathfrak{T}$ to $C_{2,N}$ by $C_{2,N}^{\mathfrak{T}}$.
Then, $C_{2,N}=\sum_{\mathfrak{T}}C_{2,N}^{\mathfrak{T}}$. Thus, to prove the statement
about $(q;q)_{\infty}^{\chi(X)}\cdot W_{1,2}$ in our theorem, it remains to
study $C_{2,N}^{\mathfrak{T}}$. For $1\leq i\leq N$, let $\ell_{i,+}$
(resp. $\ell_{i,-}$) be the sum of the multiplicities of the positive
(resp. negative) parts in $\mu^{(i)}$. Denote the parts (repeated with multiplicities)
of $\mu^{(i)}$ by $-n_{j_{i,1}}^{\prime},\ldots,-n_{j_{i,\ell_{i,-}}}^{\prime},n_{h_{i,1}}^{%
\prime},\ldots,n_{h_{i,\ell_{i,+}}}^{\prime}$. By the definition of $\mathfrak{T}$,
the following data are the same for every $N$-tuple
$(\tilde{\mu}^{(1)},\ldots,\tilde{\mu}^{(N)})\in\mathfrak{T}$:
$\bullet$
the indices $j_{i,1},\ldots,j_{i,\ell_{i,-}}$ $(1\leq i\leq N)$, up to re-ordering;
$\bullet$
the indices $h_{i,1},\ldots,h_{i,\ell_{i,+}}$ $(1\leq i\leq N)$, up to re-ordering;
$\bullet$
the partition $\{\pi_{1},\ldots,\pi_{u}\}$ of $\{1,\ldots,N\}$ and the integers $w_{i},p_{i}$ in (4.30);
$\bullet$
the coefficient of line (4.30) in the linear combination.
So by (4.28), (4.29) and (4.30),
$C_{2,N}^{\mathfrak{T}}$ is a linear combination of the expressions
$$\displaystyle(q;q)_{\infty}^{-\chi(X)}\sum_{n_{1},\ldots,n_{\ell}>0\atop n_{i}%
\neq n_{j}\text{\rm if }i\neq j}\prod_{i=1}^{\ell}{(-n_{i})^{w_{i}}q^{n_{i}p_{%
i}}\over(1-q^{n_{i}})^{w_{i}}}\sum_{\ell(\tilde{\lambda}^{(i)})=k_{i}+2-\ell_{%
i,+}-\ell_{i,-}\atop{|\tilde{\lambda}^{(i)}|=\sum_{r=1}^{\ell_{i,-}}n_{j_{i,r}%
}-\sum_{r=1}^{\ell_{i,+}}n_{h_{i,r}}\atop 1\leq i\leq N}}$$
$$\prod_{i=1}^{u}\left\langle 1_{X},\prod_{j\in\pi_{i}}\big{(}(1_{X}-K_{X})^{%
\sum_{n\geq 1}p^{(j)}_{n}}\alpha_{j}\big{)}\right\rangle\cdot\prod_{1\leq i%
\leq N\atop n\geq 1}\left({(-1)^{p^{(i)}_{n}}\over p^{(i)}_{n}!}{q^{np^{(i)}_{%
n}}\over(1-q^{n})^{p^{(i)}_{n}}}{1\over\tilde{p}^{(i)}_{n}!}{1\over(1-q^{n})^{%
\tilde{p}^{(i)}_{n}}}\right)$$
(we have moved the factor $(-1)^{N}$ into the coefficients of
the linear combination). Inserting the variables $z_{1},\ldots,z_{N}$,
we conclude that $(q;q)_{\infty}^{\chi(X)}\cdot C_{2,N}^{\mathfrak{T}}$ is
a linear combination of the coefficients of $z_{1}^{0}\cdots z_{N}^{0}$ in the expressions
$$\displaystyle\left(\sum_{n_{1},\ldots,n_{\ell}>0\atop n_{i}\neq n_{j}\text{\rm if }i\neq j}\prod_{i=1}^{\ell}{(-n_{i})^{w_{i}}q^{n_{i}p_{i}}\over(1-q^{n_{i}})^{w_{i}}}\cdot\prod_{1\leq i\leq N}z_{i}^{-\sum_{r=1}^{\ell_{i,-}}n_{j_{i,r}}+\sum_{r=1}^{\ell_{i,+}}n_{h_{i,r}}}\right)$$
(4.31)
$$\displaystyle\cdot\sum_{\ell(\tilde{\lambda}^{(i)})=k_{i}+2-\ell_{i,+}-\ell_{i%
,-}\atop 1\leq i\leq N}\prod_{i=1}^{u}\left\langle 1_{X},\prod_{j\in\pi_{i}}%
\big{(}(1_{X}-K_{X})^{\sum_{n\geq 1}p^{(j)}_{n}}\alpha_{j}\big{)}\right\rangle$$
(4.32)
$$\displaystyle\cdot\prod_{1\leq i\leq N\atop n\geq 1}\left({(-1)^{p^{(i)}_{n}}%
\over p^{(i)}_{n}!}{(qz_{i})^{np^{(i)}_{n}}\over(1-q^{n})^{p^{(i)}_{n}}}{1%
\over\tilde{p}^{(i)}_{n}!}{z_{i}^{-n\tilde{p}^{(i)}_{n}}\over(1-q^{n})^{\tilde%
{p}^{(i)}_{n}}}\right).$$
(4.33)
We claim that line (4.31) is the sum of $\ell!$ multiple $q$-zeta values
of weight $\sum_{i=1}^{\ell}w_{i}$. Indeed, the sum of the terms with
$n_{1}>\cdots>n_{\ell}$ in line (4.31) is equal to:
$$\displaystyle\sum_{n_{1}>\cdots>n_{\ell}}\prod_{i=1}^{\ell}{(-n_{i})^{w_{i}}q^%
{n_{i}p_{i}}\over(1-q^{n_{i}})^{w_{i}}}\cdot\prod_{1\leq i\leq N}z_{i}^{-\sum_%
{r=1}^{\ell_{i,-}}n_{j_{i,r}}+\sum_{r=1}^{\ell_{i,+}}n_{h_{i,r}}}$$
(4.34)
$$\displaystyle=$$
$$\displaystyle\sum_{n_{1}>\cdots>n_{\ell}}\prod_{i=1}^{\ell}{(-n_{i})^{w_{i}}q^%
{n_{i}p_{i}}f_{i}(z_{1},\ldots,z_{N})^{n_{i}}\over(1-q^{n_{i}})^{w_{i}}}$$
where each $f_{i}(z_{1},\ldots,z_{N})$ is a suitable monomial in $z_{1}^{\pm 1},\ldots,z_{N}^{\pm 1}$.
So line (4.31) is the sum of the following $\ell!$ multiple $q$-zeta values:
$$\displaystyle\sum_{n_{1}>\cdots>n_{\ell}}\prod_{i=1}^{\ell}{(-n_{i})^{w_{%
\sigma(i)}}q^{n_{i}p_{\sigma(i)}}f_{\sigma(i)}(z_{1},\ldots,z_{N})^{n_{i}}%
\over(1-q^{n_{i}})^{w_{\sigma(i)}}}$$
(4.35)
where $\sigma$ runs over the symmetric group $S_{\ell}$. Furthermore,
as in the proof of Lemma 4.7, the product of lines (4.32)
and (4.33) is equal to
$$\displaystyle\sum_{a_{i},b_{i}\geq 0;s_{1}^{(i)},\ldots,s_{a_{i}}^{(i)},t_{1}^%
{(i)},\ldots,t_{b_{i}}^{(i)}\geq 1\atop{\sum_{r=1}^{a_{i}}s_{r}^{(i)}+\sum_{r=%
1}^{b_{i}}t_{r}^{(i)}=k_{i}+2-\ell_{i,+}-\ell_{i,-}\atop 1\leq i\leq N}}\prod_%
{i=1}^{u}\left\langle 1_{X},\prod_{j\in\pi_{i}}\big{(}(1_{X}-K_{X})^{\sum_{r=1%
}^{a_{j}}s_{r}^{(j)}}\alpha_{j}\big{)}\right\rangle\cdot\prod_{1\leq r\leq a_{%
i}\atop 1\leq i\leq N}{(-1)^{s_{r}^{(i)}}\over s_{r}^{(i)}!}$$
$$\displaystyle\cdot\prod_{1\leq r\leq b_{i}\atop 1\leq i\leq N}{1\over t_{r}^{(%
i)}!}\cdot\prod_{i=1}^{N}\left(\sum_{n_{1}>\cdots>n_{a_{i}}}\prod_{r=1}^{a_{i}%
}{(qz_{i})^{n_{r}s_{r}^{(i)}}\over(1-q^{n_{r}})^{s_{r}^{(i)}}}\cdot\sum_{m_{1}%
>\cdots>m_{b_{i}}}\prod_{r=1}^{b_{i}}{z_{i}^{-m_{r}t_{r}^{(i)}}\over(1-q^{m_{r%
}})^{t_{r}^{(i)}}}\right).$$
Combining with lines (4.31) and (4.35),
we see that $(q;q)_{\infty}^{\chi(X)}\cdot C_{2,N}^{\mathfrak{T}}$
is a linear combination of the coefficients of $z_{1}^{0}\cdots z_{N}^{0}$ in
some multiple $q$-zeta values of weights
$$\displaystyle w$$
$$\displaystyle:=$$
$$\displaystyle\sum_{i=1}^{\ell}w_{i}+\sum_{i=1}^{N}\left(\sum_{r=1}^{a_{i}}s_{r%
}^{(i)}+\sum_{r=1}^{b_{i}}t_{r}^{(i)}\right)$$
$$\displaystyle=$$
$$\displaystyle\sum_{i=1}^{\ell}w_{i}+\sum_{i=1}^{N}(k_{i}+2-\ell_{i,+}-\ell_{i,%
-}).$$
Note from (4.30) that $\sum_{i=1}^{\ell}w_{i}=\sum_{i=1}^{N}\ell(\mu^{(i)})/2=\sum_{i=1}^{N}(\ell_{i,%
+}+\ell_{i,-})/2$.
So we have $w<\sum_{i=1}^{N}(k_{i}+2)$. This completes the proof of our theorem.
∎
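The passage from line (4.31) to the $\ell!$ ordered sums (4.35) uses only the elementary fact that a sum over tuples of mutually distinct indices splits into $\ell!$ sums over strictly decreasing tuples, one for each permutation of the factors. A minimal numerical sketch of this identity (the sample factors $g_{i}$ and the cutoff $B$ are our own illustrative choices, not data from the proof):

```python
from fractions import Fraction
from itertools import combinations, permutations, product

B = 6  # cutoff for the index range 1..B
# sample factors g_1, g_2, g_3, standing in for the summands of line (4.31)
g = [lambda n: Fraction(n * n),
     lambda n: Fraction(2) ** n,
     lambda n: Fraction(1, n)]
l = len(g)

# left side: sum over tuples (n_1,...,n_l) with mutually distinct entries
lhs = sum(
    g[0](t[0]) * g[1](t[1]) * g[2](t[2])
    for t in product(range(1, B + 1), repeat=l)
    if len(set(t)) == l
)

# right side: for each permutation sigma, sum over n_1 > ... > n_l
rhs = Fraction(0)
for sigma in permutations(range(l)):
    for inc in combinations(range(1, B + 1), l):
        n = inc[::-1]  # strictly decreasing tuple n_1 > ... > n_l
        term = Fraction(1)
        for i in range(l):
            term *= g[sigma[i]](n[i])
        rhs += term

assert lhs == rhs
```

Each distinct tuple has a unique decreasing rearrangement, and the permutation $\sigma$ records which factor travels to which slot, exactly as in (4.35).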
We will end this section with three propositions about $F^{\alpha}_{k}(q)$,
which provide some insight into the lower weight term $W_{1}$ in Theorem 4.8.
Proposition 4.11 deals with $F^{\alpha}_{0}(q)$ for an arbitrary $\alpha\in H^{*}(X)$.
Proposition 4.13 calculates $F^{\alpha}_{1}(q)$ by assuming $e_{X}\alpha=0$.
Proposition 4.14 computes $F^{\alpha}_{k}(q),k\geq 2$ by assuming
$e_{X}\alpha=K_{X}\alpha=0$.
Proposition 4.11.
The generating series $F^{\alpha}_{0}(q)$ is equal to
$$\displaystyle(q;q)_{\infty}^{-\chi(X)}\cdot\langle 1_{X}-K_{X},\alpha\rangle%
\cdot\sum_{n}{q^{n}\over(1-q^{n})^{2}}+(q;q)_{\infty}^{-\chi(X)}\cdot\langle e%
_{X},\alpha\rangle\cdot\sum_{n}{nq^{n}\over 1-q^{n}}.$$
(4.36)
Proof.
By Lemma 3.2, $F_{k}^{\alpha}(q)={\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\mathfrak%
{G}_{k}(\alpha)$.
By Theorem 2.3, we have
$\mathfrak{G}_{0}(\alpha)=-\sum_{n>0}(\mathfrak{a}_{-n}\mathfrak{a}_{n})(\alpha)$.
Now (4.36) follows from Remark 4.6.
∎
Remark 4.12.
By (4.36), $\displaystyle{F^{1_{X}}_{0}(q)=(q;q)_{\infty}^{-\chi(X)}\cdot\chi(X)\cdot\sum_%
{n}{nq^{n}\over 1-q^{n}}=q{{\rm d}\over{\rm d}q}(q;q)_{\infty}^{-\chi(X)}}$.
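The identity in Remark 4.12 is the Lambert-series formula $q\frac{{\rm d}}{{\rm d}q}\log(q;q)_{\infty}^{-1}=\sum_{n}\frac{nq^{n}}{1-q^{n}}$ in disguise. A small sketch verifying it on truncated power series with exact rational arithmetic (the sample value $\chi(X)=3$ and the precision $M$ are our own choices):

```python
from fractions import Fraction

M = 20  # all series are truncated modulo q^M

def mul(p, r):
    out = [Fraction(0)] * M
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(r):
                if b and i + j < M:
                    out[i + j] += a * b
    return out

def inv(p):
    # inverse of a power series with constant term 1
    out = [Fraction(0)] * M
    out[0] = Fraction(1)
    for i in range(1, M):
        out[i] = -sum(p[j] * out[i - j] for j in range(1, i + 1))
    return out

chi = 3  # sample Euler characteristic (illustrative choice)

# (q;q)_infty = prod_{n >= 1} (1 - q^n), truncated
P = [Fraction(1)] + [Fraction(0)] * (M - 1)
for n in range(1, M):
    f = [Fraction(0)] * M
    f[0], f[n] = Fraction(1), Fraction(-1)
    P = mul(P, f)

Pinv = inv(P)
E = [Fraction(1)] + [Fraction(0)] * (M - 1)
for _ in range(chi):
    E = mul(E, Pinv)  # E = (q;q)_infty^{-chi}

# Lambert series: sum_n n q^n/(1-q^n) = sum_m sigma_1(m) q^m
L = [Fraction(0)] * M
for n in range(1, M):
    for j in range(n, M, n):
        L[j] += Fraction(n)

lhs = [chi * c for c in mul(E, L)]             # chi * (q;q)^{-chi} * Lambert series
rhs = [Fraction(i) * E[i] for i in range(M)]   # q d/dq (q;q)^{-chi}
assert lhs == rhs
```

The two sides agree coefficient by coefficient up to $q^{M-1}$, which is what the formal identity asserts degree by degree.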
Proposition 4.13.
Let $\alpha\in H^{*}(X)$ be a homogeneous class satisfying $e_{X}\alpha=0$. Then,
the generating series $F_{1}^{\alpha}(q)$ is the coefficient of $z^{0}$ in
$$(q;q)_{\infty}^{-\chi(X)}\cdot{\langle K_{X}-K_{X}^{2},\alpha\rangle\over 2}\cdot$$
$$\cdot\left(\sum_{n}{(n-1)q^{n}\over(1-q^{n})^{2}}+\sum_{n}{(qz)^{n}\over 1-q^{%
n}}\cdot\left(\sum_{m}{z^{-2m}\over(1-q^{m})^{2}}+2\sum_{m_{1}>m_{2}}{z^{-m_{1%
}}\over 1-q^{m_{1}}}{z^{-m_{2}}\over 1-q^{m_{2}}}\right)\right).$$
Proof.
We have $F_{1}^{\alpha}(q)={\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,\mathfrak%
{G}_{1}(\alpha)$.
It is known that
$$\displaystyle\mathfrak{G}_{1}(\alpha)=-\sum_{\ell(\lambda)=3,|\lambda|=0}{%
\mathfrak{a}_{\lambda}(\alpha)\over\lambda!}-\sum_{n>0}\frac{n-1}{2}(\mathfrak%
{a}_{-n}\mathfrak{a}_{n})(K_{X}\alpha).$$
(4.37)
Applying Remark 4.6 to $\displaystyle{-\sum_{n>0}\frac{n-1}{2}{\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{%
L}_{1},z)\,(\mathfrak{a}_{-n}\mathfrak{a}_{n})(K_{X}\alpha)}$ yields the weight-$2$ terms in our proposition.
Again by Remark 4.6, the trace $\displaystyle{{\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_{1},z)\,{\mathfrak{a}%
_{\lambda}(\alpha)\over\lambda!}}$ with $\ell(\lambda)=3$ and $|\lambda|=0$
contains only weight-$3$ terms (i.e., does not contain lower weight terms).
So the proof of Theorem 4.8 shows that
$$-\sum_{\ell(\lambda)=3,|\lambda|=0}{\rm Tr}\,q^{\mathfrak{d}}\,W(\mathfrak{L}_%
{1},z)\,{\mathfrak{a}_{\lambda}(\alpha)\over\lambda!}=(q;q)_{\infty}^{-\chi(X)%
}\cdot{\rm Coeff}_{z^{0}}\Theta^{\alpha}_{1}(q,z).$$
Expanding ${\rm Coeff}_{z^{0}}\Theta^{\alpha}_{1}(q,z)$ yields the weight-$3$ terms in our proposition.
∎
Proposition 4.14.
Let $\alpha\in H^{*}(X)$ be homogeneous satisfying $K_{X}\alpha=e_{X}\alpha=0$.
(i)
If $|\alpha|<4$, then $F^{\alpha}_{k}(q)=0$ for every $k\geq 0$;
(ii)
Let $|\alpha|=4$ and $k\geq 0$.
Then, $F^{\alpha}_{k}(q)$ is the coefficient of $z^{0}$ in
$$-(q;q)_{\infty}^{-\chi(X)}\cdot\langle 1_{X},\alpha\rangle\cdot\sum_{a,s_{1},\ldots,s_{a},b,t_{1},\ldots,t_{b}\geq 1\atop\sum_{i=1}^{a}s_{i}+\sum_{j=1}^{b}t_{j}=k+2}\prod_{i=1}^{a}{(-1)^{s_{i}}\over s_{i}!}\cdot\prod_{j=1}^{b}{1\over t_{j}!}$$
$$\displaystyle\cdot\sum_{n_{1}>\cdots>n_{a}}\prod_{i=1}^{a}{(qz)^{n_{i}s_{i}}%
\over(1-q^{n_{i}})^{s_{i}}}\cdot\sum_{m_{1}>\cdots>m_{b}}\prod_{j=1}^{b}{z^{-m%
_{j}t_{j}}\over(1-q^{m_{j}})^{t_{j}}}.$$
(4.38)
In particular, if $2\nmid k$, then $F^{\alpha}_{k}(q)=0$.
Proof.
Since $K_{X}\alpha=e_{X}\alpha=0$, we conclude from Theorem 2.3 that
$$\displaystyle\mathfrak{G}_{k}(\alpha)=-\sum_{\ell(\lambda)=k+2,|\lambda|=0}{%
\mathfrak{a}_{\lambda}(\alpha)\over\lambda!}.$$
As in the proof of Proposition 4.13,
Remark 4.6 and the proof of Theorem 4.8 yield
$$\displaystyle F^{\alpha}_{k}(q)=(q;q)_{\infty}^{-\chi(X)}\cdot{\rm Coeff}_{z^{%
0}}\Theta^{\alpha}_{k}(q,z).$$
By the definition of $\Theta^{\alpha}_{k}(q,z)$ in (4.17),
we see that (i) holds and that our formula for $F^{\alpha}_{k}(q)$ with
$|\alpha|=4$ and $k\geq 0$ holds.
Note that line (4.38) can be rewritten as
$$\sum_{n_{1}>\cdots>n_{a}}\prod_{i=1}^{a}{(qz^{2})^{n_{i}s_{i}/2}\over(1-q^{n_{%
i}})^{s_{i}}}\cdot\sum_{m_{1}>\cdots>m_{b}}\prod_{j=1}^{b}{(qz^{-2})^{m_{j}t_{%
j}/2}\over(1-q^{m_{j}})^{t_{j}}}.$$
Therefore, if $|\alpha|=4$ and $2\nmid k$, then interchanging the roles of $a,s_{1},\ldots,s_{a}$
and of $b,t_{1},\ldots,t_{b}$ in the above formula for $F^{\alpha}_{k}(q)$ multiplies each summand
by $(-1)^{\sum_{i=1}^{a}s_{i}+\sum_{j=1}^{b}t_{j}}=(-1)^{k+2}=-1$; so the summands cancel in
pairs and $F^{\alpha}_{k}(q)=0$.
∎
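The parity argument above can be tested numerically: truncating the double sum from Proposition 4.14 (ii) (with the prefactor $-(q;q)_{\infty}^{-\chi(X)}\cdot\langle 1_{X},\alpha\rangle$ stripped off), the $z^{0}$-coefficient cancels identically for odd $k$. A sketch; the $q$-adic precision $M$ is our own choice:

```python
from fractions import Fraction
from itertools import combinations
from math import comb, factorial

M = 10  # q-adic precision: all series are truncated modulo q^M

def inv_pow(n, s):
    # series of 1/(1-q^n)^s modulo q^M
    c = [Fraction(0)] * M
    j = 0
    while n * j < M:
        c[n * j] = Fraction(comb(s - 1 + j, j))
        j += 1
    return c

def mul(p, r):
    out = [Fraction(0)] * M
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(r):
                if b and i + j < M:
                    out[i + j] += a * b
    return out

def compositions(total):
    if total == 0:
        yield ()
    for first in range(1, total + 1):
        for rest in compositions(total - first):
            yield (first,) + rest

def double_sum_z0(k):
    # z^0-coefficient of the double sum in Proposition 4.14 (ii),
    # without the prefactor; terms of q-order >= M are dropped
    total = [Fraction(0)] * M
    for ssum in range(1, k + 2):
        for s in compositions(ssum):
            for t in compositions(k + 2 - ssum):
                for ns in combinations(range(1, M), len(s)):
                    n = ns[::-1]  # n_1 > ... > n_a
                    D = sum(x * y for x, y in zip(n, s))
                    if D >= M:
                        continue
                    for ms in combinations(range(1, M), len(t)):
                        m = ms[::-1]  # m_1 > ... > m_b
                        if sum(x * y for x, y in zip(m, t)) != D:
                            continue  # Coeff_{z^0} forces equal z-exponents
                        c = Fraction((-1) ** ssum)
                        for v in s + t:
                            c /= factorial(v)
                        term = [Fraction(0)] * M
                        term[D] = c  # q^D from the (qz)^{n_i s_i} factors
                        for x, y in zip(n + m, s + t):
                            term = mul(term, inv_pow(x, y))
                        total = [u + v for u, v in zip(total, term)]
    return total

assert all(c == 0 for c in double_sum_z0(1))   # odd k: exact cancellation
assert double_sum_z0(2)[2] == Fraction(1, 4)   # even k: nonzero series
```

For odd $k$ the swap $(s_{1},\ldots,s_{a};n)\leftrightarrow(t_{1},\ldots,t_{b};m)$ pairs each term with its negative, so the truncated series vanishes exactly, not merely to high order.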
5. The reduced series $\big{\langle}{\rm ch}_{k_{1}}^{L_{1}}\cdots{\rm ch}_{k_{N}}^{L_{N}}\big{%
\rangle}^{\prime}$
In this section, we will prove Conjecture 1.1 modulo the lower weight term.
Moreover, for abelian surfaces, we will verify Conjecture 1.1.
Let $L$ be a line bundle on the smooth projective surface $X$.
It induces the tautological rank-$n$ bundle $L^{[n]}$ over the Hilbert scheme ${X^{[n]}}$:
$$L^{[n]}=p_{1*}\big{(}p_{2}^{*}L|_{\mathcal{Z}_{n}}\big{)}$$
where $\mathcal{Z}_{n}$ is the universal codimension-$2$ subscheme of ${X^{[n]}}\times X$,
and $p_{1}$ and $p_{2}$ are the projections of ${X^{[n]}}\times X$ to ${X^{[n]}}$ and $X$ respectively.
By the Grothendieck-Riemann-Roch Theorem and (2.1), we obtain
$$\displaystyle{\rm ch}(L^{[n]})$$
$$\displaystyle=$$
$$\displaystyle p_{1*}({\rm ch}({\mathcal{O}}_{{\mathcal{Z}}_{n}})\cdot p_{2}^{*%
}{\rm ch}(L)\cdot p_{2}^{*}{\rm td}(X))$$
(5.1)
$$\displaystyle=$$
$$\displaystyle G(1_{X},n)+G(L,n)+G(L^{2}/2,n).$$
Since the cohomology degree of $G_{i}(\alpha,n)$ is $2i+|\alpha|$, we have
$$\displaystyle{\rm ch}_{k}(L^{[n]})=G_{k}(1_{X},n)+G_{k-1}(L,n)+G_{k-2}(L^{2}/2%
,n).$$
(5.2)
Following Okounkov [Oko], we have defined the generating series
$\big{\langle}{\rm ch}_{k_{1}}^{L_{1}}\cdots{\rm ch}_{k_{N}}^{L_{N}}\big{\rangle}$
and its reduced version $\big{\langle}{\rm ch}_{k_{1}}^{L_{1}}\cdots{\rm ch}_{k_{N}}^{L_{N}}\big{%
\rangle}^{\prime}$
in (1.1) and (1.2) respectively.
Theorem 5.1.
Let $L_{1},\ldots,L_{N}$ be line bundles over $X$, and $k_{1},\ldots,k_{N}\geq 0$. Then,
$$\displaystyle\big{\langle}{\rm ch}_{k_{1}}^{L_{1}}\cdots{\rm ch}_{k_{N}}^{L_{N%
}}\big{\rangle}^{\prime}={\rm Coeff}_{z_{1}^{0}\cdots z_{N}^{0}}\left(\prod_{i%
=1}^{N}\Theta^{1_{X}}_{k_{i}}(q,z_{i})\right)+W,$$
(5.3)
and the lower weight term $W$ is an infinite linear combination of the expressions:
$$\displaystyle\prod_{i=1}^{u}\left\langle K_{X}^{r_{i}}e_{X}^{r_{i}^{\prime}},L%
_{1}^{\ell_{i,1}}\cdots L_{N}^{\ell_{i,N}}\right\rangle\cdot\prod_{i=1}^{v}{q^%
{n_{i}w_{i}p_{i}}\over(1-q^{n_{i}})^{w_{i}}}$$
where $\sum_{i=1}^{v}w_{i}<\sum_{i=1}^{N}(k_{i}+2)$,
and the integers $u,v$, $r_{i},r_{i}^{\prime},\ell_{i,j}\geq 0,n_{i}>0,w_{i}>0,p_{i}\in\{0,1\}$ depend only on $k_{1},\ldots,k_{N}$.
Furthermore, all the coefficients of this linear combination are
independent of $q,L_{1},\ldots,L_{N}$ and $X$.
Proof.
We conclude from (1.1), (1.2), (5.2)
and (2.2) that
$$\displaystyle\big{\langle}{\rm ch}_{k_{1}}^{L_{1}}\cdots{\rm ch}_{k_{N}}^{L_{N}}\big{\rangle}^{\prime}=(q;q)_{\infty}^{\chi(X)}\cdot F^{1_{X},\ldots,1_{X}}_{k_{1},\ldots,k_{N}}(q)+(q;q)_{\infty}^{\chi(X)}\cdot A$$
where $A$ is the sum of the series $F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1}^{\prime},\ldots,k_{N}^{\prime}}$
such that for every $1\leq i\leq N$,
$$(\alpha_{i},k_{i}^{\prime})\in\{(1_{X},k_{i}),(L_{i},k_{i}-1),(L_{i}^{2}/2,k_{%
i}-2)\},$$
and $\sum_{i=1}^{N}k_{i}^{\prime}<\sum_{i=1}^{N}k_{i}$.
Now our result follows from Theorem 4.8.
∎
Theorem 5.2.
Let $L_{1},\ldots,L_{N}$ be line bundles over an abelian surface $X$,
and $k_{1},\ldots,k_{N}\geq 0$. Then, the lower weight term $W$ in (5.3)
is a linear combination of the coefficients of $z_{1}^{0}\cdots z_{N}^{0}$ in
some multiple $q$-zeta values (with additional variables $z_{1},\ldots,z_{N}$ inserted)
of weights $<\sum_{i=1}^{N}(k_{i}+2)$.
Moreover, the coefficients in this linear combination are independent of $q$.
Proof.
Since $e_{X}=K_{X}=0$,
$F^{\tilde{\alpha}_{1},\ldots,\tilde{\alpha}_{N}}_{\tilde{k}_{1},\ldots,\tilde{%
k}_{N}}=\widetilde{F}^{\tilde{\alpha}_{1},\ldots,\tilde{\alpha}_{N}}_{\tilde{k%
}_{1},\ldots,\tilde{k}_{N}}$
by Lemma 3.2, Theorem 2.3 and (4.22).
By Theorem 4.10 and the proof of Theorem 5.1,
our theorem follows.
∎
Our next two propositions compute the series $\big{\langle}{\rm ch}_{k}^{L}\big{\rangle}$ completely,
and should offer some insight into the lower weight term $W$ in Theorem 5.1.
Proposition 5.3 calculates $\big{\langle}{\rm ch}_{1}^{L}\big{\rangle}$
by assuming $e_{X}=0$, while Proposition 5.4 deals with the series
$\big{\langle}{\rm ch}_{k}^{L}\big{\rangle},k\geq 2$ by assuming $e_{X}=K_{X}=0$
(i.e., by assuming that $X$ is an abelian surface).
Note from (5.2) that when $\chi(X)=0$, we have
$$\displaystyle\big{\langle}{\rm ch}_{k}^{L}\big{\rangle}^{\prime}=\big{\langle}%
{\rm ch}_{k}^{L}\big{\rangle}=F^{1_{X}}_{k}(q)+F^{L}_{k-1}(q)+{1\over 2}\cdot F%
^{L^{2}}_{k-2}(q).$$
(5.4)
Proposition 5.3.
Let $L$ be a line bundle over a smooth projective surface $X$ with $e_{X}=0$. Then,
the series $\big{\langle}{\rm ch}_{1}^{L}\big{\rangle}$ is the coefficient of $z^{0}$ in
$$-\langle K_{X},L\rangle\cdot\sum_{n}{q^{n}\over(1-q^{n})^{2}}-{\langle K_{X},K%
_{X}\rangle\over 2}\cdot\sum_{n}{(n-1)q^{n}\over(1-q^{n})^{2}}$$
$$-{\langle K_{X},K_{X}\rangle\over 2}\cdot\sum_{n}{(qz)^{n}\over 1-q^{n}}\cdot%
\left(\sum_{m}{z^{-2m}\over(1-q^{m})^{2}}+2\sum_{m_{1}>m_{2}}{z^{-m_{1}}\over 1%
-q^{m_{1}}}{z^{-m_{2}}\over 1-q^{m_{2}}}\right).$$
Proof.
Our formula follows from (5.4), (4.36)
and Proposition 4.13.
∎
Proposition 5.4.
Let $L$ be a line bundle over an abelian surface $X$. If $2\nmid k$,
then $\big{\langle}{\rm ch}_{k}^{L}\big{\rangle}^{\prime}=0$. If $2\mid k$, the generating series
$\big{\langle}{\rm ch}_{k}^{L}\big{\rangle}^{\prime}$ is the coefficient of $z^{0}$ in
$$-{\langle L,L\rangle\over 2}\cdot\sum_{a,s_{1},\ldots,s_{a},b,t_{1},\ldots,t_{%
b}\geq 1\atop\sum_{i=1}^{a}s_{i}+\sum_{j=1}^{b}t_{j}=k}\prod_{i=1}^{a}{(-1)^{s%
_{i}}\over s_{i}!}\cdot\prod_{j=1}^{b}{1\over t_{j}!}$$
$$\displaystyle\cdot\sum_{n_{1}>\cdots>n_{a}}\prod_{i=1}^{a}{(qz)^{n_{i}s_{i}}%
\over(1-q^{n_{i}})^{s_{i}}}\cdot\sum_{m_{1}>\cdots>m_{b}}\prod_{j=1}^{b}{z^{-m%
_{j}t_{j}}\over(1-q^{m_{j}})^{t_{j}}}.$$
Proof.
Follows immediately from (5.4) and Proposition 4.14.
∎
6. Applications to the universal constants in
$\displaystyle{\sum_{n}c\big{(}T_{X^{[n]}}\big{)}}\,q^{n}$
Let $x\in H^{4}(X)$ be the cohomology class of a point in the surface $X$.
In this section, we will compute $F^{x,\ldots,x}_{k_{1},\ldots,k_{N}}(q)$ in terms of
the universal constants in the expression of
$\displaystyle{\sum_{n}c\big{(}T_{X^{[n]}}\big{)}}\,q^{n}$ formulated
in [Boi, BN]. Comparing with Proposition 4.14 (ii) enables us
to determine some of these universal constants.
Let $C_{i}={2i\choose i}/(i+1)$ denote the $i$-th Catalan number, and put
$\sigma_{1}(i)=\sum_{j|i}j$.
Recall that $\mathcal{P}=\widetilde{\mathcal{P}}_{+}$ is the set of all the usual partitions.
The following lemma is from [Boi, BN].
Lemma 6.1.
There exist unique rational numbers $b_{\mu},f_{\mu},g_{\mu},h_{\mu}$
depending only on the partitions $\mu\in\mathcal{P}$ such that
$\displaystyle{\sum_{n}c\big{(}T_{X^{[n]}}\big{)}}\,q^{n}$ is equal to
$$\exp\left(\sum_{\mu\in\mathcal{P}}q^{|\mu|}\Big{(}b_{\mu}\mathfrak{a}_{-\mu}(1%
_{X})+f_{\mu}\mathfrak{a}_{-\mu}(e_{X})+g_{\mu}\mathfrak{a}_{-\mu}(K_{X})+h_{%
\mu}\mathfrak{a}_{-\mu}(K_{X}^{2})\Big{)}\right)|0\rangle.$$
In addition, for $i\geq 1$, we have $b_{2i}=0$, $b_{2i-1}=(-1)^{i-1}C_{i-1}/(2i-1)$,
$b_{(1^{i})}=f_{(1^{i})}=-g_{(1^{i})}=\sigma_{1}(i)/i$, and $h_{(1^{i})}=0$.
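As a quick sanity check (our own illustration, not part of the original sources), the closed forms in Lemma 6.1 are easy to tabulate. The following Python sketch computes the Catalan numbers $C_{i}$, the divisor sums $\sigma_{1}(i)$, and the resulting constants $b_{2i-1}$ and $b_{(1^{i})}$ as exact fractions; all function names here are ours.

```python
from fractions import Fraction
from math import comb

def catalan(i):
    # C_i = binom(2i, i) / (i + 1); the division is always exact
    return comb(2 * i, i) // (i + 1)

def sigma1(i):
    # sigma_1(i) = sum of the divisors of i
    return sum(j for j in range(1, i + 1) if i % j == 0)

def b_odd(i):
    # b_{2i-1} = (-1)^{i-1} C_{i-1} / (2i-1)  (while b_{2i} = 0)
    return Fraction((-1) ** (i - 1) * catalan(i - 1), 2 * i - 1)

def b_ones(i):
    # b_{(1^i)} = f_{(1^i)} = -g_{(1^i)} = sigma_1(i) / i
    return Fraction(sigma1(i), i)
```

For instance, `b_odd(1)`, `b_odd(2)`, `b_odd(3)` return $1$, $-1/3$, $2/5$, matching $b_{1},b_{3},b_{5}$ above.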
Our goal is to compute $F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)$
in terms of the universal constants $b_{\mu},f_{\mu},g_{\mu}$ and $h_{\mu}$.
Using the definition of the operators $\mathfrak{G}_{k_{i}}(\alpha_{i})$, we see that
$$\displaystyle F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)$$
$$\displaystyle=$$
$$\displaystyle\sum_{n}q^{n}\left\langle\left(\prod_{i=1}^{N}G_{k_{i}}(\alpha_{i},n)\right)c\big{(}T_{X^{[n]}}\big{)},1_{X^{[n]}}\right\rangle$$
(6.1)
$$\displaystyle=$$
$$\displaystyle\sum_{n}q^{n}\left\langle\left(\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(\alpha_{i})\right)c\big{(}T_{X^{[n]}}\big{)},1_{X^{[n]}}\right\rangle$$
$$\displaystyle=$$
$$\displaystyle\left\langle\left(\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(\alpha_{i})\right)\sum_{n}c\big{(}T_{X^{[n]}}\big{)}q^{n},|1\rangle\right\rangle$$
where we have put $\displaystyle{|1\rangle=\sum_{n}1_{X^{[n]}}=\exp{\big{(}\mathfrak{a}_{-1}(1_{X})\big{)}}\cdot|0\rangle}$.
Lemma 6.2.
Let $w\in{\mathbb{H}}_{X}$, and let $\mathfrak{G}$ be a (possibly infinite) sum of monomials of Heisenberg creation operators.
Then, $\big{\langle}\mathfrak{G}w,|1\rangle\big{\rangle}=\big{\langle}\mathfrak{G}|0\rangle,|1\rangle\big{\rangle}\cdot\big{\langle}w,|1\rangle\big{\rangle}$.
Proof.
By linearity, it suffices to prove that
$$\displaystyle\left\langle\prod_{i=1}^{s}\mathfrak{a}_{-n_{i}}(\alpha_{i})\cdot\prod_{j=1}^{t}\mathfrak{a}_{-m_{j}}(\beta_{j})|0\rangle,|1\rangle\right\rangle=\left\langle\prod_{i=1}^{s}\mathfrak{a}_{-n_{i}}(\alpha_{i})|0\rangle,|1\rangle\right\rangle\cdot\left\langle\prod_{j=1}^{t}\mathfrak{a}_{-m_{j}}(\beta_{j})|0\rangle,|1\rangle\right\rangle$$
where $n_{1},\ldots,n_{s},m_{1},\ldots,m_{t}>0$,
and $\alpha_{1},\ldots,\alpha_{s},\beta_{1},\ldots,\beta_{t}$ are homogeneous.
Indeed, if $\mathfrak{a}_{-n_{i}}(\alpha_{i})\not\in\mathbb{C}\,\mathfrak{a}_{-1}(x)$ for some $i$ or
if $\mathfrak{a}_{-m_{j}}(\beta_{j})\not\in\mathbb{C}\,\mathfrak{a}_{-1}(x)$ for some $j$,
then both sides are equal to $0$. Otherwise, letting $\mathfrak{a}_{-n_{i}}(\alpha_{i})=u_{i}\mathfrak{a}_{-1}(x)$ for every $i$ and $\mathfrak{a}_{-m_{j}}(\beta_{j})=v_{j}\mathfrak{a}_{-1}(x)$ for every $j$,
we see that both sides are $u_{1}\cdots u_{s}v_{1}\cdots v_{t}$.
∎
Lemma 6.3.
Let $b_{(1^{j})}$ and $b_{(i,1^{j})}$ be from Lemma 6.1.
Let $\tilde{b}_{(i,1^{j})}=(j+1)b_{(1^{j+1})}$ if $i=1$, and $\tilde{b}_{(i,1^{j})}=b_{(i,1^{j})}$ if $i>1$. Then, $F^{x,\ldots,x}_{k_{1},\ldots,k_{N}}(q)$ is equal to
$$(q;q)_{\infty}^{-\chi(X)}\cdot(-1)^{N}\sum_{\sum_{s\geq 1}(s+1)m_{i,s}=k_{i}+2\atop 1\leq i\leq N}\prod_{i=1}^{N}{1\over(\sum_{s\geq 1}sm_{i,s})!}\cdot$$
$$\cdot\prod_{s\geq 1}\left({(-s)^{m_{s}}m_{s}!\over\prod_{i=1}^{N}m_{i,s}!}\sum_{t_{0}+t_{1}+\cdots+t_{j}+\cdots=m_{s}}\prod_{j=0}^{+\infty}{(\tilde{b}_{(s,1^{j})}q^{s+j})^{t_{j}}\over t_{j}!}\right)$$
where $m_{i,s}\geq 0$ for every $i$ and $s$, and $m_{s}=\sum_{i=1}^{N}m_{i,s}$
for every $s\geq 1$.
Proof.
By (6.1) and Theorem 2.3, we obtain
$$\displaystyle F^{x,\ldots,x}_{k_{1},\ldots,k_{N}}(q)$$
$$\displaystyle=$$
$$\displaystyle\left\langle\left(\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(x)\right)\sum_{n}c\big{(}T_{X^{[n]}}\big{)}q^{n},|1\rangle\right\rangle,$$
(6.2)
$$\displaystyle\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(x)$$
$$\displaystyle=$$
$$\displaystyle(-1)^{N}\sum_{\ell(\lambda^{(i)})=k_{i}+2,|\lambda^{(i)}|=0\atop 1\leq i\leq N}\prod_{i=1}^{N}{\mathfrak{a}_{\lambda^{(i)}}(x)\over(\lambda^{(i)})!}.$$
(6.3)
Note that $\tau_{\ell*}x=\underbrace{x\otimes\cdots\otimes x}_{\text{$\ell$ times}}$,
$K_{X}^{2}=\langle K_{X},K_{X}\rangle x$, $e_{X}=\chi(X)x$, and
$$\displaystyle\tau_{\ell*}1_{X}=1_{X}\otimes\underbrace{x\otimes\cdots\otimes x}_{\text{$(\ell-1)$ times}}+\cdots+\underbrace{x\otimes\cdots\otimes x}_{\text{$(\ell-1)$ times}}\otimes 1_{X}+w$$
(6.4)
where $w$ is a sum of cohomology classes of the form
$\alpha_{1}\otimes\cdots\otimes\alpha_{\ell}$ with $0<|\alpha_{i}|<4$ for some $i$.
So for a generalized partition ${\lambda}$, positive integers
$n_{1},\ldots,n_{s}$ and homogeneous classes $\alpha_{1},\ldots,\alpha_{s}\in H^{*}(X)$, we have
$$\displaystyle\big{\langle}\mathfrak{a}_{\lambda}(x)\mathfrak{a}_{-n_{1}}(\alpha_{1})\cdots\mathfrak{a}_{-n_{s}}(\alpha_{s})|0\rangle,|1\rangle\big{\rangle}=0$$
(6.5)
if $0<|\alpha_{i}|<4$ for some $i$, or if $\mathfrak{a}_{-n_{i}}(\alpha_{i})\in\mathbb{C}\,\mathfrak{a}_{-j}(x)$
for some $i$ and for some $j>1$. Combining with (6.2), (6.3) and
Lemma 6.1, we see that $F^{x,\ldots,x}_{k_{1},\ldots,k_{N}}(q)$ equals
$$\displaystyle\left\langle\left(\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(x)\right)\exp\left(\sum_{\mu\in\mathcal{P}}b_{\mu}\mathfrak{a}_{-\mu}(1_{X})q^{|\mu|}+\sum_{i}\tilde{f}_{(1^{i})}\mathfrak{a}_{-(1^{i})}(x)q^{i}\right)|0\rangle,|1\rangle\right\rangle$$
$$\displaystyle=$$
$$\displaystyle\left\langle\exp\left(\sum_{i}\tilde{f}_{(1^{i})}\mathfrak{a}_{-(1^{i})}(x)q^{i}\right)\cdot\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(x)\cdot\exp\left(\sum_{\mu\in\mathcal{P}}b_{\mu}\mathfrak{a}_{-\mu}(1_{X})q^{|\mu|}\right)|0\rangle,|1\rangle\right\rangle$$
where $\tilde{f}_{(1^{i})}=\chi(X)\cdot f_{(1^{i})}$.
By Lemma 6.2, $F^{x,\ldots,x}_{k_{1},\ldots,k_{N}}(q)$
is equal to
$$\displaystyle\left\langle\exp\left(\sum_{i}\tilde{f}_{(1^{i})}\mathfrak{a}_{-(1^{i})}(x)q^{i}\right)|0\rangle,|1\rangle\right\rangle$$
(6.6)
$$\displaystyle\cdot$$
$$\displaystyle\left\langle\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(x)\cdot\exp\left(\sum_{\mu\in\mathcal{P}}b_{\mu}\mathfrak{a}_{-\mu}(1_{X})q^{|\mu|}\right)|0\rangle,|1\rangle\right\rangle.$$
In particular, setting $N=0$, we conclude that
$$\displaystyle\left\langle\exp\left(\sum_{i}\tilde{f}_{(1^{i})}\mathfrak{a}_{-(1^{i})}(x)q^{i}\right)|0\rangle,|1\rangle\right\rangle=F(q)=(q;q)_{\infty}^{-\chi(X)}.$$
(6.7)
It follows from (6.6) and (6.4) that
$F^{x,\ldots,x}_{k_{1},\ldots,k_{N}}(q)$ is equal to
$$\displaystyle(q;q)_{\infty}^{-\chi(X)}\left\langle\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(x)\cdot\exp\left(\sum_{\mu\in\mathcal{P}}b_{\mu}\mathfrak{a}_{-\mu}(1_{X})q^{|\mu|}\right)|0\rangle,|1\rangle\right\rangle$$
(6.8)
$$\displaystyle=$$
$$\displaystyle(q;q)_{\infty}^{-\chi(X)}\left\langle\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(x)\cdot\exp\left(\sum_{i\geq 1\atop j\geq 0}\tilde{b}_{(i,1^{j})}\mathfrak{a}_{-i}(1_{X})\mathfrak{a}_{-1}(x)^{j}q^{i+j}\right)|0\rangle,|1\rangle\right\rangle$$
where $\tilde{b}_{(i,1^{j})}=(j+1)b_{(1^{j+1})}$ if $i=1$, and $\tilde{b}_{(i,1^{j})}=b_{(i,1^{j})}$ if $i>1$.
Let ${\lambda}^{(1)},\ldots,{\lambda}^{(N)}$ be from the right-hand side of (6.3).
In order to have a nonzero pairing
$$\displaystyle\left\langle\prod_{i=1}^{N}\mathfrak{a}_{\lambda^{(i)}}(x)\cdot\exp\left(\sum_{i\geq 1,j\geq 0}\tilde{b}_{(i,1^{j})}\mathfrak{a}_{-i}(1_{X})\mathfrak{a}_{-1}(x)^{j}q^{i+j}\right)|0\rangle,|1\rangle\right\rangle,$$
(6.9)
each ${\lambda}^{(i)}$ with $1\leq i\leq N$ must be of the form
$\big{(}(-1)^{n_{i}}1^{m_{i,1}}2^{m_{i,2}}\cdots\big{)}$;
since $\ell(\lambda^{(i)})=k_{i}+2$ and $|\lambda^{(i)}|=0$, we get
$n_{i}+\sum_{s\geq 1}m_{i,s}=k_{i}+2$ and $n_{i}=\sum_{s\geq 1}sm_{i,s}$; so
$$\displaystyle\sum_{s\geq 1}(s+1)m_{i,s}=k_{i}+2.$$
(6.10)
In this case, using Lemma 6.2, we see that (6.9) is equal to
$$\displaystyle\left\langle\mathfrak{a}_{-1}(x)^{\sum_{i}n_{i}}\cdot\prod_{i,s}\mathfrak{a}_{s}(x)^{m_{i,s}}\cdot\exp\left(\sum_{i\geq 1,j\geq 0}\tilde{b}_{(i,1^{j})}\mathfrak{a}_{-i}(1_{X})\mathfrak{a}_{-1}(x)^{j}q^{i+j}\right)|0\rangle,|1\rangle\right\rangle$$
$$\displaystyle=$$
$$\displaystyle\left\langle\prod_{1\leq i\leq N,s\geq 1}\mathfrak{a}_{s}(x)^{m_{i,s}}\cdot\exp\left(\sum_{i\geq 1,j\geq 0}\tilde{b}_{(i,1^{j})}\mathfrak{a}_{-i}(1_{X})\mathfrak{a}_{-1}(x)^{j}q^{i+j}\right)|0\rangle,|1\rangle\right\rangle.$$
Put $m_{s}=\sum_{i=1}^{N}m_{i,s}$ for every $s\geq 1$. Then, (6.9) is equal to
$$\displaystyle\left\langle\prod_{s\geq 1}\mathfrak{a}_{s}(x)^{m_{s}}\cdot\exp\left(\sum_{i\geq 1,j\geq 0}\tilde{b}_{(i,1^{j})}\mathfrak{a}_{-i}(1_{X})\mathfrak{a}_{-1}(x)^{j}q^{i+j}\right)|0\rangle,|1\rangle\right\rangle$$
$$\displaystyle=$$
$$\displaystyle\left\langle\prod_{s\geq 1}\mathfrak{a}_{s}(x)^{m_{s}}\cdot\prod_{i\geq 1,j\geq 0}\sum_{t}{1\over t!}\left(\tilde{b}_{(i,1^{j})}\mathfrak{a}_{-i}(1_{X})\mathfrak{a}_{-1}(x)^{j}q^{i+j}\right)^{t}\cdot|0\rangle,|1\rangle\right\rangle$$
$$\displaystyle=$$
$$\displaystyle\prod_{s\geq 1}\left((-s)^{m_{s}}m_{s}!\sum_{t_{0}+t_{1}+\cdots+t_{j}+\cdots=m_{s}}\prod_{j=0}^{+\infty}{(\tilde{b}_{(s,1^{j})}q^{s+j})^{t_{j}}\over t_{j}!}\right).$$
Combining this with (6.8), (6.3), (6.9) and (6.10),
$F^{x,\ldots,x}_{k_{1},\ldots,k_{N}}(q)$ is equal to
$$(q;q)_{\infty}^{-\chi(X)}\cdot(-1)^{N}\sum_{\sum_{s\geq 1}(s+1)m_{i,s}=k_{i}+2\atop 1\leq i\leq N}\prod_{i=1}^{N}{1\over(\sum_{s\geq 1}sm_{i,s})!}\cdot$$
$$\cdot\prod_{s\geq 1}\left({(-s)^{m_{s}}m_{s}!\over\prod_{i}m_{i,s}!}\sum_{t_{0}+t_{1}+\cdots+t_{j}+\cdots=m_{s}}\prod_{j=0}^{+\infty}{(\tilde{b}_{(s,1^{j})}q^{s+j})^{t_{j}}\over t_{j}!}\right)$$
where $m_{i,s}\geq 0$ for every $i$ and $s$, and $m_{s}=\sum_{i=1}^{N}m_{i,s}$
for every $s\geq 1$.
∎
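The prefactor $(q;q)_{\infty}^{-\chi(X)}$ appearing in (6.7) and throughout is straightforward to expand numerically. The following Python sketch (our own illustration, not from [Boi, BN]) computes the $q$-expansion of $(q;q)_{\infty}^{-\chi}=\prod_{n\geq 1}(1-q^{n})^{-\chi}$ by repeated prefix summation; for $\chi=1$ one recovers the partition numbers $p(n)$.

```python
def euler_inverse_power(chi, N):
    # coefficients of (q; q)_inf^{-chi} = prod_{n >= 1} (1 - q^n)^{-chi},
    # truncated at q^N (chi a positive integer)
    coeffs = [0] * (N + 1)
    coeffs[0] = 1
    for n in range(1, N + 1):
        for _ in range(chi):
            # multiply the series by 1/(1 - q^n) via prefix sums with stride n
            for m in range(n, N + 1):
                coeffs[m] += coeffs[m - n]
    return coeffs

# chi = 1 yields the partition numbers 1, 1, 2, 3, 5, 7, 11, ...
```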
Our next result determines the universal constants $b_{(i,1^{j})}$ with $i\geq 2$ and $j\geq 0$.
Theorem 6.4.
Let the numbers $b_{(1^{j})}$ and $b_{(i,1^{j})}$
be from Lemma 6.1. Let $\tilde{b}_{(i,1^{j})}=(j+1)b_{(1^{j+1})}=\sigma_{1}(j+1)$
if $i=1$, and $\tilde{b}_{(i,1^{j})}=b_{(i,1^{j})}$ if $i>1$.
(i)
If $i$ is an even positive integer, then $b_{(i,1^{j})}=0$ for all $j\geq 0$.
(ii)
Let $i>1$ be odd. Then,
$\displaystyle{{1\over(i-1)!}\sum_{j\geq 0}b_{(i,1^{j})}q^{i+j}}$ is equal to
$$\sum_{\sum_{1\leq s<i}(s+1)m_{s}=i+1\atop 2\nmid s}{1\over(\sum_{2\nmid s}sm_{s})!}\prod_{2\nmid s}\left(\sum_{\sum_{j\geq 0}t_{j}=m_{s}}\prod_{j=0}^{+\infty}{\big{(}(-s)\tilde{b}_{(s,1^{j})}q^{s+j}\big{)}^{t_{j}}\over t_{j}!}\right)$$
$$-\sum_{a,s_{1},\ldots,s_{a},b,t_{1},\ldots,t_{b}\geq 1\atop{\sum_{u=1}^{a}s_{u}+\sum_{v=1}^{b}t_{v}=i+1\atop{n_{1}>\cdots>n_{a},m_{1}>\cdots>m_{b}\atop{\sum_{u=1}^{a}n_{u}s_{u}=\sum_{v=1}^{b}m_{v}t_{v}}}}}\prod_{u=1}^{a}{(-1)^{s_{u}}q^{n_{u}s_{u}}\over s_{u}!\cdot(1-q^{n_{u}})^{s_{u}}}\cdot\prod_{v=1}^{b}{1\over t_{v}!\cdot(1-q^{m_{v}})^{t_{v}}}.$$
Proof.
(i) Setting $N=1$ in Lemma 6.3, we see that $F^{x}_{k}(q)$ is equal to
$$-(q;q)_{\infty}^{-\chi(X)}\cdot\sum_{\sum_{s\geq 1}(s+1)m_{s}=k+2}{1\over(\sum_{s\geq 1}sm_{s})!}\prod_{s\geq 1}\left(\sum_{\sum_{j\geq 0}t_{j}=m_{s}}\prod_{j=0}^{+\infty}{\big{(}(-s)\tilde{b}_{(s,1^{j})}q^{s+j}\big{)}^{t_{j}}\over t_{j}!}\right).$$
Comparing this with (4.38) which holds for all $k\geq 0$, we obtain
$$\displaystyle\sum_{\sum_{s\geq 1}(s+1)m_{s}=k+2}{1\over(\sum_{s}sm_{s})!}\prod_{s\geq 1}\left(\sum_{\sum_{j\geq 0}t_{j}=m_{s}}\prod_{j=0}^{+\infty}{\big{(}(-s)\tilde{b}_{(s,1^{j})}q^{s+j}\big{)}^{t_{j}}\over t_{j}!}\right)$$
$$\displaystyle=$$
$$\displaystyle\sum_{a,s_{1},\ldots,s_{a},b,t_{1},\ldots,t_{b}\geq 1\atop{\sum_{u=1}^{a}s_{u}+\sum_{v=1}^{b}t_{v}=k+2\atop{n_{1}>\cdots>n_{a},m_{1}>\cdots>m_{b}\atop{\sum_{u=1}^{a}n_{u}s_{u}=\sum_{v=1}^{b}m_{v}t_{v}}}}}\prod_{u=1}^{a}{(-1)^{s_{u}}q^{n_{u}s_{u}}\over s_{u}!\cdot(1-q^{n_{u}})^{s_{u}}}\cdot\prod_{v=1}^{b}{1\over t_{v}!\cdot(1-q^{m_{v}})^{t_{v}}}.$$
The largest value of $s$ satisfying $\sum_{s\geq 1}(s+1)m_{s}=k+2$
is given by $s=k+1$ together with $m_{k+1}=1$.
So the above identity can be rewritten as
$$\displaystyle{1\over k!}\sum_{j\geq 0}\tilde{b}_{(k+1,1^{j})}q^{(k+1)+j}$$
$$\displaystyle=$$
$$\displaystyle\sum_{\sum_{1\leq s<k+1}(s+1)m_{s}=k+2}{1\over(\sum_{s}sm_{s})!}\prod_{s\geq 1}\left(\sum_{\sum_{j\geq 0}t_{j}=m_{s}}\prod_{j=0}^{+\infty}{\big{(}(-s)\tilde{b}_{(s,1^{j})}q^{s+j}\big{)}^{t_{j}}\over t_{j}!}\right)$$
$$\displaystyle-\sum_{a,s_{1},\ldots,s_{a},b,t_{1},\ldots,t_{b}\geq 1\atop{\sum_{u=1}^{a}s_{u}+\sum_{v=1}^{b}t_{v}=k+2\atop{n_{1}>\cdots>n_{a},m_{1}>\cdots>m_{b}\atop{\sum_{u=1}^{a}n_{u}s_{u}=\sum_{v=1}^{b}m_{v}t_{v}}}}}\prod_{u=1}^{a}{(-1)^{s_{u}}q^{n_{u}s_{u}}\over s_{u}!\cdot(1-q^{n_{u}})^{s_{u}}}\cdot\prod_{v=1}^{b}{1\over t_{v}!\cdot(1-q^{m_{v}})^{t_{v}}}.$$
Replacing $k+1$ by $i$, we conclude that $\displaystyle{{1\over(i-1)!}\sum_{j\geq 0}\tilde{b}_{(i,1^{j})}q^{i+j}}$ is equal to
$$\displaystyle\sum_{\sum_{1\leq s<i}(s+1)m_{s}=i+1}{1\over(\sum_{s}sm_{s})!}\prod_{s\geq 1}\left(\sum_{\sum_{j\geq 0}t_{j}=m_{s}}\prod_{j=0}^{+\infty}{\big{(}(-s)\tilde{b}_{(s,1^{j})}q^{s+j}\big{)}^{t_{j}}\over t_{j}!}\right)$$
(6.11)
$$\displaystyle-\sum_{a,s_{1},\ldots,s_{a},b,t_{1},\ldots,t_{b}\geq 1\atop{\sum_{u=1}^{a}s_{u}+\sum_{v=1}^{b}t_{v}=i+1\atop{n_{1}>\cdots>n_{a},m_{1}>\cdots>m_{b}\atop{\sum_{u=1}^{a}n_{u}s_{u}=\sum_{v=1}^{b}m_{v}t_{v}}}}}\prod_{u=1}^{a}{(-1)^{s_{u}}q^{n_{u}s_{u}}\over s_{u}!\cdot(1-q^{n_{u}})^{s_{u}}}\cdot\prod_{v=1}^{b}{1\over t_{v}!\cdot(1-q^{m_{v}})^{t_{v}}}.$$
(6.12)
Note that (6.12) is equal to $0$ if $2|i$.
Letting $i=2$, we get $\displaystyle{\sum_{j\geq 0}\tilde{b}_{(2,1^{j})}q^{2+j}=0}$.
Therefore, $\tilde{b}_{(2,1^{j})}=0$ for every $j\geq 0$.
Hence we have $b_{(2,1^{j})}=0$ for every $j\geq 0$.
Next, let $i>2$ and $2|i$. Assume inductively that $b_{(s,1^{j})}=0$
for every $j\geq 0$ whenever $2\leq s<i$ and $2|s$.
Since (6.12) is $0$,
$\displaystyle{{1\over(i-1)!}\sum_{j\geq 0}b_{(i,1^{j})}q^{i+j}}$ is equal to
$$\sum_{\sum_{1\leq s<i}(s+1)m_{s}=i+1}{1\over(\sum_{s}sm_{s})!}\prod_{s\geq 1}\left(\sum_{\sum_{j\geq 0}t_{j}=m_{s}}\prod_{j=0}^{+\infty}{\big{(}(-s)\tilde{b}_{(s,1^{j})}q^{s+j}\big{)}^{t_{j}}\over t_{j}!}\right).$$
The condition $\sum_{1\leq s<i}(s+1)m_{s}=i+1$ implies that $m_{s}>0$ for
some even integer $s<i$.
Hence $\displaystyle{{1\over(i-1)!}\sum_{j\geq 0}b_{(i,1^{j})}q^{i+j}}=0$ by induction.
So $b_{(i,1^{j})}=0$ for all $j\geq 0$.
(ii) Follows immediately from (i), (6.11) and (6.12).
∎
Note that $b_{(2i)}=0$ for $i\geq 1$ has been proved in [Boi, BN] (see Lemma 6.1).
Next, using the universal constants $f_{(2,1^{j})},g_{(2,1^{j})}$ and $h_{(2,1^{j})}$,
we compute the generating series $F^{\alpha}_{1}(q)$ for a cohomology class $\alpha$
with $|\alpha|<4$.
Lemma 6.5.
Let $f_{(2,1^{j})},g_{(2,1^{j})}$ and $h_{(2,1^{j})}$ be from Lemma 6.1,
and let $\alpha\in H^{*}(X)$ be a homogeneous class with $0<|\alpha|<4$. Then,
(i)
$\displaystyle{F^{1_{X}}_{1}(q)=\sum_{j\geq 0}\tilde{f}_{(2,1^{j})}q^{2+j}}$
where $\tilde{f}_{(2,1^{j})}=\chi(X)\cdot f_{(2,1^{j})}+\langle K_{X},K_{X}\rangle\cdot h_{(2,1^{j})}$;
(ii)
$\displaystyle{F^{\alpha}_{1}(q)=(q;q)_{\infty}^{-\chi(X)}\cdot\sum_{j\geq 0}g_{(2,1^{j})}q^{2+j}\cdot\langle\alpha,K_{X}\rangle}$.
Proof.
(i) Let $\alpha\in H^{*}(X)$ be an arbitrary cohomology class.
Note that for all $n\geq 1$ and $A\in{\mathbb{H}}_{X}$, we have
$\displaystyle{\left\langle(n-1)(\mathfrak{a}_{-n}\mathfrak{a}_{n})(K_{X}\alpha)A,|1\rangle\right\rangle=0}$.
By (6.1) and (4.37),
$$\displaystyle F^{\alpha}_{1}(q)$$
$$\displaystyle=$$
$$\displaystyle\left\langle\mathfrak{G}_{1}(\alpha)\sum_{n}c\big{(}T_{X^{[n]}}\big{)}q^{n},|1\rangle\right\rangle$$
(6.13)
$$\displaystyle=$$
$$\displaystyle-\sum_{\ell(\lambda)=3,|\lambda|=0}{1\over\lambda!}\left\langle\mathfrak{a}_{\lambda}(\alpha)\sum_{n}c\big{(}T_{X^{[n]}}\big{)}q^{n},|1\rangle\right\rangle$$
$$\displaystyle=$$
$$\displaystyle-{1\over 2}\left\langle(\mathfrak{a}_{-1}\mathfrak{a}_{-1}\mathfrak{a}_{2})(\alpha)\sum_{n}c\big{(}T_{X^{[n]}}\big{)}q^{n},|1\rangle\right\rangle$$
$$\displaystyle=$$
$$\displaystyle-{1\over 2}\left\langle\mathfrak{a}_{2}(\alpha)\sum_{n}c\big{(}T_{X^{[n]}}\big{)}q^{n},|1\rangle\right\rangle.$$
Set $\alpha=1_{X}$. Put $\tilde{f}_{\mu}=\chi(X)\cdot f_{\mu}+\langle K_{X},K_{X}\rangle\cdot h_{\mu}$.
By Lemma 6.1, $F^{1_{X}}_{1}(q)$ equals
$$\displaystyle-{1\over 2}\left\langle\mathfrak{a}_{2}(1_{X})\exp\left(\sum_{\mu\in\mathcal{P}}q^{|\mu|}\tilde{f}_{\mu}\mathfrak{a}_{-\mu}(x)\right)|0\rangle,|1\rangle\right\rangle$$
$$\displaystyle=$$
$$\displaystyle-{1\over 2}\left\langle\mathfrak{a}_{2}(1_{X})\exp\left(\sum_{j\geq 0}q^{2+j}\tilde{f}_{(2,1^{j})}\mathfrak{a}_{-2}(x)\mathfrak{a}_{-1}(x)^{j}\right)|0\rangle,|1\rangle\right\rangle$$
$$\displaystyle=$$
$$\displaystyle-{1\over 2}\left\langle\mathfrak{a}_{2}(1_{X})\left(\sum_{j\geq 0}q^{2+j}\tilde{f}_{(2,1^{j})}\mathfrak{a}_{-2}(x)\mathfrak{a}_{-1}(x)^{j}\right)|0\rangle,|1\rangle\right\rangle$$
$$\displaystyle=$$
$$\displaystyle\sum_{j\geq 0}\tilde{f}_{(2,1^{j})}q^{2+j}.$$
(ii) Let $0<|\alpha|<4$. Again by (6.13) and Lemma 6.1,
$F^{\alpha}_{1}(q)$ is equal to
$$\displaystyle-{1\over 2}(q;q)_{\infty}^{-\chi(X)}\left\langle\mathfrak{a}_{2}(\alpha)\exp\left(\sum_{\mu\in\mathcal{P}}q^{|\mu|}g_{\mu}\mathfrak{a}_{-\mu}(K_{X})\right)|0\rangle,|1\rangle\right\rangle$$
$$\displaystyle=$$
$$\displaystyle-{1\over 2}(q;q)_{\infty}^{-\chi(X)}\left\langle\mathfrak{a}_{2}(\alpha)\exp\left(\sum_{j\geq 0}q^{2+j}g_{(2,1^{j})}(\mathfrak{a}_{-2}\mathfrak{a}_{-1}^{j})(K_{X})\right)|0\rangle,|1\rangle\right\rangle$$
$$\displaystyle=$$
$$\displaystyle-{1\over 2}(q;q)_{\infty}^{-\chi(X)}\left\langle\mathfrak{a}_{2}(\alpha)\left(\sum_{j\geq 0}q^{2+j}g_{(2,1^{j})}\mathfrak{a}_{-2}(K_{X})\mathfrak{a}_{-1}(x)^{j}\right)|0\rangle,|1\rangle\right\rangle.$$
Therefore, $\displaystyle{F^{\alpha}_{1}(q)=(q;q)_{\infty}^{-\chi(X)}\cdot\sum_{j\geq 0}g_{(2,1^{j})}q^{2+j}\cdot\langle\alpha,K_{X}\rangle}$ when $0<|\alpha|<4$.
∎
Proposition 6.6.
Let the numbers $g_{(2,1^{j})}$ and $h_{(2,1^{j})}$ be from Lemma 6.1. Then,
$g_{(2,1^{j})}=-h_{(2,1^{j})}$. Moreover, $\sum_{j\geq 0}g_{(2,1^{j})}q^{2+j}$ is
the coefficient of $z^{0}$ in
$${1\over 2}\left(\sum_{n}{(n-1)q^{n}\over(1-q^{n})^{2}}+\sum_{n}{(qz)^{n}\over 1-q^{n}}\cdot\left(\sum_{m}{z^{-2m}\over(1-q^{m})^{2}}+2\sum_{m_{1}>m_{2}}{z^{-m_{1}}\over 1-q^{m_{1}}}{z^{-m_{2}}\over 1-q^{m_{2}}}\right)\right).$$
Proof.
For simplicity, denote the expression displayed in Proposition 6.6 by $A(z)$.
Let $X$ be a smooth projective surface with $\chi(X)=0$ and
$\langle K_{X},K_{X}\rangle\neq 0$.
On one hand, applying Lemma 6.5 (i) and
Proposition 4.13 to $F^{1_{X}}_{1}(q)$, we conclude that
$\sum_{j\geq 0}h_{(2,1^{j})}q^{2+j}$ is the coefficient of $z^{0}$ in $-A(z)$.
On the other hand, applying Lemma 6.5 (ii) and
Proposition 4.13 to $F^{K_{X}}_{1}(q)$, we see that
$\sum_{j\geq 0}g_{(2,1^{j})}q^{2+j}$ is the coefficient of $z^{0}$ in $A(z)$.
It follows that $g_{(2,1^{j})}=-h_{(2,1^{j})}$ for every $j\geq 0$.
∎
Remark 6.7.
Let $N\geq 1$.
Let $\alpha_{1},\ldots,\alpha_{N}\in H^{*}(X)$ be homogeneous classes such that
$K_{X}\alpha_{i}=e_{X}\alpha_{i}=0$ for all $1\leq i\leq N$,
and let $k_{1},\ldots,k_{N}\geq 0$.
(i)
As in the proof of Lemma 6.3, we have
$$\displaystyle F^{\alpha_{1},\ldots,\alpha_{N}}_{k_{1},\ldots,k_{N}}(q)=(q;q)_{\infty}^{-\chi(X)}\left\langle\left(\prod_{i=1}^{N}\mathfrak{G}_{k_{i}}(\alpha_{i})\right)\exp\left(\sum_{\mu\in\mathcal{P}}b_{\mu}\mathfrak{a}_{-\mu}(1_{X})q^{|\mu|}\right)|0\rangle,|1\rangle\right\rangle.$$
In principle, together with Theorem 4.8,
this allows us to determine many of the universal constants $b_{\mu}$ in Lemma 6.1.
(ii)
In particular, $F^{\alpha_{1}}_{k_{1}}(q)=0$ if $|\alpha_{1}|<4$.
This matches with Proposition 4.14 (i).
References
[Boi]
S. Boissière,
Chern classes of the tangent bundle on the Hilbert scheme of
points on the affine plane. J. Alg. Geom. 14 (2005), 761-787.
[BN]
S. Boissière, M.A. Nieper-Wisskirchen,
Generating series in the cohomology of Hilbert schemes of
points on surfaces. LMS J. Comput. Math. 10 (2007), 254-270 (electronic).
[Bra1]
D. Bradley,
Multiple $q$-zeta values. J. Algebra 283 (2005), 752-798.
[Bra2]
D. Bradley,
On the sum formula for multiple $q$-zeta values.
Rocky Mountain J. Math. 37 (2007), 1427-1434.
[Car]
E. Carlsson,
Vertex operators and moduli spaces of sheaves.
Ph.D. thesis, Princeton University, 2008.
[CO]
E. Carlsson, A. Okounkov,
Exts and Vertex Operators. Duke Math. J. 161 (2012), 1797-1815.
[Got]
L. Göttsche,
The Betti numbers of the Hilbert scheme of points on a smooth
projective surface, Math. Ann. 286 (1990) 193–207.
[Gro]
I. Grojnowski,
Instantons and affine algebras I: the Hilbert scheme and
vertex operators, Math. Res. Lett. 3 (1996) 275–291.
[LQW1]
W.-P. Li, Z. Qin and W. Wang,
Vertex algebras and the
cohomology ring structure of Hilbert schemes of points on
surfaces. Math. Ann. 324 (2002), 105-133.
[LQW2]
W.-P. Li, Z. Qin and W. Wang,
Hilbert schemes and $\mathcal{W}$ algebras.
Intern. Math. Res. Notices 27 (2002), 1427-1456.
[LQW3]
W.-P. Li, Z. Qin, W. Wang,
Stability of the cohomology rings of Hilbert schemes
of points on surfaces. J. reine angew. Math. 554 (2003), 217-234.
[Nak]
H. Nakajima,
Heisenberg algebra and Hilbert schemes of points on
projective surfaces, Ann. Math. 145 (1997) 379–388.
[Oko]
A. Okounkov,
Hilbert schemes and multiple $q$-zeta values.
Funct. Anal. Appl. 48 (2014), 138-144.
[OT]
J. Okuda, Y. Takeyama,
On relations for the multiple $q$-zeta values.
Ramanujan J. 14 (2007), 379-387.
[Zud]
W. Zudilin,
Algebraic relations for multiple zeta values.
Russian Math. Surveys 58 (2003), 1-29.
Sparse spectral methods for solving high-dimensional and multiscale elliptic PDEs
Craig Gross
Department of Mathematics
Michigan State University
619 Red Cedar Road
East Lansing, MI 48824
[email protected]
and
Mark Iwen
Department of Mathematics
Michigan State University
619 Red Cedar Road
East Lansing, MI 48824
Department of Computational Mathematics, Science and Engineering
Michigan State University
428 S Shaw Lane
East Lansing, MI 48824
[email protected]
Abstract.
In his monograph Chebyshev and Fourier Spectral Methods, John Boyd claimed that, regarding Fourier spectral methods for solving differential equations, “[t]he virtues of the Fast Fourier Transform will continue to improve as the relentless march to larger and larger [bandwidths] continues” [2, pg. 194].
This paper attempts to further the virtue of the Fast Fourier Transform (FFT) as not only bandwidth is pushed to its limits, but also the dimension of the problem.
Instead of using the traditional FFT, however, we make a key substitution: a high-dimensional, sparse Fourier transform (SFT) paired with randomized rank-1 lattice methods.
The resulting sparse spectral method rapidly and automatically determines a set of Fourier basis functions whose span is guaranteed to contain an accurate approximation of the solution of a given elliptic PDE.
This much smaller, near-optimal Fourier basis is then used to efficiently solve the given PDE in a runtime which only depends on the PDE’s data compressibility and ellipticity properties, while breaking the curse of dimensionality and relieving linear dependence on any multiscale structure in the original problem.
Theoretical performance of the method is established herein with convergence analysis in the Sobolev norm for a general class of non-constant diffusion equations, as well as pointers to technical extensions of the convergence analysis to more general advection-diffusion-reaction equations.
Numerical experiments demonstrate good empirical performance on several multiscale and high-dimensional example problems, further showcasing the promise of the proposed methods in practice.
Key words and phrases: Spectral methods, sparse Fourier transforms, high-dimensional function approximation, elliptic partial differential equations, compressive sensing, rank-1 lattices
2010 Mathematics Subject Classification: Primary 65N35, 65T40, 35J15;
Secondary 65D40, 35J05
This work was supported in part by the National Science Foundation Award Numbers DMS 2106472 and 1912706.
1. Introduction
Consider as a model problem an elliptic PDE with periodic boundary conditions
(1)
$$-\nabla\cdot(a\nabla u)=f$$
where, for $\mathbb{T}:=\mathbb{R}/\mathbb{Z}$ taken to be the one-dimensional torus, $a,f:\mathbb{T}^{d}\rightarrow\mathbb{R}$ are the PDE data, and $u:\mathbb{T}^{d}\rightarrow\mathbb{R}$ is the solution.
Herein we propose a two-stage method for solving such PDEs. First, we use recently developed SFT methods for high-dimensional functions [23] to approximate the Fourier data of both the diffusion coefficient $a$ and the forcing function $f$.
So long as the PDE data, $a$ and $f$, are well represented by sparse Fourier approximations, we then provide a technique for using the SFT output to find a relatively small number of Fourier coefficients that are guaranteed to reconstruct an accurate approximation of the solution $u$. In all, this results in a sublinear-time, curse-of-dimensionality-breaking spectral method for solving non-constant diffusion equations under periodic boundary conditions.
Moreover, the technique presented is theoretically sound, with $H^{1}$ convergence guarantees provided.
These convergence guarantees hinge on a novel analysis of the Fourier-Galerkin representation of a non-constant diffusion operator where we are able to fully characterize the Fourier compressibility of the solution to (1) in terms of the Fourier compressibility of the PDE data.
Additionally, we provide algorithmic improvements to the SFT developed in [23] that allow the method to run in fully sublinear-time (with respect to the size of the initial frequency set of interest).
This is accompanied by new $L^{\infty}$ error guarantees for this SFT which, in addition to the original $L^{2}$ guarantees, allow for the final $H^{1}$ convergence analysis of the spectral method.
We also provide implementations of our methods along with various numerical experiments.
Of special note, we conclude by further extending our methods beyond the simple diffusion equation (1) to also apply to multiscale, high-dimensional advection-diffusion-reaction equations including, e.g., the governing equations for flow dynamics in a porous medium used in hydrological modeling [35].
Solving (1) using a traditional Fourier spectral method amounts to replacing the data and the solution with their Fourier series, simplifying the left-hand side into a single Fourier series, matching the Fourier coefficients of both sides, and solving the resulting system of equations for the Fourier coefficients of $u$.
See Section 5 for further explanation of this Galerkin formulation and the related formulations discussed below.
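To make the Galerkin procedure above concrete in the simplest setting, the following Python sketch (our own illustration; the one-dimensional restriction and all function names are ours, not the paper's) solves $-(au')'=f$ on the one-dimensional torus by matching Fourier coefficients of both sides on a truncated frequency set $|k|\leq K$ and solving the resulting linear system.

```python
import numpy as np

def fourier_coeffs(g, K, n=512):
    # approximate Fourier coefficients g_hat_k, k = -K..K, of a 1-periodic g
    # from n equispaced samples (assumes n > 2K to avoid index collisions)
    x = np.arange(n) / n
    c = np.fft.fft(g(x)) / n
    ks = np.arange(-K, K + 1)
    return c[ks % n]

def solve_periodic_diffusion(a, f, K):
    # Galerkin solve of -(a u')' = f on the torus over frequencies |k| <= K,
    # normalized so that u_hat_0 = 0 (f is assumed to have zero mean).
    # Matching coefficients gives sum_l 4 pi^2 k l a_hat_{k-l} u_hat_l = f_hat_k.
    ks = np.arange(-K, K + 1)
    ahat = fourier_coeffs(a, 2 * K)           # need a_hat_{k-l} for |k-l| <= 2K
    fhat = fourier_coeffs(f, K)
    diff = ks[:, None] - ks[None, :]
    M = 4 * np.pi**2 * np.outer(ks, ks) * ahat[diff + 2 * K]
    # pin the k = 0 mode to remove the constant null space of the operator
    M[K, :] = 0.0
    M[K, K] = 1.0
    fhat[K] = 0.0
    return ks, np.linalg.solve(M, fhat)
```

For constant $a\equiv 1$ and $f(x)=\sin(2\pi x)$, the system is diagonal and the computed solution matches the exact $u(x)=\sin(2\pi x)/(4\pi^{2})$; for non-constant $a$ the matrix $M$ is the (truncated) convolution operator described above.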
Two main sources of approximation error arise when implementing this technique computationally.
The first is due to truncating the Fourier series involved to a finite number of terms.
The second is due to numerically approximating the Fourier coefficients of the PDE data.
Due to the rich theory of traditional spectral methods, these two sources of error can directly quantify the error of the resulting approximation of $u$.
Lemma 1 (Strang’s lemma, [10]).
Let $u^{\mathrm{truncation}}$ be the function which has the same Fourier series as $u$ but truncated in some manner, and $a^{\mathrm{approximate}}$ and $f^{\mathrm{approximate}}$ be computed using approximations of the Fourier series of $a$ and $f$ truncated in the same way as $u^{\mathrm{truncation}}$.
Then the procedure outlined above produces a solution $u^{\mathrm{spectral}}$ which satisfies
$$\norm{u-u^{\mathrm{spectral}}}_{H^{1}}\lesssim_{a,f}\norm{u-u^{\mathrm{truncation}}}_{H^{1}}+\norm{a-a^{\mathrm{approximate}}}_{L^{\infty}}+\norm{f-f^{\mathrm{approximate}}}_{L^{2}}$$
where the exact notion of the periodic Sobolev space $H^{1}$ is discussed further in Section 3, and $\lesssim_{a,f}$ denotes an upper bound with constants that depend on the PDE data.
This is a rough simplification of Strang’s lemma [10], which is itself a generalization of the well-known Céa’s lemma (the specific version of this lemma used in this paper is presented and proven in Lemma 6 below).
Effectively, it states that the spectral method solution is optimal up to its Fourier series truncation and the approximation of the PDE data $a$ and $f$.
Thus, analyzing convergence reduces to estimating these two errors.
This outline provides the three primary ingredients for this paper:
(1)
a truncation method and the resulting error analysis (Section 6),
(2)
a (sparse) Fourier series approximation technique (Sections 7 and 8), and
(3)
a version of Strang’s lemma that ties everything together (Section 9).
The final method is given in Algorithm 1.
Its convergence guarantee in Corollary 5 shows that the error in approximating $u$ converges like the (near-optimal) convergence rates of the SFT approximation error of $a$ and $f$ in addition to an exponentially decaying term related to the ellipticity properties of $a$.
The sections preceding the main theoretical analysis listed above include background on sparse spectral methods and motivation for our techniques (Section 2), setting the notation and PDE setup (Sections 3 and 4 respectively), and the aforementioned Galerkin formulation of our model PDE underpinning the spectral method approach (Section 5).
The paper is closed with a numerics section (Section 10) describing the implementation of our technique and a variety of numerical experiments demonstrating the theory.
2. Background and motivation
We now outline some of the previous literature on spectral methods with an emphasis on exploiting sparsity.
Along the way, various shortcomings will arise, and we will use these as opportunities to motivate and explain our approach in the sequel.
2.1. Convergence and computational complexity
Using a $d$-dimensional FFT (see, e.g., [34, Section 5.3.5] for details) to compute $a^{\mathrm{approximate}}$ and $f^{\mathrm{approximate}}$ in the procedure suggested in Lemma 1 naturally enforces a Fourier series truncation.
A $d$-dimensional FFT using a tensorized grid of $K$ uniformly spaced points in each dimension will produce approximate Fourier coefficients indexed by frequencies in the $d$-dimensional hypercube on the integer lattice $\mathbb{Z}^{d}$ of sidelength $K$ (note that when we refer to “bandwidth” in a multidimensional sense, we are still referring to the sidelength $K$ of the hypercube containing these integer frequencies).
Each $d$-dimensional FFT in general requires more than $K^{d}$ operations, as does the linear-system solve (in the absence of any sparsity or other tricks).
Thus, not only do traditional Fourier spectral methods suffer from the curse of dimensionality, but even in moderate dimensions, multiscale problems (i.e., PDE data which require very high bandwidth to be fully resolved) can result in intractable computations.
Note that a standard FFT requires more than $K^{d}$ operations in the discussion above exactly because we implicitly chose to expand our PDE data and solution with respect to an impractically huge set of $K^{d}$ Fourier basis functions there. What if we instead expand all of $a$, $f$, and $u$ in terms of the union of their individual best possible $s\ll K^{d}$ Fourier basis functions from this larger set? Note that doing so would automatically lead to each term on the right hand side of Lemma 1 becoming related to a nonlinear best $s$-term approximation error with respect to the Fourier basis in the sense of, e.g., Cohen et al [11]. Furthermore, whenever these errors decayed fast enough in $s$ it would in fact imply that each of $a$, $f$, and $u$ was effectively sparse/compressible in the Fourier basis, allowing the theory of compressive sensing to imply the sufficiency of a small discretization of (1). Of course, this procedure is not terribly useful in practice unless one can actually rapidly discover the best possible subset of $s\ll K^{d}$ Fourier basis functions for each function involved above via, e.g., compressive sensing.
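As a small numerical illustration of the best $s$-term idea (ours, not a result from the paper or from [11]), the relative $L^{2}$ error of the best $s$-term Fourier approximation of a sampled function can be estimated directly from its FFT via Parseval's identity: keep the $s$ largest-magnitude coefficients and measure the energy in the tail.

```python
import numpy as np

def best_s_term_rel_error(samples, s):
    # relative L^2 error of the best s-term Fourier approximation,
    # estimated from equispaced samples using Parseval's identity
    mags = np.sort(np.abs(np.fft.fft(samples)) / len(samples))[::-1]
    return float(np.sqrt(np.sum(mags[s:] ** 2) / np.sum(mags ** 2)))

# a function dominated by a few modes plus a small high-frequency term
x = np.arange(1024) / 1024
g = (np.cos(2 * np.pi * x)
     + 0.5 * np.sin(2 * np.pi * 7 * x)
     + 1e-3 * np.cos(2 * np.pi * 200 * x))
```

Here keeping only $s=4$ of the $1024$ coefficients already yields a relative error below $10^{-2}$, and $s=6$ resolves the function to machine precision; this rapid decay in $s$ is exactly the Fourier compressibility that the SFT methods below are designed to exploit.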
A naive application of standard compressive sensing theory in pursuit of this strategy flounders in at least two ways here, however: First, though extremely successful at reducing the number of linear measurements needed in order to reconstruct a given function, standard compressive sensing recovery algorithms such as basis pursuit must still individually represent all $K^{d}$ basis functions (in this simple case) during the function’s numerical approximation. As a result, no dramatic runtime speedups can be expected here without additional modifications. Second, standard compressive sensing theory also generally requires direct linear measurements (in the form of, e.g., point samples) to be gathered from the function whose sparse approximation one seeks. In the case of (1) this may be trivially possible for both $a$ and $f$, but is not generally possible for the a priori unknown solution $u$ that one aims to compute (at least, not without additional innovations). Of course these difficulties can be overcome to various degrees even when using standard compressive sensing reconstruction strategies, and at least one such approach for doing so will be discussed below in Section 2.5.
In this paper, however, we instead circumvent the two difficulties mentioned above by using modified sparse Fourier transform methods. SFTs [16, 26, 25, 17, 1, 32] are compressive sensing algorithms which are highly specialized to take advantage of the number theoretic and algebraic structure of the Fourier basis as much as possible. As a result, SFTs rarely have to consider Fourier basis functions individually during the reconstruction process, and so can simultaneously reduce both their measurement needs and computational complexities to effectively depend only on the number of important Fourier series coefficients in the function one aims to approximate. In the present setting, this means that SFT algorithms will run in sublinear $o(K^{d})$-time, more or less automatically sidestepping the reconstruction runtime issues plaguing standard compressive sensing recovery algorithms which must represent each of the $K^{d}$-basis functions individually as they run. To circumvent the issues related to not being able to measure the solution $u$ directly, we then use yet another approach. Instead of attempting to apply compressive sensing methods to $u$ at all, we instead use the more easily discovered most-significant Fourier basis elements of $a$ and $f$ to predict in advance where the most significant Fourier basis elements of $u$ must reside by analyzing the structure of (1). Of course, once we have discovered which Fourier basis elements are important in representing $u$ in this fashion, standard Galerkin techniques can then be used to solve a small truncated discretization of (1) thereafter.
2.2. Prior attempts to relieve dependence on bandwidth via SFT-type methods
A key work pioneering the use of SFTs in computing solutions to PDEs is due to Daubechies et al. [13].
This work mostly focuses on time-dependent, one-dimensional problems where the spectral scheme is formulated as alternating Fourier-projections and time-steps.
Thus, there is no need to impose an a priori Fourier basis truncation on the solution.
The proposed projection step instead utilizes an SFT at each time step to adaptively retain the most significant frequencies throughout the time-stepping procedure.
Time-independent problems like (1) can then be handled by stepping in time until a stationary solution is obtained.
A simplified form of this algorithm is shown to succeed numerically in [13], and it is also analyzed theoretically in the case where the diffusion coefficient consists of a known, fine-scale mode superimposed over lower frequency terms.
There, the Fourier-projection step can be considered to be fixed.
However, removing the known fine-scale assumption leads to many difficulties, including the possibility of sparsity-induced omissions in early time steps cascading into larger errors later on.
In this paper, on the other hand, we focus on the case of time-independent problems.
This allows us to utilize SFTs only once initially. By doing so we avoid the possibility of SFT-induced error accumulation over many time steps.
The main difficulty in our analysis then becomes determining how the Fourier-sparse representations of the PDE data discovered by high-dimensional SFTs can be used to rapidly find a suitable Fourier representation of the solution.
This takes the form of mixing the Fourier supports of $a$ and $f$ into stamping sets (discussed in detail in Section 6) on which we can analyze the projection error of the solution.
In fact, these stamping sets can be viewed as a modification and generalization of the techniques used in the one-dimensional and known fine-scale analysis from [13].
2.3. Attempts to relieve curse of dimensionality
Many attempts to overcome the curse of dimensionality in Fourier spectral methods for PDE have focused on using basis truncations which allow for an efficient high-dimensional Fourier transform.
One of the most popular techniques is the sparse grid spectral method, which computes Fourier coefficients on the hyperbolic cross [28, 9, 19, 20, 36, 21, 12].
In general, a sparse grid method reduces the number of sampling points necessary to approximate the PDE data to $\mathcal{O}(K\log^{d-1}(K))$, where $K$ acts as a type of bandwidth parameter.
Algorithms to compute spectral representations using these sparse sampling grids run with similar complexity.
When used in conjunction with spectral methods for solving PDE, these sparse grid Fourier transforms produce solution approximations with error estimates similar to the full $d$-dimensional FFT-versions reduced by factors only on the order of $1/\log^{d-1}(K)$.
In the context of sparse grid Fourier transforms, these methods compute Fourier coefficients with frequencies on hyperbolic crosses of similar cardinality to the number of sampling points.
These hyperbolic crosses have intimate links with spaces of functions of bounded mixed derivative, in the sense that they are the optimal Fourier-approximation spaces for this class.
Thus, sparse grid Fourier spectral methods are particularly apt for problems where the solution is of bounded mixed derivative, as this produces an optimal $u-u^{\mathrm{truncation}}$ term in Lemma 1 above.
Though sparse-grid spectral methods can efficiently solve a variety of high-dimensional problems, there are clear downsides for the types of problems we target in this paper.
While many problems fit the bounded mixed derivative assumption, and therefore have accurate Fourier representations on the hyperbolic cross, the multiscale, Fourier-sparse problems that we are interested in are especially problematic.
In fact, since a hyperbolic cross of bandwidth $K$ contains only those frequencies $\mathbold{k}\in\mathbb{Z}^{d}$ with $\prod_{i=1}^{d}\absolutevalue{k_{i}}=\mathcal{O}(K)$, $d$-dimensional frequencies active in all dimensions can only have $\norm{\mathbold{k}}_{\infty}=\mathcal{O}(K^{1/d})$.
Thus, in a multiscale problem with even one frequency that interacts in all dimensions, a hyperbolic cross is required with a bandwidth exponential in $d$ to properly resolve the data.
This then forces the traditionally curse-of-dimensionality-mitigating $\log^{d-1}(K)$ terms characteristic of sparse grid methods to be at least on the order of $d^{d-1}$.
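This anisotropy is easy to see directly. The following sketch enumerates a hyperbolic cross under one common convention (the function name and the particular cutoff $\prod_{i}\max(\absolutevalue{k_{i}},1)\leq K$ are our own choices, not those of any specific reference):

```python
import itertools
import math

def hyperbolic_cross(K, d):
    """Frequencies k in Z^d with prod_i max(|k_i|, 1) <= K (one common convention)."""
    rng = range(-K, K + 1)
    return {k for k in itertools.product(rng, repeat=d)
            if math.prod(max(abs(ki), 1) for ki in k) <= K}
```

For $K=4$ and $d=2$, the axis-aligned frequency $(4,0)$ is retained while the fully active $(3,3)$ is not, illustrating why even a single frequency interacting in all dimensions forces an exponentially larger bandwidth.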
2.4. More on high-dimensional Fourier transforms
As outlined in Section 2.2 above, this paper uses sparse Fourier transforms to create an adaptive basis truncation suited to the PDE data.
This mimics a similar evolution in the field of high-dimensional Fourier transforms from sparse grids to more flexible techniques [31, 14, 33, 29, 21, 30, 34, 24].
In particular, the high-dimensional sparse Fourier transforms discussed in Section 7 originate from a link between early high-dimensional quadrature techniques and Fourier approximations on the hyperbolic cross [29, 30].
Instead of sampling functions on sparse grids, these methods sample high-dimensional functions along a rank-1 lattice.
Rank-1 lattices are described by sampling $M$ points in $\mathbb{T}^{d}$ in the direction of a generating vector $\mathbold{z}\in\mathbb{N}^{d}$, that is, using the sampling set
$$\Lambda(\mathbold{z},M):=\left\{\frac{j}{M}\mathbold{z}\bmod\mathbold{1}\mid j\in\{0,\ldots,M-1\}\right\}.$$
So long as a rank-1 lattice satisfies certain properties with respect to a frequency space of interest $\mathcal{I}\subset\mathbb{Z}^{d}$, these sampling points are sufficient to compute the Fourier coefficients of a function on $\mathcal{I}$ with a length-$M$ univariate FFT.
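A minimal sketch of this reconstruction follows; the helper names are ours, and we assume the frequencies of interest have pairwise distinct residues $\mathbold{k}\cdot\mathbold{z}\bmod M$ (the "reconstructing lattice" property), so each Fourier coefficient lands alone in its own FFT bin:

```python
import numpy as np

def rank1_lattice(z, M):
    """Sampling nodes (j/M) * z mod 1 for j = 0, ..., M-1 (hypothetical helper)."""
    j = np.arange(M)
    return (np.outer(j, np.asarray(z)) / M) % 1.0  # shape (M, d)

def lattice_fourier(f, freqs, z, M):
    """Recover Fourier coefficients of f on `freqs` via one length-M univariate FFT.

    Assumes the residues k.z mod M are pairwise distinct over `freqs`, and that
    f accepts an (M, d) array of points in [0, 1)^d.
    """
    samples = f(rank1_lattice(z, M))
    dft = np.fft.fft(samples) / M  # bin m collects all k with k.z = m (mod M)
    return {tuple(k): dft[np.dot(k, z) % M] for k in freqs}
```

When $f$ is a trigonometric polynomial supported on such a set, the samples along the lattice reduce to a univariate exponential sum, and the coefficients are recovered exactly up to floating-point error.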
Though many references take $\mathcal{I}$ to be the hyperbolic cross to leverage the well-studied regularity properties and cardinality bounds similarly enjoyed in the sparse-grid literature, rank-1 lattice results are available for arbitrary frequency sets.
The computationally efficient extension of these techniques via sparse Fourier transforms in [23] as well as the randomization trick presented in Section 8 take this frequency set flexibility to its limit, allowing $\mathcal{I}$ to be the a priori unknown set of the most important Fourier coefficients of the function to be approximated.
This again suggests the applicability of these methods over sparse grid (or other non-sparsity exploiting) Fourier transforms in the context of multiscale problems involving even a small number of Fourier coefficients in extremely high dimensions.
2.5. Additional links to compressive sensing
As discussed above, the SFT literature overlaps considerably with the language and techniques of compressive sensing.
As detailed in Section 7 below, the high-dimensional SFT we use in this paper provides error bounds with best $s$-term approximation, compressive-sensing-type error guarantees [11].
As a result, the Fourier coefficients of the PDE data are approximated with errors depending on the compressibility of their true Fourier series, and then the compressibility of the PDE’s solution in the Fourier basis is inferred from the Fourier compressibility of the data in a direct and constructive fashion.
Another very successful line of work, however, aims to more directly apply standard compressive sensing reconstruction methods to the general spectral method framework for solving PDEs.
Referred to as CORSING [4, 5, 8, 3, 7], these techniques use compressed sensing concepts to recover a sparse representation of the solution to the system of equations derived from the (Petrov-)Galerkin formulation of a PDE.
These methods have been further extended to the case of pseudospectral methods in [6], in which a simpler-to-evaluate matrix equation is subsampled and used as measurements for a compressive sensing algorithm (as an aside, [6] and discussions with the author served as a primary inspiration for this paper).
This compressive spectral collocation method works by finding the largest Fourier-sine coefficients of the solution with frequencies in the integer hypercube with bandwidth $K$ by applying Orthogonal Matching Pursuit (OMP) on a set of samples of the PDE data.
By using OMP, the method is able to succeed with measurements on the order of $\mathcal{O}(d\exp(d)s\log^{3}(s)\log(K))$ where $s$ is the imposed sparsity level of the solution’s Fourier series.
Thus, while the $\mathcal{O}(K^{d})$ dependence from a traditional Fourier (pseudo)spectral method is avoided and the method adapts well to large bandwidths, the curse of dimensionality is still apparent.
In the preparation of this paper, the authors became aware of an improvement on [6] that addresses the curse of dimensionality and is therefore well-suited for similar types of problems discussed in this paper.
In [37], the approach of approximating Fourier-sine coefficients on a full hypercube is replaced with approximating Fourier coefficients on a hyperbolic cross.
This has the effect of converting the linear dependence on $d$ in the sampling complexity to a $\log(d)$ due to cardinality estimates of the hyperbolic cross.
However, the $\exp(d)$ term is refined using a different technique.
The key theoretical ingredient for being able to apply compressive sensing to these problems is bounding the Riesz constants of the basis functions that result after applying the differential operator [7].
A careful estimation of these constants on the Fourier basis on the hyperbolic cross is able to entirely remove the exponential in $d$ dependence, leading to a sampling complexity on the order of $\mathcal{O}(C_{a}s\log(d)\log^{3}(s)\log(K))$, where $C_{a}$ involves terms depending on ellipticity and compressibility properties of $a$.
Notably, this estimation procedure has connections to our stamping set techniques described in Section 6.
On the other hand, though focusing on the hyperbolic cross in compressive spectral collocation breaks the curse of dimensionality in the sampling complexity, the method still suffers from the inability to generalize to multiscale problems or generic frequency sets of interest like those described in Section 2.3.
Additionally, as mentioned earlier, the compressive-sensing algorithm used for recovery (in this case OMP) suffers from a computational complexity on the order of the cardinality of the truncation set of interest.
For the hyperbolic cross, this is still exponential in $\log(d)$.
Finally, the error estimates are presented in terms of the compressibility of the Fourier series of the solution $u$, which may not be known a priori from the PDE data.
We expect that there may be some way to link our stamping theory and convergence estimates with the compressive sensing theory to refine and generalize both approaches.
3. Notation
Define the one-dimensional torus to be $\mathbb{T}:=\mathbb{R}/\mathbb{Z}$.
Unless otherwise stated, all functions are complex-valued and defined on the torus $\mathbb{T}^{d}$.
For example, we take the inner product for $u,v\in L^{2}:=L^{2}(\mathbb{T}^{d};\mathbb{C})$ to be
$$\langle u,v\rangle_{L^{2}}:=\int_{\mathbb{T}^{d}}u(\mathbold{x})\overline{v}(\mathbold{x})\mathop{}\!d\mathbold{x}.$$
Additionally, unless otherwise stated, all multiindexed infinite sequences are complex-valued and indexed on $\mathbb{Z}^{d}$.
For example, we take the inner product for $\hat{u},\hat{v}\in\ell^{2}:=\ell^{2}(\mathbb{Z}^{d};\mathbb{C})$ to be
$$\langle\hat{u},\hat{v}\rangle_{\ell^{2}}:=\sum_{\mathbold{k}\in\mathbb{Z}^{d}}\hat{u}_{\mathbold{k}}\overline{\hat{v}}_{\mathbold{k}}.$$
All finite length vectors/tensors will be denoted in boldface and when required, will be implicitly extended to larger index sets by taking on the value zero wherever they are not originally defined.
We also denote the complex-valued finite-length vectors or infinite-length sequences supported on a set $\mathcal{D}$ as $\mathbb{C}^{\mathcal{D}}$.
Since sparse approximations will be an important tool in our final algorithm, we also define the best $s$-term approximation of a sequence $\hat{u}$ as $\hat{u}$ restricted to its $s$ largest magnitude entries and denote this as $\hat{u}_{s}^{\mathrm{opt}}$.
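This notion is simple enough to state in code; the following sketch (the function name is our own) computes $\hat{u}_{s}^{\mathrm{opt}}$ for a finite-length vector:

```python
import numpy as np

def best_s_term(u_hat, s):
    """Zero out all but the s largest-magnitude entries of a vector (a small sketch)."""
    u_hat = np.asarray(u_hat, dtype=complex)
    out = np.zeros_like(u_hat)
    keep = np.argsort(np.abs(u_hat))[-s:]  # indices of the s largest magnitudes
    out[keep] = u_hat[keep]
    return out
```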
We now define periodic Sobolev spaces (see also [3, Section 2.1] and [28, Appendix A.2.2]).
Definition 1.
For $u\in L^{2}$ and $\mathbold{\alpha}\in\mathbb{N}_{0}^{d}$ a multiindex, if there exists a $v\in L^{2}$ such that
$$\langle v,\phi\rangle_{L^{2}}=(-1)^{|\mathbold{\alpha}|}\langle u,\partial^{\mathbold{\alpha}}\phi\rangle_{L^{2}}\qquad\text{for all $\phi\in C^{\infty}\subset L^{2}$},$$
we call $v$ the weak $\mathbold{\alpha}$ derivative of $u$, and write $\partial^{\mathbold{\alpha}}u:=v$.
We define the inner product
$$\langle u,v\rangle_{H^{1}}:=\langle u,v\rangle_{L^{2}}+\int_{\mathbb{T}^{d}}\nabla u(\mathbold{x})\cdot\overline{\nabla v}(\mathbold{x})\mathop{}\!d\mathbold{x},$$
(where all derivatives are taken in the weak sense) and have the associated norm $\norm{u}_{H^{1}}:=\sqrt{\langle u,u\rangle_{H^{1}}}$.
The periodic Sobolev space $H^{1}$ is defined as $H^{1}:=\{u\in L^{2}\mid\|u\|_{H^{1}}<\infty\}$.
In order to set our notation for Fourier coefficients and series, we first note the density of trigonometric monomials in $L^{2}$ and $H^{1}$.
Theorem 1.
The space of all infinitely differentiable periodic functions $C^{\infty}$ is dense in $L^{2}$ and $H^{1}$.
In particular, the set of trigonometric monomials $\{e_{\mathbold{k}}(\mathbold{x}):=\mathrm{e}^{2\pi\mathrm{i}\mathbold{k}\cdot\mathbold{x}}\in C^{\infty}\mid\mathbold{k}\in\mathbb{Z}^{d}\}$ is a basis for $C^{\infty}$, an orthonormal basis for $L^{2}$, and an orthogonal basis for $H^{1}$.
Definition 2.
For any $u\in L^{1}$, and any $\mathbold{k}\in\mathbb{Z}^{d}$, we define the $\mathbold{k}$th Fourier coefficient
$$\hat{u}_{\mathbold{k}}=\langle u,e_{\mathbold{k}}\rangle_{L^{2}}=\int_{\mathbb{T}^{d}}u(\mathbold{x})\mathrm{e}^{-2\pi\mathrm{i}\mathbold{k}\cdot\mathbold{x}}\mathop{}\!d\mathbold{x}.$$
If $u\in L^{2}$, the orthonormality of the trigonometric monomials in Theorem 1 allows us to write the Fourier series for $u$,
$$u(\mathbold{x})=\sum_{\mathbold{k}\in\mathbb{Z}^{d}}\hat{u}_{\mathbold{k}}e_{\mathbold{k}}(\mathbold{x}).$$
We also note the well-known Plancherel’s identity for use later.
Proposition 1 (Plancherel’s identity).
If $u\in L^{2}$, then $\hat{u}\in\ell^{2}$ with $\|u\|_{L^{2}}=\|\hat{u}\|_{\ell^{2}}$.
If $v\in L^{2}$, then $\langle u,v\rangle_{L^{2}}=\langle\hat{u},\hat{v}\rangle_{\ell^{2}}$.
Definition 3.
We additionally define the mean-zero periodic Sobolev space $H$ as $H^{1}/\mathbb{R}$, where the representative $u$ is chosen so that $\hat{u}_{\mathbold{0}}=0$, endowed with the inner product (note that, by Proposition 1, $\langle u,v\rangle_{H}\simeq\langle u,v\rangle_{H^{1}}$ for $u,v\in H$)
$$\langle u,v\rangle_{H}:=\int_{\mathbb{T}^{d}}\nabla u(\mathbold{x})\cdot\overline{\nabla v}(\mathbold{x})\mathop{}\!d\mathbold{x}.$$
In the sequel, we will often consider restrictions in frequency space denoted by, e.g., $\hat{u}\rvert_{\mathcal{D}}$, where $\mathcal{D}\subset\mathbb{Z}^{d}$.
We will simultaneously consider this to be an element of $\mathbb{C}^{\mathcal{D}}$ and a complex valued sequence on $\mathbb{Z}^{d}$ with zero entries on $\mathbb{Z}^{d}\setminus\mathcal{D}$.
When $\hat{u}$ represents the Fourier coefficients of a function $u$, we define the associated restriction
$$u\rvert_{\mathcal{D}}:=\sum_{\mathbold{k}\in\mathbb{Z}^{d}}\left(\hat{u}\rvert_{\mathcal{D}}\right)_{\mathbold{k}}e_{\mathbold{k}}=\sum_{\mathbold{k}\in\mathcal{D}}\hat{u}_{\mathbold{k}}e_{\mathbold{k}},$$
where the fact that $\mathcal{D}\subset\mathbb{Z}^{d}$ is treated as a set of frequencies indicates that we are restricting $u$ in frequency, not space.
Given a hatted sequence $\hat{v}$ or vector $\hat{\mathbold{v}}$, the associated function with Fourier series $\sum_{\mathbold{k}\in\mathbb{Z}^{d}}\hat{v}_{\mathbold{k}}e_{\mathbold{k}}$ will always be implicitly labeled using the non-hatted, roman font letter (in this example, $v$).
4. Elliptic PDE setup
We begin with a model elliptic partial differential equation.
Definition 4.
For some $a:\mathbb{T}^{d}\rightarrow\mathbb{R}$ sufficiently smooth, define the linear, elliptic partial differential operator in divergence form $\mathcal{L}[a]:C^{2}\rightarrow C^{0}$ by
$$\mathcal{L}[a]u=-\nabla\cdot\left(a\nabla u\right).$$
If for some $f:\mathbb{T}^{d}\rightarrow\mathbb{R}$ sufficiently smooth, $u\in C^{2}$ satisfies
(SF)
$$\mathcal{L}[a]u=f,$$
we say that $u$ solves the given elliptic PDE with periodic boundary conditions in the strong form.
Now, after multiplying by the complex conjugate of a test function $v\in H^{1}(\mathbb{T}^{d})$ and integrating by parts, we define the bilinear form associated to $\mathcal{L}[a]$ as $\mathfrak{L}[a]:H^{1}\times H^{1}\rightarrow\mathbb{C}$ with
$$\mathfrak{L}[a](u,v):=\int_{\mathbb{T}^{d}}a(\mathbold{x})\nabla u(\mathbold{x})\cdot\overline{\nabla v}(\mathbold{x})\mathop{}\!d\mathbold{x},$$
and we say that $u\in H^{1}$ solves the given elliptic PDE with periodic boundary conditions in the weak form if
(WF)
$$\mathfrak{L}[a](u,v)=\langle f,v\rangle_{L^{2}}\quad\text{for all }v\in H^{1}.$$
For our purposes, we will take $a\in L^{\infty}(\mathbb{T}^{d};\mathbb{R})$, and $f\in L^{2}(\mathbb{T}^{d};\mathbb{R})$.
By the conditions specified in the Lax-Milgram theorem (see, e.g., [15]), we are guaranteed that a unique mean-zero solution to (WF) exists so long as the right-hand side and the test space are also mean-zero.
See [3, Proposition 2.1] for a more specific formulation in our setting and its proof.
Proposition 2.
For $a\in L^{\infty}(\mathbb{T}^{d};\mathbb{R})$, $\mathfrak{L}[a]$ is continuous with continuity constant $\beta\leq\norm{a}_{L^{\infty}}$, that is
(2)
$$\absolutevalue{\mathfrak{L}[a](u,v)}\leq\beta\norm{u}_{H}\norm{v}_{H}\qquad\text{for all $u,v\in H$}.$$
Additionally, if $a(\mathbold{x})\geq a_{\mathrm{min}}>0$ a.e. on $\mathbb{T}^{d}$, then $\mathfrak{L}[a]$ is also coercive with coercivity constant $\alpha\geq a_{\mathrm{min}}$, that is
(3)
$$\absolutevalue{\mathfrak{L}[a](u,u)}\geq\alpha\norm{u}_{H}^{2}\qquad\text{for all $u\in H$}.$$
Under conditions (2) and (3), if $f\in L^{2}(\mathbb{T}^{d};\mathbb{R})$ is mean-zero, that is, $\hat{f}_{\mathbold{0}}=0$, then (WF) has a unique mean-zero solution $u\in H$ satisfying
(4)
$$\norm{u}_{H}\leq\frac{\norm{f}_{L^{2}}}{\alpha}.$$
5. Galerkin spectral methods
By Theorem 1, it is equivalent to replace the weak PDE (WF) by
$$\mathfrak{L}[a](u,e_{\mathbold{k}})=\langle f,e_{\mathbold{k}}\rangle_{L^{2}}=:\hat{f}_{\mathbold{k}}\quad\text{for all }\mathbold{k}\in\mathbb{Z}^{d}.$$
Rewriting the bilinear form on the left-hand side and using the Fourier series representations of $a$ and $u$, we obtain
$$\displaystyle\mathfrak{L}[a](u,e_{\mathbold{k}})$$
$$\displaystyle=\sum_{\mathbold{l}_{1},\mathbold{l}_{2}\in\mathbb{Z}^{d}}\hat{a}_{\mathbold{l}_{1}}\hat{u}_{\mathbold{l}_{2}}\int_{\mathbb{T}^{d}}e_{\mathbold{l_{1}}}(\mathbold{x})\nabla e_{\mathbold{l_{2}}}(\mathbold{x})\cdot\overline{\nabla e_{\mathbold{k}}}(\mathbold{x})\mathop{}\!d\mathbold{x}$$
$$\displaystyle=\sum_{\mathbold{l}_{1},\mathbold{l}_{2}\in\mathbb{Z}^{d}}(2\pi)^{2}(\mathbold{l_{2}}\cdot\mathbold{k})\hat{a}_{\mathbold{l}_{1}}\hat{u}_{\mathbold{l}_{2}}\delta_{\mathbold{l}_{1},\mathbold{k}-\mathbold{l}_{2}}$$
$$\displaystyle=\sum_{\mathbold{l}\in\mathbb{Z}^{d}}(2\pi)^{2}(\mathbold{l}\cdot\mathbold{k})\hat{a}_{\mathbold{k}-\mathbold{l}}\hat{u}_{\mathbold{l}}$$
$$\displaystyle=:(L[\hat{a}]\hat{u})_{\mathbold{k}},$$
where $L[\hat{a}]$ is an operator on $\ell^{2}$.
This leads to the Galerkin form of our PDE,
(GF)
$$L[\hat{a}]\hat{u}=\hat{f}.$$
The computational advantages of (GF) are clear.
By numerically approximating $\hat{a}$ and $\hat{f}$ (thereby also truncating $L[\hat{a}]$), we arrive at a discretized, finite system of equations that can be solved for the Fourier coefficients of our solution.
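As a concrete illustration, the following sketch assembles and solves such a truncated system from sparse Fourier data stored in dictionaries keyed by frequency tuples (the function name and storage format are our own; the truncation set must exclude $\mathbold{0}$ and yield a nonsingular system):

```python
import numpy as np

def galerkin_solve(a_hat, f_hat, freqs):
    """Assemble (L[a_hat] u_hat)_k = sum_l 4 pi^2 (l.k) a_hat[k-l] u_hat[l]
    over the truncation set `freqs` and solve for u_hat (illustrative sketch)."""
    freqs = [tuple(k) for k in freqs]  # frequency tuples indexing the truncation
    idx = {k: i for i, k in enumerate(freqs)}
    n = len(freqs)
    L = np.zeros((n, n), dtype=complex)
    for k in freqs:
        for l in freqs:
            diff = tuple(int(ki - li) for ki, li in zip(k, l))
            if diff in a_hat:  # only frequencies in supp(a_hat) contribute
                L[idx[k], idx[l]] = (2 * np.pi) ** 2 * np.dot(l, k) * a_hat[diff]
    rhs = np.array([f_hat.get(k, 0.0) for k in freqs], dtype=complex)
    u = np.linalg.solve(L, rhs)
    return {k: u[idx[k]] for k in freqs}
```

For constant $a\equiv 1$ in one dimension, the system is diagonal and returns the familiar $\hat{u}_{k}=\hat{f}_{k}/(4\pi^{2}k^{2})$, which serves as a quick sanity check of the assembly.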
We will use a fast sparse Fourier transform (SFT) for functions of many dimensions to approximate our PDE data which then leads to a sparse system of equations that we can quickly solve to approximate $\hat{u}$.
This SFT will use the values of $a$ and $f$ at equispaced nodes on a randomized rank-1 lattice in $\mathbb{T}^{d}$, and therefore, our technique is effectively a pseudospectral method where the discretization of the solution space $\{\hat{u}\mid u\in H\}$ is adapted to the PDE data.
Before we move to the detailed discussion of this SFT, we provide a more detailed analysis of the Galerkin operator in Section 6 to help us analyze the resulting spectral method.
But first, we note that $L[\hat{a}]$ also captures the behavior of $\mathfrak{L}[a]$ as a bilinear form.
Proposition 3.
For $\hat{u},\hat{v}\in\ell^{2}$ with $u,v\in H$,
$$\mathfrak{L}[a](u,v)=\langle L[\hat{a}]\hat{u},\hat{v}\rangle_{\ell^{2}}.$$
Proof.
By the Fourier series representation of $v$,
$$\mathfrak{L}[a](u,v)=\sum_{\mathbold{k}\in\mathbb{Z}^{d}}\mathfrak{L}[a](u,e_{\mathbold}{k})\overline{\hat{v}}_{\mathbold}{k}=\sum_{\mathbold{k}\in\mathbb{Z}^{d}}\left(L[\hat{a}]\hat{u}\right)_{\mathbold{k}}\overline{\hat{v}}_{\mathbold{k}}=\langle L[\hat{a}]\hat{u},\hat{v}\rangle_{\ell^{2}}.$$
∎
6. Stamping sets and truncation analysis
Notably, (GF) gives us insight into the frequency support of $\hat{u}$.
The structure outlined in the following proposition is crucial in constructing a fast spectral method that exploits Fourier-sparsity.
Proposition 4.
For any set $F\subset\mathbb{Z}^{d}$ and $N\in\mathbb{N}_{0}$, recursively define the sets
(5)
$$\displaystyle\mathcal{S}^{N}[\hat{a}](F)$$
$$\displaystyle:=\begin{cases}F&\text{if }N=0\\
\mathcal{S}^{N-1}[\hat{a}](F)+\operatorname{supp}(\hat{a})&\text{if }N>0\end{cases},$$
$$\displaystyle\mathcal{S}^{\infty}[\hat{a}](F)$$
$$\displaystyle:=\bigcup_{N=0}^{\infty}\mathcal{S}^{N}[\hat{a}](F),$$
where the addition denotes the Minkowski sum of sets.
Under the conditions of Proposition 2, $\operatorname{supp}(\hat{u})\subset\mathcal{S}^{\infty}[\hat{a}](\operatorname{supp}(\hat{f}))$.
Proof.
The fact that $a$ is strictly positive implies that $\hat{a}_{\mathbold{0}}\neq 0$, and the fact that $a$ is real implies $\operatorname{supp}(\hat{a})=-\operatorname{supp}(\hat{a})$.
Now, for any $\mathbold{k}\in\mathbb{Z}^{d}\setminus\{\mathbold{0}\}$, we may rearrange the equality $(L[\hat{a}]\hat{u})_{\mathbold{k}}=\hat{f}_{\mathbold{k}}$ to obtain
$$\displaystyle\hat{u}_{\mathbold{k}}$$
$$\displaystyle=\frac{\hat{f}_{\mathbold{k}}-\sum_{\mathbold{l}\in(\{\mathbold{k}\}+\operatorname{supp}(\hat{a}))\setminus\{\mathbold{k}\}}(2\pi)^{2}(\mathbold{l}\cdot\mathbold{k})\hat{a}_{\mathbold{k}-\mathbold{l}}\hat{u}_{\mathbold{l}}}{(2\pi)^{2}(\mathbold{k}\cdot\mathbold{k})\hat{a}_{\mathbold{0}}}$$
$$\displaystyle=\frac{\hat{f}_{\mathbold{k}}-\sum_{\mathbold{l}\in\operatorname{supp}(\hat{a})\setminus\{\mathbold{0}\}}(2\pi)^{2}(\mathbold{k}\cdot\mathbold{k}-\mathbold{l}\cdot\mathbold{k})\hat{a}_{\mathbold{l}}\hat{u}_{\mathbold{k}-\mathbold{l}}}{(2\pi)^{2}(\mathbold{k}\cdot\mathbold{k})\hat{a}_{\mathbold{0}}}.$$
Thus, $\hat{u}_{\mathbold{k}}$ explicitly depends only on the values of $\hat{u}$ on $\mathcal{S}^{1}[\hat{a}](\{\mathbold{k}\})\setminus\{\mathbold{k}\}$, which themselves then depend only on values of $\hat{u}$ on $\mathcal{S}^{2}[\hat{a}](\{\mathbold{k}\})$, and so on.
This decouples the system of equations $L[\hat{a}]\hat{u}=\hat{f}$ into a disjoint collection of systems of equations, one for each class of frequencies $\mathcal{S}^{\infty}[\hat{a}](\{\mathbold{k}\})$.
Since Proposition 2 implies that $\hat{v}=0$ is the unique solution of $L[\hat{a}]\hat{v}=0$, the unique solution of the system of equations for $\hat{u}$ on $\mathcal{S}^{\infty}[\hat{a}](\{\mathbold{k}\})$ for any $\mathbold{k}\notin\operatorname{supp}(\hat{f})$ is $\hat{u}\rvert_{\mathcal{S}^{\infty}[\hat{a}](\{\mathbold{k}\})}=0$.
Therefore, $\operatorname{supp}\hat{u}\subset\mathcal{S}^{\infty}[\hat{a}](\operatorname{supp}(\hat{f}))$ as desired.
∎
In what follows, when the set $F$ and Fourier coefficients $\hat{a}$ are clear from context, we suppress them in the notation given by (5) so that $\mathcal{S}^{N}:=\mathcal{S}^{N}[\hat{a}](F)$.
Intuitively, we can imagine constructing $\mathcal{S}^{N}$ by first creating a “rubber stamp” in the shape of $\operatorname{supp}(\hat{a})$.
This rubber stamp is then stamped onto every frequency in $F=:\mathcal{S}^{0}$ to construct $\mathcal{S}^{1}$.
Then, this process is repeated, stamping each element of $\mathcal{S}^{1}$ to produce $\mathcal{S}^{2}$, and so on.
For this reason, we will colloquially refer to these as “stamping sets.”
Figure 1 gives an example of this stamping procedure for $d=2$.
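The stamping procedure translates directly into code; the sketch below (with a function name of our own choosing) builds $\mathcal{S}^{N}[\hat{a}](F)$ by repeated Minkowski sums with $\operatorname{supp}(\hat{a})$, exactly following (5):

```python
def stamping_set(supp_a, F, N):
    """S^N[a](F): the N-fold Minkowski sum of F with supp(a) (a small sketch)."""
    S = {tuple(k) for k in F}  # S^0 = F
    for _ in range(N):
        # S^{n} = S^{n-1} + supp(a), elementwise vector addition of frequency tuples
        S = {tuple(ki + ai for ki, ai in zip(k, ka)) for k in S for ka in supp_a}
    return S
```

Since $\mathbold{0}\in\operatorname{supp}(\hat{a})$ in our setting, each level contains the previous one, so this matches the "stamp every frequency" picture above.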
A key component of our further analysis will be the decay of $\hat{u}$ across successive stamping levels.
The stamping level, rather than the bandwidth of a traditional spectral method, will become the driving parameter of our spectral method.
Before moving on to this analysis, however, we provide an upper bound for the cardinality of the stamping sets.
This will ultimately be used to upper bound the computational complexity of our technique.
The proof of this bound is given in Appendix A.
Lemma 2.
Suppose that $\mathbold{0}\in\operatorname{supp}(\hat{a})$, $\operatorname{supp}(\hat{a})=-\operatorname{supp}(\hat{a})$, and $\absolutevalue{\operatorname{supp}(\hat{f})}\leq\absolutevalue{\operatorname{supp}(\hat{a})}=s$.
Then
$$\absolutevalue{\mathcal{S}^{N}[\hat{a}](\operatorname{supp}(\hat{f}))}\leq 7\max(s,2N+1)^{\min(s,2N+1)}.$$
Proposition 4 gives us a natural way to consider truncations of the solution $u$ in frequency space.
We will use these truncations to discretize the Galerkin formulation (GF) in Section 9 below.
In order to analyze the error in the resulting spectral method algorithm, we will need quantitative bounds on how the solution decays outside of the frequency sets $\mathcal{S}^{N}:=\mathcal{S}^{N}[\hat{a}](\operatorname{supp}(\hat{f}))$.
For $\mathcal{S}^{N}$ to be finite, we assume in this section that $\operatorname{supp}\hat{a}$ and $\operatorname{supp}\hat{f}$ are finite.
This assumption will be lifted later via Lemma 5.
We begin with a technical result regarding the interplay between $L[\hat{a}]$ and the supports of vectors that it acts on.
Proposition 5.
For any $\hat{v}$ with $\operatorname{supp}(\hat{v})\subset\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}$, $\operatorname{supp}(L[\hat{a}]\hat{v})\subset\mathcal{S}^{n+1}\setminus\mathcal{S}^{n-2}$.
Proof.
For any $\mathbold{k}\in\mathbb{Z}^{d}$, consider
$$\displaystyle\left(L[\hat{a}]\hat{v}\right)_{\mathbold{k}}$$
$$\displaystyle=\sum_{\mathbold{l}\in\mathbb{Z}^{d}}(2\pi)^{2}(\mathbold{l}\cdot\mathbold{k})\hat{a}_{\mathbold{k}-\mathbold{l}}\hat{v}_{\mathbold{l}}$$
$$\displaystyle=\sum_{\mathbold{l}\in(\{\mathbold{k}\}-\operatorname{supp}(\hat{a}))\cap\operatorname{supp}(\hat{v})}(2\pi)^{2}(\mathbold{l}\cdot\mathbold{k})\hat{a}_{\mathbold{k}-\mathbold{l}}\hat{v}_{\mathbold{l}}$$
$$\displaystyle=\sum_{\mathbold{l}\in(\{\mathbold{k}\}-\operatorname{supp}(\hat{a}))\cap(\mathcal{S}^{n}\setminus\mathcal{S}^{n-1})}(2\pi)^{2}(\mathbold{l}\cdot\mathbold{k})\hat{a}_{\mathbold{k}-\mathbold{l}}\hat{v}_{\mathbold{l}}.$$
The index set of this sum is nonempty only if $\mathbold{k}$ is such that there exists $\mathbold{l}\in\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}$ and $\mathbold{k}_{a}^{*}\in\operatorname{supp}(\hat{a})$ with $\mathbold{k}=\mathbold{l}+\mathbold{k}_{a}^{*}$.
By definition of $\mathbold{l}\in\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}$, $n$ is the minimal such number that
$$\mathbold{l}=\mathbold{k}_{f}+\sum_{m=1}^{n}\mathbold{k}_{a}^{m},\text{ where }\mathbold{k}_{f}\in\operatorname{supp}(\hat{f}),\;\mathbold{k}_{a}^{m}\in\operatorname{supp}(\hat{a})\text{ for all }m=1,\ldots,n$$
holds.
In particular, this implies that $\mathbold{k}_{a}^{m}\neq\mathbold{0}$ for all $m=1,\ldots,n$.
There are now two cases.
First, if $\mathbold{k}_{a}^{*}=-\mathbold{k}_{a}^{m}$ for some $m$, then $\mathbold{k}=\mathbold{l}+\mathbold{k}_{a}^{*}\in\mathcal{S}^{n-1}\setminus\mathcal{S}^{n-2}$, and the proposition is satisfied.
On the other hand, we consider the case when $\mathbold{k}_{a}^{*}$ does not negate any $\mathbold{k}_{a}^{m}$ involved in the sum equalling $\mathbold{l}$.
If $\mathbold{k}_{a}^{*}=\mathbold{0}$, then clearly $\mathbold{k}=\mathbold{l}\in\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}$.
In any other case, we represent
$$\mathbold{k}=\mathbold{k}_{f}+\sum_{m=1}^{n}\mathbold{k}_{a}^{m}+\mathbold{k}_{a}^{*}=:\mathbold{k}_{f}+\sum_{m=1}^{n+1}\mathbold{k}_{a}^{m},$$
where $n+1$ is the smallest number for which this holds.
Thus, $\mathbold{k}\in\mathcal{S}^{n+1}\setminus\mathcal{S}^{n}$.
Altogether then, the only possible $\mathbold{k}$ values such that the sum is nonzero are those in $\mathcal{S}^{n+1}\setminus\mathcal{S}^{n-2}$, completing the proof.
∎
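To make the support bookkeeping in this proof concrete, here is a small sparse application of $L[\hat{a}]$ (the helper name is our own) together with a one-dimensional sanity check of the proposition. With $\operatorname{supp}(\hat{a})=\{-1,0,1\}$ and $\operatorname{supp}(\hat{f})=\{3\}$, the stamping levels are $\mathcal{S}^{n}=\{3-n,\ldots,3+n\}$, so a vector supported on $\mathcal{S}^{2}\setminus\mathcal{S}^{1}=\{1,5\}$ should map into $\mathcal{S}^{3}\setminus\mathcal{S}^{0}$:

```python
import numpy as np

def apply_L(a_hat, v_hat, tol=1e-12):
    """Sparse evaluation of (L[a_hat] v_hat)_k = sum_l 4 pi^2 (l.k) a_hat[k-l] v_hat[l],
    keeping only entries with magnitude above tol (a small sketch)."""
    out = {}
    for l, vl in v_hat.items():
        for ka, al in a_hat.items():  # k - l = ka ranges over supp(a_hat)
            k = tuple(li + ai for li, ai in zip(l, ka))
            out[k] = out.get(k, 0.0) + (2 * np.pi) ** 2 * np.dot(l, k) * al * vl
    return {k: c for k, c in out.items() if abs(c) > tol}
```

Note that the output support may be strictly smaller than $\mathcal{S}^{n+1}\setminus\mathcal{S}^{n-2}$, since the factor $\mathbold{l}\cdot\mathbold{k}$ can vanish (e.g., at $\mathbold{k}=\mathbold{0}$); the proposition only asserts containment.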
Noting that $\operatorname{supp}(L[\hat{a}]\hat{u})=\operatorname{supp}(\hat{f})$, we observe the following interesting relationship between the values of $\hat{u}$ on neighboring stamping levels.
Below, to simplify notation, for all $m,n\in\mathbb{N}_{0}$, we set
$$b_{m,n}:=\langle L[\hat{a}]\hat{u}\rvert_{\mathcal{S}^{m}\setminus\mathcal{S}^{m-1}},\hat{u}\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}\rangle_{\ell^{2}},$$
with the convention that $\mathcal{S}^{-1}=\emptyset$.
Corollary 1.
For all $n\in\mathbb{N}_{0}$,
$$b_{n+1,n}+b_{n,n}+b_{n-1,n}=\begin{cases}\langle\hat{f},\hat{u}\rvert_{\mathcal{S}^{0}}\rangle_{\ell^{2}}&\text{ if }n=0\\
0&\text{ otherwise}.\end{cases}$$
Proof.
By Proposition 5, $\hat{u}\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}$ is $\ell^{2}$-orthogonal to $L[\hat{a}]\hat{u}\rvert_{\mathcal{S}^{m}\setminus\mathcal{S}^{m-1}}$ for all $m\notin\{n-1,n,n+1\}$.
In our simplified notation, $b_{m,n}=0$ for all $m\notin\{n-1,n,n+1\}$.
Thus
$$\langle\hat{f},\hat{u}\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}\rangle_{\ell^{2}}=\langle L[\hat{a}]\hat{u},\hat{u}\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}\rangle_{\ell^{2}}=\sum_{m=0}^{\infty}b_{m,n}=b_{n+1,n}+b_{n,n}+b_{n-1,n}.$$
The proof is finished by noting that
$$\langle\hat{f},\hat{u}\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}\rangle_{\ell^{2}}=\begin{cases}\langle\hat{f},\hat{u}\rvert_{\mathcal{S}^{0}}\rangle_{\ell^{2}}&\text{ if }n=0\\
0&\text{ otherwise}.\end{cases}$$
∎
We are now ready to estimate $\hat{u}\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}$ in terms of its neighbors $\hat{u}\rvert_{\mathcal{S}^{n+1}\setminus\mathcal{S}^{n}}$ and $\hat{u}\rvert_{\mathcal{S}^{n-1}\setminus\mathcal{S}^{n-2}}$.
The standard approach would be to use a combination of coercivity and continuity (see, e.g., the proof of Lemma 6 or [10, Section 6.4] for other examples): for $n>0$,
$$\alpha\norm{u\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}}_{H}^{2}\leq|b_{n,n}|\leq|b_{n+1,n}|+|b_{n-1,n}|\leq\beta\norm{u\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}}_{H}\left(\norm{u\rvert_{\mathcal{S}^{n+1}\setminus\mathcal{S}^{n}}}_{H}+\norm{u\rvert_{\mathcal{S}^{n-1}\setminus\mathcal{S}^{n-2}}}_{H}\right),$$
and we obtain
$$\norm{u\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}}_{H}\leq\frac{\beta}{\alpha}\left(\norm{u\rvert_{\mathcal{S}^{n+1}\setminus\mathcal{S}^{n}}}_{H}+\norm{u\rvert_{\mathcal{S}^{n-1}\setminus\mathcal{S}^{n-2}}}_{H}\right).$$
However, we will want to iterate this bound, and since $\beta\geq\alpha$, iterating it will not allow us to show any decay as $n\rightarrow\infty$.
Thus, we require a slightly subtler estimate than simply using continuity.
Proposition 6.
For $n>0$, we have
$$|b_{n\pm 1,n}|\leq\norm{a-\hat{a}_{\mathbold{0}}}_{L^{\infty}}\norm{u\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}}_{H}\norm{u\rvert_{\mathcal{S}^{n\pm 1}\setminus\mathcal{S}^{n\pm 1-1}}}_{H}.$$
Proof.
Restricting all sums to the support of the vectors they index, we have
$$b_{n\pm 1,n}=\sum_{\mathbold{k}\in\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}\sum_{\mathbold{l}\in(\mathbold{k}-\operatorname{supp}(\hat{a}))\cap(\mathcal{S}^{n\pm 1}\setminus\mathcal{S}^{n\pm 1-1})}(2\pi)^{2}(\mathbold{l}\cdot\mathbold{k})\hat{a}_{\mathbold{k}-\mathbold{l}}\hat{u}_{\mathbold{l}}\overline{\hat{u}}_{\mathbold{k}}.$$
Clearly, choosing $\mathbold{l}=\mathbold{k}\in\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}$ would not allow for $\mathbold{l}\in\mathcal{S}^{n\pm 1}\setminus\mathcal{S}^{n\pm 1-1}$.
Thus, no term multiplying $\hat{a}_{\mathbold{k}-\mathbold{k}}=\hat{a}_{\mathbold{0}}$ will appear in this sum.
We then have the equivalence
$$b_{n\pm 1,n}=\langle L[\hat{a}-\hat{a}_{\mathbold{0}}]\hat{u}\rvert_{\mathcal{S}^{n\pm 1}\setminus\mathcal{S}^{n\pm 1-1}},\hat{u}\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}\rangle_{\ell^{2}},$$
which, by the standard argument for continuity, implies
$$|b_{n\pm 1,n}|\leq\norm{a-\hat{a}_{\mathbold{0}}}_{L^{\infty}}\norm{u\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}}_{H}\norm{u\rvert_{\mathcal{S}^{n\pm 1}\setminus\mathcal{S}^{n\pm 1-1}}}_{H},$$
as desired.
∎
The same argument preceding Proposition 6 then gives the desired “neighbor” estimate.
Corollary 2.
For all $n>1$,
$$\norm{u\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}}_{H}\leq\frac{\norm{a-\hat{a}_{\mathbold{0}}}_{L^{\infty}}}{a_{\mathrm{min}}}\left(\norm{u\rvert_{\mathcal{S}^{n+1}\setminus\mathcal{S}^{n}}}_{H}+\norm{u\rvert_{\mathcal{S}^{n-1}\setminus\mathcal{S}^{n-2}}}_{H}\right).$$
We now have the pieces to state an estimate of the truncation error.
Lemma 3.
Let $a$, $f$, and $u$ be as in Proposition 2.
Assume
(6)
$$3\norm{a-\hat{a}_{\mathbold{0}}}_{L^{\infty}}<a_{\mathrm{min}}.$$
Then
$$\norm{u-u\rvert_{\mathcal{S}^{N}}}_{H}\leq\left(\frac{\norm{a-\hat{a}_{\mathbold{0}}}_{L^{\infty}}}{a_{\mathrm{min}}-2\norm{a-\hat{a}_{\mathbold{0}}}_{L^{\infty}}}\right)^{N+1}\frac{\norm{f}_{L^{2}}}{a_{\mathrm{min}}}.$$
Proof.
We begin by breaking $\operatorname{supp}(\hat{u})\setminus\mathcal{S}^{N}$ into sets of new contributions $\bigcup_{n=N+1}^{\infty}\left(\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}\right)$ (which holds due to Proposition 4).
Thus
$$\norm{u-u\rvert_{\mathcal{S}^{N}}}_{H}\leq\sum_{n=N+1}^{\infty}\norm{u\rvert_{\mathcal{S}^{n}\setminus\mathcal{S}^{n-1}}}_{H}=:T_{N}.$$
Applying the neighbor bound, Corollary 2 (where we define $A:=\norm{a-\hat{a}_{\mathbold{0}}}_{L^{\infty}}/a_{\mathrm{min}}$), we have
$$\displaystyle T_{N}$$
$$\displaystyle\leq A\left(\sum_{n=N+1}^{\infty}\norm{u\rvert_{\mathcal{S}^{n+1}\setminus\mathcal{S}^{n}}}_{H}+\sum_{n=N+1}^{\infty}\norm{u\rvert_{\mathcal{S}^{n-1}\setminus\mathcal{S}^{n-2}}}_{H}\right)$$
$$\displaystyle=A\left(T_{N+1}+T_{N-1}\right)$$
$$\displaystyle=2AT_{N}+A\left(\norm{u\rvert_{\mathcal{S}^{N}\setminus\mathcal{S}^{N-1}}}_{H}-\norm{u\rvert_{\mathcal{S}^{N+1}\setminus\mathcal{S}^{N}}}_{H}\right).$$
After rearranging, and ignoring the negative term, we find
(7)
$$T_{N}\leq\frac{A}{1-2A}\norm{u\rvert_{\mathcal{S}^{N}\setminus\mathcal{S}^{N-1}}}_{H}.$$
Noting that we always have
(8)
$$\norm{u\rvert_{\mathcal{S}^{N}\setminus\mathcal{S}^{N-1}}}_{H}\leq T_{N-1},$$
iterating (7) and (8) in turn gives
$$\norm{u-u\rvert_{\mathcal{S}^{N}}}_{H}\leq T_{N}\leq\left(\frac{A}{1-2A}\right)^{N+1}\norm{u\rvert_{\mathcal{S}^{0}}}_{H}\leq\left(\frac{A}{1-2A}\right)^{N+1}\frac{\norm{f}_{L^{2}}}{a_{\mathrm{min}}}.$$
∎
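Concretely, the bound of Lemma 3 is a single geometric expression in the stamping level. The following Python helper (an illustrative sketch with our own names, where `A` stands for $\norm{a-\hat{a}_{\mathbold{0}}}_{L^{\infty}}/a_{\mathrm{min}}$) evaluates it and enforces the hypothesis (6), i.e., $A<1/3$:

```python
def truncation_bound(A, N, f_norm, a_min):
    """Right-hand side of Lemma 3: (A / (1 - 2A))^(N+1) * ||f||_{L^2} / a_min,
    where A = ||a - a_hat_0||_{L^inf} / a_min.  Hypothesis (6) reads 3A < 1,
    which makes the base A / (1 - 2A) strictly less than 1."""
    if 3 * A >= 1:
        raise ValueError("Lemma 3 requires 3 * ||a - a_hat_0||_inf < a_min")
    return (A / (1 - 2 * A)) ** (N + 1) * f_norm / a_min
```

For instance, with $A=1/4$ the base is $1/2$, so each additional stamping level halves the truncation bound.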
7. Previous results on SFTs
In [23], two methods for high-dimensional SFTs are presented, each with a deterministic and Monte Carlo variant.
Here, we use the faster of the two algorithms (at the cost of slightly suboptimal error guarantees).
We focus only on the Monte Carlo variant, as the improvements to this technique described in Section 8 below use an additional layer of randomization.
This method relies on applying one-dimensional SFTs to samples of a high-dimensional function along special sets called reconstructing rank-1 lattices.
Definition 5.
Given a number of sampling points $M\in\mathbb{N}$ and a generating vector $\mathbold{z}\in\{1,\ldots,M-1\}^{d}$, we define the rank-1 lattice $\Lambda(\mathbold{z},M)$ as the set
$$\Lambda(\mathbold{z},M):=\left\{\frac{j}{M}\mathbold{z}\bmod\mathbold{1}\mid j\in\{0,\ldots,M-1\}\right\}\subset\mathbb{T}^{d}.$$
Additionally, given a set of frequencies $\mathcal{I}\subset\mathbb{Z}^{d}$, we say that $\Lambda(\mathbold{z},M)$ is a reconstructing rank-1 lattice for $\mathcal{I}$ if
$$\mathbold{l}\cdot\mathbold{z}\not\equiv\mathbold{k}\cdot\mathbold{z}\bmod M\quad\text{for all }\mathbold{l}\neq\mathbold{k}\in\mathcal{I}.$$
The fundamental idea of a reconstructing rank-1 lattice is that it takes a multivariate function $g:\mathbb{T}^{d}\rightarrow\mathbb{R}$ and gives the locations for $M$ equispaced samples of the univariate function $t\mapsto g(t\mathbold{z})$.
The univariate Fourier content of these samples can then be assigned to the original function $g$ with the reconstructing property ensuring that no multidimensional frequencies of interest are aliased together in the one-dimensional analysis.
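The sampling-and-aliasing idea can be sketched in a few lines of Python (our own illustration with hypothetical names, not the implementation of [23]; the actual SFT additionally identifies the energetic frequencies rather than taking them as input): sample $g$ along the lattice, take one 1D FFT, and read off each coefficient $\hat{g}_{\mathbold{k}}$ from the bin $\mathbold{k}\cdot\mathbold{z}\bmod M$.

```python
import numpy as np

def lattice_nodes(z, M):
    """Nodes of the rank-1 lattice Lambda(z, M) in [0, 1)^d."""
    j = np.arange(M)[:, None]                      # shape (M, 1)
    return (j * np.asarray(z)[None, :] / M) % 1.0  # shape (M, d)

def lattice_fourier(g, z, M, freqs):
    """Recover hat{g}_k for k in freqs from M lattice samples, assuming
    Lambda(z, M) is reconstructing for freqs (so the bins below are distinct)."""
    samples = np.array([g(x) for x in lattice_nodes(z, M)])
    ghat_1d = np.fft.fft(samples) / M              # 1D DFT of the lattice samples
    # each multivariate frequency k aliases to the 1D bin (k . z) mod M
    return {tuple(k): ghat_1d[int(np.dot(k, z)) % M] for k in freqs}
```

The reconstructing property is exactly what guarantees that distinct frequencies of interest land in distinct 1D bins, so no coefficient of interest is contaminated by another.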
For the following theorem, we assume that we know a reconstructing rank-1 lattice exists for a given frequency set of interest, $\mathcal{I}$.
This assumption will be lifted in the following section.
The following theorem is a restatement of [23, Corollary 2] with minor simplifications and improvements (most notably, $L^{\infty}$ error bounds).
The proof of these improvements is given in Appendix B.
Theorem 2 ([23], Corollary 2).
Let $\mathcal{I}\subset\mathbb{Z}^{d}$ be a frequency set of interest with expansion defined as $K:=\max_{j\in\{1,\ldots,d\}}(\max_{\mathbold{k}\in\mathcal{I}}k_{j}-\min_{\mathbold{l}\in\mathcal{I}}l_{j})$ (i.e., the sidelength of the smallest hypercube containing $\mathcal{I}$), and $\Lambda(\mathbold{z},M)$ be a reconstructing rank-1 lattice for $\mathcal{I}$.
There exists a fast, randomized SFT which, given $\Lambda(\mathbold{z},M)$, sampling access to $g\in L^{2}$, and a failure probability $\sigma\in(0,1]$, will produce a $2s$-sparse approximation $\hat{\mathbold{g}}^{s}$ of $\hat{g}$ and function $g^{s}:=\sum_{\mathbold{k}\in\operatorname{supp}(\hat{\mathbold{g}}^{s})}\hat{g}_{\mathbold{k}}^{s}e_{\mathbold{k}}$ approximating $g$ satisfying
$$\displaystyle\norm{g-g^{s}}_{L^{2}}\leq\norm{\hat{g}-\hat{\mathbold{g}}^{s}}_{\ell^{2}}$$
$$\displaystyle\leq(25+3K)\left[\frac{\norm{\hat{g}\rvert_{\mathcal{I}}-(\hat{g}\rvert_{\mathcal{I}})_{s}^{\mathrm{opt}}}_{1}}{\sqrt{s}}+\sqrt{s}\norm{\hat{g}-\hat{g}\rvert_{\mathcal{I}}}_{1}\right]$$
with probability exceeding $1-\sigma$.
If $g\in L^{\infty}$, then we additionally have
$$\norm{g-g^{s}}_{L^{\infty}}\leq\norm{\hat{g}-\hat{\mathbold{g}}^{s}}_{\ell^{1}}\leq(33+4K)\left[\norm{\hat{g}\rvert_{\mathcal{I}}-(\hat{g}\rvert_{\mathcal{I}})_{s}^{\mathrm{opt}}}_{1}+\norm{\hat{g}-\hat{g}\rvert_{\mathcal{I}}}_{1}\right]$$
with the same probability estimate.
The total number of samples of $g$ and computational complexity of the algorithm can be bounded above by
$$\mathcal{O}\left(ds\log^{3}(dKM)\log\left(\frac{dKM}{\sigma}\right)\right).$$
8. Improvements with randomized lattices
To use the previous SFT algorithm, we need to know a reconstructing rank-1 lattice in advance.
Though there are deterministic algorithms to construct a reconstructing rank-1 lattice given any frequency set $\mathcal{I}$ (for example, the component-by-component construction [34, 27]), these algorithms are superlinear in $|\mathcal{I}|$ as they effectively search the frequency space for collisions throughout construction.
This section presents an alternative based on choosing a random lattice.
This lattice is chosen by drawing $\mathbold{z}$ from a uniform distribution over $\{1,\ldots,M-1\}^{d}$ for $M$ sufficiently large.
Below, we provide probability estimates for when this lattice is reconstructing for a frequency set $\mathcal{I}$.
Lemma 4.
Let $K:=\max_{j\in\{1,\ldots,d\}}(\max_{\mathbold{k}\in\mathcal{I}}k_{j}-\min_{\mathbold{l}\in\mathcal{I}}l_{j})$ be the expansion of the frequency set $\mathcal{I}\subset\mathbb{Z}^{d}$.
Let $\sigma\in(0,1]$, and fix $M$ to be the smallest prime greater than $\max(K,\frac{|\mathcal{I}|^{2}}{\sigma})$.
Then drawing each component of $\mathbold{z}$ i.i.d. from $\{1,\ldots,M-1\}$ gives that $\Lambda(\mathbold{z},M)$ is a reconstructing rank-1 lattice for $\mathcal{I}$ with probability at least $1-\sigma$.
Proof.
In order to show that $\Lambda(\mathbold{z},M)$ is reconstructing for $\mathcal{I}$, it suffices to show that for any $\mathbold{k}\neq\mathbold{l}\in\mathcal{I}$, $\mathbold{k}\cdot\mathbold{z}\not\equiv\mathbold{l}\cdot\mathbold{z}\bmod M$.
Thus, we are interested in showing that $\mathbb{P}[\exists\mathbold{k}\neq\mathbold{l}\in\mathcal{I}\text{ s.t. }(\mathbold{k}-\mathbold{l})\cdot\mathbold{z}\equiv 0\bmod M]$ is small.
If $\mathbold{k},\mathbold{l}\in\mathcal{I}$ are distinct, at least one component $k_{j}-l_{j}$ is nonzero.
Since $M>K$, we therefore have that $k_{j}-l_{j}\not\equiv 0\bmod M$, and since $M$ is prime, $k_{j}-l_{j}$ has a multiplicative inverse modulo $M$.
Then $\mathbb{P}[(\mathbold{k}-\mathbold{l})\cdot\mathbold{z}\equiv 0\bmod M]=\mathbb{P}\left[z_{j}\equiv-(k_{j}-l_{j})^{-1}\sum_{i\in\{1,\ldots,d\},i\neq j}(k_{i}-l_{i})z_{i}\bmod M\right]$.
Since $z_{j}$ is uniformly distributed in $\{1,\ldots,M-1\}$ and the right-hand side is a single residue modulo $M$, this probability is at most $\frac{1}{M-1}$.
By the union bound,
$$\mathbb{P}[\exists\mathbold{k}\neq\mathbold{l}\in\mathcal{I}\text{ s.t. }(\mathbold{k}-\mathbold{l})\cdot\mathbold{z}\equiv 0\bmod M]\leq\sum_{\mathbold{k}\neq\mathbold{l}\in\mathcal{I}}\mathbb{P}[(\mathbold{k}-\mathbold{l})\cdot\mathbold{z}\equiv 0\bmod M]\leq\frac{\absolutevalue{\mathcal{I}}^{2}}{M-1}\leq\sigma$$
as desired.
∎
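A sketch of this random construction in Python (our own illustration; `next_prime` uses trial division, which is adequate for moderate $M$), together with a brute-force check of the reconstructing property:

```python
import numpy as np

def _is_prime(m):
    return m >= 2 and all(m % p for p in range(2, int(m ** 0.5) + 1))

def next_prime(n):
    """Smallest prime strictly greater than n (trial division suffices here)."""
    m = int(np.floor(n)) + 1
    while not _is_prime(m):
        m += 1
    return m

def random_lattice(freqs, sigma, rng):
    """Draw (z, M) as in Lemma 4 for the frequency set freqs (rows = frequencies)."""
    freqs = np.asarray(freqs)
    K = int((freqs.max(axis=0) - freqs.min(axis=0)).max())  # expansion of the set
    M = next_prime(max(K, len(freqs) ** 2 / sigma))
    z = rng.integers(1, M, size=freqs.shape[1])  # i.i.d. uniform on {1, ..., M-1}
    return z, M

def is_reconstructing(z, M, freqs):
    """Brute-force check of the reconstructing property: all bins distinct."""
    bins = sorted(int(np.dot(k, z)) % M for k in freqs)
    return all(a != b for a, b in zip(bins, bins[1:]))
```

The brute-force check is quadratic-to-loglinear in $|\mathcal{I}|$ and included only for validation; the point of Lemma 4 is precisely that no such search is needed in the algorithm itself.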
One important consequence of Lemma 4 is that we no longer need to provide the frequency set of interest in Theorem 2.
Having chosen $K$, the expansion, and $s$, the sparsity level, we can always take $\mathcal{I}$ to be the frequencies corresponding to the largest $s$ Fourier coefficients of the function $g$ in the hypercube $[-K/2,K/2]^{d}$.
Lemma 4 then implies that a randomly generated lattice whose size $M$ is the smallest prime greater than $\max(K,s^{2}/\sigma)$ will be reconstructing for these optimal frequencies with probability at least $1-\sigma$.
We summarize this in the following corollary.
Corollary 3.
For a multivariate function’s Fourier series $\hat{g}$, define $\hat{g}\rvert_{K}:=\hat{g}\rvert_{[-K/2,K/2]^{d}}$.
Given a multivariate bandwidth $K$, a sparsity level $s$, probability of failure $\sigma\in(0,1]$, and sampling access to $g\in L^{2}$, there exists a fast, randomized SFT which will produce a $2s$-sparse approximation $\hat{\mathbold{g}}^{s}$ of $\hat{g}$ and function $g^{s}:=\sum_{\mathbold{k}\in\operatorname{supp}(\hat{\mathbold{g}}^{s})}\hat{g}_{\mathbold{k}}^{s}e_{\mathbold{k}}$ approximating $g$ satisfying
$$\norm{g-g^{s}}_{L^{2}}\leq\norm{\hat{g}-\hat{\mathbold{g}}^{s}}_{\ell^{2}}\leq(25+3K)\sqrt{s}\norm{\hat{g}-(\hat{g}\rvert_{K})_{s}^{\mathrm{opt}}}_{\ell^{1}}$$
with probability at least $1-\sigma$.
If $g\in L^{\infty}$, then $g^{s}$ and $\hat{g}^{s}$ satisfy the upper bound
$$\norm{g-g^{s}}_{L^{\infty}}\leq\norm{\hat{g}-\hat{\mathbold{g}}^{s}}_{\ell^{1}}\leq(33+4K)\norm{\hat{g}-(\hat{g}\rvert_{K})_{s}^{\mathrm{opt}}}_{\ell^{1}}$$
with the same probability estimate.
The total number of samples of $g$ and computational complexity of the algorithm can be bounded above by
$$\mathcal{O}\left(ds\log^{3}(dK\max(K,s/\sigma))\log\left(\frac{dK\max(K,s/\sigma)}{\sigma}\right)\right).$$
If we fix $\sigma$ (say, $\sigma=0.05$), this reduces to a complexity of
$$\mathcal{O}\left(ds\log^{4}(dK\max(K,s))\right).$$
9. A sparse spectral method via SFTs
Let $\hat{\mathbold{a}}^{s}$ and $\hat{\mathbold{f}}^{s}$ be $s$-sparse approximations of $\hat{a}$ and $\hat{f}$ respectively.
We will use these approximations to discretize the Galerkin formulation (GF) of our PDE.
The first step is to reduce to the case where the PDE data is Fourier-sparse, which is motivated by the following lemma.
Lemma 5.
Let $a^{\prime}:=a\rvert_{\operatorname{supp}\hat{\mathbold{a}}^{s}}$ and $f^{\prime}:=f\rvert_{\operatorname{supp}\hat{\mathbold{f}}^{s}}$.
Suppose that $a^{\prime}$ and $f^{\prime}$ satisfy the conditions of Proposition 2 and let $u^{\prime}$ be the unique solution of the resulting elliptic PDE, which we write in Galerkin form as
(9)
$$L[\hat{a}^{\prime}]\hat{u}^{\prime}=\hat{f}^{\prime}.$$
Then
$$\norm{u-u^{\prime}}_{H}\leq\frac{\norm{f-f^{\prime}}_{L^{2}}}{a_{\mathrm{min}}}+\frac{\norm{a-a^{\prime}}_{L^{\infty}}\norm{f^{\prime}}_{L^{2}}}{a_{\mathrm{min}}a^{\prime}_{\mathrm{min}}}.$$
Proof.
We begin by observing
$$L[\hat{a}](\hat{u}-\hat{u}^{\prime})=L[\hat{a}]\hat{u}-L[\hat{a}^{\prime}]\hat{u}^{\prime}-L[\hat{a}-\hat{a}^{\prime}]\hat{u}^{\prime}=\hat{f}-\hat{f}^{\prime}-L[\hat{a}-\hat{a}^{\prime}]\hat{u}^{\prime},$$
and therefore
$$\absolutevalue{\langle L[\hat{a}](\hat{u}-\hat{u}^{\prime}),\hat{u}-\hat{u}^{\prime}\rangle}\leq\absolutevalue{\langle\hat{f}-\hat{f}^{\prime},\hat{u}-\hat{u}^{\prime}\rangle}+\absolutevalue{\langle L[\hat{a}-\hat{a}^{\prime}]\hat{u}^{\prime},\hat{u}-\hat{u}^{\prime}\rangle}.$$
After an application of Proposition 3 to convert the $\ell^{2}$ inner products into bilinear forms, we can make use of coercivity (3), continuity (2), and the Cauchy–Schwarz inequality to produce the $H$ approximation
$$a_{\mathrm{min}}\norm{u-u^{\prime}}_{H}\leq\norm{\hat{f}-\hat{f}^{\prime}}_{\ell^{2}}+\norm{a-a^{\prime}}_{L^{\infty}}\norm{u^{\prime}}_{H}.$$
An application of the stability estimate (4) gives the desired bound
$$\norm{u-u^{\prime}}_{H}\leq\frac{\norm{f-f^{\prime}}_{L^{2}}}{a_{\mathrm{min}}}+\frac{\norm{a-a^{\prime}}_{L^{\infty}}\norm{f^{\prime}}_{L^{2}}}{a_{\mathrm{min}}a^{\prime}_{\mathrm{min}}}.$$
∎
We can now replace the trial and test spaces in (WF) with finite dimensional approximations so as to convert (GF) to a matrix equation.
Inspired by Proposition 4 and the truncation error analysis in Section 6, we use the space of functions whose Fourier coefficients are supported on $\mathcal{S}^{N}:=\mathcal{S}^{N}[\hat{a}](\operatorname{supp}\hat{f})$.
By doing so, we discretize the Galerkin formulation of the problem (GF) into the finite system of equations
(10)
$$(\mathbold{L}_{N}\hat{\mathbold{u}})_{\mathbold{k}}:=\sum_{\mathbold{l}\in\mathcal{S}^{N}}(2\pi)^{2}(\mathbold{l}\cdot\mathbold{k})\hat{a}_{\mathbold{k}-\mathbold{l}}\hat{u}_{\mathbold{l}}=\hat{f}_{\mathbold{k}}\quad\text{ for all }\mathbold{k}\in\mathcal{S}^{N}.$$
However, in practice, we do not know $\hat{a}$ and $\hat{f}$ exactly (and indeed, they may not be exactly sparse).
Thus, we substitute the SFT approximations $\hat{\mathbold{a}}^{s}$ and $\hat{\mathbold{f}}^{s}$, defining the new finite-dimensional operator $\mathbold{L}_{N,s}:\mathbb{C}^{\mathcal{S}^{N}}\rightarrow\mathbb{C}^{\mathcal{S}^{N}}$ by
$$\left(\mathbold{L}_{N,s}\hat{\mathbold{u}}\right)_{\mathbold{k}}:=\sum_{\mathbold{l}\in\mathcal{S}^{N}}(2\pi)^{2}(\mathbold{l}\cdot\mathbold{k})\hat{a}_{\mathbold{k}-\mathbold{l}}^{s}\hat{u}_{\mathbold{l}}\quad\text{ for all }\mathbold{k}\in\mathcal{S}^{N}.$$
Our new approximate solution will be $\hat{\mathbold{u}}^{N,s}\in\mathbb{C}^{\mathcal{S}^{N}}$ which solves
(11)
$$\mathbold{L}_{N,s}\hat{\mathbold{u}}^{N,s}=\hat{\mathbold{f}}^{s}.$$
We summarize our technique in Algorithm 1.
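A minimal Python sketch of this discretization for exactly sparse data (our own illustration, not the paper's MATLAB implementation): the stamping set is built by repeated Minkowski sums, and the system (11) is assembled and solved directly. Since the row for $\mathbold{k}=\mathbold{0}$ vanishes identically (every entry carries a factor $\mathbold{l}\cdot\mathbold{0}=0$) and the forcing is mean-zero, we pin $\hat{u}_{\mathbold{0}}=0$, matching the mean-zero solution space.

```python
import numpy as np

def stamping_set(supp_f, supp_a, N):
    """S^N[a](supp f): stamp supp(hat a) onto supp(hat f) N times (Minkowski sums)."""
    S = {tuple(k) for k in supp_f}
    for _ in range(N):
        S |= {tuple(np.add(k, m)) for k in S for m in supp_a}
    return sorted(S)

def solve_stamped_galerkin(a_hat, f_hat, N):
    """Assemble and solve L_{N,s} u = f; a_hat, f_hat map frequency tuples to coefficients."""
    S = stamping_set(f_hat.keys(), a_hat.keys(), N)
    idx = {k: i for i, k in enumerate(S)}
    L = np.zeros((len(S), len(S)), dtype=complex)
    for k in S:
        for m, a_m in a_hat.items():              # a_m = hat a at frequency k - l
            l = tuple(np.subtract(k, m))
            if l in idx:
                L[idx[k], idx[l]] += (2 * np.pi) ** 2 * np.dot(l, k) * a_m
    f = np.array([f_hat.get(k, 0.0) for k in S], dtype=complex)
    zero = idx.get(tuple([0] * len(S[0])))
    if zero is not None:                          # pin the solution mean to zero
        L[zero, :] = 0.0
        L[zero, zero] = 1.0
        f[zero] = 0.0
    return S, np.linalg.solve(L, f)
```

For constant $a\equiv 1$ and $f(x)=\cos(2\pi x)$ this recovers $\hat{u}_{\pm 1}=\tfrac{1}{2}(2\pi)^{-2}$, the Fourier solution of $-u''=f$.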
Showing that $u^{N,s}$ converges to $u$ now relies on a version of Strang’s lemma [10, Equation (6.4.46)].
We make the assumption here that $\operatorname{supp}(\hat{a})=\operatorname{supp}(\hat{\mathbold{a}}^{s})$ and $\operatorname{supp}(\hat{f})=\operatorname{supp}(\hat{\mathbold{f}}^{s})$ so that our use of $\mathcal{S}^{N}$ is unambiguous.
However, this assumption will be lifted by Lemma 5 in Corollary 4 below.
Lemma 6 (Strang’s Lemma).
Suppose that $\operatorname{supp}(\hat{a})=\operatorname{supp}(\hat{\mathbold{a}}^{s})$ and $\operatorname{supp}(\hat{f})=\operatorname{supp}(\hat{\mathbold{f}}^{s})$.
Also suppose that $a^{s}\geq a^{s}_{\mathrm{min}}>0$ on $\mathbb{T}^{d}$.
Let $u$ and $u^{N,s}$ be as above.
Then
$$\norm{u-u^{N,s}}_{H}\leq\left(1+\frac{\norm{a}_{L^{\infty}}}{a^{s}_{\mathrm{min}}}\right)\norm{u\rvert_{\mathbb{Z}^{d}\setminus\mathcal{S}^{N}}}_{H}+\frac{\norm{a-a^{s}}_{L^{\infty}}}{a^{s}_{\mathrm{min}}}\norm{u\rvert_{\mathcal{S}^{N}}}_{H}+\frac{\norm{f-f^{s}}_{L^{2}}}{a^{s}_{\mathrm{min}}}.$$
Proof.
We let $\hat{\mathbold{e}}:=\hat{\mathbold{u}}^{N,s}-\hat{u}\rvert_{\mathcal{S}^{N}}$, and consider
$$\displaystyle\mathbold{L}_{N,s}\hat{\mathbold{e}}$$
$$\displaystyle=\mathbold{L}_{N,s}\hat{\mathbold{u}}^{N,s}-(L[\hat{\mathbold{a}}^{s}]\hat{u}\rvert_{\mathcal{S}^{N}})\rvert_{\mathcal{S}^{N}}$$
$$\displaystyle=\hat{\mathbold{f}}^{s}-\hat{f}+(L[\hat{a}]\hat{u})\rvert_{\mathcal{S}^{N}}-(L[\hat{\mathbold{a}}^{s}]\hat{u}\rvert_{\mathcal{S}^{N}})\rvert_{\mathcal{S}^{N}}$$
$$\displaystyle=\hat{\mathbold{f}}^{s}-\hat{f}+(L[\hat{a}]\hat{u}\rvert_{\mathbb{Z}^{d}\setminus\mathcal{S}^{N}})\rvert_{\mathcal{S}^{N}}+(L[\hat{a}]\hat{u}\rvert_{\mathcal{S}^{N}}-L[\hat{\mathbold{a}}^{s}]\hat{u}\rvert_{\mathcal{S}^{N}})\rvert_{\mathcal{S}^{N}}$$
$$\displaystyle=\hat{\mathbold{f}}^{s}-\hat{f}+(L[\hat{a}]\hat{u}\rvert_{\mathbb{Z}^{d}\setminus\mathcal{S}^{N}})\rvert_{\mathcal{S}^{N}}+(L[\hat{a}-\hat{\mathbold{a}}^{s}]\hat{u}\rvert_{\mathcal{S}^{N}})\rvert_{\mathcal{S}^{N}}.$$
Noting that $\mathbold{L}_{N,s}\hat{\mathbold{e}}=(L[\hat{\mathbold{a}}^{s}]\hat{\mathbold{e}})\rvert_{\mathcal{S}^{N}}$ and owing to coercivity of $L[\hat{\mathbold{a}}^{s}]$, we have
$$\displaystyle a^{s}_{\mathrm{min}}\norm{e}_{H}^{2}$$
$$\displaystyle\leq\absolutevalue{\langle\mathbold{L}_{N,s}\hat{\mathbold{e}},\hat{\mathbold{e}}\rangle}$$
$$\displaystyle\leq\norm{f^{s}-f}_{L^{2}}\norm{e}_{H}+\norm{a}_{L^{\infty}}\norm{u\rvert_{\mathbb{Z}^{d}\setminus\mathcal{S}^{N}}}_{H}\norm{e}_{H}+\norm{a-a^{s}}_{L^{\infty}}\norm{u\rvert_{\mathcal{S}^{N}}}_{H}\norm{e}_{H}.$$
The result then follows from rearranging to estimate $\norm{e}_{H}$ and using the triangle inequality to estimate $\norm{u-u^{N,s}}_{H}\leq\norm{u-u\rvert_{\mathcal{S}^{N}}}_{H}+\norm{e}_{H}$.
∎
We can now thread all of our results together into a final convergence analysis.
The first corollary below is a more direct application of Strang’s lemma which is then followed by another corollary which takes advantage of the SFT recovery results.
We will also return to the setting where $a$ and $f$ are not necessarily Fourier sparse.
Thus, for $a^{s}$ and $f^{s}$ Fourier sparse approximations of $a$ and $f$, we again let $a^{\prime}=a\rvert_{\operatorname{supp}\hat{\mathbold{a}}^{s}}$ and $f^{\prime}=f\rvert_{\operatorname{supp}\hat{\mathbold{f}}^{s}}$ as in Lemma 5.
Corollary 4.
Suppose $a$, $f$ and $a^{s}$, $f^{s}$ respectively satisfy the conditions of Proposition 2.
Additionally, suppose that
(12)
$$3\sum_{\mathbold{k}\in\operatorname{supp}(\hat{\mathbold{a}}^{s})\setminus\{\mathbold{0}\}}\absolutevalue{\hat{a}_{\mathbold{k}}}\leq\hat{a}_{\mathbold{0}}.$$
Then with $u$ the exact solution to (WF) and $u^{N,s}$ the output of Algorithm 1, we have
$$\displaystyle\norm{u-u^{N,s}}_{H}$$
$$\displaystyle\leq\frac{\norm{f-f^{\prime}}_{L^{2}}}{a_{\mathrm{min}}}+\frac{\norm{a-a^{\prime}}_{L^{\infty}}\norm{f^{\prime}}_{L^{2}}}{a_{\mathrm{min}}a_{\mathrm{min}}^{\prime}}+\left(1+\frac{\norm{a^{\prime}}_{L^{\infty}}}{a_{\mathrm{min}}^{s}}\right)\left(\frac{\norm{a^{\prime}-\hat{a}^{\prime}_{\mathbold{0}}}_{L^{\infty}}}{a^{\prime}_{\mathrm{min}}-2\norm{a^{\prime}-\hat{a}^{\prime}_{\mathbold{0}}}_{L^{\infty}}}\right)^{N+1}\frac{\norm{f^{\prime}}_{L^{2}}}{a_{\mathrm{min}}^{\prime}}$$
$$\displaystyle\qquad+\frac{\norm{a^{\prime}-a^{s}}_{L^{\infty}}\norm{f^{\prime}}_{L^{2}}}{a_{\mathrm{min}}^{s}a_{\mathrm{min}}}+\frac{\norm{f^{\prime}-f^{s}}_{L^{2}}}{a_{\mathrm{min}}^{s}}.$$
Proof.
The condition (12) ensures that $a^{\prime}$ is coercive, and therefore $a^{\prime}$ and $f^{\prime}$ also satisfy Proposition 2.
Additionally, this allows the use of Lemma 3, which upper bounds the truncation error in Lemma 6.
Combining Lemma 5 with this bound from Lemma 6 and applying the stability estimate from Proposition 2 finishes the proof.
∎
Remark 1.
In order for this bound to hold, it is necessary for the weak forms of both
$$\mathcal{L}[a]u=f\text{ and }\mathcal{L}[a^{s}]u^{s}=f^{s}$$
to be well-posed, that is, satisfy the continuity and coercivity conditions of Proposition 2.
In practice, this condition is not much more restrictive than assuming only the original equation is well-posed as long as the diffusion coefficient is Fourier-compressible and the sparsity level $s$ is large enough to ensure that $a^{s}$ stays strictly positive.
In fact, (12) allows for the simple (if pessimistic) check after computing $\hat{\mathbold{a}}^{s}$ that $\norm{\hat{\mathbold{a}}^{s}-\hat{a}^{s}_{\mathbold{0}}}_{\ell^{1}}<\absolutevalue{\hat{a}^{s}_{\mathbold{0}}}$ to ensure the positivity of $a^{s}$.
With minor modifications, we can rewrite this upper bound to pass all dependence on sparsity through the error in approximating $a$ and $f$ via SFTs.
Corollary 5.
Under the same conditions as Corollary 4 above substituting (12) with
$$3\norm{\hat{a}-\hat{a}_{\mathbold{0}}}_{\ell^{1}}+\norm{\hat{a}-\hat{\mathbold{a}}^{s}}_{\ell^{1}}<\hat{a}_{\mathbold{0}},$$
we have
$$\displaystyle\norm{u-u^{N,s}}_{H}$$
$$\displaystyle\leq\left(1+\frac{\norm{\hat{a}}_{\ell^{1}}}{a_{\mathrm{min}}-\norm{\hat{a}-\hat{\mathbold{a}}^{s}}_{\ell^{1}}}\right)\frac{\norm{f}_{L^{2}}}{a_{\mathrm{min}}-\norm{\hat{a}-\hat{\mathbold{a}}^{s}}_{\ell^{1}}}$$
$$\displaystyle\qquad\times\left(\frac{\norm{f-f^{s}}_{L^{2}}}{\norm{f}_{L^{2}}}+\norm{a-a^{s}}_{L^{\infty}}+\left(\frac{\norm{\hat{a}-\hat{a}_{\mathbold{0}}}_{\ell^{1}}}{a_{\mathrm{min}}-2\norm{\hat{a}-\hat{a}_{\mathbold{0}}}_{\ell^{1}}-\norm{\hat{a}-\hat{\mathbold{a}}^{s}}_{\ell^{1}}}\right)^{N+1}\right).$$
Proof.
Since $\hat{a}^{\prime}=\hat{a}\rvert_{\operatorname{supp}\hat{\mathbold{a}}^{s}}$,
$$\displaystyle\norm{a-a^{\prime}}_{L^{\infty}}\leq\norm{\hat{a}-\hat{a}^{\prime}}_{\ell^{1}}\leq\norm{\hat{a}-\hat{\mathbold{a}}^{s}}_{\ell^{1}},$$
$$\displaystyle\norm{a^{\prime}-a^{s}}_{L^{\infty}}\leq\norm{\hat{a}^{\prime}-\hat{\mathbold{a}}^{s}}_{\ell^{1}}\leq\norm{\hat{a}-\hat{\mathbold{a}}^{s}}_{\ell^{1}},$$
and analogous arguments show that $\norm{f-f^{\prime}}_{L^{2}}$ and $\norm{f^{\prime}-f^{s}}_{L^{2}}$ are bounded above by $\norm{f-f^{s}}_{L^{2}}$.
Additionally,
$$\displaystyle a^{s}\geq a-\norm{a-a^{s}}_{L^{\infty}}\geq a-\norm{\hat{a}-\hat{\mathbold{a}}^{s}}_{\ell^{1}}\text{ and}$$
$$\displaystyle a^{\prime}\geq a-\norm{a-a^{\prime}}_{L^{\infty}}\geq a-\norm{\hat{a}-\hat{\mathbold{a}}^{s}}_{\ell^{1}}$$
giving $\min(a^{s}_{\mathrm{min}},a^{\prime}_{\mathrm{min}})\geq a_{\mathrm{min}}-\norm{\hat{a}-\hat{\mathbold{a}}^{s}}_{\ell^{1}}$.
The rest follows from applications of (4) and rearranging.
∎
Remark 2.
Though this final bound is difficult to parse, we can focus our attention on the final factor
(13)
$$\frac{\norm{f-f^{s}}_{L^{2}}}{\norm{f}_{L^{2}}}+\norm{a-a^{s}}_{L^{\infty}}+\left(\frac{\norm{\hat{a}-\hat{a}_{\mathbold{0}}}_{\ell^{1}}}{a_{\mathrm{min}}-2\norm{\hat{a}-\hat{a}_{\mathbold{0}}}_{\ell^{1}}-\norm{\hat{a}-\hat{\mathbold{a}}^{s}}_{\ell^{1}}}\right)^{N+1},$$
since the other factors are more or less fixed.
The first two terms are respectively controlled by having good SFT approximations to $f$ in the $L^{2}$ norm and $a$ in the $L^{\infty}$ norm.
In our algorithm, these terms can be reduced by increasing the bandwidth $K$ and the sparsity $s$.
As a reminder, the errors in these approximations given in Theorem 2 are near optimal, as
$$\norm{f-f^{s}}_{L^{2}}\leq(25+3K)\sqrt{s}\norm{\hat{f}-\left(\hat{f}\rvert_{K}\right)_{s}^{\mathrm{opt}}}_{\ell^{1}}\text{ and }\norm{a-a^{s}}_{L^{\infty}}\leq(33+4K)\norm{\hat{a}-\left(\hat{a}\rvert_{K}\right)_{s}^{\mathrm{opt}}}_{\ell^{1}}$$
with high probability.
The final term is controlled by properties of $a$ as well as the final stamping level used.
Overall, the convergence is exponential in $N$, the stamping level.
This convergence is accelerated as the base of the exponent decreases: effectively, this happens as the diffusion coefficient approaches a large constant.
Indeed, the numerator can be thought of as an upper bound for the absolute deviation of $a$ from its mean while the denominator grows with the minimum of $a$.
Remark 3.
The computational complexity of Algorithm 1 is
$$\mathcal{O}\left(ds\log^{4}(dK\max(K,s))+\max(s,2N+1)^{3\min(s,2N+1)}\right).$$
This is due to the two SFTs and a matrix solve of a $\absolutevalue{\mathcal{S}^{N}}\times\absolutevalue{\mathcal{S}^{N}}$ system.
Note that computing the stamping set can be done by enumerating the frequencies using the techniques in Lemma 8 and therefore is subject to the same upper bound as given in Lemma 2 for a stamp set’s cardinality.
Recall also that the SFT complexity can be tuned to produce SFT approximations satisfying the above bounds with higher probability.
We do not analyze the complexity of the matrix solve in depth, and instead resort to the upper bound given by Gaussian elimination on the dense matrix, $\mathcal{O}\left(\max(s,2N+1)^{3\min(s,2N+1)}\right)$.
However, $\mathbold{L}_{N,s}$ is relatively sparse for larger stamping levels.
As the capabilities of sparse solvers depend strongly on analyzing the graph connecting interacting rows in $\mathbold{L}_{N,s}$ (cf. [18, Chapter 11]), we expect that the analysis of an efficient sparse solver could be carried out using much of the same analysis of stamping sets performed in Section 6.
Remark 4.
This paper considers the theory for solving the simple diffusion equation (1).
However, these techniques extend to more complex advection-diffusion-reaction (ADR) equations.
The test problem is then
(14)
$$-\nabla\cdot(a(\mathbold{x})\nabla u(\mathbold{x}))+\mathbold{b}(\mathbold{x})\cdot\nabla u(\mathbold{x})+c(\mathbold{x})u(\mathbold{x})=f(\mathbold{x})\text{ for all }\mathbold{x}\in\mathbb{T}^{d}.$$
As before $a,f,u:\mathbb{T}^{d}\rightarrow\mathbb{R}$ are the diffusion coefficient, forcing function, and solution respectively.
These are now joined by an advection field $\mathbold{b}:\mathbb{T}^{d}\rightarrow\mathbb{R}^{d}$ and an additional reaction coefficient $c:\mathbb{T}^{d}\rightarrow\mathbb{R}$.
For more on the properties and well-posedness of this periodic ADR equation, we refer to [3].
Adapting Algorithm 1 for solving ADR equations requires two modifications:
(1)
When computing the approximations $\hat{\mathbold{a}}^{s},\hat{\mathbold{f}}^{s}$ via SFT, additionally compute $\hat{\mathbold{b}}^{s}:=(\hat{\mathbold{b}}_{j}^{s})_{j=1}^{d}$, an approximation to the Fourier coefficients of each component of $\mathbold{b}$, and compute $\hat{\mathbold{c}}^{s}$, an approximation to $\hat{c}$.
(2)
Redefine the “stamp” used to define $\mathcal{S}^{N}[\hat{\mathbold{a}}^{s}](\operatorname{supp}(\hat{\mathbold{f}}^{s}))$ by including the supports of $\hat{\mathbold{b}}^{s}$ and $\hat{\mathbold{c}}^{s}$.
Mathematically, we define
$$\mathcal{S}^{N}[\hat{\mathbold{a}}^{s},\hat{\mathbold{b}}^{s},\hat{\mathbold{c}}^{s}](\operatorname{supp}(\hat{\mathbold{f}}^{s})):=\begin{cases}\operatorname{supp}(\hat{\mathbold{f}}^{s})&\text{if }N=0\\
\mathcal{S}^{N-1}+\operatorname{supp}(\hat{\mathbold{a}}^{s})+\sum_{j=1}^{d}\operatorname{supp}(\hat{\mathbold{b}}_{j}^{s})+\operatorname{supp}(\hat{\mathbold{c}}^{s})&\text{if }N>0\end{cases}$$
where, as usual, we suppress the Fourier coefficients when clear from context.
The convergence analysis for this method is much the same as that leading to Corollary 5 where terms like $\norm{a-a^{s}}_{L^{\infty}}$ are replaced by $\max\left\{\norm{a-a^{s}}_{L^{\infty}},\norm{\norm{\mathbold{b}-\mathbold{b}^{s}}_{\ell^{2}}}_{L^{\infty}},\norm{c-c^{s}}_{L^{\infty}}\right\}$ and similarly for the mean-zero version of $a$ used in the exponentially decaying term.
For full details see [22].
10. Numerics
This section gives examples of the algorithm summarized above applied to various problems.
We begin with an overview of our implementation as well as some techniques used to evaluate the accuracy of our approximations.
We then present solutions to univariate and very high-dimensional multiscale problems with both exactly sparse and Fourier-compressible data.
We then close with an extension of our methods to a three-dimensional advection-diffusion-reaction equation.
10.1. Code and testing overview
We implement Algorithm 1 described above in MATLAB using an object-oriented approach, with all code publicly available at https://gitlab.com/grosscra/SparseADR.
All SFTs are computed using the rank-1 lattice sparse Fourier code from [23], which is publicly available at https://gitlab.com/grosscra/Rank1LatticeSparseFourier.
In order to evaluate the quality of our approximations, we need to choose an appropriate metric.
Letting $u^{s,N}$ be the approximation returned by our algorithm, the ideal choice would be $\norm{u-u^{s,N}}_{H}$.
However, for the types of problems we will be investigating, the true solution $u$ is unavailable to us.
Instead, we will use a proxy that takes advantage of the stability result in Proposition 2.
Lemma 7.
Let $u$ be the true solution to (GF) and $u^{s,N}$ be the approximation returned by solving (11).
Define $\hat{f}^{s,N}:=L[\hat{a}]\hat{u}^{s,N}$ with $f^{s,N}=\mathcal{L}[a]u^{s,N}$.
Then
$$\norm{u-u^{s,N}}_{H}\leq\frac{\norm{f-f^{s,N}}_{L^{2}}}{a_{\mathrm{min}}}=\frac{\norm{\hat{f}-\hat{f}^{s,N}}_{\ell^{2}}}{a_{\mathrm{min}}}.$$
Proof.
The result follows from the fact that $\hat{u}-\hat{u}^{s,N}$ solves $L[\hat{a}]\left(\hat{u}-\hat{u}^{s,N}\right)=\hat{f}-L[\hat{a}]\hat{u}^{s,N}=\hat{f}-\hat{f}^{s,N}$ and applying Proposition 2.
∎
In the sequel, we will ignore $a_{\mathrm{min}}$, since we are mostly interested in convergence properties in $s$ and $N$, and we will compute the relative error
$$\frac{\norm{f-f^{s,N}}_{L^{2}}}{\norm{f}_{L^{2}}}\text{ or }\frac{\norm{\hat{f}-\hat{f}^{s,N}}_{\ell^{2}}}{\norm{\hat{f}}_{\ell^{2}}}$$
as our proxy instead.
Whenever $\hat{f}$ and $\hat{a}$ are exactly sparse, the numerator of the second term can be computed exactly due to the fact that $\operatorname{supp}(\hat{f}^{s,N})$ is known to be contained in $\mathcal{S}^{N+1}$ (cf. Proposition 5).
However, in the non-sparse setting, even though $f-f^{s,N}$ can be evaluated pointwise, computing an accurate approximation of its norm on $\mathbb{T}^{d}$ is challenging for large $d$.
For this reason, we approximate the norm via Monte Carlo sampling.
In the cases where exactly computing $\norm{\hat{f}-\hat{f}^{s,N}}_{\ell^{2}}$ is possible, we also report the pointwise Monte Carlo estimates to show that, in practice, Monte Carlo sampling performs as well as the exact computation.
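The Monte Carlo proxy just described amounts to sampling the residual at uniform random points on $\mathbb{T}^{d}$. A minimal sketch follows (in Python rather than our MATLAB implementation; the function and variable names are ours):

```python
import numpy as np

def mc_relative_proxy_error(f, f_sN, d, n_samples=200, seed=0):
    """Monte Carlo estimate of ||f - f^{s,N}|| / ||f|| on the torus T^d.

    `f` and `f_sN` are callables taking an (n, d) array of points and
    returning an (n,) array of values. This is a sketch of the sampling
    proxy, not the paper's implementation.
    """
    rng = np.random.default_rng(seed)
    pts = rng.random((n_samples, d))      # uniform on [0, 1)^d = T^d
    resid = f(pts) - f_sN(pts)            # pointwise residual
    return np.linalg.norm(resid) / np.linalg.norm(f(pts))
```

Note that the same sample points are used in the numerator and denominator, so the estimate is a ratio of discrete $\ell^{2}$ norms over one common point set.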
10.2. Univariate compressible
We begin by replicating the lone numerical example of solving an elliptic problem in [13, Section 5.1].
In this case, we solve the univariate problem
(15)
$$\begin{gathered}-(a(x)u^{\prime}(x))^{\prime}=f(x)\text{ for all }x\in\mathbb{T},\text{ where }\\
a(x)=\frac{1}{10}\exp\left(\frac{0.6+0.2\cos(2\pi x)}{1+0.7\sin(256\pi x)}\right),\quad f(x)=\exp(-\cos(2\pi x))-\int_{\mathbb{T}}\exp(-\cos(2\pi x))\mathop{}\!dx\end{gathered}$$
(note that the only difference from [13] is that we use the domain $\mathbb{T}=[0,1]$ rather than $[0,2\pi]$).
This data is not Fourier sparse, but is compressible.
In the original paper, a bandwidth of $K=1\,536$ is considered and approximations with $9$ and $17$ Fourier coefficients are used.
We first construct a high accuracy approximation of the solution to (15) by numerically integrating on an extremely fine mesh of $10\,000$ points.
This allows us to forgo our proxy error described in Lemma 7.
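One way to build such a reference solution is to integrate the equation twice: $-(au^{\prime})^{\prime}=f$ gives $a u^{\prime}=C-F$ with $F(x)=\int_{0}^{x}f$, where the constant $C$ is fixed by periodicity of $u$. The following sketch uses FFT-based antiderivatives; this is our scheme for illustration, not necessarily the quadrature used for the figures:

```python
import numpy as np

def antiderivative(g):
    """Mean-zero periodic antiderivative of a zero-mean sample vector on [0,1)."""
    n = len(g)
    k = np.fft.fftfreq(n, d=1.0 / n)      # integer frequencies
    gh = np.fft.fft(g)
    kk = np.where(k == 0, 1.0, k)         # avoid division by zero at k = 0
    Gh = gh / (2j * np.pi * kk)
    Gh[0] = 0.0                           # normalize the antiderivative to mean zero
    return np.fft.ifft(Gh).real

def solve_periodic_1d(a, f):
    """Solve -(a u')' = f on [0,1) with periodic BCs; f must have zero mean.

    Integrating once gives a u' = C - F with F' = f; periodicity of u
    forces mean(u') = 0, i.e. C = mean(F/a) / mean(1/a).
    """
    F = antiderivative(f)
    C = np.mean(F / a) / np.mean(1.0 / a)
    up = (C - F) / a                      # recovered derivative u'
    return antiderivative(up - np.mean(up))   # zero-mean solution u
```

For band-limited data this is exact up to rounding; for the compressible data of (15) the accuracy is set by the grid resolution.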
As in [13], the bandwidth of our SFT used is set to $K=1\,536$.
Due to our SFT returning a $2s$ sparse approximation, we use $s=4$ and $s=8$ to compare with the $9$ and $17$ terms respectively considered in the original paper, and also provide an example with $s=12$.
We set the stamping level to $N=1$ throughout, which, as discussed in the introduction, is similar to the technique used in [13].
The relative errors approximated in $L^{2}$ and $H^{1}$ are given in Figure 2.
The original paper does not report numerical errors; instead, it presents qualitative results, comparing the approximate solutions and their derivatives with the true solution and its derivative.
We have replicated this qualitative analysis in Figure 3 with similar results.
Figure 2 also shows the error computed via the proxy described by Lemma 7, and in particular, how pessimistic the proxy error can be.
In this case, the small errors in the derivative (visualized in Figure 2(b)) are compounded by passing the approximate solution through the operator where $a^{\prime}$ is often large relative to $a$.
In future examples, we will see that the convergence of the proxy error is much more tolerable.
10.3. Multivariate exactly sparse
10.3.1. Low sparsity
Moving to the multivariate case, we start with a simple example with exactly sparse data.
Our goal is to solve
(16)
$$\begin{gathered}-\nabla\cdot(a(\mathbold{x})\nabla u(\mathbold{x}))=f(\mathbold{x})\text{ for all }\mathbold{x}\in\mathbb{T}^{d},\text{ where }\\
a(\mathbold{x})=\hat{a}_{0}+c_{a}\cos(2\pi\mathbold{k}_{a}\cdot\mathbold{x}),\quad f(\mathbold{x})=\sin(2\pi\mathbold{k}_{f}\cdot\mathbold{x}).\end{gathered}$$
We draw $c_{a}\sim\operatorname{Unif}\left([-1,1]\right)$, keep it constant across dimensions, and set $\hat{a}_{0}=4$ so that our problem remains elliptic (in the specific example below, $c_{a}\approx-0.6$).
For dimensions varying from $d=1$ to $d=1\,024$, we then draw $\mathbold{k}_{a},\mathbold{k}_{f}\sim\operatorname{Unif}\left([-499,500]^{d}\cap\mathbb{Z}^{d}\right)$.
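The random data just described can be sketched as follows (Python rather than our MATLAB code; the names and the closed-form bounds in the comments are ours):

```python
import numpy as np

def draw_low_sparsity_data(d, seed=0):
    """Random data for (16): a(x) = a0 + c_a cos(2*pi*k_a.x) and
    f(x) = sin(2*pi*k_f.x), with frequencies uniform in [-499, 500]^d.

    Since |c_a| < 1 and a0 = 4, a takes values in (3, 5), so the
    problem is elliptic. Inputs to the returned callables are (n, d)
    arrays of points on the torus.
    """
    rng = np.random.default_rng(seed)
    c_a = rng.uniform(-1.0, 1.0)
    k_a = rng.integers(-499, 501, size=d)   # upper bound is exclusive
    k_f = rng.integers(-499, 501, size=d)
    a0 = 4.0
    a = lambda x: a0 + c_a * np.cos(2 * np.pi * (x @ k_a))
    f = lambda x: np.sin(2 * np.pi * (x @ k_f))
    return a, f
```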
The PDE (16) is then solved for stamping levels $N=1,\ldots,5$.
The bandwidth of the SFT is set to $1000$ and the sparsity is set to $2$.
We then compute a Monte Carlo approximation of the proxy error choosing $200$ points drawn uniformly from $\mathbb{T}^{d}$ and also compute the proxy error exactly by virtue of the sparsity of $a$ and $f$.
The results are given in Figure 3(a).
We see that the results do not depend on the dimension of the problem.
Since all dependence on $d$ is in the runtime of the SFT, we also observe that in practice, after the SFTs of the data have been computed, re-solving the problem on different stamping levels takes about the same amount of time for each $d$.
The error also converges exponentially in the stamping level as suggested by the theoretical error guarantees.
Notably, we also see that the Monte Carlo approximation with $200$ points captures the same proxy error as the exact computation.
10.3.2. High sparsity
We expand on the exactly sparse case by testing a diffusion coefficient with much higher sparsity.
Here, we solve (16) with
(17)
$$a(\mathbold{x})=\hat{a}_{0}+\sum_{\mathbold{k}\in\mathcal{I}_{a}}c_{\mathbold{k}}\cos(2\pi\mathbold{k}\cdot\mathbold{x}).$$
The vector of coefficients is drawn as $\mathbold{c}\sim\operatorname{Unif}\left([-1,1]^{25}\right)$ once and reused in each test.
For every $d$, the frequencies $\mathbold{k}\in\mathcal{I}_{a}$ are each drawn uniformly from $[-499,500]^{d}\cap\mathbb{Z}^{d}$ as before with $|\mathcal{I}_{a}|=25$.
Here $\hat{a}_{0}=4\left\lceil\norm{\mathbold{c}}_{2}\right\rceil$ to ensure ellipticity.
Again, the bandwidth of the SFT algorithm is set to $1\,000$, but the sparsity is now fixed to $26$.
The results are given in Figure 3(b).
Again, we see that the results do not depend on the spatial dimension except for the notable example of $d=1$.
The $d=1$ case suffers from similar issues in a pessimistic proxy error as in Figure 2.
Specifically, the right-hand side for this example was generated with frequency $k_{f}=-10$ and is therefore relatively low-frequency.
Thus, the high-frequency modes leading to errors in the approximate solution are amplified by the high frequencies in $a$ when computing $f^{s,N}$.
Indeed, in further experiments (not pictured here), increasing the frequencies of $f$ or decreasing the frequencies of $a$ results in a lower proxy error.
For the other dimensions, the slight offsets in the exact proxy error can be attributed to the randomized frequencies as well as slight variations in the randomized SFT code.
However, we do see slightly more variance in the proxy error computed using Monte Carlo sampling.
This is to be expected for data with more varied frequency content, and as such, in future experiments, we increase the number of sampling points.
Note that because we consider sparsity much larger than the stamping level, the computational and memory complexity of the stamping and solution step is much higher.
As suggested by Lemma 2, the size of the resulting stamp set (and therefore the necessary matrix solve) in the largest case is at most $7\cdot 52^{7}\approx 7\times 10^{12}$ which pushes the memory boundaries of our computational resources.
10.4. Multivariate compressible
In order to test Fourier-compressible data which is not exactly sparse, we use a series of tensorized, periodized Gaussians.
Here, we present the only details necessary to demonstrate our algorithm’s effectiveness on Fourier-compressible data, but for a fuller treatment on the Fourier properties of periodized Gaussians, see e.g., [32, Section 2.1].
Here, we define the periodic Gaussian $G_{r}:\mathbb{T}\rightarrow\mathbb{R}$ by
$$G_{r}(x)=\frac{\sqrt{2\pi}}{r}\sum_{m=-\infty}^{\infty}\mathrm{e}^{-\frac{(2\pi)^{2}(x-m)^{2}}{2r^{2}}}$$
where the dilation-type parameter $r$ allows us to control the effective support of $\hat{G}_{r}$.
In practice, we truncate the infinite sum to $m\in\{-10,\ldots,10\}$ as additional terms do not change the output up to machine precision.
Note here that the nonstandard multiplicative factors help control the behavior of the function in frequency rather than space.
Given a multivariate modulating frequency $\mathbold{k}\in\mathbb{Z}^{d}$, we define the modulated, tensorized, periodic Gaussian by
$$G_{r,\mathbold{k}}(\mathbold{x})=\prod_{j=1}^{d}\mathrm{e}^{2\pi\mathrm{i}k_{j}x_{j}}G_{r}(x_{j}).$$
Finally, given a set of frequencies $\mathcal{I}\subset\mathbb{Z}^{d}$, dilation parameters $\mathbold{r}\in\mathbb{R}_{+}^{\mathcal{I}}$, and coefficients $\mathbold{c}\in\mathbb{R}^{\mathcal{I}}$, we can define Gaussian series
$$G_{\mathbold{c},\mathbold{r}}^{\mathcal{I}}(\mathbold{x}):=\sum_{\mathbold{k}\in\mathcal{I}}c_{\mathbold{k}}G_{r_{\mathbold{k}},\mathbold{k}}(\mathbold{x}).$$
Depending on the severity of the dilations chosen (i.e., $r_{\mathbold{k}}\gg 1$), this can well approximate a Fourier series with frequencies in $\mathcal{I}$.
On the other hand, a less severe dilation results in Fourier coefficients with magnitudes forming less concentrated Gaussians centered around the “frequencies” $\mathbold{k}\in\mathcal{I}$ and $-\mathbold{k}$.
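These definitions can be sketched numerically as follows (a Python illustration assuming the Gaussian exponent is quadratic in $x-m$, with the truncation at $|m|\leq 10$ described above; names are ours):

```python
import numpy as np

def periodic_gaussian(x, r, m_max=10):
    """Truncated periodized Gaussian G_r on [0,1); for moderate r the
    omitted terms with |m| > m_max are below machine precision."""
    m = np.arange(-m_max, m_max + 1)
    z = x[..., None] - m                              # all integer shifts
    return (np.sqrt(2 * np.pi) / r) * np.exp(
        -(2 * np.pi) ** 2 * z ** 2 / (2 * r ** 2)).sum(axis=-1)

def gaussian_series(x, freqs, coeffs, rs):
    """Sum_k c_k * prod_j exp(2*pi*i*k_j*x_j) * G_{r_k}(x_j)
    evaluated at an (n, d) array of points x."""
    out = np.zeros(len(x), dtype=complex)
    for k, c, r in zip(freqs, coeffs, rs):
        mod = np.exp(2j * np.pi * (x @ np.asarray(k, dtype=float)))
        out += c * mod * np.prod(periodic_gaussian(x, r), axis=1)
    return out
```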
An example of a series with its associated Fourier transform is given in Figure 5.
In our first experiment, we fix $d=2$ and vary both stamp level and sparsity to again solve (16).
The diffusion coefficient in (16) is replaced with a two-term Gaussian series $a=c_{0}+G^{\mathcal{I}}_{\mathbold{c},\mathbold{r}}$, where
$$\mathcal{I}\sim\operatorname{Unif}\left(\left([-24,25]^{2}\cap\mathbb{Z}^{2}\right)^{2}\right),\quad\mathbold{c}\sim\operatorname{Unif}\left([-1,1]^{2}\right),\quad\mathbold{r}=1.1^{2}\mathbold{1},\quad c_{0}=10\left\lceil\norm{\mathbold{c}}_{2}\right\rceil.$$
Note the increased constant factor from our previous examples, which decreases the likelihood that sparse approximations of $a$ fail the ellipticity property.
The Fourier transform of the resulting $a$ used for the following test is depicted in Figure 5(a) below.
The diffusion equation is then solved across various sparsities with increasing stamping level.
The bandwidth parameter of the SFT is set to $K=100$ to account for the wider effective support of $\hat{a}$.
The Monte Carlo proxy error is computed with $1\,000$ samples and depicted in Figure 5(b).
Here, the stamping level does not affect convergence until the sparsity reaches $s\geq 16$.
This demonstrates the tradeoff between sparsity and stamping level in regards to the error bound (13).
Until the SFT is able to capture enough useful information in $\hat{a}$, the $\norm{a-a^{s}}_{L^{\infty}}$ in the error bound dominates.
Eventually, this factor is reduced far enough that the stamping term becomes apparent.
We provide another example, where sparsity is fixed at $s=16$, and dimension and stamping level are increased.
Again we solve (16) with the diffusion coefficient replaced by the two-term Gaussian series $a=c_{0}+G^{\mathcal{I}}_{\mathbold{c},\mathbold{r}}$, where
$$\mathcal{I}\sim\operatorname{Unif}\left(\left([-249,250]^{d}\cap\mathbb{Z}^{d}\right)^{2}\right),\quad\mathbold{c}\sim\operatorname{Unif}\left([-1,1]^{2}\right),\quad\mathbold{r}=1.1^{d}\mathbold{1},\quad c_{0}=10\left\lceil\norm{\mathbold{c}}_{2}\right\rceil,$$
and $\mathbold{c}$ and $c_{0}$ are not redrawn across test cases.
The bandwidth of the SFT is set to $1\,000$ to again account for the potentially widened Fourier transform of $a$.
With a $1\,000$ point Monte Carlo approximation of the proxy error, the results are given in Figure 7.
Here we observe much the same behavior as the previous test case.
This is due to the fact that the dimension additionally drives the sparsity of the Gaussian Fourier transforms based on the choice of dilation $\mathbold{r}=1.1^{d}\mathbold{1}$.
In additional experiments performed at higher dimensions (not pictured here), this factor results in numerical instability and the approximation error blows up.
We also see that the $d=2$ and $d=4$ examples are swapped from their expected positions (and the $d=2$ case even mildly benefits from an increased stamping level).
This is attributed to the random draw of the frequency locations affecting the proxy error as well as the SFT algorithm performing better in lower dimensions when all parameters are fixed.
10.5. Three-dimensional exactly sparse advection-diffusion-reaction equation
We now extend our numerical experiments to the situation of a three-dimensional advection-diffusion-reaction equation.
See Remark 4 for the PDE setup and necessary algorithmic modifications.
Numerically, we work with the following exactly sparse data:
(18)
$$\begin{gathered}a(\mathbold{x})=\hat{a}_{0}+\sum_{\mathbold{k}\in\mathcal{I}_{a}^{\mathrm{sine}}}c_{a,\mathbold{k}}^{\mathrm{sine}}\sin(2\pi\mathbold{k}\cdot\mathbold{x})+\sum_{\mathbold{k}\in\mathcal{I}_{a}^{\mathrm{cosine}}}c_{a,\mathbold{k}}^{\mathrm{cosine}}\cos(2\pi\mathbold{k}\cdot\mathbold{x})\\
b_{j}(\mathbold{x})=\sum_{\mathbold{k}\in\mathcal{I}_{b_{j}}^{\mathrm{sine}}}c_{b_{j},\mathbold{k}}^{\mathrm{sine}}\sin(2\pi\mathbold{k}\cdot\mathbold{x})+\sum_{\mathbold{k}\in\mathcal{I}_{b_{j}}^{\mathrm{cosine}}}c_{b_{j},\mathbold{k}}^{\mathrm{cosine}}\cos(2\pi\mathbold{k}\cdot\mathbold{x})\text{ for all }j=1,2,3\\
c(\mathbold{x})=\hat{c}_{0}+\sum_{\mathbold{k}\in\mathcal{I}_{c}^{\mathrm{sine}}}c_{c,\mathbold{k}}^{\mathrm{sine}}\sin(2\pi\mathbold{k}\cdot\mathbold{x})+\sum_{\mathbold{k}\in\mathcal{I}_{c}^{\mathrm{cosine}}}c_{c,\mathbold{k}}^{\mathrm{cosine}}\cos(2\pi\mathbold{k}\cdot\mathbold{x})\\
f(\mathbold{x})=\sum_{\mathbold{k}\in\mathcal{I}_{f}^{\mathrm{sine}}}c_{f,\mathbold{k}}^{\mathrm{sine}}\sin(2\pi\mathbold{k}\cdot\mathbold{x})+\sum_{\mathbold{k}\in\mathcal{I}_{f}^{\mathrm{cosine}}}c_{f,\mathbold{k}}^{\mathrm{cosine}}\cos(2\pi\mathbold{k}\cdot\mathbold{x}),\end{gathered}$$
where
$$\begin{gathered}\absolutevalue{\mathcal{I}_{a}^{\mathrm{sine}}}=\absolutevalue{\mathcal{I}_{a}^{\mathrm{cosine}}}=2\\
\absolutevalue{\mathcal{I}_{b_{j}}^{\mathrm{sine}}}=\absolutevalue{\mathcal{I}_{b_{j}}^{\mathrm{cosine}}}=\absolutevalue{\mathcal{I}_{c}^{\mathrm{sine}}}=\absolutevalue{\mathcal{I}_{c}^{\mathrm{cosine}}}=5\text{ for all }j=1,2,3\\
\absolutevalue{\mathcal{I}_{f}^{\mathrm{sine}}}=2,\text{ and }\absolutevalue{\mathcal{I}_{f}^{\mathrm{cosine}}}=3.\end{gathered}$$
In total, there are $45$ terms composing the differential operator, and $5$ terms composing the forcing function.
Each frequency is randomly drawn from $\operatorname{Unif}([-49,50]^{3}\cap\mathbb{Z}^{3})$ and each coefficient for $a$ and $f$ from $\operatorname{Unif}([-1,1])$.
The coefficients for $\mathbold{b}$ and $c$ are drawn from $\operatorname{Unif}([0,1])$.
To ensure well-posedness, $\hat{a}_{0}=4\left\lceil\sqrt{\norm{c_{a}^{\mathrm{sine}}}_{2}^{2}+\norm{c_{a}^{\mathrm{cosine}}}_{2}^{2}}\right\rceil$, and $\hat{c}_{0}=4\left\lceil\sqrt{\norm{c_{c}^{\mathrm{sine}}}_{2}^{2}+\norm{c_{c}^{\mathrm{cosine}}}_{2}^{2}}\right\rceil$.
The bandwidth of the SFT is set to $K=100$, and we consider sparsity levels $s=2$ and $s=5$.
Due to the large size of the stamp, we only consider stamping levels $N=1,2$.
The resulting true and Monte Carlo proxy error (sampled over $1\,000$ points) is given in Table 1.
Additionally, Figure 8 shows a portion of a slice through $f$ as well as $f^{2,1}$ and $f^{10,2}$ which are computed by passing $u^{2,1}$ and $u^{10,2}$ through the differential operator.
We note that $f^{10,2}$ and $f$ appear qualitatively indistinguishable.
However, since the sparsity level $s=2$ used to compute $u^{2,1}$ is lower than the sparsity of any term in (18), $f^{2,1}$ loses some of the characteristics of the original source term.
Though it captures some of the true behavior in both larger scales (e.g., the oscillations moving in the northeast direction) and finer scales (e.g., the oscillations moving in the southeast direction), some interfering modes which produce the “wavy” effect are left out.
This is supported by the relative errors reported in Table 1.
Note also that the stamping level affects the convergence in the $s=5$ case, but not the $s=2$ case.
This is due to the sparsity related errors in (13) overwhelming the stamping term until the SFT approximations of the data are accurate enough.
Appendix A Stamp set cardinality bound
We begin by proving the following combinatorial upper bound for the cardinality of a stamp set.
Lemma 8.
Suppose that $\mathbold{0}\in\operatorname{supp}(\hat{a})$, $\operatorname{supp}(\hat{a})=-\operatorname{supp}(\hat{a})$, and $\absolutevalue{\operatorname{supp}(\hat{a})}=s$.
Then
(19)
$$\absolutevalue{\mathcal{S}^{N}[\hat{a}](\operatorname{supp}(\hat{f}))}\leq\absolutevalue{\operatorname{supp}(\hat{f})}\sum_{n=0}^{N}\sum_{t=0}^{\min(n,(s-1)/2)}2^{t}\binom{(s-1)/2}{t}\binom{n-1}{t-1}.$$
Proof.
We begin by separating $\mathcal{S}^{N}$ into the disjoint pieces
$$\mathcal{S}^{N}=\bigsqcup_{n=0}^{N}\left(\mathcal{S}^{n}\setminus\left(\bigcup_{i=0}^{n-1}\mathcal{S}^{i}\right)\right)$$
and computing the cardinality of each of these sets (where we take $\mathcal{S}^{-1}=\emptyset$).
If $\mathbold{k}\in\mathcal{S}^{n}\setminus\left(\cup_{i=0}^{n-1}\mathcal{S}^{i}\right)$, then we are able to write $\mathbold{k}$ as
(20)
$$\mathbold{k}=\mathbold{k}_{f}+\sum_{m=1}^{n}\mathbold{k}_{a}^{m}$$
where $\mathbold{k}_{f}\in\operatorname{supp}(\hat{f})$ and $\mathbold{k}_{a}^{m}\in\operatorname{supp}(\hat{a})\setminus\{\mathbold{0}\}$ for all $m=1,\ldots,n$.
Additionally, since $\mathbold{k}$ is not in any earlier stamping sets, this is the smallest $n$ for which this is possible.
In particular, it is not possible for any two frequencies in the sum to be negatives of each other, which would result in pairs of cancelling terms.
With this summation in mind, arbitrarily split $\operatorname{supp}(\hat{a})\setminus\{\mathbold{0}\}$ into $A\sqcup-A$ (i.e., place all frequencies which do not negate each other into $A$ and their negatives in $-A$).
By collecting like frequencies that occur as a $\mathbold{k}_{a}^{m}$ term in (20), we can rewrite this sum as
(21)
$$\mathbold{k}=\mathbold{k}_{f}+\sum_{\mathbold{k}_{a}\in A}s(\mathbold{k},\mathbold{k}_{a})m(\mathbold{k},\mathbold{k}_{a})\mathbold{k}_{a},$$
where the sign function $s(\mathbold{k},\mathbold{k}_{a})$ is given by
$$s(\mathbold{k},\mathbold{k}_{a}):=\begin{cases}1&\text{if $\mathbold{k}_{a}$ is a term in the summation \eqref{eq:StampFrequencySum}}\\
-1&\text{if $-\mathbold{k}_{a}$ is a term in the summation \eqref{eq:StampFrequencySum}}\\
0&\text{otherwise}\end{cases}$$
and the multiplicity function $m(\mathbold{k},\mathbold{k}_{a})$ is defined as the number of times that $\mathbold{k}_{a}$ or $-\mathbold{k}_{a}$ appears as a $\mathbold{k}_{a}^{m}$ term in (20).
Letting $\mathbold{s}(\mathbold{k}):=(s(\mathbold{k},\mathbold{k}_{a}))_{\mathbold{k}_{a}\in A}$ and $\mathbold{m}(\mathbold{k}):=(m(\mathbold{k},\mathbold{k}_{a}))_{\mathbold{k}_{a}\in A}$, we can then identify any $\mathbold{k}\in\mathcal{S}^{n}\setminus\left(\cup_{i=0}^{n-1}\mathcal{S}^{i}\right)$ with the tuple
$$(\mathbold{k}_{f},\mathbold{s}(\mathbold{k}),\mathbold{m}(\mathbold{k}))\in\operatorname{supp}(\hat{f})\times\{-1,0,1\}^{A}\times\{0,\ldots,n\}^{A}.$$
Upper bounding the number of these tuples that can correspond to a value of $\mathbold{k}\in\mathcal{S}^{n}\setminus\left(\cup_{i=0}^{n-1}\mathcal{S}^{i}\right)$ will then upper bound the cardinality of this set.
Since any $\mathbold{k}_{f}\in\operatorname{supp}(\hat{f})$ can result in a valid $\mathbold{k}$ value, we will focus on the pairs of sign and multiplicity vectors.
Define by $T_{n}\subset\{-1,0,1\}^{A}\times\{0,\ldots,n\}^{A}$ the set of valid sign and multiplicity pairs that can correspond to a $\mathbold{k}\in\mathcal{S}^{n}\setminus\left(\cup_{i=0}^{n-1}\mathcal{S}^{i}\right)$.
In particular, for $(\mathbold{s},\mathbold{m})\in T_{n}$, $\norm{\mathbold{m}}_{1}=n$ and $\operatorname{supp}(\mathbold{s})=\operatorname{supp}(\mathbold{m})$.
Thus, we can write
$$T_{n}\subset\bigsqcup_{t=0}^{\min(n,|A|)}\left\{(\mathbold{s},\mathbold{m})\in\{-1,0,1\}^{A}\times\{0,\ldots,n\}^{A}\mid\norm{\mathbold{m}}_{1}=n\text{ and }|\operatorname{supp}(\mathbold{s})|=|\operatorname{supp}(\mathbold{m})|=t\right\}.$$
This inner set then corresponds to the $t$-partitions of the integer $n$ spread over the $|A|$ entries of $\mathbold{m}$ where each non-zero term is assigned a sign $-1$ or $1$.
The cardinality is therefore $2^{t}\binom{|A|}{t}\binom{n-1}{t-1}$: the first factor is from the possible sign options, the second is the number of ways to choose the entries of $\mathbold{m}$ which are nonzero, and the last is the number of $t$-partitions of $n$ which will fill the nonzero entries of $\mathbold{m}$.
Noting that $|A|=\frac{s-1}{2}$, our final cardinality estimate is
$$\displaystyle\absolutevalue{\mathcal{S}^{N}}$$
$$\displaystyle=\sum_{n=0}^{N}\absolutevalue{\mathcal{S}^{n}\setminus\left(\bigcup_{i=0}^{n-1}\mathcal{S}^{i}\right)}$$
$$\displaystyle\leq\sum_{n=0}^{N}\absolutevalue{\operatorname{supp}(\hat{f})}|T_{n}|$$
$$\displaystyle\leq\absolutevalue{\operatorname{supp}(\hat{f})}\sum_{n=0}^{N}\sum_{t=0}^{\min(n,(s-1)/2)}2^{t}\binom{(s-1)/2}{t}\binom{n-1}{t-1}$$
as desired.
∎
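The count in this proof can be sanity-checked by brute force on a small one-dimensional example. The sketch below enumerates the stamp set via repeated Minkowski sums (valid since $\mathbold{0}\in\operatorname{supp}(\hat{a})$ makes the sets nested) and evaluates the right-hand side of (19); the enumeration is ours, not the paper's implementation:

```python
from math import comb

def stamp_set(supp_a, supp_f, N):
    """S^N: frequencies of supp(f) shifted by up to N frequencies of supp(a).
    Since 0 is in supp(a), the S^n are nested and a repeated Minkowski
    sum suffices. Frequencies are tuples of integers."""
    S = set(supp_f)
    for _ in range(N):
        S = {tuple(x + y for x, y in zip(kf, ka)) for kf in S for ka in supp_a}
    return S

def lemma8_bound(s, nf, N):
    """Right-hand side of (19), with the conventions C(-1,-1) = 1 and
    C(n-1,-1) = 0 for n >= 1 in the t = 0 terms."""
    r = (s - 1) // 2
    total = 0
    for n in range(N + 1):
        for t in range(min(n, r) + 1):
            cn = 1 if (n == 0 and t == 0) else (0 if t == 0 else comb(n - 1, t - 1))
            total += 2 ** t * comb(r, t) * cn
    return nf * total
```

For $\operatorname{supp}(\hat{a})=\{0,\pm 3,\pm 7\}$ (so $s=5$) and a single frequency in $\operatorname{supp}(\hat{f})$, every sum $3a+7b$ with $|a|+|b|\leq N$ is distinct, so the bound (19) is attained exactly.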
Though this upper bound is much tighter than the one given in the main text, it is harder to parse.
As such, we simplify it to the bound presented in Lemma 2, restated here for convenience.
Lemma 2.
Suppose that $\mathbold{0}\in\operatorname{supp}(\hat{a})$, $\operatorname{supp}(\hat{a})=-\operatorname{supp}(\hat{a})$, and $\absolutevalue{\operatorname{supp}(\hat{f})}\leq\absolutevalue{\operatorname{supp}(\hat{a})}=s$.
Then
$$\absolutevalue{\mathcal{S}^{N}[\hat{a}](\operatorname{supp}(\hat{f}))}\leq 7\max(s,2N+1)^{\min(s,2N+1)}.$$
Proof.
Let $r=(s-1)/2$.
We consider two cases:
Case 1: $r\geq N$:
We estimate the innermost sum of (19).
Since $r\geq N\geq n$, $\min(n,(s-1)/2)=n$.
By upper bounding the binomial coefficients with powers of $r$, we obtain
$$\displaystyle\sum_{t=0}^{n}2^{t}\binom{r}{t}\binom{n-1}{t-1}$$
$$\displaystyle\leq\sum_{t=0}^{n}2^{t}(r^{t})^{2}$$
$$\displaystyle\leq 2(2r^{2})^{n}$$
where the second estimate follows from bounding the geometric sum by twice its largest term.
Again, bounding the next geometric sum by double the largest term, we have
$$\absolutevalue{\mathcal{S}^{N}}\leq\absolutevalue{\operatorname{supp}(\hat{f})}\sum_{n=0}^{N}2(2r^{2})^{n}\leq(2r+1)4(2r^{2})^{N}\leq 2(2r+1)^{2N+1}=2s^{2N+1}.$$
Case 2: $r<N$:
Bounding the innermost sum of (19) proceeds much the same way as Case 1, but we must first split the outermost sum into the first $r+1$ terms and last $N-r$ terms.
Working with the first terms, we find
$$\sum_{n=0}^{r}\sum_{t=0}^{n}2^{t}\binom{r}{t}\binom{n-1}{t-1}\leq 4(2r^{2})^{r}$$
using the argument in Case 1.
Now, we bound
$$\displaystyle\sum_{n=r+1}^{N}\sum_{t=0}^{r}2^{t}\binom{r}{t}\binom{n-1}{t-1}$$
$$\displaystyle\leq\sum_{n=r+1}^{N}2(2(n-1)^{2})^{r}$$
$$\displaystyle\leq 2^{r+1}\int_{r}^{N}n^{2r}\,dn$$
$$\displaystyle\leq\sqrt{2}\frac{(\sqrt{2}N)^{2r+1}}{2r+1}.$$
Thus,
$$\absolutevalue{\mathcal{S}^{N}}\leq\absolutevalue{\operatorname{supp}(\hat{f})}\left[4(2r^{2})^{r}+\sqrt{2}\frac{(\sqrt{2}N)^{2r+1}}{2r+1}\right]\leq 5\sqrt{2}\left(\sqrt{2}N\right)^{s}\leq 7(2N+1)^{s}.$$
Combining the two cases gives the desired upper bound.
∎
Appendix B Proof of SFT recovery guarantees
We restate the theorem for convenience.
Theorem 2 ([23], Corollary 2).
Let $\mathcal{I}\subset\mathbb{Z}^{d}$ be a frequency set of interest with expansion defined as $K:=\max_{j\in\{1,\ldots,d\}}(\max_{\mathbold{k}\in\mathcal{I}}k_{j}-\min_{\mathbold{l}\in\mathcal{I}}l_{j})$ (i.e., the sidelength of the smallest hypercube containing $\mathcal{I}$), and $\Lambda(\mathbold{z},M)$ be a reconstructing rank-1 lattice for $\mathcal{I}$.
There exists a fast, randomized SFT which, given $\Lambda(\mathbold{z},M)$, sampling access to $g\in L^{2}$, and a failure probability $\sigma\in(0,1]$, will produce a $2s$-sparse approximation $\hat{\mathbold{g}}^{s}$ of $\hat{g}$ and function $g^{s}:=\sum_{\mathbold{k}\in\operatorname{supp}(\hat{\mathbold{g}}^{s})}\hat{g}_{\mathbold{k}}^{s}e_{\mathbold{k}}$ approximating $g$ satisfying
$$\displaystyle\norm{g-g^{s}}_{L^{2}}\leq\norm{\hat{g}-\hat{\mathbold{g}}^{s}}_{\ell^{2}}$$
$$\displaystyle\leq(25+3K)\left[\frac{\norm{\hat{g}\rvert_{\mathcal{I}}-(\hat{g}\rvert_{\mathcal{I}})_{s}^{\mathrm{opt}}}_{1}}{\sqrt{s}}+\sqrt{s}\norm{\hat{g}-\hat{g}\rvert_{\mathcal{I}}}_{1}\right]$$
with probability exceeding $1-\sigma$.
If $g\in L^{\infty}$, then we additionally have
$$\norm{g-g^{s}}_{L^{\infty}}\leq\norm{\hat{g}-\hat{\mathbold{g}}^{s}}_{\ell^{1}}\leq(33+4K)\left[\norm{\hat{g}\rvert_{\mathcal{I}}-(\hat{g}\rvert_{\mathcal{I}})_{s}^{\mathrm{opt}}}_{1}+\norm{\hat{g}-\hat{g}\rvert_{\mathcal{I}}}_{1}\right]$$
with the same probability estimate.
The total number of samples of $g$ and computational complexity of the algorithm can be bounded above by
$$\mathcal{O}\left(ds\log^{3}(dKM)\log\left(\frac{dKM}{\sigma}\right)\right).$$
Proof.
The $L^{2}$ upper bound is mostly the same as the original result.
We are not considering noisy measurements here, which removes the $\sqrt{s}e_{\infty}$ term from that result (though this could be added back in if desired).
Additionally, we have upper bounded $\norm{\hat{g}-\hat{g}\rvert_{\mathcal{I}}}_{2}$ by $\sqrt{s}\norm{\hat{g}-\hat{g}\rvert_{\mathcal{I}}}_{1}$ adding one to the constant.
The $L^{\infty}$ / $\ell^{1}$ bound was not given in the original paper, but can be proven using the same techniques.
In particular, replacing the $\ell^{2}$ norm by the $\ell^{1}$ norm in [23, Lemma 4] has the effect of replacing all $\ell^{2}$ norms with $\ell^{1}$ norms and replacing $\sqrt{2s}$ by $2s$.
This small change cascades through the proof of Property 3 in [23, Theorem 2] (again, with $\ell^{2}$ norms replaced by $\ell^{1}$ norms) to produce the univariate $\ell^{1}$ upper bound (in the language of the original paper)
$$\norm{\hat{\mathbold{a}}-\mathbold{v}}_{1}\leq\norm{\hat{\mathbold{a}}-\hat{\mathbold{a}}_{2s}^{\mathrm{opt}}}_{1}+(16+6\sqrt{2})\left(\norm{\hat{\mathbold{a}}-\hat{\mathbold{a}}_{s}^{\mathrm{opt}}}_{1}+s(\norm{\hat{a}-\hat{\mathbold{a}}}_{1}+\norm{\mu}_{\infty})\right)=:\eta_{1}.$$
A similar logic applies to revising the proof of [23, Lemma 1].
Equation (4) with all $\ell^{2}$ norms replaced by $\ell^{1}$ norms is derived the same way, and the first term is upper bounded by the maximal entry of the vector multiplied by the number of elements without the square root.
The remainder of the proof carries through without change which leads to a final error estimate of
$$\norm{\mathbold{b}-c}_{\ell^{1}}\leq(\beta+\eta_{\infty})\max(s-\absolutevalue{\mathcal{S}_{\beta}},0)+\eta_{1}+\norm{c\rvert_{\mathcal{I}}-c\rvert_{\mathcal{S}_{\beta}}}_{1}+\norm{c-c\rvert_{\mathcal{I}}}_{1}.$$
Finally, the proof of [23, Corollary 2] follows using the same logic as the original substituting these revised upper bounds.
∎
Acknowledgements
This work was supported in part by the National Science Foundation Award Numbers DMS 2106472 and 1912706.
This work was also supported in part through computational resources and services provided by the Institute for Cyber-Enabled Research at Michigan State University.
We thank Lutz Kämmerer for helpful discussions related to random rank-1 lattice construction and Ben Adcock and Simone Brugiapaglia for motivating discussions related to compressive sensing and high-dimensional PDEs.
References
[1]
Sina Bittens, Ruochuan Zhang, and Mark A Iwen, A deterministic sparse
FFT for functions with structured Fourier sparsity, Advances in
Computational Mathematics 45 (2019), no. 2, 519–561.
[2]
John P. Boyd, Chebyshev and Fourier spectral methods, 2nd ed., rev
ed., Dover Publications, Mineola, N.Y, 2001.
[3]
S Brugiapaglia, S Micheletti, F Nobile, and S Perotto,
Wavelet–Fourier CORSING techniques for multidimensional
advection–diffusion–reaction equations, IMA Journal of Numerical
Analysis (2020), no. draa036.
[4]
S. Brugiapaglia, S. Micheletti, and S. Perotto, Compressed solving: A
numerical approximation technique for elliptic PDEs based on compressed
sensing, Computers & Mathematics with Applications 70 (2015),
no. 6, 1306–1335 (en).
[5]
Simone Brugiapaglia, COmpRessed SolvING: Sparse Approximation of
PDEs based on compressed sensing, Ph.D. thesis, Polytecnico Di Milano,
Milan, Italy, January 2016.
[6]
by same author, A compressive spectral collocation method for the diffusion
equation under the restricted isometry property, Quantification of
Uncertainty: Improving Efficiency and Technology: QUIET selected
contributions (Marta D’Elia, Max Gunzburger, and Gianluigi Rozza, eds.),
Lecture Notes in Computational Science and Engineering, Springer
International Publishing, Cham, 2020, pp. 15–40 (en).
[7]
Simone Brugiapaglia, Sjoerd Dirksen, Hans Christian Jung, and Holger Rauhut,
Sparse recovery in bounded Riesz systems with applications to
numerical methods for PDEs, Applied and Computational Harmonic Analysis
53 (2021), 231–269 (en).
[8]
Simone Brugiapaglia, Fabio Nobile, Stefano Micheletti, and Simona Perotto,
A theoretical study of COmpRessed SolvING for
advection-diffusion-reaction problems, Mathematics of Computation
87 (2018), no. 309, 1–38 (en).
[9]
Hans-Joachim Bungartz and Michael Griebel, Sparse grids, Acta Numerica
13 (2004), 147–269 (en), Publisher: Cambridge University Press.
[10]
Claudio Canuto, M. Yousuff Hussaini, Alfio Quarteroni, and Thomas A. Zang,
Spectral methods: Fundamentals in single domains, Scientific
Computation, Springer-Verlag, Berlin Heidelberg, 2006 (en).
[11]
Albert Cohen, Wolfgang Dahmen, and Ronald DeVore, Compressed sensing and
best $k$-term approximation, Journal of the American Mathematical Society
22 (2009), no. 1, 211–231 (en).
[12]
Dinh Dũng, Vladimir Temlyakov, and Tino Ullrich, Hyperbolic cross
approximation, Advanced Courses in Mathematics - CRM Barcelona,
Springer International Publishing, Cham, 2018 (en).
[13]
Ingrid Daubechies, Olof Runborg, and Jing Zou, A sparse spectral method
for homogenization multiscale problems, Multiscale Modeling & Simulation
6 (2007), no. 3, 711–740, Publisher: Society for Industrial and
Applied Mathematics.
[14]
Michael Döhler, Stefan Kunis, and Daniel Potts, Nonequispaced
hyperbolic cross fast fourier transform, SIAM Journal on Numerical Analysis
47 (2010), no. 6, 4415–4428, Publisher: Society for Industrial and
Applied Mathematics.
[15]
Lawrence C. Evans, Partial differential equations, second edition ed.,
Graduate studies in mathematics, no. v. 19, American Mathematical Society,
Providence, R.I, 2010.
[16]
Anna C Gilbert, Sudipto Guha, Piotr Indyk, Shanmugavelayutham Muthukrishnan,
Correlations between SIDIS azimuthal asymmetries in target and current fragmentation regions
A. Kotzinian¹,², M. Anselmino¹ and V. Barone³
Abstract
We briefly describe the recently developed leading twist formalism for spin- and transverse-momentum-dependent fracture functions, and present results for the production of spinless hadrons in the target fragmentation region (TFR) of SIDIS [1]. In this case not all fracture functions can be accessed, and only a Sivers-like single-spin azimuthal asymmetry appears in the LO cross section. We then show [2] that the process of double hadron production in polarized SIDIS, with one spinless hadron produced in the current fragmentation region (CFR) and another in the TFR, would provide access to all 16 leading twist fracture functions. Some particular cases are presented.
¹ Dipartimento di Fisica Teorica, Università di Torino; INFN, Sezione di Torino, 10125 Torino, Italy
² Yerevan Physics Institute, 2 Alikhanyan Brothers St., 375036 Yerevan, Armenia
³ Di.S.T.A., Università del Piemonte Orientale “A. Avogadro”; INFN, Gruppo Collegato di Alessandria, 15121 Alessandria, Italy
PACS 13.85.Fb – Inelastic scattering: many-particle final states.
PACS 13.87.Fh – Fragmentation into hadrons.
PACS 13.88.+e – Polarization in interactions and scattering.
1 Introduction
As has become increasingly clear over the last decades, the study of the three-dimensional spin-dependent partonic structure of the nucleon in SIDIS processes requires a full understanding of the hadronization process following the hard lepton-quark scattering. So far most SIDIS measurements have been performed in the CFR, where an adequate theoretical formalism based on distribution and fragmentation functions has been established (see, for example, Ref. [3]). However, to avoid misinterpretations, the factorized approach to the description of SIDIS in the TFR also has to be explored. The corresponding theoretical basis, the fracture functions formalism, was established in Ref. [4] for the unpolarized cross section integrated over the hadron transverse momentum. Recently this approach was generalized [1] to the spin and transverse momentum dependent (STMD) case.
We consider the process (adopting the same notations as in
Ref. [2])
$$l(\ell,\lambda)+N(P,S)\to l(\ell^{\prime})+h(P_{h})+X(P_{X})$$
(1)
with the hadron $h$ produced in the TFR. We use the standard DIS notations and, in the $\gamma^{*}$-$N$ c.m. frame, we define the $z$-axis along the direction of $\boldsymbol{q}$ (the virtual photon momentum) and the $x$-axis along $\boldsymbol{\ell}_{T}$, the lepton transverse momentum. The kinematics of the produced hadron is defined by the variable $\zeta=P_{h}^{-}/P^{-}\simeq E_{h}/E$ and its transverse momentum $\boldsymbol{P}_{h\perp}$ (with magnitude $P_{h\perp}$ and azimuthal angle $\phi_{h}$).
Assuming TMD factorization, the cross section of the process (1) can be written as
$$\frac{\mathrm{d}\sigma^{l(\ell,\lambda)+N(P_{N},S)\to l(\ell^{\prime})+h(P)+X}}{\mathrm{d}x_{B}\,\mathrm{d}Q^{2}\,\mathrm{d}\zeta\,\mathrm{d}^{2}\boldsymbol{P}_{h\perp}\,\mathrm{d}\phi_{S}}=\mathcal{M}\otimes\frac{\mathrm{d}\sigma^{\ell(l,\lambda)+q(k,s)\to\ell(l^{\prime})+q(k^{\prime},s^{\prime})}}{\mathrm{d}Q^{2}}\,,$$
(2)
where $\phi_{S}$ is the azimuthal angle of the nucleon transverse polarization.
The STMD fracture function $\mathcal{M}$ has a clear probabilistic meaning: it is the conditional probability to produce a hadron $h$ in the TFR when the hard scattering occurs on a quark $q$ from the target nucleon $N$. The expression of the non-coplanar polarized lepton-quark hard scattering cross section can be found in Ref. [5].
The most general expressions of the LO STMD fracture functions for unpolarized ($\mathcal{M}^{[\gamma^{-}]}$), longitudinally polarized ($\mathcal{M}^{[\gamma^{-}\gamma_{5}]}$) and transversely polarized ($\mathcal{M}^{[\mathrm{i}\,\sigma^{i-}\gamma_{5}]}$) quarks are introduced in the expansion of the leading twist projections as [1, 2]:
$$\mathcal{M}^{[\gamma^{-}]}=\hat{u}_{1}+\frac{\boldsymbol{P}_{h\perp}\times\boldsymbol{S}_{\perp}}{m_{h}}\,\hat{u}_{1T}^{h}+\frac{\boldsymbol{k}_{\perp}\times\boldsymbol{S}_{\perp}}{m_{N}}\,\hat{u}_{1T}^{\perp}+\frac{S_{\parallel}\,(\boldsymbol{k}_{\perp}\times\boldsymbol{P}_{h\perp})}{m_{N}\,m_{h}}\,\hat{u}_{1L}^{\perp h}$$
(3)
$$\mathcal{M}^{[\gamma^{-}\gamma_{5}]}=S_{\parallel}\,\hat{l}_{1L}+\frac{\boldsymbol{P}_{h\perp}\cdot\boldsymbol{S}_{\perp}}{m_{h}}\,\hat{l}_{1T}^{h}+\frac{\boldsymbol{k}_{\perp}\cdot\boldsymbol{S}_{\perp}}{m_{N}}\,\hat{l}_{1T}^{\perp}+\frac{\boldsymbol{k}_{\perp}\times\boldsymbol{P}_{h\perp}}{m_{N}\,m_{h}}\,\hat{l}_{1}^{\perp h}$$
(4)
$$\mathcal{M}^{[\mathrm{i}\,\sigma^{i-}\gamma_{5}]}=S_{\perp}^{i}\,\hat{t}_{1T}+\frac{S_{\parallel}\,P_{h\perp}^{i}}{m_{h}}\,\hat{t}_{1L}^{h}+\frac{S_{\parallel}\,k_{\perp}^{i}}{m_{N}}\,\hat{t}_{1L}^{\perp}$$
(5)
$$+\,\frac{(\boldsymbol{P}_{h\perp}\cdot\boldsymbol{S}_{\perp})\,P_{h\perp}^{i}}{m_{h}^{2}}\,\hat{t}_{1T}^{hh}+\frac{(\boldsymbol{k}_{\perp}\cdot\boldsymbol{S}_{\perp})\,k_{\perp}^{i}}{m_{N}^{2}}\,\hat{t}_{1T}^{\perp\perp}$$
$$+\,\frac{(\boldsymbol{k}_{\perp}\cdot\boldsymbol{S}_{\perp})\,P_{h\perp}^{i}-(\boldsymbol{P}_{h\perp}\cdot\boldsymbol{S}_{\perp})\,k_{\perp}^{i}}{m_{N}\,m_{h}}\,\hat{t}_{1T}^{\perp h}$$
$$+\,\frac{\epsilon_{\perp}^{ij}\,P_{h\perp j}}{m_{h}}\,\hat{t}_{1}^{h}+\frac{\epsilon_{\perp}^{ij}\,k_{\perp j}}{m_{N}}\,\hat{t}_{1}^{\perp}\,,$$
where $\boldsymbol{k}_{\perp}$ is the quark transverse momentum, and by the vector product of two-dimensional vectors ${\bf a}$ and ${\bf b}$ we mean the pseudo-scalar quantity ${\bf a}\times{\bf b}=\epsilon^{ij}\,a_{i}b_{j}=ab\,\sin(\phi_{b}-\phi_{a})$.
All fracture functions depend on the scalar variables $x_{B}$, $k_{\perp}^{2}$, $\zeta$, $P_{h\perp}^{2}$ and $\boldsymbol{k}_{\perp}\cdot\boldsymbol{P}_{h\perp}$.
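The pseudo-scalar two-dimensional cross product defined above can be checked numerically; the following is a minimal sketch (the function names `cross2d` and `polar` are illustrative, not from the paper):

```python
import math

def cross2d(a, b):
    """Pseudo-scalar 2D cross product: eps^{ij} a_i b_j = a_x b_y - a_y b_x."""
    return a[0] * b[1] - a[1] * b[0]

def polar(r, phi):
    """Build a 2D vector of magnitude r and azimuthal angle phi."""
    return (r * math.cos(phi), r * math.sin(phi))

# Check a x b = |a| |b| sin(phi_b - phi_a) for arbitrary magnitudes and angles.
ra, rb = 1.7, 0.6
phi_a, phi_b = 0.4, 2.1
lhs = cross2d(polar(ra, phi_a), polar(rb, phi_b))
rhs = ra * rb * math.sin(phi_b - phi_a)
assert abs(lhs - rhs) < 1e-12
```

The sign convention matters: with $\epsilon^{12}=+1$ the product is positive when ${\bf b}$ is rotated counterclockwise with respect to ${\bf a}$.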
For the production of a spinless hadron in the TFR one has [1]:
$$\frac{\mathrm{d}\sigma^{\ell(l,\lambda)+N(P_{N},S)\to\ell(l^{\prime})+h(P)+X}}{\mathrm{d}x_{B}\,\mathrm{d}y\,\mathrm{d}\zeta\,\mathrm{d}^{2}\boldsymbol{P}_{h\perp}\,\mathrm{d}\phi_{S}}=\frac{\alpha_{\rm em}^{2}}{Q^{2}y}$$
(6)
$$\times\Bigg\{\left[1+(1-y)^{2}\right]\sum_{a}e_{a}^{2}\left[\tilde{u}_{1}(x_{B},\zeta,P_{h\perp}^{2})-S_{T}\,\frac{P_{h\perp}}{m_{h}}\,\tilde{u}_{1T}^{h}(x_{B},\zeta,P_{h\perp}^{2})\,\sin(\phi_{h}-\phi_{S})\right]$$
$$+\,\lambda\,y\,(2-y)\sum_{a}e_{a}^{2}\left[S_{L}\,\tilde{l}_{1L}(x_{B},\zeta,P_{h\perp}^{2})+S_{T}\,\frac{P_{h\perp}}{m_{h}}\,\tilde{l}_{1T}^{h}(x_{B},\zeta,P_{h\perp}^{2})\,\cos(\phi_{h}-\phi_{S})\right]\Bigg\}\,,$$
where the $\boldsymbol{k}_{\perp}$-integrated fracture functions are given by
$$\tilde{u}_{1}(x_{B},\zeta,P_{h\perp}^{2})=\int\!\mathrm{d}^{2}\boldsymbol{k}_{\perp}\,\hat{u}_{1}\,,\qquad\tilde{u}_{1T}^{h}(x_{B},\zeta,P_{h\perp}^{2})=\int\!\mathrm{d}^{2}\boldsymbol{k}_{\perp}\,\Big(\hat{u}_{1T}^{h}+\frac{m_{h}}{m_{N}}\frac{\boldsymbol{k}_{\perp}\cdot\boldsymbol{P}_{h\perp}}{P_{h\perp}^{2}}\,\hat{u}_{1T}^{\perp}\Big)\,,$$
$$\tilde{l}_{1L}(x_{B},\zeta,P_{h\perp}^{2})=\int\!\mathrm{d}^{2}\boldsymbol{k}_{\perp}\,\hat{l}_{1L}\,,\qquad\tilde{l}_{1T}^{h}(x_{B},\zeta,P_{h\perp}^{2})=\int\!\mathrm{d}^{2}\boldsymbol{k}_{\perp}\,\Big(\hat{l}_{1T}^{h}+\frac{m_{h}}{m_{N}}\frac{\boldsymbol{k}_{\perp}\cdot\boldsymbol{P}_{h\perp}}{P_{h\perp}^{2}}\,\hat{l}_{1T}^{\perp}\Big)\,.$$
(7)
We see that single-hadron production in the TFR of SIDIS does not provide access to all fracture functions. At LO, with unpolarized leptons, the cross section contains only the Sivers-like single-spin azimuthal asymmetry.
2 Double hadron leptoproduction (DSIDIS)
In order to have access to all fracture functions one has to “measure” the transverse polarization of the scattered quark, for example by exploiting the Collins effect [6], i.e. the azimuthal correlation of the fragmenting quark transverse polarization, $\boldsymbol{s}^{\prime}_{T}$, with the produced hadron transverse momentum, $\boldsymbol{p}_{\perp}$:
$$D(z,\boldsymbol{p}_{\perp})=D_{1}(z,p_{\perp}^{2})+\frac{\boldsymbol{p}_{\perp}\times\boldsymbol{s}^{\prime}_{T}}{m_{h}}\,H_{1}^{\perp}(z,p_{\perp}^{2})\,,$$
(8)
where $s^{\prime}_{T}=D_{nn}(y)\,s_{T}$ and $\phi_{s^{\prime}}=\pi-\phi_{s}$, with $D_{nn}(y)=2(1-y)/[1+(1-y)^{2}]$.
Let us consider the double hadron production process (DSIDIS)
$$l(\ell)+N(P)\to l(\ell^{\prime})+h_{1}(P_{1})+h_{2}(P_{2})+X$$
(9)
with the (unpolarized) hadron 1 produced in the CFR ($x_{F1}>0$) and hadron 2 in the TFR ($x_{F2}<0$), see Fig. 1. For hadron $h_{1}$ we use the ordinary scaled variable $z_{1}=P_{1}^{+}/k^{\prime +}\simeq P{\cdot}P_{1}/P{\cdot}q$ and its transverse momentum $\boldsymbol{P}_{1\perp}$ (with magnitude $P_{1\perp}$ and azimuthal angle $\phi_{1}$); for hadron $h_{2}$ we use the variables $\zeta_{2}=P_{2}^{-}/P^{-}\simeq E_{2}/E$ and $\boldsymbol{P}_{2\perp}$ ($P_{2\perp}$ and $\phi_{2}$).
In this case the LO expression for the DSIDIS cross section includes all fracture functions:
$$\frac{\mathrm{d}\sigma^{l(\ell,\lambda)+N(P,S)\to l(\ell^{\prime})+h_{1}(P_{1})+h_{2}(P_{2})+X}}{\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z_{1}\,\mathrm{d}\zeta_{2}\,\mathrm{d}^{2}\boldsymbol{P}_{1\perp}\,\mathrm{d}^{2}\boldsymbol{P}_{2\perp}\,\mathrm{d}\phi_{S}}=\frac{\alpha^{2}\,x_{B}}{Q^{4}\,y}\left[1+(1-y)^{2}\right]\times$$
(10)
$$\bigg(\mathcal{M}^{[\gamma^{-}]}_{h_{2}}\otimes D_{1q}^{h_{1}}+\lambda\,D_{ll}(y)\,\mathcal{M}^{[\gamma^{-}\gamma_{5}]}_{h_{2}}\otimes D_{q}^{h_{1}}+\mathcal{M}^{[\mathrm{i}\,\sigma^{i-}\gamma_{5}]}_{h_{2}}\otimes\frac{\boldsymbol{p}_{\perp}\times\boldsymbol{s}^{\prime}_{T}}{m_{h_{1}}}H_{1q}^{\perp h_{1}}\bigg)=$$
$$\frac{\alpha^{2}\,x_{B}}{Q^{4}\,y}\left[1+(1-y)^{2}\right]\left(\sigma_{UU}+S_{\parallel}\,\sigma_{UL}+S_{\perp}\,\sigma_{UT}+\lambda\,D_{ll}\,\sigma_{LU}+\lambda\,S_{\parallel}D_{ll}\,\sigma_{LL}+\lambda\,S_{\perp}D_{ll}\,\sigma_{LT}\right)\,,$$
where $D_{ll}(y)=y(2-y)/[1+(1-y)^{2}]$.
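The two depolarization factors, $D_{nn}(y)$ from Sect. 2 and $D_{ll}(y)$ just defined, can be sketched numerically as follows (function names are illustrative):

```python
def d_nn(y):
    """Transverse depolarization factor D_nn(y) = 2(1-y) / [1 + (1-y)^2]."""
    return 2.0 * (1.0 - y) / (1.0 + (1.0 - y) ** 2)

def d_ll(y):
    """Longitudinal depolarization factor D_ll(y) = y(2-y) / [1 + (1-y)^2]."""
    return y * (2.0 - y) / (1.0 + (1.0 - y) ** 2)

# Both factors interpolate between 0 and 1 at the kinematic endpoints:
assert abs(d_nn(0.0) - 1.0) < 1e-12 and abs(d_nn(1.0)) < 1e-12
assert abs(d_ll(0.0)) < 1e-12 and abs(d_ll(1.0) - 1.0) < 1e-12
```

This makes explicit that Collins-type ($D_{nn}$-suppressed) and double-spin ($D_{ll}$-suppressed) terms are weighted oppositely in $y$.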
3 DSIDIS cross-section integrated over $\boldsymbol{P}_{2\perp}$
If we integrate the fracture matrix over $\boldsymbol{P}_{2\perp}$ we are left with eight $k_{\perp}$-dependent fracture functions:
$$\int\mathrm{d}^{2}\boldsymbol{P}_{2\perp}\,\mathcal{M}^{[\gamma^{-}]}=u_{1}+\frac{\boldsymbol{k}_{\perp}\times\boldsymbol{S}_{\perp}}{m_{N}}\,u_{1T}^{\perp}\,,$$
(11)
$$\int\mathrm{d}^{2}\boldsymbol{P}_{2\perp}\,\mathcal{M}^{[\gamma^{-}\gamma_{5}]}=S_{\parallel}\,l_{1L}+\frac{\boldsymbol{k}_{\perp}\cdot\boldsymbol{S}_{\perp}}{m_{N}}\,l_{1T}\,,$$
(12)
$$\int\mathrm{d}^{2}\boldsymbol{P}_{2\perp}\,\mathcal{M}^{[\mathrm{i}\,\sigma^{i-}\gamma_{5}]}=S_{\perp}^{i}\,t_{1T}+\frac{S_{\parallel}\,k_{\perp}^{i}}{m_{N}}\,t_{1L}^{\perp}+\frac{k_{\perp}^{i}\,(\boldsymbol{k}_{\perp}\cdot\boldsymbol{S}_{\perp})}{m_{N}^{2}}\,t_{1T}^{\perp}+\frac{\epsilon_{\perp}^{ij}k_{\perp j}}{m_{N}}\,t_{1}^{\perp}$$
(13)
$$=S_{\perp}^{i}\,t_{1}+\frac{S_{\parallel}\,k_{\perp}^{i}}{m_{N}}\,t_{1L}^{\perp}+\frac{\big(k_{\perp}^{i}k_{\perp}^{j}-\frac{1}{2}\boldsymbol{k}_{\perp}^{2}\,\delta^{ij}\big)\,S_{\perp}^{j}}{m_{N}^{2}}\,t_{1T}^{\perp}+\frac{\epsilon_{\perp}^{ij}k_{\perp j}}{m_{N}}\,t_{1}^{\perp}\,,$$
where $t_{1}\equiv t_{1T}+(\boldsymbol{k}_{\perp}^{2}/2m_{N}^{2})\,t_{1T}^{\perp}$.
We have removed the hat to denote the $\boldsymbol{P}_{2\perp}$-integrated fracture functions; for example:
$$t_{1}(x_{B},\boldsymbol{k}_{\perp}^{2},\zeta)=\int\mathrm{d}^{2}\boldsymbol{P}_{2\perp}\left\{\hat{t}_{1T}+\frac{\boldsymbol{k}_{\perp}^{2}}{2m_{N}^{2}}\,\hat{t}_{1T}^{\perp\perp}+\frac{\boldsymbol{P}_{2\perp}^{2}}{2m_{2}^{2}}\,\hat{t}_{1T}^{hh}\right\}.$$
(14)
The complete expressions for the other seven $\boldsymbol{P}_{2\perp}$-integrated fracture functions are presented in Ref. [2].
These $\boldsymbol{P}_{2\perp}$-integrated fracture functions are perfectly analogous to those describing single-hadron leptoproduction in the CFR [3], the correspondence being: Fracture Functions $\Rightarrow$ Distribution Functions. Thus we can use the procedure of Ref. [3] to obtain the final expression of the cross section as
$$\frac{\mathrm{d}\sigma}{\mathrm{d}x_{B}\,\mathrm{d}y\,\mathrm{d}z_{1}\,\mathrm{d}\zeta_{2}\,\mathrm{d}\phi_{1}\,\mathrm{d}P_{1\perp}^{2}\,\mathrm{d}\phi_{S}}=\frac{\alpha_{\rm em}^{2}}{x_{B}\,y\,Q^{2}}\left\{\left(1-y+\frac{y^{2}}{2}\right)\mathcal{F}_{UU,T}+(1-y)\,\cos 2\phi_{1}\,\mathcal{F}_{UU}^{\cos 2\phi_{1}}\right.$$
$$+\,S_{\parallel}\,(1-y)\,\sin 2\phi_{1}\,\mathcal{F}_{UL}^{\sin 2\phi_{1}}+S_{\parallel}\,\lambda\,y\left(1-\frac{y}{2}\right)\mathcal{F}_{LL}$$
$$+\,S_{T}\,\left(1-y+\frac{y^{2}}{2}\right)\sin(\phi_{1}-\phi_{S})\,\mathcal{F}_{UT}^{\sin(\phi_{1}-\phi_{S})}$$
$$+\,S_{T}\,(1-y)\,\sin(\phi_{1}+\phi_{S})\,\mathcal{F}_{UT}^{\sin(\phi_{1}+\phi_{S})}+S_{T}\,(1-y)\,\sin(3\phi_{1}-\phi_{S})\,\mathcal{F}_{UT}^{\sin(3\phi_{1}-\phi_{S})}$$
$$+\left.S_{T}\,\lambda\,y\left(1-\frac{y}{2}\right)\cos(\phi_{1}-\phi_{S})\,\mathcal{F}_{LT}^{\cos(\phi_{1}-\phi_{S})}\right\}$$
(15)
where the structure functions are given by the same convolutions as in Ref. [3], with the TMDs replaced by the $\boldsymbol{P}_{2\perp}$-integrated fracture functions and fragmentation functions: $f\to u$, $g\to l$ and $h\to t$.
4 DSIDIS cross-section integrated over $\boldsymbol{P}_{1\perp}$
If one integrates the DSIDIS cross section over $\boldsymbol{P}_{1\perp}$ and over the quark transverse momentum, only one fragmentation function, $D_{1}$, survives, which couples to the unpolarized and the longitudinally polarized $\boldsymbol{k}_{\perp}$-integrated fracture functions:
$$\int\mathrm{d}^{2}\boldsymbol{k}_{\perp}\,\mathcal{M}^{[\gamma^{-}]}=\tilde{u}_{1}(x_{B},\zeta_{2},P_{2\perp}^{2})+\frac{\boldsymbol{P}_{2\perp}\times\boldsymbol{S}_{T}}{m_{2}}\,\tilde{u}_{1T}^{h}(x_{B},\zeta_{2},P_{2\perp}^{2}),$$
(16)
$$\int\mathrm{d}^{2}\boldsymbol{k}_{\perp}\,\mathcal{M}^{[\gamma^{-}\gamma_{5}]}=S_{\parallel}\,\tilde{l}_{1L}(x_{B},\zeta_{2},P_{2\perp}^{2})+\frac{\boldsymbol{P}_{2\perp}\cdot\boldsymbol{S}_{T}}{m_{2}}\,\tilde{l}_{1T}^{h}(x_{B},\zeta_{2},P_{2\perp}^{2}),$$
(17)
where the fracture functions with a tilde (denoting integration over the quark transverse momentum) are as in Eqs. (7).
The final result for the cross section is [2]
$$\frac{\mathrm{d}\sigma}{\mathrm{d}x_{B}\,\mathrm{d}y\,\mathrm{d}z_{1}\,\mathrm{d}\zeta_{2}\,\mathrm{d}\phi_{2}\,\mathrm{d}P_{2\perp}^{2}\,\mathrm{d}\phi_{S}}=\frac{\alpha_{\rm em}^{2}}{y\,Q^{2}}\left\{\left(1-y+\frac{y^{2}}{2}\right)\right.$$
$$\times\sum_{a}e_{a}^{2}\left[\tilde{u}_{1}(x_{B},\zeta_{2},P_{2\perp}^{2})-S_{T}\,\frac{P_{2\perp}}{m_{2}}\,\tilde{u}_{1T}^{h}(x_{B},\zeta_{2},P_{2\perp}^{2})\,\sin(\phi_{2}-\phi_{S})\right]$$
$$+\,\lambda\,y\left(1-\frac{y}{2}\right)\sum_{a}e_{a}^{2}\left[S_{\parallel}\,\tilde{l}_{1L}(x_{B},\zeta_{2},P_{2\perp}^{2})\right.$$
$$+\,\left.\left.S_{T}\,\frac{P_{2\perp}}{m_{2}}\,\tilde{l}_{1T}^{h}(x_{B},\zeta_{2},P_{2\perp}^{2})\,\cos(\phi_{2}-\phi_{S})\right]\right\}D_{1}(z).$$
(18)
As in the case of single-hadron production [1], there
is a Sivers-type modulation $\sin(\phi_{2}-\phi_{S})$, but no Collins-type
effect.
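The Sivers-type modulation in Eq. (18) can be isolated by a $\sin(\phi_{2}-\phi_{S})$-weighted azimuthal integral. The sketch below demonstrates this projection on a toy cross section; the function `sigma` and its amplitude are hypothetical stand-ins for the bracketed structure of Eq. (18), not the actual fracture-function combination.

```python
import math

def sivers_moment(sigma, phi_S=0.0, n=720):
    """sin(phi_2 - phi_S)-weighted azimuthal moment of a cross
    section sigma(phi_2):
    <sin(phi_2-phi_S)> = int dphi_2 sin(phi_2-phi_S) sigma
                       / int dphi_2 sigma  (midpoint rule)."""
    dphi = 2.0 * math.pi / n
    num = den = 0.0
    for i in range(n):
        phi = (i + 0.5) * dphi
        num += math.sin(phi - phi_S) * sigma(phi) * dphi
        den += sigma(phi) * dphi
    return num / den

# toy cross section with a pure Sivers-type modulation of amplitude -0.2
sigma = lambda phi: 1.0 - 0.2 * math.sin(phi)
```

For this toy input the weighted moment returns half the modulation amplitude, as expected from $\int\sin^{2}\phi\,\mathrm{d}\phi/\int\mathrm{d}\phi=1/2$.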
5 Examples of unintegrated cross-sections: beam spin asymmetry
We show here explicit expressions only for $\sigma_{UU}$ and $\sigma_{LU}$; expressions for the other terms are available in [7].
$$\sigma_{UU}=F_{0}^{\hat{u}\cdot D_{1}}-D_{nn}\Bigg[\frac{P_{1\perp}^{2}}{m_{1}m_{N}}\,F_{kp1}^{\hat{t}^{\perp}\cdot H_{1}^{\perp}}\,\cos(2\phi_{1})+\frac{P_{1\perp}P_{2\perp}}{m_{1}m_{2}}\,F_{p1}^{\hat{t}^{h}\cdot H_{1}^{\perp}}\,\cos(\phi_{1}+\phi_{2})$$
$$+\left(\frac{P_{2\perp}^{2}}{m_{1}m_{N}}\,F_{kp2}^{\hat{t}^{\perp}\cdot H_{1}^{\perp}}+\frac{P_{2\perp}^{2}}{m_{1}m_{2}}\,F_{p2}^{\hat{t}^{h}\cdot H_{1}^{\perp}}\right)\cos(2\phi_{2})\Bigg].$$
(19)
$$\sigma_{LU}=-\frac{P_{1\perp}P_{2\perp}}{m_{2}m_{N}}\,F_{k1}^{\hat{l}^{\perp h}\cdot D_{1}}\,\sin(\phi_{1}-\phi_{2})\,,$$
(20)
where the structure functions $F_{...}^{...}$ are specific convolutions [7, 8] of fracture and fragmentation functions depending on $x,z_{1},\zeta_{2},P_{1\perp}^{2},P_{2\perp}^{2},\bm{P}_{1\perp}\cdot\bm{P}_{2\perp}$.
We notice the presence of terms similar to the Boer-Mulders term appearing in the usual CFR of SIDIS. What is new in DSIDIS is the LO beam spin SSA, absent in the CFR of SIDIS.
We further notice that the DSIDIS structure functions may in principle depend on the relative azimuthal angle of the two hadrons, due to the presence of the last term among their arguments: $\bm{P}_{1\perp}\cdot\bm{P}_{2\perp}=P_{1\perp}P_{2\perp}\cos(\Delta\phi)$ with $\Delta\phi=\phi_{1}-\phi_{2}$.
This term arises from $\bm{k}_{\perp}\cdot\bm{P}_{\perp}$ correlations in STMD fracture functions and can generate a long-range correlation between hadrons produced in the CFR and TFR. In practice it is convenient to choose $\Delta\phi$ and $\phi_{2}$ as the independent azimuthal angles.
Let us finally consider the beam spin asymmetry defined as
$$A_{LU}(x,z_{1},\zeta_{2},P_{1\perp}^{2},P_{2\perp}^{2},\Delta\phi)=\frac{\int\mathrm{d}\phi_{2}\,\sigma_{LU}}{\int\mathrm{d}\phi_{2}\,\sigma_{UU}}=\frac{-\frac{P_{1\perp}P_{2\perp}}{m_{2}m_{N}}\,F_{k1}^{\hat{l}^{\perp h}\cdot D_{1}}\,\sin(\Delta\phi)}{F_{0}^{\hat{u}\cdot D_{1}}}\,\cdot$$
(21)
If one keeps only the terms linear in $\bm{P}_{1\perp}\cdot\bm{P}_{2\perp}$ in the corresponding series expansion of the fracture functions, one obtains the following azimuthal dependence of the DSIDIS beam spin asymmetry:
$$A_{LU}(x,z_{1},\zeta_{2},P_{1\perp}^{2},P_{2\perp}^{2})=a_{1}\sin(\Delta\phi)+a_{2}\sin(2\Delta\phi)$$
(22)
with the amplitudes $a_{1},a_{2}$ independent of azimuthal angles.
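A minimal sketch of how $a_{1}$ and $a_{2}$ in Eq. (22) could be extracted from a measured asymmetry, by Fourier projection onto $\sin(\Delta\phi)$ and $\sin(2\Delta\phi)$. The toy asymmetry used in the usage example and its amplitudes are assumptions for illustration only.

```python
import math

def fit_amplitudes(asym, n=360):
    """Project A_LU(dphi) onto sin(dphi) and sin(2*dphi) to recover
    the amplitudes a1, a2 of the form a1*sin(dphi) + a2*sin(2*dphi):
    a_k = (1/pi) * int_0^{2pi} A(dphi) sin(k*dphi) d(dphi)."""
    d = 2.0 * math.pi / n
    a1 = sum(asym((i + 0.5) * d) * math.sin((i + 0.5) * d)
             for i in range(n)) * d / math.pi
    a2 = sum(asym((i + 0.5) * d) * math.sin(2 * (i + 0.5) * d)
             for i in range(n)) * d / math.pi
    return a1, a2
```

Because the two harmonics are orthogonal over a full period, the projection recovers each amplitude independently of the other.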
We stress that the ideal opportunities to test the predictions of the present approach to DSIDIS would be the future JLab 12 upgrade, in progress, and the EIC facilities, in the planning phase.
References
[1]
M. Anselmino, V. Barone and A. Kotzinian,
Phys. Lett. B 699, 108 (2011)
[arXiv:1102.4214 [hep-ph]].
[2]
M. Anselmino, V. Barone and A. Kotzinian,
arXiv:1109.1132 [hep-ph].
[3]
A. Bacchetta, M. Diehl, K. Goeke, A. Metz, P. J. Mulders and M. Schlegel,
JHEP 0702, 093 (2007)
[arXiv:hep-ph/0611265].
[4]
L. Trentadue and G. Veneziano,
Phys. Lett. B 323, 201 (1994).
[5]
A. Kotzinian,
Nucl. Phys. B 441, 234 (1995)
[arXiv:hep-ph/9412283].
[6]
J. C. Collins,
Nucl. Phys. B 396, 161 (1993)
[arXiv:hep-ph/9208213].
[7]
A. Kotzinian, SIDIS in target fragmentation region, talk at the XIX International Workshop on Deep-Inelastic Scattering and Related Subjects (DIS 2011), April 11-15, 2011, Newport News, VA, USA,
https://wiki.bnl.gov/conferences/images/3/3b/Parallel.Spin.AramKotzinian.Thursday14.talk.pdf
[8]
M. Anselmino, V. Barone and A. Kotzinian, article in preparation.
RamBoAttack: A Robust Query Efficient Deep Neural Network Decision Exploit
Viet Quoc Vo
The University Of Adelaide
[email protected]
Ehsan Abbasnejad
The University Of Adelaide
[email protected]
Damith C. Ranasinghe
The University Of Adelaide
[email protected]
Abstract
Machine learning models are critically susceptible to evasion attacks from adversarial examples. Generally, adversarial examples—modified inputs deceptively similar to the original input—are constructed under whitebox access settings by adversaries with full access to the model. However, recent attacks have shown a remarkable reduction in the number of queries needed to craft adversarial examples using blackbox attacks. Particularly alarming is the now practical ability to exploit simply the classification decision (hard label only) from a trained model’s access interface provided by a growing number of Machine Learning as a Service (MLaaS) providers—including Google, Microsoft, IBM—and used by a plethora of applications incorporating these models. An adversary’s ability to exploit only the predicted label from a model query to craft adversarial examples is distinguished as a decision-based attack.
In our study, we first deep-dive into recent state-of-the-art decision-based attacks in ICLR and S&P to highlight the costly nature of discovering low-distortion adversarial examples employing approximate gradient estimation methods. We develop a robust class of query efficient attacks capable of avoiding entrapment in a local minimum and misdirection from noisy gradients seen in gradient estimation methods. The attack method we propose, RamBoAttack, exploits the notion of Randomized Block Coordinate Descent to explore the hidden classifier manifold, targeting perturbations to manipulate only localized input features to address the issues of gradient estimation methods. Importantly, RamBoAttack is demonstrably more robust to the different sample inputs available to an adversary and/or the targeted class. Overall, for a given target class, RamBoAttack is demonstrated to be more robust at achieving a lower distortion within a given query budget. We curate our extensive results using the large-scale high-resolution ImageNet dataset and open-source our attack, test samples and artifacts on GitHub.
Network and Distributed Systems Security (NDSS) Symposium 2022
27 February - 3 March 2022
ISBN 1-891562-66-5
https://dx.doi.org/10.14722/ndss.2022.24200
www.ndss-symposium.org
I Introduction
Demonstrations of superhuman performance from Machine Learning (ML) models, particularly Deep Neural Networks (DNNs), are leading to the industrialization of machine learning, exemplified by self-driving cars [10] and MLaaS from a plethora of providers, including IBM Watson Visual Recognition [4], Amazon Rekognition [1] and Microsoft’s Cognitive Services [2]. Now, at the cost-per-service level, any system can easily integrate intelligence into applications. The increasingly inevitable, widespread proliferation of machine learning in systems is creating the incentives and new attack surfaces for malevolent actors to exploit.
Adversarial Attacks in White-box Settings. In particular, machine learning models are critically vulnerable to evasion attacks from carefully crafted adversarial examples. An adversary crafts small perturbations that, when added to an input, cause a failure—simply misclassifying the input in an untargeted attack, or hijacking the decision of a model to generate a decision pre-selected by the adversary [29] in a targeted attack. Effective attack methods for generating adversarial examples in white-box attacks, assuming full knowledge of and access to the machine learning models, exist [17, 25, 22, 9, 31].
Adversarial Attacks in Blackbox Settings. In contrast, on commercial and industrial systems, an attacker has limited or no knowledge of the model architecture, parameters or weights. Access may be limited to only full or partial output scores (treated as a probability distribution). Chen et al. [12] and Ilyas et al. [19] proposed methods to exploit models revealing output scores to craft adversarial examples under so-called score-based attacks. In the most restricted threat model, illustrated in Fig. 1, the information exposed to an attacker is limited to the hard label only (the most confident label predicted, or decision), for instance logo or landmark detection on Google Cloud Vision [3].
Adversarial attacks in such a decision-based scenario are the most restrictive and challenging attack setting given the severely limited access to information, but, these settings present a realistic and pragmatic threat model.
Decision-Based (Hard-Label) Adversarial Attacks. Recent studies demonstrated the practicability of blackbox attacks under the highly restrictive decision-based attack setting relying solely on the label obtained from model queries. The Boundary Attack of Brendel et al. [6] in ICLR demonstrated the feasibility of an attack and obtained adversarial examples comparable with state-of-the-art white-box attack methods in both targeted and untargeted scenarios.
For a realistic attack, achieving attack success with a limited query budget is important because: i) MLaaS providers limit the rate of queries to their services; ii) throttling at a service provider limits large-scale attacks; and iii) a provider can employ methods to recognize a large number of rapid queries in succession with similar inputs to detect malicious activity and thwart query inefficient attacks. Furthermore, from both an attacker perspective and a defense perspective, reducing the number of queries reduces the cost of mounting the attack as well as the time for evaluating the model and potential defenses (for example, we consumed over 1,700 hours on two dedicated modern GPUs with 48 GB memory to curate the results in our study).
I-A Our Motivation and Attack Focus
Recent studies formulated the decision-based attack as an optimization problem to propose algorithms based on gradient estimation methods [14, 11] and demonstrated attacks with a significantly reduced number of queries. However, the existing attacks suffer from the following problems:
•
Entrapment in a local minimum. In gradient estimation methods, as alluded to by Cheng et al. [14], the search for an adversarial example can become entrapped in a local minimum, where extra queries expended by the attacker fail to achieve a lower-distortion adversarial example.
•
Unreliability of gradient estimates. Further, as the magnitude of the estimated gradients diminishes on approach to a local minimum or a plateau, the estimates may become noisy and susceptible to misdirection.
•
Sensitivity to the starting image. Intuitively, we can expect the initialization of optimization frameworks with an available or intended starting image (a necessity in decision-based attacks) to hinder an attacker from reaching an imperceptible adversarial example. But there is no known method to determine a good starting image prior to an attack. Thus, the success of an attack can be expected to be sensitive to the available starting image; an attempt to discover a better starting image or target class through trial and error can not only lead to detection and discovery by effectively increasing the number of queries needed, but also limit the scope of the attack by reducing the number of classes that can be targeted.
In general, developing decision-based attacks poses a challenging optimization problem because only binary information from output labels is available from the target model, as opposed to output values from a function.
Therefore, we seek to understand the fragility of gradient estimation methods and develop a more robust and query efficient attack. Consequently, we expend our efforts to answer the following research questions (RQs).
RQ1: How can we assess the robustness of decision-based blackbox attacks to understand their fragility? (Section II-C)
RQ2: What is the impact of the source and starting target class images accessible to an adversary on the success of an attack? (Section II-D & extensive results in Section IV-D)
RQ3: How can an adversary construct a robust and query efficient attack for achieving low distortion adversarial examples for any starting image from the targeted class and avoid the pitfalls of gradient estimation based attack methods? (Section III & IV)
I-B Our Contributions
In our effort to: i) address the research questions; ii) better understand and assess the vulnerabilities of DNNs to adversarial attacks in the pragmatic decision-based threat model; and iii) explore more robust attack methods, we summarize our contributions below:
1.
Our study presents the first systematic investigation of state-of-the-art decision-based attacks to understand their robustness. Through extensive experiments, we highlight the problem of hard cases, where attackers struggle to flip the prediction of images towards a chosen target class even with increasing query budgets (see Fig. 2).
As summarized in Table A, we expended over 1,800 computation hours with two GPUs to curate these results.
2.
Motivated by our findings, we propose a new attack—RamBoAttack—that is demonstrably more robust. We propose a search algorithm analogous to Randomized Block Coordinate Descent—BlockDescent—to address the entrapment problem where gradient estimation fails to provide a useful direction to descend and propose to combine BlockDescent with gradient estimation frameworks to attain query efficiency. In contrast to existing approaches, BlockDescent focuses on altering local regions of the input commensurate with the filter sizes employed by DNNs to forge adversarial examples.
3.
We provide new insights into query efficient mechanisms for crafting adversarial perturbation to attack DNNs. Our decision-based blackbox attack method relying on localized alterations to inputs discovers effective adversarial perturbations attempting to exploit the model’s reliance on salient features of the target class to correctly classify an input to a target label in the hard cases. We illustrate clear correlations between perturbations found and added to inputs, and salient regions on target class images with the aid of a visual explanation tool.
4.
Overall, RamBoAttack is a more robust and query efficient approach for generating an adversarial example of high attack success rate compared to existing counterparts. Importantly, our attack method is significantly less impacted by a starting image from a target class accessible to an adversary.
5.
We recognize the need for reliable and reproducible attack evaluation strategies and introduce two evaluation protocols applied across CIFAR10 and ImageNet. We release the dataset constructed through our extensive study to support future benchmarking of blackbox attacks under a decision-based setting (https://ramboattack.github.io/).
II Decision-Based Attacks
In this section, we: i) formalize an adversarial attack as an optimization problem; ii) revisit current state-of-the-art methods; and iii) analyze the results to present our intuitions into the state-of-the-art attacks based on our observations.
II-A Adversarial Threat Model
We adopt the threat model proposed in prior works [11, 15, 6]. Under the decision-based blackbox setting, adversaries have no prior knowledge such as model architecture or parameters but have limited access to the output of a victim model—the model’s decision as illustrated in Fig. 1. Furthermore, an adversary can make numerous queries to a victim’s machine learning model via an access interface and receive the model’s decision. The adversary must have at least one image from a target class that is classified correctly by the victim model if the adversary aims to carry out a targeted attack. This image is the starting image used to initialize the attack. The adversary also holds at least one image from a source class correctly classified by the model. The objective of the adversary is to discover the minimum (imperceptible) perturbation—quantitatively measured by the common distortion measure adopted in recent studies—to flip the decision for the source image to the targeted class using the minimum number of queries to the model.
II-B Problem Formulation
Consider a source image $\bm{x}\in\mathbb{R}^{C\times W\times H}$ with ground-truth label $y$ from the label set $\mathcal{Y}=\{1,2,\cdots,K\}$, where $K$ denotes the number of classes, and $C$, $W$ and $H$ denote the number of channels, width and height of an image, respectively. Given a pre-trained multi-class classification model $f:\bm{x}\rightarrow y$ so that $f(\bm{x})=y$, in a targeted attack an adversary aims to modify the input $\bm{x}$ to craft an optimal adversarial example $\bm{x^{*}}\in\mathbb{R}^{C\times W\times H}$ that is classified as the class label desired by the adversary when used as an input to the victim model. In an untargeted attack, an adversary manipulates the input $\bm{x}$ to change the decision of the classifier to any class label other than its ground-truth label. To simplify the descriptions, we refer to the desired class label as the target class, while the class of the input $\bm{x}$ is called the source class.
Measuring Distortion. The $l_{2}$-norm is widely adopted in recent works [6, 7, 13, 14, 12] to measure the distortion and similarity between a generated adversarial example and the source sample. Therefore, in this paper, our approach focuses on the $l_{2}$-norm. Then, let $D(\bm{x},\bm{x^{*}})$ be the $l_{2}$-distance that measures the similarity between $\bm{x}$ and $\bm{x^{*}}$.
Optimization Problem. The main aim of adversarial attacks is to minimize the distortion measured by $D$ while ensuring the perturbed input data is classified as a target class—for a targeted attack—or non-source class—for an untargeted attack. Therefore, an adversarial attack can be formulated as a constrained optimization problem:
$$\min_{\bm{x^{*}}}\quad D(\bm{x},\bm{x^{*}})$$
$$\text{s.t.}\quad\mathcal{C}(f(\bm{x^{*}}))=1,\qquad\bm{x},\bm{x^{*}}\in[0,1]^{C\times W\times H}.$$
(1)
Here, $\mathcal{C}(f(\bm{x^{*}}))$ is an adversarial criterion that takes the value 1 if the attack requirement is satisfied and 0 otherwise. This requirement is satisfied if $f(\bm{x^{*}})\neq y$ for an untargeted attack or $f(\bm{x^{*}})=y^{*}$ for a targeted attack (i.e. for the instance $\bm{x^{*}}$ to be misclassified as targeted class label $y^{*}$).
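A minimal sketch of the distortion $D$ and the adversarial criterion $\mathcal{C}$ written directly from these definitions, assuming `f` is a hard-label oracle that returns only the predicted class of an input.

```python
import numpy as np

def l2_distortion(x, x_adv):
    """D(x, x*): the l2-distance between the source and a candidate."""
    return float(np.linalg.norm((x_adv - x).ravel()))

def is_adversarial(f, x_adv, y_source, y_target=None):
    """Adversarial criterion C(f(x*)): returns 1 if the hard-label
    decision satisfies the attack goal and 0 otherwise."""
    label = f(x_adv)
    if y_target is not None:          # targeted attack
        return int(label == y_target)
    return int(label != y_source)     # untargeted attack
```

Note that a decision-based attack can only evaluate `is_adversarial` (one model query per call), never the gradient of `l2_distortion` subject to the constraint.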
II-C Understanding Robustness
The two current query efficient attack methods employ gradient approximation frameworks, whilst the earlier method relied on a stochastic approach. We briefly summarize these methods before delving into our systematic study to understand their robustness.
Random Walk along a Decision Boundary. The first attack under a decision-based threat model proposed by Brendel et al. [6] initialized an image in a target class and in each iteration, sampled a perturbation from a Gaussian distribution and projected the perturbation onto a sphere around a source image. If this perturbation yields an adversarial example, the attack makes a small movement towards the source image and repeats these steps until the decision boundary is reached. Subsequently, by traveling along the decision boundary based on sampling, projecting and moving towards the source image, the adversarial example is refined until an adversarial example with a desirable distortion is discovered.
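A minimal sketch of one such iteration, paraphrasing the steps described above (sample a Gaussian perturbation, project onto the sphere around the source image, contract towards the source). The step sizes `delta` and `eps` are illustrative fixed values; the actual Boundary Attack of [6] adapts them dynamically.

```python
import numpy as np

def boundary_step(x_src, x_adv, is_adv, delta=0.1, eps=0.01, rng=None):
    """One simplified Boundary Attack iteration. is_adv(x) is the
    hard-label oracle (True if x is classified as adversarial)."""
    rng = rng or np.random.default_rng(0)
    d = np.linalg.norm(x_adv - x_src)
    # orthogonal step: perturb, then rescale back onto the sphere of radius d
    eta = rng.standard_normal(x_adv.shape)
    eta *= delta * d / np.linalg.norm(eta)
    cand = x_src + (x_adv + eta - x_src) * d / np.linalg.norm(x_adv + eta - x_src)
    if not is_adv(cand):
        return x_adv                      # reject: keep current iterate
    # contraction step towards the source image
    cand2 = cand + eps * (x_src - cand)
    return cand2 if is_adv(cand2) else cand
```

Each call costs at most two model queries, and the iterate never leaves the adversarial region, which is why the method needs no gradient information at all.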
Optimization Frameworks. In the absence of a means for computing the gradient for solving (1), the attacks in [13] and [14] attempt to solve the optimization problem using methods to estimate the gradient. Cheng et al. [13] sample directions from a Gaussian distribution and apply a zeroth-order gradient estimation method in their OPT-attack; Cheng et al. [14] then leveraged this optimization framework to propose a zeroth-order algorithm called Sign-OPT that converges much faster. Chen et al. [11] introduced a different optimization framework, HopSkipJumpAttack, which uses a Monte Carlo method to find an approximate gradient direction to descend.
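The Monte Carlo gradient-direction estimation underlying such frameworks can be sketched as follows: random unit perturbations around a boundary point are weighted by $\pm 1$ according to the hard-label oracle and averaged. This is a simplified illustration in the spirit of HopSkipJumpAttack [11], not the authors' exact estimator (which, for instance, includes a baseline correction of the signs).

```python
import numpy as np

def estimate_direction(x_b, is_adv, n_samples=100, step=1e-2, rng=None):
    """Monte Carlo estimate of the gradient direction of the decision
    boundary at a point x_b: average random unit perturbations, each
    weighted +1/-1 by whether the perturbed point stays adversarial."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x_b)
    for _ in range(n_samples):
        u = rng.standard_normal(x_b.shape)
        u /= np.linalg.norm(u)            # random unit direction
        sign = 1.0 if is_adv(x_b + step * u) else -1.0
        g += sign * u
    return g / np.linalg.norm(g)          # unit-norm direction estimate
```

Each sample costs one model query, which is the root of the query-cost trade-off: fewer samples mean a noisier direction, which is exactly the failure mode discussed in the observations below.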
Evaluating Robustness. To understand the robustness of recent attack methods and illustrate the costly nature of discovering low-distortion adversarial examples with these attacks, we propose an exhaustive but tractable experiment using the relatively small number of classes, albeit with a significantly large validation set, offered by the CIFAR10 dataset. The protocol for assessing the robustness of each state-of-the-art method is described in Appendix C.
Hard Cases. Empirically, we define a hard case as a pair of source and starting images—the starting image is from a given target class—where a given decision-based attack fails to yield an adversarial example with a distortion below a desirable threshold using a set query budget.
II-D Observations from Assessing Attacks
We make the following observations from our results summarized in Figures 2 and 3.
Observation 1: Hard Cases. In decision-based attacks, specific classes and/or samples from classes are more difficult to attack than others. As illustrated in Fig. 2 (left), the current attack methods are not uniformly effective against all pairs of source and starting images from target classes.
Interestingly, any of the gradient estimation methods can approximate the true gradient given enough queries (or samples) to the target model. However, solutions can become entrapped in various local minima. Further, approaching a local minimum or a plateau can considerably undermine the quality of that approximation; for instance, estimated gradients may become noisy when the gradient magnitude diminishes whilst approaching a local minimum. As shown in Fig. 2 (right), even with 100K queries, the solutions based on the gradient direction estimation methods do not improve the distortion of the adversarial sample for the car classified as a dog (Hard case).
Observation 2: Attack initialization. An attack algorithm’s ability to find a low distortion adversarial example with a given query budget is dependent on the starting image from a target class selected for initializing the attack algorithm. Interestingly, Chen et al. [11] in their S&P2020 paper briefly noted the potential for an algorithm to get trapped in a bad local minimum based on the starting image used to initialize an attack. Our systematic study confirms this intuition.
In this case, the achievable distortion of an adversarial example is highly dependent on the starting image and the behavior of the algorithm. This observation is illustrated by comparing the results of starting image 1 with image 2 for different attack methods in Fig. 3, and by 100 samples randomly selected from the hard set of each method (see Sections IV-C and IV-D for more details). The results demonstrate the dependence of attack success on the starting image accessible to an adversary.
Currently, there is no effective initialization method to determine a good starting image prior to mounting an attack. Therefore, developing a robust attack that is less sensitive to the choice of starting image remains an open challenge.
Observation 3: Perturbation Region.
Current attack approaches aim to perturb the whole image to traverse the decision boundary and find an adversarial example with minimum distortion. In other words, these methods always manipulate the whole image at a time, resulting in perturbations that are spread over the entire image, as illustrated by the perturbation heat maps in Fig. 3. Another interesting remark drawn from these figures is that the main features (for example, edges) of the starting image remain superimposed in an adversarial example. However, most state-of-the-art classifiers in computer vision utilize convolutional filters to extract local patterns in an image; further, visual explanation tools demonstrate the reliance of classifiers on key salient features of an image. Therefore, whether an attack could achieve a lower-distortion adversarial example by targeting the filter operation over local features, in contrast to manipulating the whole image, remains an interesting direction to explore.
II-E An Intuition into Attack Methods
To understand and illustrate the underlying cause of the first two observations, we use the Boundary attack (BA) [6], Sign-OPT [14] and HopSkipJump [11] to attack a Toy model. The decision boundary of the Toy model in a 2D input space illustrates a constraint of the optimization problem in (1). This decision boundary is represented by $g(z_{1},z_{2})=(z_{1}-2)(z_{1}-1)^{2}(z_{1}+1)^{3}-z_{2}=0$, where $z_{1}$ and $z_{2}$ denote the two coordinates of a point, such as a starting point or a source point as illustrated in Fig. 4. A point above the boundary is classified as in the target class; otherwise, it belongs to the source class. The black dot ($\mathbin{\vbox{\hbox{\scalebox{0.85}{$\bullet$}}}}$) source point denotes a source class example, whilst the black dot ($\mathbin{\vbox{\hbox{\scalebox{0.85}{$\bullet$}}}}$) starting point denotes a starting target-class example. All three methods are initialized with the same starting point, and we then employ the attacks to search for an adversarial point within the target class and closest to the source point; equivalently, we aim to solve the optimization problem in (1), where the objective is to minimize the $l_{2}$ distance to the source point subject to the constraint imposed by the decision boundary, using these attack algorithms.
Fig. 4 illustrates several intermediate adversarial example points, denoted by blue dots, and the final adversarial example achieved by each method, denoted by a yellow triangle, for one example attack execution. Given the stochastic nature of the algorithms, we execute each attack 100,000 times with different random seeds. All of the methods except HopSkipJump fail to find the optimal solution (the global minimum), and HopSkipJump only manages to reach the optimal solution in 2.5 $\%$ of the attempts. As illustrated in Fig. 4, the approximate gradient appears to be noisy, and the methods traverse the decision boundary in an incorrect direction, towards the local minimum rather than the global minimum. Although not illustrated here, changing the starting coordinate can lead all of these methods to discover the global minimum.
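For reference, the Toy model's decision rule can be written directly from $g(z_{1},z_{2})$. The sketch below assumes, as stated above, that a point above the boundary curve $z_{2}=(z_{1}-2)(z_{1}-1)^{2}(z_{1}+1)^{3}$ (i.e. $g<0$) belongs to the target class.

```python
def g(z1, z2):
    """Toy-model boundary: g(z1, z2) = (z1-2)(z1-1)^2(z1+1)^3 - z2."""
    return (z1 - 2.0) * (z1 - 1.0) ** 2 * (z1 + 1.0) ** 3 - z2

def toy_label(z1, z2):
    """Points above the boundary (g < 0) are in the target class."""
    return "target" if g(z1, z2) < 0 else "source"
```

The repeated roots at $z_{1}=1$ and $z_{1}=-1$ give the boundary the flat and wiggly segments that trap the gradient-estimation methods in a local minimum.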
III Proposed Attack Framework
We observe that: i) gradient estimation methods in attacks face an entrapment problem in a highly complex loss landscape; ii) current attacks focus on altering all of the coordinates of an image simultaneously to forge a perturbation; and iii) the success of current attacks are sensitive to the chosen or available starting image possessed by an adversary.
We propose an analogous Randomized Block Coordinate Descent method, named BlockDescent, that aims to manipulate local features and target convolutional filter outputs by modifying the values of coordinates in a square-block region and in different color channels with targeted perturbations. We propose localized changes to affect convolutional filter outputs and pixel values as a means of impacting salient features, and perhaps even mimicking salient features of the target. This leads to potential redirection and escape from entrapment in a bad local minimum with minimal but effective changes to the image to mislead the classifier. In other words, we propose taking a direct path along some coordinates towards a source image whilst retaining the target class label, to prevent the problem encountered by gradient estimation methods: entrapment in a local minimum, as shown in Fig. 4.
Further, when employing gradient estimation methods, the gradient values decrease as we move closer to the source image, leading to an increasingly large number of perturbation iterations needed to converge. This issue is exacerbated if there is a plateau in the decision boundary; there, the gradient estimation methods are only as effective as a random search. We conjecture that the hard cases are examples where the gradients of the distortion are generally small and thus lead to a local optimum. However, we observe that the gradient estimation methods are effective in two cases: (a) the initial stages of optimizing Eq. (1), or (b) at close proximity to the source image. In (a), the gradients are sufficiently large to be estimated effectively, and in (b) small changes and refinements (i.e. few perturbation iterations) facilitate a descent to the optimum.
Consequently, we propose a new framework using gradient estimation for the initial descent (case (a)), supported by BlockDescent to escape the entrapment and noisy gradient problems, and refining the adversarial example with a gradient estimation based descent, to forge a robust and query efficient attack. Importantly, BlockDescent is insensitive to the choice of starting image, although it is effectively initialized with a gradient estimation, because BlockDescent manipulates blocks in a way that causes a move away from the direction set by a starting image. The new framework we propose, RamBoAttack, is illustrated in Fig. 5.
Summary. Gradient estimation methods are fast but face the potential problem of getting trapped in a bad local minimum, particularly in hard cases. BlockDescent, on the other hand, is slower (selecting to manipulate local regions) but is capable of tackling the problems faced by gradient estimation attacks. Therefore, we develop a hybrid framework called RamBoAttack for query efficient decision-based attacks that exploits the merits of both approaches. In particular, our derivative-free optimization method considers, for the first time, an approach that manipulates blocks of coordinates in the input image to influence the outcome of the convolution operations used in deep neural networks, as a means of misguiding a network's decision and generating adversarial examples with minimal manipulations.
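A minimal sketch of a single BlockDescent-style update under the description above: pick a random square block and color channel, nudge that block of the current adversarial example towards the source image, and accept the move only if the label is preserved. The block size, step `delta`, and acceptance rule here are simplified stand-ins for the full algorithm.

```python
import numpy as np

def block_descent_step(x_src, x_adv, is_adv, block=4, delta=0.1, rng=None):
    """One simplified BlockDescent update on a C x W x H image.
    is_adv(x) is the hard-label oracle (True if x is adversarial).
    An accepted move necessarily decreases the l2 distortion."""
    rng = rng or np.random.default_rng(0)
    C, W, H = x_adv.shape
    c = rng.integers(C)                        # random color channel
    i = rng.integers(W - block + 1)            # random block position
    j = rng.integers(H - block + 1)
    cand = x_adv.copy()
    region = (c, slice(i, i + block), slice(j, j + block))
    # move only this block towards the source image by a fraction delta
    cand[region] += delta * (x_src[region] - cand[region])
    return cand if is_adv(cand) else x_adv     # label-preserving moves only
```

Because each accepted move strictly shrinks the block's distance to the source, the iterate descends in distortion without ever needing a gradient estimate.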
III-A Approach
Our proposed attack thus comprises BlockDescent and two gradient estimation components—GradEstimation—as shown in Fig. 5 and described in Algorithm 1. The gradient estimation algorithms used by these two components can be the same or different. When starting an attack, particularly in the targeted setting, the first component is initialized with a starting image $\bm{\tilde{x}}$ from a target class and approaches the decision boundary via a binary search—the first step in a gradient estimation method. We employ the gradient estimation method to search for adversarial examples until it reaches its own local minimum. We call this point the switching point $\bm{x_{\text{s}}}$ because, from it, the gradient estimation method switches to BlockDescent. If the gradient estimation method is entrapped in a local minimum, BlockDescent helps to move away from it. Subsequently, when local changes become insufficient—the second switching point—the attack switches to the third component to refine the adversarial example crafted by BlockDescent. This refinement searches for the final adversarial example $\bm{x}_{\text{a}}$ with a lower distortion.
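The three-stage flow just described can be sketched as follows. The function names and the shared stage interface are our own assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def ramboattack(model, x_src, x_start, grad_est_1, block_descent, grad_est_2,
                budget=50_000):
    """Hypothetical sketch of the three-stage RamBoAttack pipeline.

    grad_est_1, block_descent, and grad_est_2 are assumed to share the
    interface: stage(model, x_src, x_init, budget) -> (x_adv, queries_used).
    """
    queries = 0
    # Stage 1: gradient estimation descends until it stalls at x_s.
    x_s, q = grad_est_1(model, x_src, x_start, budget - queries)
    queries += q
    # Stage 2: BlockDescent escapes the local minimum with local block moves.
    x_b, q = block_descent(model, x_src, x_s, budget - queries)
    queries += q
    # Stage 3: a second gradient estimation refines the adversarial example.
    x_adv, _ = grad_est_2(model, x_src, x_b, budget - queries)
    return x_adv
```

With any three stages plugged in, each stage simply continues from the previous one's best adversarial example under the remaining query budget.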
Fig. 6 illustrates RamBoAttack against the Toy model used in Section II-E and demonstrates the effectiveness of the attack we propose. In particular, the first gradient estimation component searches for and reaches the adversarial examples $\bm{\tilde{x}}^{(1)},~{}\bm{\tilde{x}}^{(2)},~{}\bm{\tilde{x}}^{(3)}$ at different steps while approaching the source point, but gets stuck
at $\bm{\tilde{x}}^{(4)}$, a local minimum of the objective function $D(\bm{x},\bm{x^{*}})$ subject to the constraint defined by the decision boundary $g(z_{1},z_{2})$.
BlockDescent then searches for the next adversarial examples $\bm{\tilde{x}}^{(5)},\cdots,~{}\bm{\tilde{x}}^{(7)}$ by modifying one coordinate at a time—in this 2D example—through $\delta$ changes. Subsequently, the second gradient estimation component continues searching for adversarial examples $\bm{\tilde{x}}^{(k)}$ in the neighboring areas until reaching the near-optimal $\bm{x}_{\text{a}}$. Most importantly, in contrast to the experiment in Fig. 4, when evaluating RamBoAttack over 100,000 attacks on the Toy model, our proposed attack always reached the optimal or a near-optimal solution.
When to switch to BlockDescent? Gradient estimation methods are designed to work alone rather than with other methods. Therefore, we develop a sub-module, GradEstimation, to call these methods and determine when to switch from a gradient estimation method to BlockDescent. Empirically, gradient estimation methods have reached their local minimum when they cannot find any better adversarial example after several steps of searching. In practice, this can be detected via the distortion reduction rate $\Delta$ over every $T$ queries—a time frame over which to calculate $\Delta$. However, in gradient estimation methods the number of queries per iteration varies, so we relax this by accumulating the number of queries after each iteration. Whenever the accumulated count exceeds $T$, we compute $\Delta$; if this distortion reduction rate is below a switching threshold $\epsilon_{\text{s}}$, the attack switches to BlockDescent (see Algorithm 2).
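A minimal sketch of this switching check, assuming the attack logs the distortion and cumulative query count after each iteration (names are illustrative, not Algorithm 2 verbatim):

```python
def should_switch(distortion_log, queries_log, T, eps_s):
    """Decide whether the gradient-estimation stage has stalled.

    distortion_log / queries_log hold the distortion and cumulative query
    count recorded after each iteration. Because the number of queries per
    iteration varies, we look back to the most recent iteration that is at
    least T queries old and measure the distortion reduction since then.
    """
    q_now = queries_log[-1]
    for d_then, q_then in zip(reversed(distortion_log), reversed(queries_log)):
        if q_now - q_then >= T:
            delta = d_then - distortion_log[-1]  # distortion reduction rate
            return delta < eps_s                 # below threshold -> switch
    return False  # not enough query history accumulated yet
```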
III-B BlockDescent
We recognize that most machine learning model architectures in computer vision are based on Convolutional Neural Networks (CNNs) built on convolution operations. These operations use filters of size $c\times q\times q$, where $q$ is the spatial size of the filter and $c$ is the number of channels, to extract local patterns of an image. Consequently, we hypothesize that altering a block of coordinates—a square-shaped region of an appropriate size—can target significant filter outputs and thus potentially have a significant impact on the network's decision. Perturbing these coordinates can result in an adversarial example with fewer queries, since we target regions of the input to impact actual convolutional filters and potentially discover salient features to mimic.
Inspired by this, we adopt the notion of square-block perturbation regions and introduce BlockDescent, which manipulates blocks of size $n$. BlockDescent has two stages: i) crafting a sample; and ii) evaluating it, as described in Algorithm 3.
Crafting a Sample. In each iteration, the first stage of BlockDescent aims to yield a sample $\bm{x}^{\prime}$, initialized with $\bm{x}^{(k)}$, the adversarial example at the $k$-th step. To increase the convergence rate and reduce the number of queries, BlockDescent modifies several blocks of coordinates concurrently. It first selects $m$ different coordinates across the channels (R, G, B) of an image by choosing a set $S=\{S_{1},S_{2},\cdots,S_{\text{m}}\}$, where $S_{\text{t}}=\{c_{\text{t}},w_{\text{t}},h_{\text{t}}\}$ is selected uniformly at random such that $c_{\text{t}}\in[1,C]$, $w_{\text{t}}\in[1,W]$ and $h_{\text{t}}\in[1,H]$, with $t=1,2,\cdots,m$ and $C,W,H$ denoting the number of channels, the width and the height of an image.
This random selection is performed without replacement, and each selected coordinate $x^{\prime}_{\text{c,w,h}}$ is the center of a square block $\bm{x}^{\prime}_{\text{B}_{\text{t}}}$, where $\bm{x}^{\prime}_{\text{B}_{\text{t}}}$ represents $\bm{x}^{\prime}_{[c_{\text{t}},w_{\text{t}}-n:w_{\text{t}}+n,h_{\text{t}}-n:h_{\text{t}}+n]}$. Likewise, the $m$ corresponding blocks $\bm{x}_{\text{B}_{\text{t}}}$ are taken from the source image $\bm{x}$.
A mask $M$ of the same size as $\bm{x}^{\prime}_{\text{B}_{\text{t}}}$ is defined as $M=\mathrm{sign}(\bm{x}_{\text{B}_{\text{t}}}-\bm{x}^{\prime}_{\text{B}_{\text{t}}})$. This mask identifies the direction of perturbation for each element of a block $\bm{x}^{\prime}_{\text{B}_{\text{t}}}$: when an element of a block—a coordinate of the image—is moved along this direction, it moves towards its corresponding element in the source image. The sample $\bm{x}^{\prime}$ is crafted by updating each of the $m$ blocks of coordinates as follows:
$$\bm{x}^{\prime}_{\text{B}_{\text{t}}}\leftarrow\bm{x}^{\prime}_{\text{B}_{\text{t}}}+M\times\delta$$
(2)
where $\delta$ is a scalar denoting the amount of perturbation applied to each element; it is reduced by $\lambda$ after each cycle. A cycle ends when all coordinates have been selected for perturbation. If $\delta$ is initialized with a small value, convergence is slow and queries are wasted from the beginning. Conversely, for a large initial $\delta$, modifying blocks of coordinates mostly moves the sample further from the source image rather than closer, and several cycles are required until $\delta$ decays to a suitable value. To tackle this issue, we exploit the distribution of the absolute differences between all coordinates of a sample and their corresponding coordinates in the source image, and use the $i$-th percentile $P_{\text{i}}$ of this distribution to specify a proper initial $\delta$. In Equation 2, only the selected square blocks are perturbed, while the rest of $\bm{\tilde{x}}$ remains unchanged.
Evaluating the Crafted Sample. In the second stage, to ensure a descent in distortion and improve query efficiency, a crafted sample $\bm{x}^{\prime}$ is only evaluated by the victim model if it moves closer to $\bm{x}$. If the adversarial criterion is then satisfied ($\mathcal{C}(f(\bm{x}^{\prime}))=1$), the perturbation is kept and the next adversarial example is updated as $\bm{\tilde{x}}^{(k+1)}=\bm{x}^{\prime}$. Otherwise, the perturbation is discarded.
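One full BlockDescent iteration (crafting plus evaluation) might look as follows. The callable `model_is_adv`, the default hyper-parameters, and the boundary clipping are our assumptions for a self-contained sketch, not the exact Algorithm 3:

```python
import numpy as np

def block_descent_step(model_is_adv, x_src, x_adv, delta, m=4, n=2, rng=None):
    """One illustrative BlockDescent iteration.

    model_is_adv(x) -> bool stands in for the criterion C(f(x)) = 1.
    """
    rng = rng or np.random.default_rng(0)
    C, W, H = x_src.shape
    x_new = x_adv.copy()
    # Crafting: select m block centres uniformly at random across channels.
    for _ in range(m):
        c = int(rng.integers(0, C))
        w = int(rng.integers(0, W))
        h = int(rng.integers(0, H))
        ws, we = max(0, w - n), min(W, w + n + 1)
        hs, he = max(0, h - n), min(H, h + n + 1)
        # Eq. (2): sign mask pushes each block element towards the source.
        M = np.sign(x_src[c, ws:we, hs:he] - x_new[c, ws:we, hs:he])
        x_new[c, ws:we, hs:he] += M * delta
    # Evaluation: query the model only if the candidate moved closer to the
    # source, and keep it only if it is still adversarial.
    if (np.linalg.norm(x_new - x_src) < np.linalg.norm(x_adv - x_src)
            and model_is_adv(x_new)):
        return x_new
    return x_adv  # perturbation discarded
```

Note the distance check precedes the model query, so rejected candidates that moved away from the source cost no queries at all.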
Determining when to switch to the next component. Similar to the switching criterion for gradient estimation methods, BlockDescent should switch to the next component when it cannot find any better adversarial example, which can be measured empirically by the distortion reduction rate $\Delta$ per $T$ queries. However, because BlockDescent is a gradient-free optimization, $\Delta$ is highly variable across subsequent queries, so we cannot simply apply the same criterion as for gradient estimation methods. Consequently, to determine a better switching criterion for BlockDescent, we adopt a smoothing technique based on a Simple Moving Average to measure the distortion reduction rate $\Delta$. In practice, $\Delta$ is computed as follows:
$$\Delta\leftarrow\frac{1}{T}\sum^{n_{q}-T}_{l=n_{q}-2T}(D_{l}-D_{(l+T)})$$
(3)
where $D_{\text{l}}$ is the distance between $\bm{x}$ and $\bm{\tilde{x}}^{(k)}$ at query $l$, and $n_{\text{q}}$ is the index of the current query. If $\Delta$ is smaller than a switching threshold $\epsilon_{\text{s}}$, BlockDescent switches to the next component.
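Under the simplifying assumption that the distortion is logged at every query, Eq. (3) can be computed directly (the function name is ours):

```python
def smoothed_delta(D, n_q, T):
    """Eq. (3): simple-moving-average distortion reduction over T queries.

    D[l] is the distortion logged at query l (dense logging assumed here);
    n_q is the index of the current query.
    """
    window = range(n_q - 2 * T, n_q - T + 1)  # inclusive bounds, as in Eq. (3)
    return sum(D[l] - D[l + T] for l in window) / T
```

For a distortion trace that decreases linearly, this recovers the per-window drop; near a stall the averaged differences shrink towards zero, triggering the switch.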
IV Experiments and Evaluations
IV-A Experiment Settings and Summary
Attacks and Datasets. In this section, we evaluate the effectiveness of our RamBoAttack against current state-of-the-art attacks—Boundary Attack (Boundary) [6], Sign-OPT [14] and HopSkipJump [11]—on two standard datasets: CIFAR10 [20] and ImageNet [16]. All hyper-parameters of our RamBoAttack are described in Appendix B, and all evaluation sets are described in Sections IV-B and IV-C and Appendices C and D.
Models. For a fair comparison, for CIFAR10 we use the same CNN architecture as Cheng et al. [13, 14]. This network comprises four convolutional layers, two max-pooling layers and two fully connected layers. For evaluation on ImageNet, we use a pre-trained ResNet-50 [18] provided by torchvision [23], which obtains 76.15% Top-1 test accuracy. In addition, all images are normalized to the pixel range $[0,1]$.
Evaluation Measures. To evaluate the performance of a method, prior works use different metrics, such as a score based on the median squared $l_{2}$-norm [6] and the median $l_{2}$-norm distortion versus the number of queries [14, 11]. However, the median is not able to highlight the existence of the so-called hard cases and their impact on the performance of an attack, so the evaluation may be less reliable. Therefore, in addition to the median, we report the average $l_{2}$-norm distortion. We also adopt the Attack Success Rate (ASR) used in [11] to measure the success of crafted adversarial examples, obtained with a given query budget, at various distortion limits.
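These measures can be computed directly from per-sample distortions; this small sketch uses our own function names:

```python
import numpy as np

def attack_success_rate(distortions, limit):
    """ASR: fraction of adversarial examples, crafted within a fixed query
    budget, whose l2 distortion falls at or below a given limit."""
    d = np.asarray(distortions, dtype=float)
    return float(np.mean(d <= limit))

def distortion_summary(distortions):
    """Report mean alongside median: a mean far above the median signals
    that a few hard cases dominate the distortion distribution."""
    d = np.asarray(distortions, dtype=float)
    return {"median": float(np.median(d)), "mean": float(np.mean(d))}
```

On a toy set of distortions such as [0.1, 0.2, 0.3, 5.0], the median stays low while the mean is pulled up by the single hard case, illustrating why both are reported.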
Gradient Estimation Selection for RamBoAttack.
We apply two state-of-the-art gradient estimation methods, HopSkipJump and Sign-OPT, and derive two RamBoAttack variants: i) RamBoAttack (HSJA), composed of HopSkipJump, BlockDescent and Sign-OPT; and ii) RamBoAttack (SOPT), composed of Sign-OPT and BlockDescent. We do not use HopSkipJump for the second gradient estimation stage because we observe Sign-OPT to be more effective at refining adversarial examples—as also observed in [14].
Experimental Regime. We summarize the extensive experiments conducted with the CIFAR10 and ImageNet datasets. All experiments are performed on one RTX TITAN GPU and one 2080Ti GPU. The total running time for all experiments is approximately 1,742 hours; the running time of each experiment is described in Appendix A.
•
Robustness of RamBoAttack: From the observations in Section II-D, we aim to investigate the robustness of our RamBoAttack by assessing whether a hard set exists for it. We execute the exhaustive evaluation protocol used in Section II-D and compare the results with state-of-the-art attacks in Section IV-B.
•
Attacking Hard Sets: Most attacks demonstrate impressive performance on non-hard cases whilst struggling with hard cases. Therefore, we compare and demonstrate the performance differences—in terms of query efficiency, attack success rate and distortion—that exist on hard evaluation sets in Section IV-C.
•
Impact of the Starting Image: We observed the impact of the starting image from the target class on the success of an attack in Section II-D. Hence, the exhaustive experimental evaluations in Section IV-D explore the sensitivity of an attack's success to the choice of the attacker's starting image. This is an important consideration when trial-and-error testing of starting images to find easy samples would risk detection, or when access to samples (from the source or target class) is restricted.
•
Attack Insights: We observed clear correlations between the perturbations yielded by our RamBoAttack and salient regions of target images embedded inconspicuously in adversarial examples. We investigate these artifacts, which result from the localized perturbation method in BlockDescent, in Section IV-E.
•
Attacks Against Defended Models: Decision-based attacks are able to fool standard models. This naturally leads to the critical question of whether such attacks can also bypass defended models. Thus, the experiments in Section IV-F investigate the robustness of decision-based attacks against defense mechanisms.
•
Validation on Balanced Datasets: Constructing hard and non-hard sets for all decision-based attack methods through exhaustive evaluations is extremely time consuming. Therefore, we propose a reliable and reproducible attack evaluation strategy to validate robustness. We defer the proposed evaluation protocol and results to Appendix D and release all of the constructed sets for comparisons in future studies.
•
Untargeted Attack Validation: In addition to targeted attacks, for completeness, we evaluate our RamBoAttack and other state-of-the-art attacks on CIFAR10 and ImageNet under the untargeted attack setting. We defer these results to Appendix E.
IV-B Robustness of RamBoAttack
We carry out a comprehensive experiment similar to that in Section II-D, using distortion thresholds ranging from 0.7 to 1.1. Notably, both [11] and [14] report their methods achieving a distortion level below 0.3 after 10,000 queries; hence our chosen thresholds are conservative, since even the smallest value, 0.7, is much higher than the 0.3 achieved in those studies. The main aim is to illustrate that our RamBoAttack is able to craft more adversarial examples with distortion below each threshold in the range 0.7 to 1.1 for each sample of the entire CIFAR10 test set. We compare the performance of RamBoAttack with Sign-OPT and HopSkipJump. Fig. 7 shows a remarkably low number of hard cases for RamBoAttack: the total number of hard cases is approximately 10 times lower for distortion thresholds from 0.9 to 1.1, and approximately 2 times and 5 times lower at distortions of 0.7 and 0.8, respectively, in comparison with the other attack methods. Interestingly, as expected, hard pairs encountered by Sign-OPT and HopSkipJump are resolved by RamBoAttack, as shown in Appendix J—see Fig. 33.
IV-C Attacking Hard Sets
Evaluations on CIFAR10. From the CIFAR10 test set, we generate a hard set for Boundary Attack, called hard-set A, and another hard set for both Sign-OPT and HopSkipJump, called hard-set B. Hard-sets A and B are each composed of 400 hard sample pairs of a source image and a starting image. A hard sample is selected when the distortion between a source image and its adversarial example found after 50,000 queries is larger than or equal to 0.9. For a fair comparison, each method is employed to craft an adversarial example for each source image initialized with a given starting image. In addition, we also construct a common non-hard set for all three attacks, called non-hard set C, to compare and highlight the significant difference between evaluation results on hard and non-hard sets, as shown in Fig. 8. In particular, Fig. 8 illustrates that the average distortion versus queries achieved by these methods on the common non-hard set C is significantly lower than that obtained on their own hard sets after 50,000 queries.
We evaluate our RamBoAttack on hard-sets A and B. Fig. 9 shows that Boundary Attack, Sign-OPT and HopSkipJump do not efficiently find an adversarial example with low distortion, whereas RamBoAttack achieves better performance on the hard-sets. We defer detailed evaluations on non-hard sets to Appendix D; as expected, RamBoAttack performs comparably well on these sets. The histograms in Fig. 10 demonstrate that, for each hard-set, our attacks are able to find lower-distortion adversarial examples for most hard cases, and that the distortion distributions on both hard-sets: i) are shifted to smaller distortion regions; and ii) show significantly smaller spread or variance.
Although we observe RamBoAttack to result in fewer hard samples than the other methods at various distortion thresholds, we construct a hard set for RamBoAttack, called hard-set D, based on the same criteria used to generate hard-sets A and B, to assess whether the hard set for RamBoAttack could somehow be easier for the other attack methods. This set contains 115 sample pairs, because RamBoAttack has a much lower number of hard cases than its counterparts (namely BA, HopSkipJump and Sign-OPT) at a given distortion threshold, as illustrated in Fig. 7. We summarize the results of our evaluations in Fig. 11. As expected, RamBoAttacks are more query efficient and are able to craft adversarial examples with lower mean and median distortion, as well as achieve higher attack success rates at both query budgets. In particular, at distortion levels above 1.0, RamBoAttacks obtain much higher attack success rates than the other attacks—notably, with significant margins at the lower query budget of 25K—since RamBoAttacks employ BlockDescent to discover better solutions and craft lower-distortion adversarial examples when the gradient estimation method is unable to make progress (potentially being stuck in a bad local minimum).
Evaluation on ImageNet. To demonstrate the robustness of our attacks on a large-scale model and dataset, we compose a hard set of $120$ hard sample pairs from ImageNet. A hard sample is selected when the distortion between a source image and its adversarial example found after 50,000 queries by Sign-OPT and HopSkipJump is larger than or equal to $15$. Notably, we do not compose a hard set for Boundary Attack because it cannot yield low-distortion adversarial examples efficiently on large-scale datasets. Fig. 12 demonstrates that our RamBoAttacks outperform both Sign-OPT and HopSkipJump on the hard set. We defer detailed evaluations on non-hard sets to Appendix D; notably, RamBoAttacks achieve improved results on the more complex ImageNet dataset. The histograms in Fig. 13 show that the distortion distributions for our attacks are shifted significantly towards smaller distortion regions, with smaller variance and fewer outliers compared to the other attacks.
IV-D Impact of Various Starting Images
In this experiment, we first compose subsets A and B by selecting $100$ random hard sample pairs from hard-sets A and B, respectively (see Section IV-C for these sets). Our RamBoAttacks are compared with Boundary Attack on subset A, and with Sign-OPT and HopSkipJump on subset B. In Section IV-C, each method had to yield an adversarial example for a pair of a given source image and a given starting image. In contrast, in this experiment, the given starting image is replaced by 10 starting images randomly selected from the CIFAR10 evaluation set and correctly classified by the model. All evaluations are executed with a 50,000 query budget.
In Fig. 14, the size of each bubble denotes the standard deviation, while the y-axis value indicates the average distortion. We can see that our RamBoAttacks almost always achieve a smaller mean and standard deviation than Sign-OPT, HopSkipJump and Boundary Attack on subsets A and B. A robust method should be less susceptible to the selection of a starting image and yield a low-distortion adversarial example for most chosen starting images. We can observe from Fig. 14 that our RamBoAttacks are more robust than Sign-OPT, HopSkipJump and Boundary Attack as a consequence of being less sensitive to the chosen starting images. We also carry out this experiment on a non-hard subset C and defer these results to Appendix F.
IV-E Attack Insights
Perturbation Regions.
First, we develop a simple technique to transform a perturbation of size $C\times W\times H$ into a Perturbation Heat Map (PHM) of size $W\times H$ that visualizes the perturbation magnitude of each pixel. This transformation is defined as:
$$\displaystyle PHM_{i,j}\leftarrow\frac{\text{A}_{i,j}}{\displaystyle\max(\text{A})},$$
(4)
where $\text{A}_{i,j}=\sum^{C}_{c=1}|(\bm{x}-\bm{x_{\text{a}}})_{c,i,j}|$, with $c\in[1,C]$, $i\in[1,W]$ and $j\in[1,H]$. Second, since Grad-CAM [27] is a popular visual explanation technique for visualizing the salient features in an input image that underlie a CNN model's decision, we use it to investigate the adversarial perturbations generated by our attack and the salient features in the target image largely responsible for the model's classification of an input to a target class.
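Eq. (4) amounts to a few lines of NumPy (the function name is ours):

```python
import numpy as np

def perturbation_heat_map(x, x_a):
    """Eq. (4): collapse a C x W x H perturbation to a W x H map by summing
    absolute per-channel differences and normalizing by the maximum entry."""
    A = np.abs(x - x_a).sum(axis=0)  # A[i, j] = sum_c |(x - x_a)[c, i, j]|
    return A / A.max()
```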
In all of the attack methods, we observe the attacks to embed the target image in the source image in a deceptive manner. However, in hard cases, based on the PHM and Grad-CAM outcomes, we observe a strong connection between the adversarial perturbations found and the salient regions in starting images, as illustrated in Fig. 15 for RamBoAttacks. It shows that our RamBoAttacks are able to discover, and limit pixel manipulations to, the salient regions responsible for the classification of an input image to the target class. This salient region consists of the most discriminative local structures of a starting image with respect to a source image. Because BlockDescent is able to manipulate local regions, RamBoAttacks can exploit only this discriminative region and employ fewer adversarial perturbations than Sign-OPT and HopSkipJump to promote features of the starting image and suppress features of the source image. This may shed light on why RamBoAttack, with its core component BlockDescent, is able to tackle the so-called hard cases. Moreover, in these hard cases, we observe that our RamBoAttack yields perturbations with more semantic structure compared with Sign-OPT or HopSkipJump.
Visualization of ImageNet Hard versus Non-hard Cases. Fig. 16 illustrates adversarial examples in non-hard cases and hard cases yielded by Boundary Attack, Sign-OPT, HopSkipJump and our RamBoAttack (HSJA) after 50K and 100K queries, respectively. The second row of Fig. 16 shows each corresponding adversarial example and the third row illustrates PHM of each adversarial example. The last row shows the $l_{2}$ distortion between each adversarial example and the source image.
For non-hard cases, all methods are able to craft low-distortion adversarial examples except Boundary Attack; these adversarial examples and their corresponding distortions are comparable. In contrast, the adversarial examples in hard cases yielded by Boundary Attack, Sign-OPT and HopSkipJump have noticeably higher distortion than the one crafted by our attack. We observe Boundary Attack, Sign-OPT and HopSkipJump to experience potential entrapment when searching for a low-distortion adversarial example, even when the budget is increased to 100,000 queries.
Convergence. The problem considered in this paper is non-convex and non-differentiable; as such, guaranteeing a global minimum is not possible. However, our insight is that gradient estimation in black-box attacks is unreliable, particularly in the vicinity of local minima. To remedy this, we propose RamBoAttack as a generic method to overcome the issue: we employ a gradient estimation method, using any of the existing alternatives, in the initial descent (before BlockDescent) and subsequently in the refinement stage (after BlockDescent). Hence, employing the gradient estimation in [14], for instance, would imply that the theoretical convergence analysis therein remains valid for our method.
IV-F Attacks Against Defense Mechanisms
In this section, we evaluate the robustness of various attacks against three different defense mechanisms: region-based classification, adversarial training and defensive distillation. We choose these defense methods for their respective strengths; to illustrate, region-based classifiers can pragmatically alleviate various adversarial attacks without sacrificing classification accuracy on benign inputs [8], adversarial training [17, 22, 30] is one of the most effective defense mechanisms against adversarial attacks [5], and defensive distillation [26] employs a form of gradient masking.
As a baseline, we choose the C$\&$W attack [9], a state-of-the-art white-box attack. The adversarially trained models used in this experiment are trained with Projected Gradient Descent (PGD) adversarial training as proposed in [22]. The experiment is conducted on the balanced set drawn from CIFAR10 described in Appendix D. We evaluate our RamBoAttack and current state-of-the-art decision-based attacks at different query budgets: 5K, 10K, 25K and 50K.
Based on the results (see Appendix G for details), we observe that RamBoAttacks are more robust than Boundary, Sign-OPT, HopSkipJump and even C$\&$W (the white-box baseline) when attacking a region-based classifier. In attacks against models using adversarial training and defensive distillation, RamBoAttacks achieve performance comparable to Sign-OPT and HopSkipJump, but outperform the Boundary and C$\&$W attacks.
V Related Work
Transfer Approaches. Malicious adversaries are able to exploit the transferability of adversarial examples generated on an ensemble of DNNs to attack a target neural network, as shown by Liu et al. [21]. Papernot et al. [24] introduced a transfer attack that trains a surrogate model with outputs queried from the target model. Even though this approach does not require prior knowledge of, or full access to, the model, it must have access to a full or partial training dataset in order to train a surrogate model to synthesize adversarial examples. Moreover, for complex target models, the transfer approach has limited effectiveness [28].
Random Search Approaches.
In the decision-based setting, Brendel et al. [6] and Brunner et al. [7] proposed Boundary Attack (BA) and Biased Boundary Attack (Biased BA), respectively, which require only limited information and access to a target DNN model, such as the top-k predicted labels. Instead of sampling from a Gaussian distribution like BA, Biased BA exploits low-frequency perturbations based on Perlin noise and combines them with regional masking as well as gradients from surrogate models. Even though both work surprisingly well, they are not query efficient and require a large number of queries to explore a high-dimensional space. Another attack method, introduced by Ilyas et al. [19], exploits a discretized score based on the ranking of the adversarial label. However, since this method requires the top-k sorted labels from a deep learning model to estimate the discretized score, it cannot work in the top-1 label scenario, unlike BA or Biased BA.
Optimization Approaches.
In the score-based scenario, attackers can query a deep learning model to receive probability outputs or confidence scores. Therefore, Chen et al. [12] are able to formulate an optimization problem that directly optimizes an objective function based on these outputs; this method is considered a derivative-free optimization method. Nevertheless, in the decision-based setting, adversaries have no access to confidence scores or class probabilities from which to gain gradient information, so the optimization problem formulated by Chen et al. [12] cannot be applied. We discuss the optimization-based framework under the decision-based setting in detail in Section II-C and refer the reader there for further details.
VI Conclusion
Overall, we propose a new attack method in the decision-based setting: RamBoAttack. In contrast to modifying a whole image as in current attacks, we exploit localized perturbations to yield more effective, low-distortion adversarial examples in the so-called hard cases. Our empirical results demonstrate that our attack outperforms current state-of-the-art attacks. Interestingly, while the main proposed component, BlockDescent, significantly improves the performance and robustness of attacks in the so-called hard cases, it does not degrade performance in non-hard cases. As a result, validation results on small- and large-scale evaluation sets demonstrate that RamBoAttack is more robust and query efficient than current state-of-the-art attacks.
References
[1]
”Amazon Machine Learning”. [Online]. Available:
https://aws.amazon.com/machine-learning/
[2]
”Azure Cognitive Service”. [Online]. Available:
https://azure.microsoft.com/en-us/services/cognitive-services/
[3]
”Google Cloud Vision”. [Online]. Available:
https://cloud.google.com/vision
[4]
”IBM Watson Machine Learning”. [Online]. Available:
https://www.ibm.com/cloud/machine-learning
[5]
A. Athalye, N. Carlini, and D. Wagner, “Obfuscated gradients give a false
sense of security: Circumventing defenses to adversarial examples,”
International Conference on Machine Learning (ICML), 2018.
[6]
W. Brendel, J. Rauber, and M. Bethge, “Decision-based adversarial attacks:
Reliable attacks against black-box machine learning models,”
International Conference on Learning Representations (ICLR), 2018.
[7]
T. Brunner, F. Diehl, M. Le, and A. Knoll, “Guessing smart: Biased sampling
for efficient black-box adversarial attacks,” The IEEE International
Conference on Computer Vision (ICCV), 2019.
[8]
X. Cao and N. Z. Gong, “Mitigating Evasion Attacks to Deep Neural Networks
via Region-based Classification,” Annual Computer Security
Applications Conference (ACSAC), 2017.
[9]
N. Carlini and D. Wagner, “Towards evaluating the robustness of neural
networks,” IEEE Symposium on Security and Privacy, 2017.
[10]
C. Chen, A. Seff, A. Kornhauser, and J. Xiao, “DeepDriving: Learning
Affordance for Direct Perception in Autonomous Driving,” The IEEE
International Conference on Computer Vision (ICCV), 2015.
[11]
J. Chen, M. I. Jordan, and M. J. Wainwright, “HopSkipJumpAttack: A
Query-Efficient Decision-Based Attack,” IEEE Symposium on Security
and Privacy (SSP), 2020.
[12]
P. Chen, H. Zhang, Y. Sharma, J. Yi, and C. Hsieh, “Zoo: Zeroth order
optimization based black-box attacks to deep neural networks without training
substitute models,” ACM Workshop on Artificial Intelligence and
Security (AISec), pp. 15–26, 2017.
[13]
M. Cheng, T. Le, P. Chen, H. Zhang, C. Hsieh, and J. Yi, “Query-Efficient
Hard-label Black-box Attack: An Optimization-based Approach,”
International Conference on Learning Representations (ICLR), 2019.
[14]
M. Cheng, S. Singh, P. H. Chen, P.-Y. Chen, S. Liu, and C.-J. Hsieh,
“Sign-OPT: A Query-Efficient Hard-label Adversarial Attack,”
International Conference on Learning Representations (ICLR), 2020.
[15]
S. Cheng, Y. Dong, T. Pang, H. Su, and J. Zhu, “Improving Black-box
Adversarial Attacks with a Transfer-based Prior,” Conference on
Neural Information Processing Systems (NeurIPS), 2019.
[16]
J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei, “ImageNet: A
large-scale hierarchical image database,” Computer Vision and Pattern
Recognition (CVPR), 2009.
[17]
I. J. Goodfellow, J. Shlens, and J. Szegedy, “Explaining and harnessing
adversarial examples,” International Conference on Learning
Representations (ICLR), 2014.
[18]
K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image
recognition,” Computer Vision and Pattern Recognition (CVPR), pp.
770–778, 2016.
[19]
A. Ilyas, L. Engstrom, A. Athalye, and J. Lin, “Black-box adversarial attacks
with limited queries and information,” International Conference on
Machine Learning (ICML), 2018.
[20]
A. Krizhevsky, V. Nair, and G. Hinton. Cifar-10 (canadian institute for
advanced research). [Online]. Available:
http://www.cs.toronto.edu/~kriz/cifar.html
[21]
Y. Liu, X. Chen, C. Liu, and D. Song, “Delving into transferable adversarial
examples and black-box attacks,” International Conference on Learning
Representations (ICLR), 2017.
[22]
A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep
learning models resistant to adversarial attacks,” International
Conference on Learning Representations (ICLR), 2018. [Online]. Available:
https://arxiv.org/abs/1706.06083
[23]
S. Marcel and Y. Rodriguez, “Torchvision the machine-vision package of
torch,” Proceedings of the 18th ACM International Conference on
Multimedia, pp. 1485–1488, 2010. [Online]. Available:
https://doi.org/10.1145/1873951.1874254
[24]
N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. Celik, and A. Swami,
“Practical black-box attacks against machine learning,” ACM on Asia
Conference on Computer and Communications Security (ASIA CCS), pp. 506–519,
2017.
[25]
N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami,
“The limitations of deep learning in adversarial settings,”
Security and Privacy, 2016 IEEE European Symposium, pp. 372–387,
2016.
[26]
N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, “Distillation as a
Defense to Adversarial Perturbations Against Deep Neural Networks,”
IEEE Symposium on Security and Privacy (SSP), 2016.
[27]
R. Selvaraju, A. Das, R. Vedantam, M. Cogswell, D. Parikh, and D. Batra,
“Grad-CAM: Visual Explanations from Deep Networks via Gradient-based
Localization,” The IEEE International Conference on Computer Vision
(ICCV), 2017.
[28]
F. Suya, J. Chi, D. Evans, and Y. Tian, “Hybrid Batch Attacks: Finding
Black-box Adversarial Examples with Limited Queries,” USENIX Security
Symposium, 2020.
[29]
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and
R. Fergus, “Intriguing properties of neural networks,”
International Conference on Learning Representations (ICLR), 2014.
[Online]. Available: https://arxiv.org/abs/1312.6199
[30]
F. Tramer, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel,
“Ensemble adversarial training: Attacks and defenses,”
International Conference on Learning Representations (ICLR), 2018.
[31]
K. Xu, S. Liu, P. Zhao, P. Chen, H. Zhang, Q. Fan, D. Erdogmus, Y. Wang, and
X. Lin, “Structured Adversarial Attack: Towards General Implementation and
Better Interpretability,” International Conference on Learning
Representations (ICLR), 2019.
Appendix A Computation Time of Experiments
Appendix B Hyper-parameters and Impacts
Gradient Estimation: The main hyper-parameter $n_{\text{t}}$ used in the gradient estimation method controls when the first component terminates and switches to BlockDescent. In practice, we keep track of the number of queries executed and the distortion between the source image and the crafted sample at each iteration. This information is then used to determine the distortion reduction rate $\Delta$ over $T$ queries. On CIFAR10, if HopSkipJump or Sign-OPT is applied in the first component, $T=500$ or $T=400$, respectively, while on ImageNet, $T=2000$ or $T=1000$, respectively.
BlockDescent: The hyper-parameters used are $n=1$, initial $\delta=P_{\text{i}}(|\bm{x}-\bm{x}_{\text{s}}|)$, $m=1,\lambda=1.2,\epsilon_{r}=0.01,\epsilon_{s}=0.01$ for GradEstimation, $\epsilon_{s}=0.01$ for BlockDescent, $T=500$ and $P_{\text{i}}=P_{\text{100}}$. For the larger dataset, ImageNet, the changes are: $m=16,\lambda=2,\epsilon_{r}=0.1,\epsilon_{s}=1$ for GradEstimation, $\epsilon_{s}=0.1$ for BlockDescent, $T=1000$ and $P_{\text{i}}=P_{\text{50}}$.
The impact of parameter $\lambda$: The key parameter that may influence BlockDescent is $\lambda$, because it controls the step size (or perturbation magnitude $\delta$) for each cycle (see line 28 in Algorithm 3). For example, $\lambda$ is used to determine the step from $x^{(4)}$ to $x^{(5)}$ in Fig. 6. If $\lambda$ is small, $\delta$ reduces only slightly and thus remains relatively large after each cycle. Consequently, BlockDescent takes large steps that are likely to yield large-magnitude adversarial examples and/or miss the optimal solution. Alternatively, it may cross the decision boundary into an undesired class (the source image class in a targeted attack).
In contrast, if $\lambda$ is large, BlockDescent takes finer steps to yield adversarial samples whilst moving towards the source image, and is likely to stay in the desired class (the target class in a targeted attack). Nevertheless, the empirical result with 100 pairs of source and target class images on ImageNet shown in Fig. 17 illustrates that the overall performance of RamBoAttack is not greatly affected by $\lambda$, and at $\lambda=2$, RamBoAttack achieves the best performance.
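To make the role of $\lambda$ concrete, the following is a minimal sketch of the per-cycle step-size schedule, under the assumption (suggested by the description above, not taken from the paper's code) that $\delta$ is divided by $\lambda$ after each cycle; the exact update is given in Algorithm 3.

```python
# Hedged sketch of BlockDescent's step-size schedule (assumption: delta is
# divided by lambda after each cycle, per the description above).
def step_sizes(delta0, lam, n_cycles):
    """Perturbation magnitude delta at the start of each cycle."""
    out = [delta0]
    for _ in range(n_cycles - 1):
        out.append(out[-1] / lam)
    return out

# small lambda (1.2): delta decays slowly -> coarse steps
coarse = step_sizes(1.0, 1.2, 6)
# large lambda (2.0): delta decays quickly -> fine steps
fine = step_sizes(1.0, 2.0, 6)
```

With $\lambda=2$ the magnitude halves every cycle, matching the finer search described above.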
Appendix C Proposed Robustness Evaluation Protocol
An attack method is mounted to change the true prediction of the DNN from its ground-truth label for a given source image to each of the different target classes. For CIFAR10 with ten classes, an attack method selects each of the 1000 test set samples of a given class as a source image and attempts to find an adversarial example for each of the other target classes (of which there are 9). Consequently, we evaluate 90,000 pairs of source and starting images. Since there is no effective method to choose a starting image from a target class, for a fair evaluation we apply the same protocol used in [13, 14] to initialize an attack for each method. We execute each attack with a budget of 50,000 queries. Then we identify the hard cases of each attack method against the victim model (detailed in Section IV-A). This protocol can be generalized to other datasets by choosing $n$ samples and $m$ different target classes from that dataset, where each target class has its own starting image, as shown in Fig. 18.
Appendix D Proposed Validation Protocol for Balanced sets and Results on Non-hard Sets
Evaluation protocol. The second research question highlights the need to reliably evaluate the overall performance of various blackbox attacks under decision-based settings. On CIFAR10, most previous works choose a random evaluation set with randomly sampled images with label $y$ and select a random target label $\tilde{y}$ [14] or set $\tilde{y}=(y+1)$ mod 10 [6, 7, 13]. Nonetheless, these selection schemes may lead to an imbalanced dataset that is insufficient to evaluate the effectiveness of an attack, since it may lack the so-called hard cases that occur more frequently with specific pairs of classes. As a result, it may bias the evaluation results and fail to highlight potential weaknesses of an attack. Consequently, we were motivated to propose a more robust and reliable evaluation protocol, illustrated in Fig. 19.
On balanced sets: A balanced set comprises a balanced source set and a balanced target set. Both sets are composed of $N$ different source classes and $N$ corresponding groups. Each group is composed of $m$ different target classes, and all source and target classes are randomly chosen from the classes of a test set. In addition, all target classes are different within a group but can be repeated in other groups. Each source class has $n$ samples selected randomly from the test set. Adversaries may have one or several images from each target class and select one to initialize an attack. Each attack method aims to craft an adversarial example for every selected sample from each source class and flip its true prediction towards every target class given in the corresponding group of the balanced target set. The total number of evaluation pairs is $N\times n\times m$. For instance, every sample of source class $i$ ($\text{img: }i_{1},i_{2},\cdots,i_{n}$) is flipped towards each target class ($\text{class: }i_{1},i_{2},\cdots,i_{m}$) in the corresponding group $i$ (see Fig. 19).
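The construction above can be sketched as follows; the function name and toy data are illustrative, not taken from the paper's code.

```python
import random

def build_balanced_pairs(classes, samples_by_class, N, n, m, seed=0):
    """Hedged sketch of the balanced-set protocol: N source classes,
    n source samples each, and a group of m distinct target classes
    per source class, yielding N*n*m evaluation pairs."""
    rng = random.Random(seed)
    pairs = []
    for src in rng.sample(classes, N):
        # a group of m distinct target classes, none equal to the source
        group = rng.sample([c for c in classes if c != src], m)
        for img in rng.sample(samples_by_class[src], n):
            for tgt in group:
                pairs.append((src, img, tgt))
    return pairs

# toy example: 10 classes, 10 samples each, 9 targets -> 10*10*9 = 900 pairs
classes = list(range(10))
samples = {c: [f"img_{c}_{i}" for i in range(20)] for c in classes}
pairs = build_balanced_pairs(classes, samples, N=10, n=10, m=9)
```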
Balanced Set with CIFAR10. It is simple to carry out a comprehensive evaluation over all classes, so we choose N=10, n=10 and m=9. In addition, to demonstrate the query efficiency and effectiveness of each attack, we employ query budgets of 25,000 and 50,000 across all experiments. RamBoAttack obtains slightly better median and mean distortion than HopSkipJump and Sign-OPT at 25K and 50K, as shown in Table II. On the standard deviation metric used to measure distortion variance across an evaluation set, our RamBoAttack outperforms Boundary, HopSkipJump and Sign-OPT at query limits of 25K and 50K. In other words, our attack performs robustly across the evaluation set.
Balanced Set with ImageNet. ImageNet has 1000 distinct classes, hence carrying out a comprehensive evaluation as on CIFAR10 requires huge computing resources and time. Therefore, we choose N=200, n=1, m=5 and limit the query budgets to 25,000 and 50,000. The average distortion (on a $\log_{10}$ scale) against the number of queries and the attack success rate (ASR) at 25K and 50K query budgets achieved by RamBoAttack are better than those of the Boundary, Sign-OPT and HopSkipJump attacks, as shown in Fig. 21. As shown in Table II, on the average distortion metric, RamBoAttack obtains better results and achieves a significantly smaller standard deviation of distortion overall.
On non-hard sets:
In this section, we evaluate the performance of Sign-OPT, HopSkipJump and our RamBoAttacks on both the CIFAR10 and ImageNet non-hard sets. The common non-hard set C drawn from CIFAR10 for all methods is composed of 400 non-hard sample pairs. They are selected such that the distortion between a source image and its adversarial example found after 50,000 queries is smaller than or equal to 0.6. Likewise, a non-hard set from ImageNet is composed of $120$ non-hard sample pairs, and the distortion threshold used to select these is 7. Fig. 22 and 23 show that our attack has comparable performance to Sign-OPT and HopSkipJump on the CIFAR10 non-hard subsets whilst demonstrating improved attack performance by yielding more effective adversarial examples, especially with a 50K query budget, as seen in the higher attack success rates obtained by RamBoAttacks.
Appendix E Untargeted Attack Validation
Here, for completeness, we evaluate our RamBoAttack and other state-of-the-art attacks on two different balanced sets from CIFAR10 and ImageNet, as described in Appendix D, under an untargeted scenario. First, on the balanced set from CIFAR10, our attacks achieve comparable performance with Sign-OPT and HopSkipJump and obtain an approximately 97$\%$ success rate at a distortion of 0.5 on a 25K query budget (see Fig. 24); however, our attack method outperforms the Boundary attack. In contrast, on the balanced set selected from ImageNet, we observe that our methods achieve comparable performance with Sign-OPT but outperform the HopSkipJump and Boundary attacks, as shown in Fig. 25.
Appendix F Impact of Starting Images
In this section, we first compose a non-hard subset with 100 random non-hard sample pairs selected from non-hard set C. We also compose a balanced subset from the balanced set described in Appendix D. We then evaluate our RamBoAttack, Sign-OPT, HopSkipJump and the Boundary attack on these subsets. To conduct this experiment, for every source image and each of its target classes, we randomly select 10 different starting images, and each attack is executed with a query budget of 50K. We calculate the mean and standard deviation of distortion for each sample to measure how robustly each attack yields adversarial examples for each source image and target class pair.
In Fig. 26, the size of each bubble denotes the standard deviation, while the y-axis indicates the mean distortion value. We can see that, on the non-hard subset, the RamBoAttacks achieve comparable results to all of the state-of-the-art methods. On the balanced subset, our RamBoAttacks achieve significantly less variance (smaller bubbles) at lower distortions, while most results achieved by Sign-OPT, HopSkipJump and Boundary show larger variance (larger bubbles) and higher distortions. Consequently, our RamBoAttacks are more robust than Sign-OPT and HopSkipJump and less sensitive to the chosen starting image.
Appendix G Attack against Defended Models
In this section, we illustrate the results that we briefly mention in Section IV-F. Fig. 27 shows that the average and median distortion (on a $\log_{10}$ scale) achieved by RamBoAttacks are significantly lower than those of BA, Sign-OPT and HopSkipJump. In addition, our attack outperforms the others in terms of attack success rate (ASR) at 25K and 50K query budgets, i.e., it achieves a higher ASR on defended models under different query budgets and distortion thresholds. Based on these results, we observe our attack to be more robust than existing attacks when mounted against region-based classifiers.
The reason is that existing attack methods need to follow the decision boundary, whereas region-based classifiers are capable of correcting their predictions by uniformly generating a large number of data points at random and returning the most frequently predicted label. This capability prevents the binary search in Sign-OPT and HopSkipJump from locating the boundary exactly and results in noisy, coarse boundary estimates, causing all attack methods that aim to walk along the boundary to fail to estimate a useful gradient direction. Nevertheless, our RamBoAttacks are able to break this defense mechanism because the core component, BlockDescent, is a derivative-free optimization method that does not need to determine the boundary or estimate a gradient direction to descend.
G-A Results
Fig. 28 shows the attack success rate (ASR) at different distortion levels and query limits for various attack methods against an adversarially trained model and a defensive distillation model. In particular, for adversarial training, our RamBoAttacks achieve comparable performance with Sign-OPT and HopSkipJump while outperforming the Boundary attack within query limits of 5K, 10K or 25K. In addition, we compare the performance of our attack at different query budgets with the whitebox C$\&$W attack, used as a baseline for comparison. Notably, we do not execute the C$\&$W attack at different query settings because it is a whitebox method; instead, we use the best result produced by this attack.
We observe that our attacks obtain comparable performance with the C$\&$W attack at the 5K query budget. When the query limit is 10K or higher, our RamBoAttacks outperform the whitebox C$\&$W baseline. Nevertheless, adversarial training is still effective at reducing the ASR achieved by our method, even with a 25K query budget. Success falls from around 99$\%$ (see Fig. 24) to approximately 43$\%$ (see Fig. 28) at a distortion of 1.0 ($l_{2}$ norm). Similarly, at a distortion of 0.3, the ASR decreases from about 60$\%$ (see Fig. 24) to approximately 10$\%$ (see Fig. 28). However, we observe that as the allowed distortion increases, the attack becomes more effective. This is expected because the attack budget of the adversary is increased beyond the budget used for generating the adversarial examples employed to build the adversarially trained model.
Likewise, for defensive distillation, our RamBoAttacks achieve comparable performance with Sign-OPT and HopSkipJump whilst outperforming the Boundary attack and the C$\&$W whitebox baseline at different query budgets. These results confirm the results and findings presented in [11].
G-B C&W Attack Configuration and Results Collection
For clarity, here we describe the configuration used for the C$\&$W attack, the C$\&$W execution strategy, results collection for the C$\&$W attack and blackbox attacks.
For the C$\&$W attack, we adopt the PyTorch implementation of the C$\&$W method used in [13, 14]. In their implementation, they use a learning rate of 0.1 and 1000 iterations for all evaluations (see published code). To search for an adversarial example for an image, the method performs a binary search step to find a relevant constant $c$ within a range from 0.01 to 1000 until a successful attack is achieved. With this configuration, the C$\&$W attack is run once to always yield an adversarial example for every instance. We record the distortion of the adversarial example found.
C$\&$W Results Collection. To construct ASR vs. distortion results, at different distortion thresholds: i) we compute the number of source images in the evaluation set meeting a given distortion threshold (along the x-axis); ii) then divide this by the total number of images in the evaluation set to compute the ASR at each distortion value.
Blackbox Attack Results Collection. For the blackbox attacks, we perform a blackbox attack for each evaluation-set source image, using the set query budgets: 5K, 10K, and 25K. We record the distortion achieved by each source image with a set query budget. To construct ASR vs. distortion, at different distortion thresholds with a given query budget: i) we compute the number of source images in the evaluation set meeting a given distortion threshold (along the x-axis); and ii) then divide this by the total number of images in the evaluation set to compute the ASR at each distortion value.
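Both collection procedures reduce to the same computation; a minimal sketch (the function name is ours):

```python
def attack_success_rate(distortions, threshold):
    """Fraction of evaluation-set images whose recorded adversarial
    distortion meets the threshold (steps i and ii above)."""
    return sum(d <= threshold for d in distortions) / len(distortions)

# distortions recorded for one attack at a fixed query budget (toy values)
dists = [0.2, 0.5, 0.9, 1.5]
# ASR at each distortion threshold along the x-axis
curve = [attack_success_rate(dists, t) for t in (0.3, 1.0, 2.0)]
```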
Appendix H Perturbation Regions and Attack Insights
In this section, we provide additional results on the connection between the adversarial perturbations yielded by RamBoAttack and the salient regions visualized by the Grad-CAM tool. Effectively, all of the attack methods embed the target features within the source image where the changes are effectively unnoticeable. However, Fig. 29 illustrates that a high density of the adversarial perturbations yielded by our attack concentrates on a region that matches the salient features visualized by the Grad-CAM tool. This is possible because our attack method employs localized changes to search for adversarial examples and is able to find perturbations targeting salient features of the target class, which are applied to the source image to fool the classifier into predicting the target class.
Further, to help visualize different levels of $l_{2}$ distortion, we include Fig. 30, in which we showcase two examples of the adversarial examples crafted by RamBoAttack during the progression of the attack.
Appendix I Attack Success Rates vs Query Budgets
In this section, we show results at different perturbation budgets: $\epsilon=0.4$ and $0.6$ for hard sets A and B from CIFAR10, and $\epsilon=10$ and $20$ for the hard set selected from ImageNet. The results demonstrate that our attack is significantly more robust than other attacks within 4–11K query budgets. From 11K onwards, RamBoAttack outperforms the others. The reason is that, around this region, the gradient estimation method switches to BlockDescent, resulting in much higher attack success rates compared to the baselines. Notably, on the realistic, high-resolution ImageNet benchmark, RamBoAttack achieves significantly better results than the baselines.
Appendix J Robustness of RamBoAttack
Fig. 33 provides further detailed results on the hard cases encountered by different attack methods at distortion thresholds of 0.8, 0.9 and 1.0. Compared to the Boundary, Sign-OPT and HopSkipJump attacks, our RamBoAttacks produce a much lower number of hard cases at all distortion thresholds.
Convex Spaces I: Definition and Examples
Tobias Fritz
[email protected]
Max Planck Institute for Mathematics, Vivatsgasse 7, 53111 Bonn, Germany.
Abstract.
We propose an abstract definition of convex spaces as sets where one can take convex combinations in a consistent way. A priori, a convex space is an algebra over a finitary version of the Giry monad. We identify the corresponding Lawvere theory as the category from [pcsm] and use the results obtained there to extract a concrete definition of convex space in terms of a family of binary operations satisfying certain compatibility conditions. After giving an extensive list of examples of convex sets as they appear throughout mathematics and theoretical physics, we find that there also exist convex spaces that cannot be embedded into a vector space: semilattices are a class of examples of purely combinatorial type. In an information-theoretic interpretation, convex subsets of vector spaces are probabilistic, while semilattices are possibilistic. Convex spaces unify these two concepts.
I would like to thank the Max Planck Institute for providing an excellent research environment and financial support. Branimir Ćaćić and Jens Putzka provided helpful comments on a previous version of this paper.
Contents
1 Introduction
2 Notation
3 Defining convex spaces
1. Introduction
Looking at the history of mathematics, one easily finds an abundance of cases where the generalization of concrete structures into abstract concepts spurred a variety of interesting developments or even opened up completely new fields. Some of the most obvious examples that spring to mind are:
•
The concept of a group, which provides an abstract framework for the study of symmetries.
•
Riemannian manifolds, which were modelled after submanifolds of $\mathbb{R}^{n}$ with their intrinsic geometry.
•
Category theory, conceived as an abstract framework for cohomology theories.
•
Operators on Hilbert space, which generalize the Fourier transform and integral equations.
We now consider the notion of convexity: a subset of a vector space is convex if it contains the line segment connecting any two of its points. Perhaps surprisingly, an abstract generalization of this concept of convexity has not (yet) been proposed. To the author’s knowledge, the present literature does not contain any concept of abstract convex set that provides a nice notion of convex combinations for its elements. The aim of this paper is to remedy this omission. We note, however, that ideas similar to the ones presented here also appeared in the online discussion [Lei], at about the same time as the present work started to take shape.
We shall call a set together with a certain notion of abstract convex combinations a convex space. The most obvious examples are convex subsets of vector spaces. However, there is an entirely different class of convex spaces all of which are of a discrete nature, namely meet-semilattices, where the meet operation serves as a convex combination operation. Moreover, one can also construct examples of mixed type, where one has a semilattice as an underlying discrete structure, together with a convex subset of a vector space over each element of the semilattice. This is similar to how one can project a polytope onto its face lattice by mapping each point to the face it generates: then, the polytope becomes a “fiber bundle” over its face lattice with the face interiors as fibers. We will describe a variant of this construction in [propclass] and show that every convex space is of such a form.
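To preview the combinatorial examples, here is a minimal sketch of ours (in Python, with finite sets under intersection as a meet-semilattice): every convex combination with interior weight is simply the meet.

```python
# A meet-semilattice as a convex space: finite sets with intersection as
# the meet. For 0 < lam < 1 the convex combination is the meet itself;
# the actual value of the weight is irrelevant.
def cc_meet(lam, x, y):
    if lam == 1:
        return x
    if lam == 0:
        return y
    return x & y  # the meet

a = frozenset({1, 2, 3})
b = frozenset({2, 3, 4})
c = frozenset({3, 4, 5})
```

Since $x\wedge x=x$ and the meet is commutative and associative, the compatibility laws of Definition 3.1 below hold for any interior weights; the insensitivity to the weight is precisely what prevents such spaces from embedding into a vector space.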
Our main motivation for studying this subject comes from quantum mechanics, in particular the search for the most general framework for theories of physics. Without loss of generality, we can assume a theory of physics to be of epistemological nature; this means that what we describe is not the actual reality of the system itself, but merely the information an observer has about the system. Now information is usually incomplete, in which case the state that the observer believes the system to be in is given by a statistical ensemble. Therefore, it seems reasonable to assume that the set of the information states has the mathematical structure of convex combinations, which correspond to statistical superpositions of ensembles. This is the framework known as general probabilistic theories [Bar], where the set of information states is taken to be a convex subset of a vector space. However since the underlying vector space lacks any physical motivation and solely serves the purpose of defining the convex combinations, we felt the need to develop an abstract concept of convex spaces.
We now give an outline of the paper. After settling notation in section 2, we start section 3 by proposing our definition of convex spaces in terms of a family of binary operations satisfying certain compatibility conditions. Using concepts from category theory, we then show that these compatibility conditions imply all the relations that we expect convex combinations to have. The main step relies on the results of [pcsm]. As a first exercise in the theory of convex spaces, we then show in theorem LABEL:convspc how a convex space structure on a set is uniquely determined by the collection of those maps that preserve convex combinations.
The remaining three sections are entirely dedicated to various classes of examples. Section LABEL:examplesgeom then proceeds by giving a list of examples of “geometric type”, which refers to those convex spaces that can be written as a convex subset of a vector space. Then in section LABEL:examplescomb, we study a discrete class of convex spaces. A discrete convex space in that sense turns out to be the same thing as a semilattice. None of these can be embedded into a vector space. Finally, section LABEL:examplesmixed describes constructions of convex spaces that have both a geometric and a combinatorial flavor. This concludes the paper. We hope that the long list of examples explains why we deem convex spaces worthy of study.
2. Notation
The typewriter font denotes a category, for example $\mathtt{Set}$. As in [pcsm], we write $[n]$ as shorthand for the $n$-element set $\{1,\ldots,n\}$. The symbol $\ast$ stands for any one-element set and also for the unique convex space over that set. For a real number $\alpha\in[0,1]$, we set $\overline{\alpha}\equiv 1-\alpha$. This notation increases readability in formulas involving binary convex combinations. The $\overline{\,\cdot\,}$ operation satisfies the important relations
$$\overline{\overline{\alpha}}=\alpha,\quad\overline{\alpha+\beta}=\overline{\alpha}+\overline{\beta}-1,\quad\overline{\alpha\beta}=\overline{\alpha}+\overline{\beta}-\overline{\alpha}\,\overline{\beta}.$$
Given a set $X\in\mathtt{Set}$, we call
$$\Delta_{X}\equiv\left\{f:X\rightarrow[0,1]\>\Bigg{|}\>f\textrm{ has finite support and }\sum_{x\in X}f(x)=1\right\}$$
the simplex over $X$. We also consider $\Delta_{X}$ as the set of all finite formal convex combinations $\sum_{i}\lambda_{i}\underline{x}_{i}$ with $x_{i}\in X$, where we use the underline notation $\underline{x}_{i}$ to emphasize that the sum is formal; this allows us to distinguish $x\in X$ from $\underline{x}\in\Delta_{X}$. Two formal convex combinations represent the same element of $\Delta_{X}$ if and only if they assign the same total weight to each element $x\in X$.
3. Defining convex spaces
We first define convex spaces and convex maps before turning to a formal justification of these definitions and proving a certain uniqueness property of a convex space structure.
Definition 3.1.
A convex space is given by a set $\mathcal{C}$ together with a family of binary convex combination operations
$$cc_{\lambda}:\mathcal{C}\times\mathcal{C}\longrightarrow\mathcal{C},\quad\lambda\in[0,1]$$
that satisfies
•
The unit law:
$$cc_{0}(x,y)=y$$
(3.1)
•
Idempotency:
$$cc_{\lambda}(x,x)=x$$
(3.2)
•
Parametric commutativity:
$$cc_{\lambda}(x,y)=cc_{1-\lambda}(y,x)$$
(3.3)
•
Deformed parametric associativity:
$$cc_{\lambda}(cc_{\mu}(x,y),z)=cc_{\widetilde{\lambda}}(x,cc_{\widetilde{\mu}}(y,z))$$
(3.4)
with
$$\widetilde{\lambda}=\lambda\mu,\qquad\widetilde{\mu}=\left\{\begin{array}{cl}\dfrac{\lambda\overline{\mu}}{\overline{\lambda\mu}}&\textrm{ if }\lambda\mu\neq 1\\ \textrm{arbitrary}&\textrm{ if }\lambda=\mu=1.\end{array}\right.$$
The most obvious example for this kind of structure is a vector space, with convex combinations defined via the vector space structure as $cc_{\lambda}(x,y)\equiv\lambda x+\overline{\lambda}y$.
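As a sanity check, the following sketch (ours, not from the paper) verifies the four laws numerically for this vector-space example, including the deformed associativity with its $\widetilde{\lambda},\widetilde{\mu}$ parameters:

```python
# Convex combinations in a vector space: cc(lam, x, y) = lam*x + (1-lam)*y,
# here on tuples representing vectors in R^2.
def cc(lam, x, y):
    return tuple(lam * xi + (1 - lam) * yi for xi, yi in zip(x, y))

x, y, z = (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)
lam, mu = 0.3, 0.6

# deformed parametric associativity (3.4): the tilde parameters
tl = lam * mu
tm = lam * (1 - mu) / (1 - lam * mu)
lhs = cc(lam, cc(mu, x, y), z)
rhs = cc(tl, x, cc(tm, y, z))
```

Both sides expand to $\lambda\mu x+\lambda\overline{\mu}y+\overline{\lambda}z$, as the coefficients $\overline{\lambda\mu}\cdot\widetilde{\mu}=\lambda\overline{\mu}$ and $\overline{\lambda\mu}\cdot(1-\widetilde{\mu})=\overline{\lambda}$ confirm.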
Definition 3.1 is the picture of convex space that we shall work with. Usually, a convex space will be referred to simply by its underlying set $\mathcal{C}$, with the convex combination operations $cc_{\lambda}$ being implicit. Also, instead of $cc_{\lambda}(x,y)$, we will usually use the more suggestive notation
$$\lambda x+\overline{\lambda}y\equiv cc_{\lambda}(x,y)$$
in which the laws (3.1)–(3.4) now read
$$0x+\overline{0}y=y$$
(3.5)
$$\lambda x+\overline{\lambda}x=x$$
(3.6)
$$\lambda x+\overline{\lambda}y=\overline{\lambda}y+\overline{\overline{\lambda}}x$$
(3.7)
$$\lambda\left(\mu x+\overline{\mu}y\right)+\overline{\lambda}z=\lambda\mu x+\overline{\lambda\mu}\left(\frac{\lambda\overline{\mu}}{\overline{\lambda\mu}}y+\frac{\overline{\lambda}}{\overline{\lambda\mu}}z\right)\quad(\lambda\mu\neq 1)$$
(3.8)
Also, we will occasionally use convex combinations
$$\sum_{i=1}^{n}\lambda_{i}x_{i},\qquad\lambda_{i}\geq 0,\quad\sum_{i=1}^{n}\lambda_{i}=1$$
of more than two elements. These are to be interpreted as iterated binary convex combinations. Appropriate normalizations have to be inserted, e.g. for $n=3$,
$$\lambda_{1}x_{1}+\lambda_{2}x_{2}+\lambda_{3}x_{3}=\overline{\lambda}_{3}\left(\frac{\lambda_{1}}{\lambda_{1}+\lambda_{2}}x_{1}+\frac{\lambda_{2}}{\lambda_{1}+\lambda_{2}}x_{2}\right)+\lambda_{3}x_{3}.$$
(Note that $\overline{\lambda}_{3}=\lambda_{1}+\lambda_{2}$.) Deformed parametric associativity (3.4) then expresses the fact that this reduction to binary convex combinations does not depend on the order of bracketing.
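The reduction to binary operations can be sketched recursively (a sketch of ours; `cc` below is the vector-space example):

```python
# Binary convex combination in a vector space, on tuples in R^2.
def cc(lam, x, y):
    return tuple(lam * xi + (1 - lam) * yi for xi, yi in zip(x, y))

# Reduce an n-ary convex combination to iterated binary ones: peel off the
# last point with its weight and renormalize the remaining weights.
def cc_n(weights, points):
    if len(points) == 1:
        return points[0]
    lam_last = weights[-1]
    if lam_last == 1:
        return points[-1]
    rest = [w / (1 - lam_last) for w in weights[:-1]]
    return cc(lam_last, points[-1], cc_n(rest, points[:-1]))

# 0.2*(1,0) + 0.3*(0,1) + 0.5*(1,1) = (0.7, 0.8)
v = cc_n([0.2, 0.3, 0.5], [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)])
```

Deformed parametric associativity (3.4) is exactly what guarantees that the result does not depend on which point is peeled off first.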
Definition 3.2.
Given convex spaces $\mathcal{C}$ and $\mathcal{C}^{\prime}$, a convex map from $\mathcal{C}$ to $\mathcal{C}^{\prime}$ is a map $f:\mathcal{C}\rightarrow\mathcal{C}^{\prime}$ that commutes with the convex combination operations:
$$f(\lambda x+\overline{\lambda}y)=\lambda f(x)+\overline{\lambda}f(y).$$
Convex spaces together with convex maps form the category of convex spaces $\mathtt{ConvSpc}$.
For example, a map between vector spaces is convex if and only if it is affine. Therefore in this context, the words “affine” and “convex” will be used synonymously.
We now turn to the technical task of justifying these definitions: why are the compatibility conditions (3.1) to (3.4) sufficient to guarantee that the binary operations have all the properties we expect convex combinations to have? A less formally inclined reader may want to skip the remainder of this section.
So, what should a convex space formally be? Clearly, it has to be a set $\mathcal{C}$ together with some additional structure. This additional structure should make precise the intuition of an assignment
$$\mathfrak{m}:\Delta_{\mathcal{C}}\longrightarrow\mathcal{C},\qquad\sum_{i=1}^{n}\lambda_{i}\underline{x}_{i}\mapsto\sum_{i=1}^{n}\lambda_{i}x_{i},$$
(3.9)
mapping a formal convex combination $\left(\sum_{i=1}^{n}\lambda_{i}\underline{x}_{i}\right)\in\Delta_{\mathcal{C}}$ to an actual convex combination $\left(\sum_{i=1}^{n}\lambda_{i}x_{i}\right)\in\mathcal{C}$, in such a way that the properties
$$\mathfrak{m}(\underline{x})=x,\qquad\mathfrak{m}\left(\sum_{i=1}^{n}\lambda_{i}\,\underline{\mathfrak{m}\left(\sum_{j=1}^{m_{i}}\mu_{ij}\underline{x}_{ij}\right)}\,\right)=\mathfrak{m}\left(\sum_{i=1}^{n}\sum_{j=1}^{m_{i}}\lambda_{i}\mu_{ij}\underline{x}_{ij}\right)$$
(3.10)
hold. This intuition is straightforward to make precise using the theory of monads and their algebras. (As pointed out by Leinster [Lei], defining convex spaces in terms of an operad does not yield all the properties that one desires; in particular, taking some convex combination of a point with itself would not necessarily give that point back. Therefore, defining them as algebras of a monad seems like the most canonical choice.) The following definition is a discrete version of the Giry monad studied in categorical probability theory [Gir].
Definition 3.3 (the finitary Giry monad).
We define the simplex functor $\Delta$ to be given by
$$\Delta:\mathtt{Set}\rightarrow\mathtt{Set},\quad\mathcal{C}\mapsto\Delta_{\mathcal{C}},\quad\left(\mathcal{C}\stackrel{{\scriptstyle f}}{{\rightarrow}}\mathcal{D}\right)\mapsto\left(\sum_{i}\lambda_{i}\underline{x}_{i}\mapsto\sum_{i}\lambda_{i}\underline{f(x_{i})}\right).$$
Then the finitary Giry monad $\mathscr{G}_{\mathrm{fin}}=(\Delta,\eta,\mu)$ is defined by the unit natural transformation
$$\eta_{\mathcal{C}}:\mathcal{C}\rightarrow\Delta_{\mathcal{C}},\quad x\mapsto\underline{x}$$
and the multiplication transformation
$$\mu_{\mathcal{C}}:\Delta_{\Delta_{\mathcal{C}}}\rightarrow\Delta_{\mathcal{C}},\quad\sum_{i=1}^{n}\lambda_{i}\,\underline{\sum_{j=1}^{m_{i}}\mu_{ij}\underline{x}_{ij}}\mapsto\sum_{i=1}^{n}\lambda_{i}\sum_{j=1}^{m_{i}}\mu_{ij}\underline{x}_{ij}.$$
An algebra of $\mathscr{G}_{\mathrm{fin}}$ is given by a set $\mathcal{C}$ together with a structure map $\mathfrak{m}:\Delta_{\mathcal{C}}\rightarrow\mathcal{C}$, such that the diagrams in (3.11) commute.
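The unit and multiplication maps above can be encoded concretely. The following is an illustrative Python sketch (not part of the formalism): a formal convex combination is represented as a tuple of (point, weight) pairs, and `multiply` flattens a formal combination of formal combinations by multiplying weights, as in (3.10).

```python
from fractions import Fraction

def unit(x):
    """eta_C : embed a point x as the trivial formal combination underline{x}."""
    return ((x, Fraction(1)),)

def multiply(formal_of_formals):
    """mu_C : flatten a formal combination of formal combinations,
    collecting coefficients lambda_i * mu_ij on each point x_ij."""
    acc = {}
    for inner, lam in formal_of_formals:
        for x, w in inner:
            acc[x] = acc.get(x, Fraction(0)) + lam * w
    return tuple(sorted(acc.items()))

# Example: mu( 1/2 * underline(1/2 a + 1/2 b) + 1/2 * underline(b) )
#        = 1/4 a + 3/4 b
inner1 = (('a', Fraction(1, 2)), ('b', Fraction(1, 2)))
inner2 = unit('b')
flat = multiply(((inner1, Fraction(1, 2)), (inner2, Fraction(1, 2))))
print(flat)  # (('a', Fraction(1, 4)), ('b', Fraction(3, 4)))
```

Flattening a trivial combination `multiply(((unit(x), 1),))` returns `unit(x)`, mirroring one of the monad unit laws.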
An Enhanced DMT-optimality Criterion for STBC-schemes for Asymmetric MIMO Systems
K. Pavan Srinath and B. Sundar Rajan,
This work was supported in part by the DRDO-IISc program on Advanced Research in Mathematical Engineering through research grants, and by the INAE Chair Professorship to B. Sundar Rajan. The material in this paper was presented in part at the IEEE International Symposium on Information Theory (ISIT 2012), Cambridge, MA, USA, July 01–06, 2012.
K. Pavan Srinath is with Broadcom Communication Technologies Pvt. Ltd., Bangalore. This work was carried out when he was with the Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore. Email: [email protected]. B. Sundar Rajan is with the Department of ECE, Indian Institute of Science, Bangalore - 560012. Email: [email protected].
Abstract
For any $n_{t}$ transmit, $n_{r}$ receive antenna ($n_{t}\times n_{r}$) MIMO system in a quasi-static Rayleigh fading environment, it was shown by Elia et al. that linear space-time block code-schemes (LSTBC-schemes) which have the non-vanishing determinant (NVD) property are diversity-multiplexing gain tradeoff (DMT)-optimal for arbitrary values of $n_{r}$ if they have a code-rate of $n_{t}$ complex dimensions per channel use. However, for asymmetric MIMO systems (where $n_{r}<n_{t}$), with the exception of a few LSTBC-schemes, it is unknown whether general LSTBC-schemes with NVD and a code-rate of $n_{r}$ complex dimensions per channel use are DMT-optimal. In this paper, an enhanced sufficient criterion for any STBC-scheme to be DMT-optimal is obtained, and using this criterion, it is established that any LSTBC-scheme with NVD and a code-rate of $\min\{n_{t},n_{r}\}$ complex dimensions per channel use is DMT-optimal. This result settles the DMT-optimality of several well-known, low-ML-decoding-complexity LSTBC-schemes for certain asymmetric MIMO systems.
Index Terms: Asymmetric MIMO system, diversity-multiplexing gain tradeoff, linear space-time block codes, low ML-decoding complexity, non-vanishing determinant, outage-probability, STBC-schemes.
I Introduction and Background
Space-time coding (STC) [1] for multiple-input, multiple-output (MIMO) antenna systems has been extensively studied as a tool to exploit the diversity provided by the MIMO fading channel. MIMO systems permit reliable data transmission at rates higher than those achievable with single-input, single-output (SISO) antenna systems. In particular, when the delay requirement of the system is less than the coherence time of the channel (the time frame during which the channel gains are constant and independent of the channel gains of other time frames), Zheng and Tse showed in their seminal paper [2] that for the Rayleigh fading channel with STC, there exists a fundamental tradeoff between diversity gain and multiplexing gain (see Definition 3 and Definition 4, Section II), referred to as the “diversity-multiplexing gain tradeoff” (DMT). The optimal DMT was also characterized under the assumption that the block length of the space-time block codes (STBCs) of the scheme (see Definition 2, Section II, for a definition of “STBC-scheme”) is at least $n_{t}+n_{r}-1$, where $n_{t}$ and $n_{r}$ are the number of transmit and receive antennas, respectively. The first explicit DMT-optimal STBC-scheme was presented in [3] for $2$ transmit antennas, and subsequently, in another landmark paper [4], explicit DMT-optimal STBC-schemes consisting of both square (minimal-delay) and rectangular STBCs from cyclic division algebras were presented for arbitrary values of $n_{t}$ and $n_{r}$. In the same paper, a sufficient criterion for achieving DMT-optimality was proposed for general STBC-schemes. Consider the class of STBC-schemes based on linear STBCs (LSTBCs), also popularly known in the literature as linear dispersion codes [5], with a code-rate of $n_{t}$ complex dimensions per channel use (see Definition 6, Section IV, for a formal definition of “code-rate”, and Definition 7 for a definition of “LSTBC-scheme”; henceforth in this paper, an LSTBC-scheme with a code-rate of $k$ complex dimensions per channel use is referred to as a “rate-$k$ LSTBC-scheme”). For this class, the criterion of [4] translates to the non-vanishing determinant (NVD) property (see Definition 8, Section IV), a term first coined in [6], being sufficient for DMT-optimality. It was later shown in [7] that the DMT-optimal LSTBC-schemes constructed in [4] are also approximately universal for an arbitrary number of receive antennas. Several other rate-$n_{t}$ LSTBC-schemes with NVD exist in the literature; see, for example, [8], [9], [10], and references therein. It is to be noted that the sufficient criterion presented in [4] for DMT-optimality applies only to LSTBC-schemes whose code-rate equals $n_{t}$ complex dimensions per channel use.
A few LSTBC-schemes with code-rate less than $n_{t}$ complex dimensions per channel use have been shown to be DMT-optimal for certain asymmetric MIMO systems. The Alamouti code-scheme [11] for the $2\times 1$ system is known to be DMT-optimal [2] while diagonal rate-1 STBC-schemes with NVD have been shown to be DMT-optimal for arbitrary $n_{t}\times 1$ systems [7]. In [12], the DMT-optimality of a few rate-1 LSTBC-schemes for certain multiple-input, single-output (MISO) systems has been established, including that of the full-diversity quasi-orthogonal STBC-scheme of Su and Xia [13] for the $4\times 1$ system. For asymmetric MIMO systems with $n_{r}\geq 2$, the only known DMT-optimal, rate-$n_{r}$ LSTBC-schemes are the rectangular LSTBC-schemes of [14], which exist for $n_{r}=2$ and $n_{r}=n_{t}-1$. Whether every rate-$n_{r}$ LSTBC-scheme that is equipped with the non-vanishing determinant property is DMT-optimal for an asymmetric $n_{t}\times n_{r}$ MIMO system has been an open problem up to now.
I-A Motivation for our results
It is natural to question the need for establishing the DMT-optimality of rate-$n_{r}$ LSTBC-schemes for asymmetric MIMO systems when there already exist DMT-optimal, rate-$n_{t}$ LSTBC-schemes for arbitrary values of $n_{t}$ and $n_{r}$. However, it is important to note that all the known results on DMT-optimality of explicit LSTBC-schemes are with regard to maximum-likelihood (ML) decoding, and in the literature, barring a few notable exceptions (for example, [14]), the issue of ML-decoding complexity is generally excluded from the discussion of DMT-optimal LSTBC-schemes. There exist several low-ML-decoding-complexity LSTBC-schemes that have a code-rate less than $n_{t}$ complex dimensions per channel use and are equipped with the NVD property. Examples of these for asymmetric MIMO systems are rate-$n_{r}$ LSTBC-schemes based on fast-decodable LSTBCs [15] from cyclic division algebras, LSTBC-schemes from co-ordinate interleaved orthogonal designs [16], and four-group decodable LSTBC-schemes [17]-[20]. For these LSTBC-schemes, the sufficient criterion provided in [4] for DMT-optimality, which requires that LSTBCs have a code-rate of $n_{t}$ complex dimensions per channel use irrespective of the number of receive antennas, is not applicable. Hence, there is a clear need for a new DMT-criterion that takes into account LSTBC-schemes (with NVD) whose code-rate is less than $n_{t}$ complex dimensions per channel use.
Further, for asymmetric MIMO systems, the standard sphere decoder [21] or its variations (see, for example, [22], [23], and references therein) cannot be used in its entirety to decode rate-$n_{t}$ LSTBCs. For an $n_{t}\times n_{r}$ MIMO system, the standard sphere decoder can be used to decode LSTBCs whose code-rate is at most $n_{min}=\min\{n_{t},n_{r}\}$ complex dimensions per channel use. (When a rate-$n_{t}$ STBC is used in an asymmetric MIMO system, there exist techniques (see [24] and references therein) to make use of the sphere decoder; however, these are either sub-optimal decoding techniques with no guarantee of preserving the diversity order of ML-decoding, or demand a high computational complexity when ML-decoding is employed.) Recent results on fixed-complexity sphere decoders [25], [26] are extremely promising from the point of view of low-complexity decoding. In particular, it has been shown analytically in [26] that the fixed-complexity sphere decoder, while providing quasi-ML performance, achieves the same diversity order as ML-decoding with a worst-case complexity of the order of $M^{\sqrt{K}}$, where $M$ is the number of possibilities for each complex symbol (or the size of the signal constellation employed when each symbol is encoded independently), and $K$ is the dimension of the search. On the other hand, an exhaustive ML-search would incur a complexity of the order of $M^{K}$. In the same paper, it has also been shown that the gap between quasi-ML performance and the actual ML performance approaches zero at high signal-to-noise ratio, independent of the constellation employed. In any case, it has been established in [27] that the exact ML-decoding complexity of the sphere decoder is less than that of other known ML-decoders at high SNR. This motivates one to seek DMT-optimal LSTBC-schemes whose LSTBCs are entirely sphere decodable, i.e., have a code-rate that is at most $n_{min}$ complex dimensions per channel use.
In this paper, we present a new criterion for DMT-optimality of general STBC-schemes using which we prove the DMT-optimality of many low-ML-decoding-complexity LSTBC-schemes [15]-[20] for asymmetric MIMO systems. Since the new criterion enables us to identify a larger class of DMT-optimal LSTBC-schemes which was not possible using the DMT-criterion in [4], we call our criterion an enhanced one.
I-B Contributions and paper organization
The contributions of this paper are the following.
1.
We present a new criterion for DMT-optimality of general STBC-schemes. This criterion enables us to encompass all rate-$n_{min}$ LSTBC-schemes with NVD, which was not possible using the DMT-criterion of [4].
2.
In the context of LSTBCs, we show that a code-rate of $n_{min}$ complex dimensions per channel use is necessary for LSTBC-schemes to be DMT-optimal, and for asymmetric MIMO systems, we show that rate-$n_{r}$ LSTBC-schemes are DMT-optimal if they have the NVD property.
3.
We show that some well-known low-ML-decoding-complexity LSTBC-schemes (STBC-schemes based on LSTBCs with low ML-decoding complexity) are DMT-optimal for certain asymmetric MIMO systems (see Table I).
The rest of the paper is organized as follows. Section II deals with the system model and relevant definitions, while Section III presents the main result of the paper: an enhanced sufficient criterion for DMT-optimality of general STBC-schemes. Section IV gives a brief introduction to linear STBCs along with a few relevant definitions, and provides a new criterion for DMT-optimality of LSTBCs for asymmetric MIMO systems. A discussion on the DMT-optimality of some well-known low-ML-decoding-complexity LSTBC-schemes is presented in Section V. Concluding remarks constitute Section VI.
Notation: Throughout the paper, bold, lowercase letters are used to denote vectors, and bold, uppercase letters are used to denote matrices. For a complex matrix X, its Hermitian transpose, transpose, trace, determinant, rank, and Frobenius norm are denoted by $\textbf{X}^{H}$, $\textbf{X}^{\textrm{T}}$, $tr(\textbf{X})$, $det(\textbf{X})$, $Rank(\textbf{X})$, and $\|\textbf{X}\|$, respectively. The sets of all real numbers, complex numbers, and integers are denoted by $\mathbb{R}$, $\mathbb{C}$, and $\mathbb{Z}$, respectively. The real and the imaginary parts of a complex-valued vector x are denoted by $\textbf{x}_{I}$ and $\textbf{x}_{Q}$, respectively. The cardinality of a set $\mathcal{S}$ is denoted by $|\mathcal{S}|$, while $\mathcal{S}\times\mathcal{T}$ denotes the Cartesian product of sets $\mathcal{S}$ and $\mathcal{T}$, i.e., $\mathcal{S}\times\mathcal{T}=\{(s,t)~{}|~{}s\in\mathcal{S},t\in\mathcal{T}\}$. The notation $\mathcal{S}\subset\mathcal{T}$ means that $\mathcal{S}$ is a proper subset of $\mathcal{T}$. The $T\times T$ identity matrix is denoted by $\textbf{I}_{T}$, and O denotes the null matrix of appropriate dimension.
For a complex number $x$, its complex conjugate is denoted by $x^{*}$, and the $\check{(.)}$ operator acting on $x$ is defined as
$$\check{x}\triangleq\left[\begin{array}{rr}x_{I}&-x_{Q}\\ x_{Q}&x_{I}\end{array}\right].$$
The $\check{(.)}$ operator can similarly be applied to any matrix $\textbf{X}\in\mathbb{C}^{n\times m}$ by replacing each entry $x_{ij}$ with $\check{x}_{ij}$, $i=1,2,\cdots,n$, $j=1,2,\cdots,m$, resulting in a matrix denoted by $\check{\textbf{X}}\in\mathbb{R}^{2n\times 2m}$. Given a complex vector $\textbf{x}=[x_{1},x_{2},\cdots,x_{n}]^{\textrm{T}}$, $\tilde{\textbf{x}}$ is defined as $\tilde{\textbf{x}}\triangleq[x_{1I},x_{1Q},\cdots,x_{nI},x_{nQ}]^{\textrm{T}}$. It follows that for matrices $\textbf{A}\in\mathbb{C}^{m\times n}$, $\textbf{B}\in\mathbb{C}^{n\times p}$, and $\textbf{C}=\textbf{AB}$, the equalities $\check{\textbf{C}}=\check{\textbf{A}}\check{\textbf{B}}$ and $\widetilde{vec(\textbf{C})}=(\textbf{I}_{p}\otimes\check{\textbf{A}})\widetilde{vec(\textbf{B})}$ hold.
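The identity $\check{\textbf{C}}=\check{\textbf{A}}\check{\textbf{B}}$ is easy to verify numerically. The following is an illustrative sketch assuming NumPy, with `check` implementing the $\check{(.)}$ operator entrywise; it holds because each $2\times 2$ block is the standard real representation of a complex number, which respects products and sums.

```python
import numpy as np

def check(X):
    """Replace each complex entry x by [[x_I, -x_Q], [x_Q, x_I]],
    producing a real matrix of twice the size in each dimension."""
    n, m = X.shape
    out = np.zeros((2 * n, 2 * m))
    out[0::2, 0::2] = X.real
    out[0::2, 1::2] = -X.imag
    out[1::2, 0::2] = X.imag
    out[1::2, 1::2] = X.real
    return out

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
B = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))

# check(AB) equals check(A) check(B) up to floating-point error
assert np.allclose(check(A @ B), check(A) @ check(B))
```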
For a complex random matrix X, $\mathbb{E}_{\textbf{X}}(f(\textbf{X}))$ denotes the expectation of a real-valued function $f(\textbf{X})$ over X. For any real number $x$, $\lfloor x\rfloor$ denotes the largest integer not greater than $x$, and $x^{+}=\max\{0,x\}$. The Q-function of $x$ is denoted by $Q(x)$ and given as
$$\displaystyle Q(x)=\int_{x}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{t^{2}}{2}}dt.$$
Throughout the paper, $\log x$ denotes the logarithm of $x$ to base 2, and $\log_{e}x$ denotes the natural logarithm of $x$. For real-valued functions $f(x)$ and $g(x)$, we write $f(x)=o\left(g(x)\right)$ as $x\to\infty$ if and only if
$$\lim_{x\to\infty}\frac{f(x)}{g(x)}=0.$$
Further, $f(x)\doteq x^{b}$ implies that $\underset{x\to\infty}{\operatorname{lim}}\frac{\log f(x)}{\log x}=b$, and $\dot{\leq}$, $\dot{\geq}$, $\dot{>}$, $\dot{<}$ are similarly defined.
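The exponential-equality notation can be illustrated numerically. As a sketch (assuming NumPy; the function $f$ is an arbitrary example, not from the paper), $f(x)=5x^{3}+x$ satisfies $f(x)\doteq x^{3}$: the ratio $\log f(x)/\log x$ tends to $3$, slowly, because of the constant factor $5$.

```python
import numpy as np

# f(x) = 5x^3 + x satisfies f(x) ≐ x^3, since log f(x) / log x -> 3.
for x in [1e3, 1e6, 1e9]:
    f = 5 * x**3 + x
    print(np.log(f) / np.log(x))  # approaches 3 as x grows
```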
II System Model
We consider an $n_{t}$ transmit antenna, $n_{r}$ receive antenna MIMO system ($n_{t}\times n_{r}$ system) with perfect channel-state information available at the receiver (CSIR) alone. The channel is assumed to be quasi-static with Rayleigh fading. The system model is
$$\textbf{Y}=\textbf{HX}+\textbf{N},$$
(1)
where $\textbf{Y}\in\mathbb{C}^{n_{r}\times T}$ is the received signal matrix, $\textbf{X}\in\mathbb{C}^{n_{t}\times T}$ is the codeword matrix that is transmitted over a block of $T$ channel uses, $\textbf{H}\in\mathbb{C}^{n_{r}\times n_{t}}$ and $\textbf{N}\in\mathbb{C}^{n_{r}\times T}$ are respectively the channel matrix and the noise matrix with entries independently and identically distributed (i.i.d.) circularly symmetric complex Gaussian random variables with zero mean and unit variance. The average signal-to-noise ratio at each receive antenna is denoted by $SNR$.
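The model (1) can be simulated in a few lines. The following illustrative sketch (assuming NumPy; the codeword matrix X is a placeholder, not a code from the paper) draws one quasi-static channel realization with i.i.d. $\mathcal{CN}(0,1)$ entries for H and N.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nr, T = 2, 2, 2  # illustrative dimensions

def cn(shape):
    """i.i.d. circularly symmetric complex Gaussian, zero mean, unit variance."""
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

H = cn((nr, nt))   # channel matrix, constant over the T channel uses
X = np.eye(nt)     # placeholder codeword matrix of size nt x T
N = cn((nr, T))    # noise matrix
Y = H @ X + N      # received signal matrix, as in (1)
```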
Definition 1
(Space-time block code) A space-time block code (STBC) of block-length $T$ for an $n_{t}$ transmit antenna MIMO system is a finite set of complex matrices of size $n_{t}\times T$.
Definition 2
(STBC-scheme) An STBC-scheme $\mathcal{X}$ is defined as a family of STBCs indexed by $SNR$, each STBC of block length $T$ so that $\mathcal{X}=\{\mathcal{X}(SNR)\}$, where the STBC $\mathcal{X}(SNR)$ corresponds to a signal-to-noise ratio of $SNR$ at each receive antenna.
At a signal-to-noise ratio of $SNR$, the codeword matrices of $\mathcal{X}(SNR)$ are transmitted over the channel. Assuming that all the codeword matrices of $\mathcal{X}(SNR)\triangleq\{\textbf{X}_{i}(SNR),i=1,\cdots,|\mathcal{X}(SNR)|\}$ are equally likely to be transmitted, we have
$$\frac{1}{|\mathcal{X}(SNR)|}\sum_{i=1}^{|\mathcal{X}(SNR)|}\|\textbf{X}_{i}(SNR)\|^{2}=T~{}SNR.$$
(2)
It follows that for the STBC-scheme $\mathcal{X}$,
$$\|\textbf{X}_{i}(SNR)\|^{2}~{}~{}\dot{\leq}~{}~{}SNR,~{}~{}\forall~{}i=1,2,\cdots,|\mathcal{X}(SNR)|.$$
(3)
The bit rate of transmission is $(1/T)\log|\mathcal{X}(SNR)|$ bits per channel use. Henceforth in this paper, a codeword $\textbf{X}_{i}(SNR)\in\mathcal{X}(SNR)$ is simply referred to as $\textbf{X}_{i}\in\mathcal{X}(SNR)$.
Definition 3
(Multiplexing gain) Let the bit rate of transmission of the STBC $\mathcal{X}(SNR)$ in bits per channel use be denoted by $R(SNR)$. Then, the multiplexing gain $r$ of the STBC-scheme is defined [2] as
$$r=\lim_{SNR\to\infty}\frac{R(SNR)}{\log SNR}.$$
Equivalently, $R(SNR)=r\log SNR+o(\log SNR)$ where, for reliable communication, $r\in[0,n_{min}]$ [2].
Definition 4
(Diversity gain) Let the probability of codeword error of the STBC $\mathcal{X}(SNR)$ be denoted by $P_{e}(SNR)$. Then, the diversity gain $d(r)$ of the STBC-scheme corresponding to a multiplexing gain of $r$ is given by
$$d(r)=-\lim_{SNR\to\infty}\frac{\log P_{e}(SNR)}{\log SNR}.$$
For an $n_{t}\times n_{r}$ MIMO system, the maximum achievable diversity gain is $n_{t}n_{r}$.
Definition 5
(Optimal DMT curve [2]) The optimal DMT curve $d^{*}(r)$ that is achievable with STBC-schemes for an $n_{t}\times n_{r}$ MIMO system is a piecewise-linear function connecting the points $\left(k,d(k)\right)$, $k=0,1,\cdots,n_{min}$, where
$$d(k)=(n_{t}-k)(n_{r}-k).$$
(4)
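The optimal DMT curve is straightforward to compute by piecewise-linear interpolation of the corner points $(k,(n_{t}-k)(n_{r}-k))$. A minimal sketch assuming NumPy (`dmt` is an illustrative helper, not from the paper):

```python
import numpy as np

def dmt(r, nt, nr):
    """Optimal DMT d*(r): piecewise-linear interpolation of the points
    (k, (nt - k)(nr - k)), k = 0, 1, ..., min(nt, nr)."""
    nmin = min(nt, nr)
    ks = np.arange(nmin + 1)
    return np.interp(r, ks, (nt - ks) * (nr - ks))

# 4x2 system: d*(0) = 8, d*(1) = 3, d*(2) = 0, and d*(0.5) = 5.5
print(dmt(0, 4, 2), dmt(1, 4, 2), dmt(0.5, 4, 2))  # 8.0 3.0 5.5
```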
Theorem 3 of [4], which provides a sufficient criterion for DMT-optimality of an STBC-scheme, is rephrased here with its statement consistent with the notation and terminology used in this paper.
Theorem 1
[4]
For a quasi-static $n_{t}\times n_{r}$ MIMO channel with Rayleigh fading and perfect CSIR, an STBC-scheme $\mathcal{X}$ that satisfies (3) is DMT-optimal for any value of $n_{r}$ if for all possible pairs of distinct codewords $(\textbf{X}_{1},\textbf{X}_{2})$ of $\mathcal{X}(SNR)$, the difference matrix $\textbf{X}_{1}-\textbf{X}_{2}=\Delta\textbf{X}\neq\textbf{O}$ is such that,
$$det\left(\Delta\textbf{X}\Delta\textbf{X}^{H}\right)~{}~{}\dot{\geq}~{}~{}SNR^{n_{t}\left(1-\frac{r}{n_{t}}\right)}.$$
(5)
Relying on Theorem 1, an explicit construction scheme was presented to obtain DMT-optimal LSTBC-schemes whose LSTBCs are minimal-delay ($T=n_{t}$) and obtained from cyclic division algebras (CDA). All these STBCs have a code-rate of $n_{t}$ complex dimensions per channel use irrespective of the value of $n_{r}$. However, Theorem 1 does not account for LSTBC-schemes whose LSTBCs have code-rate less than $n_{t}$ complex dimensions per channel use. In the following section, we present an enhanced DMT-criterion that brings within its scope all rate-$n_{min}$ LSTBC-schemes with NVD.
III Main Result
We present below the main result of our paper: an enhanced sufficient criterion for DMT-optimality of general STBC-schemes.
Theorem 2
For a quasi-static $n_{t}\times n_{r}$ MIMO channel with Rayleigh fading and perfect CSIR, an STBC-scheme $\mathcal{X}$ that satisfies (3) is DMT-optimal for any value of $n_{r}$ if for all possible pairs of distinct codewords $(\textbf{X}_{1},\textbf{X}_{2})$ of $\mathcal{X}(SNR)$, the difference matrix $\textbf{X}_{1}-\textbf{X}_{2}=\Delta\textbf{X}\neq\textbf{O}$ is such that,
$$det(\Delta\textbf{X}\Delta\textbf{X}^{H})~{}~{}\dot{\geq}~{}~{}SNR^{n_{t}\left(1-\frac{r}{n_{min}}\right)}.$$
(6)
Remark 1
Notice that compared to the criterion given by (5), our criterion given by (6) places less demand on the determinants of codeword difference matrices of the STBCs that constitute the STBC-scheme. This enables one to widen the class of DMT-optimal LSTBC-schemes, and for this reason, we call our criterion an “enhanced criterion” compared to that given by (5).
Proof:
To prove the theorem, we first show that the STBC-scheme $\mathcal{X}$ is DMT-optimal when each codeword difference matrix $\Delta\textbf{X}\neq\textbf{O}$ of $\mathcal{X}(SNR)$ satisfies
$$det\left(\Delta\textbf{X}\Delta\textbf{X}^{H}\right)~{}~{}\dot{\geq}~{}~{}SNR^{n_{t}\left(1-\frac{r}{n_{r}}\right)},$$
(7)
and then conclude the proof with the aid of Theorem 1. Towards this end, we assume without loss of generality that the codeword $\textbf{X}_{1}$ of $\mathcal{X}(SNR)$ is transmitted. It is also assumed that $T\geq n_{t}$, which is a prerequisite for achieving a diversity gain of $n_{t}n_{r}$ when the bit rate of the STBC-scheme is constant with $SNR$ (a special case of the $r=0$ condition).
Let $\Delta\textbf{X}_{l}=\textbf{X}_{1}-\textbf{X}_{l}$, where $\textbf{X}_{l}$, $l=2,\cdots,|\mathcal{X}(SNR)|,$ are the remaining codewords of $\mathcal{X}(SNR)$. It is to be noted that the bit rate of transmission is $r\log SNR+o(\log SNR)$ bits per channel use, and so, $|\mathcal{X}(SNR)|\doteq SNR^{rT}$, with $r\in[0,n_{min}]$. Considering the channel model given by (1) with ML-decoding employed at the receiver, the probability that $\textbf{X}_{1}$ is wrongly decoded to $\textbf{X}_{2}$ for a particular channel matrix H is given by
$$P_{e}(\textbf{X}_{1}\to\textbf{X}_{2}|\textbf{H})=Q\left(\frac{\|\textbf{H}\Delta\textbf{X}_{2}\|}{\sqrt{2}}\right).$$
So, the probability that $\textbf{X}_{1}$ is wrongly decoded conditioned on H is upper bounded as
$$P_{e}(\textbf{X}_{1}|\textbf{H})\leq\sum_{l=2}^{|\mathcal{X}(SNR)|}Q\left(\frac{\|\textbf{H}\Delta\textbf{X}_{l}\|}{\sqrt{2}}\right).$$
(10)
The probability of codeword error averaged over all channel realizations is given by
$$P_{e}=\mathbb{E}_{\textbf{H}}\left(P_{e}(\textbf{X}_{1}|\textbf{H})\right)=\int p(\textbf{H})P_{e}(\textbf{X}_{1}|\textbf{H})d\textbf{H},$$
where throughout the paper, $p(.)$ denotes the probability density function (pdf). Let
$$\mathcal{E}:=\textrm{event that there is a codeword error},$$
and consider the set of channel realizations $\mathcal{O}$ defined in (8) at the top of the page. Now,
$$P_{e}=\int_{\mathcal{O}}p(\textbf{H})P_{e}(\textbf{X}_{1}|\textbf{H})d\textbf{H}+\int_{\mathcal{O}^{c}}p(\textbf{H})P_{e}(\textbf{X}_{1}|\textbf{H})d\textbf{H}=\textrm{P}\left(\mathcal{O},\mathcal{E}\right)+\textrm{P}\left(\mathcal{O}^{c},\mathcal{E}\right)=\textrm{P}(\mathcal{O})\textrm{P}(\mathcal{E}|\mathcal{O})+\textrm{P}\left(\mathcal{O}^{c},\mathcal{E}\right),$$
(11)
where $\textrm{P}(.)$ denotes “probability of”, and $\mathcal{O}^{c}=\{\textbf{H}~{}|~{}\textbf{H}\notin\mathcal{O}\}$. $\textrm{P}(\mathcal{O})$ is the well-known outage probability [2] (in the literature, ‘$<$’ is often used instead of ‘$\leq$’ in (8) to define the outage probability; however, (12) holds for either definition), and $\textrm{P}(\mathcal{E}|\mathcal{O})$ is the probability of codeword error given that the channel is in outage. $\textrm{P}(\mathcal{O})$ and $\textrm{P}(\mathcal{E}|\mathcal{O})$ have been derived in [2] to be
$$\textrm{P}(\mathcal{O})\doteq SNR^{-d^{*}(r)},$$
(12)
$$\textrm{P}(\mathcal{E}|\mathcal{O})\doteq SNR^{0},$$
(13)
where $d^{*}(r)$ is given in Definition 5. So, the DMT curve of an STBC-scheme is determined completely by $\textrm{P}\left(\mathcal{O}^{c},\mathcal{E}\right)$, which is the probability that there is a codeword error and the channel is not in outage. To obtain an upper bound on $\textrm{P}\left(\mathcal{O}^{c},\mathcal{E}\right)$, we proceed as follows. Note that $\textbf{I}_{n_{r}}+(SNR/n_{t})\textbf{H}\textbf{H}^{H}$ is a positive definite matrix. Denoting the rows of H by $\textbf{h}_{i}$, $i=1,\cdots,n_{r}$, we have
$$\log det\left(\textbf{I}_{n_{r}}+\frac{SNR}{n_{t}}\textbf{H}\textbf{H}^{H}\right)\leq\sum_{i=1}^{n_{r}}\log\left(1+\frac{SNR}{n_{t}}\|\textbf{h}_{i}\|^{2}\right),$$
which follows from Hadamard’s inequality: the determinant of a positive definite matrix is at most the product of its diagonal entries. We define the set of channel realizations $\bar{\mathcal{O}}$ as shown in (9) at the top of the page. Clearly, $\mathcal{O}^{c}\subseteq\bar{\mathcal{O}}$, and hence,
$$\textrm{P}\left(\mathcal{O}^{c},\mathcal{E}\right)~{}\leq~{}\textrm{P}\left(\bar{\mathcal{O}},\mathcal{E}\right).$$
(14)
Hence, using (14) in (11), we have
$$P_{e}~{}\leq~{}\textrm{P}(\mathcal{O})\textrm{P}(\mathcal{E}|\mathcal{O})+\textrm{P}\left(\bar{\mathcal{O}},\mathcal{E}\right).$$
(15)
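The Hadamard bound invoked above is easy to check numerically. An illustrative sketch assuming NumPy (dimensions and SNR value are arbitrary):

```python
import numpy as np

# Check: log det(I + (SNR/nt) H H^H) <= sum_i log(1 + (SNR/nt) ||h_i||^2),
# which is Hadamard's inequality applied to the positive definite matrix
# I + (SNR/nt) H H^H, whose i-th diagonal entry is 1 + (SNR/nt) ||h_i||^2.
rng = np.random.default_rng(2)
nt, nr, snr = 4, 3, 100.0
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)

lhs = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real)
rhs = sum(np.log2(1 + (snr / nt) * np.linalg.norm(H[i]) ** 2)
          for i in range(nr))
assert lhs <= rhs + 1e-9
```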
We now need to evaluate $\textrm{P}\left(\bar{\mathcal{O}},\mathcal{E}\right)$. Denoting the entries of H by $h_{ij}$, $i=1,\cdots,n_{r}$, $j=1,\cdots,n_{t}$, we observe that
$$\sum_{i=1}^{n_{r}}\log\left(1+\frac{SNR}{n_{t}}\|\textbf{h}_{i}\|^{2}\right)=\sum_{i=1}^{n_{r}}\log\left(\frac{1}{n_{t}}\sum_{j=1}^{n_{t}}\left(1+SNR|{h}_{ij}|^{2}\right)\right)\geq\frac{1}{n_{t}}\sum_{i=1}^{n_{r}}\sum_{j=1}^{n_{t}}\log(1+SNR|h_{ij}|^{2}),$$
(16)
with (16) following from the concavity of $\log(.)$ and Jensen’s inequality.
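The Jensen step in (16) can likewise be checked numerically. An illustrative sketch assuming NumPy: for positive $a_{j}=1+SNR|h_{ij}|^{2}$, concavity of $\log$ gives $\log\bigl(\frac{1}{n_{t}}\sum_{j}a_{j}\bigr)\geq\frac{1}{n_{t}}\sum_{j}\log a_{j}$.

```python
import numpy as np

# Jensen's inequality for the concave log: log(mean(a)) >= mean(log(a)),
# with a_j = 1 + SNR |h_ij|^2 > 0 for one row of the channel matrix.
rng = np.random.default_rng(3)
snr, nt = 50.0, 4
h = (rng.normal(size=nt) + 1j * rng.normal(size=nt)) / np.sqrt(2)
a = 1 + snr * np.abs(h) ** 2
assert np.log2(a.mean()) >= np.mean(np.log2(a))
```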
We now define two disjoint sets of channel realizations $\widetilde{\mathcal{O}}$ and $\ddot{\mathcal{O}}$ as shown in (21) and (22) at the top of the next page. Clearly, $\bar{\mathcal{O}}$ is the disjoint union of $\widetilde{\mathcal{O}}$ and $\ddot{\mathcal{O}}$. Therefore,
$$\textrm{P}\left(\bar{\mathcal{O}},\mathcal{E}\right)=\textrm{P}\left(\widetilde{\mathcal{O}},\mathcal{E}\right)+\textrm{P}\left(\ddot{\mathcal{O}},\mathcal{E}\right)=\textrm{P}(\widetilde{\mathcal{O}})\textrm{P}\left(\mathcal{E}|\widetilde{\mathcal{O}}\right)+\textrm{P}\left(\ddot{\mathcal{O}},\mathcal{E}\right)\leq\textrm{P}(\widetilde{\mathcal{O}})+\textrm{P}\left(\ddot{\mathcal{O}},\mathcal{E}\right).$$
(17)
In Appendix A, it is shown that
$$\textrm{P}(\widetilde{\mathcal{O}})\doteq SNR^{-n_{t}(n_{r}-r)}.$$
(18)
So, we are now left with the evaluation of $\textrm{P}\left(\ddot{\mathcal{O}},\mathcal{E}\right)$, which is done as follows.
$$\textrm{P}\left(\ddot{\mathcal{O}},\mathcal{E}\right)=\int_{\ddot{\mathcal{O}}}p(\textbf{H})P_{e}(\textbf{X}_{1}|\textbf{H})d\textbf{H}\leq\sum_{l=2}^{|\mathcal{X}(SNR)|}\int_{\ddot{\mathcal{O}}}p(\textbf{H})Q\left(\frac{\|\textbf{H}\Delta\textbf{X}_{l}\|}{\sqrt{2}}\right)d\textbf{H}=\sum_{l=2}^{|\mathcal{X}(SNR)|}\int_{\ddot{\mathcal{O}}}p(\textbf{H})Q\left(\frac{\left\|\textbf{H}\textbf{U}_{l}\textbf{D}_{l}\textbf{V}_{l}^{H}\right\|}{\sqrt{2}}\right)d\textbf{H}=\sum_{l=2}^{|\mathcal{X}(SNR)|}\int_{\ddot{\mathcal{O}}}p(\textbf{H})Q\left(\frac{\|\textbf{H}\textbf{U}_{l}\textbf{D}_{l}\|}{\sqrt{2}}\right)d\textbf{H}=\sum_{l=2}^{|\mathcal{X}(SNR)|}\int_{\mathcal{O}_{l}}p(\textbf{H}_{l})Q\left(\frac{\|\textbf{H}_{l}\textbf{D}_{l}\|}{\sqrt{2}}\right)d\textbf{H}_{l},$$
(20)
where the inequality is obtained using (10), and $\Delta\textbf{X}_{l}=\textbf{U}_{l}\textbf{D}_{l}\textbf{V}_{l}^{H}$ is the singular value decomposition (SVD) of $\Delta\textbf{X}_{l}$, with $\textbf{U}_{l}\in\mathbb{C}^{n_{t}\times n_{t}}$, $\textbf{D}_{l}\in\mathbb{R}^{n_{t}\times T}$, $\textbf{V}_{l}\in\mathbb{C}^{T\times T}$. In (20), $\textbf{H}_{l}=\textbf{HU}_{l}$, and $\mathcal{O}_{l}$ is as defined in (23) at the top of the next page.
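The step dropping $\textbf{V}_{l}^{H}$ relies on the invariance of the Frobenius norm under right multiplication by a unitary matrix. A quick numerical check of this step (an illustrative sketch assuming NumPy):

```python
import numpy as np

# For dX = U D V^H (full SVD) with V unitary: ||H U D V^H|| = ||H U D||,
# since the Frobenius norm is unchanged by a unitary factor on the right.
rng = np.random.default_rng(4)
nt, nr, T = 3, 2, 4
dX = rng.normal(size=(nt, T)) + 1j * rng.normal(size=(nt, T))
H = rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))

U, s, Vh = np.linalg.svd(dX)          # U: nt x nt, Vh: T x T
D = np.zeros((nt, T))
D[:nt, :nt] = np.diag(s)              # rectangular singular value matrix

assert np.allclose(U @ D @ Vh, dX)    # SVD reconstruction
assert np.allclose(np.linalg.norm(H @ U @ D @ Vh),
                   np.linalg.norm(H @ U @ D))
```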
Denoting the entries of $\textbf{H}_{l}=\textbf{HU}_{l}$ by $h_{ij}(l)$, we define the set $\mathcal{O}_{l}^{\prime}$ as shown in (24) at the top of the page. In Appendix B, it is shown that $\mathcal{O}_{l}=\mathcal{O}_{l}^{\prime}$ almost surely as $SNR\to\infty$. As a result, in the high SNR scenario, (20) becomes
$$\textrm{P}\left(\ddot{\mathcal{O}},\mathcal{E}\right)\leq\sum_{l=2}^{|\mathcal{X}(SNR)|}\int_{\mathcal{O}_{l}^{\prime}}p(\textbf{H}_{l})Q\left(\frac{\|\textbf{H}_{l}\textbf{D}_{l}\|}{\sqrt{2}}\right)d\textbf{H}_{l}.$$
(26)
Now, we evaluate each of the summands of (26). Let
$$P_{\mathcal{O}_{l}^{\prime}}\triangleq\int_{\mathcal{O}_{l}^{\prime}}p(\textbf{H}_{l})Q\left(\frac{\|\textbf{H}_{l}\textbf{D}_{l}\|}{\sqrt{2}}\right)d\textbf{H}_{l}.$$
Now, we define $P_{\mathcal{O}_{l}^{\prime}}(\delta)$ as
$$P_{\mathcal{O}_{l}^{\prime}}(\delta)\triangleq\int_{\mathcal{O}_{l}^{\prime}(\delta)}p(\textbf{H}_{l})Q\left(\frac{\|\textbf{H}_{l}\textbf{D}_{l}\|}{\sqrt{2}}\right)d\textbf{H}_{l},$$
where $\mathcal{O}_{l}^{\prime}(\delta)$ is as defined in (25) at the top of the page with $\delta>0$. It is clear that as $SNR\to\infty$,
$$P_{\mathcal{O}_{l}^{\prime}}\geq P_{\mathcal{O}_{l}^{\prime}}(\delta_{1})\geq P_{\mathcal{O}_{l}^{\prime}}(\delta_{2})\geq P_{\mathcal{O}_{l}^{\prime}}(\delta_{3})\geq\cdots$$
for $0<\delta_{1}<\delta_{2}<\delta_{3}<\cdots$. To be precise,
$$\lim_{SNR\to\infty}P_{\mathcal{O}_{l}^{\prime}}\geq\lim_{SNR\to\infty}P_{\mathcal{O}_{l}^{\prime}}(\delta_{1})\geq\lim_{SNR\to\infty}P_{\mathcal{O}_{l}^{\prime}}(\delta_{2})\geq\cdots$$
and hence
$$\lim_{SNR\to\infty}\frac{\log P_{\mathcal{O}_{l}^{\prime}}}{\log SNR}\geq\lim_{SNR\to\infty}\frac{\log P_{\mathcal{O}_{l}^{\prime}}(\delta_{1})}{\log SNR}\geq\lim_{SNR\to\infty}\frac{\log P_{\mathcal{O}_{l}^{\prime}}(\delta_{2})}{\log SNR}\geq\cdots$$
for $0<\delta_{1}<\delta_{2}<\cdots$. Also, from the definitions of $P_{\mathcal{O}_{l}^{\prime}}$ and $P_{\mathcal{O}_{l}^{\prime}}(\delta)$, it is evident that
$$\lim_{\delta\to 0^{+}}\left(\lim_{SNR\to\infty}P_{\mathcal{O}_{l}^{\prime}}(\delta)\right)=\lim_{SNR\to\infty}P_{\mathcal{O}_{l}^{\prime}},$$
where “$\delta\to 0^{+}$” means that $\delta$ tends to $0$ through positive values. Therefore,
$$\lim_{\delta\to 0^{+}}\left(\lim_{SNR\to\infty}\frac{\log P_{\mathcal{O}_{l}^{\prime}}(\delta)}{\log SNR}\right)=\lim_{SNR\to\infty}\frac{\log P_{\mathcal{O}_{l}^{\prime}}}{\log SNR}.$$
(27)
In Appendix C, it is shown that for every $\delta>0$, as $SNR\to\infty$,
$$P_{\mathcal{O}_{l}^{\prime}}(\delta)\leq\frac{1}{2}e^{-\left(aSNR^{\frac{\delta}{n_{r}}}+o\left(SNR^{\frac{\delta}{n_{r}}}\right)\right)}$$
(28)
where $a\doteq SNR^{0}$. Using (28) in (27), we obtain
$$\lim_{SNR\to\infty}\frac{\log P_{\mathcal{O}_{l}^{\prime}}}{\log SNR}=-\infty,$$
so that
$$P_{\mathcal{O}_{l}^{\prime}}\doteq SNR^{-\infty}.$$
(29)
The interpretation of (29) is that $P_{\mathcal{O}_{l}^{\prime}}$ decays exponentially with increasing $SNR$, i.e., its dependence on $SNR$ is not polynomial (unlike, for example, $\textrm{P}(\widetilde{\mathcal{O}})$ given by (18)). Using (29) in (26), we have, as $SNR\to\infty$,
$$\textrm{P}\left(\ddot{\mathcal{O}},\mathcal{E}\right)\leq\sum_{l=2}^{|\mathcal{X}(SNR)|}P_{\mathcal{O}_{l}^{\prime}}~{}\doteq~{}SNR^{-\infty},$$
(30)
which is because $|\mathcal{X}(SNR)|$ depends polynomially on $SNR$ (since $|\mathcal{X}(SNR)|\doteq SNR^{rT}$), while each $P_{\mathcal{O}_{l}^{\prime}}$ decays exponentially with increasing $SNR$ (so that it is exponentially equal to $SNR^{-\infty}$).
Using (18) and (30) in (17), we obtain
$$P(\bar{\mathcal{O}},\mathcal{E})~\dot{\leq}~SNR^{\max\left\{-n_{t}(n_{r}-r),-\infty\right\}}=SNR^{-n_{t}(n_{r}-r)}.$$
(31)
Using (12), (13), and (31) in (15), we arrive at
$$P_{e}~\doteq~SNR^{\max\left\{-d^{*}(r),-n_{t}(n_{r}-r)\right\}}=SNR^{-d^{*}(r)},$$
where $d^{*}(r)$ is given in Definition 5. This proves the DMT-optimality of the STBC-scheme when (7) is satisfied.
Now, combining this obtained result with that of Theorem 1, we see that an STBC-scheme is DMT-optimal if for each codeword difference matrix $\Delta\textbf{X}\neq\textbf{O}$,
$$det\left(\Delta\textbf{X}\Delta\textbf{X}^{H}\right)~\dot{\geq}~SNR^{\left(\min\left\{n_{t}\left(1-\frac{r}{n_{r}}\right),n_{t}\left(1-\frac{r}{n_{t}}\right)\right\}\right)}=SNR^{n_{t}\left(1-\frac{r}{n_{min}}\right)}.$$
This completes the proof of the theorem.
∎
Note 1
Theorem 1 can also be proved using the steps of the proof of Theorem 2. To do so, we need to redefine $\mathcal{O}$ given by (8) as being equal to
$$\left\{\textbf{H}~\left|~\log det\left(\textbf{I}_{n_{t}}+\frac{SNR}{n_{t}}\textbf{H}^{H}\textbf{H}\right)~\leq~r\log SNR+o(\log SNR)\right.\right\}.$$
Redefining $\mathcal{O}$ this way is justified because $det(\textbf{I}+\textbf{AB})=det(\textbf{I}+\textbf{BA})$, with I being the identity matrix of compatible dimensions. With $\mathcal{O}$ thus redefined, proceeding as in the proof of Theorem 2 from (8) onwards yields the proof of Theorem 1.
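The identity $det(\textbf{I}+\textbf{AB})=det(\textbf{I}+\textbf{BA})$ holds for any $m\times n$ A and $n\times m$ B (Sylvester's determinant identity); a quick pure-Python sanity check with illustrative values, here for a $2\times 1$ A and a $1\times 2$ B:

```python
# Pure-Python check of det(I + AB) = det(I + BA) for a 2x1 A and a 1x2 B.
# Values are arbitrary illustrative complex numbers, not from the text.
a = [0.7 + 0.2j, -1.1 + 0.5j]   # A is 2x1 (a column)
b = [1.3 - 0.4j, 0.6 + 0.9j]    # B is 1x2 (a row)

# I_2 + AB is 2x2; its determinant via the ad - bc formula.
det_left = (1 + a[0] * b[0]) * (1 + a[1] * b[1]) - (a[0] * b[1]) * (a[1] * b[0])

# I_1 + BA is the scalar 1 + b.a.
det_right = 1 + b[0] * a[0] + b[1] * a[1]

assert abs(det_left - det_right) < 1e-12
```

The cross terms cancel algebraically, leaving $1+a_{0}b_{0}+a_{1}b_{1}$ on both sides, which is why the two determinants of different sizes agree.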
The implication of Theorem 2 is that for asymmetric MIMO systems, the requirement that Theorem 1 places on the minimum determinant of the codeword difference matrices of the STBCs constituting the scheme is relaxed. In the following section, we show the usefulness of Theorem 2 in the context of LSTBCs for asymmetric MIMO systems.
IV DMT-optimality criterion for LSTBC-schemes
In its most general form, an LSTBC $\mathcal{X}_{L}$ is given by
$$\mathcal{X}_{L}=\left\{\sum_{i=1}^{k}(s_{iI}\textbf{A}_{iI}+s_{iQ}\textbf{A}_{iQ})\right\},$$
(32)
where $[s_{1I},s_{1Q},\cdots,s_{kI},s_{kQ}]^{\textrm{T}}\in\mathcal{A}\subset\mathbb{R}^{2k\times 1}$, and $\textbf{A}_{iI}$, $\textbf{A}_{iQ}\in\mathbb{C}^{n_{t}\times T}$ are called weight matrices [16] associated with the real information symbols $s_{iI}$ and $s_{iQ}$, respectively. For most known LSTBCs, either all the real symbols $s_{iI}$, $s_{iQ}$ take values independently from the same signal set $\mathcal{A}^{\prime}$, in which case
$$\mathcal{A}=\underbrace{\mathcal{A}^{\prime}\times\mathcal{A}^{\prime}\times\cdots\times\mathcal{A}^{\prime}}_{2k\textrm{ times}},$$
or each symbol pair $(s_{iI},s_{iQ})$ jointly takes values from a real constellation $\mathcal{A}^{\prime\prime}\subset\mathbb{R}^{2\times 1}$ (equivalently, each complex symbol $s_{i}=s_{iI}+js_{iQ}$, $j=\sqrt{-1}$, takes values from a complex constellation that is a subset of $\mathbb{C}$), independently of the other symbol pairs, in which case
$$\mathcal{A}=\underbrace{\mathcal{A}^{\prime\prime}\times\mathcal{A}^{\prime\prime}\times\cdots\times\mathcal{A}^{\prime\prime}}_{k\textrm{ times}}.$$
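The linear-dispersion form (32) can be made concrete with the $2\times 2$ Alamouti code, a standard example not taken from this section; the weight matrices below are one common choice:

```python
# The 2x2 Alamouti code written in the linear-dispersion form of (32):
# X = s1I*A1I + s1Q*A1Q + s2I*A2I + s2Q*A2Q.  (Familiar stand-in example.)
A1I = [[1, 0], [0, 1]]
A1Q = [[1j, 0], [0, -1j]]
A2I = [[0, -1], [1, 0]]
A2Q = [[0, 1j], [1j, 0]]

def codeword(s1I, s1Q, s2I, s2Q):
    weights = [(s1I, A1I), (s1Q, A1Q), (s2I, A2I), (s2Q, A2Q)]
    return [[sum(s * A[r][c] for s, A in weights) for c in range(2)]
            for r in range(2)]

s1, s2 = 3 + 1j, -1 + 2j
X = codeword(s1.real, s1.imag, s2.real, s2.imag)
# Recovers the textbook form [[s1, -s2*], [s2, s1*]].
assert X[0][0] == s1 and X[0][1] == -s2.conjugate()
assert X[1][0] == s2 and X[1][1] == s1.conjugate()
```

Here $k=2$, $T=2$, and the $2k$ real symbols take values independently, i.e. the first of the two constellation structures above.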
For the LSTBC given by (32), the system model given by (1) can be rewritten as
$$\widetilde{vec(\textbf{Y})}=\left(\textbf{I}_{T}\otimes\check{\textbf{H}}\right)\textbf{Gs}+\widetilde{vec(\textbf{N})},$$
(33)
where $\textbf{G}\in\mathbb{R}^{2Tn_{t}\times 2k}$ is called the generator matrix of the STBC, and $\textbf{s}\in\mathbb{R}^{2k\times 1}$ is the real symbol vector, both defined as
$$\textbf{G}~\triangleq~\left[\widetilde{vec(\textbf{A}_{1I})}~~\widetilde{vec(\textbf{A}_{1Q})}~\cdots~\widetilde{vec(\textbf{A}_{kQ})}\right],$$
(34)
$$\textbf{s}~\triangleq~[s_{1I},s_{1Q},\cdots,s_{kI},s_{kQ}]^{\textrm{T}},$$
(35)
with $\mathbb{E}_{\textbf{s}}\left(tr\left(\textbf{Gss}^{\textrm{T}}\textbf{G}^{\textrm{T}}\right)\right)\leq T~SNR$.
Definition 6
(Code-rate of an LSTBC) The code-rate (in the literature, the code-rate is referred to simply as the “rate”; in this paper, to avoid confusion with the bit rate, which is $\frac{\log|\mathcal{A}|}{T}$ bits per channel use, we have opted to use the term “code-rate”) of the LSTBC $\mathcal{X}_{L}$ defined in (32) is
$$\textrm{Code-Rate}(\mathcal{X}_{L})=\frac{Rank(\textbf{G})}{T}\textrm{ real dpcu}=\frac{Rank(\textbf{G})}{2T}\textrm{ complex dpcu},$$
where “dpcu” stands for “dimensions per channel use”, and G is the generator matrix of $\mathcal{X}_{L}$. If $Rank(\textbf{G})=2k$, $\mathcal{X}_{L}$ is called a rate-$k/T$ STBC, meaning that it has a code-rate of $k/T$ complex dpcu.
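Continuing with the $2\times 2$ Alamouti code as a stand-in example, and under one plausible convention for the real vectorization $\widetilde{vec(\cdot)}$ (stacking $\textrm{Re}(vec(\textbf{A}))$ over $\textrm{Im}(vec(\textbf{A}))$, column-major; the text does not fix a convention), the generator matrix of (34) and the code-rate of Definition 6 can be computed mechanically:

```python
# Weight matrices of the 2x2 Alamouti code (hypothetical example).
A = {
    '1I': [[1, 0], [0, 1]],   '1Q': [[1j, 0], [0, -1j]],
    '2I': [[0, -1], [1, 0]],  '2Q': [[0, 1j], [1j, 0]],
}

def tilde_vec(M):
    # Assumed convention: Re(vec(M)) stacked over Im(vec(M)), column-major vec.
    v = [complex(M[r][c]) for c in range(2) for r in range(2)]
    return [x.real for x in v] + [x.imag for x in v]

# Columns of G are the tilde-vecs of the weight matrices: G is 2Tnt x 2k = 8x4.
cols = [tilde_vec(A[key]) for key in ('1I', '1Q', '2I', '2Q')]
G = [list(row) for row in zip(*cols)]

def rank(M):
    # Rank via Gauss-Jordan elimination with a small pivot tolerance.
    M = [list(map(float, row)) for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > 1e-9), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > 1e-9:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

T, k = 2, 2
assert rank(G) == 2 * k   # full rank, so code-rate = Rank(G)/(2T) = 1 complex dpcu
```

With $Rank(\textbf{G})=2k=4$ and $T=2$, the code is a rate-$1$ STBC in the sense of Definition 6.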
A necessary condition for an LSTBC given by (32) to be sphere-decodable [21] is that the constellation $\mathcal{A}$ should be a finite subset of a $2k$-dimensional real lattice with each of the real symbols independently taking $|\mathcal{A}|^{\frac{1}{2k}}$ possible values. Further, if $k/T\leq n_{min}$, all the symbols of the STBC can be entirely decoded using the standard sphere-decoder [21] or its variations [22], [23]. However, when $k/T>n_{min}$, for each of the $|\mathcal{A}|^{\left(1-\frac{n_{min}T}{k}\right)}$ possibilities for any $2(k-n_{min}T)$ real symbols, the remaining $2n_{min}T$ real symbols can be evaluated using the sphere decoder. Hence, the ML-complexity of the rate-$\frac{k}{T}$ STBC in such a scenario is approximately $|\mathcal{A}|^{\left(1-\frac{n_{min}T}{k}\right)}$ times the sphere-decoding complexity of a rate-$n_{min}$ STBC.
Definition 7
(LSTBC-scheme) A rate-$k/T$ LSTBC-scheme $\mathcal{X}$ is defined as a family of rate-$k/T$ LSTBCs (indexed by $SNR$) of block length $T$ so that $\mathcal{X}\triangleq\{\mathcal{X}_{L}(SNR)\}$, where the STBC $\mathcal{X}_{L}(SNR)$ corresponds to a signal-to-noise ratio of $SNR$ at each receive antenna.
For an LSTBC $\mathcal{X}_{L}(SNR)$ of the form given by (32) with the $2k$-dimensional real constellation denoted by $\mathcal{A}(SNR)$, from (3), we have that for each codeword matrix $\textbf{X}_{i}\in\mathcal{X}_{L}(SNR)$, $i=1,2,\cdots,|\mathcal{X}_{L}(SNR)|$,
$$\|\textbf{X}_{i}\|^{2}=\|\textbf{Gs}\|^{2}~{}\dot{\leq}~{}SNR,$$
where G and s are as defined in (34) and (35), respectively. For convenience, we assume that
$$\max_{\textbf{s}\in\mathcal{A}(SNR)}\{\|\textbf{Gs}\|^{2}\}~{}\doteq~{}SNR$$
and hence,
$$\left.\begin{array}[]{l}\max_{s_{iI}}|s_{iI}|^{2}~\doteq~SNR,\\ \max_{s_{iQ}}|s_{iQ}|^{2}~\doteq~SNR\\ \end{array}\right\}\quad\forall~i=1,\cdots,k.$$
(36)
When the bit rate of $\mathcal{X}_{L}(SNR)$ is $r\log SNR+o(\log SNR)$ bits per channel use, we have $|\mathcal{A}(SNR)|\doteq SNR^{rT}$. Further, when each of the $2k$ real symbols takes values from the same real constellation $\mathcal{A}^{\prime}(SNR)$, it follows that
$$|\mathcal{A}^{\prime}(SNR)|~{}\doteq~{}SNR^{\frac{rT}{2k}}.$$
(37)
Let $\mathcal{A}^{\prime}(SNR)=\mu\mathcal{A}_{M-\textrm{PAM}}$, where $\mu$ is a scalar normalizing constant designed to satisfy the constraints in (36), and $\mathcal{A}_{M-\textrm{PAM}}$ is the regular $M$-PAM constellation given by
$$\mathcal{A}_{M-\textrm{PAM}}=\left\{2\left\lfloor-\frac{M}{2}\right\rfloor+l~,~l=1,3,\cdots,2M-1\right\},$$
(38)
and $\mu\mathcal{A}_{M-\textrm{PAM}}=\{\mu a~{}|~{}a\in\mathcal{A}_{M-\textrm{PAM}}\}$. Now, we have from (37) and (36),
$$M~\doteq~SNR^{\frac{rT}{2k}},\qquad\mu M~\doteq~SNR^{\frac{1}{2}},$$
and hence, $\mu^{2}~{}\doteq~{}SNR^{\left(1-\frac{rT}{k}\right)}$.
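The constellation (38) is easy to enumerate; a minimal sketch, directly transcribing the formula in the text:

```python
import math

def pam(M):
    """Regular M-PAM constellation of (38): 2*floor(-M/2) + l, l = 1, 3, ..., 2M-1."""
    base = 2 * math.floor(-M / 2)
    return [base + l for l in range(1, 2 * M, 2)]

# For even M this gives the usual symmetric odd-integer grid with spacing 2.
assert pam(2) == [-1, 1]
assert pam(4) == [-3, -1, 1, 3]
assert pam(8) == [-7, -5, -3, -1, 1, 3, 5, 7]
```

Scaling by $\mu$ then gives $\mathcal{A}^{\prime}(SNR)=\mu\mathcal{A}_{M-\textrm{PAM}}$, with $\mu$ chosen so that (36) holds.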
For an LSTBC-scheme $\mathcal{X}$ that satisfies (3) and has a bit rate of $r\log SNR+o(\log SNR)$ bits per channel use with the real symbols of its LSTBCs taking values from a scaled $M$-PAM, the LSTBCs $\mathcal{X}_{L}(SNR)$ can be expressed as
$$\mathcal{X}_{L}(SNR)=\left\{\mu\textbf{X}~|~\textbf{X}\in\mathcal{X}_{U}(SNR)\right\},$$
where $\mu^{2}~{}\doteq~{}SNR^{\left(1-\frac{rT}{k}\right)}$, and $\mathcal{X}_{U}(SNR)$ is the unnormalized (so that it does not satisfy the energy constraint given in (3)) LSTBC given by
$$\mathcal{X}_{U}(SNR)=\left\{\sum_{i=1}^{k}(s_{iI}\textbf{A}_{iI}+s_{iQ}\textbf{A}_{iQ})\right\}$$
(39)
with $s_{iI},s_{iQ}\in\mathcal{A}_{M-\textrm{PAM}}$, $i=1,2,\cdots,k$, and $M\doteq SNR^{\frac{rT}{2k}}$. With $\mathcal{X}_{L}(SNR)$ and $\mathcal{X}_{U}(SNR)$ thus defined, we define the non-vanishing determinant property of an LSTBC-scheme as follows.
Definition 8
(Non-vanishing determinant) An LSTBC-scheme $\mathcal{X}$ is said to have the non-vanishing determinant property if the codeword difference matrices $\Delta\textbf{X}$ of $\mathcal{X}_{U}(SNR)$ are such that
$$\min_{\Delta\textbf{X}\neq\textbf{O}}det\left(\Delta\textbf{X}\Delta\textbf{X}^{H}\right)~~\doteq~~SNR^{0}.$$
A necessary and sufficient condition for an LSTBC-scheme $\mathcal{X}=\{\mathcal{X}_{L}(SNR)\}$, where $\mathcal{X}_{L}(SNR)$ has weight matrices $\textbf{A}_{iI}$, $\textbf{A}_{iQ}$, $i=1,\cdots,k$, and encodes its real symbols using PAM, to have the non-vanishing determinant property is that the design $\mathcal{X}_{\mathbb{Z}}$, defined as
$$\mathcal{X}_{\mathbb{Z}}=\left\{\left.\sum_{i=1}^{k}(s_{iI}\textbf{A}_{iI}+s_{iQ}\textbf{A}_{iQ})~\right|~s_{iI},s_{iQ}\in\mathbb{Z},~i=1,2,\cdots,k\right\},$$
(40)
is such that for any non-zero matrix X of $\mathcal{X}_{\mathbb{Z}}$,
$$det\left(\textbf{X}\textbf{X}^{H}\right)\geq C,$$
where $C$ is a strictly positive constant independent of the constellation size.
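The NVD test above reduces to bounding $det(\textbf{X}\textbf{X}^{H})$ over integer symbol vectors. The brute-force sketch below runs it on the $2\times 2$ Alamouti design (a stand-in example, not a code from this section), for which $det(\textbf{X}\textbf{X}^{H})=(|s_{1}|^{2}+|s_{2}|^{2})^{2}$ and the minimum over non-zero integer symbols is $1$:

```python
from itertools import product

def alamouti(s1, s2):
    # Textbook Alamouti codeword (hypothetical stand-in design).
    return [[s1, -s2.conjugate()], [s2, s1.conjugate()]]

def det_xxh(X):
    # For square X, det(X X^H) = |det(X)|^2; here X is 2x2.
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return abs(d) ** 2

# Minimum of det(XX^H) over non-zero integer symbol vectors in a small box.
dets = []
for a, b, c, d in product(range(-2, 3), repeat=4):
    if (a, b, c, d) == (0, 0, 0, 0):
        continue
    dets.append(det_xxh(alamouti(complex(a, b), complex(c, d))))

# det(XX^H) = (|s1|^2 + |s2|^2)^2 >= 1 on non-zero integer vectors.
assert min(dets) == 1.0
```

The search box is only illustrative; for this design the closed form makes the bound hold over all of $\mathbb{Z}^{4}\setminus\{\textbf{0}\}$.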
Remark 2
Any LSTBC is completely specified by a set of weight matrices (equivalently, its generator matrix, defined in (34)) and a $2k$-dimensional real constellation $\mathcal{A}$ that its real symbol vector takes values from, as evident from (32). However, for an LSTBC, the set of weight matrices (equivalently, its generator matrix) and the $2k$-dimensional constellation need not be unique. As an example, consider the perfect code for 3 transmit antennas, which encodes $9$ independent complex symbols, and can be expressed as
$$\mathcal{X}_{P}=\left\{\left.\sum_{i=1}^{9}(x_{iI}\textbf{A}_{iI}+x_{iQ}\textbf{A}_{iQ})~\right|~x_{i}\in\mathcal{A}_{M^{2}-HEX},~i=1,2,\cdots,9\right\},$$
where $\mathcal{A}_{M^{2}-HEX}$ is an $M^{2}$-HEX constellation given by
$$\mathcal{A}_{M^{2}-HEX}=\left\{a+\omega b~\left|~a,b\in\mathcal{A}_{M-PAM},~\omega=e^{\frac{j2\pi}{3}}\right.\right\}.$$
We can equivalently express $\mathcal{X}_{P}$ as
$$\mathcal{X}_{P}=\left\{\left.\sum_{i=1}^{9}(s_{iI}\textbf{A}^{\prime}_{iI}+s_{%
iQ}\textbf{A}^{\prime}_{iQ})\right|\begin{array}[]{l}s_{iI},s_{iQ}\in\mathcal{%
A}_{M-PAM},\\
i=1,2,\cdots,9\\
\end{array}\right\},$$
where $\textbf{A}^{\prime}_{iI}=\textbf{A}_{iI}$, $\textbf{A}^{\prime}_{iQ}=-\frac{1}{2}\textbf{A}_{iI}+\frac{\sqrt{3}}{2}\textbf%
{A}_{iQ}$, $i=1,2,\cdots,9$.
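The re-expression above rests on the identity $a+\omega b=(a-\frac{b}{2})+j\frac{\sqrt{3}}{2}b$ for $\omega=e^{j2\pi/3}$. The sketch below checks the resulting weight-matrix change of basis numerically, with arbitrary $2\times 2$ stand-ins for the actual perfect-code weight matrices:

```python
import cmath

# Check that x_I*A_I + x_Q*A_Q with x = a + w*b (w = exp(j*2*pi/3), HEX symbol)
# equals s_I*A'_I + s_Q*A'_Q with s_I = a, s_Q = b, A'_I = A_I and
# A'_Q = -1/2 A_I + (sqrt(3)/2) A_Q, as stated in the text.
# The matrices AI, AQ are arbitrary illustrative stand-ins.
w = cmath.exp(2j * cmath.pi / 3)
AI = [[1, 2], [0, 1j]]
AQ = [[0, 1], [1, -1]]

def comb(u, M, v, N):
    return [[u * M[r][c] + v * N[r][c] for c in range(2)] for r in range(2)]

a, b = 3, -2                      # integer HEX coordinates
x = a + w * b
lhs = comb(x.real, AI, x.imag, AQ)

ApQ = comb(-0.5, AI, 3 ** 0.5 / 2, AQ)     # A'_iQ
rhs = comb(a, AI, b, ApQ)                  # s_iI = a, s_iQ = b

assert all(abs(lhs[r][c] - rhs[r][c]) < 1e-12 for r in range(2) for c in range(2))
```

Both sides reduce to $(a-\frac{b}{2})\textbf{A}_{iI}+\frac{\sqrt{3}}{2}b\,\textbf{A}_{iQ}$, so the HEX-encoded and PAM-encoded descriptions are the same set of codewords.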
In general, any LSTBC $\mathcal{X}_{L}$ with a generator matrix G and a $2k$-dimensional constellation $\mathcal{A}$ that is a subset of a $2k$-dimensional real lattice $\mathcal{L}$ can be alternatively viewed to have $\textbf{GG}_{\mathcal{L}}$ as its generator matrix and a $2k$-dimensional constellation $\mathcal{A}^{\prime}$ that is a subset of $\mathbb{Z}^{2k\times 1}$, where $\textbf{G}_{\mathcal{L}}\in\mathbb{R}^{2k\times 2k}$ is the generator matrix of $\mathcal{L}$.
In the following lemma, we prove that for an LSTBC-scheme to be DMT-optimal, the code-rate of its LSTBCs has to be at least equal to $n_{min}$ complex dpcu.
Lemma 1
A rate-$p$ LSTBC-scheme with $p<\min\{n_{t},n_{r}\}$ is not DMT-optimal.
Proof:
With the system model given by (33), from (2), we have $\mathbb{E}_{\textbf{s}}\left(tr\left(\textbf{Gss}^{\textrm{T}}\textbf{G}^{\textrm{T}}\right)\right)\leq T~SNR$. Hence, $tr\left(\textbf{GQG}^{\textrm{T}}\right)\leq T~SNR$, where $\textbf{Q}=\mathbb{E}_{\textbf{s}}\left(\textbf{ss}^{\textrm{T}}\right)\in\mathbb{R}^{2k\times 2k}$. Since G is fixed for an LSTBC, we assume that $tr(\textbf{Q})=\alpha~SNR$ for some finite positive constant $\alpha$ such that the overall constraint $tr\left(\textbf{GQG}^{\textrm{T}}\right)\leq T~SNR$ is satisfied. Now, the ergodic capacity [36] $C$ of the equivalent channel is given by [5]
$$C=\max_{tr\left(\textbf{GQG}^{\textrm{T}}\right)\leq T~SNR}C(\textbf{Q}),\qquad C(\textbf{Q})=\frac{1}{2T}\mathbb{E}_{\textbf{H}}\left[\log det\left(\textbf{I}_{2Tn_{r}}+\bar{\textbf{H}}\textbf{GQG}^{\textrm{T}}\bar{\textbf{H}}^{\textrm{T}}\right)\right],$$
where $\bar{\textbf{H}}=\textbf{I}_{T}\otimes\check{\textbf{H}}$, and capacity is achieved if s is jointly Gaussian with zero mean and a covariance matrix Q that satisfies $tr(\textbf{Q})=\alpha~SNR$. Now, $(\alpha~SNR)\textbf{I}_{2k}-\textbf{Q}$ is positive semidefinite (since Q is symmetric and positive semidefinite with $tr(\textbf{Q})=\alpha~SNR$, each eigenvalue of Q is at most $\alpha~SNR$; writing $\textbf{Q}=\textbf{UPU}^{\textrm{T}}$, where U is an orthonormal matrix and P is a diagonal matrix whose diagonal entries are the eigenvalues of Q, it is clear that $(\alpha~SNR)\textbf{I}_{2k}-\textbf{Q}$ is positive semidefinite), and so is $\bar{\textbf{H}}\textbf{G}\left((\alpha~SNR)\textbf{I}_{2k}-\textbf{Q}\right)\textbf{G}^{\textrm{T}}\bar{\textbf{H}}^{\textrm{T}}$. Hence,
$$\textbf{I}_{2Tn_{r}}+(\alpha~SNR)\bar{\textbf{H}}\textbf{G}\textbf{G}^{\textrm{T}}\bar{\textbf{H}}^{\textrm{T}}\succeq\textbf{I}_{2Tn_{r}}+\bar{\textbf{H}}\textbf{G}\textbf{Q}\textbf{G}^{\textrm{T}}\bar{\textbf{H}}^{\textrm{T}},$$
where $\textbf{A}\succeq\textbf{B}$ denotes that $\textbf{A}-\textbf{B}$ is positive semidefinite. Using the inequality $det(\textbf{A})\geq det(\textbf{B})$ when $\textbf{A}\succeq\textbf{B}$ [37, Corollary 7.7.4], we have
$$\displaystyle C\leq\frac{1}{2T}\mathbb{E}_{\textbf{H}}\left(\log det\left(\textbf{I}_{2Tn_{r}}+(\alpha~SNR)\bar{\textbf{H}}\textbf{GG}^{\textrm{T}}\bar{\textbf{H}}^{\textrm{T}}\right)\right)=\frac{1}{2T}\mathbb{E}_{\textbf{H}}\left(\log det\left(\textbf{I}_{2k}+(\alpha~SNR)\textbf{G}^{\textrm{T}}\bar{\textbf{H}}^{\textrm{T}}\bar{\textbf{H}}\textbf{G}\right)\right)$$
(41)
$$\displaystyle\leq\frac{1}{2T}\log det\left(\mathbb{E}_{\textbf{H}}\left(\textbf{I}_{2k}+(\alpha~SNR)\textbf{G}^{\textrm{T}}\bar{\textbf{H}}^{\textrm{T}}\bar{\textbf{H}}\textbf{G}\right)\right)=\frac{1}{2T}\log det\left(\textbf{I}_{2k}+(\alpha n_{r}~SNR)\textbf{G}^{\textrm{T}}\textbf{G}\right)$$
(42)
$$\displaystyle=\frac{1}{2T}\log det\left(\textbf{I}_{2k}+(\alpha n_{r}~SNR)\textbf{D}\right),$$
(43)
where the equality in (41) is due to the identity $det(\textbf{I}+\textbf{AB})=det(\textbf{I}+\textbf{BA})$, the inequality in (42) is due to Jensen’s inequality and the fact that $\log det(.)$ is concave [37, Theorem 7.6.7] on the convex set of positive definite matrices, and (43) is obtained upon the singular value decomposition of $\textbf{G}^{\textrm{T}}\textbf{G}$, resulting in $\textbf{G}^{\textrm{T}}\textbf{G}=\textbf{UDU}^{\textrm{T}}$. Since the code-rate of the LSTBC is $p$ complex dpcu, $Rank(\textbf{G})=2pT$; denoting the non-zero diagonal entries of D by $d_{i}$, $i=1,2,\cdots,2pT$, we have
$$C~\leq~\frac{1}{2T}\sum_{i=1}^{2pT}\log\left(1+\left(\alpha n_{r}d_{i}\right)SNR\right).$$
(44)
Equation (44) reveals that as $SNR\to\infty$, $C\leq p\log SNR+o(\log SNR)$. Hence, if $p<n_{min}$ and the multiplexing gain satisfies $r>p$, the target rate exceeds the ergodic capacity of the equivalent channel, and the error probability of the LSTBC-scheme is bounded away from 0. In that case, the diversity gain $d(r)$ of the LSTBC-scheme is not given by (4), making the LSTBC-scheme strictly sub-optimal with respect to DMT.
∎
So, for DMT-optimality, the LSTBCs of the LSTBC-scheme should have a code-rate of at least $n_{min}$ complex dpcu. Now, we give a sufficiency criterion for an LSTBC-scheme to be DMT-optimal.
Corollary 1
Let the LSTBCs of an LSTBC-scheme $\mathcal{X}$ be given by $\mathcal{X}_{L}(SNR)=\{\mu\textbf{X}~{}|~{}\textbf{X}\in\mathcal{X}_{U}(SNR)\}$, with $\mu^{2}\doteq SNR^{\left(1-\frac{r}{n_{min}}\right)}$, and
$$\mathcal{X}_{U}(SNR)=\left\{\sum_{i=1}^{n_{min}T}(s_{iI}\textbf{A}_{iI}+s_{iQ}\textbf{A}_{iQ})\right\}$$
where $s_{iI},s_{iQ}\in\mathcal{A}_{M-\textrm{PAM}}$, $i=1,2,\cdots,n_{min}T$, $M\doteq SNR^{\frac{r}{2n_{min}}}$. Then, $\mathcal{X}$ is DMT-optimal for the quasi-static Rayleigh faded $n_{t}\times n_{r}$ MIMO channel with CSIR if it has the non-vanishing determinant property.
The proof follows from the application of Theorem 2. Notice the difference between the result of Corollary 1 and that of Theorem 3 of [4]. The latter result relies on STBC-schemes that are based on rate-$n_{t}$ LSTBCs, irrespective of the value of $n_{r}$, while our result only requires that the code-rate of the LSTBC be $\min\{n_{t},n_{r}\}$ complex dpcu which, together with NVD, guarantees DMT-optimality of the LSTBC-scheme. The usefulness of our result for asymmetric MIMO systems is discussed in the following section.
V DMT-optimal LSTBC-schemes for Asymmetric MIMO systems
Rate-$n_{t}$ LSTBC-schemes having the NVD property are known to be DMT-optimal for an arbitrary number of receive antennas. Methods to construct the LSTBCs of such schemes for arbitrary values of $n_{t}$ with minimal delay ($T=n_{t}$) have been proposed in [4], [10], and such constructions with additional properties have also been proposed for specific numbers of transmit antennas: the perfect codes for 2, 3, 4, and 6 transmit antennas [9]. For the case $n_{r}<n_{t}$, Corollary 1 establishes that a rate-$n_{r}$ LSTBC-scheme with the NVD property achieves the optimal DMT, and such LSTBC-schemes can make use of the sphere decoder efficiently. For asymmetric MIMO systems, rate-$n_{r}$ LSTBC-schemes with the NVD property can be obtained directly from rate-$n_{t}$ LSTBC-schemes with the NVD property, as shown in the following corollary.
Corollary 2
Consider a rate-$n_{t}$, minimum-delay LSTBC-scheme $\mathcal{X}=\{\mathcal{X}(SNR)\}$ equipped with the NVD property, where $\mathcal{X}(SNR)=\{\mu\textbf{X}~|~\textbf{X}\in\mathcal{X}_{U}(SNR)\}$, with $\mu^{2}\doteq SNR^{\left(1-\frac{r}{n_{t}}\right)}$ and
$$\mathcal{X}_{U}(SNR)=\left\{\sum_{i=1}^{n_{t}^{2}}(s_{iI}\textbf{A}_{iI}+s_{iQ}\textbf{A}_{iQ})\right\}$$
where $s_{iI},s_{iQ}\in\mathcal{A}_{M-\textrm{PAM}}$, $i=1,2,\cdots,n_{t}^{2}$, $M\doteq SNR^{\frac{r}{2n_{t}}}$. Let $\mathcal{I}\subset\{1,2,\cdots,n_{t}^{2}\}$, with $|\mathcal{I}|=n_{t}n_{r}$, where $n_{r}<n_{t}$. Then, the rate-$n_{r}$ LSTBC-scheme $\mathcal{X}^{\prime}$ consisting of the LSTBCs $\mathcal{X}^{\prime}(SNR)=\{\mu\textbf{X}~|~\textbf{X}\in\mathcal{X}_{U}^{\prime}(SNR)\}$, with $\mu^{2}\doteq SNR^{\left(1-\frac{r}{n_{r}}\right)}$ and
$$\mathcal{X}_{U}^{\prime}(SNR)=\left\{\sum_{i\in\mathcal{I}}(s_{iI}\textbf{A}_{iI}+s_{iQ}\textbf{A}_{iQ})\right\}$$
where $s_{iI},s_{iQ}\in\mathcal{A}_{M-\textrm{PAM}}$, $i\in\mathcal{I}$, $M\doteq SNR^{\frac{r}{2n_{r}}}$, is DMT-optimal for the asymmetric $n_{t}\times n_{r}$ quasi-static MIMO channel with Rayleigh fading and CSIR.
The proof is a trivial application of Corollary 1 and the fact that $\mathcal{X}^{\prime}$ also has the NVD property. As an example, consider the Golden code-scheme [6] $\mathcal{X}_{G}=\{\mathcal{X}_{G}(SNR)\}$, where
$$\mathcal{X}_{G}(SNR)=\left\{\mu\left[\begin{array}[]{cc}\alpha(s_{1}+s_{2}\theta)&\alpha(s_{3}+s_{4}\theta)\\ j\bar{\alpha}(s_{3}+s_{4}\bar{\theta})&\bar{\alpha}(s_{1}+s_{2}\bar{\theta})\\ \end{array}\right]\right\},$$
$s_{iI},s_{iQ}\in\mathcal{A}_{M-\textrm{PAM}}$, $i=1,2,3,4$, $M\doteq SNR^{\frac{r}{4}}$, $\mu^{2}\doteq SNR^{\left(1-\frac{r}{2}\right)}$, $\theta=(1+\sqrt{5})/2$, $\bar{\theta}=(1-\sqrt{5})/2$, $j=\sqrt{-1}$, $\bar{\alpha}=1+j\theta$, and $\alpha=1+j\bar{\theta}$. It is known that $\mathcal{X}_{G}$ is DMT-optimal for arbitrary values of $n_{r}$. So, from Corollary 2, the LSTBC-scheme $\mathcal{X}_{G}^{\prime}=\{\mathcal{X}_{G}^{\prime}(SNR)\}$, where
$$\mathcal{X}_{G}^{\prime}(SNR)=\left\{\mu\left[\begin{array}[]{cc}\alpha(s_{1}+s_{2}\theta)&0\\ 0&\bar{\alpha}(s_{1}+s_{2}\bar{\theta})\\ \end{array}\right]\right\}$$
with $s_{iI},s_{iQ}\in\mathcal{A}_{M-\textrm{PAM}}$, $i=1,2$, $M\doteq SNR^{\frac{r}{2}}$, $\mu^{2}\doteq SNR^{1-r}$, is DMT-optimal for the $2\times 1$ MIMO system.
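For this punctured Golden code, $det(\textbf{X})=\alpha\bar{\alpha}(s_{1}+s_{2}\theta)(s_{1}+s_{2}\bar{\theta})=(2+j)(s_{1}^{2}+s_{1}s_{2}-s_{2}^{2})$ (using $\theta+\bar{\theta}=1$, $\theta\bar{\theta}=-1$), so over non-zero Gaussian-integer symbol pairs $det(\textbf{X}\textbf{X}^{H})=|det(\textbf{X})|^{2}\geq|2+j|^{2}=5$: the NVD property survives puncturing. A brute-force sketch over a small, purely illustrative search box:

```python
from itertools import product

theta = (1 + 5 ** 0.5) / 2
theta_bar = (1 - 5 ** 0.5) / 2
alpha = 1 + 1j * theta_bar
alpha_bar = 1 + 1j * theta

def det_xxh(s1, s2):
    # The punctured Golden codeword is diagonal, so det(X) is the product
    # of its diagonal entries; det(XX^H) = |det(X)|^2.
    d = alpha * (s1 + s2 * theta) * alpha_bar * (s1 + s2 * theta_bar)
    return abs(d) ** 2

vals = [det_xxh(complex(a, b), complex(c, d))
        for a, b, c, d in product(range(-3, 4), repeat=4)
        if (a, b, c, d) != (0, 0, 0, 0)]

# min |det(X)|^2 = |2+j|^2 * min |s1^2 + s1*s2 - s2^2|^2 = 5 over Z[j]^2 \ {0}.
assert abs(min(vals) - 5) < 1e-6
```

The minimum is attained at $(s_{1},s_{2})=(1,0)$, and $s_{1}^{2}+s_{1}s_{2}-s_{2}^{2}$ never vanishes for non-zero Gaussian-integer pairs because $\theta$ is real and irrational.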
Note 2
The described method of obtaining a rate-$n_{r}$ LSTBC from a rate-$n_{t}$ LSTBC ($n_{r}<n_{t}$) is called puncturing [20].
V-A Schemes based on CIOD for the $2\times 1$ and $4\times 1$ MIMO systems
The STBC from CIOD [16] for $4$ transmit antennas, denoted by $\mathcal{X}_{C}$ and given by (45) at the top of the next page, is a rate-$1$ LSTBC with symbol-by-symbol ML-decodability. $\mathcal{X}_{C}$ has a minimum determinant of $10.24$ when its symbols $x_{i}$, $i=1,2,3,4$ take values from a $\tan^{-1}(2)/2$ radian rotated $M^{2}$-QAM constellation, irrespective of the value of $M$. Expressing (45) as
$$\mathcal{X}_{C}=\left\{\sum_{i=1}^{4}(x_{iI}\textbf{A}_{iI}+x_{iQ}\textbf{A}_{iQ})\right\}$$
(46)
where $x_{i}\in e^{j\theta}\mathcal{A}_{M^{2}-QAM}$, $i=1,2,3,4$, $\theta=\frac{1}{2}\tan^{-1}(2)$, we note that (46) can be alternatively written as
$$\mathcal{X}_{C}=\left\{\sum_{i=1}^{4}(s_{iI}\textbf{A}^{\prime}_{iI}+s_{iQ}\textbf{A}^{\prime}_{iQ})\right\}$$
where $s_{iI},s_{iQ}\in\mathcal{A}_{M-PAM}$, $i=1,\cdots,4$, and
$$\left.\begin{array}[]{l}\textbf{A}^{\prime}_{iI}=\cos\theta~\textbf{A}_{iI}+\sin\theta~\textbf{A}_{iQ},\\ \textbf{A}^{\prime}_{iQ}=-\sin\theta~\textbf{A}_{iI}+\cos\theta~\textbf{A}_{iQ},\\ \end{array}\right\}\quad i=1,2,3,4,~~\theta=\frac{1}{2}\tan^{-1}(2).$$
Since $\mathcal{X}_{C}$ has a minimum determinant of $10.24$ independent of the value of $M$, any non-zero matrix X of
$$\mathcal{X}_{\mathbb{Z}}=\left\{\left.\sum_{i=1}^{4}(s_{iI}\textbf{A}^{\prime}_{iI}+s_{iQ}\textbf{A}^{\prime}_{iQ})~\right|~s_{iI},s_{iQ}\in\mathbb{Z}\right\}$$
is such that
$$det\left(\textbf{X}\textbf{X}^{H}\right)\geq 0.04.$$
Hence, the CIOD based STBC-scheme has the NVD property and is DMT-optimal for the $4\times 1$ MIMO system. Using the same analysis, one can show that the STBC-scheme based on the CIOD for $2$ transmit antennas is DMT-optimal for the $2\times 1$ MIMO system.
V-B Four-group decodable STBC-schemes for $n_{t}\times 1$ MIMO systems
For the special case of $n_{t}$ being a power of $2$, rate-$1$, $4$-group decodable STBCs have been extensively studied in the literature [17]-[20]. For all these STBCs, the $2n_{t}$ real symbols, taking values from PAM constellations, can be separated into four equal groups such that the symbols of each group can be decoded independently of the symbols of all the other groups. For all these STBCs, the minimum determinant, irrespective of the size of the signal constellation, is given by [20]
$$\min_{\Delta\textbf{X}\neq\textbf{O}}det\left(\Delta\textbf{X}\Delta\textbf{X}^{H}\right)=d_{\textrm{P,min}}^{4}$$
where $d_{\textrm{P,min}}$ is the minimum product distance in $n_{t}/2$ real dimensions, which has been shown to be a constant bounded away from $0$ in [38]. Hence, from Corollary 1, LSTBC-schemes consisting of these $4$-group decodable STBCs are DMT-optimal for $n_{t}\times 1$ MIMO systems with $n_{t}$ being a power of $2$.
V-C Fast-decodable STBCs
In [20], a rate-$2$ LSTBC was constructed for the $4\times 2$ MIMO system, and in [39], the LSTBC-scheme based on this code is shown to have the NVD property when QAM is used. An interesting property of this LSTBC is that it allows fast decoding, meaning that, for the ML-decoding of the $16$ real symbols (or $8$ complex symbols) of the STBC using a sphere decoder, it suffices to use a $9$ real-dimensional sphere decoder instead of a $16$ real-dimensional one. Since the LSTBC-scheme based on this fast-decodable STBC has the non-vanishing determinant property, it is DMT-optimal for the $4\times 2$ MIMO system.
Several rate-$n_{r}$ fast-decodable STBCs have been constructed in [15] for various asymmetric MIMO configurations: for example, for $4\times 2$, $6\times 2$, $6\times 3$, $8\times 2$, $8\times 3$, and $8\times 4$ MIMO systems. For an $n_{t}\times n_{r}$ asymmetric MIMO system, these STBCs transmit a total of $n_{t}n_{r}$ complex symbols in $n_{t}$ channel uses, and with regard to ML-decoding, only an $\left(n_{t}n_{r}-\frac{n_{t}}{2}\right)$ complex-dimensional sphere decoder is required, as against the $n_{t}n_{r}$ complex-dimensional sphere decoder required for decoding general rate-$n_{r}$ LSTBCs. These STBCs are constructed from division algebras, and the STBC-schemes based on them have the NVD property [15]. Hence, for an $n_{t}\times n_{r}$ asymmetric MIMO system, LSTBC-schemes consisting of these rate-$n_{r}$ fast-decodable STBCs are DMT-optimal. Table I lists some known LSTBC-schemes that are now proven to be DMT-optimal using the sufficient criterion proposed in this paper.
The DMT curves for some well-known DMT-optimal LSTBC-schemes are shown in Fig. 4, Fig. 4, Fig. 4 and Fig. 4. In all the figures, the perfect code-scheme refers to the LSTBC-scheme that is based on rate-$n_{t}$ perfect codes [9], [10], and this scheme is known to be DMT-optimal for arbitrary number of receive antennas [4]. The DMT-curves of the LSTBC-schemes that are based on rate-$n_{r}$ LSTBCs coincide with that of the rate-$n_{t}$ perfect code-scheme.
VI Concluding Remarks
In this paper, we have presented an enhanced sufficient criterion for DMT-optimality of STBC-schemes, with which we have established the DMT-optimality of several low-ML-decoding-complexity LSTBC-schemes for certain asymmetric MIMO systems. However, obtaining a necessary and sufficient condition for DMT-optimality of STBC-schemes is still an open problem. Further, obtaining low-ML-decoding-complexity STBC-schemes with NVD for an arbitrary number of transmit antennas is another possible direction of research.
Appendix A Evaluation of $\textrm{P}(\widetilde{\mathcal{O}})$
We have
$$\textrm{P}(\widetilde{\mathcal{O}})=\int_{\widetilde{\mathcal{O}}}p(\textbf{H})d\textbf{H}=\int_{\widetilde{\mathcal{O}}}\prod_{i=1}^{n_{r}}\prod_{j=1}^{n_{t}}p(h_{ij})d(h_{ij})$$
(48)
$$=\int_{\check{\mathcal{O}}}\prod_{i,j}p(|h_{ij}|^{2})d(|h_{ij}|^{2}),$$
(49)
where (48) is because of the independence of the entries of H, and (49) is by change of variables with $\check{\mathcal{O}}$ as defined in (47) at the bottom of the page. It is well known that $p(|h_{ij}|^{2})=e^{-|h_{ij}|^{2}}$ for the case of Rayleigh fading. Let $|h_{ij}|^{2}=SNR^{-\alpha_{ij}}$. Now, $p(\alpha_{ij})=(\log_{e}SNR)e^{-SNR^{-\alpha_{ij}}}SNR^{-\alpha_{ij}}$. Defining the column vector $\boldsymbol{\alpha}\in\mathbb{R}^{n_{t}n_{r}\times 1}$ as $\boldsymbol{\alpha}=[\alpha_{ij}]_{i=1,\cdots,n_{r},~{}j=1,\cdots,n_{t}}$, we have
$$\textrm{P}(\widetilde{\mathcal{O}})=\kappa\int_{\vec{\mathcal{O}}}e^{-\sum_{i,j}SNR^{-\alpha_{ij}}}SNR^{-\sum_{i,j}\alpha_{ij}}d\boldsymbol{\alpha},$$
(50)
where $\kappa=(\log_{e}SNR)^{n_{t}n_{r}}$ and
$$\vec{\mathcal{O}}=\left\{\boldsymbol{\alpha}\left|\begin{array}[]{l}\sum_{i}\log\left(1+\sum_{j}\frac{SNR^{1-\alpha_{ij}}}{n_{t}}\right)>r\log SNR+o(\log SNR),\\ \sum_{i,j}\log\left(1+SNR^{1-\alpha_{ij}}\right)\leq n_{t}r\log SNR+o(\log SNR)\\ \end{array}\right.\right\}=\left\{\boldsymbol{\alpha}\left|\begin{array}[]{l}\sum_{i}\max\{(1-\alpha_{ij})^{+},~j=1,\cdots,n_{t}\}>r,\\ \sum_{i,j}(1-\alpha_{ij})^{+}\leq n_{t}r\\ \end{array}\right.\right\},$$
where $\max\{.\}$ denotes “the largest element of”. Note that in (50), the integrand is exponentially decaying with $SNR$ when any one of the $\alpha_{ij}$ is negative, unlike a polynomial decay when all the $\alpha_{ij}$ are non-negative. Hence, using the concept developed in [2] (see [2, p. 1079] for details),
$$\textrm{P}(\widetilde{\mathcal{O}})\doteq SNR^{-f(\boldsymbol{\alpha}^{*})},$$
where
$$f(\boldsymbol{\alpha}^{*})=\inf_{\vec{\mathcal{O}}\bigcap\mathbb{R}_{+}^{n_{t}n_{r}\times 1}}\left\{\sum_{i=1}^{n_{r}}\sum_{j=1}^{n_{t}}\alpha_{ij}\right\},$$
with $\mathbb{R}_{+}$ representing the set of non-negative real numbers. It is easy to check that the infimum occurs when all but two of the $\alpha_{ij}$ equal $1-\frac{r}{n_{r}}$, while the other two equal $1-\frac{r}{n_{r}}+\delta$ and $1-\frac{r}{n_{r}}-\delta$, respectively, where $\delta\to 0^{+}$ (the perturbation ensures that the first constraint defining $\vec{\mathcal{O}}$ holds strictly). Hence,
$$\textrm{P}(\widetilde{\mathcal{O}})\doteq SNR^{-n_{t}(n_{r}-r)}.$$
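The exponent follows by plugging the minimizer into $f$ (filling in the computation the text leaves to the reader):
$$f(\boldsymbol{\alpha}^{*})=\lim_{\delta\to 0^{+}}\left[(n_{t}n_{r}-2)\left(1-\frac{r}{n_{r}}\right)+\left(1-\frac{r}{n_{r}}+\delta\right)+\left(1-\frac{r}{n_{r}}-\delta\right)\right]=n_{t}n_{r}\left(1-\frac{r}{n_{r}}\right)=n_{t}(n_{r}-r),$$
which matches the exponent in the preceding estimate.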
Appendix B Proof that $\mathcal{O}_{l}=\mathcal{O}_{l}^{\prime}$ almost surely as $SNR\to\infty$
As done earlier, the rows of the random matrix H are denoted by $\textbf{h}_{i}$, $i=1,2,\cdots,n_{r}$. Let $|h_{ij}|^{2}=SNR^{-\alpha_{ij}}$ with $\alpha_{ij}\in\mathbb{R}$, and let $\textbf{u}\triangleq[u_{1},u_{2},\cdots,u_{n_{t}}]^{\textrm{T}}$ be a complex column vector independent of $\textbf{h}_{i}$, with either $|u_{j}|^{2}\doteq SNR^{0}$ or $u_{j}=0$, $j=1,2,\cdots,n_{t}$. Defining the indicators $I_{1},I_{2},\cdots,I_{n_{t}}$ as
$$I_{j}=\left\{\begin{array}[]{ll}1,&\textrm{if }|u_{j}|^{2}\doteq SNR^{0},\\ 0,&\textrm{otherwise},\\ \end{array}\right.~~~j=1,\cdots,n_{t},$$
we have, as $SNR\to\infty$,
$$|\textbf{h}_{i}\textbf{u}|^{2}=\sum_{j=1}^{n_{t}}h_{ij}u_{j}\sum_{k=1}^{n_{t}}h_{ik}^{*}u_{k}^{*}=\sum_{j=1}^{n_{t}}|h_{ij}|^{2}|u_{j}|^{2}+2\sum_{j=1}^{n_{t}-1}\sum_{k=j+1}^{n_{t}}\textrm{Re}\left(h_{ij}h_{ik}^{*}u_{j}u_{k}^{*}\right)~\dot{\geq}~SNR^{-\beta}~~\textrm{almost surely},$$
(51)
where $\textrm{Re}(.)$ denotes “the real part of”, and
$$\beta=\min\{\alpha_{ij}~{}|~{}I_{j}\neq 0,~{}j=1,2,\cdots,n_{t}\}.$$
We use the term “almost surely” in (51) because the $h_{ij}$’s are independent random variables. Now, denoting the $i^{th}$ row of $\textbf{HU}_{l}$ by $\textbf{h}_{i}(l)$ (with entries $h_{ij}(l)$, $j=1,\cdots,n_{t}$) and the $(i,j)^{th}$ entry of $\textbf{U}_{l}$ by $u_{ij}(l)$, let $|h_{ij}(l)|^{2}\doteq SNR^{-\beta_{ij}}$ with $\beta_{ij}\in\mathbb{R}$. It is to be noted that since $\textbf{U}_{l}$ is unitary, each row and column of $\textbf{U}_{l}$ has at least one non-zero entry. Since $\textbf{U}_{l}$ is full-ranked, it is always possible to obtain $\eta_{i}\in\{1,\cdots,n_{t}\}$, $i=1,2,\cdots,n_{t}$, such that
$$[\eta_{1},\cdots,\eta_{n_{t}}]=[1,2,\cdots,n_{t}]\textbf{P},$$
(52)
$$u_{\eta_{j}j}(l)\neq 0,~~\forall~j=1,\cdots,n_{t},$$
(53)
where P is some permutation matrix of size $n_{t}\times n_{t}$. In other words, for any unitary matrix, one can choose a non-zero element in each column such that in each column, the position of the chosen non-zero element is different from that of the chosen non-zero elements of all other columns. Using (51), we have for all $i=1,\cdots,n_{r}$, $j=1,\cdots,n_{t}$,
$$|h_{ij}(l)|^{2}~\dot{\geq}~SNR^{-\min\{\alpha_{ik}~|~u_{kj}(l)\neq 0,~k=1,\cdots,n_{t}\}}~\dot{\geq}~SNR^{-\alpha_{i\eta_{j}}}$$
almost surely, so that
$$\beta_{ij}\leq\alpha_{i\eta_{j}}~~~\textrm{almost surely}.$$
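The existence of such a permutation $\eta$ can be checked constructively: for a full-rank matrix, at least one term of the determinant expansion $\sum_{\textbf{P}}\prod_{j}u_{\eta_{j}j}$ is non-zero. A minimal Python sketch (the example matrix and the helper name are illustrative, not from the paper) brute-forces one non-zero entry per column, each in a distinct row:

```python
from itertools import permutations

def nonzero_permutation(U, tol=1e-12):
    """Return eta with U[eta[j]][j] != 0 for every column j.

    For a full-rank (e.g. unitary) matrix such an eta always exists,
    since at least one term of the determinant expansion is non-zero.
    Brute force over permutations; fine for small n_t.
    """
    n = len(U)
    for eta in permutations(range(n)):
        if all(abs(U[eta[j]][j]) > tol for j in range(n)):
            return list(eta)
    return None  # unreachable when U is full-rank

# Example: a real unitary (orthogonal) matrix with some zero entries
U = [[0.0, 1.0, 0.0],
     [0.6, 0.0, 0.8],
     [0.8, 0.0, -0.6]]
eta = nonzero_permutation(U)
assert all(U[eta[j]][j] != 0 for j in range(3))
```

Here column 0 is represented by row 1, column 1 by row 0, and column 2 by row 2, so the chosen non-zero entries occupy distinct rows, as required by (52)–(53).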
By assumption, $|h_{ij}(l)|^{2}\doteq SNR^{-\beta_{ij}}$. So, let
$$|h_{ij}(l)|^{2}=cSNR^{-\beta_{ij}}+o\left(SNR^{-\beta_{ij}}\right)$$
with $c\doteq SNR^{0}$. Hence, as $SNR\to\infty$,
$$\sum_{j=1}^{n_{t}}\log\left(1+SNR|h_{ij}(l)|^{2}\right)=\sum_{j=1}^{n_{t}}\log\left(1+cSNR^{1-\beta_{ij}}+o\left(SNR^{1-\beta_{ij}}\right)\right)\geq\sum_{j=1}^{n_{t}}\log\left(1+SNR^{1-\alpha_{i\eta_{j}}}\right)=\sum_{j=1}^{n_{t}}\log\left(1+SNR^{1-\alpha_{ij}}\right)=\sum_{j=1}^{n_{t}}\log\left(1+SNR|h_{ij}|^{2}\right)~\textrm{almost surely},$$
and this is true for all $i=1,2,\cdots,n_{r}$. The second equality holds because, by (52), $\eta_{1},\cdots,\eta_{n_{t}}$ is a permutation of $1,\cdots,n_{t}$, so the sum over $j$ is merely reordered. Hence, at a high SNR, almost surely
$$\displaystyle\sum_{i,j}\log\left(1+SNR|h_{ij}(l)|^{2}\right)\geq\sum_{i,j}\log%
\left(1+SNR|h_{ij}|^{2}\right).$$
So, if
$$\sum_{i,j}\log\left(1+SNR|h_{ij}|^{2}\right)>n_{t}r\log SNR+o(\log SNR),$$
then
$$\sum_{i,j}\log\left(1+SNR|h_{ij}(l)|^{2}\right)>n_{t}r\log SNR+o(\log SNR)$$
almost surely as $SNR\to\infty$. Since $\textbf{U}_{l}$ is unitary, the converse can be proven similarly, using the same steps as above: if
$$\sum_{i,j}\log\left(1+SNR|h_{ij}(l)|^{2}\right)>n_{t}r\log SNR+o(\log SNR),$$
then
$$\sum_{i,j}\log\left(1+SNR|h_{ij}|^{2}\right)>n_{t}r\log SNR+o(\log SNR)$$
almost surely at a high SNR. Hence, as $SNR\to\infty$,
$$\sum_{i,j}\log\left(1+SNR|h_{ij}|^{2}\right)>n_{t}r\log SNR+o(\log SNR)$$
is equivalent to
$$\sum_{i,j}\log\left(1+SNR|h_{ij}(l)|^{2}\right)>n_{t}r\log SNR+o(\log SNR)$$
almost surely and so, $\mathcal{O}_{l}=\mathcal{O}_{l}^{\prime}$ almost surely as $SNR\to\infty$.
Appendix C Proof that $P_{\mathcal{O}_{l}^{\prime}}(\delta)\leq\frac{1}{2}e^{-\left(aSNR^{\frac{%
\delta}{n_{r}}}+o\left(SNR^{\frac{\delta}{n_{r}}}\right)\right)}$, $\delta>0$
Recall that
$$P_{\mathcal{O}_{l}^{\prime}}(\delta)=\int_{\mathcal{O}_{l}^{\prime}(\delta)}p(%
\textbf{H}_{l})Q\left(\frac{\|\textbf{H}_{l}\textbf{D}_{l}\|}{\sqrt{2}}\right)%
d\textbf{H}_{l}$$
where
$$\mathcal{O}_{l}^{\prime}(\delta)\triangleq\left\{\textbf{H}_{l}\left|\sum_{i,j%
}\log\left(1+SNR|h_{ij}(l)|^{2}\right)\geq n_{t}(r+\delta)\log SNR\right.%
\right\}.$$
We define $\|\textbf{H}_{l}\textbf{D}_{l}\|_{min}(\delta)$ as
$$\|\textbf{H}_{l}\textbf{D}_{l}\|^{2}_{min}(\delta)=\min_{\mathcal{O}_{l}^{%
\prime}(\delta)}\{\|\textbf{H}_{l}\textbf{D}_{l}\|^{2}\}.$$
(55)
We have
$$P_{\mathcal{O}_{l}^{\prime}}(\delta)\leq\int_{\mathcal{O}_{l}^{\prime}(\delta)}p(\textbf{H}_{l})Q\left(\frac{\|\textbf{H}_{l}\textbf{D}_{l}\|_{min}(\delta)}{\sqrt{2}}\right)d\textbf{H}_{l}\leq Q\left(\frac{\|\textbf{H}_{l}\textbf{D}_{l}\|_{min}(\delta)}{\sqrt{2}}\right)\leq\frac{1}{2}e^{-\frac{\|\textbf{H}_{l}\textbf{D}_{l}\|^{2}_{min}(\delta)}{4}}$$
(56)
where the last inequality is due to the bound $Q(x)\leq\frac{1}{2}e^{\frac{-x^{2}}{2}}$, $x\geq 0$. We now proceed to evaluate $\|\textbf{H}_{l}\textbf{D}_{l}\|^{2}_{min}(\delta)$. Denote the non-zero entries of $\textbf{D}_{l}$ by $d_{j}(l)$, $j=1,2,\cdots,n_{t}$; these are the singular values of $\Delta\textbf{X}_{l}$, which we assume to be full-rank (i.e., of rank $n_{t}$), as is necessary for the STBC to have a diversity gain of $n_{t}n_{r}$ when $r=0$. Letting $a_{ij}\triangleq|h_{ij}(l)|^{2}$, the evaluation of $\|\textbf{H}_{l}\textbf{D}_{l}\|^{2}_{min}(\delta)$ can be cast as the following convex optimization problem:
$$\underset{a_{ij}}{\operatorname{minimize}}~{}\sum_{i=1}^{n_{r}}\sum_{j=1}^{n_{%
t}}a_{ij}d_{j}^{2}(l)$$
(57)
subject to
$$-\frac{1}{n_{t}}\sum_{i=1}^{n_{r}}\sum_{j=1}^{n_{t}}\log(1+a_{ij}SNR)+(r+\delta)\log SNR\leq 0,$$
$$-a_{ij}\leq 0,~~~\forall i=1,\cdots,n_{r},~\forall j=1,\cdots,n_{t}.$$
The solution to this optimization problem is
$$a_{ij}=\frac{1}{SNR}\left[\frac{\lambda SNR}{n_{t}d_{j}^{2}(l)}-1\right]^{+},$$
(58)
where $\lambda$ is the Karush-Kuhn-Tucker (KKT) multiplier satisfying
$$\sum_{i=1}^{n_{r}}\sum_{j=1}^{n_{t}}\log\left(1+\left[\frac{\lambda SNR}{n_{t}%
d_{j}^{2}(l)}-1\right]^{+}\right)=n_{t}(r+\delta)\log SNR,$$
and hence,
$$\sum_{i=1}^{n_{r}}\sum_{j=1}^{n_{t}}\left[\log\left(\frac{\lambda SNR}{n_{t}d_%
{j}^{2}(l)}\right)\right]^{+}=n_{t}(r+\delta)\log SNR.$$
(59)
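Equations (58)–(59) form a water-filling-type KKT system: the multiplier $\lambda$ is raised until the rate constraint holds with equality. The Python sketch below uses hypothetical values for $n_{t}$, $n_{r}$, $r+\delta$, SNR and the $d_{j}^{2}(l)$; bisection on $\lambda$ is our assumed solution method, justified because the left side of (59) is nondecreasing in $\lambda$:

```python
import math

def waterfill(d2, snr, n_r, target):
    """Solve (58)-(59): a_j = (1/SNR)*[lam*SNR/(n_t*d_j^2) - 1]^+,
    with lam chosen so that n_r * sum_j log(1 + a_j*SNR) = target.
    Bisection works since the left side is nondecreasing in lam."""
    n_t = len(d2)

    def lhs(lam):
        return n_r * sum(math.log(1.0 + max(lam * snr / (n_t * d2j) - 1.0, 0.0))
                         for d2j in d2)

    lo, hi = 0.0, 1.0
    while lhs(hi) < target:   # grow the bracket if needed
        hi *= 2.0
    for _ in range(100):      # bisection on lam
        mid = 0.5 * (lo + hi)
        if lhs(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    a = [max(lam * snr / (n_t * d2j) - 1.0, 0.0) / snr for d2j in d2]
    return lam, a

# hypothetical example: n_t = n_r = 2, r + delta = 0.5, SNR = 10^4
snr = 1e4
d2 = [1.0, 4.0]                     # squared singular values d_j^2(l)
target = 2 * 0.5 * math.log(snr)    # n_t*(r+delta)*log(SNR)
lam, a = waterfill(d2, snr, n_r=2, target=target)
# constraint (59) holds with equality; the smaller d_j^2 gets the larger a_j
assert abs(2 * sum(math.log(1 + aj * snr) for aj in a) - target) < 1e-6
assert a[0] > a[1] > 0
```

As expected for a worst-case channel, the gain $a_{ij}$ is largest along the direction where the codeword difference is weakest (smallest $d_{j}^{2}(l)$).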
Noting that $d_{j}^{2}(l)$ are the eigenvalues of $\Delta\textbf{X}_{l}\Delta\textbf{X}_{l}^{H}$, we have $\|\Delta\textbf{X}_{l}\|^{2}~{}\dot{\leq}~{}SNR$ from (3). Therefore, $tr\left(\Delta\textbf{X}_{l}\Delta\textbf{X}_{l}^{H}\right)~{}\dot{\leq}~{}SNR$ which leads to $\sum_{j=1}^{n_{t}}d_{j}^{2}(l)~{}\dot{\leq}~{}SNR$. Therefore, we obtain
$$d_{j}^{2}(l)~{}\dot{\leq}~{}SNR,~{}~{}~{}\forall j=1,2,\cdots,n_{t}.$$
(60)
Without loss of generality, let $a_{ij}$, $i=1,\cdots,n_{r}$, $j=1,\cdots,k$, for some $k\leq n_{t}$, be positive. So, from (59), we have
$$\sum_{i=1}^{n_{r}}\sum_{j=1}^{k}\left[\log\left(\frac{\lambda SNR}{n_{t}d_{j}^%
{2}(l)}\right)\right]=n_{t}(r+\delta)\log SNR$$
so that
$$\lambda=n_{t}SNR^{-\left(1-\frac{n_{t}(r+\delta)}{kn_{r}}\right)}\left(\prod_{j=1}^{k}d_{j}^{2}(l)\right)^{\frac{1}{k}}~\dot{\geq}~SNR^{-\left(1-\frac{n_{t}(r+\delta)}{kn_{r}}\right)}\left(\frac{\prod_{j=1}^{n_{t}}d_{j}^{2}(l)}{SNR^{n_{t}-k}}\right)^{\frac{1}{k}}~\dot{\geq}~SNR^{-\left(1-\frac{n_{t}(r+\delta)}{kn_{r}}\right)}\left(\frac{SNR^{n_{t}\left(1-\frac{r}{n_{r}}\right)}}{SNR^{n_{t}-k}}\right)^{\frac{1}{k}}=SNR^{\frac{n_{t}\delta}{kn_{r}}},$$
(61)
where the first inequality is due to (60), and the second is due to the assumption that $det(\Delta\textbf{X}_{l}\Delta\textbf{X}_{l}^{H})=\prod_{j=1}^{n_{t}}d_{j}^{2}(l)~\dot{\geq}~SNR^{n_{t}\left(1-\frac{r}{n_{r}}\right)}$. So, we have $\lambda~\dot{\geq}~SNR^{\frac{\delta n_{t}}{kn_{r}}}$, and using this in (58), we obtain, as $SNR\to\infty$,
$$a_{ij}=\left[\frac{\lambda}{n_{t}d_{j}^{2}(l)}-\frac{1}{SNR}\right],j=1,\cdots%
,n_{t}.$$
It is now clear that all the $a_{ij}$, $i=1,\cdots,n_{r}$, $j=1,\cdots,n_{t}$, are positive (i.e., $k=n_{t}$) so that $\lambda~{}\dot{\geq}~{}SNR^{\frac{\delta}{n_{r}}}$.
Using these obtained values of $a_{ij}$ in (57), we have, as $SNR\to\infty$,
$$\|\textbf{H}_{l}\textbf{D}_{l}\|^{2}_{min}(\delta)=\sum_{i=1}^{n_{r}}\sum_{j=1}^{n_{t}}\left(\frac{\lambda}{n_{t}}-\frac{d_{j}^{2}(l)}{SNR}\right)\geq\sum_{i=1}^{n_{r}}\sum_{j=1}^{n_{t}}\left(\frac{\lambda}{n_{t}}-o(\log SNR)\right)=n_{r}\lambda-o(\log SNR)~\dot{\geq}~SNR^{\frac{\delta}{n_{r}}},$$
(64)
where the inequality holds because $d_{j}^{2}(l)~\dot{\leq}~SNR$, so that $d_{j}^{2}(l)/SNR$ is $o(\log SNR)$, and the last step (64) is due to the fact that $\lambda~\dot{\geq}~SNR^{\frac{\delta}{n_{r}}}$. So, $\|\textbf{H}_{l}\textbf{D}_{l}\|^{2}_{min}(\delta)\geq aSNR^{\frac{\delta}{n_{r}}}+o\left(SNR^{\frac{\delta}{n_{r}}}\right)$ with $a\doteq SNR^{0}$. Using this result in (56), we arrive at
$$P_{\mathcal{O}_{l}^{\prime}}(\delta)\leq\frac{1}{2}e^{-\left(aSNR^{\frac{%
\delta}{n_{r}}}+o\left(SNR^{\frac{\delta}{n_{r}}}\right)\right)}.$$
This completes the proof.
Acknowledgements
We thank L. P. Natarajan for useful discussions on DMT-optimality of STBCs. We also thank the anonymous reviewers for their constructive comments which have greatly helped in improving the quality of the paper.
Path-Integral Ground-State and Superfluid Hydrodynamics
of a Bosonic Gas of Hard Spheres
Maurizio Rossi and Luca Salasnich
Dipartimento di Fisica e Astronomia
“Galileo Galilei” and CNISM, Università di Padova,
Via Marzolo 8, 35122 Padova, Italy
(November 25, 2020)
Abstract
We study a bosonic gas of hard spheres by using the exact zero-temperature
Path-Integral Ground-State (PIGS) Monte Carlo method and the equations of
superfluid hydrodynamics.
The PIGS method is implemented to calculate, for the bulk system, the energy
per particle and the condensate fraction over a large range of the gas
parameter $na^{3}$ (with $n$ the number density and $a$ the s–wave scattering
length), going from the dilute gas into the solid phase.
The Maxwell construction is then adopted to determine the freezing point at
$na^{3}=0.278\pm 0.001$ and the melting point at $na^{3}=0.286\pm 0.001$.
In the liquid phase, where the condensate fraction is finite, the equations
of superfluid hydrodynamics, based on the PIGS equation of state, are used to
find other relevant quantities as a function of the gas parameter: the
chemical potential, the pressure and the sound velocity.
In addition, within Feynman’s approximation, from the PIGS static structure
factor we determine the full excitation spectrum, which displays a maxon-roton
behavior when the gas parameter is close to the freezing value.
Finally, the equations of superfluid hydrodynamics with the PIGS equation of
state are solved for a bosonic system under axially symmetric harmonic
confinement, obtaining its collective breathing modes.
pacs: 02.70.Ss, 03.75.Hh, 03.75.Kk
I Introduction
In this paper we analyze a system of identical interacting bosons by using the
hard sphere (HS) model hans2 , which is a useful reference system for
classical and quantum many-body theories both for weak and strong interactions
because it depends only on one interaction parameter: the sphere diameter $a$
hans2 ; book1 ; book2 .
The quantum HS model has led to the understanding of several general features
of helium in its condensed phases book1 ; book2 , serving as a reference or
a starting point for studies with more accurate potentials kalo .
In addition the quantum HS model provides the standard benchmark for
mean–field approaches boro2 such as, for example, Gross-Pitaevskii
equation or Hartree–Fock–Bogoliubov approximation kim .
A large number of approaches have been put forward to deal with quantum HS and,
among them, Monte Carlo methods based on Feynman’s path integrals stand out as
the most powerful tools sese2 .
Path integral Monte Carlo (PIMC) studies of quantum HS systems at finite
temperature cover almost the whole relevant gas parameter range
ches ; sese ; sese2 ; sese3 .
However, at zero temperature, there are studies that cover (with different
techniques) only portions of the $na^{3}$ range and are mainly devoted to the
investigation of different properties such as the universal behavior in the
dilute limit boro2 or the gas–solid transition kalo .
Here we calculate the equation of state of the bulk quantum HS system of
identical bosons, from very low values of the gas parameter up to the
high-density solid, with the path integral ground state (PIGS) Monte Carlo
method pigs , which provides exact expectation values on the ground state.
Our exact PIGS results for the equation of state are then used to derive other
relevant properties by means of the equations of superfluid hydrodynamics
book1 ; book2 .
Current experiments on bosonic atomic gases reach temperatures so low that the
effects of thermal fluctuations are largely negligible, making a zero-temperature
approach well justified book1 ; book2 .
The paper is organized in the following way.
The basic features of PIGS method are reported in Section II.
Numerical results on the ground-state energy and condensate fraction are shown
and discussed in Section III, where we compare our data with previous Monte Carlo
calculations and other theoretical approaches.
In Section IV we introduce the zero-temperature hydrodynamic equations of
superfluids book1 ; book2 and we use them (with the PIGS equation of state)
to find other relevant quantities as a function of the gas parameter: the chemical
potential, the pressure and the sound velocity.
We find that our sound velocity, which gives the low-momentum linear slope of the
excitation spectrum, is in excellent agreement with the numerical results obtained
with the help of the PIGS static response function.
Moreover, within the Feynman’s approximation, we determine the full spectrum of
elementary excitations, which displays a maxon-roton behavior when the gas
parameter is close to the freezing value.
In Section V we consider the inclusion of an anisotropic but axially-symmetric
harmonic trapping potential.
The collective modes of the confined Bose gas are then easily calculated using
again the equations of superfluid hydrodynamics with the PIGS equation of state,
which is locally approximated with a polytropic equation of state nick .
The paper is concluded by Section VI.
II PIGS method
The aim of PIGS is to improve a variationally optimized trial wave function
$\psi_{t}$ by constructing, in the Hilbert space of the system, a path which
connects the starting $\psi_{t}$ with the exact lowest energy wave function of
the system, $\psi_{0}$, constrained by the choice of the number of particles $N$, the
geometry of the simulation box, the boundary conditions and the density $n$,
provided that $\langle\psi_{t}|\psi_{0}\rangle\neq 0$.
The correct correlations among the particles arise during this path through the
action of the imaginary time evolution operator $\hat{G}=e^{-\tau\hat{H}}$, where
$\hat{H}$ is the Hamiltonian operator.
In principle, $\psi_{0}$ is reached in the limit of infinite imaginary time, but a
very accurate representation for $\psi_{0}$ is given by
$\psi_{\tau}=e^{-\tau\hat{H}}\psi_{t}$, if $\tau$ is large enough (but finite).
The wave function $\psi_{\tau}$ can be analytically written by discretizing the path
in small imaginary time steps.
This discretization is necessary since the available approximations for $\hat{G}$
become more accurate as the imaginary time step gets smaller cepe .
Here we have used the Cao–Berne approximation caob , which is one of the most
efficient propagators for HS (i.e., it allows for larger values of the imaginary
time step) sese .
Because of this discretization of the imaginary time path, the quantum system
is mapped into a system of specially interacting classical open polymers pigs .
Each open polymer represents the full imaginary time path of a quantum particle that
is sampled by means of the Metropolis algorithm.
Thus, the entire imaginary time evolution of the system is sampled at each Monte Carlo
step pata .
An appealing feature of the PIGS method is that, in $\psi_{\tau}$, the variational
ansatz acts only as a starting point, while the full path is governed by $\hat{G}$,
which depends only on the Hamiltonian $\hat{H}$.
Thus the PIGS method turns out to be unbiased by the choice of the trial wave function
pata and then the only input is $\hat{H}$.
In the coordinate representation, the Hamiltonian of the quantum HS system is
$$H=-\frac{\hbar^{2}}{2m}\sum_{i=1}^{N}\nabla^{2}_{i}+\sum_{<i,j>}V(r_{ij})$$
(1)
where $r_{ij}=|\vec{r}_{i}-\vec{r}_{j}|$ and
$$V(r)=\left\{\begin{array}[]{ll}+\infty&{\rm for}\quad r<a\\
0&{\rm otherwise}\;.\end{array}\right.$$
(2)
The Hamiltonian (1) can be reduced to a useful dimensionless form by expressing
energies in units of $\frac{\hbar^{2}}{2ma^{2}}$ and lengths in units of $a$, which
also represents the s–wave scattering length.
We make use of these reduced units throughout the paper.
The trial wave function $\psi_{t}$ does not really need to be fully variationally optimized:
in fact, for a large enough value of $\tau$, PIGS results turn out to be independent of
$\psi_{t}$, in both phases pata ; vita .
The sole role of $\psi_{t}$ is to determine the length of the path in imaginary time
pata needed to converge to $\psi_{0}$: the better $\psi_{t}$ is, the faster the convergence.
Here, as $\psi_{t}$, we have employed a Jastrow wave function, where the two-body
correlations are given by the first-order expansion of the exact solution of the two-body
problem, i.e.:
$$\psi_{t}(R)=\prod_{<i,j>}\left(1-\frac{a}{r_{ij}}\right)$$
(3)
where $R=\{\vec{r}_{1},\dots,\vec{r}_{N}\}$ are the coordinates of the $N$ HS.
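For concreteness, the Jastrow trial wave function (3) can be evaluated directly as a product of pair factors $1-a/r_{ij}$. A minimal Python sketch (the three-particle configuration is an arbitrary illustration, in units of $a$):

```python
import math

def psi_t(R, a=1.0):
    """Jastrow trial wave function of Eq. (3): prod_{i<j} (1 - a/r_ij).
    R is a list of 3D coordinates (in units of a); all pair distances
    are assumed to exceed the hard-sphere diameter a."""
    psi = 1.0
    n = len(R)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(R[i], R[j])
            psi *= 1.0 - a / r
    return psi

# three particles at mutual distances 2, 2 and 2*sqrt(2) (in units of a)
R = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
expected = 0.5 * 0.5 * (1.0 - 1.0 / (2.0 * math.sqrt(2.0)))
assert abs(psi_t(R) - expected) < 1e-12
```

Note that each factor vanishes as $r_{ij}\to a$, so the trial function correctly excludes configurations with overlapping spheres.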
All the approximations involved in the PIGS method, i.e. the choice of the total imaginary
time $\tau$ and of the imaginary time step $\delta\tau$ (that fixes the quality of the
approximation on $\hat{G}$), are so well controlled that the resulting systematic errors
can be reduced within the unavoidable Monte Carlo statistical error.
In this sense PIGS is an exact $T=0$ K method pigs ; pata .
In order to improve the ergodicity of the Monte Carlo sampling, we have implemented bosonic
permutations boni , even if not required in principle since (3) has the
correct Bose symmetry, and a canonical (i.e. with fixed $N$) version of the worm algorithm
worm , which has the further advantage of giving access also to off-diagonal
properties within the same simulation.
III Ground-state energy and condensate fraction
We have studied with PIGS a system of $N=256$ HS in a cubic box with periodic boundary
conditions in all the directions, for values of the gas parameter $na^{3}$ ranging from dilute
gas, namely $na^{3}=10^{-3}$, up to $na^{3}=0.5$, deep inside the solid phase.
By studying the convergence in $\tau$ and $\delta\tau$ of the energy per particle we have
fixed the values $\tau=0.225$ $2ma^{2}/\hbar^{2}$ and $\delta\tau=0.015$ $2ma^{2}/\hbar^{2}$ to
be a very good compromise between accuracy and computational cost.
For some values of $na^{3}$, we have checked the convergence of our results both by reducing the
time step to $\delta\tau=0.005$ $2ma^{2}/\hbar^{2}$ and by extending the total projection time
up to $\tau=0.245$ $2ma^{2}/\hbar^{2}$.
We have performed also simulations with $N=400$ and $N=500$ HS in order to verify the presence
of size effects, especially close to the gas-solid transition region.
We find that the energy per particle does not change appreciably within the error bars,
indicating that our results are not affected by significant size effects.
Our results for the energy per particle $E/N$ as a function of the gas parameter are reported
in Fig. 1.
We find an excellent agreement with previous GFMC kalo and DMC data boro2 in the
range of gas parameter values covered by the previous studies.
We report also two mean-field predictions for $E/N$: the perturbative correction to the
Bogoliubov mean field due to Lee, Huang and Yang (LHY) lhy , which turns out to be in
fairly good agreement with the Monte Carlo data up to $na^{3}\simeq 5\times 10^{-2}$ boro1 ,
and a more recent perturbative approach due to Yukalov and Yukalova yuka that, however,
departs from the Monte Carlo data already at lower values of $na^{3}$.
By adding successive powers to the mean-field prediction of Bogoliubov with the LHY perturbative
correction, we have fit our data with the expression boro1
$${E\over N}={\hbar^{2}\over 2ma^{2}}\,f_{g}(na^{3})\;,$$
(4)
where
$$\begin{split}\displaystyle f_{g}(x)=&\displaystyle 4\pi x\left(1+\frac{128}{15%
\sqrt{\pi}}\sqrt{x}\right)+a_{2}x^{2}\log(x)+b_{2}x^{2}\\
&\displaystyle+a_{5/2}x^{5/2}\log(x)+b_{5/2}x^{5/2}\;.\end{split}$$
(5)
The best values for the parameters coming from the fit of the PIGS data are $a_{2}=145.5$,
$b_{2}=842.8$, $a_{5/2}=422$ and $b_{5/2}=-492$.
The resulting curve of $f_{g}(x)$ is also reported as a solid line in Fig. 1.
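As a consistency check, the fit (5) with the parameters above reduces to the LHY expansion in the dilute limit, since the $x^{2}$ and $x^{5/2}$ corrections are subleading. A short numerical sketch (Python; the test point $x=10^{-4}$ is an arbitrary choice in the dilute regime):

```python
import math

# fit parameters of Eq. (5), obtained from the PIGS data
a2, b2, a52, b52 = 145.5, 842.8, 422.0, -492.0

def f_lhy(x):
    """Lee-Huang-Yang expansion (the first term of Eq. (5))."""
    return 4.0 * math.pi * x * (1.0 + 128.0 / (15.0 * math.sqrt(math.pi)) * math.sqrt(x))

def f_g(x):
    """Full gas-phase fit of Eq. (5), units of hbar^2/(2 m a^2)."""
    return (f_lhy(x) + a2 * x**2 * math.log(x) + b2 * x**2
            + a52 * x**2.5 * math.log(x) + b52 * x**2.5)

# in the dilute limit the fit reduces to the LHY expansion
x = 1e-4
rel = abs(f_g(x) - f_lhy(x)) / f_lhy(x)
assert rel < 1e-2
```

At $x=10^{-4}$ the fitted corrections contribute well below one percent of the energy, confirming that the fit does not spoil the universal dilute-limit behavior.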
By increasing the gas parameter the system spontaneously breaks the translational invariance
due to the effect of the increased correlations among the particles, resulting in a solid phase,
as inferred also from the characteristic oscillations in the pair correlation function
$$g(r)=\frac{N(N-1)}{n^{2}}\frac{\int\prod_{j=3}^{N}d\vec{r}_{j}\>\left|\psi^{*}%
_{0}(\vec{r},0,\vec{r}_{3},\dots,\vec{r}_{N})\right|^{2}}{\int\prod_{j=1}^{N}d%
\vec{r}_{j}\>\left|\psi^{*}_{0}(\vec{r}_{1},\vec{r}_{2},\dots,\vec{r}_{N})%
\right|^{2}}$$
(6)
reported in Fig. 2.
The emerging crystal is the FCC, which is the lattice that best fits the cubic geometry of the
simulation box.
Very recent PIMC simulations sese2 have shown, however, that the free-energy difference
between the two close-packed crystals, FCC and HCP, is vanishingly small for the quantum HS.
This is not surprising since the difference in these two lattices arises from the second shell
of neighbors, and the HS potential is short ranged.
In Fig. 3 we report the resulting energy per particle $E/N$ as a function of the gas
parameter.
Even in this case, we find quite good agreement with the older GFMC data kalo .
We find that our results can be well fitted with a standard third-order polynomial hans
$${E\over N}={\hbar^{2}\over 2ma^{2}}\,f_{s}(na^{3})\;,$$
(7)
where
$$f_{s}(x)=E_{0}+Ax+Bx^{2}+Cx^{3}\;,$$
(8)
and the best values for the fit parameters are $E_{0}=-9.33$, $A=132.6$, $B=-253.6$ and $C=609.1$.
The resulting $f_{s}(x)$ is plotted as a solid line in Fig. 3.
By using the polynomial fit to the PIGS data (5) and (8) it is possible
to locate the transition region between the gas and the solid phase via the standard Maxwell
(double tangent) construction.
We find that the coexistence region is bounded by $n_{f}a^{3}=0.264\pm 0.003$ (freezing gas parameter)
and $n_{m}a^{3}=0.290\pm 0.003$ (melting gas parameter).
These values are close, but not perfectly compatible, with the older GFMC results kalo
$n_{f}a^{3}=0.25\pm 0.01$ and $n_{m}a^{3}=0.27\pm 0.01$.
The shift to higher values for the bounding gas parameters can be due to a greater accuracy of
the imaginary time propagator used here sese2 .
Another source of difference can be the strong dependence of such bounding values on the
fitting formulas used, even though the energies $E/N$ obtained with the two exact Monte Carlo
methods are very close (as one expects from exact techniques).
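The double-tangent construction amounts to requiring equal chemical potential and pressure in the two phases, $\mu_{g}(x_{f})=\mu_{s}(x_{m})$ and $P_{g}(x_{f})=P_{s}(x_{m})$, with $\mu$ and $P$ obtained from the fits as in Eqs. (12)–(13). A minimal Python sketch (a crude grid search; the grid range and step are arbitrary choices, the derivatives are taken by finite differences, and the resulting coexistence values depend entirely on the fit parameters):

```python
import math

# fit parameters of Eqs. (5) and (8)
a2, b2, a52, b52 = 145.5, 842.8, 422.0, -492.0
E0, A, B, C = -9.33, 132.6, -253.6, 609.1

def f_g(x):
    """Gas-phase energy per particle, Eq. (5), units hbar^2/(2 m a^2)."""
    return (4.0 * math.pi * x * (1.0 + 128.0 / (15.0 * math.sqrt(math.pi)) * math.sqrt(x))
            + a2 * x**2 * math.log(x) + b2 * x**2
            + a52 * x**2.5 * math.log(x) + b52 * x**2.5)

def f_s(x):
    """Solid-phase energy per particle, Eq. (8)."""
    return E0 + A * x + B * x**2 + C * x**3

def mu(f, x, h=1e-6):
    """Chemical potential f + x f', cf. Eq. (12) (finite-difference f')."""
    return f(x) + x * (f(x + h) - f(x - h)) / (2.0 * h)

def pres(f, x, h=1e-6):
    """Pressure times a^3 in the same units: x^2 f'(x), cf. Eq. (13)."""
    return x * x * (f(x + h) - f(x - h)) / (2.0 * h)

# equal mu and P in the two phases: crude grid search over (x_f, x_m)
grid = [0.24 + 4e-4 * i for i in range(201)]
mug = [mu(f_g, x) for x in grid]
pg = [pres(f_g, x) for x in grid]
mus = [mu(f_s, x) for x in grid]
ps = [pres(f_s, x) for x in grid]
res2, xf, xm = min(((mug[i] - mus[j])**2 + (pg[i] - ps[j])**2, grid[i], grid[j])
                   for i in range(201) for j in range(201))
assert xf < xm  # the freezing density lies below the melting density
```

A production calculation would of course solve the two equations with a root finder rather than a grid scan, but the sketch makes the equal-$\mu$, equal-$P$ condition behind the construction explicit.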
The worm algorithm worm gives direct access also to the one-body density matrix
$$\rho_{1}(\vec{r},\vec{r}^{\prime})=\int\prod_{j=2}^{N}d\vec{r}_{j}\>\psi^{*}_{%
0}(\vec{r},\vec{r}_{2},\dots,\vec{r}_{N})\psi_{0}(\vec{r}^{\prime},\vec{r}_{2}%
,\dots,\vec{r}_{N})$$
(9)
that in a uniform system turns out to be a function only of the difference $|\vec{r}-\vec{r}^{\prime}|$.
Since $\rho_{1}$ is the Fourier transform of the momentum distribution of the system,
a finite plateau in the large-distance tail of $\rho_{1}$ corresponds to a Dirac delta
at zero momentum, i.e. a macroscopic occupation of a single-particle quantum
state, which is Bose–Einstein condensation.
The condensate fraction $n_{0}/n$ is then equal to the limiting value of the
tail of the one-body density matrix.
We plot our results for $n_{0}/n$ in Fig. 4.
In the solid phase the condensate fraction turns out to be zero, in agreement with
what is found in ${}^{4}$He systems worm ; vita .
In the gas phase, also for the condensate fraction we find a satisfactory agreement
with previous DMC results boro2 in the $na^{3}$ range where they are available.
Our data confirm that the Bogoliubov prediction overestimates the condensate
fraction for gas parameters larger than $na^{3}\simeq 10^{-3}$ boro2 .
The improved perturbative approach of Ref. yuka gives a better prediction of
$n_{0}/n$, starting to overestimate the condensate fraction for values of the gas
parameter larger than $10^{-1}$, as shown in Fig. 4.
To provide an analytical expression for the condensate fraction as a function of the
gas parameter, we follow Ref. boro1 and fit our data with the formula
$${n_{0}\over n}=\Xi(na^{3})\;,$$
(10)
where
$$\Xi(x)=1-\frac{8}{3\sqrt{\pi}}\sqrt{x}-c_{1}x-c_{3/2}x^{3/2}-c_{2}x^{2}-c_{5/2%
}x^{5/2}\;.$$
(11)
The best values for the fit parameters are $c_{1}=5.49$, $c_{3/2}=-7.86$, $c_{2}=-9.52$ and
$c_{5/2}=13.65$.
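The fit (10)–(11) can be compared directly against the bare Bogoliubov depletion. A short numerical sketch (Python; the sample points $x=10^{-6}$ and $x=0.1$ are arbitrary probes of the dilute and strongly interacting regimes):

```python
import math

# fit parameters of Eq. (11)
c1, c32, c2, c52 = 5.49, -7.86, -9.52, 13.65

def bogoliubov(x):
    """Bogoliubov depletion: n0/n = 1 - (8/(3 sqrt(pi))) sqrt(x)."""
    return 1.0 - 8.0 / (3.0 * math.sqrt(math.pi)) * math.sqrt(x)

def n0_over_n(x):
    """Condensate-fraction fit, Eqs. (10)-(11)."""
    return bogoliubov(x) - c1 * x - c32 * x**1.5 - c2 * x**2 - c52 * x**2.5

# dilute limit reproduces Bogoliubov; at larger x the fit lies below it,
# consistent with Bogoliubov overestimating the condensate fraction
assert abs(n0_over_n(1e-6) - bogoliubov(1e-6)) < 1e-5
assert n0_over_n(0.1) < bogoliubov(0.1)
```

The leading correction beyond Bogoliubov is the linear term $-c_{1}x$, which already dominates the deviation for $na^{3}\gtrsim 10^{-3}$.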
IV Superfluid hydrodynamics and elementary excitations
The advantage of a functional parametrization $f_{g}(x)$, Eq. (5), of the
ground-state energy $E$ of the bosonic gas is that it allows straightforward analytical
calculations of several physical properties nick .
For example, the bulk chemical potential $\mu$ is given by
$$\mu={\partial E\over\partial N}={\hbar^{2}\over 2ma^{2}}\,\left(f_{g}(x)+xf_{g%
}^{\prime}(x)\right)\;,$$
(12)
as found by using Eqs. (4) and (5) and taking into account that
$x=na^{3}$ and ${\partial x}/{\partial n}=x/n$, while the bulk pressure $P$ reads
$$P=n^{2}{\partial\over\partial n}\left({E\over N}\right)={\hbar^{2}\over 2ma^{2%
}}\,n\,x\,f_{g}^{\prime}(x)\;.$$
(13)
Moreover, the collective dynamics of our bosonic gas of HS with local density
$n({\bf r},t)$ and local velocity ${\bf v}({\bf r},t)$ can be described by the following
zero-temperature hydrodynamic equations of superfluids book1 ; book2
$$\displaystyle{\partial n\over\partial t}+{\boldsymbol{\nabla}}\cdot\left(n\,{\bf v}\right)=0\;,$$
(14)
$$\displaystyle m{\partial{\bf v}\over\partial t}+{\boldsymbol{\nabla}}\left[{1\over 2}mv^{2}+\mu[n,a]\right]={\bf 0}\;,$$
(15)
where $\mu[n,a]$ is the bulk chemical potential, given by Eq. (12).
These equations describe a generic fluid at zero temperature which is
inviscid (zero viscosity) and irrotational (${\boldsymbol{\nabla}}\wedge{\bf v}={\bf 0}$)
book1 ; book2 . The irrotationality implies that ${\bf v}={\boldsymbol{\nabla}}\theta$, where
$\theta=\theta({\bf r},t)$ is a scalar field which must be an angle variable
to get the quantization of the circulation of the velocity book1 ; book2 .
Thus, from the knowledge of the bulk equation of state (12) one can study
the collective superfluid dynamics of the system by solving Eqs. (14) and (15).
In particular, we are interested in the propagation of sound waves in the superfluid.
In this case, by taking into account a small variation $\delta n({\bf r},t)$ of the local
density with respect to the uniform value $n$ and linearizing the hydrodynamic equations
one finds the familiar wave equation
$$\left[{\partial^{2}\over\partial t^{2}}-c_{s}^{2}\,\nabla^{2}\right]\delta n({\bf r},t)=0\;,$$
(16)
where $c_{s}$ is the sound velocity, given by
$$mc_{s}^{2}=n\,{\partial\mu\over\partial n}={\hbar^{2}\over 2ma^{2}}\left(2xf_{g}^{\prime}(x)+x^{2}f_{g}^{\prime\prime}(x)\right)\;.$$
(17)
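The chain from the energy parametrization to $\mu$, $P$ and $c_{s}$ can be sketched numerically. Since the fitted coefficients of Eq. (5) are given earlier in the paper, the snippet below substitutes the universal low-density (LHY) expansion for $f_{g}(x)$ as an illustrative stand-in, in units where $\hbar^{2}/2ma^{2}=1$ and $x=na^{3}$.

```python
import math

SQPI = math.sqrt(math.pi)

# Illustrative stand-in for the fitted f_g(x) of Eq. (5): the universal
# low-density (LHY) expansion, which any fit must reproduce as x -> 0.
def f_g(x):
    return 4.0 * math.pi * x + (512.0 * SQPI / 15.0) * x**1.5

def f_g1(x):   # f_g'(x)
    return 4.0 * math.pi + (256.0 * SQPI / 5.0) * math.sqrt(x)

def f_g2(x):   # f_g''(x)
    return (128.0 * SQPI / 5.0) / math.sqrt(x)

def mu(x):
    """Bulk chemical potential, Eq. (12), in units hbar^2/(2 m a^2) = 1."""
    return f_g(x) + x * f_g1(x)

def pressure(n, x):
    """Bulk pressure, Eq. (13); n is the density in units a^-3."""
    return n * x * f_g1(x)

def mc_s2(x):
    """m c_s^2, Eq. (17)."""
    return 2.0 * x * f_g1(x) + x**2 * f_g2(x)
```

A useful sanity check is the thermodynamic identity $mc_{s}^{2}=n\,\partial\mu/\partial n=x\,d\mu/dx$, which holds exactly for any smooth $f_{g}$ and can be verified by finite differences; in the $x\to 0$ limit $\mu$ reduces to the mean-field value $8\pi x$ in these units.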
It is well known that this wave equation admits monochromatic plane–wave solutions,
where the frequency $\omega$ and the wave vector ${\bf k}$ are related by the phononic
dispersion formula
$$\hbar\omega(k)=c_{s}\,\hbar k\;,$$
(18)
where $k=|{\bf k}|$ is the wavenumber.
In Fig. 5 we plot the bulk chemical potential $\mu$, the bulk pressure $P$ and
the sound velocity $c_{s}$ as a function of the gas parameter $na^{3}$.
All these physical quantities are calculated on the basis of the parametrization
(7) and (5) of the PIGS energy $E$.
The zero-temperature equations of superfluid hydrodynamics (14) and (15),
equipped with the constitutive equation of state (12), which is based on the
parametrization (7) and (5) of the PIGS energy, give reliable
information only on the low-wavenumber (linear) branch of the spectrum $\omega(k)$
of the elementary excitations.
Unfortunately, the imaginary-time formulation of the PIGS method prevents us from obtaining
the exact dynamical properties of the system, such as the full excitation spectrum
$\omega(k)$, directly from simulations.
Some features of $\omega(k)$ can be obtained within the Feynman approximation:
$$\hbar\omega(k)=\frac{\hbar^{2}k^{2}}{2mS(k)}$$
(19)
where
$$S(k)=\frac{1}{N}\langle\sum_{j=1}^{N}e^{-i\vec{k}\cdot\vec{r}_{j}}\sum_{l=1}^{N}e^{i\vec{k}\cdot\vec{r}_{l}}\rangle$$
(20)
is the static structure factor that can be readily obtained during a PIGS simulation.
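A minimal sketch of the estimator (20) and the Feynman bound (19) is given below. It is tested on uncorrelated (ideal-gas-like) configurations rather than actual PIGS ones; for such configurations $S(k)\to 1$ at any nonzero reciprocal vector, so the Feynman spectrum reduces to the free-particle dispersion.

```python
import numpy as np

rng = np.random.default_rng(0)

def structure_factor(configs, k):
    """Eq. (20): S(k) = <|sum_j e^{-i k.r_j}|^2> / N, averaged over configurations."""
    N = configs.shape[1]
    rho_k = np.exp(-1j * (configs @ k)).sum(axis=1)  # sum_j e^{-i k.r_j}, per config
    return float(np.mean(np.abs(rho_k) ** 2)) / N

def feynman_omega(k, S, hbar=1.0, m=1.0):
    """Eq. (19): Feynman estimate, omega(k) = hbar k^2 / (2 m S(k))."""
    return hbar * np.dot(k, k) / (2.0 * m * S)

# Toy check: uncorrelated uniform positions in a box of side L, with k taken
# at the smallest nonzero reciprocal vector of the box.
L, N = 1.0, 64
configs = rng.uniform(0.0, L, size=(2000, N, 3))
k = (2.0 * np.pi / L) * np.array([1.0, 0.0, 0.0])
S = structure_factor(configs, k)   # close to 1 for uncorrelated particles
```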
Our results for the Feynman excitation spectrum for HS at different values of the gas
parameter are reported in Fig. 6.
The Feynman approximation is known to be accurate only at very low $na^{3}$, and to
become only qualitative at higher values of the gas parameter.
For example, in the case of superfluid ${}^{4}$He, where $na^{3}=0.244$, it overestimates the
roton minimum by a factor of about two.
In the low-wave-vector limit we find that, in spite of the well-known size effect on the
static structure factor (20) drae , the Feynman approximation turns out
to be in remarkable agreement with the phononic dispersion (18) with the
values of the sound velocity $c_{s}$ given by Eq. (17) and reported in Fig. 5.
It is worth noting that even the Feynman approximation (19) for the excitation
spectrum, like the energy per particle and the condensate fraction, starts deviating from the
Bogoliubov approximation at $na^{3}\simeq 10^{-3}$.
Another remarkable feature is that, even within this simple approximation, the occurrence
of a roton minimum at high density is correctly described.
V Inclusion of a trapping harmonic potential
We consider now the effect of confinement due to an external anisotropic harmonic potential
$$U({\bf r})={m\over 2}\left(\omega_{\bot}^{2}(x^{2}+y^{2})+\omega_{z}^{2}z^{2}\right)\;,$$
(21)
where $\omega_{\bot}$ is the radial frequency and $\omega_{z}$ the longitudinal
frequency of the cylindrically symmetric trap.
The collective dynamics of the system can be described efficiently by the hydrodynamic
equations, modified by the inclusion of the external potential $U({\bf r})$
book1 ; book2 , namely
$$\displaystyle{\partial n\over\partial t}+{\boldsymbol{\nabla}}\cdot\left(n\,{\bf v}\right)=0\;,$$
(22)
$$\displaystyle m{\partial{\bf v}\over\partial t}+{\boldsymbol{\nabla}}\left[{1\over 2}mv^{2}+\mu[n,a]+U({\bf r})\right]={\bf 0}\;.$$
(23)
It has been shown in Ref. cozzini that by assuming a power-law dependence
$\mu=\mu_{0}\,n^{\gamma}$ for the chemical potential (polytropic equation of state) from
Eqs. (14) and (15) one finds analytic expressions for the collective
frequencies.
In particular, for very elongated cigar–shaped traps ($\omega_{\rho}/\omega_{z}\gg 1$) the
collective radial breathing mode frequency $\Omega_{\rho}$ is given by
$$\Omega_{\rho}=\sqrt{2(\gamma+1)}\,\omega_{\rho}\;,$$
(24)
while the collective longitudinal breathing mode $\Omega_{z}$ is
$$\Omega_{z}=\sqrt{3\gamma+2\over\gamma+1}\,\omega_{z}\;.$$
(25)
In our problem we introduce an effective polytropic index $\gamma$ as the logarithmic
derivative of the chemical potential $\mu$, that is
$$\gamma={n\over\mu}{\partial\mu\over\partial n}={2xf_{g}^{\prime}(x)+x^{2}f_{g}^{\prime\prime}(x)\over f_{g}(x)+xf_{g}^{\prime}(x)}\;,$$
(26)
where $f_{g}(x)$ is given by Eq. (5).
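Eqs. (24) and (25) are straightforward to evaluate once $\gamma$ is known. The sketch below reproduces the mean-field limit $\gamma=1$ (where $\Omega_{\rho}=2\,\omega_{\rho}$ and $\Omega_{z}=\sqrt{5/2}\,\omega_{z}$) and the strong-coupling value $\gamma\simeq 2.2$ quoted in the text.

```python
import math

def breathing_modes(gamma, omega_rho=1.0, omega_z=1.0):
    """Eqs. (24)-(25): breathing frequencies of a very elongated trap for a
    polytropic equation of state mu ~ n^gamma (Cozzini-Stringari scaling)."""
    Omega_rho = math.sqrt(2.0 * (gamma + 1.0)) * omega_rho
    Omega_z = math.sqrt((3.0 * gamma + 2.0) / (gamma + 1.0)) * omega_z
    return Omega_rho, Omega_z

print(breathing_modes(1.0))   # weak coupling (mean-field BEC)
print(breathing_modes(2.2))   # strong coupling: radial mode shifts up to ~2.53
```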
This approach has been very successful nick in the study of the
experimentally observed grimm breathing modes of a two-component Fermi gas of
${}^{6}$Li atoms in the BCS-BEC crossover.
Indeed, in Ref. nick we suggested relevant deviations from the mean-field results,
which were subsequently confirmed by improved experiments grimm2 .
In Fig. 7 we report the frequencies $\Omega_{\rho}$ and $\Omega_{z}$ of
breathing modes as a function of the gas parameter $n(0)a^{3}$, where $n(0)$ is the
density at the center of the strongly-anisotropic harmonic trap.
The figure shows a relevant change in the scaled radial frequency
$\Omega_{\rho}/\omega_{\rho}$ that is a direct consequence of the fact that the
effective polytropic index $\gamma$ increases from $\gamma\simeq 1$ in the weak-coupling
regime to $\gamma\simeq 2.2$ in the strong-coupling regime as shown in the inset of
Fig. 7.
VI Conclusions
The properties of bulk systems of HS for a wide range of the gas parameter $na^{3}$, going
from the dilute gas to the solid phase, have been investigated with the exact $T=0$ PIGS
Monte Carlo method.
Our results for the energy per particle turn out to be in good agreement with previous
calculations, performed with different Monte Carlo techniques, in the gas parameter range
in which they were available kalo ; boro2 .
We have found that recent beyond mean–field approximations are compatible with our Monte
Carlo data up to $na^{3}\simeq 10^{-3}$.
We have then fitted our PIGS data with the polynomial functions of Eqs. (5) and
(7), which we have then used to locate the gas-liquid transition with a
standard Maxwell construction.
Our analytical fit extends the range of applicability of previous equation of state
boro1 up to the freezing point, and beyond it in the metastable region.
We have computed also the condensate fraction $n_{0}/n$ in the whole considered gas
parameter range.
In particular, we have found that the condensate fraction is zero in the solid phase, in
agreement with what happens in the solid phase of systems interacting with more realistic
potentials such as ${}^{4}$He worm ; vita .
We have provided an analytical fit also for $n_{0}/n$ showing that, as in Ref. boro2 ,
the Bogoliubov approximation overestimates the condensate fraction for $na^{3}$ larger than
$10^{-3}$, while the recent improved perturbative approach of Ref. yuka extends
the predictive region of mean-field approaches by about an order of magnitude, up to
$na^{3}\simeq 10^{-2}$.
The fits of the PIGS data are useful for deriving other relevant properties of the bulk
system, such as the chemical potential and the pressure.
By means of the zero-temperature hydrodynamics equations of superfluids it is indeed
possible to obtain other relevant physical quantities.
In particular we have calculated the sound velocity for gas parameters up to $0.3$.
This is relevant also because PIGS cannot give direct access to dynamical properties of
the system.
Some qualitative information about the excitation spectrum can be recovered via the
Feynman approximation: the low-wave-vector limit of this approximate spectrum agrees with
the linear phononic dispersion obtained from the hydrodynamic equation of superfluids
(14) and (15) with the equation of state (5).
More quantitative results on the excitation spectrum can be obtained by computing the
intermediate scattering functions via PIGS and then by analytically continuing them with
inversion methods, like GIFT gift for example, in order to recover the dynamical
structure factor.
These procedures are typically laborious and computationally demanding, and
they go beyond the aim of this paper; while writing it, however, we became aware that
a similar study is in progress rota .
It is worth noting, however, that even an approximation as simple as Feynman's
is able to capture, as the gas parameter increases, the emergence of the
phonon–roton spectrum deduced by Landau land .
Finally, we have shown that analytical expressions of the
exact equation of state can be useful also for predictions in
confined systems. The hydrodynamic equations can be used to calculate
density profiles and collective modes in various trap configurations book1 ; book2 .
Here we have derived the frequencies
of the collective breathing modes of an HS gas confined in a
strongly–anisotropic harmonic trap as a function of the local gas parameter.
By including a gradient correction in the hydrodynamic
equations one can rewrite them as a nonlinear
Schrödinger equation (generalized Gross-Pitaevskii equation) sala-snlse ; sala-adh
and study other fundamental properties
like quantized vortices sala-adh , solitons sala-soli and
shock waves sala-shock .
Acknowledgments
The authors thank F. Ancilotto, D.E. Galli, R. Rota, and F. Toigo for useful discussions.
The authors acknowledge partial support from Università di Padova (Research Project
”Quantum Information with Ultracold Atoms in Optical Lattices”), Cariparo Foundation
(Excellence Project ”Macroscopic Quantum Properties of Ultracold Atoms under Optical
Confinement”), and Ministero Istruzione Università Ricerca (PRIN Project ”Collective
Quantum Phenomena: from Strongly-Correlated Systems to Quantum Simulators”).
References
(1)
J.P. Hansen and I.R. McDonald,
Theory of Simple Liquids, 3rd edition
(Academic Press, London, 2006).
(2)
L.P. Pitaevskii and S. Stringari,
Bose-Einstein Condensation
(Oxford Univ. Press, Oxford, 2003).
(3)
A.J. Leggett,
Quantum liquids. Bose condensation and Cooper pairing
in condensed-matter systems
(Oxford Univ. Press, Oxford, 2006).
(4)
M.H. Kalos, D. Levesque and L. Verlet,
Phys. Rev. A 9, 2178 (1974).
(5)
S. Giorgini, J. Boronat and J. Casulleras,
Phys. Rev. A 60, 5129 (1999).
(6)
H. Kim, C.S. Kim, C.L. Huang, H.S. Song and X.X. Yi,
Phys. Rev. A 85, 053629 (2012).
(7)
L.M. Sesé,
J. Chem. Phys. 139, 044502 (2013).
(8)
K.J. Runge and G.V. Chester,
Phys. Rev. B 38, 135 (1988).
(9)
L.M. Sesé and R. Ledesma,
J. Chem. Phys. 102, 3776 (1995).
(10)
L.M. Sesé,
J. Chem. Phys. 108, 9086 (1998).
(11)
A. Sarsa, K.E. Schmidt, and W.R. Magro,
J. Chem. Phys. 113, 1366 (2000).
(12)
N. Manini and L. Salasnich,
Phys. Rev. A 71, 033625 (2005).
(13)
D. M. Ceperley,
Rev. Mod. Phys. 67, 279 (1995).
(14)
J. Cao and B.J. Berne,
J. Chem. Phys. 97, 2382 (1992).
(15)
M. Rossi, M. Nava, D.E. Galli and L. Reatto,
J. Chem. Phys. 131, 154108 (2009).
(16)
E. Vitali, M. Rossi, F. Tramonto, D.E. Galli and L. Reatto,
Phys. Rev. B 77, 180505(R) (2008).
(17)
M. Boninsegni,
J. Low Temp. Phys. 141, 27 (2005).
(18)
M. Boninsegni, N.V. Prokofev and B.V. Svistunov,
Phys. Rev. Lett. 96, 070601 (2006);
Phys. Rev. E 74, 036701 (2006).
(19)
T.D. Lee, K. Huang and C.N. Yang,
Phys. Rev. 106, 1135 (1957).
(20)
J. Boronat, J. Casulleras and S. Giorgini,
Physica B 284-288, 1 (2000).
(21)
V.I. Yukalov and E.P. Yukalova,
Phys. Rev. A 74, 063623 (2006).
(22)
J.P. Hansen, D. Levesque and D. Schiff,
Phys. Rev. A 3, 776 (1971).
(23)
E.W. Draeger and D.M. Ceperley,
Phys. Rev. B 61, 12094 (2000).
(24)
M. Cozzini and S. Stringari,
Phys. Rev. Lett. 91, 070401 (2003).
(25)
M. Bartenstein, A. Altmeyer, S. Riedl, S. Jochim, C. Chin,
J.H. Denschlag, and R. Grimm,
Phys. Rev. Lett. 92, 203201 (2004).
(26)
A. Altmeyer, S. Riedl, C. Kohstall, M.J. Wright, R. Geursen,
M. Bartenstein, C. Chin, J.H. Denschlag, and R. Grimm,
Phys. Rev. Lett. 98, 040401 (2007).
(27)
E. Vitali, M. Rossi, L. Reatto and D.E. Galli,
Phys. Rev. B 82, 174510 (2010).
(28)
R. Rota, F. Tramonto, D.E. Galli and S. Giorgini,
arXiv:1310.1753.
(29)
L.D. Landau,
J. Phys. (USSR) 11, 91 (1947).
(30)
L. Salasnich, Laser Phys. 19, 642 (2009).
(31)
S.K. Adhikari and L. Salasnich, Phys. Rev. A 77, 033618 (2008).
(32)
L. Salasnich, A. Parola, and L. Reatto,
J. Phys. B: At. Mol. Opt. Phys. 39, 2839 (2006).
(33)
B. Damski, Phys. Rev. A 69, 043610 (2004);
L. Salasnich, EPL 96, 40007 (2011). |
SLAC-PUB-8822
DESY-01-060
hep-ph/0105194
May 2001
New ways to explore factorization in $b$
decays ***Work supported by Department of Energy contract
DE–AC03–76SF00515.
M. Diehl${}^{1}$ and G. Hiller${}^{2}$
1. Deutsches Elektronen-Synchrotron DESY, 22603 Hamburg, Germany
2. Stanford Linear Accelerator Center, Stanford University,
Stanford, CA 94309, U.S.A.
Abstract
We propose to study factorization breaking
effects in exclusive $b$ decays where they are strongly enhanced over
the factorizing contributions. This can be done by selecting
final-state mesons with a small decay constant or with spin greater
than one. We find a variety of decay modes which could help understand
the dynamical origin of factorization and the mechanisms responsible
for its breaking.
1 Introduction
An outstanding task in heavy-flavor physics is to understand the
strong interaction effects in exclusive weak decays of hadrons
containing a $b$-quark. For many decay channels, such an understanding
is a precondition for gaining information on the quark mixing matrix
or on physics beyond the standard model. In addition, the dynamics of
quarks, gluons, and hadrons in the presence of a large mass $m_{b}$ is
interesting in QCD in its own right.
Introduced in [1], the concept of factorization has
been one of the most successful tools in this respect, providing fair
agreement between theory and data for many channels. In other cases,
factorization in its most naive version fails when compared with
experiment, and there have been several phenomenologically motivated
improvements over its original form [2, 3].
There are several dynamical arguments why and where factorization
should be valid. One is based on the large $N_{c}$ limit of QCD
[4], whereas a different line of approach builds on
the color transparency phenomenon [5]. More
recently, the framework of QCD factorization has implemented the color
transparency argument in the language of perturbation theory and power
counting in $1/m_{b}$ [6, 7, 8].
In these approaches it is understood that there are corrections to
naive factorization, which are suppressed in a small parameter such as
$1/N_{c}$, or $\alpha_{s}$ and $\Lambda_{\mathrm{QCD}}/m_{b}$. A more
quantitative understanding of their size is crucial in order to assess
for which channels and to which precision the factorization concept
can be applied. There are also scenarios where factorization in the
sense of [1] does not appear as a limit when a small
parameter vanishes, and where conceptually factorization breaking
terms in the decay amplitude may be as large as the factorizing ones.
An example is the perturbative hard scattering (PQCD) approach
[9]. In view of such controversies, and given that
present day theory can at best estimate the size of most
nonfactorizing contributions, quantitative tests of factorization in
the data are of great importance.
We propose here to study decay channels where the factorizing
contributions to the amplitude are small or zero for symmetry
reasons. In such a situation nonfactorizing contributions, which would
otherwise be suppressed, have a chance to be clearly visible. The
measurement of the corresponding decay rates can thus give rather
direct information on their size, and the comparison of different
channels may give indications on the relevant dynamical mechanisms.
Our suggestion is to choose decay channels whose flavor structure is
such that a selected meson $X$ must be emitted from the weak current
mediating the $b$-quark decay. Taking then a meson which has a very
small decay constant, the factorizable contributions to the decay are
suppressed. A second possibility is to consider mesons $X$ with spin
$J\geq 2$. A tensor meson for instance cannot be produced from a
decaying $W$ boson, which has spin 1, unless there are interactions
involving the other hadrons in the decay process. We will find a
variety of decay channels where these ideas can be realized, which
will allow us to address different issues related to factorization and
its breaking.
The organization of this paper is as follows. In Sect. 2
we review the basics of factorization which will be essential for our
arguments. We select the mesons for which factorizing contributions in
decays are suppressed in Sect. 3. In the following
section we identify which flavor structure a decay must have in order
for this suppression to apply, and take a closer look at specific
issues in the different channels. Sect. 5 discusses how
suppression can be circumvented by different nonfactorizing
mechanisms. Some of these can be treated within QCD factorization and
will be investigated in Sect. 6. We estimate branching
ratios of suppressed decays into a heavy and a light meson in the
subsequent section, before concluding in Sect. 8. Some
numerical estimates concerning meson distribution amplitudes, which we
need in our paper, are given in an Appendix.
2 Matrix elements for hadronic two-body decays
We start by briefly recalling the low-energy effective Hamiltonian and
some basics of the factorization approach. Hadronic two-body
$b$-decays are described by the effective weak Hamiltonian
$$\displaystyle{\cal{H}}_{\mathrm{eff}}=\frac{G_{F}}{\sqrt{2}}\left[\,\sum_{j,k=u,c}V_{jb}V^{*}_{kd}\,\Big{(}C_{1}O_{1}^{jk}+C_{2}O_{2}^{jk}\Big{)}-V_{tb}V^{*}_{td}\sum_{i}C_{i}O_{i}\,\right]+\{d\rightarrow s\}+\mathrm{h.c.}\,,$$
(1)
where $V$ denotes the CKM matrix. The operators $O_{1,2}^{jk}$ result
from tree level $W$ exchange and in the case $j=k=u$ read
$$\displaystyle O_{1}^{uu}$$
$$\displaystyle=$$
$$\displaystyle\bar{u}_{\alpha}\gamma^{\mu}(1-\gamma_{5})b_{\alpha}\>\bar{d}_{\beta}\gamma_{\mu}(1-\gamma_{5})u_{\beta},$$
$$\displaystyle O_{2}^{uu}$$
$$\displaystyle=$$
$$\displaystyle\bar{u}_{\alpha}\gamma^{\mu}(1-\gamma_{5})b_{\beta}\>\bar{d}_{\beta}\gamma_{\mu}(1-\gamma_{5})u_{\alpha}.$$
(2)
Here $\alpha$ and $\beta$ are color indices, and it is understood that
all fields are taken at space-time argument zero. The remaining
operators in ${\cal{H}}_{\mathrm{eff}}$ are so-called penguins. For a
detailed discussion of the operators $O_{i}$ and the Wilson coefficients
$C_{i}$ we refer to [10].
In naive factorization, the matrix element $\langle YX|\,{\cal{H}}_{\mathrm{eff}}\,|B\rangle$ is written as a product of
matrix elements of quark currents between $B$ and $Y$, and between the
vacuum and $X$. Only the color-singlet piece of each current is
retained, while the color-octet piece is neglected. This leads to
replacing ${\cal{H}}_{\mathrm{eff}}$ by the effective transition
operator ${\cal T}$, whose tree level operators read
$$\displaystyle{\cal T}^{(1,2)}$$
$$\displaystyle=$$
$$\displaystyle\frac{G_{F}}{\sqrt{2}}\,V_{ub}V_{ud}^{*}\,\Big{[}\,a_{1}\;\bar{u}\gamma^{\mu}(1-\gamma_{5})b\otimes\bar{d}\gamma_{\mu}(1-\gamma_{5})u$$
(3)
$$\displaystyle {}+a_{2}\;\bar{d}\gamma^{\mu}(1-\gamma_{5})b\otimes\bar{u}\gamma_{\mu}(1-\gamma_{5})u\,\Big{]}$$
for $j=k=u$. Here the notation $\otimes$ indicates that the matrix
elements are to be taken in factorized form as described above. The
new coefficients $a_{1}=C_{1}+C_{2}/3$ and $a_{2}=C_{2}+C_{1}/3$ have been
obtained from projecting on color-singlet currents, and are commonly
referred to as color allowed and color suppressed,
respectively. Numerically, $a_{1}$ is close to 1 and $a_{2}$ of order 0.1
at a renormalization scale $\mu=m_{b}$. Notice that the Fierz transform
performed in the second term of Eq. (3) has left the $(V-A)\times(V-A)$ structure invariant, where $V$ and $A$ respectively
denote the vector and axial vector current. The situation is
analogous for those penguin operators which again involve $(V-A)\times(V-A)$ currents, for explicit formulae see
e.g. [8].
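The combinations $a_{1}=C_{1}+C_{2}/3$ and $a_{2}=C_{2}+C_{1}/3$ are trivial to evaluate. In the sketch below, the Wilson-coefficient inputs are illustrative values typical of $\mu=m_{b}$, chosen by us and not taken from this paper; they reproduce the stated pattern of $a_{1}$ close to 1 and $a_{2}$ of order 0.1.

```python
# Illustrative leading-order Wilson coefficients at mu = m_b (our own inputs,
# not numbers from the paper).
C1, C2 = 1.08, -0.18

# Color-allowed and color-suppressed combinations entering Eq. (3)
a1 = C1 + C2 / 3.0
a2 = C2 + C1 / 3.0
print(a1, a2)   # a1 close to 1, a2 of order 0.1
```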
Important for us will be the strong penguins with $(V-A)\times(V+A)$
structure (the electroweak ones with similar Dirac structure are
numerically less important in the Standard Model). They are
$$\displaystyle O_{5}$$
$$\displaystyle=$$
$$\displaystyle\bar{d}_{\alpha}\gamma^{\mu}(1-\gamma_{5})b_{\alpha}\>\sum_{q}\bar{q}_{\beta}\gamma_{\mu}(1+\gamma_{5})q_{\beta},$$
$$\displaystyle O_{6}$$
$$\displaystyle=$$
$$\displaystyle\bar{d}_{\alpha}\gamma^{\mu}(1-\gamma_{5})b_{\beta}\>\sum_{q}\bar{q}_{\beta}\gamma_{\mu}(1+\gamma_{5})q_{\alpha},$$
(4)
with a sum over $q=u,d,s,c,b$, and in naive factorization lead to
$$\displaystyle{\cal T}^{(5,6)}$$
$$\displaystyle=$$
$$\displaystyle-\frac{G_{F}}{\sqrt{2}}\,V_{tb}V_{td}^{*}\,\Big{[}\,a_{5}\;\sum_{q}\bar{d}\gamma^{\mu}(1-\gamma_{5})b\otimes\bar{q}\gamma_{\mu}(1+\gamma_{5})q$$
(5)
$$\displaystyle {}+a_{6}\;\sum_{q}\,(-2)\,\bar{q}(1-\gamma_{5})b\otimes\bar{d}(1+\gamma_{5})q\;\Big{]}.$$
The structure $(P-S)\times(P+S)$ involving the scalar and
pseudoscalar currents $S$ and $P$ has emerged from the Fierz transform
in the term with $a_{6}=C_{6}+C_{5}/3$, whose value is about $-0.03$ at
$\mu=m_{b}$. The corresponding operator provides one possibility to
circumvent the suppression mechanisms we will discuss shortly, and we
will often refer to it as scalar penguin.
3 Meson candidate selection
Let us now specify our mechanisms to suppress decay amplitudes in
naive factorization and see to which final states they apply.
3.1 Suppression mechanisms
There are several reasons why the coupling of a meson to the local
currents of the effective weak Hamiltonian can be suppressed. Clearly,
a meson with spin $J=2$ or larger has no matrix element with either of
the currents $S$, $P$, $V$, $A$, and in naive factorization cannot be
produced as a meson ejected by the effective weak current. From
Table 1 we see that examples for such mesons are the
$a_{2}$, $\pi_{2}$, $\rho_{3}$, and $K^{*}_{2}$ in the light quark
sector. Heavy tensor mesons are the $D_{2}^{*}$ and $D_{sJ}$, and there is
a tensor charmonium state, $\chi_{2c}$.
Let us now turn to mesons with $J=0,1$ whose production from the weak
current in naive factorization is forbidden or suppressed because
their coupling to $V$ and $A$ is zero or small. We define the decay
constants of the negatively charged mesons with isospin $I=1$
as
$$\displaystyle\langle{S(q)}|\,{\bar{d}(0)\,\gamma^{\mu}\,u(0)}\,|0\rangle$$
$$\displaystyle=$$
$$\displaystyle-if_{S}\,q^{\mu},$$
(6)
$$\displaystyle\langle{P(q)}|\,{\bar{d}(0)\,\gamma^{\mu}\gamma_{5}\,u(0)}\,|0\rangle$$
$$\displaystyle=$$
$$\displaystyle-if_{P}\,q^{\mu},$$
(7)
$$\displaystyle\langle{V(q,\epsilon)}|\,{\bar{d}(0)\,\gamma^{\mu}\,u(0)}\,|0\rangle$$
$$\displaystyle=$$
$$\displaystyle-if_{V}m_{V}\epsilon^{\mu},$$
(8)
$$\displaystyle\langle{A(q,\epsilon)}|\,{\bar{d}(0)\,\gamma^{\mu}\gamma_{5}\,u(0)}\,|0\rangle$$
$$\displaystyle=$$
$$\displaystyle-if_{A}m_{A}\epsilon^{\mu},$$
(9)
for scalar, pseudoscalar, vector, and axial mesons, respectively.
Here $q^{\mu}$ denotes the meson momentum and, if applicable,
$\epsilon^{\mu}$ its polarization vector. The choice of vector or
axial quark current on the right-hand sides of these definitions is
dictated by the parity of the meson. For the corresponding neutral
mesons, the flavor structure of the current is $(\bar{u}u-\bar{d}d)/\sqrt{2}$ instead of $\bar{d}u$, and for mesons with different
quark content one has to take $\bar{s}d$, $\bar{s}u$, $\bar{c}c$, etc.
Because of charge conjugation invariance, the decay constants for the
neutral $a_{0}$ and $b_{1}$ mesons and for the $\chi_{0c}$ must be zero.
In the isospin limit, the decay constants of the charged $a_{0}$ and
$b_{1}$ must thus vanish, too, so that $f_{a0}$ and $f_{b1}$ defined in
Eqs. (6), (9) are small, of order $m_{d}-m_{u}$. For
the $a_{0}$ mesons, this can explicitly be seen by taking the divergence
of Eq. (6), which by virtue of the equations of motion gives
$$\displaystyle m_{a_{0}}^{2}\,f_{a_{0}}$$
$$\displaystyle=$$
$$\displaystyle i(m_{d}-m_{u})\,\langle{a_{0}}|\,{\bar{d}(0)u(0)}\,|0\rangle.$$
(10)
For the charged $K^{*}_{0}$ the analogous relation reads
$$\displaystyle m_{K^{*}_{0}}^{2}\,f_{K^{*}_{0}}$$
$$\displaystyle=$$
$$\displaystyle i(m_{s}-m_{u})\,\langle{K^{*}_{0}}|\,{\bar{s}(0)u(0)}\,|0\rangle,$$
(11)
which becomes zero in the flavor SU(3) limit and indicates that
$f_{K^{*}_{0}}$ should be suppressed.
It is instructive to compare Eqs. (10) and (11)
with their analogs for the pseudoscalars,
$$\displaystyle m_{\pi}^{2}\,f_{\pi}$$
$$\displaystyle=$$
$$\displaystyle i(m_{d}+m_{u})\,\langle{\pi}|\,{\bar{d}(0)\gamma_{5}u(0)}\,|0\rangle,$$
(12)
$$\displaystyle m_{K}^{2}\,f_{K}$$
$$\displaystyle=$$
$$\displaystyle i(m_{s}+m_{u})\,\langle{K}|\,{\bar{s}(0)\gamma_{5}u(0)}\,|0\rangle.$$
(13)
Because the axial current is not conserved, these decay constants do
not vanish in the isospin or SU(3) limit. Moreover, they do not
vanish in the chiral limit, $m_{u}=m_{d}=m_{s}=0$, for the light pion and
kaon, since these mesons are Goldstone bosons and become massless in
the same limit. Numerically, $f_{\pi}$ and $f_{K}$ are in fact not small
and of the same order of magnitude as for instance $f_{\rho}$ and
$f_{K^{*}}$. The decay constant for the heavy $\pi(1300)$, however,
does become zero in the chiral limit, and its actual value is
small due to chiral suppression.
We remark that the spin-zero mesons whose coupling to the $V$ and $A$
currents is small or zero for one of the above reasons can still
couple to the $S$ or $P$ currents appearing in penguin operators of
the effective Hamiltonian, as discussed in Sect. 2. This
does however not hold for the $b_{1}$, which has no matrix elements with
$S$ or $P$.
3.2 Decay constants
The decay constants of the $a_{0}(980)$, $a_{0}(1450)$, $K^{*}_{0}$,
$\pi(1300)$, and the $b_{1}$ are poorly known at present. Using finite
energy sum rules, Maltman [12]
obtained111Maltman defines the $a_{0}$ decay constants with an
extra factor $(m_{s}-m_{u})/(m_{d}-m_{u})$. To convert them into our
convention, we take the quark masses in
Eq. (18) below.
$$f_{a0(980)}=1.1\mbox{~{}MeV},\qquad f_{a0(1450)}=0.7\mbox{~{}MeV},\qquad f_{K^{*}_{0}}=42\mbox{~{}MeV},$$
(14)
consistent with the ranges estimated by Narison [13]
$$f_{a0(980)}=0.7\mbox{~{}to~{}}2.5\mbox{~{}MeV},\qquad f_{K^{*}_{0}}=33\mbox{~{}to~{}}46\mbox{~{}MeV}.$$
(15)
For the heavy pion, the theoretical estimates in [14]
provide a range
$$f_{\pi(1300)}=0.5\mbox{~{}to~{}}7.2\mbox{~{}MeV}.$$
(16)
Comparing these values to
$$f_{\pi}=131\mbox{~{}MeV},\qquad f_{K}=160\mbox{~{}MeV},$$
(17)
we find that the suppression patterns discussed in the previous
subsection are indeed seen numerically, with the decay constants for
the $a_{0}$ mesons smaller than those for $\pi(1300)$ because of the
relative signs between the quark masses in Eqs. (10) and
(12). We also see that $f_{K^{*}_{0}}$ is suppressed relative
to $f_{K}$, but not as strongly as $f_{a_{0}}$ compared with $f_{\pi}$,
because SU(3) symmetry breaking is rather strong for the quark masses.
Here we have implicitly assumed that the (pseudo)scalar matrix
elements on the right-hand sides of Eqs. (10) to
(13) are not anomalously small or large. Indeed, taking
from [15] 222We have taken the average values
given in Table 6 and evolved them down using Eq. (20) in that
reference.
$$m_{u}=4.8\mbox{~{}MeV},\qquad m_{d}=8.7\mbox{~{}MeV},\qquad m_{s}=164\mbox{~{}MeV}$$
(18)
for the $\overline{\mbox{MS}}$ quark masses at $\mu=1$ GeV, we find
with the values (16) and (17)
$$\displaystyle i\,\langle{\pi}|\,{\bar{d}\gamma_{5}u}\,|0\rangle$$
$$\displaystyle\approx$$
$$\displaystyle 0.19\mbox{~{}GeV}^{2},$$
$$\displaystyle i\,\langle{\pi(1300)}|\,{\bar{d}\gamma_{5}u}\,|0\rangle$$
$$\displaystyle\approx$$
$$\displaystyle 0.06\mbox{~{}to~{}}0.90\mbox{~{}GeV}^{2},$$
$$\displaystyle i\,\langle{K}|\,{\bar{s}\gamma_{5}u}\,|0\rangle$$
$$\displaystyle\approx$$
$$\displaystyle 0.23\mbox{~{}GeV}^{2},$$
(19)
and with (14) and (15)
$$\displaystyle i\,\langle{a_{0}(980)}|\,{\bar{d}u}\,|0\rangle$$
$$\displaystyle\approx$$
$$\displaystyle 0.17\mbox{~{}to~{}}0.62\mbox{~{}GeV}^{2},$$
$$\displaystyle i\,\langle{a_{0}(1450)}|\,{\bar{d}u}\,|0\rangle$$
$$\displaystyle\approx$$
$$\displaystyle 0.39\mbox{~{}GeV}^{2},$$
$$\displaystyle i\,\langle{K^{*}_{0}(1430)}|\,{\bar{s}u}\,|0\rangle$$
$$\displaystyle\approx$$
$$\displaystyle 0.42\mbox{~{}to~{}}0.58\mbox{~{}GeV}^{2}.$$
(20)
Despite a certain spread these values are remarkably close to each
other, given that the corresponding squared meson masses vary by more
than two orders of magnitude. We note that Chernyak
[16] has recently estimated $f_{K^{*}_{0}}=(70\pm 10)$ MeV. We consider this to be rather high as it is far away from
the range (15) obtained in other studies. Also, the
corresponding value for $\langle{K^{*}_{0}(1430)}|\,{\bar{s}u}\,|0\rangle$ would
correspond to a quite strong SU(3) breaking for the scalar matrix
elements.
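The divergence relations (10)-(13) make the matrix elements in Eqs. (19) and (20) a one-line computation. The sketch below reproduces a few of the quoted values from the decay constants in Eqs. (14) and (17) and the quark masses of Eq. (18); meson masses are our own PDG-style inputs in GeV.

```python
# <X|qbar q'|0> = m_X^2 f_X / (m_q -+ m_q'), from Eqs. (10)-(13).
# MSbar quark masses of Eq. (18) at mu = 1 GeV, in GeV:
m_u, m_d, m_s = 0.0048, 0.0087, 0.164

def matrix_element(m_meson, f_meson, quark_mass_combo):
    return m_meson**2 * f_meson / quark_mass_combo

me_pi = matrix_element(0.1396, 0.131, m_d + m_u)    # pion, Eq. (12)
me_K = matrix_element(0.4937, 0.160, m_s + m_u)     # kaon, Eq. (13)
me_a0 = matrix_element(0.980, 0.0011, m_d - m_u)    # a0(980), Eq. (10), Maltman f
print(me_pi, me_K, me_a0)   # roughly 0.19, 0.23 and 0.27 GeV^2
```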
We wish to emphasize at this point that the decay constants for the
$a_{0}(980)$, $a_{0}(1450)$, $K^{*}_{0}$, $\pi(1300)$, and the $b_{1}$ can be
measured very cleanly in $\tau$ decays. In fact, from the bound on the
branching ratio ${\cal{B}}(\tau\to\pi(1300)\nu_{\tau})<1\cdot 10^{-4}$ in [11] we infer $f_{\pi(1300)}<8.4$ MeV,
which is not far from the upper end of the theory estimates
(16). The decay constants in Eq. (14) correspond
to branching ratios of
$$\displaystyle{\cal{B}}(\tau\to a_{0}(980)\,\nu_{\tau})$$
$$\displaystyle\simeq$$
$$\displaystyle 3.8\cdot 10^{-6},$$
$$\displaystyle{\cal{B}}(\tau\to a_{0}(1450)\,\nu_{\tau})$$
$$\displaystyle\simeq$$
$$\displaystyle 3.7\cdot 10^{-7},$$
$$\displaystyle{\cal{B}}(\tau\to K^{*}_{0}(1430)\,\nu_{\tau})$$
$$\displaystyle\simeq$$
$$\displaystyle 7.7\cdot 10^{-5}.$$
(21)
These estimates are rather encouraging, given that at the $B$
factories one expects to have about $3\cdot 10^{7}$ $\tau$ pairs with
30 fb${}^{-1}$ [17], and that the potential of
$\tau$-charm factories would be even higher. A measurement of the
decay constants for the above mesons would greatly reduce the
uncertainties in the predictions we will give in
Sect. 7. It could also provide valuable information
on the nature in particular of the $a_{0}(980)$ and $a_{0}(1450)$, only
one of which can be a member of the conventional $q\bar{q}$ meson
nonet.
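The branching ratios in Eq. (21) follow from the standard two-body width $\Gamma(\tau\to X\nu_{\tau})=G_{F}^{2}f_{X}^{2}|V|^{2}m_{\tau}^{3}(1-m_{X}^{2}/m_{\tau}^{2})^{2}/(16\pi)$ for a (pseudo)scalar $X$. The sketch below uses PDG-style numerical inputs of our own choosing; it reproduces the well-measured $\tau\to\pi\nu_{\tau}$ mode (about 11%) as a sanity check and the suppressed modes of Eq. (21) to within the precision of those inputs.

```python
import math

G_F = 1.166e-5                        # Fermi constant, GeV^-2
m_tau = 1.777                         # GeV
Gamma_tau = 6.582e-25 / 2.906e-13     # hbar / tau lifetime, in GeV (our inputs)

def br_tau(f_X, m_X, Vckm):
    """Branching ratio of tau -> X nu_tau for a (pseudo)scalar meson X."""
    Gamma = (G_F**2 * f_X**2 * Vckm**2 * m_tau**3
             * (1.0 - m_X**2 / m_tau**2) ** 2 / (16.0 * math.pi))
    return Gamma / Gamma_tau

br_pi = br_tau(0.131, 0.1396, 0.974)       # sanity check: ~0.11
br_a0 = br_tau(0.0011, 0.980, 0.974)       # ~4e-6, cf. Eq. (21)
br_K0star = br_tau(0.042, 1.430, 0.22)     # ~7e-5, cf. Eq. (21)
print(br_pi, br_a0, br_K0star)
```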
3.3 Kinematics
The color transparency argument [5] for
factorization of decays $B\to YX$ requires the meson $X$ emitted from
the weak current to be fast. More quantitatively, its time dilation
factor $E_{X}/m_{X}$ should be large, where $E_{X}=(m_{B}^{2}-m_{Y}^{2}+m_{X}^{2})/(2m_{B})$ is the energy of $X$ in the $B$ meson rest frame. We
show the values of $E_{X}/m_{X}$ for $Y=D$ and $Y=\pi$ in
Fig. 1. The corresponding curves for $B\to D^{*}X$ and $B_{s}\to D_{s}^{(*)}X$ are very close to the one for $B\to DX$, and the
ones for $B\to\rho X$, $B_{s}\to KX$, and $B_{s}\to K^{*}(892)\,X$ are
practically the same as for $B\to\pi X$. Only if $X$ is a pion does
one have a very large $E_{X}/m_{X}$, namely $E_{\pi}/m_{\pi}=16.5$ for $B\to D\pi$ and $E_{\pi}/m_{\pi}=19$ for $B\to\pi\pi$. For $m_{X}$ above 1 GeV,
relevant for the mesons in Table 1, this ratio
decreases rather gently from moderate values down to a little above
1. Even so, the mass range of our candidate mesons seems sufficiently
large so that a study of the corresponding decay channels could
provide valuable clues to whether corrections to factorization
significantly depend on $E_{X}/m_{X}$, or more generally, on the mass
$m_{X}$. We remark in passing that even the lowest value of $E_{X}/m_{X}$ in
Fig. 1 corresponds to a velocity $\beta_{X}=0.5$ and a
recoil momentum of $p_{X}=1.5$ GeV in the $B$ rest frame, indicating
that $X$ is still relativistic.
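The numbers in this subsection follow from two-body kinematics alone. A short sketch, using illustrative PDG-like masses as inputs:

```python
import math

m_B, m_D, m_pi = 5.279, 1.870, 0.1396   # meson masses in GeV (illustrative)

def recoil(m_Y, m_X, m_parent=m_B):
    """Energy, momentum, time dilation, and velocity of X in B -> Y X
    (parent rest frame), from E_X = (m_B^2 - m_Y^2 + m_X^2)/(2 m_B)."""
    E_X = (m_parent**2 - m_Y**2 + m_X**2) / (2 * m_parent)
    p_X = math.sqrt(E_X**2 - m_X**2)
    return E_X, p_X, E_X / m_X, p_X / E_X   # E, p, E/m, beta

# B -> D pi gives E/m ~ 16.5 and B -> pi pi gives E/m ~ 19, as in the text
print(recoil(m_D, m_pi))
print(recoil(m_pi, m_pi))
```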
3.4 Resonance decays and the continuum
Clearly, the measurement of rare $B$ decays involving higher mass
resonances presents an experimental challenge. An experimental
analysis will be easier if the meson $X$ in question has a decay
channel with sizeable branching fraction that is not also accessible
to mesons with nearby mass whose production is not suppressed. To give
some examples, the decays of $a_{0}(980)$, $a_{2}(1320)$, and $a_{0}(1450)$
into $\pi\eta$ appear rather clean in this respect, as do the modes
$b_{1}(1235)\to\omega\pi$ and $\pi_{2}(1670)\to f_{2}(1270)\pi$. On the
other hand, the decays of $a_{2}(1320)$, $\pi(1300)$, and $\pi_{2}(1670)$
into $\rho\pi$ are more problematic because of background from the
rather broad $a_{1}(1260)$. The same holds true for the decays of the
$K_{0}^{*}(1430)$ and $K_{2}^{*}(1430)$ into $K\pi$ because of background from
the $K^{*}(1410)$. In such cases a partial wave analysis of the decay
products will probably be necessary in order to constrain the decays
into the mesons with spin $J=0$ or $J=2$.
We emphasize at this point that mesons $X$ with a rather large decay
width do not present as serious a problem in our context as in other
studies. The physical arguments leading to a suppression of their
production within the factorization mechanism (like the arguments for
factorization itself) in fact do not depend on $X$ being a narrow
resonance. Our arguments in Sect. 3.1 were at the level
of current matrix elements and go through in complete analogy if, for
instance, the $|\pi(1300)\rangle$ state in Eq. (12) is
replaced with $|\rho\pi\rangle$ in appropriate partial waves that have
definite quantum numbers $J^{P}=0^{-}$. Moreover, the main idea here is to
use the branching ratios of suppressed decays as quantitative
estimates for the size of corrections to factorization, and to study
their pattern by comparing different decay channels. An uncertainty on
the branching ratio of $B\to YX$ due to the line shape of $X$ is
therefore less severe than in channels which are allowed by
factorization, and where factorization tests need branching ratios to
a much higher precision.
One could in fact also perform the studies we propose here not with
particular meson resonances but with continuum states, similarly to a
recent test of factorization by Ligeti et al. [18].
Our main reason to concentrate on resonances $X$ here is that, by
definition, their production should be enhanced with respect to
continuum states with the same quantum numbers, which is important
since we are looking for decays with small branching ratios from the
start.
4 Decay mode selection
We are looking for exclusive decays $B\to YX$, where the meson $X$
must be emitted from the weak decay vertex and cannot pick up the
spectator quark. Only then will the factorizing contribution to the
decay be suppressed for mesons $X$ with small or vanishing decay
constant or with spin $J\geq 2$. This puts requirements on the flavor
structure of the decay, which we now discuss. One may avoid these
requirements by studying decays where both final state mesons
are taken from Table 1, such as $\bar{B}^{0}\to b_{1}^{+}b_{1}^{-}$ or $\bar{B}^{0}\to\pi_{2}^{+}a_{2}^{-}$. For one of the mesons the
appropriate suppression mechanism will then always be at work.
For definiteness we consider in the following the case where the $B$
meson contains a $b$ and not a $\bar{b}$. One requirement now is that
the flavors of the spectator antiquark and of the antiquark emitted
from the $b$ decay must not be the same, otherwise $X$ can pick up
either of them. We thus cannot use decays such as $B^{-}\to D^{0}a_{0}^{-}$,
whose flavor structure reads $\bar{u}b\to\bar{u}(c\bar{u}d)$, where
the brackets indicate the quarks originating from the electroweak
vertex.
A second requirement is due to imperfect knowledge of the initial
state. Whereas the decay $\bar{B}^{0}\to D^{+}a_{0}^{-}$ with flavor
structure $\bar{d}b\to\bar{d}(c\bar{u}d)$ satisfies our requirement,
the same final state can be reached in the decay of a $B^{0}$, where the
$a_{0}^{-}$ contains the spectator. The corresponding diagrams are shown
in Fig. 2. This $C\!P$ conjugated background can in
principle be removed by flavor tagging, which of course puts stronger
demands on the experiment. In the example just given, the amplitude
from $B^{0}$ decay is suppressed by $\lambda^{2}$ relative to the direct
decay, where $\lambda\approx 0.22$ is the Wolfenstein parameter in the
CKM matrix. We will give a more quantitative estimate of this
background in Sect. 7.1. In other cases, however,
e.g. for $\bar{B}_{s}\to D_{s}^{+}K_{0}^{*-}$ versus $B_{s}\to D_{s}^{+}K_{0}^{*-}$, both signal and background amplitudes are of order
$\lambda^{3}$. We will not consider such modes in the following, and
list in Tables 2 and 3 the flavor
structure of decays satisfying the following conditions:
1.
It is ensured that the meson selected from
Table 1 cannot pick up the spectator antiquark in the
$B$ meson.
2.
A $C\!P$ conjugated background that violates condition 1 either
does not exist or is CKM suppressed. It turns out that the only case
we retain that has such a background is the one mentioned above,
listed in the first row of Table 2.
Let us now consider the different decay categories separately.
4.1 Decays with one or two heavy mesons
Decays into heavy-light final states with open charm are the simplest
from the point of view of their electroweak structure, since only the
$W$ exchange operators $O_{1}$ and $O_{2}$ of the effective Hamiltonian
contribute. For color allowed decays $B\to DX$ with $X=\pi,\rho,a_{1}$, naive factorization is in rather good agreement with
data [2, 19].
For color allowed decays where the heavy meson is the emission
particle, the color transparency argument does not hold, but arguments
based on the large $N_{c}$ limit do. Relevant meson candidates here are
the $D_{2}^{*}$ and the $D_{sJ}$. Comparing the size of nonfactorizing
contributions for the two types of channels with the suppression
mechanisms discussed here might thus shed light on the question of which
type of mechanism is more relevant to ensure factorization. One may
also use decays into two charmed mesons to address the same
question. We remark in this context that in a recent study, Luo and
Rosner found factorization to work reasonably well for $\bar{B}^{0}\to D^{(*)+}D_{s}^{(*)-}$ within present errors [19].
Notice that for color suppressed channels such as $\bar{B}^{0}\to\pi^{0}D_{2}^{*0}$ naive factorization is backed up neither by color
transparency nor by $1/N_{c}$ arguments. The decays into a $D_{2}^{*0}$ in
Table 3 will thus show whether the factorization
concept still applies here.
Let us finally consider $B$ decays into charmonium. Naive
factorization has notorious problems with these channels
[2], compounded by the fact that the coefficient
$a_{2}$ is strongly dependent on the factorization scale $\mu$. A
comparison of the decays involving a $\chi_{c0}$ or $\chi_{c2}$ and
the corresponding ones with $J/\psi$ or $\chi_{c1}$ may thus shed
light on the relative importance of factorizable and nonfactorizable
contributions.
4.2 Penguins and decays into two light mesons
Decays into two light mesons present some specifics due to the
presence of penguin operators. First, we have no meson candidates with
isospin $I=0$, which are a superposition of quark states $u\bar{u}$,
$d\bar{d}$, $s\bar{s}$. Penguin transitions lead to all three of them,
and one will always violate our condition that the spectator and the
emitted antiquark must have different flavors. Second, as remarked at
the end of Sect. 3.1, suppression mechanisms based on
the smallness of the decay constant $f_{X}$ are not effective if scalar
penguins occur in the factorization ansatz. Only spin suppression, and
the isospin suppression for the $b_{1}$, are still at work then.
It is nevertheless instructive to look at the relative importance of
the $S$ and $P$ operators in decays involving scalars or
pseudoscalars. In the decays $\bar{B}^{0}\to a_{0}^{+}\,a_{0}^{-}$ or
$\bar{B}^{0}\to\pi^{+}(1300)\pi^{-}(1300)$ the scalar penguins come with
huge enhancement factors in the amplitudes
$$r^{a_{0}}=\frac{2m_{a_{0}}^{2}}{m_{b}\,(m_{d}-m_{u})},\qquad r^{\pi}=\frac{2m_{\pi}^{2}}{m_{b}\,(m_{d}+m_{u})}.$$
(22)
Numerically, $r^{\pi(1300)}=85$, $r^{a_{0}(980)}=170$, and
$r^{a_{0}(1450)}=380$, to be compared with $r^{\pi}=1$ for the light pion,
where we have evolved the light quark masses (18) up
to $\mu=m_{b}=4.4$ GeV. The strong scalar penguin can hence compete with
the current-current operators, even though its coefficient $a_{6}$
introduced in Sect. 2 is only of order of several
$10^{-2}$.
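The quoted enhancement factors follow directly from Eq. (22). In the sketch below, the light quark masses at $\mu=m_{b}$ are assumptions chosen to reproduce the quoted numbers, since Eq. (18) lies outside this section:

```python
m_b = 4.4                    # b quark mass, GeV (mu = m_b)
# light quark masses at mu = m_b: assumed values that reproduce the quoted
# r factors; the actual inputs of Eq. (18) are not shown in this section
m_u, m_d = 0.0032, 0.0058    # GeV, so m_d - m_u ~ 2.6 MeV, m_d + m_u ~ 9 MeV

def r_scalar(m_X):           # r for an isovector scalar, Eq. (22)
    return 2 * m_X**2 / (m_b * (m_d - m_u))

def r_pseudo(m_X):           # r for a pseudoscalar, Eq. (22)
    return 2 * m_X**2 / (m_b * (m_d + m_u))

print(r_pseudo(0.1396))   # light pion: ~1
print(r_pseudo(1.300))    # pi(1300): ~85
print(r_scalar(0.980))    # a0(980): ~170
print(r_scalar(1.450))    # a0(1450): ~370, within rounding of the quoted 380
```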
Using Eqs. (10), (12), (22), we can
express the products $f_{X}r^{X}$ in terms of the (pseudo)scalar matrix
elements (19) and (20), and
observe that
$$f_{a_{0}}r^{a_{0}}\approx f_{\pi(1300)}r^{\pi(1300)}\approx f_{\pi}r^{\pi},$$
(23)
i.e., they are all roughly of the same size. Since $\bar{B}^{0}\to\pi^{+}\pi^{-}$ is driven by the color allowed tree level coefficient
$a_{1}$, naive factorization predicts the decay rate for $\bar{B}^{0}\to a_{0}^{+}\,a_{0}^{-}$ to be small compared with the one for $\bar{B}^{0}\to\pi^{+}\pi^{-}$. Namely, the ratio of their amplitudes is controlled by
the small parameters $f_{a_{0}}/f_{\pi}$ and
$$\frac{a_{6}}{a_{1}}\,\frac{f_{a_{0}}r^{a_{0}}}{f_{\pi}}=\frac{a_{6}}{a_{1}}\,r^{\pi}\,\frac{\langle{a_{0}}|\,{\bar{d}u}\,|0\rangle}{\langle{\pi}|\,{\bar{d}\gamma_{5}\,u}\,|0\rangle},$$
(24)
corresponding to the tree level and scalar penguin contribution to
$\bar{B}^{0}\to a_{0}^{+}\,a_{0}^{-}$, respectively. Analogous estimates can be
given for $\bar{B}^{0}\to\pi^{+}(1300)\,\pi^{-}(1300)$, and also for
$\bar{B}_{s}$ decays into $K^{+}a_{0}^{-}$ and $K^{+}\pi^{-}(1300)$ compared to
$K^{+}\pi^{-}$.
The situation is different for decays where the emitted meson is a
kaon. The decay mode $\bar{B}^{0}\to\pi^{+}K^{-}$ is penguin dominated
since its tree level contribution is CKM suppressed. With the analog
of (23) for strange mesons one thus obtains similar
decay rates for $\bar{B}^{0}\to\pi^{+}K^{-}$ and $\bar{B}^{0}\to\pi^{+}K_{0}^{*-}$, where the latter receives most of its contribution from the
scalar penguin, as was pointed out in [16]. We
emphasize that, in contrast, naive factorization predicts
${\cal{B}}(\bar{B}^{0}\to\pi^{+}K^{*-}_{2})=0$.
4.3 Bottom baryon decays
Bottom baryons provide a complementary field to study exclusive
hadronic decays, making more degrees of freedom such as polarization
accessible to experimental investigation. The most notable differences
between $q\bar{q}$ and $qqq$ bound states in the context of
factorization studies are the quark content and the role of
annihilation topologies. Since the initial baryon can never be
completely annihilated by the operators in Eq. (1), we
call the corresponding topologies shown in Fig. 3
pseudo-annihilation. For an overview of heavy baryon decays, we refer
to [20].
Let us adapt our ideas to study factorization and its breaking with
spin or decay constant suppression to the case of exclusive heavy
baryon decays. Of course there is no background here from decays of
the $C\!P$ conjugated parent into the same final state, such as
discussed in Sect. 4. In order to ensure the formation
of the final state meson from the electroweak current, we must however
require that the spectator quarks in the baryon be different from the
quarks produced in the weak decay. In addition, we can only consider
bottom baryons that have weak decays and do not dominantly decay
strongly or electromagnetically. A possible decay channel is for
instance $\Lambda_{b}\to\Lambda_{c}D_{sJ}^{-}$. Also, Mannel et
al. [21] have mentioned that the $\Omega_{b}$ might only
have electroweak decays. In Table 4 we list the decays
of these two baryons for which it is assured that the meson cannot
pick up a spectator, so that corrections to factorization can be
studied with our method.
5 Escaping suppression by factorization breaking
The idea of this paper is to study the size and pattern of corrections
to factorization in an environment where they are not “hidden”
behind a larger factorizing piece. Without giving an exhaustive
discussion of nonfactorizing contributions, we now consider two of
them and see why the various suppression mechanisms discussed in
Sect. 3.1 do not apply.
5.1 Nonfactorizing gluon exchange
Naive factorization is broken by strong interactions of the quarks
originating from the $b$ decay. In the language of quarks and gluons
they correspond to diagrams like the ones in
Fig. 4. What is important in our context is that such
contributions no longer involve the matrix elements of the meson $X$
with the local currents $V$, $A$, $S$, $P$. As a consequence the
suppression mechanisms based on the smallness of the decay constant
$f_{X}$ are not effective. Also, one or several gluons absorbed by the
quark-antiquark pair that will form $X$ can transfer both helicity and
orbital angular momentum, so that the spin of $X$ is no longer
restricted to be 0 or 1.
Note that these arguments are independent of whether the internal
lines in the diagrams of Fig. 4 have large virtualities
or not. In the first case the corresponding contributions can be
calculated in perturbation theory. In Sect. 6 we will
analyze them in the QCD factorization approach and explicitly see that
our suppression mechanisms are no longer operative.
If the internal lines in these diagrams are not hard, perturbation
theory is not reliable, and other descriptions of the corresponding
reactions might be more adequate. If one treats them for instance as
hadronic rescattering, there is again no reason why final state mesons
$X$ with small decay constants or higher spin should be suppressed.
5.2 Annihilation contributions
Annihilation diagrams are another important contribution violating
naive factorization. In Tables 2 and
3 we have indicated the channels where they can
occur, either from tree level $W$ exchange or from penguin
operators. An indication of the importance of annihilation could be
obtained from data by comparing decays with and without such
contributions that are otherwise similar, for instance $\bar{B}^{0}\to D^{+}a_{0}^{-}$ and $\bar{B}_{s}\to D_{s}^{+}a_{0}^{-}$, or $B^{-}\to\pi^{0}D_{sJ}^{-}$
and $B^{-}\to\eta D_{sJ}^{-}$.
Depending on the flavor structure of the decay there are three
annihilation topologies. The one shown in Fig. 5a is for
example relevant for $\bar{B}^{0}\to D^{+}a_{0}^{-}$ and for $B^{-}\to\pi^{-}\bar{K}_{2}^{*0}$. In this case only one of the quarks forming our
candidate meson $X$ originates from the effective weak vertex, so that
our suppression mechanisms are not relevant. Notice that this holds
true irrespective of whether the interactions between the
$q\bar{q}$-pair and the quarks attached to the decay vertex are under
perturbative control or not, a point that is controversial in the
literature [8, 9].
The two remaining topologies correspond to Zweig forbidden
contributions. Neglecting electromagnetic interactions, they require
that the meson formed from the $q\bar{q}$-pair has isospin $I=0$. In
Fig. 5b, relevant for our decays into charmonium, neither
of the quarks from the annihilation vertex enters in $X$, so that our
suppression mechanisms again do not apply. Finally, there is the case
of Fig. 5c, where $X$ is formed from the quarks of the
decay vertex. Our suppression mechanisms only apply here if the
$\bar{q}q$-pair interacts solely with the quarks in the $B$ meson, but
not with the ones forming the meson $X$.
The topologies for pseudo-annihilation contributions in $b$ baryon
decays have already been shown in Fig. 3. All of them
circumvent suppression.
6 The case of QCD factorization
In the QCD factorization approach developed by Beneke et
al. [6, 7, 8], those
corrections to naive factorization that are dominated by hard gluon
exchange are calculated in perturbation theory, all other
contributions are found to be power suppressed in $1/m_{b}$. The
physical mechanism underlying these results is color transparency of
the meson ejected by the effective weak current. QCD
factorization can therefore not be applied to decays where a $D$ meson
with its highly asymmetric quark-antiquark configurations is emitted
from the current, and we will thus not consider the corresponding
channels in this section.
As QCD factorization relies on color transparency, it also requires
the emitted meson to be fast in the $B$ rest frame. Certain types of
power corrections will therefore increase with the mass $m_{X}$ of the
emitted meson. One can reduce the bias due to such effects by
comparing our suppressed decays with unsuppressed channels involving
mesons of similar mass. Examples are the $\rho(770)$, $a_{1}(1260)$, or
$\rho(1450)$ for isospin-one mesons, and the $K^{*}$ and $K_{1}$
resonances for the strange sector.
6.1 Distribution amplitudes
The most important effect of radiative corrections in our context is
that the currents like $\langle{X}|\,{\bar{d}(0)\gamma^{\mu}(1-\gamma_{5})u(0)}\,|0\rangle$ occurring in naive factorization become nonlocal. The
leading configurations in $1/m_{b}$ involve light-like separations $z$
and are parameterized by meson distribution amplitudes. In light-cone
gauge, they read for a $d\bar{u}$ meson
$$\langle{X(q,J_{3}=0)}|\,{\bar{d}(z)\,\gamma^{\mu}(\gamma_{5})u(-z)}\,|0\rangle\Big{|}_{z^{2}=0}=-iq^{\mu}\int_{0}^{1}du\,e^{i(2u-1)\,q\cdot z}\,\varphi(u)+\ldots$$
(25)
to twist-two accuracy, where the $\ldots$ stand for terms of twist
three and higher, which contribute to hard processes only at the power
correction level. The Dirac matrices $\gamma^{\mu}$ and $\gamma^{\mu}\gamma_{5}$ are to be taken for mesons with natural and unnatural
parity, $P=(-1)^{J}$ and $P=(-1)^{J+1}$, respectively. The variable $u$
gives the momentum fraction carried by the quark in the meson $X$, a
natural frame of reference in our case being the $B$ rest
frame. Notice that the twist-two distribution amplitudes involving the
vector and axial currents select the polarization state with zero
angular momentum $J_{3}$ along $\vec{q}$ in that frame. We remark that
our subsequent discussion remains valid if instead of a meson $X$ one
considers a continuum final state with appropriate quantum numbers as
discussed in Sect. 3.4. In this case $\varphi$ is to
be replaced with a generalized distribution amplitude
[22], defined as in Eq. (25) with the
appropriate replacement of state vectors $|X\rangle$.
One easily sees that the lowest moment of $\varphi$ in $u$ gives back
the local currents, so that for mesons with spin 0 or 1 one recovers
the decay constant,
$$\int_{0}^{1}du\,\varphi(u)=f_{X}.$$
(26)
The nonlocal currents in Eq. (25) have however matrix
elements for mesons of any spin $J$. Taylor expanding the bilocal
operators around $z=0$ in fact gives operators with arbitrarily high
numbers of partial derivatives between $\bar{d}(0)$ and $u(0)$, and
thus with arbitrarily high spin. In other words the lowest moment
$\int du\,\varphi(u)$ of the distribution amplitude, projecting out
the local $V$ or $A$ current, vanishes for mesons with $J\geq 2$, but
not the function $\varphi(u)$ itself. Hence the production of mesons
with spin 2 and higher is no longer forbidden at the level of
$\alpha_{s}$ corrections to naive factorization. We thus find an
explicit realization of our arguments in Sect. 5.1:
gluon exchange such as in Fig. 4 indeed makes the
production of higher-spin mesons possible.
Let us now see what becomes of the other suppression mechanisms we
discussed in Sect. 3.1. Charge conjugation invariance
implies that the distribution amplitudes $\varphi_{X^{0}}$ for the
neutral $a_{0}$ and $b_{1}$ mesons are odd under the exchange of
quark and antiquark momenta, $\varphi_{X^{0}}(u)=-\varphi_{X^{0}}(1-u)$, so that their first moment (26) is
zero. In the exact isospin limit, the distribution amplitudes for the
charged and neutral mesons in an isotriplet are the same, so that
with Eq. (26) the decay constants of the charged $a_{0}$ and
$b_{1}$ have to vanish, as already seen in Sect. 3.1. In
the real world, the part of the distribution amplitude that is even
under $u\to 1-u$ is therefore small for the charged $a_{0}$ and $b_{1}$,
i.e., $\varphi_{X^{-}}(u)+\varphi_{X^{-}}(1-u)\sim m_{d}-m_{u}$. This does
not, however, restrict the odd part of $\varphi_{X^{-}}(u)$, which can be
comparable in size to the distribution amplitudes of, say, the $\pi$
or the $\rho$. Considering the distribution amplitude of the $K_{0}^{*}$
and using SU(3) symmetry we see that its even part is suppressed by
$m_{s}-m_{u}$, but not its odd part.
Along the same lines of reasoning, we find that up to isospin breaking
effects the distribution amplitudes for the charged heavy pions are
even under $u\to 1-u$. The lowest moment $\int du\,\varphi_{\pi(1300)}(u)$ is small of order $m_{u}+m_{d}$ according to
Eqs. (12) and (26), but there is no such
restriction on higher even moments such as $\int du\,(2u-1)^{2}\,\varphi_{\pi(1300)}(u)$. We thus see that neither spin nor any of our
other suppression mechanisms applies at the level of $\alpha_{s}$
corrections.
It is useful to expand the distribution amplitude in Gegenbauer
polynomials $C_{n}^{3/2}$, which are the eigenfunctions of the
leading-order evolution equation [23] for quark
distribution amplitudes. In order to achieve a uniform notation for
mesons with different spins we write
$$\varphi(u;\mu)=f^{\varphi}\,6u(1-u)\Big{[}B_{0}+\sum_{n=1}^{\infty}B_{n}(\mu)\,C_{n}^{3/2}(2u-1)\Big{]},$$
(27)
where we have explicitly displayed the dependence on the factorization
scale $\mu$. For mesons $X$ with spin 0 or 1, we have $B_{0}=1$ and
$f^{\varphi}$ is just the decay constant $f_{X}$, whereas for mesons with
$J\geq 2$ we have $B_{0}=0$. We will in the following only use products
$f^{\varphi}B_{n}$ so that we need not specify the separate
normalizations of $f^{\varphi}$ and $B_{n}$ in that case. From our above
discussion it follows that for our candidate mesons one or more of the
lowest coefficients in the expansion (27) are either zero
or small of order $m_{d}-m_{u}$, $m_{d}+m_{u}$, or $m_{s}-m_{u}$. In the Appendix
we will estimate the orders of magnitude of the leading coefficients
to be
$$|f^{\varphi}B_{1}|_{a_{0},b_{1},a_{2},K_{0}^{*},K_{2}^{*}}\approx 75~\mbox{MeV},\qquad|f^{\varphi}B_{2}|_{\pi(1300),\pi_{2},\rho_{3}}\approx 50~\mbox{MeV},$$
(28)
evaluated at the renormalization scale $\mu=m_{b}=4.4$ GeV.
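The statement that the lowest moment vanishes for $J\geq 2$ while $\varphi$ itself does not can be checked numerically from Eq. (27): the $n\geq 1$ Gegenbauer terms integrate to zero against the weight $6u(1-u)$, so the moment equals $f^{\varphi}B_{0}$. A minimal sketch with purely illustrative coefficients:

```python
def gegenbauer_32(n, x):
    """Gegenbauer polynomial C_n^{3/2}(x) for n = 0, 1, 2."""
    return [1.0, 3.0 * x, (15.0 * x**2 - 3.0) / 2.0][n]

def phi(u, f_phi, B):
    """Twist-two distribution amplitude of Eq. (27); B = [B0, B1, B2]."""
    x = 2 * u - 1
    return f_phi * 6 * u * (1 - u) * sum(Bn * gegenbauer_32(n, x)
                                         for n, Bn in enumerate(B))

def moment(f_phi, B, N=10000):
    """Lowest moment int_0^1 du phi(u), by the midpoint rule."""
    return sum(phi((i + 0.5) / N, f_phi, B) for i in range(N)) / N

# for J >= 2 mesons B0 = 0, so the moment (Eq. (26)) vanishes even though
# phi itself does not; illustrative coefficients:
print(moment(1.0, [1.0, 0.3, 0.1]))   # spin 0 or 1: moment = f_phi * B0 = 1
print(moment(1.0, [0.0, 0.3, 0.1]))   # spin >= 2: moment = 0
```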
6.2 Decays into a $D$ and a light meson $X$
For these channels the only diagrams one needs to consider at leading
order in $1/m_{b}$ are vertex corrections such as in
Fig. 4a. The result of the $O(\alpha_{s})$ calculation for
the matrix element of the effective weak Hamiltonian can be written as
the sum of
$$\langle D^{+}X^{-}|{\cal H}_{\mathrm{eff}}|\bar{B}^{0}\rangle^{\mathrm{corr}}=-i\,\frac{G_{F}}{\sqrt{2}}V_{cb}V_{ud}^{*}\;a_{1}^{\mathrm{corr}}f^{\varphi}\,q_{\mu}\langle D^{+}|\bar{c}\gamma^{\mu}b|\bar{B}^{0}\rangle,$$
$$\langle D^{*+}X^{-}|{\cal H}_{\mathrm{eff}}|\bar{B}^{0}\rangle^{\mathrm{corr}}=i\,\frac{G_{F}}{\sqrt{2}}V_{cb}V_{ud}^{*}\;a_{1}^{\mathrm{corr}}f^{\varphi}\,q_{\mu}\langle D^{*+}|\bar{c}\gamma^{\mu}\gamma_{5}b|\bar{B}^{0}\rangle,$$
(29)
and the contribution of naive factorization, taken with a coefficient
$a_{1}^{\mathrm{fact}}$ which equals the coefficient $a_{1}$ of
Sect. 2 evaluated at next-to leading order, up to a small
term removing the renormalization scheme dependence. For details we
refer to Eqs. (95) and (96) of [7]. The coefficients
$a_{1}^{\mathrm{corr}}$ are given by
$$f^{\varphi}a_{1}^{\mathrm{corr}}=\frac{\alpha_{s}(\mu)}{4\pi}\,C_{2}(\mu)\,\frac{C_{F}}{N_{c}}\,\int_{0}^{1}du\,F(u,\pm z)\,\varphi(u;\mu),$$
(30)
where the function $F(u,z)$ with $z=m_{c}/m_{b}$ can be found
in [7]. Its second argument is $+z$ for decays into
$D$ and $-z$ for decays into $D^{*}$, so that
$a_{1}^{\mathrm{corr}}$ depends on both mesons in the final state. Its
dependence on the distribution amplitude of $X$ can be expressed as
$$a_{1}^{\mathrm{corr}}(\mu)=a_{1}^{(0)}(\mu)\,B_{0}+\sum_{n=1}^{\infty}a_{1}^{(n)}(\mu)\,B_{n}(\mu).$$
(31)
The first few coefficients $a_{1}^{(n)}$ are listed in
Table 5. We see that they are small compared with 1,
and that they tend to decrease with $n$. They depend substantially on
the renormalization scale $\mu$. One finds
$$\frac{a_{1}^{(n)}(m_{b}/2)}{a_{1}^{(n)}(m_{b})}\approx\frac{a_{1}^{(n)}(m_{b})}{a_{1}^{(n)}(2m_{b})}\approx 2,$$
(32)
a falloff mostly due to the Wilson coefficient $C_{2}$. The Gegenbauer
coefficients $B_{n}$ also decrease with the factorization scale,
although by less than a factor 1.2 for $B_{1}$ and $B_{2}$ when $\mu$ is
varied between $m_{b}/2$ and $m_{b}$ or between $m_{b}$ and $2m_{b}$. The
effects of this dependence on $\mu$ are quite mild for decays where
most of the result is due to the Born level term $f^{\varphi}a_{1}^{\mathrm{fact}}$, but not for our decays where this contribution
is absent or suppressed by a small value of $f^{\varphi}$. A more stable
prediction would require the inclusion of $O(\alpha_{s}^{2})$ corrections.
Since they involve again the color allowed Wilson coefficient $C_{1}$
they may actually not be small compared with the $O(\alpha_{s})$ terms.
Whereas $a_{1}^{\mathrm{fact}}$ is real valued, we observe from
Eq. (31) and Table 5 that
$a_{1}^{\mathrm{corr}}$ is complex. The strong phases of unsuppressed
channels like $\bar{B}^{0}\to D^{+}\pi^{-}$ are thus small in QCD
factorization. On the contrary, they can be sizeable in our suppressed
decays, where $\alpha_{s}$ contributions are essential.
6.3 Decays into charmonium
The radiative corrections to factorization for decays into charmonium
states have not been calculated yet. The following observation
[7] is however relevant in our context. The naive
factorization formula for these decays involves the color suppressed
coefficient $a_{2}$ and color suppressed penguins, but at the level of
loop corrections the color allowed coefficient $C_{1}$ will come
in. Hence the $O(\alpha_{s})$ terms will probably be sizeable compared
with the naive factorization result. This expectation is supported by
an analysis of inclusive $B$ decays into charmonium
[24]. Whereas naive factorization forbids the decays
of Table 3 into a $\chi_{c0}$ or a $\chi_{c2}$,
one may then expect that within QCD factorization their
branching ratios are not much smaller than, or even of similar size to,
those for the corresponding decays into $J/\psi$ or $\chi_{c1}$.
6.4 Decays into two light mesons
For these channels, not only vertex corrections need to be considered,
but also so-called penguin contractions (cf. Fig. 7 of
[7]) and hard interactions with the spectator quark
from the $B$ as shown in Fig. 4b. The latter involve the
twist-two distribution amplitudes $\varphi_{Y}$ and $\varphi_{X}$ of both
final state mesons. Mesons with $J\geq 1$ have a second twist-two
distribution amplitude [25] involving the nonlocal
tensor current $\bar{d}(z)\,\sigma^{\mu\nu}u(-z)$, which describes
states with helicity $J_{3}=\pm 1$ and can also contribute now.
Beyond the level of leading twist contributions, several power
corrections have been considered in the literature
[8, 26]. A particular type of correction that
is numerically not suppressed occurs for final-state mesons with spin
$J=0$ and involves their twist-three distribution amplitudes defined
from the nonlocal (pseudo)scalar and tensor currents
[27]. Their contribution to the amplitude relative to
the twist-two radiative corrections is controlled by the ratio $r$
defined as in (22), which is formally of order $1/m_{b}$ but
numerically not small. With the quark mass values we use, one has
$r_{\pi}=r_{K}=1$, and the corresponding twist-three radiative
corrections have size similar to the leading-twist ones. This is also
the case for the $a_{0}$, $\pi(1300)$, and $K_{0}^{*}$ mesons we are
considering here. Indeed, the twist-two distribution amplitudes are
comparable in size for $a_{0}$, $\pi(1300)$, $K_{0}^{*}$, and $\pi$, $K$, as
follows from comparing our estimates (28) with $f_{\pi}$ or
$f_{K}$. The same holds for the respective twist-three distribution
amplitudes, which are controlled by the local (pseudo)scalar matrix
elements in Eqs. (19) and
(20). We recall that the twist-three pieces just
discussed can only be estimated in QCD factorization because
they lead to logarithmic divergences at the endpoints of the
distribution amplitudes.
Logarithmic endpoint divergences also appear in annihilation
contributions, which in the power counting scheme of QCD factorization
are again $1/m_{b}$ corrections, even for those terms involving only
twist-two distribution amplitudes. In a recent study of decays $B\to\pi\pi$ and $B\to\pi K$, Beneke et al. [8] estimated
annihilation contributions to be moderate corrections to the leading
terms computed in the QCD factorization framework, giving a benchmark
number of 25% in the branching ratio, although with large
uncertainties. Notice that the small parameter controlling the
relative weight of annihilation and leading contributions in their
calculation is
$$\frac{f_{B}f_{Y}}{(m_{B}^{2}-m_{Y}^{2})F_{0}^{B\to Y}(m_{X}^{2})},$$
(33)
which depends only mildly on the mass of the emitted meson through the
$B\to Y$ transition form factor. We take this as an indication that
the importance of annihilation contributions is not primarily driven
by the size of $m_{X}$.
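To make the mild $m_{X}$ dependence of Eq. (33) explicit: the emitted meson enters only through the argument of the form factor. In the sketch below, $f_{B}$ and the representative value of $F_{0}^{B\to\pi}$ are assumptions for illustration, not values from the text:

```python
f_B, f_pi = 0.19, 0.1304   # GeV; f_B is an assumed illustrative value
m_B, m_pi = 5.279, 0.1396

def annihilation_weight(m_Y, F0_at_mX2):
    """Small parameter of Eq. (33); F0_at_mX2 = F0^{B->Y}(m_X^2)."""
    return f_B * f_pi / ((m_B**2 - m_Y**2) * F0_at_mX2)

# for B -> pi X only F0^{B->pi}(m_X^2) changes with m_X, and it varies
# slowly, so the weight is essentially flat in m_X; take F0 ~ 0.28
print(annihilation_weight(m_pi, 0.28))   # ~3e-3
```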
We expect then that in decays such as $\bar{B}^{0}\to a_{0}^{+}a_{0}^{-}$ the
hard nonfactorizable terms calculable in QCD factorization, and
possibly also the power corrections estimated there, should be small
compared to the amplitude for $\bar{B}^{0}\to\pi^{+}\pi^{-}$, which is
dominated by the contribution of the large coefficient $a_{1}$.
For penguin dominated decays like $B\to\pi K$, the overall size of
corrections found in Ref. [8] is not so small
compared with the result of the naive factorization formula. With the
“designer” modes $\bar{B}^{0}\to\pi^{+}K_{2}^{*-}$ and $B^{-}\to\pi^{-}\bar{K}_{2}^{*0}$ we can isolate such nonfactorizable terms. Among
these, annihilation contributions from scalar penguin operators are of
special interest. In QCD factorization they are a power correction,
but not in the PQCD approach, where Keum et al. [9]
found that they contribute at order one and with a large phase to the
$B\to\pi K$ decay amplitudes. Comparing the branching ratios of the
above decays into $K_{2}^{*}$ with those into a $K$ or $K^{*}$ can show
whether scalar penguin annihilation contributions are indeed large.
7 Branching ratio estimates for decays $B\to DX$
We present now our numerical estimates of the branching ratios for
decays $\bar{B}^{0}\to D^{+}X^{-}$, where $X$ is one of $a_{0}$, $a_{2}$,
$b_{1}$, $\pi(1300)$, $\pi_{2}$, $\rho_{3}$, or $K_{0}^{*}(1430)$, $K_{2}^{*}$. As
discussed in Sect. 6.2, these channels receive
hard gluon corrections to naive factorization, which can be calculated
in the QCD factorization approach.
We recall that for the $a_{2}$, $\pi_{2}$, $\rho_{3}$, and $K_{2}^{*}$, the
tree term proportional to $a_{1}^{\mathrm{fact}}$ is absent. The
contribution from the $\alpha_{s}$ correction $a_{1}^{\mathrm{corr}}$
given in (29) involves a contraction of $q_{\mu}$ with the
matrix element parameterized as
$$\langle D(p^{\prime})|\bar{c}\gamma^{\mu}b|\bar{B}(p)\rangle=F_{1}(q^{2})\left\{(p+p^{\prime})^{\mu}-\frac{m_{B}^{2}-m_{D}^{2}}{q^{2}}\,q^{\mu}\right\}+F_{0}(q^{2})\,\frac{m_{B}^{2}-m_{D}^{2}}{q^{2}}\,q^{\mu},$$
(34)
where $q=p-p^{\prime}$. One easily sees that the $\alpha_{s}$
contributions always pick up the form factor $F_{0}$, independent of the
spin of $X$.
Thus, the decay rate for our candidates $X$, as well as for any other
spin zero meson like the $\pi$, can be written as
$$\Gamma(\bar{B}^{0}\to D^{+}X^{-})=\frac{G_{F}^{2}}{16\pi}\,\frac{(m_{B}^{2}-m_{D}^{2})^{2}}{m_{B}^{2}}\,p_{X}\,\big|V_{cb}V_{ud}^{*}\big|^{2}\,\big|f^{\varphi}(a_{1}^{\mathrm{fact}}+a_{1}^{\mathrm{corr}})\,F_{0}(m_{X}^{2})\big|^{2},$$
(35)
where $p_{X}$ denotes the magnitude of the three-momentum of $X$
in the $B$ rest frame.
We normalize the rate of the decays into $X$ to the unsuppressed one
into a light pion. This has the advantage that the CKM factors cancel
(except for the strange mesons) and that we can use
the measured branching ratio for $\bar{B}^{0}\to D^{+}\pi^{-}$ decays,
where naive factorization works well
[2, 7].
For the ratio of decay rates we have the simple expression
$$\frac{\Gamma(\bar{B}^{0}\to D^{+}X^{-})}{\Gamma(\bar{B}^{0}\to D^{+}\pi^{-})}\approx\left|\frac{f^{\varphi}(a_{1}^{\mathrm{fact}}+a_{1}^{\mathrm{corr}})}{f_{\pi}\,a_{1}^{\mathrm{fact}}}\right|^{2},$$
(36)
where for simplicity we have neglected the term $a_{1}^{\mathrm{corr}}$
for the $\pi^{-}$, where its effect is of order 1% to 2%.
Eq. (36) has corrections due to phase space and the
evaluation of the form factor $F_{0}$ at different momentum transfer
$q^{2}$, which go in opposite directions. The relevant $q^{2}$ ranges from
roughly 1 GeV${}^{2}$ for the $a_{0}(980)$ to 3 GeV${}^{2}$ for the $\pi_{2}$ and
$\rho_{3}$, to be compared with $q^{2}\approx 0$ for the $\pi$. We have
checked that the relation (36) is affected by these mass
effects by not more than 10% to 15%. This is sufficient in our
context, given the dominant theoretical uncertainty of our calculation
hidden in the decay constants and distribution amplitudes. For
$X=K_{0}^{*}(1430)$, $K_{2}^{*}$ we have to include a CKM factor
$|V_{us}/V_{ud}|^{2}$, which is known to a good precision. In our
numerical analysis we take $|V_{us}/V_{ud}|=0.23$ from
[11].
We evaluate (36) using the expansion (31) with
the coefficients $a_{1}^{(n)}$ of Gegenbauer moments in Table
5, the estimates (28) for the Gegenbauer
moments $B_{n}$, and the maximal values of the decay constants in
(14) to (17). The $b_{1}$ can have a small decay
constant, for which we are not aware of any information in the
literature, and in our calculation we set it to zero.
Before presenting the branching ratios, let us study the magnitude of
the individual terms entering (36). We have $f_{\pi}a_{1}^{\mathrm{fact}}=136$ MeV for the pion, while for the mesons
with leading Gegenbauer moments $f^{\varphi}B_{1}$ and $f^{\varphi}B_{2}$
we obtain, respectively,
$$|f^{\varphi}a_{1}^{\mathrm{corr}}\,|_{a_{0},b_{1},a_{2},K_{0}^{*},K_{2}^{*}}=1.5~\mbox{MeV},\qquad|f^{\varphi}a_{1}^{\mathrm{corr}}\,|_{\pi(1300),\pi_{2},\rho_{3}}=0.1~\mbox{MeV},$$
(37)
at renormalization scale $\mu=m_{b}$. The second term is tiny mainly
because of the small coefficient $a_{1}^{(2)}$ from the one-loop
calculation. Since our estimate in the Appendix does not yield the
sign of $f^{\varphi}B_{n}$, we have a twofold ambiguity when adding
$f^{\varphi}a_{1}^{\mathrm{corr}}$ to $f^{\varphi}a_{1}^{\mathrm{fact}}$,
and find
$$f_{a_{0}(980)}\,a_{1}^{\mathrm{fact}}=2.6~\mbox{MeV},\qquad|f^{\varphi}(a_{1}^{\mathrm{fact}}+a_{1}^{\mathrm{corr}})\,|_{a_{0}(980)}=2.4\mbox{ or }3.5~\mbox{MeV},$$
$$f_{a_{0}(1450)}\,a_{1}^{\mathrm{fact}}=0.7~\mbox{MeV},\qquad|f^{\varphi}(a_{1}^{\mathrm{fact}}+a_{1}^{\mathrm{corr}})\,|_{a_{0}(1450)}=1.3\mbox{ or }1.9~\mbox{MeV},$$
$$f_{\pi(1300)}\,a_{1}^{\mathrm{fact}}=7.5~\mbox{MeV},\qquad|f^{\varphi}(a_{1}^{\mathrm{fact}}+a_{1}^{\mathrm{corr}})\,|_{\pi(1300)}=7.4\mbox{ or }7.6~\mbox{MeV},$$
$$f_{K_{0}^{*}(1430)}\,a_{1}^{\mathrm{fact}}=48~\mbox{MeV},\qquad|f^{\varphi}(a_{1}^{\mathrm{fact}}+a_{1}^{\mathrm{corr}})\,|_{K_{0}^{*}(1430)}=47\mbox{ or }48~\mbox{MeV}.$$
(38)
Notice the enormous correction to the naive factorization result for
the $a_{0}(1450)$. The impact of nonfactorizing corrections is similarly
strong for the $a_{0}(980)$ if we take the minimum value
$f_{a_{0}(980)}=0.7$ MeV from Eq. (15). For the $\pi(1300)$,
nonfactorizing terms always remain moderate, since $|f^{\varphi}a_{1}^{\mathrm{corr}}|_{\pi(1300)}$ is small even compared with the
lowest estimate of $f_{\pi(1300)}$ in Eq. (16). The
$K_{0}^{*}$ has a decay constant much larger than $|f^{\varphi}a_{1}^{\mathrm{corr}}|_{K_{0}^{*}}$ and is the only case where the
corrections hardly matter.
Using ${\cal{B}}(B^{0}\to D^{-}\pi^{+})=(3.0\pm 0.4)\cdot 10^{-3}$ from
[11] and the maximal values in (38), we
obtain the branching ratios in Table 6. To show their
dependence on the choice of renormalization scale, we give them for
$\mu=m_{b}$ and $\mu=m_{b}/2$. We take the latter as an indication of how
large the branching ratios can be in QCD factorization, although they
are not upper bounds in a rigorous sense. We also show the
corresponding results from naive factorization, where for consistency
of comparison we have again neglected the corrections discussed below
Eq. (36). Here the scale dependence is minute, less than 2
percent, and the values in Table 6 are those for
$\mu=m_{b}$.
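As a sanity check on the scale of these numbers, the ratio (36) can be evaluated directly for the $a_{0}(980)$. A minimal sketch, using only the values already quoted in the text and neglecting the 10% to 15% phase-space and form factor corrections discussed above:

```python
# Numerical check of the ratio (36) for B0bar -> D+ a0(980)-, using only
# numbers quoted in the text; phase-space and form factor corrections
# (10-15% per the text) are neglected.
f_pi_a1_fact = 136.0   # f_pi * a1^fact for the pion, in MeV
f_a0_a1_tot  = 3.5     # max |f^phi (a1^fact + a1^corr)| for a0(980), MeV, Eq. (38)
br_D_pi      = 3.0e-3  # measured B(B0 -> D- pi+)

ratio = (f_a0_a1_tot / f_pi_a1_fact) ** 2  # Eq. (36)
br_D_a0 = ratio * br_D_pi                  # estimate of B(B0bar -> D+ a0(980)-)
print(f"ratio = {ratio:.2e}, branching ratio = {br_D_a0:.1e}")  # ~ 6.6e-4, ~ 2.0e-6
```

This illustrates the order of magnitude, a few times $10^{-6}$, of the branching ratios discussed here.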
We proceed to decays into a vector meson, $\bar{B}^{0}\to D^{*+}X^{-}$.
The contraction of $q_{\mu}$ with the matrix element $\langle D^{*}|\bar{c}\gamma^{\mu}\gamma_{5}b|\bar{B}\rangle$ depends again only on a
single form factor, commonly referred to as $A_{0}$ and defined e.g. in
[3]. The decay rate is then given by
$$\Gamma(\bar{B}^{0}\to D^{*+}X^{-})=\frac{G_{F}^{2}}{4\pi}\,p_{X}^{3}\,|V_{cb}V_{ud}^{*}|^{2}\,\big|f^{\varphi}(a_{1}^{\mathrm{fact}}+a_{1}^{\mathrm{corr}})\,A_{0}(m_{X}^{2})\big|^{2}.$$
(39)
As mentioned in Sect. 6.2, the coefficients
$a_{1}^{\mathrm{corr}}$ for decays into $D^{*}$ are different from those
into $D$, and we now have
$$|f^{\varphi}(a_{1}^{\mathrm{fact}}+a_{1}^{\mathrm{corr}})\,|_{a_{0}(980)}=2.2\mbox{ or }3.5~\mbox{MeV},\qquad|f^{\varphi}(a_{1}^{\mathrm{fact}}+a_{1}^{\mathrm{corr}})\,|_{a_{0}(1450)}=1.2\mbox{ or }1.9~\mbox{MeV},$$
$$|f^{\varphi}(a_{1}^{\mathrm{fact}}+a_{1}^{\mathrm{corr}})\,|_{\pi(1300)}=7.5~\mbox{MeV},\qquad|f^{\varphi}(a_{1}^{\mathrm{fact}}+a_{1}^{\mathrm{corr}})\,|_{K_{0}^{*}(1430)}=47\mbox{ or }49~\mbox{MeV}$$
(40)
at $\mu=m_{b}$. The analogous expression for the ratio (36)
of decay rates still holds, although with different mass corrections.
We estimate them to be not much larger than for the $D$ and neglect
them as before. Normalizing to ${\cal{B}}(B^{0}\to D^{*-}\pi^{+})=(2.76\pm 0.21)\cdot 10^{-3}$ from [11] we obtain the
branching ratios in Table 6, which are somewhat smaller
than the corresponding ones for decays into the $D$. With the
exception of decays into $K^{*}_{2},\pi_{2}$, or $\rho_{3}$, all estimated
branching ratios are larger than $10^{-7}$ and within experimental
reach at the $B$-factories. We expect similar branching ratios for
$B_{s}$ decays into $D_{s}^{(*)}$ and the same candidate mesons $X$, up to
SU(3) breaking effects.
To facilitate comparison of the branching ratios of
Table 6 into $K_{0}^{*}$ and $K_{2}^{*}$ with those into a $K$,
we estimate the latter as
$${\cal{B}}(\bar{B}^{0}\to D^{+}K^{-})\simeq\frac{f^{2}_{K}}{f_{\pi}^{2}}\left|\frac{V_{us}}{V_{ud}}\right|^{2}{\cal{B}}(\bar{B}^{0}\to D^{+}\pi^{-})=(2.4\pm 0.3)\cdot 10^{-4}\,,$$
$${\cal{B}}(\bar{B}^{0}\to D^{*+}K^{-})\simeq\frac{f^{2}_{K}}{f_{\pi}^{2}}\left|\frac{V_{us}}{V_{ud}}\right|^{2}{\cal{B}}(\bar{B}^{0}\to D^{*+}\pi^{-})=(2.2\pm 0.2)\cdot 10^{-4}\,,$$
(41)
which is in good agreement with data on the ratios
${\cal{B}}(\bar{B}^{0}\to D^{+}K^{-})/{\cal{B}}(\bar{B}^{0}\to D^{+}\pi^{-})=0.079\pm 0.011$
and
${\cal{B}}(\bar{B}^{0}\to D^{*+}K^{-})/{\cal{B}}(\bar{B}^{0}\to D^{*+}\pi^{-})=0.074\pm 0.016$
reported in [28].
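The rescaling in Eq. (41) is easy to reproduce numerically. A minimal sketch, in which the decay constants $f_{K}=160$ MeV and $f_{\pi}=131$ MeV are assumed values (they are not quoted in this section), while the CKM ratio and the $D^{(*)}\pi$ branching ratios are those given in the text:

```python
# Check of Eq. (41): rescale the measured D(*) pi branching ratios by
# (f_K/f_pi)^2 |Vus/Vud|^2.
f_K, f_pi = 160.0, 131.0      # MeV, assumed standard values
vus_over_vud = 0.23           # from the text
br_D_pi   = 3.0e-3            # B(B0 -> D- pi+)
br_Dst_pi = 2.76e-3           # B(B0 -> D*- pi+)

scale = (f_K / f_pi) ** 2 * vus_over_vud ** 2
br_D_K   = scale * br_D_pi    # ~ 2.4e-4
br_Dst_K = scale * br_Dst_pi  # ~ 2.2e-4
print(f"{br_D_K:.1e}  {br_Dst_K:.1e}")
```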
Considerable sources of uncertainty in our decay rate estimates are
the unknown meson decay constants and distribution amplitudes. Let us
illustrate this for the channel $\bar{B}^{0}\to D^{+}a_{0}(980)$. With the
minimal value of $f_{a_{0}(980)}$ in Eq. (14) we obtain
$|f^{\varphi}(a_{1}^{\mathrm{fact}}+a_{1}^{\mathrm{corr}})|_{a_{0}(980)}=1.3$ or $1.9$~MeV at $\mu=m_{b}$. The larger of the two
possibilities corresponds to a branching ratio ${\cal{B}}(\bar{B}^{0}\to D^{+}a_{0}(980))=5.8\cdot 10^{-7}$, about three times smaller than
the corresponding one in Table 6.
We have already seen the sensitivity of our results to the choice of
renormalization scale. This uncertainty is largest for the branching
ratios which are zero in naive factorization, with a variation by a
factor of 5 to 6 between $\mu=m_{b}$ and $\mu=m_{b}/2$. How important it
is for the other channels depends on the actual size of the decay
constants and distribution amplitudes. We also remark that the
reliability of the QCD factorization approach for our “light” mesons
with masses in the range from 1 to 1.7 GeV might be questionable, at
least unless finite-mass corrections can be taken into account.
Most important, however, is that the hard nonfactorizing contributions
are small on the scale of the amplitudes for unsuppressed decays. With
all uncertainties discussed above we find that $f^{\varphi}a_{1}^{\mathrm{corr}}$ is at most a few MeV, i.e., less than 5% of
$f_{\pi}a_{1}^{\mathrm{fact}}$. One may well expect that soft corrections
(or annihilation graphs when they can occur) are bigger than the hard
ones. For all our meson candidates except the $K_{0}^{*}$ they would then
lead to considerably larger branching ratios than we have
estimated.
The decays into $K_{0}^{*}$ are different in this context. Here the
perturbative corrections to the prediction of naive factorization are
quite small and their uncertainties less relevant, and further soft
corrections might or might not overshadow the factorizing piece. Once
the decay constant of the $K_{0}^{*}$ is known experimentally, one should
of course refine our estimates by taking into account the phase space
and form factor corrections in (36).
7.1 Background from $B^{0}$ decay
We see in Fig. 2 that the same final state of our signal
mode $\bar{B}^{0}\to D^{+}X^{-}$ just discussed can be produced in the
decay of the $C\!P$ conjugated parent meson, $B^{0}\to D^{+}X^{-}$. As
mentioned in Sect. 4, this background is CKM suppressed
with respect to the signal. On the other hand, the signal mode is
punished by a small decay constant, whereas the background goes with
$f_{D}\sim 200$ MeV. One therefore expects a background-to-signal ratio of
order one, since at the amplitude level $\lambda^{2}f_{D}/f_{X}\sim{\cal{O}}(1)$. We recall that no such background exists in decays into
a strange final state, $\bar{B}^{0}\to D^{+}X_{s}^{-}$ or $\bar{B}_{s}\to D_{s}^{+}X^{-}$.
The background to $\bar{B}^{0}\to D^{+}X^{-}$ can of course be removed by
flavor tagging. Experimental discrimination between $B$ and $\bar{B}$
is however challenging in decays with branching ratios of less than
$10^{-5}$, and it is worthwhile to see how far one can go without a
flavor tag.
Let us therefore investigate in more detail the branching ratios of
the background decays for the case of the $a_{0}$. We parameterize the
matrix element for the $B\to a_{0}$ transition in terms of form factors
$F_{0}^{a}$ and $F_{1}^{a}$ as
$$\langle a_{0}(p^{\prime})|\bar{u}\gamma^{\mu}\gamma_{5}b|\bar{B}(p)\rangle=F^{a}_{1}(q^{2})\left\{(p+p^{\prime})^{\mu}-\frac{m_{B}^{2}-m_{a_{0}}^{2}}{q^{2}}\,q^{\mu}\right\}+F^{a}_{0}(q^{2})\,\frac{m_{B}^{2}-m_{a_{0}}^{2}}{q^{2}}\,q^{\mu}.$$
(42)
Assuming naive factorization, the rates for $B^{0}\to D^{(*)+}a_{0}^{-}$ decays can be written as
$$\Gamma(B^{0}\to D^{+}a_{0}^{-})=\frac{G_{F}^{2}}{16\pi}\,\frac{(m_{B}^{2}-m_{a_{0}}^{2})^{2}}{m_{B}^{2}}\,p_{a_{0}}\,\big|V_{cd}V_{ub}^{*}\big|^{2}\,(a_{1}f_{D})^{2}\,|F^{a}_{0}(m_{D}^{2})|^{2},$$
$$\Gamma(B^{0}\to D^{*+}a_{0}^{-})=\frac{G_{F}^{2}}{4\pi}\,p_{a_{0}}^{3}\,\big|V_{cd}V_{ub}^{*}\big|^{2}\,(a_{1}f_{D^{*}})^{2}\,|F^{a}_{1}(m_{D^{*}}^{2})|^{2},$$
(43)
where $a_{1}$ is the universal coefficient for color allowed decays in
naive factorization, introduced in Sect. 2. Using
$a_{1}=1.03$ we find
$${\cal{B}}(B^{0}\to D^{+}a_{0}(980))=2.1\cdot 10^{-6}\left(\frac{|V_{cd}V_{ub}^{*}|}{7.3\cdot 10^{-4}}\right)^{2}\left(\frac{f_{D}}{200~\mbox{MeV}}\right)^{2}\left(\frac{F^{a}_{0}(m_{D}^{2})}{0.5}\right)^{2}\frac{\tau_{B^{0}}}{1.55~\mbox{ps}},$$
$${\cal{B}}(B^{0}\to D^{*+}a_{0}(980))=1.9\cdot 10^{-6}\left(\frac{|V_{cd}V_{ub}^{*}|}{7.3\cdot 10^{-4}}\right)^{2}\left(\frac{f_{D^{*}}}{230~\mbox{MeV}}\right)^{2}\left(\frac{F^{a}_{1}(m_{D^{*}}^{2})}{0.5}\right)^{2}\frac{\tau_{B^{0}}}{1.55~\mbox{ps}},$$
(44)
where we have indicated the sensitivity to several input parameters
which at present have significant uncertainties. In particular,
very little is known about the form factors for $B\to X$
transitions. Chernyak [16] has recently estimated
the form factor $F_{1}(0)^{B\to a_{0}(1450)}\simeq 0.46$ with light-cone
sum rules, indicating only a small enhancement over the corresponding
one into a pion, where he cites $F_{1}(0)^{B\to\pi}\simeq 0.3$. That
the form factor for $B\to a_{0}$ should rather be larger than the one
for $B\to\pi$ is also plausible in the Bauer-Stech-Wirbel approach
[29]. To see this, consider the constituent $q\bar{q}$
wave function of a charged $a_{0}$. In the case where the $q$ and
$\bar{q}$ spins couple to $S_{3}=0$, it has a zero at momentum fraction
$u=\frac{1}{2}$ due to charge conjugation, up to small isospin
breaking effects. Being normalized to one, this wave function is then
more pronounced towards the endpoints $u=0$ and $u=1$, and thus can
have a greater overlap with the asymmetric wave function of the $B$
than the pion wave function can. Further information may be obtained
in relativistic quark models [30].
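The central values displayed in Eq. (44) can be reproduced by evaluating the rate formulas (43) directly. In the sketch below, the meson masses, $G_{F}$, and $\hbar$ are assumed inputs (standard tabulated values, not quoted in the text); all other numbers are the reference values of Eq. (44):

```python
import math

# Numerical check of the central values in Eq. (44) from the rate
# formulas of Eq. (43). Masses, G_F and hbar are assumed inputs.
GF   = 1.16637e-5          # Fermi constant, GeV^-2 (assumed)
hbar = 6.58212e-25         # GeV * s (assumed)
mB, mD, mDst, ma0 = 5.2794, 1.8693, 2.0100, 0.980   # GeV (assumed)
vcd_vub, a1, fD, fDst, F0a, F1a = 7.3e-4, 1.03, 0.200, 0.230, 0.5, 0.5
tau_B = 1.55e-12           # s, as in Eq. (44)

def p_cm(M, m1, m2):
    """Magnitude of the daughter three-momentum in a two-body decay."""
    return math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)

p = p_cm(mB, mD, ma0)
rate_D = (GF**2 / (16 * math.pi) * (mB**2 - ma0**2)**2 / mB**2 * p
          * vcd_vub**2 * (a1 * fD)**2 * F0a**2)          # first line of (43)
pst = p_cm(mB, mDst, ma0)
rate_Dst = (GF**2 / (4 * math.pi) * pst**3
            * vcd_vub**2 * (a1 * fDst)**2 * F1a**2)      # second line of (43)

br_D   = rate_D   * tau_B / hbar   # ~ 2.1e-6, cf. Eq. (44)
br_Dst = rate_Dst * tau_B / hbar   # ~ 1.9e-6, cf. Eq. (44)
print(f"{br_D:.2e}  {br_Dst:.2e}")
```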
We wish to point out that experimental information on the form factor
$F^{a}_{1}(m_{D}^{2})$ can be obtained from semileptonic decays at
$q^{2}=m_{D}^{2}$,
$$\frac{d\Gamma(\bar{B}^{0}\to a_{0}^{+}\,\ell^{-}\bar{\nu}_{\ell})}{dq^{2}}=\frac{G_{F}^{2}}{24\pi^{3}}\,p_{a_{0}}^{3}\,|V_{ub}|^{2}\,|F^{a}_{1}(q^{2})|^{2},$$
(45)
where we expect similar statistics as for the semileptonic decays $B\to\pi,\rho$ with branching ratios of a few $10^{-4}$, if the form
factors have comparable size.
One may then relate $F^{a}_{0}$ with $F^{a}_{1}$ using large energy effective
theory (LEET) [31], originally introduced in
Ref. [32]. We are in the kinematical situation where a
light meson (here the $a_{0}$) is emitted from a heavy parent with large
recoil $q^{2}=m_{D}^{2}\ll m_{b}^{2}$ and an energy $E=(m_{B}^{2}-q^{2}+m_{a_{0}}^{2})/(2m_{B})$ much larger than $\Lambda_{\mathrm{QCD}}$ and the light masses in the
process. This is the region of applicability of LEET. To leading
order in $1/E$ and $1/m_{b}$, we derive
$$\displaystyle F^{a}_{1}(E,m_{b})=\frac{m_{B}}{2E}\,F^{a}_{0}(E,m_{b}).$$
(46)
This is the analog of Eqs. (104) and (105) in [31],
to which we refer for details.
Hence, the form factors are equal to leading order in the
large energy limit.
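It is instructive to work out the kinematics of Eq. (46) at the relevant momentum transfer. A minimal sketch, with the meson masses as assumed inputs (standard tabulated values, not quoted here):

```python
# Kinematics of the LEET relation, Eq. (46), at q^2 = m_D^2.
mB, mD, ma0 = 5.279, 1.869, 0.980   # GeV, assumed masses

q2 = mD**2
E = (mB**2 - q2 + ma0**2) / (2 * mB)   # energy of the emitted a0
ratio = mB / (2 * E)                   # F1^a / F0^a at this q^2, Eq. (46)
print(f"E = {E:.2f} GeV, F1/F0 = {ratio:.2f}")  # E ~ 2.4 GeV, ratio ~ 1.1
```

So at $q^{2}=m_{D}^{2}$ the two form factors differ by only about 10%, which supports using semileptonic data on $F^{a}_{1}$ to constrain $F^{a}_{0}$.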
From the branching ratios in Eq. (44) we conclude that
the background from decays of $B^{0}$ mesons into the $a_{0}$ most likely
does not overshadow the signal. A similar discussion can be given for
decays into the other $I=1$ mesons of Table 1.
Assuming naive factorization and no anomalous behavior of the relevant
form factors, we quite generally expect branching ratios of the
background modes $B^{0}\to D^{(*)+}X^{-}$ of order $10^{-6}$. Any
significant excess over both this and the branching ratios given in
Table 6 would imply that either there are important
nonfactorizing contributions in the signal, or that naive
factorization drastically fails in the background channel.
8 Summary
We have explored how to obtain quantitative information on
nonfactorizing effects in exclusive $b$ decays, using channels where
such contributions are not hidden behind larger factorizing pieces. We
achieve this through “switching off” the factorizing contribution by
choosing final-state mesons with either a small decay constant or spin
$J\geq 2$. Our proposal is similar in spirit to the study of decay
channels where the quark content of the final state does not admit
factorizing contributions, such as $B^{0}\to K^{+}K^{-}$
[33], $B_{s}\to\pi^{+}\pi^{-}$, $\pi^{0}\pi^{0}$, or
$b$-decays into baryon-antibaryon pairs (see
e.g. [2] and references therein). Suppression of the
factorizable contributions thus highlights factorization breaking
effects, such as annihilation graphs, soft or hard interactions, and
in general any mechanism dominated by long-distance physics, which
disconnects the $b$ decay vertex from the final state meson. We have
explicitly shown that hard nonfactorizing contributions, calculated in
the QCD factorization framework, can yield sizeable contributions to
the decay amplitude.
In a systematic study, compiled in Tables 2,
3, and 4, we have shown that our
method applies to a variety of mesons and channels, in decays of
$B_{u,d}$, $B_{s}$, and $b$ baryons. In particular, the mesons $X$ we
have selected cover a wide range of masses, which makes it possible to
explore whether the energy-mass ratio $E_{X}/m_{X}$ in the parent rest
frame is a relevant parameter to ensure factorization, as is suggested
by color transparency but not by large $N_{c}$ arguments
[32].
We have presented a detailed analysis of color allowed decays $B\to D^{(*)}X$ and $B_{s}\to D_{s}^{(*)}X$ with a light meson
$X$. When the factorizable contribution is suppressed, e.g. for the
scalar $a_{0}$, we found that hard nonfactorizing corrections can be of
similar magnitude or even larger than the Born term. They remain
however much smaller than the amplitudes of corresponding
nonsuppressed decays, for instance into a $\pi$. In several cases we
found branching ratios substantially enhanced over those calculated
in the naive factorization approach (see Table 6), which
are within the reach of existing and future experiments at the
$B$-factories BaBar, Belle, and CLEO and at hadron colliders like the
Tevatron and the LHC. Comparison of these decays with modes where the
ejected meson is a $D$ meson and further study of those into charmonia
$\chi_{c0}$ or $\chi_{c2}$ should give complementary information on
the origin and limitations of the factorization approach.
$B$ decays with light-light final states are more complex. Modes such
as $\bar{B}^{0}\to a_{0}^{+}a_{0}^{-}$ and $B_{s}\to K^{+}a_{0}^{-}$ are not entirely
suppressed by the small decay constant of the $a_{0}$ due to the
presence of scalar penguin operators, but we find the corresponding
amplitudes to be much smaller than those of $\bar{B}^{0}\to\pi^{+}\pi^{-}$
or $B_{s}\to K^{+}\pi^{-}$. Such factorizing penguin contributions can be
eliminated altogether with higher-spin mesons like the $b_{1}$ or $a_{2}$
instead of the $a_{0}$. We expect hard nonfactorizing contributions to
be moderate, too, so that experimental information on such decays
could again tell us whether nonperturbative effects are large.
For penguin dominated decays like $\bar{B}^{0}\to\pi^{+}K_{2}^{*-}$ the
situation is less clear-cut on the quantitative level, but we argue
that they can give valuable indications on the importance of penguin
annihilation contributions. This is a particularly controversial issue
since different conclusions regarding such decays have been drawn in
the QCD factorization and the PQCD scenarios
[8, 9].
To conclude, we find that the $b$ decays presented here provide a tool
for studying important issues in exclusive nonleptonic decays. We
stress that in order to make this tool more quantitative, the decay
constants of the $a_{0}(980)$, $a_{0}(1450)$, $\pi(1300)$, $K_{0}^{*}(1430)$
and $b_{1}$ mesons should be known experimentally. Their determination
from $\tau$ decays should be in reach of the existing experiments at
BaBar, Belle and CLEO, and even more of dedicated $\tau$-charm
factories. Information on the distribution amplitudes of $a_{0}(980)$,
$a_{0}(1450)$, $a_{2}$, $\pi(1300)$, $\pi_{2}$, which are needed for the
calculation of hard nonfactorizable contributions, could be obtained
from $\gamma^{*}\gamma$ collisions at the $B$ factories.
Note added:
The suppression mechanisms discussed in this work provide
opportunities to study $C\!P$ violation in various $B$ decays into
designer mesons $a_{0}$, $b_{1}$, $a_{2}$, etc. This has been explored
independently in two recent studies [37] and [38].
Acknowledgments
It is a pleasure to thank P. Ball, S. J. Brodsky, G. Buchalla,
H. G. Dosch, A. Kagan, H.-n. Li, M. Neubert, S. Spanier, and H. Quinn
for discussions, and V. Braun for correspondence. We also thank A. Ali
for his careful reading of the manuscript.
This work was initiated when M. D. was visiting SLAC. He acknowledges
financial support through the Feodor Lynen Program of the Alexander
von Humboldt Foundation, and thanks the SLAC theory group for its
hospitality.
Appendix
In this appendix we estimate the size of the leading-twist
distribution amplitude $\varphi$ of several mesons. Our method is
based on the connection between distribution amplitudes and the Fock
state expansion in QCD, and closely follows the discussion in
[34], to which we refer for details. The starting point
is to decompose a hadron state on Fock states consisting of current
quarks and gluons, $q\bar{q}$, $q\bar{q}g$, etc. The coefficients in
this expansion are the light-cone wave functions for each parton
configuration. For a $d\bar{u}$ meson, one has
$$|X^{-}\rangle_{J_{3}=0}=\int\frac{du}{\sqrt{u(1-u)}}\,\frac{d^{2}k_{\perp}}{16\pi^{3}}\,\frac{|d_{\uparrow}\,\bar{u}_{\downarrow}\rangle\pm|d_{\downarrow}\,\bar{u}_{\uparrow}\rangle}{\sqrt{2}}\,\psi(u,k_{\perp})+\ldots,$$
(47)
where $u$ and $k_{\perp}$ denote the light-cone momentum fraction and
transverse momentum of the $d$ quark in the meson. The arrows indicate
quark and antiquark helicities, and the $+$ and $-$ respectively apply
to mesons with natural and unnatural parity. The states $|d_{\uparrow}\,\bar{u}_{\downarrow}\rangle$ and $|d_{\downarrow}\,\bar{u}_{\uparrow}\rangle$ are understood to be coupled to color
singlets. By $\ldots$ we have denoted the Fock states $|d_{\uparrow}\,\bar{u}_{\uparrow}\rangle$ and $|d_{\downarrow}\,\bar{u}_{\downarrow}\rangle$ with aligned quark helicities, and Fock
states with additional partons. The connection of the light-cone wave
function $\psi(u,k_{\perp})$ with the distribution amplitude defined in
Eq. (25) is
$$\int\frac{d^{2}k_{\perp}}{16\pi^{3}}\,\psi(u,k_{\perp})=\frac{1}{2\sqrt{6}}\,\varphi(u).$$
(48)
The probability to find the $d\bar{u}$ Fock state with antialigned
helicities in the meson $X$ is
$$P=\int du\,\frac{d^{2}k_{\perp}}{16\pi^{3}}\,|\psi(u,k_{\perp})|^{2}.$$
(49)
This should be below 1 since it is the probability to find a current
$q\bar{q}$ pair in the meson, without further gluons or sea quark
pairs. Note that this is different from the $q\bar{q}$ wave functions
in constituent quark models, which are by definition normalized to
1. Let us now use the relation (49) to estimate the size of
$\varphi(u)$. In order to achieve this, we need to make an ansatz for
the $k_{\perp}$ dependence. A form consistent with several theoretical
requirements [34, 35] is
$$\psi(u,k_{\perp})=\frac{16\pi^{2}a^{2}}{u(1-u)}\exp\left[-\frac{a^{2}k_{\perp}^{2}}{u(1-u)}\right]\frac{1}{2\sqrt{6}}\,\varphi(u),$$
(50)
where the prefactor of the exponential is imposed by the
normalization (48). The parameter $a$ plays the role of a transverse
size parameter of the $d\bar{u}$ pair in the meson. For the pion,
Brodsky and Lepage [34] obtained $a_{\pi}\approx 0.86$ GeV${}^{-1}$ with the above ansatz and the asymptotic form
$\varphi_{\pi}(u)=f_{\pi}\,6u(1-u)$ for the distribution amplitude. This
corresponds to an average transverse momentum $\langle k_{\perp}^{2}\rangle\approx(370$ MeV)${}^{2}$ and to a Fock state probability of
$P_{\pi}\approx 0.25$. We will take the same values for the mesons we
discuss here, which is certainly a crude assumption but should give
the correct order of magnitude.
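The pion numbers quoted above can be checked against the ansatz (50). Inserting the asymptotic distribution amplitude $\varphi_{\pi}(u)=f_{\pi}\,6u(1-u)$ and performing the $k_{\perp}$ and $u$ integrals analytically (a short calculation not spelled out in the text) gives $P_{\pi}=2\pi^{2}(a_{\pi}f_{\pi})^{2}$, consistent with Eq. (51) below for $B_{0}=1$, and $\langle k_{\perp}^{2}\rangle=1/(10\,a_{\pi}^{2})$. A minimal numeric check, with $f_{\pi}=130.7$ MeV as an assumed input:

```python
import math

# Consistency check of the pion numbers quoted for the ansatz (50),
# using closed-form moments derived under the asymptotic phi_pi.
a_pi = 0.86            # transverse size parameter, GeV^-1, from the text
f_pi = 0.1307          # GeV, assumed pion decay constant

P_pi = 2 * math.pi**2 * (a_pi * f_pi)**2   # q qbar Fock probability
k2_avg = 1.0 / (10 * a_pi**2)              # <k_perp^2> in GeV^2

print(f"P_pi = {P_pi:.2f}")                                   # ~ 0.25
print(f"sqrt<k_perp^2> = {1e3 * math.sqrt(k2_avg):.0f} MeV")  # ~ 370 MeV
```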
With the ansatz (50) and the Gegenbauer expansion
(27) we obtain
$$P=2\pi^{2}(af^{\varphi})^{2}\left(B_{0}^{2}+\sum_{n=1}^{\infty}\frac{3(n+2)(n+1)}{2(2n+3)}\,B_{n}^{2}\right).$$
(51)
Consider now a meson for which $B_{0}=0$, such as the $a_{2}$ or
$K_{2}^{*}$. If we take $a=a_{\pi}$ and $P=P_{\pi}\approx 0.25$ and retain
only the term with $B_{1}$ in the Gegenbauer expansion, we obtain
$$|f^{\varphi}B_{1}|\approx 100~{}\mbox{MeV}.$$
(52)
Including the zeroth term $f^{\varphi}B_{0}$ in the Gegenbauer expansion,
as is appropriate for the charged $a_{0}$, $K_{0}^{*}$ and $b_{1}$, would
decrease this estimate by about $5\%$ for the $K_{0}^{*}$ when taking
$f_{K_{0}^{*}}=42$ MeV. The effect of that term for the $a_{0}$ or $b_{1}$ can
be neglected even more safely.
In order to explore the dependence of our estimate on the ansatz we
made for the $k_{\perp}$ dependence of the wave function, we take an
alternative form
$$\psi(u,k_{\perp})=\frac{16\pi^{2}\tilde{a}^{4}}{u^{2}(1-u)^{2}}\,k_{\perp}^{2}\exp\left[-\frac{\tilde{a}^{2}k_{\perp}^{2}}{u(1-u)}\right]\frac{1}{2\sqrt{6}}\,\varphi(u),$$
(53)
which has a node at $k_{\perp}=0$. For the Fock state probability we
find the same expression as (51) with $a$ replaced by
$\tilde{a}/\sqrt{2}$. Choosing $\tilde{a}=a$ we then get an estimate
of $|f^{\varphi}B_{1}|$ larger by a factor $\sqrt{2}$. If instead one
requires the average $\langle k_{\perp}^{2}\rangle$ to be the same with
the two forms (50) and (53), one finds
$\tilde{a}=\sqrt{3}a$ and thus an estimate of $|f^{\varphi}B_{1}|$
smaller by a factor of $\sqrt{2/3}$. Given these observations we
expect that (52) should give the correct order of magnitude
of the first Gegenbauer coefficient.
We should add that this does not hold for mesons that are not
$q\bar{q}$ bound states in the constituent quark picture but for
instance made from $q\bar{q}q\bar{q}$, which may be the case for one
of the $a_{0}$ mesons. It is plausible that for such a system the
probability of finding a single current $q\bar{q}$ pair in this meson
is reduced compared with the one of a conventional $q\bar{q}$
state. Correspondingly, its twist-two distribution amplitude $\varphi$
and the coefficients $f^{\varphi}B_{n}$ would be smaller than estimated
here.
Our considerations are easily adapted to the case of mesons where $B_{1}$
is zero or isospin suppressed, such as the $\pi(1300)$, the $\pi_{2}$,
or the $\rho_{3}$. Retaining only $B_{2}$ in the Gegenbauer expansion of
$\varphi(u)$ and taking as before $a\approx a_{\pi}$ and $P=P_{\pi}\approx 0.25$, we obtain
$$|f^{\varphi}B_{2}|\approx 80~{}\mbox{MeV}.$$
(54)
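The estimates (52) and (54) follow from solving Eq. (51) for a single retained Gegenbauer coefficient, with $a=a_{\pi}$ and $P=P_{\pi}\approx 0.25$ as above:

```python
import math

# Estimates (52) and (54): solve Eq. (51) for |f^phi B_n| when only
# the n-th term of the Gegenbauer sum is retained.
a = 0.86      # transverse size parameter a_pi, GeV^-1
P = 0.25      # q qbar Fock state probability P_pi

def f_phi_Bn(n):
    """|f^phi B_n| in GeV from Eq. (51) with only the n-th term kept."""
    c_n = 3 * (n + 2) * (n + 1) / (2 * (2 * n + 3))
    return math.sqrt(P / (2 * math.pi**2 * a**2 * c_n))

print(f"|f B1| = {1e3 * f_phi_Bn(1):.0f} MeV")   # ~ 100 MeV, Eq. (52)
print(f"|f B2| = {1e3 * f_phi_Bn(2):.0f} MeV")   # ~ 80 MeV, Eq. (54)
```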
So far we have not displayed the dependence of both the distribution
amplitude $\varphi$ and the light-cone wave function $\psi$ on the
factorization scale $\mu$, which physically represents the resolution
scale of the $q\bar{q}$ pair. Our above estimates are understood as
corresponding to a hadronic scale, say, $\mu=1$ GeV. Evolving up to
$\mu=m_{b}$ we obtain $|f^{\varphi}B_{1}|\approx 75$ MeV from
Eq. (52) and $|f^{\varphi}B_{2}|\approx 50$ MeV from
Eq. (54).
To conclude this section, we wish to point out that experimental
constraints on the distribution amplitudes for the neutral mesons
$X=a_{0}$, $a_{2}$, $\pi(1300)$, $\pi_{2}$ can be obtained from the process
$\gamma^{*}\gamma\to X$ at virtualities $Q^{2}$ of the photon much larger
than the meson mass. This can be measured in $e^{+}e^{-}\to e^{+}e^{-}X$, and
the CLEO data for $X=\pi,\eta,\eta^{\prime}$ are in fact one of our best
sources of information on the corresponding distribution amplitudes
[36]. To leading order in $1/Q^{2}$ and in $\alpha_{s}$,
the amplitude for $\gamma^{*}\gamma\to X$ is proportional to $f^{\varphi}(B_{0}+B_{2}+B_{4}+\ldots)$ for mesons with unnatural parity, and to
$f^{\varphi}(B_{1}+B_{3}+B_{5}+\ldots)$ for mesons with natural
parity. According to our above estimates, one then expects cross
sections comparable to the one for $\pi$ production, so that the
measurement of these reactions at large $Q^{2}$ may well be in the reach
of the $B$ factories.
Obscured star formation in intermediate-density environments:
A Spitzer study of the Abell 901/902 supercluster
Anna Gallazzi (1), Eric F. Bell (1), Christian Wolf (2), Meghan E. Gray (3), Casey Papovich (4), Marco Barden (5), Chien Y. Peng (6), Klaus Meisenheimer (1), Catherine Heymans (7), Eelco van Kampen (5), Rachel Gilmour (8), Michael Balogh (9), Daniel H. McIntosh (10), David Bacon (11), Fabio D. Barazza (12), Asmus Böhm (13), John A.R. Caldwell (14), Boris Häußler (3), Knud Jahnke (1), Shardha Jogee (15), Kyle Lane (3), Aday R. Robaina (1), Sebastian F. Sanchez (16), Andy Taylor (17), Lutz Wisotzki (12), Xianzhong Zheng (18)
Affiliations:
(1) Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany; [email protected]
(2) Department of Physics, Denys Wilkinson Bldg., University of Oxford, Keble Road, Oxford, OX1 3RH, UK
(3) School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, UK
(4) Department of Physics, Texas A&M University, College Station, TX 77843, USA
(5) Institute for Astro- and Particle Physics, University of Innsbruck, Technikerstr. 25/8, A-6020 Innsbruck, Austria
(6) NRC Herzberg Institute of Astrophysics, 5071 West Saanich Road, Victoria, Canada V9E 2E7
(7) Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, Canada V6T 1Z1
(8) European Southern Observatory, Alonso de Cordova 3107, Vitacura, Casilla 19001, Santiago 19, Chile
(9) Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1
(10) Department of Astronomy, University of Massachusetts, 710 North Pleasant Street, Amherst, MA 01003, USA
(11) Institute of Cosmology and Gravitation, University of Portsmouth, Hampshire Terrace, Portsmouth PO1 2EG
(12) Laboratoire d’Astrophysique, École Polytechnique Fédérale de Lausanne (EPFL), Observatoire, CH-1290 Sauverny, Switzerland
(13) Astrophysikalisches Institut Potsdam, An der Sternwarte 16, D-14482 Potsdam, Germany
(14) University of Texas, McDonald Observatory, Fort Davis, TX 79734, USA
(15) Department of Astronomy, University of Texas at Austin, 1 University Station, C1400 Austin, TX 78712-0259, USA
(16) Centro Hispano Aleman de Calar Alto, C/Jesus Durban Remon 2-2, E-04004 Almeria, Spain
(17) The Scottish Universities Physics Alliance (SUPA), Institute for Astronomy, University of Edinburgh, Blackford Hill, Edinburgh, EH9 3HJ, UK
(18) Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008, PR China
Abstract
We explore the amount of obscured star formation as a function of environment in the A901/902
supercluster at $z=0.165$ in conjunction with a field sample drawn from the A901 and CDFS fields, imaged with HST as part of the
STAGES and GEMS surveys. We combine the
combo-17 near-UV/optical SED with Spitzer 24µm photometry to estimate both the unobscured and obscured star formation in
galaxies with $\rm M_{\ast}>10^{10}M_{\odot}$. We find that the star formation activity in massive galaxies
is suppressed in dense environments, in agreement with previous studies. Yet, nearly 40% of the star-forming galaxies
have red optical colors at intermediate and high densities. These red systems are not starbursting; they have star formation rates per unit
stellar mass similar to or lower than those of blue star-forming galaxies. More than half of the red star-forming galaxies have low
IR-to-UV luminosity ratios and relatively high Sersic indices, and they are equally abundant at all densities. They might be gradually
quenching their star formation, possibly but not necessarily under the influence of gas-removing environmental processes. The other $\gtrsim$40% of the
red star-forming galaxies have high IR-to-UV luminosity ratios, indicative of high dust obscuration. They have relatively high
specific star formation rates and are more abundant at intermediate densities. Our results indicate that while there is
an overall suppression in the star-forming galaxy fraction with density, the small amount of star formation surviving the
cluster environment is to a large extent obscured, suggesting that environmental interactions trigger a phase of obscured star
formation, before complete quenching.
galaxies: general — galaxies: evolution — galaxies: stellar content
ApJ in press, November 20, 2020
1 Introduction
Much observational evidence gathered so far has established that the environment in which galaxies live plays an important role in shaping
their properties, such as their star formation activity, gas content, and morphology, in the sense that galaxies in regions of high galaxy density
tend to have less ongoing star formation, less cold gas and more bulge-dominated morphology
(Oemler, 1974; Dressler, 1980; Lewis et al., 2002; Gavazzi et al., 2002; Gómez et al., 2003; Balogh et al., 2004a; Kauffmann et al., 2004; McIntosh et al., 2004; Baldry et al., 2006). Yet, a real concern is that most star formation
indicators used to date are based on optical properties and are susceptible to the effects of dust attenuation. Indeed a number of studies using
mid-infrared or radio-derived star formation rates (SFRs) have found evidence for some unexpectedly intense bursts of star formation in
intermediate-density regions (e.g. Miller & Owen, 2002; Coia et al., 2005; Fadda et al., 2008). The object of this paper is to use wide-field photometric redshift data, deep
Spitzer data and wide-field HST imaging of the $z=0.165$ Abell 901/902 supercluster to explore the incidence of dust-obscured star formation
for low-SFR galaxies: is dust-obscured star formation important even at low SFRs, and how does it vary with environment?
1.1 Environment, star formation and morphology
Historically, the first clear evidence that environment influences galaxy properties was the observed
predominance of early-type galaxies in low-redshift clusters with respect to the field, along with a paucity of late-type, emission-line
galaxies (e.g. Morgan, 1961; Dressler, 1980). The so-called morphology-density relation appears to be in place already at $z\sim 1$, but varies quantitatively with redshift:
between $z\sim 0.5$ and the present, the fraction of late-type spirals in intermediate-density regions decreases in favour of the population of
S0 galaxies (Dressler et al., 1997; Smith et al., 2005; Postman et al., 2005). This has suggested that spiral galaxies evolve into smooth and passive systems such as S0s as they
enter the dense environment of galaxy clusters.
Connected to the morphology-density relation is the decrease of the average SFR, as derived from optical colors or emission lines, with increasing
environmental density (e.g. Balogh et al., 1998; Gavazzi et al., 2002; Pimbblet et al., 2002; Gómez et al., 2003). Among the two, the relation between color (or stellar age) and environment appears to be
the most fundamental one: at fixed color, morphology shows only a weak residual dependence on environment (Blanton et al., 2005; Wolf et al., 2007). Moreover, the
link between the morphology-density and the SFR-density relations has significant scatter: not all spirals in clusters appear to be
star-forming, at least on the basis of their optical spectra (Poggianti et al., 1999; Goto et al., 2003). The question remains whether these spirals are really
passive or whether they have star formation activity that escapes detection in the optical. Indeed, selection of passive spirals on the basis of their
emission lines can be contaminated by dusty early-type spirals with low levels of star formation activity, which could instead be detected using e.g.
mid-infrared colors (Wilman et al., 2008).
The SFR-density relation extends to very low local galaxy number densities (e.g. Lewis et al., 2002; Gómez et al., 2003) and dark matter densities
(Gray et al., 2004), corresponding to the outskirts of clusters and the densities of groups. This suggests that not only the cores of
clusters impact galaxy properties but galaxies may experience significant pre-processing in systems with lower density and lower
velocity dispersion such as groups, before entering the denser and hotter environment of the cluster
(e.g. Zabludoff, 2002; Fujita, 2004).
1.2 Environmental physical processes
Several processes can act on galaxies as they interact with their surrounding environment. The intensity and timescale of individual processes
may also vary with galaxy mass and during the galaxy lifetime as it moves through different density environments (for a
review see Boselli & Gavazzi, 2006). The gas content and hence star formation activity of galaxies can be affected by interaction with the intra-cluster
medium (ICM). The cold gas reservoir can be stripped due to the ram-pressure experienced by galaxies falling at high velocities in the dense
ICM of the cluster (Gunn & Gott, 1972; Quilis et al., 2000). Ram-pressure stripping can lead to fast truncation of star formation and its action can be
recognized from truncated H$\alpha$ profiles (Koopmann & Kenney, 2004), asymmetric gas distribution and deficiency in the cold HI gas (Giovanelli & Haynes, 1985; Cayatte et al., 1990; Solanes et al., 2001) of
many spiral galaxies in local clusters. It is possible that on the front of compression of the cold gas due to ram-pressure a burst of star
formation is induced (e.g. Gavazzi & Jaffe, 1985; Gavazzi et al., 2003). Another gas-stripping process, which affects star formation on longer timescales (a
few Gyr) than ram-pressure, is so-called ‘strangulation’ or ‘starvation’: assuming that galaxies are surrounded by a halo of hot diffuse
gas, this halo can be removed when galaxies become satellites of larger dark matter halos (Larson et al., 1980; Balogh & Morris, 2000). Star formation can continue
by consuming the cold disk gas, but will eventually die out for lack of fresh gas supply.
The gas distribution, star formation activity and morphology of galaxies can also be altered via interactions with other galaxies. Mergers
between two equally massive gas-rich galaxies can lead to the formation of a spheroidal system (e.g. Toomre & Toomre, 1972; Barnes, 1988; Kauffmann et al., 1993). The
merger can trigger an intense burst of star formation (e.g. Kennicutt et al., 1987), which rapidly consumes the cold gas and is then shut down by
feedback processes (Springel et al., 2005). Merging and slow galaxy-galaxy encounters are favoured in groups and in the infall region of clusters
(e.g. Moss, 2006). At higher densities, galaxies can be affected by the cumulative effect of several rapid encounters with other cluster
members, a mechanism known as ‘galaxy harassment’ (Moore et al., 1998). After a transient burst of star formation, galaxy harassment leads to
substantial change in morphology. This mechanism can start to operate at intermediate densities, inducing density fluctuations in the gas
(Porter et al., 2008).
1.3 Dust-obscured star formation in dense environments?
The net effect of the various mechanisms of interaction of galaxies with environment is an accelerated depletion or exhaustion of the gas
reservoir and hence a suppression of the star formation activity. Many of these mechanisms, however, can lead to a temporary enhancement of
star formation, either due to gas compression (e.g. ram-pressure) or density fluctuations that funnel the gas toward the center triggering
nuclear activity (e.g. tidal interactions). The gas and dust column density is likely to increase during such processes and star formation can
be to a large extent obscured and escape optical detection. Star formation indicators that are not affected by dust attenuation need to be
adopted in order to quantify the occurrence of these obscured star formation episodes.
Already several studies based on observations in the thermal infrared (IR) or in the radio have identified significant populations of
IR-bright or radio-bright galaxies in the outer regions of nearby and intermediate-redshift galaxy clusters
(e.g. Smail et al., 1999; Miller & Owen, 2002, 2003; Best, 2004; Coia et al., 2005). Miller & Owen (2002) find that up to 20% of the galaxies in 20 nearby Abell clusters have
centrally-concentrated dust-obscured star formation. These galaxies have different spatial distribution with respect to normal star-forming
galaxies or active galactic nuclei (AGN): they are preferentially found in intermediate-density regions. In the A901/902 cluster at $z=0.165$, Wolf et al. (2005)
have identified an excess of dusty red galaxies with young stellar populations in the intermediate-density, infalling
region of the cluster. Other studies have identified a population of red star-forming galaxies both in the field (Hammer et al., 1997) and in
clusters (Verdugo et al., 2008). These galaxies could be mistakenly classified as post-starburst on the basis of their weak emission lines
(Poggianti et al., 1999; Bekki et al., 2001). It is interesting to note that populations of red, IR-bright star-forming galaxies are often found in filaments
(e.g. Fadda et al., 2000, 2008; Porter et al., 2008) and in unvirialized or merging clusters (e.g. Miller & Owen, 2003; Geach et al., 2006; Moran et al., 2007). Significant populations
of starburst, IR-bright galaxies have been also found in a dynamically young cluster at $z=0.83$ by Marcillac et al. (2007). These systems could in
fact be more abundant at higher redshift (Saintonge et al., 2008), as expected from the increase in cosmic star formation activity. Recently, Elbaz et al. (2007) have shown
that the detection of these galaxies with the use of dust-independent SFR indicators can even lead to a reversal of the star-formation–density
relation at $z\sim 1$.
In this work we explore, as a function of local galaxy density, the importance in the local Universe of the star formation ‘hidden’ among red
galaxies, which would be missed by optical, dust-sensitive SFR indicators. Uniquely, we wish to push to modest SFRs ($\rm\sim 0.2~M_{\odot}~yr^{-1}$), in
order to constrain the star formation mode of typical (not rare, starbursting) systems. There are two key requirements for such a study: 1)
obscuration-free SFR indicators, ideally given by the combination of deep thermal IR and UV, in order to obtain a complete census of the total (obscured
and unobscured) SFR; 2) a long baseline in environmental density covering from the cluster cores to the field in order to quantitatively characterize the
SFR-density relation. We analyse the combo-17 CDFS and A901 fields at $z<0.3$, complementing the UV/optical photometry from combo-17 with Spitzer
24µm data and with HST V-band imaging from the Galaxy Evolution from Morphology and SEDs (GEMS) survey and the Space Telescope A901/902
Galaxy Evolution Survey (STAGES). The A901 field is particularly interesting in that it contains the supercluster A901/902 at $z=0.165$, a complex system
with four main substructures probably in the process of accreting or merging, where mechanisms altering the star formation and morphological properties
of galaxies might be favoured (e.g. Gray et al., 2002, 2004, 2008; Wolf et al., 2005, 2007; Heymans et al., 2008).
We present the sample and the data in Section 2.1 and describe the derivation of SFR and environmental density in
Sections 2.2 and 2.3. After discussing the classification into star-forming and quiescent galaxies in
Section 3.1, we explore the dependence on local galaxy density of the fraction of (obscured and
unobscured) star-forming galaxies and their contribution to the total star-formation activity as a function of environment
(Section 3.2). The properties of red star-forming galaxies, such as their SFR, mass, morphology and dust attenuation, are
compared to those of unobscured star-forming galaxies in Section 3.3. We summarize and discuss our results in
Section 4. Throughout the paper we assume a cosmology with $\rm\Omega_{m}=0.3$,
$\rm\Omega_{\Lambda}=0.7$ and $\rm H_{0}=70km~{}s^{-1}~{}Mpc^{-1}$.
2 The Data
We describe here the sample analysed and the data available. Based on this, we describe the measurement of derived
parameters such as stellar mass, star formation rate (SFR) and environmental density.
2.1 The sample and the data
The sample analysed is drawn from two southern fields, the extended Chandra Deep Field South and the A901 field,
covered in optical by the combo-17 survey (Wolf et al., 2003) and at 24µm by MIPS on board the Spitzer Space
Telescope (Rieke et al., 2004). combo-17 has imaged three $34^{\prime}\times 33^{\prime}$ fields (CDFS, A901, S11) down to $\rm R\sim 24$
in 5 broad and 12 medium bands sampling the optical spectral energy distribution (SED) from 3500 to 9300Å. The
17-passband photometry in conjunction with a library of galaxy, star and AGN template spectra has allowed object
classification and redshift assignment for 99% of the objects, with a redshift accuracy of typically $\delta z/(1+z)\sim 0.02$.
Spitzer has imaged at 24µm a field of $1\deg\times 0.5\deg$ around CDFS as part of the MIPS Guaranteed Time Observations (GTOs) and an
equally-sized field around the Abell 901/902 supercluster (A901 field) as part of Spitzer GO-3294 (PI: Bell). The data have been acquired in a scan-map
mode with individual exposures of 10 s. In CDFS, the 24µm data reach a $5\sigma$ depth of 83$\mu$Jy (see Papovich et al., 2004, for a technical description of source
detection and photometry). In A901, the same exposure time reached a 5$\sigma$ depth of 97$\mu$Jy, owing to the high contribution of zodiacal
light at its near-ecliptic position. In what follows, we use both catalogues to 83$\mu$Jy (5$\sigma$ and 4$\sigma$ for CDFS and A901, respectively), noting
that our conclusions are little affected if we adopt brighter limits for sample selection. The 24µm sources have been matched to galaxies with a
photometric redshift estimate in the combo-17 catalogue, adopting a 1” matching radius. We omit sources within 4’ of the bright M8 Mira variable IRAS
09540-0946 to reduce contamination from spurious sources in the wings of its PSF.
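The 1″ nearest-neighbour matching of 24µm sources to combo-17 galaxies described above can be sketched as follows. This is a minimal, flat-sky illustration with hypothetical coordinate arrays, not the actual matching code; a production match would use proper spherical separations (e.g. an astropy `SkyCoord` match).

```python
import numpy as np

def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec=1.0):
    """Match each source in catalogue 1 to the nearest source in
    catalogue 2 within radius_arcsec.  Returns an index array into
    catalogue 2, with -1 for unmatched sources.  Coordinates are in
    degrees; a flat-sky approximation, valid for small separations."""
    idx = np.full(len(ra1), -1, dtype=int)
    r_deg = radius_arcsec / 3600.0
    for i, (r, d) in enumerate(zip(ra1, dec1)):
        dra = (ra2 - r) * np.cos(np.radians(d))  # shrink RA offsets by cos(dec)
        ddec = dec2 - d
        sep2 = dra**2 + ddec**2
        j = np.argmin(sep2)
        if sep2[j] <= r_deg**2:
            idx[i] = j
    return idx

# toy usage: two 24um sources, only the first has a counterpart 0.5" away
ra24 = np.array([148.5, 148.6])
dec24 = np.array([-9.9, -9.8])
ra_opt = np.array([148.5 + 0.5 / 3600.0 / np.cos(np.radians(-9.9))])
dec_opt = np.array([-9.9])
print(crossmatch(ra24, dec24, ra_opt, dec_opt))  # -> [ 0 -1]
```

The cos(dec) factor keeps the RA offset an on-sky distance; omitting it would inflate the matching radius along right ascension away from the equator.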
The A901 combo-17 field hosts the cluster complex A901/902, composed of the substructures A901a, A901b, A902 and the SW group at a
redshift of $z=0.165$ within a projected area of $\rm 5\times 5~Mpc^{2}~h^{-2}_{70}$. A quarter square degree field centered on the
A901/902 supercluster has been imaged in the filter F606W with the $HST$ Advanced Camera for Surveys (ACS), producing an 80-orbit
mosaic as part of the STAGES survey (Gray et al., 2008). An area of 800 square arcminutes centered on the extended CDFS
has also been imaged with $HST$ ACS in the F606W and F850LP filters, as part of the GEMS program (Rix et al., 2004). In the GEMS
survey object detection was carried out using the SExtractor software (Bertin & Arnouts, 1996) in a dual configuration that optimizes
deblending and detection threshold (Caldwell et al., 2008). As described in Gray et al. (2008), a similar strategy for source detection
has been adopted in the STAGES survey. Both GEMS and STAGES imaging data have been processed using the GALAPAGOS pipeline
(M. Barden et al. 2009, in prep.), which performs profile fitting and extracts Sersic indices (which we then use to morphologically characterise
our sample) with the GALFIT fitting code (Peng et al., 2002).
X-ray data are also available for both the CDFS and the A901 field. X-ray data for the CDFS are available from the $\sim 1$Ms Chandra point
source catalogue published by Alexander et al. (2003). The A901 field has been imaged by XMM with a 90 ks exposure and the catalogue is presented in
Gilmour et al. (2007). We use the X-ray information to identify possible AGN contribution among star-forming galaxies. To account for the different
sensitivity of Chandra and XMM, we consider only sources with full band flux $>1.8\times 10^{-15}\rm erg~{}cm^{-2}~{}s^{-1}$, the faintest flux
reached in the A901 field.
In this work we wish to study the dependence on environment of the star formation properties of low-redshift galaxies. To this
purpose, we define a sample of galaxies in the redshift range $0.05<z<0.3$ from the CDFS and A901 fields (limited to the areas
covered completely by Spitzer and combo-17), down to an absolute magnitude of $\rm M_{V}<-18$ (limited to those objects
classified as galaxies by combo-17). The sample peaks at an apparent magnitude of $\rm m_{R}\sim 21$, covering the range $\rm 18\lesssim m_{R}\lesssim 23$, with a (magnitude-dependent) redshift accuracy of $\sigma_{z}\lesssim 0.02$ for the majority of the galaxies,
with a tail up to 0.05 (Wolf et al., 2004, 2005). The total sample comprises 1865 galaxies (1390 in the A901 field and 475 in the
CDFS), of which 601 have a detection at $\rm 24\micron$ above the $\rm 5\sigma$ level.
We will sometimes refer to ‘cluster’ and ‘field’ sample. The ‘cluster’ sample is defined following Wolf et al. (2005), i.e. galaxies in the A901 field with
redshift $0.155<z<0.185$, and it includes 647 galaxies. With this selection of the bright end of the cluster population, the completeness reaches about
the 92% level down to a magnitude of $\rm R\sim 23$, but the contamination also rises to 40% (while it stays below 20% for magnitudes brighter than
$\rm R=22$). The ‘field’ sample is defined on the A901 field in the redshift ranges $0.05<z<0.125$ and $0.215<z<0.3$, and on the CDFS over the entire
redshift range $0.05<z<0.3$, with a total of 981 galaxies.
2.2 Stellar mass and star formation rate
Stellar mass estimates have been derived as outlined in Borch et al. (2006), using a set of template SEDs generated with the
Pégase code, based on a library of three-component model star formation histories (SFHs), devised so as to
reproduce the sequence of UV-optical template spectra collected by Kinney et al. (1996). The best-fitting SED, and hence
the stellar mass-to-light ratio ($\rm M_{\ast}/L$), is obtained by comparing the model colors with the observed ones. Stellar
masses were derived adopting a Kroupa et al. (1993) initial mass function (IMF). Adopting a Kroupa (2001) or
Chabrier (2003) IMF would yield differences in stellar mass of less than 10%. Random errors amount to $\lesssim 0.3$ dex on a
galaxy-by-galaxy basis, while systematic uncertainties are typically 0.1 dex for old stellar populations and up to
0.5 dex for galaxies with strong bursts (see also Bell et al., 2007).
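The template-matching step behind these mass estimates can be illustrated as a simple chi-square minimization over model colors. The toy templates, errors and mass-to-light ratios below are made up for illustration and are not the actual Pégase library.

```python
import numpy as np

def best_fit_ml(obs_colors, obs_errors, template_colors, template_ml):
    """Pick the template whose colors best match the observed ones
    (minimum chi^2) and return its stellar mass-to-light ratio.
    template_colors has shape (n_templates, n_colors);
    template_ml has shape (n_templates,)."""
    chi2 = np.sum(((template_colors - obs_colors) / obs_errors) ** 2, axis=1)
    return template_ml[np.argmin(chi2)]

# toy library of three templates with (U-V, V-R) colors and M/L values
tmpl_colors = np.array([[0.5, 0.3], [1.2, 0.5], [2.0, 0.7]])
tmpl_ml = np.array([0.5, 1.5, 4.0])
obs = np.array([1.1, 0.45])       # observed colors of one galaxy
err = np.array([0.1, 0.05])       # photometric color errors
print(best_fit_ml(obs, err, tmpl_colors, tmpl_ml))  # -> 1.5
```

Multiplying the returned $\rm M_{\ast}/L$ by the observed luminosity then yields the stellar mass; the 17-band fit works the same way, just over many more colors and templates.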
The best indicator of the galaxy SFR combines the bolometric IR luminosity, assuming that it represents the bolometric luminosity of totally
obscured young stars, and total UV luminosity or recombination lines such as H$\alpha$ that trace instead the emission from unobscured young
stars, thus giving a complete census of the luminosity emitted by young stars in a galaxy (e.g. Bell, 2003; Calzetti et al., 2007). The infrared
data, combined with the NUV-optical combo-17 SED, allow us to use such a SFR indicator. To do so, we first need to estimate total UV and IR
luminosities from monochromatic measurements.
To measure the total IR flux, we would ideally need measurements at longer wavelengths (e.g. Helou et al., 1988; Dale & Helou, 2002). We only have data in the
24µm MIPS passband, which provides us with luminosities at rest-frame wavelengths $\sim 23-18.5$µm over the redshift interval
$0.05-0.3$. The monochromatic 12µm–24µm luminosity correlates well with the total IR luminosity, although it has some residual dependence on
the gas metallicity (e.g. Papovich & Bell, 2002; Relaño et al., 2007; Calzetti et al., 2007). To convert the 24µm luminosity into total IR luminosity ($8-1000$µm) we use
the Sbc template of the normal star-forming galaxy VCC 1987 from Devriendt et al. (1999). While there is certainly
an intrinsic diversity in infrared spectral shape at a given luminosity or stellar mass, this results in a $\lesssim 0.3$ dex uncertainty in total
IR luminosity, as inferred using the full range of Devriendt et al. (1999) templates.
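Applying such a template-derived correction can be sketched as below. The correction factors are made up for illustration, not values from the Devriendt et al. (1999) Sbc template; they are tabulated against redshift because the fixed observed band samples rest-frame $\sim 23-18.5$µm over $0.05<z<0.3$, so the conversion to total $8-1000$µm luminosity shifts with $z$.

```python
import numpy as np

# Illustrative correction factors (assumed values, NOT the actual
# Devriendt et al. template): ratio of total 8-1000um luminosity to
# the observed-frame 24um monochromatic luminosity, vs. redshift.
Z_GRID = np.array([0.05, 0.10, 0.165, 0.30])
CORR = np.array([9.0, 10.0, 11.0, 13.0])

def l_ir(nu_l_nu_24, z):
    """Total IR (8-1000um) luminosity from the monochromatic 24um
    luminosity, via a template-derived, redshift-interpolated
    correction (same units in and out)."""
    return np.interp(z, Z_GRID, CORR) * nu_l_nu_24
```

Varying the correction over the full range of templates at fixed 24µm luminosity is what produces the $\lesssim 0.3$ dex systematic quoted above.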
The total UV luminosity ($1216-3000$Å) is estimated from the luminosity $l_{\nu,2800}$ in the combo-17 synthetic band centered at $2800$Å as
$L_{UV}=1.5\nu l_{\nu,2800}$. The rest-frame $2800$Å band falls blueward of the observed combo-17 U-band (centered at rest-frame 3650Å) for
galaxies at $z\lesssim 0.3$. The rest-frame luminosity at 2800Å thus requires an extrapolation of the best-fit model over about 200Å at the average
redshift $z\sim 0.2$ of the sample. The factor of 1.5 in the definition of $\rm L_{UV}$ accounts for the UV spectral shape of a 100-Myr old stellar population with constant SFR
(Bell et al., 2005).
We then translate UV and IR luminosities into SFR estimates following the calibration derived by Bell et al. (2005) from the Pégase stellar population
synthesis code, assuming a 100-Myr old stellar population and a Kroupa (2001) IMF:
$$\mathrm{SFR}~[M_{\odot}\,\mathrm{yr}^{-1}]=9.8\times 10^{-11}\,(L_{IR}+2.2\,L_{UV})$$
(1)
This calibration has been derived by Bell et al. (2005) under the same assumptions adopted in the calibration of Kennicutt (1998); the two calibrations
yield SFRs that agree within $\lesssim 30$%. The factor of 2.2 in front of the $\rm L_{UV}$ term in Equation 1 accounts for the light emitted by
young stars redward of 3000Å and blueward of 1216Å. We adopt Equation 1 to estimate the SFR for all galaxies detected at 24µm. For
galaxies which have upper limits to the 24µm flux, we omit the IR contribution and consider only the UV-optical emission. This is a rather
conservative approach: the SFR of MIPS-undetected galaxies calculated in this way represents a lower limit to the true SFR. On the other hand, including
the $\rm L_{IR}$ term calculated on the basis of the upper limit flux of $83\mu$Jy would overestimate the true SFRs of undetected galaxies.
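The combined indicator of Equation 1, including the conservative treatment of 24µm non-detections, can be sketched as follows. The luminosities are assumed to be in solar units, as in the Bell et al. (2005) calibration; Equation 1 as printed leaves the units implicit.

```python
def total_uv(nu_l_nu_2800):
    """Total UV (1216-3000 A) luminosity: L_UV = 1.5 * nu * l_nu(2800 A).
    The factor 1.5 accounts for the UV spectral shape of a 100-Myr-old
    stellar population with constant SFR."""
    return 1.5 * nu_l_nu_2800

def sfr(l_uv, l_ir=None):
    """Eq. (1): SFR [Msun/yr] = 9.8e-11 * (L_IR + 2.2 * L_UV), with
    luminosities in solar units (an assumption; see lead-in).  For
    24um non-detections pass l_ir=None: the IR term is omitted and
    the result is a lower limit to the true SFR."""
    l_ir = 0.0 if l_ir is None else l_ir
    return 9.8e-11 * (l_ir + 2.2 * l_uv)

# a galaxy with L_IR = 1e10 Lsun and L_UV = 1e9 Lsun
print(round(sfr(1e9, l_ir=1e10), 2))  # -> 1.2
```

Dropping the IR term for the same galaxy gives $\sim 0.22\,M_{\odot}\,\mathrm{yr}^{-1}$, which illustrates why MIPS-undetected SFRs are lower limits.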
We note that the adopted calibration relies on the assumption that the infrared luminosity traces the emission from young stars only. There are a few
caveats to this assumption. Nuclear activity can also be responsible for at least part of the IR emission. X-ray data and optical identification of
type-1 QSOs on both CDFS and A901 allow us to identify and exclude many AGNs from the sample, but we cannot exclude some contamination from obscured,
Compton-thick AGNs. Risaliti et al. (1999) find that among local Seyfert 2 galaxies about 75% are heavily obscured (with hydrogen column densities $\rm N_{H}>10^{23}cm^{-2}$) and $\sim$50% are Compton-thick ($\rm N_{H}>10^{24}cm^{-2}$). Among all 24µm-detected galaxies in our sample only $\sim$3% are
also X-ray detected. Given the relatively faint limit reached in X-ray ($\rm L_{X}\gtrsim 10^{41}~erg~s^{-1}$), it is reasonable to assume that we
potentially miss Compton-thick sources. Therefore, we expect only a $\sim$3% contribution by Compton-thick AGNs. Moreover, the presence of an AGN does
not necessarily imply that it dominates the total infrared luminosity (Rowan-Robinson et al., 2005). Indeed A. R. Robaina et al.(2009, in prep.), based on
Ramos Almeida et al. (2007) data and analysis, estimate that type-2 AGNs contribute only $\sim$26% of the total IR luminosity of their host galaxy.
In early-type galaxies, circumstellar dust around red giant stars is expected to contribute to the mid-IR flux (see Temi et al., 2005, 2007, on the sensitivity of IR
bands to different dust components in early-type galaxies). Nevertheless, the mid-IR in early-type galaxies can detect the presence of
intermediate-age stars and small amounts of ongoing star formation (Bressan et al., 2007; Young et al., 2008). As we discuss in Sec. 3.1, the majority of
the early-type red-sequence galaxies are not detected at 24µm. For those that are detected the SFR derived assuming that their IR luminosity traces
young stellar populations is in any case not sufficient to classify them as star-forming galaxies.
There are also some caveats in the use of UV luminosity as a tracer of young stars in early-type galaxies. While the UV can help to detect recent episodes of
low-level star formation, it can also be affected by evolved stellar populations (e.g. Rogers et al., 2007). These mainly contribute to the UV upturn at
1200Å, i.e. at shorter wavelengths than those we use, and therefore should not be a concern for the UV luminosities (and SFRs) derived in this work.
We thus believe that the caveats mentioned above do not appreciably affect the classification into star-forming and quiescent galaxies used in this work
(see Sec. 3.1) and our results. The combination of near-UV and deep 24µm data is indeed a powerful tool to detect unobscured and
obscured star formation not only for starbursting galaxies but also in the regime of normal star-forming galaxies.
2.3 Environmental density
The combination of the CDFS and of the A901 field, hosting the A901/902 supercluster, provides us with a large dynamic
range of galaxy environments. As mentioned above, we have a well-defined cluster sample, composed of galaxies within
$\pm 0.015$ of the redshift of the cluster down to a magnitude of $\rm M_{V}<-18$, and a comparison field sample. However,
we wish to characterise the environmental galaxy density in a continuous way, such that it allows us to exploit the long
baseline provided by the two fields.
We estimate the environmental density in a cylinder centered at the position of each galaxy in the sample, and express it in terms
of overdensity with respect to an average redshift-dependent background density. The average background density, $\rho_{N}$, is
calculated combining the three combo-17 fields, in redshift intervals of width 0.1. In each redshift bin the total number of
galaxies, including all objects classified as ‘galaxy’ down to $\rm R=23.5$ and correcting for completeness,111Galaxy
completeness maps were estimated from simulations as a function of aperture magnitude, redshift and $U-V$ color (see Wolf et al., 2004, for a
detailed discussion). is divided by the volume given by the total field area and the redshift depth. For each galaxy in
the sample, the local density is obtained by counting the number $N_{gal}$ of galaxies (down to $\rm R=23.5$, correcting for
completeness), in a cylinder centered at the position of the galaxy of radius 0.25 Mpc and depth given by the photometric redshift error
for that galaxy ($\geq 0.015$), and dividing by the volume $V$ of the cylinder corrected for edge effects. The local number density
is then normalized to the average background density interpolated at the redshift of the galaxy. The local overdensity is then
expressed as:
$${\delta_{N}=\frac{N_{gal}}{V~{}\rho_{N}}-1}$$
(2)
This estimate ranges from $\sim-1$ for very underdense regions, to $0$ for average-density regions up to $>4$ for the
densities characteristic of the cluster.
Because of the relatively large errors associated to photometric redshifts (compared to spectroscopic ones) the galaxy density is effectively measured in
volumes that extend $\rm\gtrsim 80~{}Mpc$ along the line of sight. In this respect the local density adopted here represents a hybrid between projected density
estimates (which neglect redshift information) and spectroscopic estimates (which smooth over much smaller scales of $\rm\lesssim 8~{}Mpc$). Thus, local densities
calculated with photometric redshifts are biased toward the cosmic mean and suffer on a galaxy-by-galaxy basis from contamination from low-density
interlopers in high-density regions (see Cooper et al., 2005, for a comparison of different density indicators). To quantify this effect, we have tested the
density measures defined in Equation 2 against mock galaxy catalogues (containing superclusters similar to A901/902), applying the completeness of
the combo-17 survey. Using Equation 2, we have measured on the mock catalogues ‘observed’ overdensities assuming realistic photometric redshift
errors (those achieved with combo-17, allowing also for catastrophic errors), and ‘real’ overdensities assuming the real observed redshift (including the
peculiar velocity) and a redshift depth of $3\times 10^{-3}$. The ‘observed’ overdensities give a density ranking similar to the ‘real’ overdensities, almost
independently of galaxy luminosity and redshift. However, the magnitude of the ‘observed’ overdensities is almost a factor of 10 lower than the ‘real’
overdensities, owing to the difference in redshift path used to calculate the overdensity.
For galaxies in the A901/902 cluster, we could also compare our density estimates to other independent density estimators.
Specifically, we compared with the projected galaxy density $\Sigma_{10}$ as defined by Wolf et al. (2007), which measures the number
density of galaxies in an adaptive aperture of radius given by the average of the distance to the $9^{th}$ and $10^{th}$ nearest
neighbour. The lower panel of Fig. 1 shows a good correlation between $\Sigma_{10}$ and the galaxy overdensity
$\rm\delta_{N}$ measured in a fixed aperture.
The upper panel of Fig. 1 compares $\rm\delta_{N}$ with a measure of the total surface mass density from a weak lensing analysis of the HST STAGES data (Heymans et al., 2008). In this analysis Heymans et al. (2008) present a pixelated map of the smoothed projected dark matter surface mass
density $\kappa$ of the A901/902 cluster along with noise $\sigma_{n}$ and systematic error maps $B$, in order to assess the reliability of each
feature. Following van Waerbeke (2000) we define a lensing density measure $\nu=\kappa/\sigma_{n}$ for the pixel region around each galaxy that corresponds
to $\rm\sim 20\times 20kpc^{2}$. For $\nu>>1$ we can calculate a corresponding mass estimate, following equation 4 in Heymans et al. (2008), where a galaxy
with a lensing density measure $\nu=4$, for example, is enclosed in a local dark matter mass of $\rm M(<20kpc)=1\times 10^{11}M_{\odot}$. For $\nu<1$
we enter a low to underdense regime, with the most negative regions showing the location of voids (Jain & Van Waerbeke, 2000; Miyazaki et al., 2002).
In this paper we are particularly interested in the low to intermediate density regions of the A901/902 cluster. Unfortunately for the weak lensing
analysis however, it is these lower density regions where systematic errors become important. We therefore introduce a selection criteria, following
Heymans et al. (2008), that the lensing density estimate $\nu$ is deemed reliable if the systematic error $B$ is either comparable to the noise $\sigma_{n}$ or
less than half the amplitude of the signal $\kappa$. Fig. 1 shows unreliable measures as open points. Comparing the reliable
lensing density measurements $\nu$ (filled points) with $\rm\delta_{N}$ shows a good correlation between these two environment variables. Taking only those
galaxies with a reliable lensing measure, we show, in the lower-left panel of Fig. 2, the position in the sky of the A901/902
cluster galaxies, color-coded according to their $\nu$ value as indicated in the upper-left panel. The corresponding right-hand panels refer to the
galaxy overdensity $\rm\delta_{N}$. We note in particular that the two dark matter peaks corresponding to the A901a and A902 cores are also identified as peaks in
the galaxy distribution. Galaxies in these regions follow the main relation between $\nu$ and $\rm\delta_{N}$ shown in the upper panels of
Fig. 1. The A901b core and SW group are instead associated to a lower galaxy density and show a larger spread to higher $\nu$ values at fixed
$\rm\delta_{N}$ that is not completely explained by larger errors on $\nu$.
Whilst weak gravitational lensing techniques can provide a direct measure of the total matter density, this environment variable is
integrated along the line of sight with contributions from mass at all redshifts. In the case of the A901/902 cluster it is a reasonable approximation to
place all the measured mass at the redshift of A901/902 as shown by Heymans et al. (2008) who find that the mass of this supercluster is significantly larger
than the known galaxy groups and the CBI cluster behind A902 (Taylor et al., 2004). However in the case of the CDFS field, mass is distributed fairly equally
along the line of sight at relatively low density. It would therefore be very difficult to obtain a local matter density measure for this field from a
weak lensing analysis even with the HST imaging that exists (see Heymans et al., 2005). For this reason we favour using $\delta_{N}$ as it permits local
density measurements in both the field and cluster environments.
We note that the large redshift depth assumed in the density measure affects in particular the cluster sample, for which one would expect overdensities
higher by about an order of magnitude. In what follows, however, we keep also for cluster galaxies the overdensities estimated over a depth set by the
photometric redshift error, since we want to study cluster and field galaxies simultaneously with a consistent density measure. The distribution in density
$\rm\delta_{N}$ for the sample as a whole is shown in Fig. 3. The dashed and dotted lines distinguish cluster galaxies from the field sample. As
expected the field sample is concentrated in environments with density similar to or below the average background density. Cluster galaxies instead dominate
at densities above $\rm\delta_{N}$$\sim 2$.
3 Results
We now describe the classification of galaxies on the basis of their star formation rate and optical color, that we will use throughout the paper
(Sec. 3.1). Unless otherwise specified, the terms ‘red star-forming’ and ‘obscured star-forming’ used in the text refer to the same class
of galaxies. We then investigate how the fraction of star-forming galaxies depends on galaxy environment, with particular attention to the extent of star
formation ‘hidden’ among red-sequence galaxies (Sec. 3.2). In Sec. 3.3 we analyse the star formation properties, morphology,
and dust attenuation of red star-forming galaxies, as opposed to quiescent ellipticals and blue-cloud galaxies, as a function of environment.
3.1 Galaxy classes
Fig. 4 shows the distribution in the color-magnitude plane of galaxies in the A901/902 cluster (left panel) compared to
galaxies in the field (right panel). The solid line indicates the magnitude-dependent color cut adopted to classify galaxies as red-sequence
(redward of the line) or blue-cloud galaxies (blueward of the line). The cut is set $0.25$ mag blueward of the color-magnitude relation fitted
by Bell et al. (2004) on the combined A901+CDFS fields at $0.2<z<0.3$. Although the exact fraction of blue/red galaxies depends on the chosen color
cut, it makes a little difference as long as the cut lies in the ‘gap’ between the ‘blue’ and the ‘red’ peaks of the color distribution.
Grey circles represent galaxies detected at 24µm, with symbol size scaling according to their total IR luminosity. While we are not
surprised to find a large number of 24µm-emitting galaxies in the blue cloud, especially in the field, it is also noticeable a significant
contamination of the cluster red sequence by IR-luminous galaxies. A fraction of the IR luminosity may come from AGNs, although we notice that
only a small number of IR-luminous red-sequence galaxies are identified as X-ray sources (large squares).
We cannot exclude some contamination by obscured, Compton-thick AGNs, but we believe this is only a few percent (see Section 2.2).
Fig. 4 illustrates that 24µm information allows us to reveal a significant number of red-sequence galaxies with
infrared luminosity in excess of $\rm 10^{10}L_{\odot}$, witnessing to a large extent ongoing star formation activity onto the red sequence, that
would be otherwise undetected (or at least underestimated) because obscured by dust.
Before exploring the properties of red IR-luminous galaxies and their importance in terms of the total star formation budget as a function of
environment, we define our classification into quiescent and star-forming galaxies, further distinguished into red and blue. We concentrate on
galaxies more massive than $\rm 10^{10}M_{\odot}$, thus sampling the high-mass end of the mass function above which the red-sequence completeness
is guaranteed up to redshift 0.3 (Borch et al., 2006). We set a threshold in specific SFR of $\rm\log(SFR/M_{\ast})=-10.7$, which corresponds to a
level of star formation of $\rm 0.2~{}M_{\odot}~{}yr^{-1}$ at the mass limit. We thus define galaxies as ‘quiescent’ or ‘star-forming’ depending on
whether their specific SFR is below or above this level, respectively. We then separate ‘red SF’ and ‘blue SF’ galaxies according to the
magnitude-dependent red-sequence cut shown in Fig. 4.
The choice of the specific SFR limit is justified by the fact that the distribution of the massive galaxies in the sample in specific SFR (as measured in
Equation 1) is bimodal and the two peaks separate at a value of $\sim-10.7$, which is also very close to the mean value of specific SFR for this
sample. This is clearly shown in the right-hand panel of Fig. 5. As discussed in Sec. 2.2, for galaxies with IR flux
below the upper limit of 83$\mu$Jy we estimate SFR only from their UV luminosity. If we included the IR term also for these galaxies the distribution
would no longer be bimodal, and the mean value of specific SFR would be $\rm\log(SFR/M_{\ast})\sim-10.6$. We decide to keep the conservative approach
of using the lower limit SFR for galaxies not detected at 24µm, however we will mention when relevant how the results would change if we used
instead the upper limit SFR (i.e. adopting the 24µm upper limit flux of 83$\mu$Jy to estimate $\rm L_{IR}$ for non-detections).
Figure 5 (left panel) describes our classification for the 689 massive galaxies in the sample, showing their distribution in specific SFR
against the rest-frame $U-V$ color. Quiescent galaxies (below $\rm\log(SFR/M_{\ast})=-10.7$, dashed line) are shown as black diamonds and almost all of
them belong to the red sequence. Star-forming galaxies (above the dashed line) are distinguished into blue-cloud galaxies (light grey triangles) and
red-sequence galaxies (dark grey circles). About 60% of the sample is classified as quiescent (406 galaxies), the remaining is divided into 77 red
SF and 206 blue SF galaxies. Galaxies that have a detection at 24µm are highlighted with filled symbols. We note that all the red SF galaxies
have 24µm detection, while 13% of the blue SF galaxies are not MIPS detected (their UV-based SFR is thus more properly a lower limit to
the total SFR). Among the quiescent galaxies, 81 have detectable IR emission. Few of the 24µm-detected quiescent galaxies are assigned a
specific SFR higher than expected on the basis of their color, but it is not clear whether the IR emission in these cases is truly indicative of low level
of star formation or rather comes from circumstellar dust in red giant stars (but see Temi et al., 2007, 2008) or from an AGN (although none of
these galaxies is associated to an X-ray source, as shown by the large squares). In any case, even assuming that the IR emission in these
galaxies is associated to young stars it is not enough to classify them as star-forming.
It is worth mentioning that the location of galaxies in the specific SFR versus $U-V$ plane is independent of environment, with only the relative
importance of blue SF/red SF/quiescent galaxies changing with environment, as we discuss in Fig. 6 below.
The apparent gap in specific SFR in Fig. 5 between IR-detected and IR-undetected galaxies is due to the drop of the $\rm L_{IR}$ term in Equation 1 in the latter case. Adopting an IR luminosity for IR-undetected galaxies given by the upper limit flux of
83$\mu$Jy would increase the specific SFR of these galaxies and fill in the gap somewhat. While this would have a small effect on the number of
blue star-forming galaxies (because their SFRs are already above the threshold even when not detected at 24µm), the number of red-sequence
galaxies classified as star-forming would increase at the expense of quiescent galaxies. More quantitatively, adopting the upper limits on SFR
and a specific SFR threshold of $\rm\log(SFR/M_{\ast})=-10.7$ (as in our default case) or $-10.6$ (the mean value for the ‘upper limit’
specific SFRs), the number of red SF galaxies would increase to 170 or 126, respectively, while the number of quiescent galaxies would decrease
to 299 or 347, respectively (note that in this case the selection would be more sensitive to the exact cut in specific SFR adopted).
We certainly expect a number of star-forming galaxies to have colors as red as red-sequence galaxies simply due to inclination effects. We have visually
inspected the STAGES and GEMS V-band images of the red SF galaxies in our sample. We found that 19% of them appear as edge-on spirals with dust lanes on
the plane of the disc. These galaxies might be classified as blue SF if viewed with a different angle. Another 10% of the red SF galaxies are inclined
spirals but with irregular structure (also in the dust), so it is not clear what the inclination effects in these cases are. We conclude that undisturbed
edge-on spirals can account for no more than 30% of the red SF galaxies in our sample. There must be an excess population that accounts for the full
sample of red SF galaxies, either old galaxies with some residual star formation or galaxies with inclination-independent dust obscuration or a
combination of both, as we discuss in Section 3.3.
3.2 Galaxy fractions versus environmental density
It is not obvious from Fig. 4 whether the abundance of 24µm sources ‘hidden’ among red-sequence
galaxies is a feature characteristic of the cluster or whether these sources represent a ubiquitous population. We explore
the possible environmental dependence in Fig. 6. Here we do not separate galaxies between ‘cluster’
and ‘field’, instead we use the continuous definition of environment given by Equation 2. The lower panels of
Fig. 6 show the relation between optical color and stellar mass for galaxies in three disjoint
density regimes, namely low-density environments with $\rm\delta_{N}$$<1.5$, intermediate-density environments with $1.5<$$\rm\delta_{N}$$<3.5$, and
high-density environments with $\rm\delta_{N}$$>3.5$. Different symbols distinguish the three classes of galaxies defined above (galaxies
associated to an X-ray source are indicated with a square): quiescent galaxies (black diamonds), blue SF galaxies (blue triangles) and red
SF galaxies (orange circles). We are particularly interested in the latter class of galaxies, which represents the
class of obscured star-forming galaxies, in comparison to the ‘unobscured’ class, i.e. those galaxies identified as
star-forming also in the optical. Red SF galaxies tend to populate the low-mass end of the red-sequence and their mass range
does not evolve with environment, as opposed to quiescent galaxies. At fixed stellar mass, red SF galaxies are on average
bluer than quiescent galaxies. We will explore these properties in Section 3.3.
The upper panel of Fig. 6 shows the fraction of blue (unobscured) SF and of red (obscured) SF galaxies
among all $\rm M_{\ast}>10^{10}M_{\odot}$ galaxies as a function of density. Galaxy fractions are calculated as follows. We first order
galaxies with increasing $\rm\delta_{N}$ values. For each galaxy we then consider the neighbouring galaxies within a given window in density
($\pm 0.5$ of the central value) and calculate the fraction of a given type of galaxies among this subsample. For galaxies in the
first half bin of $\rm\delta_{N}$ we do not measure fractions but we set their values to the first value actually measured (at
$\rm\delta_{N}$$=-0.5$). The width of the density bin is kept constant until a sufficient number (100) of galaxies fall in that bin. At
higher densities, where the sampling is sparser, we let the bin width vary in order to enclose 100 neighbouring galaxies (50 at
lower densities and 50 at higher densities).222The density range remains constant up to $\rm\delta_{N}$$\sim 4$, it increases to $\pm 1$
around $\rm\delta_{N}$$\sim 5$. Above $\rm\delta_{N}$$=5$ the density range probed is skewed toward higher densities, but the contamination by
lower-density galaxies does not increase. When there are not anymore enough neighbouring galaxies we set the fractions to the last
measured values (this happens around a $\rm\delta_{N}$ of 7). This procedure assures a signal-to-noise of at least 10 with small variation
along the density axis. The shaded regions in the upper panel of Fig. 6 represent the Poisson
uncertainty in the calculated fractions.
The blue curve in Fig. 6 shows the environmental trend of the fraction of unobscured star-forming galaxies.
As expected this fraction decreases significantly with density, from $\sim 40$% at the low densities typical of the field to
$\sim 10$% in the densest environments of the cluster. This trend reflects the well-known decrease in the number of star-forming galaxies
in clusters. When we add the contribution of star-forming galaxies that are on the red
sequence, the overall fraction of star-forming galaxies among massive galaxies (green curve) is increased over the entire density
range covered. What is interesting is that the contribution added by red SF galaxies is not constant with $\rm\delta_{N}$, but produces an enhancement in the
star-forming fraction in particular at densities $1.5\lesssim$$\rm\delta_{N}$$\lesssim 4$.
The orange curve in Fig. 6 shows the variation with density of the fraction of red SF galaxies. Contrary to blue
SF galaxies, the decrease in the fraction of red SF galaxies with density is not monotonic. At the lowest densities of the field red SF
galaxies represent about 15% of the total. After an initial decrease from the field toward higher densities, the fraction of red SF
galaxies increases again to values between 15% and 25% over the density range $2\lesssim$$\rm\delta_{N}$$\lesssim 3$, and then it settles to a value of
$\lesssim$10% up to the highest densities of the cluster. The bottom line of Fig. 6 is that red SF galaxies
represent a non-negligible fraction of the whole galaxy population even at intermediate and high densities. In particular there is an
overabundance of red SF galaxies at intermediate densities where their contribution is comparable to that of blue SF galaxies.
We have checked how the trend of red SF galaxies versus $\rm\delta_{N}$ would change if we changed the definition of ‘star-forming’ galaxies. If
we included the IR term based on the 24µm upper limit flux in the SFR estimate for MIPS-undetected galaxies, there would be an
overall increase in the fraction of red SF galaxies. This would affect mainly the high-density environments (because of the higher
abundance of red galaxies not detected at 24µm, likely because genuinely old ellipticals), bringing the red SF fraction between
20% and 30% (the exact value depending on the specific SFR cut adopted). Even if this was correct, it would only strengthen our
main point.
We also checked that the trend in the red SF fraction with density is robust against contamination by edge-on dusty spirals. Even by
removing the $<30$% contribution by galaxies identified as edge-on spirals (see Sec. 3.1), we still detect an
overabundance of red SF galaxies at intermediate densities and the qualitative behaviour with $\rm\delta_{N}$ does not change.
In Fig. 7 we show again the fraction of obscured and unobscured SF galaxies as a function of the continuous
density measure $\rm\delta_{N}$ as in Fig. 6 but distinguishing galaxies belonging to the A901/902 cluster (lower panel) and
those living in the field (upper panel). In the field sample alone there is only a weak signal of an excess of red SF galaxies at intermediate densities.
The excess found in Fig. 6 for the sample as a whole is largely driven by cluster galaxies. Fig. 7 shows
that red SF galaxies are a phenomenon more typical of the cluster environment, where their fraction is comparable to that of
blue SF galaxies. Thus, not only the local galaxy number density but also the larger-scale environment plays a role in shaping the star
formation activity and dust attenuation of galaxies.
Fig. 8 illustrates the position on the sky of the cluster red SF galaxies (compared to blue SF and quiescent galaxies) in
the three density ranges of Fig. 6. The grey scale shows the dark matter map, as expressed by the surface mass
density $\kappa$, reconstructed by Heymans et al. (2008) with the STAGES HST data. Low $\rm\delta_{N}$ values are typical of the outskirts of
the cluster, mainly populated by blue SF galaxies (left panel). High $\rm\delta_{N}$ values are instead typical of the four main supercluster
cores and of the filamentary structures connecting them, traced by the quiescent galaxy population (right panel). Red SF galaxies
populate the medium-density regime, the infalling regions around the cluster cores, where episodes of obscured star formation might be
favoured (middle panel). This supports the analysis of Wolf et al. (2005), who identified an overabundance in the medium-density regions of
the A901/902 supercluster of dusty, intermediate-age, red galaxies, classified on the basis of their location in optical color-color
diagrams.
It is also of interest to ask what is the contribution in stellar mass and star formation activity of the different classes of galaxies.
Fig. 9 shows the fraction of stellar mass contributed by $\rm M_{\ast}>10^{10}M_{\odot}$ star-forming galaxies (green curve) as a
function of environmental density. As in Fig. 6 we distinguish star-forming galaxies on the red sequence (orange curve)
and on the blue cloud (blue curve). The stellar mass fraction is calculated in the same way as the number fractions shown in
Fig. 6, but weighting each galaxy by its stellar mass. The decline from low to high densities of the stellar mass
fraction contributed by SF galaxies reflects the decline in their number density. The blue and orange dotted lines reproduce the number fraction
of blue SF and red SF galaxies, respectively. At all $\rm\delta_{N}$ the fraction in mass of star-forming galaxies, either obscured or unobscured, is
lower than the corresponding fraction in number. This comes from the fact that star-forming galaxies are preferentially less massive than
quiescent, elliptical galaxies. The effect becomes stronger at high densities (at least for red SF galaxies), where the mass function of
quiescent early-type red-sequence galaxies extends to higher masses. At low and intermediate densities the
difference between the number and stellar mass fractions is lower for red SF galaxies than for blue SF galaxies, indicating a different stellar
mass distribution of the two classes of galaxies, as we will show in Section 3.3.
In Fig. 10 we investigate the amount of obscured star-formation over the total star formation activity as a function of
environment. This is calculated as the fraction, weighted by SFR, of red SF galaxies over all SF galaxies, and it is shown by the solid curve and
hatched region. For comparison, the dotted curve shows the number fraction of red SF galaxies among all SF galaxies. As expected the majority of
the star formation activity resides in galaxies populating the blue-cloud, independently of environment. Nevertheless, there is a non negligible
contribution, both in number and in total SFR, from obscured star-forming galaxies. In particular there is a clear excess of obscured star
formation at intermediate densities ($2\lesssim$$\rm\delta_{N}$$\lesssim 4$), where red SF galaxies constitute up to 40% of all SF galaxies and contribute between 25%
and 35% of the whole star formation activity at those densities. At higher densities, red SF galaxies still make up $\sim$40% of the whole SF
class at these densities, but their contribution to the total star formation activity goes down to $\sim$20%. This suggests a small but
detectable suppression of the SFR of high-density red SF galaxies compared to their intermediate-density counterparts.
Finally, Fig. 11 illustrates the amount of contamination on the red sequence from obscured star-forming galaxies. This is
expressed both in terms of the stellar mass fraction (solid line and hatched region) and of the number fraction (dotted line) of
star-forming galaxies among red-sequence galaxies. At low densities, star-forming galaxies contribute roughly 15% in stellar mass
and 30% in number to the red sequence. This fraction is in agreement with studies of the mix in morphology and star formation
activity of the ‘field’ red sequence at different redshifts (e.g. Franzetti et al., 2007; Cassata et al., 2007; Cassata et al., 2008). As expected from the
general decrease in the number of star-forming galaxies in dense environment, the contamination of the red sequence by star-forming
galaxies also decreases with density. However, it reaches values of a few percent only at the highest densities of the cluster,
where the red sequence is highly dominated by quiescent galaxies. At intermediate densities, instead, there is an excess of
(preferentially obscured) star formation, as already discussed in the previous Figures.
3.3 The properties of red star-forming galaxies
In this section we compare the star-formation and morphological properties of red-sequence star-forming galaxies to those of blue-cloud star-forming
and quiescent galaxies. We also investigate any environmental variation of such properties in star-forming galaxies. A follow-up analysis
by Wolf et al. (2008) based on STAGES data presents environmental trends of the properties of cluster galaxies by distinguishing (visually
classified) morphological types and SED types.
Fig. 12 shows the distributions in stellar mass, specific SFR and total SFR of red SF galaxies (hatched histograms) in three density
regimes ($\rm\delta_{N}$$<1.5$, $1.5<$$\rm\delta_{N}$$<3.5$, $\rm\delta_{N}$$>3.5$). In each density range, these distributions are compared to those of blue SF galaxies (grey shaded
histograms) and quiescent galaxies (dashed histograms). The left panels of Fig. 12 show that, while the stellar mass of quiescent galaxies
clearly increases from low to high densities, the stellar masses of star-forming galaxies hardly vary with density, almost independently of their
obscuration level. There is however a hint that the mean stellar mass of red SF galaxies at intermediate densities is slightly higher than that of their
low-density and high-density counterparts ($\rm\langle\log(M_{\ast}/M_{\odot})\rangle=10.47\pm 0.05$ compared to $\rm\langle\log(M_{\ast}/M_{\odot})\rangle=10.32\pm 0.05$ and $\rm\langle\log(M_{\ast}/M_{\odot})\rangle=10.35\pm 0.08$ at low and high densities respectively). The
distribution in stellar mass at intermediate $\rm\delta_{N}$ compares with that at low $\rm\delta_{N}$ with a Kolmogorov-Smirnov test probability of 0.03 that the two
distributions are drawn from the same parent distribution. The KS test between the distribution at intermediate $\rm\delta_{N}$ and at high $\rm\delta_{N}$ gives a
probability of 0.2. Also, red SF galaxies at intermediate densities are on average more massive than blue SF galaxies at the same densities (which have
$\rm\langle\log(M_{\ast}/M_{\odot})\rangle=10.34\pm 0.03$), with a KS probability of 0.05.
The second and third columns of plots in Fig. 12 show the distributions in specific SFR and total SFR, respectively.
For completeness we also show here the measured SFR of quiescent galaxies, which have by definition specific SFR lower than $\rm 2\times 10^{-11}yr^{-1}$. We do not detect any significant variation with environment in the SFR of blue-cloud star-forming galaxies.
Also the specific SFR of red SF galaxies appears independent of environment, but their average SFR at intermediate densities is
slightly higher than at low and high densities ($\rm\langle\log SFR\rangle=0.22\pm 0.07$ compared to $\rm\langle\log SFR\rangle=0.07\pm 0.06$ and
$\rm\langle\log SFR\rangle=0.01\pm 0.08$, respectively), as a consequence of the slightly higher $\rm M_{\ast}$ discussed above.
As a general remark, it is interesting to notice that the red, often IR-bright, star-forming galaxies in our sample are not
experiencing a burst of star-formation. They have instead less intense star formation activity compared to
blue-cloud galaxies, independently of environment. Their average specific SFR is from 0.2 dex to 0.3 dex lower than blue SF galaxies
at the 5$\sigma$ level (the KS test on the their specific SFR distributions provides a probability of $0.01$, $0.001$, $0.006$ at
low, intermediate, high densities, respectively).
The right-hand panels of Fig. 13 show the distribution in the V-band Sersic index $n$ for the three classes of galaxies in the three
density regimes. As expected the distribution of star-forming galaxies peaks at low values of $n$ indicating that in all environments star formation
occurs preferentially in disc-dominated systems. This result holds for both unobscured and obscured star-forming galaxies, which have similar
distributions in Sersic index. Also in this case we might witness a difference only at intermediate densities (although with rather low significance),
where red SF galaxies tend to be more bulge-dominated than blue SF galaxies (with a $\langle n\rangle=2.28\pm 0.35$ compared to $\langle n\rangle=1.76\pm 0.21$, and a KS probability of 0.12 for the two distributions to be the same).
In this work we have defined star-forming galaxies as ‘obscured’ or ‘unobscured’ only on the basis of their optical color, namely
whether they fall redward or blueward of the red-sequence cut, respectively. The dust attenuation of the UV flux in star-forming
galaxies is often quantified by the ratio of the IR to UV luminosities (e.g. Gordon et al., 2000). (The conversion from $\rm L_{IR}/L_{UV}$
to UV attenuation depends on the galaxy star formation activity and the stellar age (Cortese et al., 2008), but this should not be a concern
for the following discussion. First, the IR luminosity that we infer is based on the luminosity at 24µm, hence relatively insensitive
to the (typically colder) dust heated by old stars. Second, the red SF galaxies in our sample span only one order of
magnitude in specific SFR, hence the $\rm L_{IR}/L_{UV}$ can at least give us an insight into the relative dust attenuation among
these galaxies.) We then look at the dust attenuation properties of red SF and blue SF galaxies, as expressed by the IR-to-UV luminosity
ratio, $\rm\log(L_{IR}/L_{UV})$. This is shown in the left panels of Fig. 13 for red and blue SF galaxies in the same
three density regimes as in Fig. 12 (for completeness we also include quiescent galaxies; dashed histograms). The
distributions are shown only for galaxies with a detection at 24µm (for each $\rm\delta_{N}$ bin the total number of galaxies in each class is indicated
in the panels). All red SF galaxies are detected at 24µm, while 13% of the massive blue-cloud galaxies have 24µm flux below
the detection limit. The fraction of blue-cloud galaxies missed is however independent of mass. The 24µm selection in this plot
affects only quiescent galaxies, whose detection rate decreases with mass.
As expected, in any density range, red-sequence SF galaxies have on average higher $\rm L_{IR}/L_{UV}$ than blue-cloud galaxies,
indicating a higher level of dust attenuation. The distributions in $\rm\log(L_{IR}/L_{UV})$ differ most significantly at low and
intermediate densities (with a KS probability of 0.003 and 0.002, respectively). In the lowest-density bin, red SF galaxies have an
average $\rm\log(L_{IR}/L_{UV})$ of $1.07\pm 0.09$, compared to the $0.68\pm 0.04$ of blue SF galaxies. At intermediate densities the
average $\rm\log(L_{IR}/L_{UV})$ of red SF galaxies is $1.04\pm 0.07$ compared to $0.73\pm 0.05$ of blue SF galaxies. At the highest
densities of the cluster, red SF galaxies still have higher dust attenuation with respect to blue SF galaxies (with a $\rm\langle\log(L_{IR}/L_{UV})\rangle=0.87\pm 0.08$ compared to $0.6\pm 0.05$ of blue SF), although the difference between the two distributions
is less significant (with a KS probability of 0.12).
Contrary to the other parameters analysed so far, the IR-to-UV luminosity ratios of red SF galaxies appear to have a roughly bimodal
distribution, with a peak around $\rm\log(L_{IR}/L_{UV})$ values similar to the main population of blue SF galaxies and another peak at
significantly higher values. This is particularly evident at low densities but seems to persist in all environments, with varying proportions
between the two groups of red SF galaxies. We can explicitly distinguish red SF galaxies on the basis of their IR-to-UV luminosity ratio,
choosing a cut at $\rm\log(L_{IR}/L_{UV})=1$. By doing so, we find that low-attenuation red SF galaxies differ from high-attenuation red SF galaxies in their specific SFR, their morphology and their environmental dependence, suggesting that different
evolutionary mechanisms are acting on them.
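The classification scheme used throughout this section amounts to simple cuts on derived quantities: the specific-SFR threshold separating star-forming from quiescent galaxies, the red-sequence color cut, and the $\rm\log(L_{IR}/L_{UV})=1$ attenuation split. A minimal sketch of the logic (the function name and example values are hypothetical illustrations, not part of the paper's pipeline) is:

```python
# Thresholds taken from the text; galaxy values below are hypothetical.
SSFR_CUT = 2e-11      # yr^-1: star-forming if specific SFR exceeds this
LOG_IRUV_CUT = 1.0    # log(L_IR/L_UV): high- vs low-attenuation split


def classify(ssfr, is_red, log_ir_uv):
    """Label a galaxy following the scheme described in the text.

    ssfr: specific SFR in yr^-1; is_red: True if redward of the
    red-sequence cut; log_ir_uv: log10 of the IR-to-UV luminosity ratio.
    """
    if ssfr <= SSFR_CUT:
        return "quiescent"
    if not is_red:
        return "blue SF"
    # Red star-forming galaxies are split by their IR-to-UV ratio.
    if log_ir_uv > LOG_IRUV_CUT:
        return "high-attenuation red SF"
    return "low-attenuation red SF"


print(classify(1e-11, True, 0.5))   # -> quiescent
print(classify(5e-11, False, 0.7))  # -> blue SF
print(classify(5e-11, True, 0.8))   # -> low-attenuation red SF
print(classify(5e-11, True, 1.3))   # -> high-attenuation red SF
```

The attenuation split applies only to galaxies already classified as red and star-forming, mirroring the order in which the cuts are introduced in the text.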
Low-attenuation red SF galaxies have systematically lower specific SFR than high-attenuation red SF galaxies (with an average $\rm\log(SFR/M_{\ast})$, over all environments, of $-10.43\pm 0.03$ compared to $-10.04\pm 0.04$ for the latter class). The distributions in specific SFR of
low-attenuation and high-attenuation red SF galaxies differ most significantly at low and intermediate densities (with a KS test probability of
0.004 and 0.002 respectively). Moreover, although with less significance, it is interesting to note that low-attenuation red SF galaxies tend to be
fitted by higher values of Sersic index than high-attenuation red SF galaxies (with a $\langle n\rangle=2.43\pm 0.96$ compared to $\langle n\rangle=1.54\pm 0.42$ for the latter class, averaged over all densities). The morphology of both galaxy classes, at least as quantified by $n$, is not a
function of environment.
The fraction of low-attenuation red SF galaxies over the entire population of SF galaxies varies from $11.6\pm 3$% at low densities to
$14.9\pm 4$% at intermediate densities and $17.5\pm 6$% at high densities. There might be a tendency for low-attenuation red SF galaxies to become
progressively more frequent in high-density environments, but the errors make these fractions consistent with being independent of environment. On the contrary,
high-attenuation red SF galaxies appear to be more abundant at intermediate densities at about the $2\sigma$ level: at intermediate densities they
represent $16\pm 4$% of all SF galaxies, compared to $10.7\pm 3$% at low densities and $10.5\pm 4$% at high densities. Their stellar mass is also
slightly higher at intermediate densities ($\rm\langle\log(M_{\ast}/M_{\odot})\rangle=10.52\pm 0.07$ compared to $10.33\pm 0.06$ and $10.28\pm 0.1$ at
low and high densities, respectively).
It is worth noting that blue SF galaxies also include a subsample with $\rm\log(L_{IR}/L_{UV})>1$. By selecting
high-attenuation star-forming galaxies independently of optical color, the same picture emerges in comparison to low-attenuation red SF galaxies as
outlined above. Indeed, the argument for an overabundance of dust-obscured star formation at intermediate densities would be even stronger: the
fraction of $\rm\log(L_{IR}/L_{UV})>1$ SF galaxies among all SF galaxies would be $34\pm 7$% at intermediate $\rm\delta_{N}$, compared to $18\pm 4$% at low
$\rm\delta_{N}$ and $16\pm 6$% at high $\rm\delta_{N}$.
This is further illustrated in Fig. 14, which shows the fraction of red SF galaxies as a function of the galaxy overdensity $\rm\delta_{N}$, separated into
low-attenuation (upper panel) and high-attenuation (lower panel). Low-attenuation red SF galaxies constitute on average $\sim$5% of the whole sample
with no significant dependence on environment. The excess of red-sequence star formation at intermediate densities identified in
Fig. 6 is mainly contributed by high-attenuation red SF galaxies, which represent about 12% of the whole population at
$\rm\delta_{N}$$\sim$2.5. Even excluding the edge-on spirals among high-attenuation red SF galaxies (see Sec. 3.1), the trend with $\rm\delta_{N}$ is still
consistent within the errors with that shown in Fig. 14. Moreover, dust-obscured star formation could represent up to 20% of the whole
population at these densities by considering all star-forming galaxies with $\rm\log(L_{IR}/L_{UV})>1$, regardless of their optical color (dotted
curve).
Based on the considerations above, we can say that low-attenuation red SF galaxies are likely spirals that are gradually quenching
their star formation and appear bulge-dominated because of disc fading. They might resemble the anemic spirals found in local clusters such as Coma and
Virgo (van den Bergh, 1976; Kennicutt, 1983; Gavazzi et al., 2002, 2006). Some mechanism that removes gas on relatively long timescales could be responsible
for their transformation toward quiescence. However, given their negligible environmental dependence, internal processes leading to star formation
quenching are equally possible and maybe even sufficient. On the other hand, high-attenuation red SF galaxies are disc-dominated spirals affected
by some mechanism, particularly efficient at intermediate densities, that triggers obscured episodes of star formation without significantly
changing the morphology, at least on timescales over which star formation is still detectable.
4 Discussion and conclusions
4.1 Star formation among red galaxies
We have combined COMBO-17 optical data with MIPS 24µm data for a sample of low-redshift ($0.05<z<0.3$) galaxies in the CDFS and
A901 fields with the aim of studying the occurrence of obscured star formation as a function of environment. The 24µm information allows us to recover directly the flux from young stellar populations absorbed and re-emitted by dust, and thus to
trace, in combination with the UV/optical information, the total (unobscured and obscured) star formation activity in galaxies.
The A901 field is particularly suited for this kind of analysis, not only because of its exceptional multiwavelength coverage, but also
because it includes the complex A901/902 supercluster at $z=0.165$, extending over an area of 5$\times$5 Mpc${}^{2}$ $h_{70}^{-2}$. The supercluster is
composed of four main substructures, probably in the process of merging. The complex dynamical state of the A901/902 supercluster
potentially makes it an ideal case for identifying galaxies in their process of evolution under the influence of environment. The CDFS,
with the same multiwavelength coverage, offers instead a control sample of field galaxies at similar redshift as the cluster.
In this work we have focused on galaxies with stellar masses larger than $10^{10}M_{\odot}$, above which the red sequence is complete out to
$z=0.3$ (our limiting redshift). This mass limit roughly corresponds to $0.1\times M^{\ast}$ over the redshift range $z<0.3$
(Bell et al., 2003; Borch et al., 2006). We define as star-forming those galaxies with a specific SFR (derived from UV and IR luminosities) above $\rm 2\times 10^{-11}yr^{-1}$. Our focus is on star-forming galaxies populating the red sequence, either because they show low levels of star formation
insufficient to alter the color of the underlying older population or because their star formation activity is highly obscured by dust. Studies
based on the UV and optical emission of galaxies have identified a significant amount of low-level star-formation in low-mass ellipticals
(Yi et al., 2005; Kaviraj et al., 2007), with a hint of a peak in ‘frosting’ activity at group densities (Rogers et al., 2007). Star formation indicators
that are less sensitive to dust attenuation, such as the 24µm emission that we exploit in this work, are instead required to detect
dust-obscured star formation.
We have studied the abundance of blue and red star-forming galaxies as a function of environment, as expressed by the galaxy number
overdensity in a radius of 0.25 Mpc, focusing on the contribution of star formation on the red sequence compared to
optically-detectable star formation. Our results can be summarized as follows.
- The overall fraction of star-forming galaxies decreases from $\sim$60% in
underdense regions to $\sim$20% in high-density regions. The stellar mass fraction contributed by star-forming galaxies also decreases
going from the field to the cluster cores. The decline is steeper than for the number fraction because, while no significant environmental evolution
in stellar mass occurs for star-forming galaxies, the mass function of quiescent galaxies reaches higher stellar
masses at higher densities.
- The fraction of blue star-forming galaxies decreases monotonically from $\sim$40% at low densities to less than 20% at higher densities. On
the contrary, red SF galaxies do not show a monotonic behaviour as a function of environment. After an initial decline of the red SF fraction
from the field to higher $\rm\delta_{N}$, we identify an overabundance of obscured star formation at intermediate densities, those typical of the outskirts of the
A901/902 supercluster cores. At both intermediate and high densities, red SF galaxies represent 40% of all star-forming galaxies and contribute
$20-30$% of the total star formation activity at these densities.
To first order, our results confirm the well-known SFR-density relation (e.g. Gavazzi et al., 2002; Lewis et al., 2002; Gómez et al., 2003; Balogh et al., 2004b; Kauffmann et al., 2004) and
morphology-density relation (e.g. Dressler, 1980; Dressler et al., 1997; Treu et al., 2003; van der Wel, 2008). In addition to this, we find a significant contribution by red-sequence
galaxies, identified as star-forming through their IR emission, to the total star formation activity up to the highest densities of the cluster. This
would be at least partly missed by optical studies. This result is consistent with Wolf et al. (2005), who found an enhancement of optically-classified
dusty red galaxies in the medium-density outskirts of the A901/902 supercluster. In this work, supported by deep 24µm data, we directly measure
the amount of star formation going on in these galaxies.
Our results are also in line with recent studies of clusters at similar redshifts as A901/902 or higher, which have identified a population of IR-bright
galaxies in filaments and the infalling regions of the clusters. Fadda et al. (2000) found a population of 15µm-detected galaxies with high
15µm-to-optical flux ratios, suggesting star formation activity in the cluster A1689 at $z=0.18$ in excess of that in the Virgo and Coma
clusters (see also Duc et al., 2002). In the cluster A2667 at $z=0.23$, Cortese et al. (2007) have identified an IR-bright $L^{\ast}$ spiral galaxy in the process of being transformed by the cluster
environment, which triggers an intense burst of star formation. At similar redshift, Fadda et al. (2008) find two filamentary structures in the outskirts of
the A1763 cluster at $z=0.23$ (probably undergoing accretion events), which are rich in actively star-forming galaxies. Geach et al. (2006) find an excess of
mid-infrared sources in an unvirialized cluster at $z\sim 0.4$, where star formation might be triggered via mergers or interactions between gas-rich
spirals. However, they also note that significant cluster-to-cluster variations are possible: they do not find any significant excess in another cluster
at similar redshift, of similar mass but with a hotter and smoother ICM. Moving to higher redshift, Marcillac et al. (2007) studied 24µm sources in a
massive, dynamically young, unvirialized cluster at $z=0.83$. They find that IR-detected galaxies tend to lie in the outskirts of the cluster, while they
avoid the merging region. Finally, Elbaz et al. (2007), utilizing 24µm imaging in the GOODS fields at redshift $0.8<z<1.2$, have identified for the
first time a reversal of the SFR-density relation observed at lower redshifts. This result has been recently confirmed by Cooper et al. (2008) with a
spectroscopic analysis using DEEP2 data.
4.2 Dusty or old?
The relative abundance of red SF galaxies at intermediate and high densities suggests that they are transforming under the influence
of some environment-related process. What are the star formation activity, morphology and dust attenuation of these red star-forming galaxies?
- The red SF galaxies in our sample are not in a starburst phase. The few starburst galaxies (with $\rm\log(SFR/M_{\ast})>-9.7$,
corresponding to a birthrate parameter $b>1$, assuming a formation redshift of 4) in our sample all populate the blue cloud. We find that red SF galaxies have
similar SFRs to blue SF galaxies, and slightly lower specific SFRs. While the overall fraction of star-forming galaxies decreases with
density, we do not identify any significant evolution in their level of activity, either obscured or not.
- The morphology of star-forming galaxies is not very sensitive to their color. Red SF galaxies have a similar distribution in Sersic index to
blue SF galaxies: they are predominantly disc-dominated. Moreover, the morphology of star-forming galaxies depends little on the environment in which
they live. This suggests that, on average, changes in stellar populations and changes in morphology happen on different timescales, as hinted
they live. This suggests that, on average, changes in stellar populations and changes in morphology happen on different timescales, as hinted
at by the fact that color seems to be more sensitive to environment than morphology (Blanton et al., 2005). The rise of red massive spirals in the
infalling regions of the A901/902 cluster has also been interpreted by Wolf et al. (2008) as due to SFR decline not accompanied by morphological
change. A two-step scenario in which star formation is quenched first and morphological transformation follows on a longer timescale is also
supported by the analysis of Sánchez et al. (2007) of the A2218 cluster at $z=0.17$.
- Red SF galaxies have IR-to-UV luminosity ratios ($\rm L_{IR}/L_{UV}$), a proxy for the level of UV attenuation by dust, on average
higher than blue SF galaxies. The distribution in their IR-to-UV luminosity ratios suggests however the presence of two different
populations, hence possibly two distinct mechanisms affecting star formation activity of red galaxies. Roughly half of the red SF galaxies
in our sample have relatively low $\rm L_{IR}/L_{UV}$, similar to the average value of the bulk of blue SF galaxies, without evolution with
environment. The other half of the red SF galaxies have instead systematically higher $\rm L_{IR}/L_{UV}$. The range in dust attenuation of
this second population becomes narrower at higher densities, suggesting a trend of decreasing attenuation with density.
On the basis of the IR properties of red SF galaxies we tentatively distinguish them into two subpopulations. Low-attenuation red SF
galaxies have low specific SFR ($\lesssim 10^{-10.3}\,\rm yr^{-1}$) independent of environment. Among star-forming galaxies they tend to have higher Sersic
indices ($\langle n\rangle\sim 2.5$). These properties suggest that these galaxies are dominated by rather old stellar populations but have some
residual star formation.
They could be anemic/gas-deficient spirals gradually suppressing their star formation as a consequence of the removal of their gas reservoir as
they move into higher-density environments (Fumagalli & Gavazzi, 2008). Their star formation could be suppressed on relatively long timescales (of a few Gyr) if strangulation of the
hot, diffuse gas occurs while the galaxies enter a more massive halo (e.g. Balogh & Morris, 2000; van den Bosch et al., 2008). The gradual fading of the disc would make the
morphology of these galaxies appear of earlier type. In addition to strangulation, when the density of the surrounding medium becomes sufficiently high,
ram pressure can act on lower-mass galaxies to remove the remaining gas in the disc and lead to fast quenching
(e.g. Gunn & Gott, 1972; Quilis et al., 2000; Boselli et al., 2006). However, there is no significant evidence that the relative abundance of low-attenuation red SF galaxies
varies with environment. Therefore, we cannot exclude that these galaxies are suppressing their SF due to internal feedback processes only, without any
additional environmental action required.
The other subpopulation of red SF galaxies have systematically higher $\rm L_{IR}/L_{UV}$, indicative of higher levels of dust attenuation. They
represent $\gtrsim$40% of all red SF galaxies in the sample, even after accounting for purely edge-on spirals. By visual inspection of their HST V-band
images, we can say that the majority of them are spiral galaxies with a bright nucleus or inner bar/disk, suggesting intense star formation activity in
the galaxy core (we cannot exclude an AGN contribution in some cases). We also find a few cases of interacting galaxies and merger remnants. In comparison to
the low-attenuation red SF class discussed above, they have systematically higher specific SFR and lower values of Sersic index ($\langle n\rangle\sim 1.5$). As opposed to low-attenuation red SF galaxies, they tend to be more abundant at intermediate densities where their stellar mass is
$\sim$50% and $\sim$70% higher than at low and high densities, respectively. This suggests that environmental interactions are particularly efficient
in triggering episodes of obscured, often centrally concentrated, star formation in these massive late-type spirals. Although we find a few cases of
interacting galaxies, violent processes such as mergers, leading to intense starbursts, cannot be the dominant phenomenon. These galaxies are likely more
sensitive to gentler mechanisms that perturb the gas distribution, inducing star formation (but not a starburst), and at the same time increase the
gas/dust column density. This process should not alter morphology as long as star formation is still detectable. The fact that galaxies undergoing this
phase are preferentially found at intermediate densities and with relatively high stellar masses might indicate longer duration of the dust-obscured
episode of SF for more massive galaxies, which are thus more likely to be caught in this phase than low-mass galaxies. Harassment can act on massive
spirals, funnelling the gas toward the center and leading to a temporary enhancement of star formation (Moore et al., 1998; Lake et al., 1998). The timescales of this
process could be relatively long if it occurs at group-like densities, rather than in the cluster. Tidal interactions between galaxies at low and
intermediate densities can also produce gas funnelling toward the center (Mihos, 2004).
In summary, we have identified a significant amount of star formation ‘hidden’ among red-sequence galaxies, contributing at least 30% to the total star
formation activity at intermediate and high densities. The red SF population is composed partly of disc galaxies dominated by old stellar populations
and with low-level residual star formation, and partly of spirals or irregular galaxies undergoing modest (non-starburst) episodes of dust-obscured star
formation. This means that, while we confirm the general suppression of star formation with increasing environmental density, the small amount of star
formation surviving the cluster happens to a large extent in galaxies either obscured or dominated by old stellar populations. Low-attenuation red SF
galaxies seem to be a ubiquitous population at all densities. Therefore an environmental action is not necessarily required to explain their ongoing
low-level star formation. On the contrary, dusty SF galaxies are relatively more abundant at intermediate densities. They might be experiencing
harassment or tidal interactions with other galaxies, which funnel gas toward the center inducing a (partly or totally obscured) episode of star
formation. Ram-pressure can also be partly responsible for the population of relatively more massive dusty SF galaxies in the cluster: while it is not
effective in removing the gas from the disc in massive galaxies, it could perturb it, inducing obscured star formation. The complex dynamical state of the
A901/902 supercluster could favour a combination of different processes producing a temporary enhancement of obscured star formation.
A.G. thanks Stéphane Charlot for comments on an early draft and Stefano Zibetti for useful discussions. A.G., E.F.B., A.R.R. and K.J. acknowledge support from
the Deutsche Forschungsgemeinschaft through the Emmy Noether Programme, C.W. from a PPARC Advanced Fellowship, M.E.G. from an Anne McLaren Research
Fellowship, M.B. and E.vK. by the Austrian Science Foundation F.W.F. under grant P18416. C.Y.P. is grateful for support provided through STScI and NRC-HIA
Fellowship programmes. C.H. acknowledges the support of a European Commission Programme Sixth Framework Marie Curie Outgoing International Fellowship under
contract MOIF-CT-2006-21891, and a CITA National fellowship. D.H.M. acknowledges support from the National Aeronautics and Space Administration (NASA) under
LTSA Grant NAG5-13102 issued through the Office of Space Science. A.B. was supported by the DLR (50 OR 0404), S.J. by NASA under LTSA Grant NAG5-13063 and
NSF under AST-0607748, S.F.S. by the Spanish MEC grants AYA2005-09413-C02-02 and the PAI of the Junta de Andalucía as research group FQM322. Support for
STAGES was provided by NASA through GO-10395 from STScI operated by AURA under NAS5-26555.
References
Alexander et al. (2003)
Alexander, D. M., Bauer, F. E., Brandt, W. N., Schneider, D. P.,
Hornschemeier, A. E., Vignali, C., Barger, A. J., Broos, P. S.,
Cowie, L. L., Garmire, G. P., Townsley, L. K., Bautz, M. W.,
Chartas, G., & Sargent, W. L. W. 2003, AJ, 126, 539
Baldry et al. (2006)
Baldry, I. K., Balogh,
M. L., Bower, R. G., Glazebrook, K., Nichol, R. C., Bamford, S. P.,
& Budavari, T. 2006, MNRAS, 373, 469
Balogh et al. (1998)
Balogh, M. L., Schade,
D., Morris, S. L., Yee, H. K. C., Carlberg, R. G.,
& Ellingson, E. 1998, ApJ, 504, L75
Balogh & Morris (2000)
Balogh, M. L., & Morris, S. L. 2000, MNRAS, 318, 703
Balogh et al. (2004a)
Balogh, M., et al.
2004, MNRAS, 348, 1355
Balogh et al. (2004b)
Balogh, M. L., Baldry,
I. K., Nichol, R., Miller, C., Bower, R.,
& Glazebrook, K. 2004, ApJ, 615, L101
Barnes (1988)
Barnes, J. E. 1988, ApJ, 331, 699
Bekki et al. (2001)
Bekki, K., Shioya, Y.,
& Couch, W. J. 2001, ApJ, 547, L17
Bell (2003)
Bell, E. F. 2003, ApJ, 586, 794
Bell et al. (2003)
Bell, E. F., McIntosh, D. H., Katz, N., & Weinberg, M. D. 2003, ApJS, 149, 289
Bell et al. (2005)
Bell, E. F., Papovich, C., Wolf, C., Le Floc’h, E., Caldwell,
J. A. R., Barden, M., Egami, E., McIntosh, D. H., Meisenheimer, K.,
Pérez-González, P. G., Rieke, G. H., Rieke, M. J., Rigby,
J. R., & Rix, H.-W. 2005, ApJ, 625, 23
Bell et al. (2004)
Bell, E. F., Wolf, C., Meisenheimer, K., Rix, H.-W., Borch, A.,
Dye, S., Kleinheinrich, M., Wisotzki, L., & McIntosh, D. H. 2004,
ApJ, 608, 752
Bell et al. (2007)
Bell, E. F., Zheng, X. Z., Papovich, C., Borch, A., Wolf, C., &
Meisenheimer, K. 2007, ApJ, 663, 834
Bertin & Arnouts (1996)
Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393
Best (2004)
Best, P. N. 2004, MNRAS, 351, 70
Blanton et al. (2005)
Blanton, M. R.,
Eisenstein, D., Hogg, D. W., Schlegel, D. J.,
& Brinkmann, J. 2005, ApJ, 629, 143
Borch et al. (2006)
Borch, A., Meisenheimer, K., Bell, E. F., Rix, H.-W., Wolf, C.,
Dye, S., Kleinheinrich, M., Kovacs, Z., & Wisotzki, L. 2006, A&A,
453, 869
Boselli & Gavazzi (2006)
Boselli, A., & Gavazzi, G. 2006, PASP, 118, 517
Boselli et al. (2006)
Boselli, A., Boissier,
S., Cortese, L., Gil de Paz, A., Seibert, M., Madore, B. F., Buat, V.,
& Martin, D. C. 2006, ApJ, 651, 811
Bressan et al. (2007)
Bressan, A., et al.
2007, IAU Symposium, 241, 395
Caldwell et al. (2008)
Caldwell, J. A. R., McIntosh, D. H., Rix, H.-W., Barden, M.,
Beckwith, S. V. W., Bell, E. F., Borch, A., Heymans, C.,
Häußler, B., Jahnke, K., Jogee, S., Meisenheimer, K., Peng,
C. Y., Sánchez, S. F., Somerville, R. S., Wisotzki, L., & Wolf,
C. 2008, ApJS, 174, 136
Calzetti et al. (2007)
Calzetti, D., et al.
2007, ApJ, 666, 870
Cassata et al. (2007)
Cassata, P., et al. 2007, ApJS, 172, 270
Cassata et al. (2008)
Cassata, P., et al. 2008, A&A, 483, L39
Cayatte et al. (1990)
Cayatte, V., van
Gorkom, J. H., Balkowski, C., & Kotanyi, C. 1990, AJ, 100, 604
Chabrier (2003)
Chabrier, G. 2003, ApJ, 586, L133
Coia et al. (2005)
Coia, D., et al. 2005, A&A, 431, 433
Cooper et al. (2005)
Cooper, M. C., Newman,
J. A., Madgwick, D. S., Gerke, B. F., Yan, R.,
& Davis, M. 2005, ApJ, 634, 833
Cooper et al. (2008)
Cooper, M. C., et al.
2008, MNRAS, 383, 1058
Cortese et al. (2007)
Cortese, L., et al.
2007, MNRAS, 376, 157
Cortese et al. (2008)
Cortese, L., Boselli,
A., Franzetti, P., Decarli, R., Gavazzi, G., Boissier, S.,
& Buat, V. 2008, MNRAS, 386, 1157
Dale & Helou (2002)
Dale, D. A., & Helou, G. 2002, ApJ, 576, 159
Devriendt et al. (1999)
Devriendt, J. E. G., Guiderdoni, B., & Sadat, R. 1999, A&A, 350, 381
Dressler (1980)
Dressler, A. 1980, ApJ,
236, 351
Dressler et al. (1997)
Dressler, A., et al.
1997, ApJ, 490, 577
Duc et al. (2002)
Duc, P.-A., et al. 2002, A&A, 382, 60
Elbaz et al. (2007)
Elbaz, D., et al. 2007, A&A, 468, 33
Fadda et al. (2000)
Fadda, D., Elbaz, D., Duc, P.-A., Flores, H., Franceschini, A., Cesarsky, C. J., & Moorwood,
A. F. M. 2000, A&A, 361, 827
Fadda et al. (2008)
Fadda, D., Biviano, A.,
Marleau, F. R., Storrie-Lombardi, L. J.,
& Durret, F. 2008, ApJ, 672, L9
Franzetti et al. (2007)
Franzetti, P., et al. 2007, A&A, 465, 711
Fujita (2004)
Fujita, Y. 2004, PASJ, 56, 29
Fumagalli & Gavazzi (2008)
Fumagalli, M., & Gavazzi, G. 2008, A&A, 490, 571
Gavazzi & Jaffe (1985)
Gavazzi, G., & Jaffe, W. 1985, ApJ, 294, L89
Gavazzi et al. (2002)
Gavazzi, G., Boselli, A., Pedotti, P., Gallazzi, A., & Carrasco, L. 2002, A&A, 396, 449
Gavazzi et al. (2003)
Gavazzi, G., Cortese,
L., Boselli, A., Iglesias-Paramo, J., Vílchez, J. M.,
& Carrasco, L. 2003, ApJ, 597, 210
Gavazzi et al. (2006)
Gavazzi, G., Boselli, A., Cortese, L., Arosio, I., Gallazzi, A., Pedotti, P., & Carrasco,
L. 2006, A&A, 446, 839
Geach et al. (2006)
Geach, J. E., et al.
2006, ApJ, 649, 661
Gilmour et al. (2007)
Gilmour, R., Gray, M. E., Almaini, O., Best, P., Wolf, C.,
Meisenheimer, K., Papovich, C., & Bell, E. 2007, MNRAS, 380, 1467
Giovanelli & Haynes (1985)
Giovanelli, R., & Haynes, M. P. 1985, ApJ, 292, 404
Gómez et al. (2003)
Gómez, P. L., et
al. 2003, ApJ, 584, 210
Gordon et al. (2000)
Gordon, K. D., Clayton, G. C., Witt, A. N., & Misselt, K. A. 2000,
ApJ, 533, 236
Goto et al. (2003)
Goto, T., et al. 2003,
PASJ, 55, 757
Gray et al. (2002)
Gray, M. E., Taylor,
A. N., Meisenheimer, K., Dye, S., Wolf, C.,
& Thommes, E. 2002, ApJ, 568, 141
Gray et al. (2004)
Gray, M. E., Wolf, C.,
Meisenheimer, K., Taylor, A., Dye, S., Borch, A.,
& Kleinheinrich, M. 2004, MNRAS, 347, L73
Gray et al. (2008)
Gray, M. E., et al. 2008,
MNRAS in press, arXiv:0811.3890
Gunn & Gott (1972)
Gunn, J. E., & Gott, J. R. I. 1972, ApJ, 176, 1
Hammer et al. (1997)
Hammer, F., et al.
1997, ApJ, 481, 49
Helou et al. (1988)
Helou, G., Khan, I. R.,
Malek, L., & Boehmer, L. 1988, ApJS, 68, 151
Heymans et al. (2005)
Heymans, C., et al.
2005, MNRAS, 361, 160
Heymans et al. (2008)
Heymans, C., Gray, M. E., Peng, C. Y., Van Waerbeke, L., Bell, E. F.,
Wolf, C., Bacon, D., Balogh, M., Barazza, F. D., Barden, M.,
Boehm, A., Caldwell, J. A. R., Haeussler, B., Jahnke, K., Jogee,
S., van Kampen, E., Lane, K., McIntosh, D. H., Meisenheimer, K.,
Compiling quantum algorithms for architectures with multi-qubit gates
Esteban A. Martinez
Thomas Monz
Daniel Nigg
Philipp Schindler
Institut für Experimentalphysik, Universität Innsbruck, Technikerstraße 25/4, 6020 Innsbruck, Austria
Rainer Blatt
Institut für Experimentalphysik, Universität Innsbruck, Technikerstraße 25/4, 6020 Innsbruck, Austria
Institut für Quantenoptik und Quanteninformation, Österreichische Akademie der Wissenschaften, Technikerstraße 21a, 6020 Innsbruck, Austria
Abstract
In recent years, small-scale quantum information processors have
been realized in multiple physical architectures. These systems
provide a universal set of gates that allow one to implement any
given unitary operation. The decomposition of a particular
algorithm into a sequence of these available gates is not
unique. Thus, the fidelity of the implementation of an algorithm
can be increased by choosing an optimized decomposition into
available gates. Here, we present a method to find such a
decomposition, taking as an example a small-scale ion trap quantum
information processor. We demonstrate a numerical optimization
protocol that minimizes the number of required multi-qubit entangling gates
by design. Furthermore, we adapt the method for
state preparation, and quantum algorithms including in-sequence
measurements.
Contents
I Introduction
I.1 Experimental toolbox
II Compilation of local unitaries
III Compilation of general unitaries
III.1 Compilation in layers
III.2 Numerical optimization
III.3 Compilation of isometries
III.4 Compensation of systematic errors
IV Conclusions and outlook
V Acknowledgements
A Compiling local unitaries
A.1 Finding basis changes
A.2 Writing a unitary as a product of two equatorial rotations
A.3 Unitaries up to a collective Z rotation
A.4 Unitaries up to independent Z rotations
I Introduction
Quantum technologies open new possibilities that are inaccessible with current classical devices, ranging from cryptography Ekert and Jozsa (1996); Gisin et al. (2002) to efficient simulation of physical systems Cirac and Zoller (2012); Bloch et al. (2012); Blatt and Roos (2012). To utilize the full computational power of quantum systems, one needs a universal quantum computer: a device able to implement arbitrary unitary operations, or at least to approximate them to arbitrary accuracy. However, in any specific physical system, only a certain set of operations is readily available. Therefore, it is necessary to decompose the desired unitary operation as a sequence of these experimentally available gates. An available set of gates is known as universal if it is possible to find such a decomposition for an arbitrary unitary quantum operation acting on the qubit register.
A canonical universal set of gates consists of two-qubit CNOT gates and arbitrary single qubit rotations. There exist deterministic algorithms that provide near-optimal decompositions of unitaries in terms of these gates Nielsen and Chuang (2004). However, the set of gates that yields the highest fidelities depends on the particular experimental implementation. In particular, two-qubit CNOT gates may not be the most efficient to implement. Architectures like trapped ions Schindler et al. (2013); Harty et al. (2014) or atom lattices Xia et al. (2015) include in their toolboxes high-fidelity multi-qubit gates that act on the entire qubit register (see section I.1). Implementing two-qubit gates in terms of these requires refocusing Vandersypen and Chuang (2005) or decoupling Schindler et al. (2013) techniques, and thus increases the overhead. Therefore it is desirable to find a direct decomposition of the target unitary into the available operations. In general, the number of multi-qubit gates needs to be minimized, since these are more prone to errors than local gates.
Compiling unitaries using multi-qubit gates that act on the whole qubit register is more challenging than using two-qubit gates. Even if a sequence correctly implements a unitary for $N$ qubits, it might not work for $N+1$ qubits, since additional “spectator” qubits will also be affected by the sequence instead of being left unchanged Nebendahl et al. (2009). Therefore, one has to define a qubit register of interest in which the unitary will be compiled, and the experimental implementation of the resulting sequence has to be restricted to this subregister, as explained in Section I.1. Moreover, the existing analytical methods for decomposing unitaries in terms of two-qubit gates (see for instance Ref. Khaneja and Glaser (2001)) do not seem to apply to multi-qubit gates. Therefore, in this work we employ an approach based on numerical optimization.
A similar algorithm for finding multi-qubit gate decompositions has been studied in Ref. Nebendahl et al. (2009), where optimal control techniques are used to find a pulse sequence for a given target unitary operation. The procedure described there starts with long sequences and then removes pulses, if possible. This often results in sequences with more entangling operations than actually required. In this work we present an algorithm designed to produce decompositions with a minimal number of entangling gates. In addition, we introduce a deterministic algorithm for finding decompositions of local unitaries. We also extend the algorithm to operations required for state preparation or measurement, which are particular cases of more general operations known as isometries Knill (1995); Iten et al. (2016).
The paper is organized as follows: in Section I.1 we describe precisely which gates we will consider as part of our experimentally available toolbox, and review some architectures for quantum information processing to which the methods described in this work can be applied. In Section II we show an analytic algorithm to compile local unitaries, which can be used to find efficient implementations of state and process tomographies. Finally, in Section III we describe and analyze an algorithm to compile fully general unitaries which relies on numerical optimization.
I.1 Experimental toolbox
Several quantum information processing experiments based on atomic and molecular systems have similar toolsets of quantum operations at their disposal. Often, it is convenient to apply collective rotations on an entire qubit register. These collective (yet local) gates, combined with addressed operations (typically rotations around the Z axis) allow one to implement arbitrary local unitaries, as we show in Section II. Together with suitable multi-qubit operations, arbitrary quantum unitaries can be implemented. In this work we consider the following set of gates:
•
Collective rotations of the whole qubit register about any axis on the equator of the Bloch sphere $C(\theta,\phi)$. Here $\theta$ is the rotation angle and $\phi$ is the phase, so that:
$$C(\theta,\phi)=e^{-i\theta(S_{x}\cos{\phi}+S_{y}\sin{\phi})/2},$$
(1)
where $S_{x,y}=\sigma_{1}^{x,y}+\cdots+\sigma_{N}^{x,y}$ are the total spin projections on the x or y axes, and $\sigma_{j}^{x,y,z}$ are the respective Pauli operators corresponding to qubit $j$. For the sake of brevity we also define rotations around the X and Y axes as:
$$\displaystyle X(\theta)$$
$$\displaystyle=C(\theta,0),$$
(2)
$$\displaystyle Y(\theta)$$
$$\displaystyle=C(\theta,\pi/2).$$
(3)
•
Single qubit rotations around the $Z$ axis $Z_{n}(\theta)$, where $\theta$ is the rotation angle, and $n$ is the qubit index:
$$Z_{n}(\theta)=e^{-i\theta\sigma_{n}^{z}/2},$$
(4)
with $\sigma_{n}^{z}$ being the Pauli Z operator applied to the $n$-th qubit.
•
Entangling Mølmer-Sørensen (MS) gates Sørensen and Mølmer (2000), with arbitrary rotation angle and phase $\operatorname{MS}_{\phi}(\theta)$. Here $\theta$ is the rotation angle and $\phi$ is the phase of the gate, resulting in:
$$\operatorname{MS}_{\phi}(\theta)=e^{-i\theta(S_{x}\cos{\phi}+S_{y}\sin{\phi})^{2}/4},$$
(5)
where $S_{x,y}=\sigma_{1}^{x,y}+\cdots+\sigma_{N}^{x,y}$ are the total spin projections on the x or y axes, as before. For $\phi=0$ or $\phi=\pi/2$ we obtain gates that act around the $X$ or $Y$ axes, which we will denote:
$$\operatorname{MS}_{x,y}(\theta)=e^{-i\theta S_{x,y}^{2}/4}.$$
(6)
As mentioned before, it is desirable to be able to restrict the action of the MS gate to a particular qubit subset. This can be done experimentally by spectroscopically decoupling the rest of the qubits from the computation Schindler et al. (2013), or by addressing the MS gate only on the relevant subset of the qubits Debnath et al. (2016).
This set of gates, or equivalent ones, are available in several trapped-ion experiments Gaebler et al. (2012); Schindler et al. (2013); Monroe et al. (2014).
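As a concrete reference, the toolbox of Eqs. (1)–(6) can be built numerically. The following is a minimal sketch (the function names `uexp`, `op_on`, `S`, `C`, `Zn`, and `MS` are ours, not from the paper); the matrix exponential is computed by eigendecomposition of the Hermitian generator:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def uexp(H, t):
    """exp(-i t H) for a Hermitian generator H, via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * t * w)) @ v.conj().T

def op_on(q, op, n):
    """Embed a single-qubit operator `op` at position q of an n-qubit register."""
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, op if i == q else np.eye(2))
    return out

def S(pauli, n):
    """Total spin projection S = sigma_1 + ... + sigma_n."""
    return sum(op_on(q, pauli, n) for q in range(n))

def C(theta, phi, n):
    """Collective equatorial rotation, Eq. (1)."""
    return uexp(S(sx, n) * np.cos(phi) + S(sy, n) * np.sin(phi), theta / 2)

def Zn(q, theta, n):
    """Addressed Z rotation on qubit q, Eq. (4)."""
    return uexp(op_on(q, sz, n), theta / 2)

def MS(theta, phi, n):
    """Moelmer-Soerensen gate, Eq. (5)."""
    Sphi = S(sx, n) * np.cos(phi) + S(sy, n) * np.sin(phi)
    return uexp(Sphi @ Sphi, theta / 4)
```

On a single qubit, `C(theta, 0, 1)` reduces to an ordinary X rotation, which provides a quick consistency check of the conventions.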
A similar toolbox of operations is available for architectures based on trapped-ion hyperfine qubits. For example, Ref. Harty et al. (2014) describes high-fidelity microwave gates applied to a single hyperfine ${}^{43}$Ca${}^{+}$ qubit. In a multi-qubit system, these gates would drive collective rotations like the ones described above. In addition, Ref. Ballance et al. (2015) describes a Raman-driven $\sigma^{z}\otimes\sigma^{z}$ phase gate on two qubits, which applied to a many-qubit register would act analogously to the MS gate already described.
Recently, an implementation of high-fidelity gates in a 2D array of neutral atom qubits was reported Xia et al. (2015). The toolbox described there consists of global microwave-driven gates and single-site Stark shifts on the atoms, which are completely equivalent to the local operations described before for the trapped ion architecture. A multi-qubit CNOT gate, equivalent to the MS gate, could also be implemented by means of long-range Rydberg blockade interactions Isenhower et al. (2011).
II Compilation of local unitaries
Local unitaries can be written as a product of single-qubit unitaries. In this section we show a fully deterministic algorithm that produces decompositions of any local unitary as a sequence of collective equatorial rotations and addressed Z rotations, as described in section I.1. The decompositions presented here are optimal in the number of pulses. These techniques are particularly useful for the implementation of state and process tomographies, as exemplified in figure 1, since both require only local operations at the beginning and end of the algorithm.
Let us consider a register of $N$ qubits, and a local unitary $U=U_{1}\otimes U_{2}\otimes\dotsm\otimes U_{N}$ to be applied to them, where $U_{i}$ is the action of the unitary on the $i$-th qubit. If the same operation has to be applied to more than one qubit ($U_{i}=U_{j}$), we can replace both with a single instance of the operation, and then apply the same addressed rotations on all qubits subject to the same operation $U_{i}$. Therefore, we only have to consider the case where every $U_{i}$ is unique.
In order to apply a general local unitary to each qubit we need to have at least three degrees of freedom per qubit Nielsen and Chuang (2004), so the decomposition must have at least $3N$ free parameters. During the sequence at least $N-1$ of the qubits must eventually be addressed, since a different unitary has to be applied to each qubit. Therefore, a sequence of addressed operations of the form $Z_{1}(\theta_{1}),Z_{2}(\theta_{2}),\dotsc Z_{N-1}(\theta_{N-1})$ must be included in the decomposition. These provide $N-1$ parameters, so $2N+1$ more degrees of freedom are required. The most economic way to provide these is by means of collective gates $C(\theta_{i},\phi_{i})$, which have two degrees of freedom each, so the shortest sequence possible must include at least $N$ global operations, for a total of $3N-1$ free parameters. One additional degree of freedom remains, so we must add a last gate. This can be either an addressed operation on qubit $N$ or a collective gate. If we add an addressed gate $Z_{N}$, we obtain a sequence of the form:
$$U=Z_{N}C_{N}Z_{N-1}C_{N-1}\dotsm Z_{2}C_{2}Z_{1}C_{1},$$
(7)
where $C_{i}=C(\theta_{i},\phi_{i})$ and $Z_{i}=Z_{i}(\theta_{i})$ are collective and single-qubit rotations respectively, as explained in Section I.1. Such a sequence is useful for compiling local unitaries up to arbitrary phases, as explained in Appendices A.3 and A.4. The second alternative is to add a collective rotation $C^{\prime}_{N}$:
$$U=C_{N}^{\prime}C_{N}Z_{N-1}C_{N-1}\dotsm Z_{2}C_{2}Z_{1}C_{1},$$
(8)
which is the type of sequence we consider in this section.
For particular unitaries, some of the $C_{i}$ and $Z_{i}$ in Eq. (8) may actually be the identity, in which case the sequence is simpler. Since the decomposition depends on the ordering of the qubits, by reordering them a simpler sequence might be obtained. For small numbers of qubits, one can compile the unitary for every possible permutation, although this becomes inefficient for large numbers of qubits. However, let us remember that, for the purposes of the compilation, the qubits are grouped together according to which of them experience the same single-qubit unitary $U_{i}$. For an application such as state tomography, there are only three possible unitaries to be applied to each qubit in the register (shown in Figure 1), since one only wants to perform a measurement in one of three different bases. Therefore, effectively we only need to consider three qubits, in which case trying out all the permutations is perfectly feasible.
We will describe now how to compile a generic local unitary $U=U_{1}\otimes U_{2}\otimes\dotsm\otimes U_{N}$ exactly, using a decomposition of the form (8). Let us first note that the unitaries in Eq. (8) act on the $N$-qubit Hilbert space, which is the tensor product of the single-qubit Hilbert spaces. For the sake of simplifying the notation, we will now refer to these unitaries as $\tilde{C}_{i}$, $\tilde{Z}_{i}$, and will reuse the notations $C_{i}$ and $Z_{i}$ for their action on the single qubits, so that:
$$\displaystyle\tilde{C}_{i}$$
$$\displaystyle=C_{i}\otimes C_{i}\otimes\dotsm\otimes C_{i},$$
(9)
$$\displaystyle\tilde{Z}_{i}$$
$$\displaystyle=\mathbf{1}\otimes\mathbf{1}\otimes\dotsm\otimes Z_{i}\otimes\cdots\otimes\mathbf{1},$$
where $\mathbf{1}$ is the $2\times 2$ identity matrix, and $Z_{i}$ appears at the $i$-th place (since it only addresses the $i$-th qubit).
In terms of these single-qubit unitaries, factoring Eq. (8) for each qubit we obtain $N$ equations:
$$\displaystyle U_{1}$$
$$\displaystyle=C_{N}^{\prime}C_{N}\dotsm C_{2}Z_{1}C_{1},$$
(10)
$$\displaystyle U_{2}$$
$$\displaystyle=C_{N}^{\prime}C_{N}\dotsm Z_{2}C_{2}C_{1},$$
$$\displaystyle\ \ \vdots$$
$$\displaystyle U_{N}$$
$$\displaystyle=C_{N}^{\prime}C_{N}\cdots C_{2}C_{1}.$$
From the last equation we can determine $C_{N}^{\prime}C_{N}$:
$$C_{N}^{\prime}C_{N}=U_{N}C_{1}^{-1}C_{2}^{-1}\dotsm C_{N-1}^{-1},$$
(11)
and eliminating this factor from the remaining equations we obtain:
$$\displaystyle U_{N}^{-1}U_{1}$$
$$\displaystyle=C_{1}^{-1}Z_{1}C_{1},$$
(12)
$$\displaystyle U_{N}^{-1}U_{2}$$
$$\displaystyle=C_{1}^{-1}C_{2}^{-1}Z_{2}C_{2}C_{1},$$
$$\displaystyle\ \ \vdots$$
$$\displaystyle U_{N}^{-1}U_{N-1}$$
$$\displaystyle=C_{1}^{-1}C_{2}^{-1}\dotsm C_{N-1}^{-1}Z_{N-1}C_{N-1}\dotsm C_{2}C_{1}.$$
We solve each equation in (12) consecutively. To solve the first equation in (12), let us notice that its left-hand side is a known unitary, which can be written as:
$$U_{N}^{-1}U_{1}=e^{-i\alpha_{1}u_{1}/2},$$
(13)
where $\alpha_{1}$ is the angle of the rotation and $u_{1}$ its generator. The right-hand side is simply a rotation around Z and a change of basis. Therefore, the rotation angle of $Z_{1}$ must be equal to $\alpha_{1}$, and the change of basis must be such that:
$$u_{1}=C_{1}^{-1}\sigma_{z}C_{1}.$$
(14)
We show in Appendix A.1 how to find the generator and angle of the collective rotation $C_{1}$.
Having determined $C_{1}$, we can write the second equation in (12) as:
$$C_{1}U_{N}^{-1}U_{2}C_{1}^{-1}=C_{2}^{-1}Z_{2}C_{2}.$$
(15)
As before, the left-hand side of this equation is a known unitary, and the right-hand side consists of a rotation around Z and a change of basis, so the rotation angle $\theta_{2}$ and the generator of the change of basis $C_{2}$ can be found as for the previous equation. This procedure can be repeated until all of the $C_{k}$ and $Z_{k}$ with $k\leq N-1$ are determined. The last collective operations $C_{N}$ and $C_{N}^{\prime}$ can be determined from equation (11). For this we need to decompose an arbitrary unitary into a product of two equatorial rotations; this can be done as explained in Appendix A.2.
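The axis–angle extraction used in Eqs. (13)–(14) can be sketched numerically. The helper name `axis_angle` is ours; the decomposition is only fixed up to a global phase, which is removed by projecting onto SU(2) first:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def axis_angle(V):
    """Write a 2x2 unitary V as exp(-i alpha (n.sigma)/2), up to a global phase.

    Returns (alpha, n) with alpha in [0, 2*pi] and n a real unit 3-vector.
    """
    V = V / np.sqrt(np.linalg.det(V))  # project onto SU(2); the sign ambiguity is a global phase
    # In SU(2): V = cos(alpha/2) I - i sin(alpha/2) (n.sigma)
    c = np.real(np.trace(V)) / 2
    alpha = 2 * np.arccos(np.clip(c, -1.0, 1.0))
    s = np.sin(alpha / 2)
    if abs(s) < 1e-12:  # V proportional to the identity: the axis is arbitrary
        return 0.0, np.array([0.0, 0.0, 1.0])
    n = np.array([np.real(1j * np.trace(V @ p)) / (2 * s) for p in (sx, sy, sz)])
    return alpha, n / np.linalg.norm(n)
```

Given $(\alpha_{1},u_{1})$, Eq. (14) then asks for an equatorial rotation $C_{1}$ mapping $\sigma_{z}$ to $u_{1}$; constructing its generator is the content of Appendix A.1.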
We have shown so far how to compile a local unitary exactly. However, in certain cases the constraints on the target unitary are weaker, so that it can be implemented with a simpler sequence. For instance, a unitary that is followed by global gates whose phase can be freely adjusted need only be specified up to a subsequent collective Z rotation, since this rotation can be absorbed into the phase. This removes one free parameter from the sequence, thus simplifying its implementation. The details of this procedure are presented in Appendix A.3. Another case of interest is when the target unitary is specified up to arbitrary independent Z rotations afterwards, for instance when the unitary is followed by a projective measurement in the Z basis. This is particularly useful for tomographic measurements; details are shown in Appendix A.4.
III Compilation of general unitaries
In section II we studied how to compile local unitaries in terms of collective and addressed rotations. However, a universal quantum computer also requires entangling unitaries, which must be compiled into the experimentally available local and entangling gates. For example, in figure 2 we show a decomposition of a Toffoli gate into a sequence of local and entangling gates applied consecutively. In this section, we present an algorithm to find such decompositions for arbitrary unitaries.
We seek decompositions directly in terms of multi-qubit entangling gates, since these are often more efficient than decompositions in terms of two-qubit gates. For example, a Toffoli gate can be implemented using only 3 Mølmer-Sørensen (MS) gates Nebendahl et al. (2009), while 6 CNOT gates are needed to implement it Shende and Markov (2009), and a Fredkin gate can be implemented using 4 MS gates Monz et al. (2016), while the least number of two-qubit gates required is 5 Yu and Ying (2013). As described in section I.1, many equivalent types of entangling gates are experimentally available. We will consider MS gates, but the methods shown here are applicable to any entangling gate that forms a universal set together with local operations.
III.1 Compilation in layers
In many quantum information processing experiments the most costly operations in terms of fidelity are entangling gates. Therefore, when trying to compile a unitary we seek to minimize the number of those. A straightforward way to do this is to use pulse sequences where layers of local unitaries and entangling gates are applied consecutively, as shown in figure 3.
Any unitary can be decomposed in terms of single-qubit gates and two-qubit CNOT gates DiVincenzo (1995):
$$U=L_{M}\ \textnormal{CNOT}_{M}\ \dotsm\ L_{1}\ \textnormal{CNOT}_{1}\ L_{0},$$
(16)
where $L_{i}$ denotes an arbitrary local unitary on the whole qubit register and $\textnormal{CNOT}_{i}$ denotes a gate between some two qubits. A two-qubit CNOT gate can be implemented in an arbitrary $N$-qubit register as a sequence of local unitaries and $\textnormal{MS}_{x}(\pi/8)$ gates Nebendahl et al. (2009). Therefore, the following decomposition is always possible:
$$U=L_{M}\ \textnormal{MS}_{x}(\pi/8)\ \dotsm\ L_{1}\ \textnormal{MS}_{x}(\pi/8)\ L_{0}.$$
(17)
However, some of the local unitaries $L_{i}$ in a decomposition of the form (17) may actually be identity, so after removing them the resulting sequence has the following structure:
$$U=L_{M}\ \textnormal{MS}_{x}(k_{M}\pi/8)\ \dotsm\ L_{1}\ \textnormal{MS}_{x}(k_{1}\pi/8)\ L_{0},$$
(18)
where the $k_{i}$ are integers, and the number $M$ of entangling gates is no greater than in Eq. (17). It is enough to consider $0\leq k_{i}\leq 7$, since $\textnormal{MS}_{x}(\pi)$ is either the identity (up to a global phase) for an odd number of qubits, or a $\pi$ rotation around $X$ for an even number of qubits.
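This parity behaviour of $\textnormal{MS}_{x}(\pi)$ can be verified numerically for small registers; a sketch with our own helper names (`uexp`, `Sx`, `MSx`, `equal_up_to_phase`):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)

def uexp(H, t):
    """exp(-i t H) for Hermitian H, via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * t * w)) @ v.conj().T

def Sx(n):
    """Total spin projection on x for n qubits."""
    total = np.zeros((2**n, 2**n), dtype=complex)
    for q in range(n):
        term = np.array([[1.0 + 0j]])
        for i in range(n):
            term = np.kron(term, sx if i == q else np.eye(2))
        total += term
    return total

def MSx(theta, n):
    """MS_x(theta) = exp(-i theta S_x^2 / 4), Eq. (6)."""
    S = Sx(n)
    return uexp(S @ S, theta / 4)

def equal_up_to_phase(A, B):
    """True if A = exp(i phi) B for some global phase phi."""
    phase = np.trace(B.conj().T @ A) / B.shape[0]
    return np.isclose(abs(phase), 1.0) and np.allclose(A, phase * B)
```

The check works because the eigenvalues of $S_{x}$ are $m=N,N-2,\dots,-N$: for odd $N$ all $m^{2}\equiv 1 \pmod 8$, so every eigenvalue of $\textnormal{MS}_{x}(\pi)$ equals $e^{-i\pi/4}$.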
We now seek to further simplify sequence (18). Every single-qubit unitary $U_{i}$ on qubit $i$ can be written as a composition of rotations around two different fixed axes Nielsen and Chuang (2004), which means that we can always choose $\alpha_{i1}$, $\alpha_{i2}$ and $\alpha_{i3}$ such that:
$$U_{i}=X_{i}(\alpha_{i3})Z_{i}(\alpha_{i2})X_{i}(\alpha_{i1}).$$
(19)
Any local unitary $L=\prod_{i=1}^{N}U_{i}$ can therefore be written as:
$$L=\prod_{i=1}^{N}X_{i}(\alpha_{i3})Z_{i}(\alpha_{i2})X_{i}(\alpha_{i1}),$$
(20)
where the product goes over the $N$ qubits in the register. Since unitaries acting on different qubits commute, we can write this as:
$$\displaystyle L$$
$$\displaystyle=\prod_{i=1}^{N}X_{i}(\alpha_{i3})\prod_{i=1}^{N}Z_{i}(\alpha_{i2})\prod_{i=1}^{N}X_{i}(\alpha_{i1})$$
(21)
$$\displaystyle=\tilde{X^{\prime}}\tilde{Z}\tilde{X},$$
(22)
where $\tilde{X}$ and $\tilde{Z}$ denote arbitrary products of rotations around the $X$ or $Z$ axes for all qubits. Therefore, the sequence in (18) can be written as:
$$\displaystyle U$$
$$\displaystyle=\tilde{X}_{M}^{\prime}\tilde{Z}_{M}\tilde{X}_{M}\textnormal{MS}_{x}(k_{M}\pi/8)\cdots\times$$
(23)
$$\displaystyle\quad\times\tilde{X}_{1}^{\prime}\tilde{Z}_{1}\tilde{X}_{1}\textnormal{MS}_{x}(k_{1}\pi/8)\tilde{X}_{0}^{\prime}\tilde{Z}_{0}\tilde{X}_{0},$$
and commuting the $X$ rotations with the MS gates we obtain a sequence of the form:
$$\displaystyle U$$
$$\displaystyle=\tilde{X}_{M}^{\prime}\tilde{Z}_{M}\tilde{X}_{M}\textnormal{MS}_{x}(k_{M}\pi/8)\cdots\times$$
(24)
$$\displaystyle\quad\ \times\tilde{X}_{2}^{\prime}\tilde{Z}_{2}\tilde{X}_{2}\textnormal{MS}_{x}(\alpha_{2})\tilde{Z}_{1}\textnormal{MS}_{x}(k_{1}\pi/8)\tilde{X}_{0}^{\prime}\tilde{Z}_{0}\tilde{X}_{0}.$$
Every odd local unitary (except for the last one) is a product of Z rotations on all qubits, and the even local unitaries can be grouped as $L_{i}=\tilde{X}_{i}^{\prime}\tilde{Z}_{i}\tilde{X}_{i}$. Moreover, a collective Z rotation can be extracted from each even local unitary $L_{i}$ and absorbed into the phase of the subsequent MS gates and collective operations to simplify the implementation of $L_{i}$. Therefore the sequence can be written as:
$$\displaystyle U$$
$$\displaystyle=L_{M}\operatorname{MS}_{\phi_{M}}(k_{M}\pi/8)\cdots\times$$
(25)
$$\displaystyle\quad\times L_{2}\operatorname{MS}_{\phi_{2}}(k_{2}\pi/8)\tilde{Z}_{1}\operatorname{MS}_{\phi_{1}}(k_{1}\pi/8)L_{0}.$$
We have thus shown that any $N$-qubit unitary $U$ can be decomposed into a sequence of the form shown in (25). These sequences always have the same structure, which makes it easier to identify patterns if one wants to compile families of unitaries, i.e. unitaries that depend on some tunable parameter.
III.2 Numerical optimization
We have described a general form of a sequence of local operations and global entangling gates that implements any desired target unitary. It remains to find the actual sequence parameters, that is, the rotation angles and phases of the gates. However, we do not know a priori how many entangling gates will be needed for a given unitary. Therefore we suggest the following algorithm:
1.
Propose a sequence with $M=0$ entangling gates.
2.
Search numerically for the sequence parameters that maximize the fidelity with the target unitary.
3.
If the sequence has converged to the desired unitary (i.e. the fidelity equals 1), stop. Otherwise increase $M$ by 1 and go back to step 2.
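The loop above can be sketched for a two-qubit register. The layer parametrization below (a three-pulse local layer `C Z C` between MS gates) and all function names are our simplification for illustration, not the full structure of Eq. (25), and the sketch uses scipy's numerical gradients rather than the analytic fidelity gradient discussed in the text:

```python
import numpy as np
from scipy.optimize import minimize

# Two-qubit toolbox (see Section I.1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
SX = np.kron(sx, I2) + np.kron(I2, sx)
SY = np.kron(sy, I2) + np.kron(I2, sy)

def uexp(H, t):
    """exp(-i t H) for Hermitian H."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * t * w)) @ v.conj().T

def C(theta, phi):
    return uexp(SX * np.cos(phi) + SY * np.sin(phi), theta / 2)

def Z1(theta):
    return uexp(np.kron(sz, I2), theta / 2)

def MSx(theta):
    return uexp(SX @ SX, theta / 4)

def sequence(p, M):
    """Local layer C.Z1.C, followed by M repetitions of (MS gate, local layer)."""
    U = C(p[0], p[1]) @ Z1(p[2]) @ C(p[3], p[4])
    k = 5
    for _ in range(M):
        U = C(p[k], p[k + 1]) @ Z1(p[k + 2]) @ C(p[k + 3], p[k + 4]) @ MSx(p[k + 5]) @ U
        k += 6
    return U

def infidelity(p, M, target):
    d = target.shape[0]
    return 1 - abs(np.trace(target.conj().T @ sequence(p, M)))**2 / d**2

def compile_unitary(target, max_M=3, tries=60, tol=1e-6, seed=0):
    """Steps 1-3: increase the number M of entangling gates until BFGS converges."""
    rng = np.random.default_rng(seed)
    for M in range(max_M + 1):
        for _ in range(tries):
            x0 = rng.uniform(0, 2 * np.pi, 5 + 6 * M)
            res = minimize(infidelity, x0, args=(M, target), method="BFGS")
            if res.fun < tol:
                return M, res.x
    return None
```

A convenient self-test is to generate the target from the parametrization itself, so that an exact solution is guaranteed to exist and the entangling target cannot be reached with $M=0$.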
When performing the numerical optimization in step 2 there might be a number of local optima in addition to the true global optimum, making fully deterministic optimization methods difficult to apply. We therefore apply a repeated local search, in which an efficient deterministic optimization method is iterated with randomly chosen initial conditions. The initial conditions are drawn at random for every optimization run, since in our experience starting close to previously found local optima offers no improvement. The search terminates when the fidelity with the target unitary exceeds some predefined threshold, or when a maximum number of tries is exceeded. An advantage of this method is that, since each optimization run starts from random initial conditions, the runs are easy to perform in parallel.
The algorithm chosen for each numerical optimization is the quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS) Nocedal and Wright (1999). The function to be maximized is the fidelity of the unitary resulting from the pulse sequence with the target unitary. Since we are interested in exact solutions, we reject those solutions whose fidelity (normalized to the maximum value possible) is not equal to $1$ within some tolerance threshold (usually $1\%$). The gradient of the fidelity can be calculated analytically as a function of the sequence parameters, which speeds up the computation as compared to using several evaluations of the fidelity function.
A previously used approach to this optimization problem was a combination of local gradient descent and simulated annealing (SA) Nebendahl et al. (2009), which also helps to avoid local maxima. However, this method did not make use of the analytic expression for the fidelity gradient, which speeds up the search. Moreover, its performance depends on the “topography” of the optimization space and requires manual tuning of the search parameters to achieve optimal results. We have compared the BFGS and simulated annealing approaches by compiling 100 random unitaries uniformly distributed in the Haar measure as explained in Mezzadri (2007) for different numbers of qubits. We find that the BFGS method scales better with the number of qubits than simulated annealing (see Figure 4). The median number of search repetitions needed to find the global optimum was 1 in all the cases.
The exponential scaling of the optimization problem complexity depends on the number of entangling gates required to compile a given unitary, which is an intrinsic property of the unitary and does not depend on the search algorithm. It is already known that an arbitrary unitary cannot be implemented efficiently in terms of two-qubit gates Nielsen and Chuang (2004); our numerical results suggest a similar result for $N$-qubit gates. In the two-qubit case the compilation always succeeded with 3 entangling gates, and never with fewer (using 200 search repetitions). This was to be expected, since for two qubits an MS gate is equivalent to a $\operatorname{CNOT}$ gate, and it is known that 3 CNOT gates are sufficient (and in general necessary) to implement an arbitrary two-qubit unitary Vatan and Williams (2004); Hanneke et al. (2009). In the three-qubit case, the optimization always succeeded with 8 entangling gates, and never with fewer (also using 200 repetitions). For 4 qubits, the optimization always succeeded with 25 entangling gates, and succeeded only 4% of the time with 24. However, we performed only 4 optimization runs in the four-qubit case, owing to the increased time they take to converge; given more optimization runs, more unitaries might have been compiled with only 24 gates. We are not aware of any result in the literature on the number of $N$-qubit global entangling gates required to implement a general $N$-qubit unitary for more than $N=2$ qubits. From our numerical results, we conjecture that any three-qubit unitary can be implemented using at most 8 MS gates, and any four-qubit unitary using at most 24 or 25 MS gates.
A particularly interesting group of unitaries are Clifford gates, which find applications in quantum error correction Gottesman (1999), randomized benchmarking Knill et al. (2008), and state distillation protocols Nielsen and Chuang (2004). To explore the difficulty of compiling such gates, we have tested our algorithm with randomly generated Clifford gates, as explained in Ref. DiVincenzo et al. (2002). We show in Figure 5 the distribution of the optimal number of entangling gates required for compiling two-, three- and four-qubit unitaries. Our results agree with the literature Kliuchnikov and Maslov (2013) for the two-qubit case, since MS gates are then equivalent to controlled-Z (or CNOT) gates. For larger numbers of qubits, the performance of our algorithm in terms of number of multi-qubit gates required is also similar to that of algorithms based on two-qubit gates Kliuchnikov and Maslov (2013).
III.3 Compilation of isometries
A particular case of interest is the compilation of a unitary whose action only matters on certain input states. This happens, for instance, when one is interested in state preparation starting from some fixed input state. Such operations belong to the more general class of operations known as isometries Knill et al. (2008); Iten et al. (2016). In this case, the problem to be solved has fewer constraints than when fully specifying the target unitary, so a simpler sequence may be found. In this section we focus on compiling a unitary that is only specified on a particular subspace of the input states, for example:
$$U_{\text{target}}=\begin{pmatrix}u_{11}&u_{12}&\vdots&\vdots\\
u_{21}&u_{22}&\text{free}&\text{free}\\
u_{31}&u_{32}&\vdots&\vdots\\
u_{41}&u_{42}&\vdots&\vdots\end{pmatrix},$$
(26)
where the columns marked as ‘free’ are left unspecified. In this case, a suitable fidelity function for the numerical optimization is:
$$f(U)=\left|\operatorname{tr}\left(U|_{S}\,U_{\text{target}}|_{S}^{\dagger}\right)\right|^{2},$$
(27)
where $U|_{S}$ is a rectangular matrix with the components of the unitary in the restricted subspace.
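For the common case in which the specified subspace is spanned by computational-basis input states, $U|_{S}$ is simply a column restriction of the matrix, and (27) takes one line of NumPy (a sketch; the helper name is ours):

```python
import numpy as np

def isometry_fidelity(U, U_target, cols):
    """Fidelity (27) for a unitary specified only on the input
    subspace spanned by the basis states listed in `cols`."""
    US = U[:, cols]            # rectangular restriction U|_S
    TS = U_target[:, cols]     # U_target|_S
    return abs(np.trace(US @ TS.conj().T)) ** 2

# If U matches the target on the subspace, f = (dim S)^2,
# regardless of the 'free' columns.
U_target = np.eye(4)
U = np.eye(4, dtype=complex)
U[:, 2:] = U[:, [3, 2]]        # scramble the unspecified columns
f = isometry_fidelity(U, U_target, cols=[0, 1])
assert abs(f - 4.0) < 1e-12
```

Note that with this unnormalized form the maximum of $f$ is $(\dim S)^{2}$, attained whenever $U$ agrees with $U_{\text{target}}$ on the subspace, independently of the free columns.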
A more general case is where some of the relative phases of the projections of the unitary acting on different subspaces of the whole Hilbert space are irrelevant. For example, suppose that one wants to apply a unitary to map some observable onto an ancilla qubit and then measure the ancilla, as shown in Figure 6. Since the input state of qubit 3 is known to be $\ket{0}$, only the subspace of input states spanned by {$\ket{000}$, $\ket{010}$, $\ket{100}$, $\ket{110}$} is relevant. Moreover, the measurement will project the state of the system onto either the subspace spanned by {$\ket{000}$, $\ket{010}$, $\ket{100}$}, or that spanned by {$\ket{111}$}, and all phase coherence between these alternatives will be lost. Therefore, the compiled sequence can be sought such that it matches the desired unitary in each of the subspaces but allowing an arbitrary phase $\phi$ between them:
$$U_{\text{target}}=\begin{pmatrix}1&\vdots&0&\vdots&0&\vdots&0&\vdots\\
0&\vdots&0&\vdots&0&\vdots&0&\vdots\\
0&\vdots&1&\vdots&0&\vdots&0&\vdots\\
0&\text{free}&0&\text{free}&0&\text{free}&0&\text{free}\\
0&\vdots&0&\vdots&1&\vdots&0&\vdots\\
0&\vdots&0&\vdots&0&\vdots&0&\vdots\\
0&\vdots&0&\vdots&0&\vdots&0&\vdots\\
0&\vdots&0&\vdots&0&\vdots&e^{i\phi}&\vdots\\
\end{pmatrix}$$
(28)
In this case (Figure 6) it is possible to find a simpler implementation than in the fully constrained case (Figure 2), owing to the additional degrees of freedom available, namely arbitrary outputs for the $\ket{\psi_{3}}=\ket{1}$ input states and an arbitrary relative phase between the two possible measurement outcomes.
In the general case considered here we want to maximize the fidelity in each subspace, without regard to the relative phases between these. Therefore we can seek to maximize the function $f$ consisting of the sum of the fidelity functions (27) corresponding to each subspace:
$$f(U)=\sum_{j}\left|\operatorname{tr}\left(U|_{S_{j}}\,U_{\text{target}}|_{S_{j}}^{\dagger}\right)\right|^{2},$$
(29)
where the sum goes over all the subspaces with different relative phases, and $U|_{S_{j}}$ is a rectangular matrix with the components of the unitary in the $j$-th subspace.
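Equation (29) can be sketched in the same style; the check below confirms that an arbitrary relative phase between the subspaces, as allowed in (28), does not change $f$ (helper name ours):

```python
import numpy as np

def subspace_fidelity(U, U_target, subspaces):
    """Fidelity (29): sum of |tr(U|_S U_target|_S^dagger)|^2 over
    subspaces between which the relative phase is irrelevant."""
    f = 0.0
    for cols in subspaces:
        US, TS = U[:, cols], U_target[:, cols]
        f += abs(np.trace(US @ TS.conj().T)) ** 2
    return f

# An overall phase on one subspace does not change f.
U_target = np.eye(4, dtype=complex)
phi = 0.7
U = np.diag([1, 1, 1, np.exp(1j * phi)])
f = subspace_fidelity(U, U_target, subspaces=[[0, 1, 2], [3]])
assert abs(f - (9.0 + 1.0)) < 1e-12
```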
III.4 Compensation of systematic errors
Owing to systematic errors, the operations experimentally applied may still be unitary but deviate from the intended ones. An example of this is addressing crosstalk due to laser light leaking onto adjacent qubits. If it is possible to characterize the actual experimental operations being applied, then they can be taken into account for the compilation by adapting our optimization procedure:
1.
Compile the target unitary in terms of the ideal gates.
2.
Replace the ideal gates by the experimentally characterized operations.
3.
Add operations to obtain a higher fidelity with the ideal target unitary.
As an example we show that excessive crosstalk can be corrected in an implementation of a Toffoli gate. Figure 7 depicts experimental data corresponding to the action of the Toffoli gate on the 8 input basis states. It can be seen that, by adding just two pulses, the output fidelity for each input state increased, in some cases by up to 20%. The 11-pulse sequence is only an approximate correction. The exact correction requires 14 pulses, yet yields a lower fidelity than the approximate one, since each additional pulse has a non-zero error probability.
IV Conclusions and outlook
In this work we have shown methods to compile quantum unitaries into a sequence of collective rotations, addressed rotations and global entangling operations. For local unitaries, we have demonstrated an analytic approach that produces the shortest possible sequences in the general case, and adapted the method to simplify the resulting sequences if some constraints on the unitary are lifted. For arbitrary unitaries, we have presented an approach that produces sequences of layered local and entangling operations. This approach is based on a numerical optimization procedure that is faster than previously used ones, and the sequences obtained are by design optimal with respect to the number of entangling gates. Our numerical results suggest upper bounds on the number of $N$-qubit gates required to implement arbitrary three- and four-qubit unitaries.
The results of this paper show that in many cases one may obtain more efficient implementations by considering operations more general than two-qubit entangling gates. However, the exponentially growing complexity of decompositions as the number of qubits increases points to the necessity of keeping the register size small.
V Acknowledgements
We thank I. Chuang, B. Lanyon, and V. Nebendahl for fruitful discussions. We thank the referees for bringing Refs. Knill et al. (2008); Iten et al. (2016); DiVincenzo et al. (2002); Kliuchnikov and Maslov (2013) to our attention. We gratefully acknowledge support by the Austrian Science Fund (FWF), through the SFB FoQuS (FWF Project No. F4002-N16), as well as the Institut für Quantenoptik und Quanteninformation GmbH. E.A.M. is a recipient of a DOC fellowship from the Austrian Academy of Sciences. P.S. was supported by the Austrian Science Foundation (FWF) Erwin Schrödinger Stipendium 3600-N27. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office grant W911NF-10-1-0284. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI, or the U.S. Government.
Appendix A Compiling local unitaries
A.1 Finding basis changes
In this appendix we will show how to satisfy equation (14). We need to find a rotation $C$ around the equator of the Bloch sphere such that:
$$u=C^{-1}\sigma_{z}C,$$
(30)
where $u$ is the generator of a given known unitary $U$, and it can always be written as:
$$u=\sin\theta\cos\phi\,\sigma_{x}+\sin\theta\sin\phi\,\sigma_{y}+\cos\theta\,\sigma_{z},$$
(31)
for some angles $\theta$, $\phi$.
In general $C$ is of the form:
$$C=e^{-i\gamma c/2},$$
(32)
where $\gamma$ is its rotation angle and $c$ its generator, which must lie on the equator and thus be a linear combination of $\sigma_{x}$ and $\sigma_{y}$. If we propose:
$$c=\sin\phi\,\sigma_{x}-\cos\phi\,\sigma_{y},$$
(33)
and replace in equation (30), we find that the angle of rotation must be:
$$\gamma=\theta.$$
(34)
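These relations can be verified numerically. The sketch below builds the $2\times 2$ rotation $e^{-i\gamma(c\cdot\sigma)/2}$ in closed form and checks that the $C$ given by (33)–(34) satisfies (30):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def rot(n, angle):
    """exp(-i*angle*(n.sigma)/2) for a unit vector n, in closed form."""
    ns = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * ns

theta, phi = 1.1, 2.3
u = (np.sin(theta) * np.cos(phi) * sx
     + np.sin(theta) * np.sin(phi) * sy
     + np.cos(theta) * sz)
# Equatorial generator (33) with rotation angle gamma = theta (34):
c = np.array([np.sin(phi), -np.cos(phi), 0.0])
C = rot(c, theta)
# Conjugation relation (30): u = C^{-1} sigma_z C.
assert np.allclose(C.conj().T @ sz @ C, u)
```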
A.2 Writing a unitary as a product of two equatorial rotations
We will show here how to decompose an arbitrary unitary as a product of two rotations around the equator of the Bloch sphere, namely:
$$U=C_{2}C_{1}.$$
(35)
The target unitary can be written as:
$$\displaystyle U$$
$$\displaystyle=\cos\left(\frac{\beta}{2}\right)\mathbf{1}-i\sin\left(\frac{\beta}{2}\right)\times$$
$$\displaystyle\quad\ \times(\sin\theta\cos\phi\ \sigma_{x}+\sin\theta\sin\phi\ \sigma_{y}+\cos\theta\ \sigma_{z}),$$
(36)
where $\beta$ is its rotation angle, and $\theta,\phi$ determine its rotation axis. Similarly, the equatorial rotations can be written as:
$$C_{i}=\cos\left(\frac{\alpha_{i}}{2}\right)\mathbf{1}-i\sin\left(\frac{\alpha_{i}}{2}\right)(\cos\phi_{i}^{\prime}\ \sigma_{x}+\sin\phi_{i}^{\prime}\ \sigma_{y}),$$
(37)
for some rotation angles $\alpha_{i}$ and phases $\phi_{i}^{\prime}$.
We shall assume that:
$$\displaystyle\alpha_{1}$$
$$\displaystyle=\alpha_{2}=\alpha,$$
(38)
$$\displaystyle\phi_{1}^{\prime}$$
$$\displaystyle=\phi+\Delta/2,$$
(39)
$$\displaystyle\phi_{2}^{\prime}$$
$$\displaystyle=\phi-\Delta/2.$$
(40)
Replacing these into (35) and solving for $\alpha$ and $\Delta$ we obtain:
$$\displaystyle\cos^{2}\left(\frac{\alpha}{2}\right)$$
$$\displaystyle=\frac{1}{2}\left(\cos\left(\frac{\beta}{2}\right)+1\right)\sin^{2}\theta,$$
(41)
$$\displaystyle\cos\Delta$$
$$\displaystyle=\frac{\cos^{2}\left(\frac{\alpha}{2}\right)-\cos\left(\frac{\beta}{2}\right)}{1-\cos^{2}\left(\frac{\alpha}{2}\right)}.$$
(42)
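Since (41) and (42) determine $\alpha$ and $\Delta$ only through their cosines, a numerical check can simply try both sign branches and keep the one that reproduces $U$ (up to a global phase); a sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def rot(n, angle):
    ns = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * ns

def equatorial(phi_p, angle):
    return rot(np.array([np.cos(phi_p), np.sin(phi_p), 0.0]), angle)

beta, theta, phi = 1.9, 0.8, 0.4
U = rot(np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)]), beta)

# Equations (41) and (42); the cosines fix alpha and Delta only up
# to sign, so try both branches.
ca2 = 0.5 * (np.cos(beta / 2) + 1.0) * np.sin(theta) ** 2
alpha = 2 * np.arccos(np.sqrt(ca2))
cD = (ca2 - np.cos(beta / 2)) / (1.0 - ca2)
Delta0 = np.arccos(np.clip(cD, -1.0, 1.0))

best = 0.0
for alpha_s in (alpha, -alpha):
    for Delta in (Delta0, -Delta0):
        C1 = equatorial(phi + Delta / 2, alpha_s)
        C2 = equatorial(phi - Delta / 2, alpha_s)
        best = max(best, abs(np.trace(C2 @ C1 @ U.conj().T)) / 2)
assert best > 1 - 1e-9  # some branch satisfies U = C2 C1
```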
A.3 Unitaries up to a collective Z rotation
Suppose that the unitary $U$ we want to implement is followed by gates whose phase can be freely chosen. Then it must only be specified up to an arbitrary collective rotation $Z^{\prime}$, since this phase can be absorbed in the following gates. To compile $U$, we shall consider a decomposition of the form (7):
$$U=Z^{\prime}C_{N}Z_{N-1}C_{N-1}\dotsm Z_{2}C_{2}Z_{1}C_{1}.$$
(43)
Such a decomposition is more convenient in this case because the last addressed pulse $Z_{N}$ has been eliminated by taking advantage of the additional degree of freedom provided by $Z^{\prime}$. We can now follow the same steps as in section II. The unitary $C_{N}$ is given by:
$$C_{N}=Z^{\prime-1}\,U_{N}C_{1}^{-1}C_{2}^{-1}\dotsm C_{N-1}^{-1},$$
(44)
and eliminating this factor from the rest of the equations we obtain:
$$\displaystyle U_{N}^{-1}U_{1}$$
$$\displaystyle=C_{1}^{-1}Z_{1}C_{1},$$
(45)
$$\displaystyle U_{N}^{-1}U_{2}$$
$$\displaystyle=C_{1}^{-1}C_{2}^{-1}Z_{2}C_{2}C_{1},$$
$$\displaystyle\ \ \vdots$$
$$\displaystyle U_{N}^{-1}U_{N-1}$$
$$\displaystyle=C_{1}^{-1}C_{2}^{-1}\dotsm C_{N-1}^{-1}Z_{N-1}C_{N-1}\dotsm C_{2}C_{1}.$$
Equations (45) can be satisfied in exactly the same way as explained in section II. In order to satisfy equation (44) we need to find a rotation $Z^{\prime}$ such that the generator of $C_{N}$ lies on the equator. This can be done as follows.
To this end, we need to find a rotation $Z$ around the Z axis and a rotation $C$ around an axis on the equator of the Bloch sphere such that, for a given unitary $U$, the following equation holds:
$$C=ZU.$$
(46)
$U$ is in general of the form:
$$U=e^{-i\alpha u/2},$$
(47)
and $Z$ is of the form:
$$Z=e^{-i\beta\sigma_{z}/2}.$$
(48)
We will first find the angle of rotation $\beta$. If we write out (46) in terms of the generators of $U$ and $Z$ we have:
$$\displaystyle C$$
$$\displaystyle=\left(\cos\left(\frac{\beta}{2}\right)\mathbf{1}-i\sin\left(\frac{\beta}{2}\right)\sigma_{z}\right)\times$$
$$\displaystyle\quad\ \times\left(\cos\left(\frac{\alpha}{2}\right)\mathbf{1}-i\sin\left(\frac{\alpha}{2}\right)u\right).$$
(49)
Since the axis of rotation of $C$ lies on the equator, its generator must not have any Z component, and thus:
$$0=\sin\left(\frac{\beta}{2}\right)\cos\left(\frac{\alpha}{2}\right)+\cos\left(\frac{\beta}{2}\right)\sin\left(\frac{\alpha}{2}\right)u_{z},$$
(50)
that is:
$$\beta=-2\arctan\left(\tan\left(\frac{\alpha}{2}\right)u_{z}\right).$$
(51)
Once $\beta$ is known, the unitary on the right-hand side of (46) is fully determined, and thus $C$ as well.
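A numerical sketch of this step: read $\cos(\alpha/2)$ and $\sin(\alpha/2)\,u_{z}$ off a given $U$, apply (51), and confirm that $C=ZU$ has no $\sigma_{z}$ component in its generator:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

rng = np.random.default_rng(1)
n = rng.standard_normal(3)
n /= np.linalg.norm(n)              # random rotation axis
alpha = 1.3
U = (np.cos(alpha / 2) * np.eye(2)
     - 1j * np.sin(alpha / 2) * (n[0] * sx + n[1] * sy + n[2] * sz))

# Read off cos(alpha/2) and sin(alpha/2)*u_z directly from U ...
c0 = np.trace(U).real / 2
suz = (1j * np.trace(sz @ U) / 2).real
# ... and apply (51): beta = -2*arctan(tan(alpha/2) u_z).
beta = 2 * np.arctan2(-suz, c0)
Z = np.diag([np.exp(-1j * beta / 2), np.exp(1j * beta / 2)])
C = Z @ U
# The generator of C must have no sigma_z component:
assert abs(np.trace(sz @ C)) < 1e-12
```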
A.4 Unitaries up to independent Z rotations
Finally, suppose that the unitary we want to implement is defined up to arbitrary independent rotations for each qubit around the Z axis. This is useful if the unitary is followed by a projective measurement, since any final rotation around the measurement axis for any qubit simply adds a phase and will not change the measured probabilities.
Let us again consider a sequence of the form (7). The decomposition must now satisfy, for each qubit:
$$\displaystyle Z^{\prime}_{1}U_{1}$$
$$\displaystyle=C_{N}\dotsm C_{2}Z_{1}C_{1},$$
(52)
$$\displaystyle Z^{\prime}_{2}U_{2}$$
$$\displaystyle=C_{N}\dotsm Z_{2}C_{2}C_{1},$$
$$\displaystyle\ \ \vdots$$
$$\displaystyle Z^{\prime}_{N}U_{N}$$
$$\displaystyle=Z_{N}C_{N}\cdots C_{2}C_{1},$$
where the $Z^{\prime}_{i}$ are arbitrary rotations around the Z axis. As before, we can set $Z_{N}=\mathbf{1}$ and find $C_{N}$:
$$C_{N}=Z_{N}^{\prime}\,U_{N}C_{1}^{-1}C_{2}^{-1}\dotsm C_{N-1}^{-1}.$$
(53)
Eliminating $C_{N}$ from the remaining equations we obtain:
$$\displaystyle U_{N}^{-1}Z_{N}^{\prime-1}Z_{1}^{\prime}U_{1}$$
$$\displaystyle=C_{1}^{-1}Z_{1}C_{1},$$
(54)
$$\displaystyle U_{N}^{-1}Z_{N}^{\prime-1}Z_{2}^{\prime}U_{2}$$
$$\displaystyle=C_{1}^{-1}C_{2}^{-1}Z_{2}C_{2}C_{1},$$
$$\displaystyle\ \ \vdots$$
$$\displaystyle U_{N}^{-1}Z_{N}^{\prime-1}Z_{N-1}^{\prime}U_{N-1}$$
$$\displaystyle=C_{1}^{-1}\dotsm C_{N-1}^{-1}Z_{N-1}C_{N-1}\dotsm C_{1}.$$
Each equation has now an extra degree of freedom coming from the angle of the $Z^{\prime}_{k}$ rotation. Let us for simplicity consider the case where the number of qubits $N$ is odd. If we group equations (54) in pairs we get two degrees of freedom per pair, which can be used to remove one of the global operations. Therefore we will discard every even-numbered global operation $C_{2k}$ from our decomposition and look for the solution of the following system of equations:
$$\displaystyle U_{N}^{-1}Z_{1}^{\prime\prime}U_{1}$$
$$\displaystyle=C_{1}^{-1}Z_{1}C_{1},$$
(55)
$$\displaystyle U_{N}^{-1}Z_{2}^{\prime\prime}U_{2}$$
$$\displaystyle=C_{1}^{-1}Z_{2}C_{1},$$
$$\displaystyle U_{N}^{-1}Z_{3}^{\prime\prime}U_{3}$$
$$\displaystyle=C_{1}^{-1}C_{3}^{-1}Z_{3}C_{3}C_{1},$$
$$\displaystyle U_{N}^{-1}Z_{4}^{\prime\prime}U_{4}$$
$$\displaystyle=C_{1}^{-1}C_{3}^{-1}Z_{4}C_{3}C_{1},$$
$$\displaystyle\ \ \vdots$$
$$\displaystyle U_{N}^{-1}Z_{N-2}^{\prime\prime}U_{N-2}$$
$$\displaystyle=C_{1}^{-1}\dotsm C_{N-2}^{-1}Z_{N-2}C_{N-2}\dotsm C_{1},$$
$$\displaystyle U_{N}^{-1}Z_{N-1}^{\prime\prime}U_{N-1}$$
$$\displaystyle=C_{1}^{-1}\dotsm C_{N-2}^{-1}Z_{N-1}C_{N-2}\dotsm C_{1},$$
where $Z_{k}^{\prime\prime}=Z_{N}^{\prime-1}Z^{\prime}_{k}$. If the number of qubits $N$ is even, then the last equation is simply left unpaired. It is easy to verify that for each pair of equations the right-hand sides commute, and therefore we must have:
$$[U_{N}^{-1}Z_{2k-1}^{\prime\prime}U_{2k-1},U_{N}^{-1}Z_{2k}^{\prime\prime}U_{2k}]=0,$$
(56)
or equivalently:
$$[Z_{2k-1}^{\prime\prime}U_{2k-1}U_{N}^{-1},Z_{2k}^{\prime\prime}U_{2k}U_{N}^{-1}]=0.$$
(57)
In order to solve equation (57) we need to find rotations $Z_{1}=Z(\beta_{1})$, $Z_{2}=Z(\beta_{2})$ that satisfy a general equation of the form:
$$[Z_{1}U_{1},Z_{2}U_{2}]=0,$$
(58)
for given arbitrary $U_{1}$, $U_{2}$, whose generators are $u_{1}$ and $u_{2}$ respectively.
Let us define:
$$V_{i}=Z_{i}U_{i},$$
(59)
and let $v_{i}$ be the generators of the $V_{i}$. In order to satisfy (58), the $v_{i}$ must satisfy:
$$v_{1}=v_{2}=v,$$
(60)
since two (non-identity) unitaries commute only if their generators coincide up to a sign, which can be absorbed in the rotation angles. Our first goal is to determine the generator $v$. Let us consider the unitary:
$$W_{i}=Z_{i}^{1/2}U_{i}Z_{i}^{1/2}.$$
(61)
By writing down $W_{i}$ explicitly in terms of the generators of each factor, it can be seen that its generator $w_{i}$ satisfies:
$$\{w_{i},[\sigma_{z},u_{i}]\}=0.$$
(62)
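Relation (62) is easy to confirm numerically for randomly chosen $U$ and $Z$; a sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def rot(n, angle):
    ns = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * ns

rng = np.random.default_rng(2)
n = rng.standard_normal(3)
n /= np.linalg.norm(n)
u = n[0] * sx + n[1] * sy + n[2] * sz
alpha, beta = 0.9, 1.7
U = rot(n, alpha)
Zhalf = np.diag([np.exp(-1j * beta / 4), np.exp(1j * beta / 4)])  # Z^{1/2}
W = Zhalf @ U @ Zhalf
# Extract the (Hermitian, traceless) generator direction of W:
w = (W - np.trace(W) / 2 * np.eye(2)) * 1j
comm = sz @ u - u @ sz                     # [sigma_z, u]
assert np.allclose(w @ comm + comm @ w, 0, atol=1e-12)  # eq. (62)
```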
Since we have:
$$V_{i}=Z_{i}^{1/2}W_{i}Z_{i}^{-1/2},$$
(63)
from equation (62) we see that:
$$\left\{v,Z_{i}^{1/2}[\sigma_{z},u_{i}]Z_{i}^{-1/2}\right\}=0.$$
(64)
The geometrical meaning of this equation is that the vector defined by $v$ on the Bloch sphere is perpendicular to that defined by $Z_{i}^{1/2}[\sigma_{z},u_{i}]Z_{i}^{-1/2}$. Since (64) must hold for $i=1,2$, $v$ must correspond to the cross product of these vectors:
$$v=\mathcal{N}\left[Z_{1}^{1/2}[\sigma_{z},u_{1}]Z_{1}^{-1/2},Z_{2}^{1/2}[\sigma_{z},u_{2}]Z_{2}^{-1/2}\right],$$
(65)
where $\mathcal{N}$ is chosen such that:
$$\frac{1}{2}\operatorname{tr}(v^{2})=1.$$
(66)
Having found $v$, it remains to find the rotation angles $\beta_{i}$. Now, $v$ must satisfy $[Z_{i}U_{i},v]=0$, and therefore:
$$U_{i}vU_{i}^{-1}=Z(\beta_{i})^{-1}vZ(\beta_{i}).$$
(67)
Both $v$ and $U_{i}$ are known, so $v$ and $U_{i}vU_{i}^{-1}$ can be written down explicitly as:
$$\displaystyle v$$
$$\displaystyle=\sin\theta\cos\phi\,\sigma_{x}+\sin\theta\sin\phi\,\sigma_{y}+\cos\theta\,\sigma_{z},$$
(68)
$$\displaystyle U_{i}vU_{i}^{-1}$$
$$\displaystyle=\sin\theta\cos\phi^{\prime}_{i}\,\sigma_{x}+\sin\theta\sin\phi^{\prime}_{i}\,\sigma_{y}+\cos\theta\,\sigma_{z},$$
(69)
and therefore:
$$\displaystyle\beta_{i}=\phi-\phi^{\prime}_{i}.$$
(70)
We have shown how to find suitable rotations $Z^{\prime\prime}$ that fulfill condition (57). Once these are found, all the left-hand sides of (55) are known unitaries and the system can be solved as before. The last collective rotation $C_{N}$ can be determined from (53) as shown in appendix A.3. We have thus shown how to compile the sought unitary $U$ into a sequence of the form:
$$U=\begin{cases}C_{N}Z_{N-1}Z_{N-2}C_{N-2}\dotsm C_{3}Z_{2}Z_{1}C_{1}&\text{for odd $N$,}\\
C_{N}Z_{N-1}C_{N-1}\dotsm C_{3}Z_{2}Z_{1}C_{1}&\text{for even $N$}.\end{cases}$$
(71)
References
Ekert and Jozsa (1996)
A. Ekert and R. Jozsa, Rev. Mod. Phys. 68, 733 (1996).
Gisin et al. (2002)
N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Rev. Mod. Phys. 74, 145 (2002), arXiv:quant-ph/0101098.
Cirac and Zoller (2012)
J. I. Cirac and P. Zoller, Nat. Phys. 8, 264 (2012).
Bloch et al. (2012)
I. Bloch, J. Dalibard, and S. Nascimbène, Nat. Phys. 8, 267 (2012).
Blatt and Roos (2012)
R. Blatt and C. F. Roos, Nat. Phys. 8, 277 (2012).
Nielsen and Chuang (2004)
M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, 1st ed. (Cambridge University Press, 2004) pp. 191–193.
Schindler et al. (2013)
P. Schindler, D. Nigg, T. Monz, J. T. Barreiro, E. Martinez, S. X. Wang, S. Quint, M. F. Brandl, V. Nebendahl, C. F. Roos, M. Chwalla, M. Hennrich, and R. Blatt, New J. Phys. 15, 123012 (2013), arXiv:1308.3096.
Harty et al. (2014)
T. P. Harty, D. T. C. Allcock, C. J. Ballance, L. Guidoni, H. A. Janacek, N. M. Linke, D. N. Stacey, and D. M. Lucas, Phys. Rev. Lett. 113, 220501 (2014), arXiv:1403.1524.
Xia et al. (2015)
T. Xia, M. Lichtman, K. Maller, A. W. Carr, M. J. Piotrowicz, L. Isenhower, and M. Saffman, Phys. Rev. Lett. 114, 100503 (2015), arXiv:1501.02041.
Vandersypen and Chuang (2005)
L. M. K. Vandersypen and I. L. Chuang, Rev. Mod. Phys. 76, 1037 (2005).
Nebendahl et al. (2009)
V. Nebendahl, H. Häffner, and C. F. Roos, Phys. Rev. A 79, 012312 (2009), arXiv:0809.1414.
Khaneja and Glaser (2001)
N. Khaneja and S. J. Glaser, Chem. Phys. 267, 11 (2001).
Knill (1995)
E. Knill, LANL Tech. Rep. (1995).
Iten et al. (2016)
R. Iten, R. Colbeck, I. Kukuljan, J. Home, and M. Christandl, Phys. Rev. A 93, 032318 (2016).
Sørensen and Mølmer (2000)
A. Sørensen and K. Mølmer, Phys. Rev. A 62, 022311 (2000).
Debnath et al. (2016)
S. Debnath, N. M. Linke, C. Figgatt, K. A. Landsman, K. Wright, and C. Monroe, arXiv:1603.04512 (2016).
Gaebler et al. (2012)
J. P. Gaebler, A. M. Meier, T. R. Tan, R. Bowler, Y. Lin, D. Hanneke, J. D. Jost, J. P. Home, E. Knill, D. Leibfried, and D. J. Wineland, Phys. Rev. Lett. 108, 260503 (2012).
Monroe et al. (2014)
C. Monroe, R. Raussendorf, A. Ruthven, K. R. Brown, P. Maunz, L.-M. Duan, and J. Kim, Phys. Rev. A 89, 022317 (2014).
Ballance et al. (2015)
C. J. Ballance, T. P. Harty, N. M. Linke, M. A. Sepiol, and D. M. Lucas, arXiv:1512.04600 (2015).
Isenhower et al. (2011)
L. Isenhower, M. Saffman, and K. Mølmer, Quantum Inf. Process. 10, 755 (2011), arXiv:1104.3916.
Shende and Markov (2009)
V. V. Shende and I. L. Markov, Quant. Inf. Comp. 9, 461 (2009), arXiv:0803.2316.
Monz et al. (2016)
T. Monz, D. Nigg, E. A. Martinez, M. F. Brandl, P. Schindler, R. Rines, S. X. Wang, I. L. Chuang, and R. Blatt, Science 351, 1068 (2016).
Yu and Ying (2013)
N. Yu and M. Ying, arXiv:1301.3727 (2013).
DiVincenzo (1995)
D. P. DiVincenzo, Phys. Rev. A 51, 1015 (1995), arXiv:cond-mat/9407022.
Nocedal and Wright (1999)
J. Nocedal and S. J. Wright, Numerical Optimization (Springer, 1999).
Mezzadri (2007)
F. Mezzadri, Not. Am. Math. Soc. 54, 592 (2007), arXiv:math-ph/0609050.
Vatan and Williams (2004)
F. Vatan and C. Williams, Phys. Rev. A 69, 032315 (2004).
Hanneke et al. (2009)
D. Hanneke, J. P. Home, J. D. Jost, J. M. Amini, D. Leibfried, and D. J. Wineland, Nat. Phys. 6, 13 (2009).
Gottesman (1999)
D. Gottesman, in Proc. XXII Int. Colloq. Gr. Theor. Methods Phys., edited by S. P. Corney, R. Delbourgo, and P. D. Jarvis (International Press, Cambridge, MA, 1999) pp. 32–43, arXiv:quant-ph/9807006.
Knill et al. (2008)
E. Knill, D. Leibfried, R. Reichle, J. Britton, R. B. Blakestad, J. D. Jost, C. Langer, R. Ozeri, S. Seidelin, and D. J. Wineland, Phys. Rev. A 77, 012307 (2008), arXiv:0707.0963.
DiVincenzo et al. (2002)
D. P. DiVincenzo, D. W. Leung, and B. M. Terhal, IEEE Trans. Inf. Theory 48, 580 (2002), arXiv:quant-ph/0103098.
Kliuchnikov and Maslov (2013)
V. Kliuchnikov and D. Maslov, Phys. Rev. A 88, 052307 (2013).
Discrete and diffuse X-ray emission in the nucleus and disk
of the starburst spiral galaxy M83
R. Soria
Mullard Space Science Laboratory, University College London, Holmbury St Mary, Surrey RH5 6NT, UK
K. Wu
Mullard Space Science Laboratory, University College London, Holmbury St Mary, Surrey RH5 6NT, UK
Abstract
We have studied the face-on, barred spiral M83 (NGC 5236) with Chandra.
Eighty-one point sources are detected (above 3.5-$\sigma$)
in the ACIS S3 image:
15 of them are within the inner 16\arcsec region
(starburst nucleus, resolved for the first time
with Chandra), and 23 within the inner 60\arcsec (including the bar).
The luminosity distribution of the sources in the inner 60\arcsec region (nucleus and stellar bar) is a single power law, which we interpret as due to continuous, ongoing star formation.
Outside this inner region, there is
a smaller fraction of bright sources, which we interpret
as evidence of an aging population from a past episode of star formation.
About 50% of the total emission in the nuclear region
is unresolved; of this, about 70% can be attributed
to hot thermal plasma, and we interpret the rest
as due to unresolved point sources (e.g., faint X-ray binaries).
The unresolved X-ray emission also shows differences
between the nuclear region and the spiral arms. In the nuclear region,
the electron temperature of the thermal plasma is $\approx 0.58$ keV. In the
spiral arms, the thermal component is at $kT\approx 0.31$ keV
and a power-law component dominates at energies $\gtrsim 1$ keV.
The high abundance of C, Ne, Mg, Si and S with respect to Fe
suggests that the interstellar medium is enriched and heated
by core-collapse supernova explosions and winds from massive stars.
keywords:
Galaxies: individual: M83 (=NGC 5236) –
Galaxies: nuclei –
Galaxies: spiral –
Galaxies: starburst –
X-rays: binaries –
X-rays: galaxies
1 Introduction
M83 (NGC 5236) is a grand-design, barred spiral galaxy
(Hubble type SAB(s)c) with a starburst nucleus.
Distance estimates are still very uncertain.
A value of 3.7 Mpc was obtained
by [*]rsoria-E3:va91.
This places the galaxy in the Centaurus A group,
whose members have a large spread in morphology and high velocities,
indicating that the group is not virialised
and tidal interactions and merging are frequent
(de Vaucouleurs 1979; Côté et al. 1998).
M83 was observed in the X-ray bands by Einstein in 1979–1981 (Trinchieri et al. 1985), by ROSAT in 1992–1994 (Immler et al. 1999), and by ASCA in 1994 (Okada et al. 1997).
Twenty-one point sources were found in the ROSAT/HRI image,
but the starburst nuclear region was unresolved.
M83 was observed by Chandra on 2000 April 29,
with the ACIS-S3 chip at the focus.
The data became available to the public in mid-2001.
The total exposure time was 50.978 ks; after screening out
observational intervals corresponding to background
flares, we retained a good time interval of 49.497 ks.
In this paper we present the luminosity distribution of
the discrete source population and discuss the properties
of the unresolved emission in the nuclear region and in the disk.
For further details on the data analysis techniques,
and for more extensive discussions
on the properties of the individual sources, see Soria & Wu (2002).
2 Global properties of the discrete sources
A total of 81 point sources
are detected in the S3 chip at a 3.5-$\sigma$ level
in the 0.3–8.0 keV band. The source list is given in
Soria & Wu (2002).
Comparing the position of the Chandra S3 sources
with a VLT $B$ image
shows that the off-centre sources tend to associate
with the optically bright regions (Figure 1).
The sources have a large spread in the hardness of their X-ray emission.
A “true-colour” X-ray image of the nuclear region is shown
in Figure 2, bottom panel.
Separating the sources inside and outside a circular region
of radius 60\arcsec from the geometric centre of the X-ray emission
reveals that the two groups have different luminosity distributions
in the 0.3–8.0 keV band.
(A linear separation of 60\arcsec corresponds to 1.1 kpc
for a distance of 3.7 Mpc,
and is roughly half of the total length of the major galactic bar.)
The cumulative log N($>$S) – log S distribution
(where S are the photon counts)
of the sources inside this inner region
can be described as a single power law, with a slope of $-0.8$.
The log N($>$S) – log S curve
of the sources outside the circular inner region instead
is neither a single nor a broken power law (Figure 3).
It shows a kink at S $\approx 250$ cts;
the slope of the curve above the kink is $-1.3$, while
it is $-0.6$ at the faint end.
If we assume a foreground absorbing column density
$n_{\rm H}=4\times 10^{20}$ cm${}^{-2}$
(Schlegel et al. 1998),
a distance of 3.7 Mpc and
a power-law spectrum with photon index $\Gamma=1.5$
for all the sources,
100 counts ($\approx 2.0\times 10^{-3}$ cts s${}^{-1}$)
correspond to an unabsorbed source luminosity
$L_{\rm x}=2.3\times 10^{37}$ erg s${}^{-1}$ in the 0.3–8.0 keV band.
The kink in the log N($>$S) – log S curve
of the sources outside the 60\arcsec circle
is therefore located at
$L_{\rm x}\approx 6\times 10^{37}$ erg s${}^{-1}$ (0.3–8.0 keV band).
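The arithmetic behind these numbers is a linear rescaling of the 100-count calibration (valid for a fixed spectral shape); a quick check:

```python
# Worked numbers from the text, assuming the unabsorbed luminosity
# scales linearly with the detected counts for a fixed spectral model.
exposure_s = 49.497e3                 # good time interval (49.497 ks)
rate_100 = 100 / exposure_s           # count rate for 100 counts
assert abs(rate_100 - 2.0e-3) < 1e-4  # ~2.0e-3 cts/s, as quoted

L_100 = 2.3e37                        # erg/s per 100 counts (0.3-8.0 keV)
L_kink = (250 / 100) * L_100          # luminosity at the ~250-count kink
assert 5.5e37 < L_kink < 6.2e37       # ~6e37 erg/s, as quoted
```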
We estimate from the Deep Field South survey (Giacconi et al. 2001)
that about 15% of the 81 sources are background AGN;
the expected number in the inner 60\arcsec circle
is smaller than one. The kink in the log N($>$S) – log S curve
for the outer sources and the values of the slope at both ends
are unaffected by the background subtraction.
The flatter slope of the log N($>$S) – log S curve
at the high-luminosity end for the population of sources
in the inner 60\arcsec region
implies a larger proportion of bright sources than
in the source population further away from the nucleus.
The situation is different for example
in the spiral galaxy M81,
where most bright sources are found in the galactic disk
instead of the nuclear region (Tennant et al. 2001).
If the flatness of the slope in the log N($>$S) – log S curve
is a characteristic of ongoing star formation (Wu 2001),
the difference in the spatial distribution of the brightest sources
in M83 and M81 is simply a consequence of the fact that
M83 has a starburst nucleus
while star formation in galaxies such as M81 is presently more efficient
in the disk.
3 Emission from the nuclear region
The Chandra data reveal that
M83 has a highly structured nuclear region
(Figure 2, bottom panel).
Fifteen discrete sources are detected
within a radius of $\approx 16\arcsec$ ($\approx 290$ pc)
from the centre of symmetry of the outer optical isophotes.
A spectral analysis of the two brightest sources is presented
in Soria & Wu (2002), and an analysis of other point sources
will be presented in Soria et al. (in preparation).
We removed the point sources and extracted counts
from concentric annuli to construct radial brightness profiles
of the unresolved emission in the nuclear region.
We found that the brightness is approximately constant
in a circular region up to a radius of 7\arcsec and
then declines radially with a power-law like profile.
The azimuthally-averaged profile is well fitted by a King profile,
with a core radius of $6\farcs 7\pm 0\farcs 5$ ($\approx 120$ pc),
and a power law with a slope of $-1.9\pm 0.2$
beyond the core.
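For reference, a common parametrization of such a profile is $S(r)=S_{0}\,[1+(r/r_{c})^{2}]^{\gamma/2}$, which is flat inside the core radius $r_{c}$ and has asymptotic logarithmic slope $\gamma$; the exact functional form used in the fit is an assumption here. A sketch with the quoted best-fit values:

```python
import numpy as np

def king_profile(r, s0, rc, slope):
    """King-type surface-brightness model: flat core of radius rc,
    power-law decline with logarithmic index `slope` far outside it."""
    return s0 * (1.0 + (r / rc) ** 2) ** (slope / 2.0)

rc, slope = 6.7, -1.9          # best-fit core radius (arcsec) and slope
r = np.array([0.1, 100.0, 200.0])
s = king_profile(r, 1.0, rc, slope)
# Nearly flat inside the core ...
assert abs(s[0] - 1.0) < 1e-3
# ... and a logarithmic slope approaching -1.9 well outside it.
logslope = np.log(s[2] / s[1]) / np.log(r[2] / r[1])
assert abs(logslope - slope) < 0.01
```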
We extracted the spectrum of the unresolved emission inside
the inner 16\arcsec circle, excluding the resolved point sources,
and we fitted it using an absorbed, single-temperature
vmekal plus power-law model.
Assuming solar abundances, we obtain
a best-fit electron temperature $kT=(0.60^{+0.02}_{-0.03})$ keV
and a power-law photon index $\Gamma=3.1^{+0.1}_{-0.2}$.
The predicted lines are not strong enough
to account for the data, leading to poor fit statistics
($\chi^{2}_{\nu}=1.42$, 114 dof).
Increasing the abundance of all the metals by the same constant
factor does not improve the fit.
We then assumed a different set of abundances, higher
than solar for C, Ne, Mg, Si and S, and
slightly underabundant for Fe (see Table 1 in Soria & Wu 2002).
This is physically justified if the interstellar medium
has been enriched by type-II supernova ejecta and winds from
very massive, young stars. We obtain a best-fit
temperature $kT=(0.58^{+0.03}_{-0.02})$ keV, and power-law
photon index $\Gamma=2.7^{+0.3}_{-0.3}$
($\chi^{2}_{\nu}=0.99$, 114 dof).
We also estimated the total (resolved plus unresolved)
luminosity from the circular regions within radii
of 7\arcsec (this is approximately the region inside the outer dust ring)
and 16\arcsec from the geometric centre of the X-ray emission,
using an absorbed, optically-thin thermal plasma
plus power-law model. The total emitted luminosity
in the 0.3–8.0 keV band is $\approx 15.7\times 10^{38}$ erg s${}^{-1}$
inside 7\arcsec and $\approx 23.8\times 10^{38}$ erg s${}^{-1}$
inside 16\arcsec (Table 1). Discrete sources contribute
$\approx 50\%$ of the total luminosity.
The unresolved emission is itself the sum of truly diffuse emission
from optically thin gas, and emission from unresolved point-like
sources (e.g., faint X-ray binaries). Assuming that the latter
contribution is responsible for the power-law component
in the spectrum of the unresolved emission, we estimate that
emission from truly diffuse thermal plasma contributes $\approx 35\%$
of the total luminosity (Table 1).
Extrapolating the log N($>$S) – log S curve for the nuclear sources
gives us another way of estimating
the relative contributions of truly diffuse gas
and faint point-like sources to the unresolved emission.
We find that
unresolved point-like X-ray sources inside 16\arcsec would
have a total luminosity
of $\approx 2.7\times 10^{38}$ erg s${}^{-1}$ (see [\astronciteSoria & Wu2002]
for details).
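The extrapolation idea can be sketched as follows, assuming the cumulative luminosity function of the nuclear sources is a single power law, $N(>L)=K\,L^{-\alpha}$, and integrating its differential form below the detection limit. The slope, normalisation, and luminosity limits below are hypothetical placeholders, not the values derived in Soria & Wu (2002):

```python
import numpy as np

# Sketch: integrate the summed luminosity of sources below the detection
# limit for an assumed power-law cumulative luminosity function.
# alpha, K, L_min and L_det are hypothetical placeholders.
alpha = 0.7                     # cumulative slope (hypothetical)
K = 10.0 * (1e37)**alpha        # normalisation: 10 sources above 1e37 erg/s
L_min, L_det = 1e35, 1e37       # integration range, erg/s (hypothetical)

# dN/dL = alpha * K * L**(-alpha - 1);  L_tot = integral of L * dN/dL
L = np.logspace(np.log10(L_min), np.log10(L_det), 10_000)
f = L * (alpha * K * L**(-alpha - 1))
L_tot = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(L))   # trapezoid rule
print(f"summed luminosity of undetected sources ~ {L_tot:.2e} erg/s")
```

The analytic answer for these placeholder numbers is $\frac{\alpha K}{1-\alpha}\,(L_{\rm det}^{1-\alpha}-L_{\rm min}^{1-\alpha})\approx 1.7\times10^{38}$ erg s${}^{-1}$, which the numerical integral reproduces.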
Another possible contribution to the unresolved emission
comes from photons emitted by the resolved sources
but falling outside the extraction regions, in the wings
of the PSF. We estimate that this contribution is $\lesssim 1.5\times 10^{38}$ erg s${}^{-1}$.
Thus, the combined contribution of faint X-ray sources and emission in
the wings of the PSF can account for
the luminosity of the power-law component inferred
from the spectral fitting of the unresolved emission.
This also confirms that a substantial proportion ($\approx 70\%$)
of the unresolved emission is indeed due to truly diffuse gas
rather than faint point-like sources.
Diffuse X-ray emission is also clearly observed
along the spiral arms (Figure 2, top panel).
We compared the spectrum of the unresolved emission in the nuclear region
and along the arms, and found that the unresolved arm
emission has a thermal component
dominating at lower energies, and a power-law-like component,
dominating above 1 keV. The thermal component in the arms has
a temperature $kT=0.31^{+0.01}_{-0.03}$ keV, much cooler than that
in the nucleus. The power-law component has
a photon index $\Gamma=1.42^{+0.08}_{-0.05}$.
A more detailed analysis of the unresolved arm emission
will be presented in Soria et al. (in preparation).
4 Discussion
Separating the discrete sources inside and outside a 60\arcsec central region
reveals that the two populations have different cumulative
luminosity distributions.
The log N($>$S) – log S curve of the sources outside this radius
(i.e., the disk population) shows a kink at a luminosity
$\approx 6\times 10^{37}$ erg s${}^{-1}$ in the 0.3–8.0 keV band.
No kink is seen for the sources inside the 60\arcsec radius
(i.e., those located in the nuclear region and along the bar).
The slope of the log N($>$S) – log S curve at its high-luminosity end
is flatter for the nuclear population, implying a larger proportion
of bright sources (possible black-hole candidates). We interpret
this as evidence of a past star formation episode in the disk,
while there is continuous, ongoing star formation
in the nuclear region ([\astronciteWu2001]).
About 50% of the total emission in the nuclear region
belongs to resolved discrete sources.
We estimate
that $\approx 70\%$ of the unresolved emission
(35% of the total) is due to truly diffuse plasma,
with the rest (15% of the total) coming from
faint, unresolved point-like sources and photons in the wings
of the PSF outside the detection cells of the resolved sources.
The X-ray spectrum of the unresolved nuclear component
shows strong emission lines,
and can be modelled as emission from optically-thin thermal plasma
at $kT\approx 0.6$ keV. Above-solar abundances of Ne, Mg, Si and S
are required to fit the spectrum, while Fe appears to be underabundant.
This suggests that the interstellar medium in the starburst nuclear
region has been enriched by the ejecta of type-II
supernova explosions. Moreover,
a high abundance of C and a high C/O abundance ratio
can be the effect of radiatively-driven winds from metal-rich Wolf-Rayet stars
with $M\gtrsim 40$ M${}_{\odot}$
([\astronciteGustafsson et al.1999]). Both effects are likely to be present
in the nuclear region.
Strong unresolved emission is also detected along the arms.
It is well fitted by a thermal component at $kT\approx 0.3$ keV
and a power-law component ($\Gamma\approx 1.5$)
dominating at energies $\gtrsim 1$ keV. A study of
this higher-energy component and a comparison with the Galactic Ridge
emission will be presented in a work now in preparation.
Acknowledgements.
We thank Stefan Immler, Roy Kilgard,
Miriam Krauss, Casey Law, Oak-Young Park, Elena Pian,
Allyn Tennant and Daniel Wang for helpful discussions and suggestions.
References
[\astronciteCôté et al.1997]
Côté, S., Freeman, K. C., Carignan, C. & Quinn, P.
1997, AJ, 114, 1313
[\astronciteGiacconi et al.2001]
Giacconi, R., et al.
2001, ApJ, 551, 624
[\astronciteGustafsson et al.1999]
Gustafsson, B., Karlsson, T., Olsson, E., Edvardsson, B. & Ryde, N.
1999, A&A, 342, 426
[\astronciteImmler et al.1999]
Immler, S., Volger, A., Ehle, M. & Pietsch, W.
1999, A&A, 352, 415
[\astronciteOkada et al.1997]
Okada, K., Mitsuda, K. & Dotani, T.
1997, PASJ, 49, 653
[\astronciteSandage & Tammann1987]
Sandage, A. & Tammann, G. A.
1987, A revised Shapley-Ames Catalog of Bright Galaxies, 2nd ed.,
(Carnegie Institution of Washington Publication: Washington)
[\astronciteSchlegel et al.1998]
Schlegel, D. J., Finkbeiner, D. P. & Davis, M.
1998, ApJ, 500, 525
[\astronciteSoria & Wu2002]
Soria, R., Wu, K., 2002, A&A, in press (astro-ph/0201059)
[\astronciteTennant et al.2001]
Tennant, A. F., Wu, K., Ghosh, K. K., Kolodziejczak, J. J. & Swartz, D. A.
2001, ApJ, 549, L43
[\astronciteTrinchieri et al.1985]
Trinchieri, G., Fabbiano, G. & Palumbo, G. G. C.
1985, ApJ, 290, 96
[\astroncitede Vaucouleurs1979]
de Vaucouleurs, G.
1979, AJ, 84, 1270
[\astroncitede Vaucouleurs et al.1991]
de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H., Jr., Buta, R.,
Paturel, G. & Fouque, P.
1991, Third Reference Catalogue of Bright Galaxies (Springer: New York)
[\astronciteWu2001]
Wu, K.
2001, PASA, 18, 443
The Bottom-Light Present Day Mass Function of the Peculiar Globular Cluster NGC 6535
Melissa Halford and Dennis Zaritsky
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA
[email protected], [email protected]
Abstract
Dynamical mass calculations have suggested that the Milky Way globular cluster NGC 6535 belongs to a population of clusters with high mass-to-light ratios, possibly due to a bottom-heavy stellar initial mass function. We use published Hubble Space Telescope data to measure the present day stellar mass function of this cluster within its half-light radius and instead find that it is bottom-light, exacerbating the discrepancy between the dynamical measurement and its known stellar content. The cluster’s proximity to the Milky Way bulge and its relatively strong velocity anisotropy are both reasons to be suspicious of the dynamical mass measurement, but we find that neither straightforwardly explains the sense and magnitude of the discrepancy. Although there are alternative potential explanations for the high mass-to-light ratio, such as the presence of large numbers of stellar remnants or dark matter, we find this cluster to be sufficiently perplexing that we now exclude it from a discussion of possible variations in the initial mass function. Because this was the sole known old, Milky Way cluster in the population of high dynamical mass-to-light ratio clusters, some possible explanations for the difference in cluster properties are again open for consideration.
Subject headings: globular clusters: general — globular clusters: individual (NGC 6535) — stars: luminosity function, mass function
I. INTRODUCTION
The stellar initial mass function (IMF) is key to understanding the details of star formation and implicit in many measurements made in extragalactic astronomy. Our particular interest lies in the interplay between the behavior of the low-mass end of the IMF and the resulting total mass of a stellar population. Several recent studies using a variety of techniques have suggested that variations in the low-mass end of the IMF exist among different stellar populations (e.g. Cappellari et al. 2012, Conroy & van Dokkum 2012, Geha et al. 2013, Spiniello et al. 2014), but the indirectness of some of these methods, large systematic uncertainties, and the importance of these results to much of extragalactic astronomy make further investigation necessary.
Studies of stellar clusters complement those of galaxies because, unlike the stars in galaxies, we can safely presume that the stars in clusters have similar ages and metallicities, and because for some clusters we can resolve individual low luminosity stars. Using new, more precise velocity dispersion measurements for a set of stellar clusters in the Milky Way and its satellite galaxies, Zaritsky et al. (2012, 2013, 2014) determined cluster dynamical masses, calculated stellar population mass-to-light ratios ($\Upsilon_{*}$), and used stellar evolution models to evaluate the corresponding mass-to-light ratios for an age of 10 Gyr, ($\Upsilon_{*,10}$). In doing so, they identified a low $\Upsilon_{*,10}$ population consisting mainly of old (age $>10$ Gyr) clusters of a wide range of metallicities ($-2.1<[$Fe/H$]<0$) and a high $\Upsilon_{*,10}$ population consisting mainly of young (age $<10$ Gyr), more metal-rich ($-1<[$Fe/H$]<0$) clusters. Zaritsky et al. noted that the differences in $\Upsilon_{*,10}$ may correspond to the same scale of IMF variations hypothesized for galaxies, where the low $\Upsilon_{*,10}$ clusters have an IMF that is consistent with that measured in our Galaxy (Bastian et al., 2010, and references therein) including modest dynamical evolution of the clusters, and the high $\Upsilon_{*,10}$ clusters have a bottom-heavy IMF, consistent with what is claimed to be the case in early type galaxies (Cappellari et al., 2012; Conroy & van Dokkum, 2012; Spiniello et al., 2014).
Studies of stellar clusters face some unique challenges. Large binary fractions, where the binary orbital velocities are larger than the internal cluster velocity dispersion, discrepancies between the phase space distribution function of actual clusters and that assumed in the dynamical models used to calibrate and test the mass estimation technique, internal dynamical relaxation, and external tidal influences can all lead to inaccurate cluster mass measurements. Beyond problems with the mass estimation, problems with the evolutionary models can cause errors in the calculated values of $\Upsilon_{*,10}$ that vary with age or metallicity. Lastly, even if those issues are minor, large numbers of stellar remnants or dark matter may cause differences in $\Upsilon_{*,10}$ that are unrelated to IMF behavior. Therefore, independent and direct confirmation of the low-mass end of the stellar mass function is absolutely necessary in cases where deviations from the norm are suspected. For sufficiently nearby clusters, the most direct method to determine the stellar mass function is to count stars.
Among the current sample of high $\Upsilon_{*,10}$ clusters, only one, NGC 6535, is located in the Milky Way, making it sufficiently close for us to resolve stars well into the subsolar mass regime. This cluster is also unique within the sample of high $\Upsilon_{*,10}$ clusters because it is the only old cluster (age $>10$ Gyr). As such, it provides the only evidence to date that the $\Upsilon_{*,10}$ differences are not solely due to age, host galaxy, or systematic errors in the stellar evolution models.
To determine whether the high dynamical mass-to-light ratio of NGC 6535 is due to an excess of low-mass stars, we measure the present day mass function (PDMF) of this cluster and compare it to those of other clusters in §II. We discuss the results and implications in §III.
II. PRESENT DAY MASS FUNCTION
II.1. NGC 6535 and Other Clusters of Similar Age and Metallicity
To construct a mass function down to sufficiently low stellar masses, we require deep, high resolution imaging of NGC 6535. The ACS Survey of Galactic Globular Clusters was a Hubble Space Telescope program (GO-10775) that targeted Milky Way globular clusters for imaging with the Wide Field Channel (WFC) of the Advanced Camera for Surveys (Sarajedini et al., 2007). With a few exceptions that we do not consider here, each cluster was observed for one orbit in F606W and one orbit in F814W using one short exposure and four to five long exposures in each filter. The WFC images have a resolution of 50 mas pixel${}^{-1}$ and cover 1.7${}^{\prime}\times 3.4^{\prime}$ on each of the two CCDs. The long exposures were dithered so that stars located in the gap between the instrument’s CCDs in one exposure would be on a detector in the other images. Anderson et al. (2008) provide photometry and catalogs of artificial stars that can be used for completeness corrections. Because of possible systematic errors in the conversion of the photometry to a PDMF, we study a comparison set of clusters of similar age and metallicity to NGC 6535 from the published survey so that we can do a mostly model-free relative comparison. Quantitatively, we include clusters with ages that differ by less than 5% from that of NGC 6535, which, based on the normalization in Marín-Franch et al. (2009), corresponds to ages that differ by less than 0.64 Gyr. From the set of clusters in the ACS survey that satisfy this criterion, we exclude NGC 6715 because of signs of contamination in its CMD, NGC 2808 because of unacceptable scatter in the CMD, NGC 1851 because of low completeness along most of the main sequence, and NGC 362 because it lacks artificial star data in the archive. These considerations leave us with the comparison clusters listed in Table LABEL:Parameters.
We construct completeness-corrected F606W vs. F606W-F814W color-magnitude diagrams (CMDs) for NGC 6535 and the comparison clusters. As suggested in Anderson et al. (2008), when calculating completeness we only consider an artificial star “recovered” if the detected star deviates from the input star by less than 0.5 pixels in position and 0.75 in instrumental magnitude in both bands. These criteria reduce the probability that a real star in the image is counted as a recovered artificial star. To account for completeness variations that are both position- and flux-dependent, we calculate completeness corrections in one magnitude wide bins in F606W over annuli that each cover 100 pixels in radius from the center of the cluster. To avoid uncertainties due to large corrections, we exclude from our analysis bins where the completeness is $<0.5$. The limiting magnitude for each cluster is set by this completeness limit or $m_{F606W}=25.5$, whichever is lower. We present and discuss the stellar mass function out to $r_{max}$, the smaller of the cluster’s half-light radius or the size of the image. We use the half-light radius because it corresponds to the radius of the velocity dispersion measurements used to calculate mass-to-light ratios in Zaritsky et al. (2012, 2013, 2014). We present the various resulting limits in Table LABEL:Parameters.
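The recovery criteria and binned completeness described above can be sketched as follows. The random "artificial star" arrays are illustrative stand-ins for the Anderson et al. (2008) catalogs, and the bin layout (1-mag bins in F606W) follows the text; the position-dependent (annular) binning is omitted for brevity:

```python
import numpy as np

# Sketch of the completeness bookkeeping: an artificial star counts as
# "recovered" only if it comes back within 0.5 pix in position and 0.75
# instrumental magnitudes in both bands; completeness is then tabulated
# in 1-mag-wide F606W bins.  All arrays are illustrative fake data.
rng = np.random.default_rng(0)
n = 5000
m_in = rng.uniform(20.0, 26.0, n)          # input F606W magnitudes
dpos = rng.exponential(0.2, n)             # recovered position offset [pix]
dm606 = rng.normal(0.0, 0.3, n)            # recovered-minus-input, F606W
dm814 = rng.normal(0.0, 0.3, n)            # recovered-minus-input, F814W

recovered = (dpos < 0.5) & (np.abs(dm606) < 0.75) & (np.abs(dm814) < 0.75)

bins = np.arange(20.0, 27.0, 1.0)          # 1-mag-wide F606W bins
idx = np.digitize(m_in, bins) - 1
completeness = np.array([recovered[idx == i].mean()
                         for i in range(len(bins) - 1)])

# Bins with completeness < 0.5 would be excluded from the mass function.
usable = completeness >= 0.5
print(completeness.round(2), usable)
```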
We calculate the correspondence between stellar magnitudes, colors, and masses using isochrones from the Dartmouth Stellar Evolution Database (Dotter et al., 2007). We use metallicity, distance, and extinction values from the compilation of cluster properties by Harris (1996, 2010 edition) and ages from Marín-Franch et al. (2009). The comparisons we present in this part of our analysis are among clusters of similar relative ages and so independent of the more uncertain absolute age. To optimize the isochrone fit, we fix metallicity and age to the literature values while we vary distance and extinction. The full set of adopted parameters is listed in Table LABEL:Parameters and described in more detail below.
To exclude contaminating field stars, we define a ridgeline in each CMD and remove stars that are far from this ridgeline in color. We define the position of the ridgeline at the magnitude of each star using the median F606W-F814W color of stars in a 0.1 magnitude bin centered on that magnitude. We exclude stars more than $3\sigma$ from the ridgeline from subsequent analysis, where $\sigma$ is the standard deviation of the F606W-F814W color in the same bin, calculated separately in the blue and red directions.
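The ridgeline rejection just described can be sketched as below, with the one-sided standard deviations approximated by the RMS of the residuals on each side of the median color. The function and the demo data are illustrative, not the survey photometry:

```python
import numpy as np

# Sketch of the ridgeline field-star rejection: for each star, take the
# median F606W-F814W color of stars in a 0.1-mag F606W bin centered on
# it, estimate sigma separately on the blue and red sides (one-sided
# RMS about the median), and drop stars more than nsig*sigma away.
def ridgeline_clip(m606, color, half_width=0.05, nsig=3.0):
    keep = np.ones(len(m606), dtype=bool)
    for i in range(len(m606)):
        sel = np.abs(m606 - m606[i]) < half_width
        med = np.median(color[sel])
        resid = color[sel] - med
        d = color[i] - med
        side = resid[resid <= 0] if d <= 0 else resid[resid >= 0]
        sigma = np.sqrt(np.mean(side**2)) if len(side) > 1 else np.inf
        keep[i] = np.abs(d) <= nsig * sigma
    return keep

# Tiny demo: a tight synthetic sequence plus one planted interloper.
rng = np.random.default_rng(1)
m_demo = np.linspace(20.0, 25.0, 501)
c_demo = 0.5 + 0.02 * rng.standard_normal(501)
c_demo[250] = 1.5                      # planted field-star interloper
keep = ridgeline_clip(m_demo, c_demo, half_width=0.2)
print(f"kept {keep.sum()}/501; outlier kept: {bool(keep[250])}")
```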
To explore the sensitivity of our results to parameter choices, we calculate the mass functions using a variety of plausible parameters. First we use the model isochrones to determine the distance, $D_{I}$, and extinction, $E(B-V)_{I}$, using a least squares fit of the isochrone to the ridgeline within the magnitude range $m_{F606W,MSTO}-2<m_{F606W}<m_{F606W,max}$ where $m_{F606W,MSTO}$ is the F606W magnitude of the main sequence turnoff (MSTO) from Marín-Franch et al. (2009) and $m_{F606W,max}$ is the magnitude limit set by the 0.5 completeness criterion and the limit at $m_{F606W}=25.5$. Second, we use the published values of the distances, $D_{H}$, and extinctions, $E(B-V)_{H}$, from Harris (1996, 2010 edition). Finally, we define an empirical distance, $D_{E}$, and extinction, $E(B-V)_{E}$, by fitting the ridgeline of each cluster to the ridgeline of the most populated cluster in the sample (NGC 5904) and using the least squares fit to the isochrone for NGC 5904. To calculate the reddening, we use $A_{\lambda}/E(B-V)$ values from Sirianni et al. (2005). We use [$\alpha$/Fe]=0.2 for the isochrones. Figure 1 contains the CMDs for the clusters using absolute magnitudes determined by $D_{I}$ and $E(B-V)_{I}$ with the ridgelines and isochrones overplotted. It is clear from the figure that the isochrone for NGC 6535 does not fit very well along the giant branch, but we performed our analysis using a variety of isochrones and were unable to find one that agreed well with the data. Fits that are slightly better have parameters that are far from the accepted values for this cluster. We show in Figure 2 the CMDs and corresponding PDMFs for each of the isochrone options we explored for this cluster. The results are sufficiently similar for all of the options that none of our conclusions are affected by the choices of these parameters, within the ranges explored. 
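The least-squares determination of $D_{I}$ and $E(B-V)_{I}$ can be sketched as a grid search over distance modulus and reddening, shifting a fixed isochrone and comparing its color to the ridgeline. The extinction coefficients, the toy isochrone, and the grid ranges below are placeholders (the paper uses the Sirianni et al. 2005 coefficients and Dartmouth isochrones):

```python
import numpy as np

# Sketch: fit distance modulus mu and reddening E(B-V) by minimizing the
# squared color offset between the shifted isochrone and the ridgeline.
A606, A814 = 2.8, 1.8          # A_lambda / E(B-V), illustrative numbers

def shifted(iso_m, iso_color, mu, ebv):
    """Apparent magnitude and reddened color for a given mu, E(B-V)."""
    return iso_m + mu + A606 * ebv, iso_color + (A606 - A814) * ebv

def fit_mu_ebv(ridge_m, ridge_c, iso_m, iso_color, mus, ebvs):
    best, best_chi2 = None, np.inf
    for mu in mus:
        for ebv in ebvs:
            m, c = shifted(iso_m, iso_color, mu, ebv)
            c_at = np.interp(ridge_m, m, c)   # isochrone color at ridge mags
            chi2 = np.sum((ridge_c - c_at) ** 2)
            if chi2 < best_chi2:
                best, best_chi2 = (mu, ebv), chi2
    return best

# Demo: a toy curved isochrone, "observed" with mu=14.0 and E(B-V)=0.4.
iso_abs = np.linspace(2.0, 10.0, 200)
x = iso_abs - 2.0
iso_color = 0.3 + 0.05 * x + 0.01 * x**2
ridge_m, ridge_c = shifted(iso_abs, iso_color, 14.0, 0.4)
mu_fit, ebv_fit = fit_mu_ebv(ridge_m, ridge_c, iso_abs, iso_color,
                             np.arange(13.0, 15.01, 0.1),
                             np.arange(0.0, 0.81, 0.05))
print(f"recovered mu = {mu_fit:.1f}, E(B-V) = {ebv_fit:.2f}")
```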
The results that follow use $D_{I}$ and $E(B-V)_{I}$ with metallicity, age, and [$\alpha$/Fe] fixed to literature values for all clusters.
We present the PDMFs for these clusters in Figure 3. The clear result is that unlike those of the comparison clusters, the PDMF of NGC 6535 has a positive slope.
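The "slope" of a PDMF is conventionally quoted as the power-law index $\alpha$ in $dN/dM\propto M^{\alpha}$, fitted as a straight line in log-log space. The sketch below shows the standard conversion from binned counts to $dN/dM$; the counts are made-up numbers chosen to rise toward higher mass, i.e. a positive slope like that found for NGC 6535, not the measured star counts:

```python
import numpy as np

# Convert counts in logarithmic mass bins to dN/dM and fit a power law.
# The counts are hypothetical, chosen to illustrate a positive slope.
m_edges = np.logspace(np.log10(0.3), np.log10(0.8), 8)   # solar masses
counts = np.array([30, 38, 47, 60, 74, 92, 115], float)  # hypothetical

m_mid = np.sqrt(m_edges[:-1] * m_edges[1:])   # geometric bin centers
dNdM = counts / np.diff(m_edges)

alpha, lognorm = np.polyfit(np.log10(m_mid), np.log10(dNdM), 1)
print(f"fitted power-law index alpha = {alpha:+.2f}")
```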
II.2. Other Bottom-Light Clusters
The PDMF of NGC 6535 is unusual, but not without precedent. The globular clusters NGC 6712 (Andreuzzi et al., 2001), Pal 5 (Koch et al., 2004), NGC 6218 (de Marchi et al., 2006), NGC 2298 (de Marchi & Pulone, 2007) and NGC 6366 (Paust et al., 2009) have all been found to have bottom-light stellar mass functions. Of these, NGC 6218, NGC 6366, and NGC 6712 are within 5 kpc of the galactic center, as is NGC 6535 (Harris, 1996, 2010 edition), suggesting that gravitational interactions may cause clusters in this region to lose many of their low-mass stars (Paust et al., 2009). Three of these clusters, NGC 2298, NGC 6218 and NGC 6366, are included in the ACS Survey of Galactic Globular Clusters. We perform our analysis on these clusters as in §II.1. We plot the CMDs of these clusters in Figure 4 and compare the PDMFs of these clusters with that of NGC 6535 in Figure 5. We find that like NGC 6535, these bottom-light clusters have PDMFs with positive power-law slopes. We present the parameters used for the fits in Table LABEL:BLParameters.
II.3. Fainter Stars
Below our self-imposed magnitude cutoff in some of the clusters, including NGC 6535, the mass function seems to increase dramatically. This is interesting because of the potential of these faint sources to explain this cluster’s elevated dynamical mass-to-light ratio. A visual inspection of the images of NGC 6535 shows that some of these sources, particularly those with very blue colors, appear to be spurious, which is the original motivation for our magnitude cutoff. However, we now explore the nature of that population in more detail. A simple color selection that removes the faint blue sources eliminates most of the sources that do not have visual counterparts, but the upturn in the mass function remains to some degree.
To estimate the number of faint sources in the NGC 6535 images in a more quantitative, well-justified way, we match sources between the Anderson et al. (2008) catalog and those found by running SExtractor on the images. We find that a slight upturn in the mass function for $M<0.3M_{\odot}$ remains, but we have not corrected for contamination by background galaxies and Galactic stars. Even so, the dramatic rise seen in the Anderson et al. (2008) sources is removed, though this could be due at least in part to incompleteness in the SExtractor sources. Although a high number of very faint stars could have helped resolve the discrepancy between the observed mass function of NGC 6535 and its dynamical mass-to-light ratio, we find no such rise for $M<0.3M_{\odot}$.
III. DISCUSSION
Taking into account evaporation rates and tidal shocks, Gnedin & Ostriker (1997) predict the lifetimes of globular clusters. As pointed out by de Marchi et al. (2006), different studies of destruction rates are not always consistent and depend on parameters that may not be well-constrained. Nevertheless, we compare the average of the predicted lifetimes of the comparison clusters from §II.1, 35.5 Gyr, to the average of the predicted lifetimes of the bottom-light clusters from §II.2, 11.6 Gyr. It is not a large interpretative leap to suggest that the dynamics of the bottom-light clusters may be grossly affected by external processes. If these dynamical effects are the sole cause of the high dynamical mass-to-light ratio of NGC 6535, the other bottom-light clusters may have similarly high dynamical mass-to-light ratios. Of the bottom-light clusters, NGC 6218, NGC 6366, NGC 6535 and NGC 6712 are included in Table 13 of the study of McLaughlin & van der Marel (2005), which provides mass-to-light ratios calculated using the dynamical masses corresponding to a Wilson model ($\Upsilon_{dyn}$) and mass-to-light ratios calculated by stellar population models ($\Upsilon_{pop}$). In Figure 6 we plot the ratio $\Upsilon_{dyn}/\Upsilon_{pop}$ vs. the velocity dispersion for all of the included Milky Way clusters, highlighting the clusters with bottom-light PDMFs. NGC 6535 is the only bottom-light cluster in this sample for which the ratio $\Upsilon_{dyn}/\Upsilon_{pop}$ exceeds 1, and it therefore does not always follow that such clusters will have their velocity dispersion measurements and subsequent dynamical mass estimates artificially inflated.
We conclude that NGC 6535 has a bottom-light PDMF similar to those of other globular clusters that may have experienced significant dynamical evolution; we may even conclude that it has experienced strong external influences that led to the unusual PDMF. We cannot yet conclude, however, that those effects have artificially inflated the dynamical mass estimate.
Another cause for concern in the analysis of NGC 6535 is the large velocity anisotropy, $\sigma_{t}/\sigma_{r}=0.79$, measured in a recent proper motion study of 22 clusters (Watkins et al., 2015). This anisotropy is the largest measured in their sample and could, in principle, invalidate a generalized mass estimator. However, the exploration by Walker et al. (2009) of the mass estimator we use and that of Wolf et al. (2010) of related estimators suggest that this level of anisotropy should not result in sufficiently large errors to reconcile the dynamical mass-to-light ratio and the PDMF. They find uncertainties in the mass estimators on the order of a few tens of percent for a wide range of systems. Furthermore, the proper motion measurements of the velocity dispersion presented by Watkins et al. (2015) confirm the spectral, line-of-sight velocity measurement (Zaritsky et al., 2014) and demonstrate that the spectral measurement was not distorted by contamination from binary stars, which would not have a comparable impact on proper motions. We conclude that the large velocity anisotropy is cause to be concerned, but does not directly translate to the large mass discrepancy between the dynamical mass estimator and a bottom-light PDMF.
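For context, dispersion-based estimators of the kind examined by Walker et al. (2009) and Wolf et al. (2010) have the generic form $M_{1/2}\simeq 4\,\sigma_{\rm los}^{2}R_{e}/G$ (the Wolf et al. version, giving the mass within the 3D half-light radius in terms of the projected half-light radius $R_{e}$). The sketch below uses illustrative placeholder numbers, not the measured values for NGC 6535:

```python
# Sketch of a Wolf et al. (2010)-type half-light mass estimator:
# M_1/2 ~ 4 * sigma_los**2 * R_e / G.  Input numbers are hypothetical.
G = 4.301e-3          # pc (km/s)^2 / Msun

def wolf_mass(sigma_los_kms, R_e_pc):
    """Mass within the half-light radius, Wolf et al. (2010) estimator."""
    return 4.0 * sigma_los_kms**2 * R_e_pc / G

sigma = 2.5           # line-of-sight dispersion, km/s (hypothetical)
R_e = 1.5             # projected half-light radius, pc (hypothetical)
M_half = wolf_mass(sigma, R_e)
print(f"M_1/2 ~ {M_half:.2e} Msun")
```

Because $M_{1/2}\propto\sigma^{2}$, even a few tens of percent of systematic error in the dispersion cannot produce the factor-level discrepancy discussed above.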
If one concludes that the $\Upsilon_{*,10}$ estimate is accurate, then the cause of such a large value cannot be low-luminosity stars, barring bizarre behavior of the mass function at even lower masses than observed here. Instead, NGC 6535 could contain a large amount of mass in stellar remnants or dark matter. The overabundance of stellar remnants in this cluster relative to others could point to an unusually top-heavy IMF or again to strong dynamical evolution. The presence of dark matter in NGC 6535 would require an explanation for why it is not present in other, similar clusters. Unfortunately, our data do not provide additional constraints on these possibilities.
In interpreting the high value of $\Upsilon_{*,10}$, we have focused on accounting for unseen mass, but not yet for the possibility of missing luminosity. Perhaps NGC 6535 is underluminous because it is missing a fraction of highly luminous stars. To explore this possibility, in Figure 7 we compare the completeness-corrected number of stars in magnitude-wide bins along the giant branch and horizontal branch among the clusters of similar age. We normalize between clusters by taking ratios of the number of stars in each bin to the number of stars in a bin defined by $M_{F606W,MSTO}<M_{F606W}<M_{F606W,MSTO}+1$. We find that NGC 6535 is not underpopulated in luminous stars.
Based on the open questions regarding this cluster and its unusual PDMF, we conclude that it is best to exclude NGC 6535 from current discussions regarding the possibility of IMF variations among clusters. Because this cluster was the only old cluster and the only Milky Way cluster in the high $\Upsilon_{*,10}$ population, its removal reopens questions about whether the differences in $\Upsilon_{*,10}$ are due to age or host galaxy and whether there are unknown age-dependent systematic errors. Some of these concerns may be alleviated with observations of another old cluster, NGC 2257, that appears, on the basis of less precise velocity dispersion measurements (McLaughlin & van der Marel, 2005), to have a high value of $\Upsilon_{*,10}$.
The difficulties experienced with this one high $\Upsilon_{*,10}$ cluster may cause one to question the results for the other high $\Upsilon_{*,10}$ clusters. However, the other clusters in that set are less likely to have experienced significant dynamical evolution because they are younger and reside in lower density environments outside of the Milky Way. Nevertheless, measuring their PDMFs is critical to determining whether the high $\Upsilon_{*,10}$ values are truly due to a bottom-heavy IMF, rather than, for example, stellar remnants or dark matter. Unfortunately, comparable observations for these clusters are more difficult than for NGC 6535 because of the larger ($>8\times$) distances. IMF studies based on resolved stars have been done at these distances (e.g. Kalirai et al. 2013) but they require extremely deep Hubble Space Telescope observations.
IV. SUMMARY
We measure the PDMF of NGC 6535, the only nearby cluster with a large dynamically measured mass-to-light ratio (Zaritsky et al., 2014), using published HST data to determine whether the large mass-to-light ratio is due to a bottom-heavy IMF. We compare the PDMF to those of other globular clusters of similar age and metallicity to minimize the potential for discrepancies due to the modeling of the stellar populations. We find that the PDMF of NGC 6535 is unusually bottom-light, which exacerbates the discrepancy between the mass function and the measured mass-to-light ratio, and conclude that the large apparent dynamical mass-to-light ratio does not indicate a bottom-heavy IMF in this cluster.
To explore this discrepancy further, we compare the PDMF of NGC 6535 to those of three other bottom-light clusters. We find that the PDMFs are quite similar, suggesting that these clusters may have experienced similar histories. Paust et al. (2009) suggested, on the basis of the proximity of some of these clusters to the galactic bulge, that tidal stripping of low-mass stars, which is aided by mass segregation, could lead to accelerated loss of low-mass stars. NGC 6535 is also near the galactic bulge, and therefore the PDMF we present supports this scenario. This association of NGC 6535 with strong external effects leads to a natural supposition that its internal kinematics are sufficiently distorted by this interaction to affect the mass estimate. However, the mass estimates of other bottom-light clusters are not artificially inflated, thereby demonstrating that whatever is happening to these clusters does not necessarily lead to inflated mass estimates. The relatively high velocity anisotropy of NGC 6535 (Watkins et al., 2015) is also a cause for concern, although investigations by Walker et al. (2009) and Wolf et al. (2010) suggest that this should not affect the mass estimation enough to explain the discrepancy between the dynamical mass-to-light ratio and the PDMF.
All of this leads to a number of unresolved open questions regarding NGC 6535, and therefore this cluster is not suitable for exploring whether the IMF is universal among stellar clusters. The dynamical evolution experienced by NGC 6535 is unlikely to afflict the other known high mass-to-light ratio clusters because they are not near the galactic bulge. The disqualification of NGC 6535 from the sample is unfortunate in that it was the sole old and sole Milky Way cluster in the set of apparently high mass-to-light ratio clusters. Its removal therefore reintroduces the possibility that the $\Upsilon_{*,10}$ differences are caused by age or host galaxy. Observations of the remaining clusters in this population using methods that do not rely on dynamical mass calculations are needed to determine whether the high dynamically derived $\Upsilon_{*,10}$ values are due to a bottom-heavy IMF, other unseen mass, or simply reflect some yet unappreciated systematic error.
This work is based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). We thank Ata Sarajedini for assistance in understanding the artificial star catalogs from the ACS Survey of Galactic Globular Clusters. We thank the anonymous referee for providing useful comments.
Graph classes with linear Ramsey numbers
Some results presented in this paper appeared in the extended abstract [3] published in the proceedings of the 29th International Workshop on Combinatorial Algorithms, IWOCA 2018.
Bogdan Alecu
Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK. Email: [email protected]
Aistis Atminas
Department of Mathematical Sciences, Xi’an Jiaotong-Liverpool University, 111 Ren’ai Road, Suzhou 215123, China. Email: [email protected]
Vadim Lozin
Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK. Email: [email protected]
Viktor Zamaraev
Department of Computer Science, University of Liverpool, Ashton Building, Ashton Street, Liverpool,
L69 3BX, UK. Email: [email protected]
Abstract
The Ramsey number $R_{X}(p,q)$ for a class of graphs $X$ is the minimum $n$ such that
every graph in $X$ with at least $n$ vertices has either a clique of size $p$ or
an independent set of size $q$. We say that
Ramsey numbers are linear in $X$ if there is a constant $k$ such that $R_{X}(p,q)\leq k(p+q)$ for all $p,q$.
In the present paper we conjecture that if $X$ is a hereditary class defined by finitely many forbidden induced subgraphs,
then Ramsey numbers are linear in $X$ if and only if the co-chromatic
number is bounded in $X$. We prove the “only if” part of this conjecture and verify the “if” part for a variety of
classes. We also apply the notion of linearity to bipartite Ramsey numbers and reveal a number of similarities and differences
between the bipartite and non-bipartite case.
1 Introduction
According to Ramsey’s Theorem [18], for all natural $p$ and $q$
there exists a minimum number $R(p,q)$ such that every graph with at least $R(p,q)$ vertices has either a clique of size $p$ or an independent set of size $q$.
The exact values of Ramsey numbers are known only for small values of $p$ and $q$.
However, with the restriction to
specific classes of graphs, Ramsey numbers can be determined for all $p$ and $q$.
In particular, in [20] this problem was solved for planar graphs, while in [5]
it was solved for line graphs, bipartite graphs, perfect graphs, $P_{4}$-free graphs and
some other classes.
These studies reveal, in particular, that different classes have different rates of growth of Ramsey numbers.
In the present paper, we denote the Ramsey numbers restricted to a class $X$ by $R_{X}(p,q)$ and focus
on classes with a smallest speed of growth of $R_{X}(p,q)$. Clearly, $R_{X}(p,q)$ cannot be smaller than
the minimum of $p$ and $q$. We say that Ramsey numbers are linear in $X$ if there is a constant $k$
such that $R_{X}(p,q)\leq k(p+q)$ for all $p,q$.
All classes in this paper are hereditary, i.e. closed under taking induced subgraphs.
It is well known that a class of graphs is hereditary if and only if it can be characterized
in terms of minimal forbidden induced subgraphs. If the number of minimal forbidden induced subgraphs
for a class $X$ is finite, we say that $X$ is finitely defined.
It is not difficult to see that all classes of bounded co-chromatic number have linear Ramsey numbers, where
the co-chromatic number of a graph $G$ is the minimum $k$ such that the vertex set of $G$ can be partitioned
into $k$ subsets each of which is either a clique or an independent set. Indeed, if the co-chromatic number is at most $k$
and $|V(G)|\geq k(p+q)$, then some part of the partition has at least $p+q$ vertices and hence contains a clique of size $p$
or an independent set of size $q$. Unfortunately, as we show in Section 2,
this is not an if and only if statement in general. We conjecture, however, that in the universe of finitely defined classes the two notions coincide.
Conjecture 1.
A finitely defined hereditary class is of linear Ramsey numbers if and only if it is of bounded co-chromatic number.
In [7], it was conjectured that a finitely defined class $X$ has bounded co-chromatic number if and only if
the set of minimal forbidden induced subgraphs for $X$ contains a $P_{3}$-free graph, the complement of a $P_{3}$-free graph,
a forest (i.e. a graph without cycles) and the complement of a forest. Naturally, if this conjecture is true, we expect, following Conjecture 1:
Conjecture 2.
A finitely defined class $X$ is of linear Ramsey numbers if and only if the set of minimal forbidden induced subgraphs for $X$ contains a $P_{3}$-free graph, the complement of a $P_{3}$-free graph,
a forest and the complement of a forest.
In Section 3, we prove the “only if” part of Conjecture 2. In other words, we show that in the universe of finitely defined classes,
the property of a class $X$ having linear Ramsey numbers lies in between that of $X$ having bounded co-chromatic number and that of $X$ avoiding the specified induced subgraphs.
In Section 4, we focus on the “if” part of Conjecture 2 and verify it for a variety of classes defined by small forbidden induced subgraphs.
Moreover, for all the considered classes we derive exact values of the Ramsey numbers.
In Section 5, we extend the notion of linearity to bipartite Ramsey numbers and show that some of the results obtained for non-bipartite numbers
can be extended to the bipartite case as well. However, in general, the situation with linear bipartite Ramsey numbers seems to be more complicated
and we restrict ourselves to a weaker analog of Conjecture 2, which is also verified for some classes of bipartite graphs.
In the rest of the present section, we introduce basic terminology and notation.
All graphs in this paper are finite, undirected, without loops and multiple edges. The vertex set and the edge set of a graph $G$
are denoted by $V(G)$ and $E(G)$, respectively. For a vertex $x\in V(G)$ we denote by $N(x)$ the neighbourhood of $x$, i.e. the set of
vertices of $G$ adjacent to $x$. The degree of $x$ is $|N(x)|$.
We say that $x$ is complete to a subset $U\subset V(G)$ if $U\subseteq N(x)$ and anticomplete to $U$ if $U\cap N(x)=\emptyset$.
A subgraph of $G$ induced by a subset of vertices $U\subseteq V(G)$ is denoted $G[U]$.
By $\overline{G}$ we denote the complement of $G$ and call it co-$G$.
A clique in a graph is a subset of pairwise adjacent vertices and an independent set is a subset of pairwise non-adjacent vertices.
For a graph $G$, let $\alpha(G)$ denote the independence number of $G$, $\omega(G)$ the clique number, $\chi(G)$ the chromatic number and $z(G)$ the co-chromatic number.
By $K_{n}$, $C_{n}$ and $P_{n}$ we denote a complete graph, a chordless cycle and a chordless path with $n$ vertices, respectively.
Also, $K_{n,m}$ is a complete bipartite graph with parts of size $n$ and $m$, and $K_{1,n}$ is a star.
A disjoint union of two graphs $G$ and $H$ is denoted $G+H$. In particular, $pG$ is a disjoint union of $p$ copies of $G$.
If a graph $G$ does not contain induced subgraphs isomorphic to a graph $H$, then we say that $G$ is $H$-free and call $H$ a forbidden induced subgraph for $G$.
In case of several forbidden induced subgraphs we list them in parentheses.
A bipartite graph is a graph whose vertices can be partitioned into two independent sets, and a split graph is a graph whose vertices can be partitioned
into an independent set and a clique. A graph is bipartite if and only if it is free of odd cycles, and a graph is a split graph if and only if it is $(C_{4},2K_{2},C_{5})$-free [11].
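The split-graph characterization just quoted can be checked exhaustively for small graphs. The following sketch (plain Python, standard library only; the helper names are ours) verifies, for every graph on 5 vertices, that being a split graph coincides with being $(C_{4},2K_{2},C_{5})$-free:

```python
from itertools import combinations

def is_split(n, edges):
    """Brute force: can {0,...,n-1} be partitioned into a clique and an independent set?"""
    E = {frozenset(e) for e in edges}
    adj = lambda u, v: frozenset((u, v)) in E
    for r in range(n + 1):
        for clique in combinations(range(n), r):
            rest = [v for v in range(n) if v not in clique]
            if all(adj(u, v) for u, v in combinations(clique, 2)) and \
               not any(adj(u, v) for u, v in combinations(rest, 2)):
                return True
    return False

def has_induced(n, edges, k, edge_count, degseq):
    """Look for an induced k-vertex subgraph with the given edge count and
    sorted degree sequence; this identifies C4, 2K2 and C5 uniquely."""
    E = {frozenset(e) for e in edges}
    for S in combinations(range(n), k):
        sub = [frozenset(p) for p in combinations(S, 2) if frozenset(p) in E]
        degs = sorted(sum(1 for e in sub if v in e) for v in S)
        if len(sub) == edge_count and degs == degseq:
            return True
    return False

n = 5
pairs = list(combinations(range(n), 2))
for mask in range(1 << len(pairs)):          # all 1024 graphs on 5 vertices
    edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
    free = not (has_induced(n, edges, 4, 4, [2, 2, 2, 2])         # C4
                or has_induced(n, edges, 4, 2, [1, 1, 1, 1])      # 2K2
                or has_induced(n, edges, 5, 5, [2, 2, 2, 2, 2]))  # C5
    assert free == is_split(n, edges)
print("split = (C4,2K2,C5)-free on all 5-vertex graphs")
```
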
2 Linear Ramsey numbers and related notions
As we observed in the introduction, the notion of linear Ramsey numbers has ties with bounded co-chromatic number,
and we believe that in the universe of finitely defined classes, the two notions are equivalent. In the present section,
we first show that this equivalence is not valid for general hereditary classes, and then discuss the relationship between
linear Ramsey numbers and some other notions that appear in the literature.
In order to show that Conjecture 1 is not valid for general hereditary classes, we consider the Kneser graph $KG_{a,b}$:
it has as vertices the $b$-subsets of a set of size $a$, and two vertices are adjacent if and only if the corresponding subsets are disjoint.
A well-known result due to Lovász says that, if $a\geq 2b$, then the chromatic number $\chi(KG_{a,b})$ is $a-2b+2$ [14].
In the following theorem, we denote by $X$ the hereditary closure of the family of Kneser graphs $KG_{3n,n}$, $n\in\mathbb{N}$, i.e. $X=\{H:H\text{ is an induced subgraph of }KG_{3n,n}\text{ for some }n\in\mathbb{N}\}$.
Theorem 1.
The class $X$ has linear Ramsey numbers and unbounded co-chromatic number.
Proof.
First, we note that, by Lovász's result stated above, $\chi(KG_{3n,n})=3n-2n+2=n+2$. Also, it is not hard to see that
the size of the largest clique in $KG_{3n,n}$ is 3. It follows that the co-chromatic number of $KG_{3n,n}$ is at least $\frac{n+2}{3}$. As a result,
the co-chromatic number is unbounded in this class.
Now consider any induced subgraph $H$ of $KG_{3n,n}$. We will show that $\alpha(H)\geq\frac{|V(H)|}{3}$. Indeed, the vertices of the Kneser graph in this case
are $n$-element subsets of $\{1,2,\ldots,3n\}$. For each $i\in\{1,2,\ldots,3n\}$ let $V_{i}$ be the set of vertices of $H$ containing element $i$.
Then, as each vertex is an $n$-element subset, it follows that $\sum_{i=1}^{3n}|V_{i}|=n\cdot|V(H)|$. Hence, by the Pigeonhole Principle, there is an $i$ such that
$|V_{i}|\geq\frac{|V(H)|}{3}$. As $V_{i}$ is an independent set, it follows that $\alpha(H)\geq\frac{|V(H)|}{3}$. This implies that for any $H\in X$ we have $|V(H)|\leq 3\alpha(H)\leq 3(\alpha(H)+\omega(H))$,
and hence the Ramsey numbers are linear in the class $X$.
∎
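The counting argument in the proof can be replayed computationally for the smallest nontrivial case $n=2$, i.e. $KG_{6,2}$. The sketch below (standard library only; the helper names are ours) confirms that the clique number is 3 and that some $V_{i}$ is an independent set covering at least a third of the vertices:

```python
from itertools import combinations

def kneser(a, b):
    """Kneser graph KG_{a,b}: vertices are the b-subsets of {1,...,a},
    two of them adjacent iff the subsets are disjoint."""
    verts = [frozenset(s) for s in combinations(range(1, a + 1), b)]
    edges = {frozenset((u, v)) for u, v in combinations(verts, 2) if not u & v}
    return verts, edges

def clique_number(verts, edges):
    """Smallest-first brute force; stops as soon as no clique of size r exists."""
    best = 1 if verts else 0
    for r in range(2, len(verts) + 1):
        if any(all(frozenset((u, v)) in edges for u, v in combinations(S, 2))
               for S in combinations(verts, r)):
            best = r
        else:
            break
    return best

verts, edges = kneser(6, 2)               # KG_{3n,n} with n = 2
assert clique_number(verts, edges) == 3   # at most 3 pairwise disjoint 2-subsets of a 6-set

# The pigeonhole step of the proof: V_i = {vertices containing element i}.
V = [[v for v in verts if i in v] for i in range(1, 7)]
big = max(V, key=len)
assert 3 * len(big) >= len(verts)                    # |V_i| >= |V(H)|/3
assert all(u & v for u, v in combinations(big, 2))   # pairwise intersecting => independent
print("alpha(KG_{6,2}) >=", len(big), "out of", len(verts), "vertices")
```
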
We now turn to one more notion, which is closely related to the growth of Ramsey numbers. This is the notion of homogeneous subgraphs that appears
in the study of the Erdős-Hajnal conjecture [10].
We will say that graphs in a class $X$ have linear homogeneous subgraphs if there exists a constant $c=c(X)$ such that
$\max\{\alpha(G),\omega(G)\}\geq c\cdot|V(G)|$ for every $G\in X$.
Proposition 1.
Let $X$ be a class of graphs. Then graphs in $X$ have linear homogeneous subgraphs if and only if Ramsey numbers are linear in $X$. More generally, for any $0<\delta\leq 1$, the following two statements are equivalent:
•
There is a constant $A$ such that $\max\{\alpha(G),\omega(G)\}\geq A\cdot|V(G)|^{\delta}$ for every $G\in X$.
•
There is a constant $B$ such that $R_{X}(p,q)\leq B(p+q)^{\frac{1}{\delta}}$.
Proof.
The second claim reduces to the first one when $\delta=1$, so we just prove the stronger claim.
For the first implication, suppose there exists a constant $A$ such that $\max\{\alpha(G),\omega(G)\}\geq A\cdot|V(G)|^{\delta}$ for all $G\in X$.
Let $H\in X$, let $p,q\in\mathbb{N}$, and suppose that $|V(H)|\geq\left(\frac{p+q}{A}\right)^{\frac{1}{\delta}}$.
Then $\max\{\alpha(H),\omega(H)\}\geq A\cdot|V(H)|^{\delta}\geq p+q$, which means that $H$ is guaranteed to have
either a clique of size $p$ or an independent set of size $q$, and this proves the first implication (taking e.g. $B=A^{-\frac{1}{\delta}}$ in the statement of the proposition).
Conversely, suppose there exists a positive constant $B$ such that for any $p,q\in\mathbb{N}$ and
$G\in X$, if $|V(G)|\geq B(p+q)^{\frac{1}{\delta}}$,
then $G$ has either a clique of size $p$ or an independent set of size $q$.
Let $H$ be an arbitrary graph in $X$ and let $t$ be the largest integer such that $|V(H)|\geq 2^{\frac{1}{\delta}}Bt^{\frac{1}{\delta}}=B(t+t)^{\frac{1}{\delta}}$.
By the above assumption, $H$ has a clique or an independent set of size $t$, i.e. $\max\{\alpha(H),\omega(H)\}\geq t$.
Notice that, by the definition of $t$, we have $|V(H)|\leq 2^{\frac{1}{\delta}}B(t+1)^{\frac{1}{\delta}}$, i.e. $|V(H)|^{\delta}\leq 2B^{\delta}(t+1)$. Hence if $t=0$, then $|V(H)|^{\delta}\leq 2B^{\delta}$ and therefore
$\max\{\alpha(H),\omega(H)\}\geq 1\geq\frac{|V(H)|^{\delta}}{2B^{\delta}}\geq\frac{|V(H)|^{\delta}}{4B^{\delta}}$.
On the other hand, if $t\geq 1$, then $|V(H)|^{\delta}\leq 2B^{\delta}(t+1)\leq 4B^{\delta}t$ and therefore
$\max\{\alpha(H),\omega(H)\}\geq\frac{|V(H)|^{\delta}}{4B^{\delta}}$, and putting e.g. $A=\frac{1}{4B^{\delta}}$ concludes the proof.
∎
In particular, the Erdős-Hajnal conjecture can be stated in our terminology as follows:
Conjecture 3.
(Erdős-Hajnal)
Suppose $X$ is a proper hereditary class (that is, not the class of all graphs). Then there are constants $A,k$ such that $R_{X}(p,q)\leq A(p+q)^{k}$ for every $p,q\in\mathbb{N}$, i.e. Ramsey numbers grow at most polynomially in $X$.
Finally, we point out the difference between the notion of Ramsey numbers for classes and the notion of Ramsey numbers of graphs.
Each of them leads naturally to the notion of linear Ramsey numbers, defined differently in the present paper and, for instance, in [12].
In spite of the possible confusion, we use the terminology of Ramsey numbers, and not the terminology of homogeneous subgraphs,
because most of our results deal with the exact value of $R_{X}(p,q)$.
3 Classes with non-linear Ramsey numbers
In this section, we prove the “only if” part of Conjecture 2.
Lemma 1.
For every fixed $k\geq 3$, the class $X_{k}$ of $(C_{3},C_{4},\ldots,C_{k})$-free graphs is not of linear Ramsey numbers.
Proof.
Assume to the contrary that Ramsey numbers for the class $X_{k}$ are linear.
Then, since graphs in $X_{k}$ do not contain cliques of size three, there exists a constant $t=t(k)$
such that any $n$-vertex graph from the class has an independent set of size at least $n/t$.
It is well-known (see e.g. [2]) that $X_{k}$ contains $n$-vertex graphs with independence number $O(n^{1-\epsilon}\ln n)$,
where $\epsilon>0$ depends on $k$, which is smaller than $n/t$ for large $n$.
This contradiction shows that $X_{k}$ is not of linear Ramsey numbers.
∎
Theorem 2.
Let $X$ be a class of graphs defined by a finite set $M$ of forbidden induced subgraphs.
If $M$ does not contain a graph in at least one of the following four classes, then $X$ is not of linear Ramsey numbers:
$P_{3}$-free graphs, the complements of $P_{3}$-free graphs, forests, the complements of forests.
Proof.
It is not difficult to see that a graph is $P_{3}$-free if and only if it is a disjoint union of cliques.
The class of $P_{3}$-free graphs contains the graph $(q-1)K_{p-1}$ with $(q-1)(p-1)$ vertices and with no clique of size $p$ or independent set of size $q$,
and hence this class is not of linear Ramsey numbers. Therefore, if $M$ contains no $P_{3}$-free graph, then $X$ contains all $P_{3}$-free graphs and hence is not of linear Ramsey numbers.
Similarly, if $M$ contains no $\overline{P}_{3}$-free graph, then $X$ is not of linear Ramsey numbers.
Now assume that $M$ contains no forest. Therefore, every graph in $M$ contains a cycle. Since the number of graphs in $M$ is finite,
$X$ contains the class of $(C_{3},C_{4},\ldots,C_{k})$-free graphs for a finite value of $k$ and hence is not of linear Ramsey numbers by Lemma 1.
Applying the same arguments to the complements of graphs in $X$, we conclude that if $M$ contains no co-forest, then $X$ is not of linear Ramsey numbers.
∎
4 Classes with linear Ramsey numbers
In this section, we study classes of graphs defined by forbidden induced subgraphs with 4 vertices and determine Ramsey numbers for several classes in this family
that verify the “if” part of Conjecture 2.
All the eleven graphs on 4 vertices are represented in Figure 1.
Below we list which of these graphs are $P_{3}$-free and which of them are forests (take the complements for $\overline{P}_{3}$-free graphs and for the complements of forests, respectively).
•
$P_{3}$-free graphs: $K_{4}$, $\overline{K}_{4}$, $2K_{2}$, co-diamond, co-claw.
•
Forests: $\overline{K}_{4}$, $2K_{2}$, $P_{4}$, co-diamond, co-paw, claw.
4.1 Claw- and co-claw-free graphs
Lemma 2.
If a (claw,co-claw)-free graph $G$ contains a $\overline{K}_{4}$, then it is $K_{3}$-free.
Proof.
Assume $G$ contains a $\overline{K}_{4}$ induced by $A=\{a_{1},a_{2},a_{3},a_{4}\}$
and suppose by contradiction that $G$ also contains a $K_{3}$ induced by $Z=\{x,y,z\}$.
Let first $A$ be disjoint from $Z$.
To avoid a co-claw, each vertex of $A$ has a neighbour in $Z$ and hence one of the vertices of $Z$ is adjacent to two vertices of $A$, say $x$ is adjacent to $a_{1}$ and $a_{2}$.
Then, to avoid a claw, $x$ has no other neighbours in $A$ and $y$ has a neighbour in $\{a_{1},a_{2}\}$, say $y$ is adjacent to $a_{1}$.
This implies that $y$ is adjacent to $a_{3}$ (else $x,y,a_{1},a_{3}$ induce a co-claw) and similarly $y$ is adjacent to $a_{4}$.
But then $y,a_{1},a_{3},a_{4}$ induce a claw, a contradiction.
If $A$ and $Z$ are not disjoint, they have at most one vertex in common, say $a_{4}=z$.
Again, to avoid a co-claw, each vertex in $\{a_{1},a_{2},a_{3}\}$ has a neighbour in $\{x,y\}$ and hence, without loss of generality, $x$ is adjacent to $a_{1}$ and $a_{2}$.
But then $x,a_{1},a_{2},a_{4}$ induce a claw, a contradiction again.
∎
Lemma 3.
The maximum number of vertices in a (claw,co-claw,$K_{4},\overline{K}_{4}$)-free graph is 9.
Proof.
Let $G$ be a (claw,co-claw,$K_{4},\overline{K}_{4}$)-free graph and let $x$ be a vertex of $G$. Denote by $A$ the set of neighbours and by $B$ the set of non-neighbours of $x$.
Clearly, $A$ contains neither triangles nor anti-triangles, since otherwise either a $K_{4}$ or a claw arises.
Therefore, $A$ has at most 5 vertices, and similarly $B$ has at most 5 vertices.
If $|A|=5$, then $G[A]$ must be a $C_{5}$ induced by vertices, say, $a_{1},a_{2},a_{3},a_{4},a_{5}$ (listed along the cycle).
In order to avoid a claw or $K_{4}$, each vertex of $A$ can be adjacent to at most 2 vertices of $B$, which gives rise to at most 10 edges between $A$ and $B$.
On the other hand, to avoid a co-claw, each vertex of $B$ must be adjacent to at least 3 vertices of $A$. Therefore, $B$ contains at most 3 vertices and hence
$|V(G)|\leq 9$. Similarly, if $|B|=5$, then $|V(G)|\leq 9$.
It remains to show that there exists a (claw,co-claw,$K_{4},\overline{K}_{4}$)-free graph with 9 vertices. This graph can be constructed as follows.
Start with a $C_{8}$ formed by the vertices $v_{1},v_{2},\ldots,v_{8}$. Then create a $C_{4}$ on the even-indexed vertices $v_{2},v_{4},v_{6},v_{8}$ (listed along the cycle) and
a $\overline{C}_{4}$ on the odd-indexed vertices $v_{1},v_{3},v_{5},v_{7}$ (listed along the cycle in the complement).
Finally, add one more vertex adjacent to the odd-indexed vertices. It is now a routine matter to check that the resulting graph is (claw,co-claw,$K_{4},\overline{K}_{4}$)-free.
This graph is known as the Paley graph of order $q=3^{2}$.
∎
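The construction in the proof is small enough to verify by brute force over all $\binom{9}{4}=126$ four-vertex subsets. A sketch of such a check (the function names are ours; induced subgraphs on 4 vertices are identified by edge count and sorted degree sequence, which suffices to distinguish these four graphs):

```python
from itertools import combinations

def build_paley9():
    """The 9-vertex graph from the proof: a C8 on v1..v8, a C4 on the even
    vertices, the complement of a C4 on the odd vertices (i.e. the two
    diagonals), and a ninth vertex joined to the odd vertices."""
    E = {frozenset((i, i % 8 + 1)) for i in range(1, 9)}           # C8
    E |= {frozenset(p) for p in [(2, 4), (4, 6), (6, 8), (8, 2)]}  # C4 on evens
    E |= {frozenset(p) for p in [(1, 5), (3, 7)]}                  # co-C4 on odds
    E |= {frozenset((9, i)) for i in (1, 3, 5, 7)}                 # extra vertex
    return E

def profile(E, S):
    """(edge count, sorted degree sequence) of the subgraph induced by S."""
    sub = [frozenset(p) for p in combinations(S, 2) if frozenset(p) in E]
    return len(sub), tuple(sorted(sum(1 for e in sub if v in e) for v in S))

E = build_paley9()
forbidden = {(6, (3, 3, 3, 3)),   # K4
             (0, (0, 0, 0, 0)),   # complement of K4
             (3, (1, 1, 1, 3)),   # claw
             (3, (0, 2, 2, 2))}   # co-claw
assert all(profile(E, S) not in forbidden
           for S in combinations(range(1, 10), 4))
print("(claw, co-claw, K4, K4-bar)-free on 9 vertices: verified")
```
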
Theorem 3.
For the class $A$ of (claw,co-claw)-free graphs and all $a,b\geq 3$,
$$R_{A}(a,b)=\max\big\{\left\lfloor(5a-3)/2\right\rfloor,\left\lfloor(5b-3)/2\right\rfloor\big\},$$
unless $a=b=4$ in which case $R_{A}(a,b)=10$.
Proof.
According to Lemma 2, the class of (claw,co-claw)-free graphs is the union of three classes:
•
the class $X$ of (claw,$K_{3}$)-free graphs,
•
the class $Y$ of (co-claw,$\overline{K}_{3}$)-free graphs and
•
the class $Z$ of (claw,co-claw,$K_{4},\overline{K}_{4}$)-free graphs.
Clearly, $R_{A}(a,b)=\max\{R_{X}(a,b),R_{Y}(a,b),R_{Z}(a,b)\}$.
Since $K_{3}$ is forbidden in $X$, we have $R_{X}(a,b)=R_{X}(3,b)$. Also, denoting by $B$ the class of claw-free graphs, we conclude that $R_{X}(3,b)=R_{B}(3,b)$.
As was shown in [5], $R_{B}(3,b)=\left\lfloor(5b-3)/2\right\rfloor$.
Therefore, $R_{X}(a,b)=\left\lfloor(5b-3)/2\right\rfloor$. Similarly, $R_{Y}(a,b)=\left\lfloor(5a-3)/2\right\rfloor$.
In the class $Z$, for all $a,b\geq 4$ we have $R_{Z}(a,b)=10$ by Lemma 3.
Moreover, if additionally $\max\{a,b\}\geq 5$, then $R_{Z}(a,b)<\max\{R_{X}(a,b),R_{Y}(a,b)\}$.
For $a=b=4$, we have $R_{Z}(4,4)=10>8=\max\{R_{X}(4,4),R_{Y}(4,4)\}$.
Finally, it is not difficult to see that $R_{Z}(3,b)\leq R_{X}(3,b)$ and $R_{Z}(a,3)\leq R_{Y}(a,3)$, and hence the result follows.
∎
4.2 Diamond- and co-diamond-free graphs
Lemma 4.
If a (diamond,co-diamond)-free graph $G$ contains a $\overline{K}_{4}$, then it is bipartite.
Proof.
Assume $G$ contains a $\overline{K}_{4}$. Let $A$ be any maximal (with respect to inclusion) independent set containing the $\overline{K}_{4}$ and let $B=V(G)-A$.
If $B$ is empty, then $G$ is edgeless (and hence bipartite). Suppose now $B$ contains a vertex $b$. Then $b$ has a neighbour $a$ in $A$ (else $A$ is not maximal) and at most one
non-neighbour (else $a$ and $b$ together with any two non-neighbours of $b$ in $A$ induce a co-diamond).
Assume $B$ has two adjacent vertices, say $b_{1}$ and $b_{2}$. Since $|A|\geq 4$ and each of $b_{1}$ and $b_{2}$ has at most one non-neighbour in $A$, there are
at least two common neighbours of $b_{1}$ and $b_{2}$ in $A$, say $a_{1},a_{2}$. But then $a_{1},a_{2},b_{1},b_{2}$ induce a diamond. This contradiction shows that $B$ is independent and hence
$G$ is bipartite.
∎
Lemma 5.
A co-diamond-free bipartite graph containing at least one edge is either a simplex (a bipartite graph in which every vertex has at most one non-neighbour in the opposite part)
or a $K_{s,t}+K_{1}$ for some $s$ and $t$.
Proof.
Assume $G=(A,B,E)$ is a co-diamond-free bipartite graph containing at least one edge. Then $G$ cannot have two isolated vertices, since otherwise
an edge together with two isolated vertices create an induced co-diamond.
Assume $G$ has exactly one isolated vertex, say $a$, and let $G^{\prime}=G-a$. Then any vertex $b\in V(G^{\prime})$ is adjacent to every vertex in the opposite part of $G^{\prime}$.
Indeed, if $b$ has a non-neighbour $c$ in the opposite part, then $a,b,c$ together with any neighbour of $b$ (which exists because $b$ is not isolated) induce a co-diamond.
Therefore, $G^{\prime}$ is complete bipartite and hence $G=K_{s,t}+K_{1}$ for some $s$ and $t$.
Finally, suppose $G$ has no isolated vertices. Then every vertex $a\in A$ has at most one non-neighbour in $B$, since otherwise any two non-neighbours of $a$ in $B$
together with $a$ and any neighbour of $a$ (which exists because $a$ is not isolated) induce a co-diamond. Similarly, every vertex $b\in B$ has at most one non-neighbour in $A$.
Therefore, $G$ is a simplex.
∎
Lemma 6.
The maximum number of vertices in a (diamond,co-diamond,$K_{4},\overline{K}_{4}$)-free graph is 9.
Proof.
Let $G$ be a (diamond,co-diamond,$K_{4},\overline{K}_{4}$)-free graph and $x$ be a vertex of $G$. Denote by $A$ the set of neighbours and by $B$ the set of non-neighbours of $x$.
Then $G[A]$ is $(P_{3},K_{3})$-free, else $G$ contains either a diamond or a $K_{4}$. Since $G[A]$ is $P_{3}$-free, every connected component of $G[A]$ is a clique and since this graph is $K_{3}$-free,
every connected component has at most 2 vertices. If at least one of the components of $G[A]$ has 2 vertices, the number of components is at most 2 (since otherwise a co-diamond arises),
in which case $A$ has at most 4 vertices. If all the components of $G[A]$ have size 1, the number of components is at most 3 (since otherwise a $\overline{K}_{4}$ arises),
in which case $A$ has at most 3 vertices. Similarly, $B$ has at most 4 vertices and hence $|V(G)|\leq 9$.
To conclude the proof, we observe that the Paley graph of order $q=3^{2}$ described in the proof of Lemma 3 is (diamond,co-diamond,$K_{4},\overline{K}_{4}$)-free.
∎
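The 9-vertex graph from the proof of Lemma 3 can be re-checked against this lemma's forbidden subgraphs in the same brute-force manner; a diamond on 4 vertices has 5 edges and degree sequence $(2,2,3,3)$, a co-diamond has a single edge. A self-contained sketch (the graph is rebuilt inline; variable names are ours):

```python
from itertools import combinations

# The 9-vertex graph from the proof of Lemma 3, rebuilt inline.
E = {frozenset((i, i % 8 + 1)) for i in range(1, 9)}            # C8 on v1..v8
E |= {frozenset(p) for p in [(2, 4), (4, 6), (6, 8), (8, 2),    # C4 on evens
                             (1, 5), (3, 7)]}                   # co-C4 on odds
E |= {frozenset((9, i)) for i in (1, 3, 5, 7)}                  # ninth vertex

forbidden = {(5, (2, 2, 3, 3)),   # diamond (K4 minus an edge)
             (1, (0, 0, 1, 1)),   # co-diamond (an edge plus two isolated vertices)
             (6, (3, 3, 3, 3)),   # K4
             (0, (0, 0, 0, 0))}   # complement of K4
for S in combinations(range(1, 10), 4):
    sub = [frozenset(p) for p in combinations(S, 2) if frozenset(p) in E]
    degs = tuple(sorted(sum(1 for e in sub if v in e) for v in S))
    assert (len(sub), degs) not in forbidden
print("(diamond, co-diamond, K4, K4-bar)-free on 9 vertices: verified")
```
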
Theorem 4.
For the class $A$ of (diamond,co-diamond)-free graphs and $a,b\geq 3$,
$$R_{A}(a,b)=\max\{2a-1,2b-1\},$$
unless $a,b\in\{4,5\}$, in which case $R_{A}(a,b)=10$, and unless $a=b=3$, in which case $R_{A}(a,b)=6$.
Proof.
According to Lemma 4, in order to determine the value of $R_{A}(a,b)$, we analyze this number in three classes:
•
the class $X$ of co-diamond-free bipartite graphs,
•
the class $Y$ of the complements of graphs in $X$ and
•
the class $Z$ of (diamond,co-diamond,$K_{4},\overline{K}_{4}$)-free graphs.
In the class $X$ of co-diamond-free bipartite graphs, $R_{X}(a,b)=2b-1$, since every graph in this class with at least $2b-1$ vertices contains an independent set of size $b$, while the graph $K_{b-1,b-1}$
contains neither an independent set of size $b$ nor a clique of size $a\geq 3$. Similarly, $R_{Y}(a,b)=2a-1$.
In the class $Z$ of (diamond,co-diamond,$K_{4},\overline{K}_{4}$)-free graphs, for all $a,b\geq 4$ we have $R_{Z}(a,b)=10$ by Lemma 6.
Moreover, if additionally $\max\{a,b\}\geq 6$, then $R_{Z}(a,b)<\max\{R_{X}(a,b),R_{Y}(a,b)\}$.
For $a,b\in\{4,5\}$, we have $R_{Z}(a,b)=10>\max\{R_{X}(a,b),R_{Y}(a,b)\}$. Also, $R_{Z}(3,3)=6$ (since $C_{5}\in Z$) and hence $R_{Z}(3,3)>\max\{R_{X}(3,3),R_{Y}(3,3)\}$.
Finally, by direct inspection one can verify that $Z$ contains no $K_{3}$-free graphs with more than 6 vertices and hence for $b\geq 4$ we have $R_{Z}(3,b)\leq R_{X}(3,b)$.
Similarly, for $a\geq 4$ we have $R_{Z}(a,3)\leq R_{Y}(a,3)$. Thus for all values of $a,b\geq 3$, we have $R_{A}(a,b)=\max\{2a-1,2b-1\}$,
unless $a,b\in\{4,5\}$, in which case $R_{A}(a,b)=10$, and unless $a=b=3$, in which case $R_{A}(a,b)=6$.
∎
4.3 $2K_{2}$- and $C_{4}$-free graphs
Theorem 5.
For the class $A$ of ($2K_{2},C_{4}$)-free graphs and all $a,b\geq 3$,
$$R_{A}(a,b)=a+b.$$
Proof.
Let $G$ be a ($2K_{2},C_{4}$)-free graph with $a+b$ vertices. If, in addition, $G$ is $C_{5}$-free, then the three forbidden induced subgraphs ensure that $G$ belongs to the class of split graphs and hence it contains either
a clique of size $a$ or an independent set of size $b$.
If $G$ contains a $C_{5}$, then the remaining vertices of the graph can be partitioned into a clique $U$, whose vertices are complete to the cycle $C_{5}$,
and an independent set $W$, whose vertices are anticomplete to the $C_{5}$ [6]. We have $|U|+|W|=a+b-5$ and hence either $|U|\geq a-2$ or $|W|\geq b-2$. In the first case,
$U$ together with any two adjacent vertices of the cycle $C_{5}$ create a clique of size $a$. In the second case, $W$ together with any two non-adjacent vertices of
the cycle create an independent set of size $b$. This shows that $R_{A}(a,b)\leq a+b$.
For the reverse inequality, we construct a graph $G$ with $a+b-1$ vertices as follows: $G$ consists of a cycle $C_{5}$, an independent set $W$ of size $b-3$ anticomplete to the cycle
and a clique $U$ of size $a-3$ complete to both $W$ and $V(C_{5})$. It is not difficult to see that the size of a maximum clique in $G$ is $a-1$ and the size of a maximum independent set in $G$ is $b-1$.
Therefore, $R_{A}(a,b)\geq a+b$.
∎
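The lower-bound construction can be checked directly for small $a,b$. A brute-force sketch (standard library only; the function names are ours) for $a=4$, $b=5$, confirming clique number $a-1$ and independence number $b-1$:

```python
from itertools import combinations

def extremal(a, b):
    """The (a+b-1)-vertex graph from the proof: a C5, an independent set W of
    size b-3 anticomplete to it, and a clique U of size a-3 complete to both."""
    C5 = list(range(5))
    W = list(range(5, 5 + b - 3))
    U = list(range(5 + b - 3, a + b - 1))
    E = {frozenset((i, (i + 1) % 5)) for i in C5}
    E |= {frozenset(p) for p in combinations(U, 2)}
    E |= {frozenset((u, v)) for u in U for v in C5 + W}
    return C5 + W + U, E

def max_homogeneous(verts, E, clique):
    """Size of a largest clique (clique=True) or independent set (clique=False)."""
    best = 0
    for r in range(1, len(verts) + 1):
        for S in combinations(verts, r):
            if all((frozenset(p) in E) == clique for p in combinations(S, 2)):
                best = r
                break
    return best

a, b = 4, 5
verts, E = extremal(a, b)
assert len(verts) == a + b - 1
assert max_homogeneous(verts, E, True) == a - 1    # clique number is a-1
assert max_homogeneous(verts, E, False) == b - 1   # independence number is b-1
print("extremal graph for a=4, b=5: omega = 3, alpha = 4 on 8 vertices")
```
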
4.4 $2K_{2}$- and diamond-free graphs
To analyze this class, we split it into three subclasses $X,Y$ and $Z$ as follows:
$X$
is the class of $(2K_{2}$,diamond)-free graphs containing a $K_{4}$,
$Y$
is the class of $(2K_{2}$,diamond)-free graphs that do not contain a $K_{4}$ but contain a $K_{3}$,
$Z$
is the class of $(2K_{2}$,diamond)-free graphs that do not contain a $K_{3}$, i.e. the class of $(2K_{2},K_{3})$-free graphs.
We start by characterizing graphs in the class $X$.
Lemma 7.
If a $(2K_{2}$,diamond)-free graph $G$ contains a $K_{4}$, then $G$ is a split graph
partitionable into a clique $C$ and an independent set $I$ such that every vertex of $I$ has at most one neighbour in $C$.
Proof.
Let $G$ be a $(2K_{2}$,diamond)-free graph containing a $K_{4}$. We extend the $K_{4}$ to any maximal (with respect to inclusion) clique and denote it by $C$.
Also, denote $I=V(G)-C$.
Assume a vertex $a\in I$ has two neighbours $b,c$ in $C$. It also has a non-neighbour $d$ in $C$ (else $C$ is not maximal). But then $a,b,c,d$
induce a diamond. This contradiction shows that any vertex of $I$ has at most one neighbour in $C$.
Finally, assume two vertices $a,b\in I$ are adjacent. Since each of them has at most one neighbour in $C$ and $|C|\geq 4$, there are two vertices $c,d\in C$
adjacent neither to $a$ nor to $b$. But then $a,b,c,d$ induce a $2K_{2}$. This contradiction shows that $I$ is independent and completes the proof.
∎
In order to characterize graphs in $Z$, let us say that $G^{*}$ is an extended $G$ (also known as a blow-up of $G$) if $G^{*}$ is obtained from $G$ by replacing the vertices of $G$ with independent sets.
Lemma 8.
If $G$ is a $(2K_{2},K_{3})$-free graph, then it is either bipartite or an extended $C_{5}+K_{1}$.
Proof.
If $G$ is $C_{5}$-free, then it is bipartite, because any cycle of length at least 7 contains an induced $2K_{2}$. Assume now that $G$ contains a $C_{5}$ induced by a set $S=\{v_{0},v_{1},v_{2},v_{3},v_{4}\}$.
To avoid an induced $2K_{2}$ or $K_{3}$, any vertex $u\not\in S$ must be either anticomplete to $S$ or have exactly two neighbours on the cycle, at distance 2 from each other,
i.e. $N(u)\cap S=\{v_{i-1},v_{i+1}\}$ for some $i$ (addition is taken modulo 5). Moreover, if $N(u)\cap S=\{v_{i-1},v_{i+1}\}$ and $N(w)\cap S=\{v_{j-1},v_{j+1}\}$, then
•
if $i=j$ or $|i-j|>1$, then $u$ is not adjacent to $w$, since $G$ is $K_{3}$-free.
•
if $|i-j|=1$, then $u$ is adjacent to $w$, since $G$ is $2K_{2}$-free.
Clearly, every vertex $u\not\in S$, which is anticomplete to $S$, is isolated, and hence $G$ is an extended $C_{5}+K_{1}$.
∎
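Conversely, any extended $C_{5}+K_{1}$ is indeed $(2K_{2},K_{3})$-free, which is easy to confirm by brute force on a sample blow-up. A sketch (the function names and the chosen part sizes are ours):

```python
from itertools import combinations

def extended_c5_plus_k1(sizes, isolated):
    """Blow-up of C5: vertex i of the cycle is replaced by an independent set
    of size sizes[i], consecutive parts are completely joined; plus a number
    of isolated vertices."""
    parts, nxt = [], 0
    for s in sizes:
        parts.append(range(nxt, nxt + s))
        nxt += s
    E = {frozenset((u, v))
         for i in range(5) for u in parts[i] for v in parts[(i + 1) % 5]}
    return list(range(nxt + isolated)), E

def has_profile(verts, E, k, edge_count, degseq):
    """Is there an induced k-vertex subgraph with this edge count and
    sorted degree sequence? (Identifies K3 and 2K2 uniquely.)"""
    for S in combinations(verts, k):
        sub = [frozenset(p) for p in combinations(S, 2) if frozenset(p) in E]
        degs = tuple(sorted(sum(1 for e in sub if v in e) for v in S))
        if (len(sub), degs) == (edge_count, degseq):
            return True
    return False

verts, E = extended_c5_plus_k1([2, 1, 3, 1, 2], isolated=2)
assert not has_profile(verts, E, 3, 3, (2, 2, 2))      # no K3
assert not has_profile(verts, E, 4, 2, (1, 1, 1, 1))   # no induced 2K2
print("extended C5+K1 on", len(verts), "vertices is (2K2, K3)-free")
```
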
Now we turn to graphs $G$ in the class $Y$ and characterize them through a series of claims.
(1)
Any two triangles in $G$ are vertex disjoint. To see this, note that two triangles intersecting in two vertices
induce either a $K_{4}$ or a diamond. If two triangles induced by say $x_{1},y_{1},z$ and $x_{2},y_{2},z$ intersect in a single vertex,
there must be another edge between them, say $x_{1}x_{2}$, since otherwise we obtain an induced $2K_{2}$. But then $x_{1},x_{2},y_{1},z$ induce two triangles intersecting in two vertices.
(2)
For any edge $xy$ and a triangle $T$ containing neither $x$ nor $y$, $x$ and $y$ each have exactly one neighbour in $T$.
Indeed, $x$ and $y$ each have at most one neighbour, since otherwise we obtain two triangles intersecting in two vertices.
Moreover, if one of them does not have a neighbour, an induced $2K_{2}$ appears. It follows, in particular, that the edges between two triangles form a matching.
(3)
$G$ contains at most 3 triangles. To see this, suppose for a contradiction that $a_{i},b_{i},c_{i},0\leq i\leq 3$ induce 4 triangles.
Let us examine what this configuration of 4 triangles looks like: by symmetry, we may assume that $a_{0}$ is adjacent to $a_{1},a_{2}$ and $a_{3}$, and similarly for $b_{0}$ and $c_{0}$.
The configuration is then determined by the matchings between triangles 1, 2 and 3. Since triangles in $G$ are vertex disjoint,
we know that there are no other edges between vertices with the same letter. In other words, each of the three matchings that we will denote by
$1\to 2$, $1\to 3$ and $2\to 3$ can be described by a permutation of $\{a,b,c\}$ with no fixed points.
The only two such permutations are the two cyclic ones $(a,b,c)$ and $(a,c,b)$.
Again one can check that by symmetry, we only need to consider the case where all three permutations are described by $(a,b,c)$.
But then $a_{0},a_{2},b_{1},c_{3}$ induce a $2K_{2}$.
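Since the case analysis in Claim (3) is finite, it can be verified mechanically. The following sketch (our own code, not part of the paper; all names are ours) builds the candidate configuration of 4 triangles described above and confirms that $a_{0},a_{2},b_{1},c_{3}$ indeed induce a $2K_{2}$.

```python
from itertools import combinations

# Vertices (letter, i) with letter in "abc" and triangle index i in 0..3.
vertices = [(x, i) for i in range(4) for x in "abc"]
edges = set()

# The four triangles.
for i in range(4):
    for x, y in combinations("abc", 2):
        edges.add(frozenset({(x, i), (y, i)}))

# Matchings from triangle 0: same letter to same letter.
for i in (1, 2, 3):
    for x in "abc":
        edges.add(frozenset({(x, 0), (x, i)}))

# Matchings between triangles 1, 2, 3: the cyclic permutation (a, b, c),
# i.e. a_i -- b_j, b_i -- c_j, c_i -- a_j for i < j.
for i, j in [(1, 2), (1, 3), (2, 3)]:
    for x, y in [("a", "b"), ("b", "c"), ("c", "a")]:
        edges.add(frozenset({(x, i), (y, j)}))

def induced_edges(S):
    """Edges of the subgraph induced by the vertex set S."""
    return {e for e in edges if e <= set(S)}

S = [("a", 0), ("a", 2), ("b", 1), ("c", 3)]
# The induced subgraph on S consists of exactly the two disjoint edges
# a0a2 and b1c3, i.e. an induced 2K2, as claimed in the proof.
assert induced_edges(S) == {frozenset({("a", 0), ("a", 2)}),
                            frozenset({("b", 1), ("c", 3)})}
print("a0, a2, b1, c3 induce a 2K2")
```

The same script can be adapted to check the other choices of the three permutations, which the proof dismisses by symmetry.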
(4)
If $G$ has a triangle $T$, it does not contain an induced $C_{5}$ vertex disjoint from $T$.
To see this, assume that $G$ has a triangle $x,y,z$ and a $C_{5}$ induced by $v_{1},v_{2},v_{3},v_{4},v_{5}$.
By (2), each vertex in the $C_{5}$ has exactly one neighbour in the triangle, and no two consecutive $v_{i}$ (modulo 5) have the same neighbour in the triangle.
It follows that up to isomorphism, the edges between the triangle and the $C_{5}$ are $xv_{1}$, $yv_{2}$, $yv_{4}$, $zv_{3}$, $zv_{5}$. But then $x,v_{1},v_{3},v_{4}$ induce a $2K_{2}$.
(5)
If $G$ contains exactly 3 triangles $T_{i}$, each induced by $a_{i},b_{i},c_{i},1\leq i\leq 3$, then every other vertex in the graph is isolated.
Without loss of generality, the edges between the triangles are given by $a_{i}b_{j}$, $b_{i}c_{j}$, $c_{i}a_{j}$ with $i<j$.
Suppose for a contradiction that $x$ is a non-isolated vertex not in the $T_{i}$. Then $x$ has exactly one neighbour in each of the triangles.
Indeed, to avoid an induced diamond or $K_{4}$, it has at most one neighbour in each triangle, and if it has a neighbour anywhere in the graph, Claim (2) applies.
Without loss of generality, suppose the neighbour of $x$ in $T_{2}$ is $b_{2}$. Then $x$ must be adjacent to exactly one of $a_{3}$ and $c_{1}$: to at least one, since otherwise $x,b_{2},a_{3},c_{1}$ induce a $2K_{2}$, and to at most one, since otherwise $x,a_{3},c_{1}$ would form a fourth triangle.
If $x$ is adjacent to $a_{3}$, then $x,b_{2},a_{2},b_{3},a_{3}$ induce a $C_{5}$ vertex disjoint from $T_{1}$, contrary to the previous claim.
Similarly, if $x$ is adjacent to $c_{1}$, then $x,b_{2},c_{2},b_{1},c_{1}$ induce a $C_{5}$ vertex disjoint from $T_{3}$.
(6)
If $G$ contains exactly 2 triangles $T_{1}$ and $T_{2}$
and the graph $G^{\prime}=G-(T_{1}\cup T_{2})$
contains an edge, then $G^{\prime}$ admits a bipartition $X^{\prime}\cup Y^{\prime}$ such that there exist vertices $x_{1},y_{1}\in T_{1}$ and $x_{2},y_{2}\in T_{2}$ with
the property that $X^{\prime}\cup\{x_{1},x_{2}\}$ and $Y^{\prime}\cup\{y_{1},y_{2}\}$ are independent sets. Note that $G^{\prime}\in Z$, and by Claim (4), it is $C_{5}$-free.
It follows that $G^{\prime}$ is a $2K_{2}$-free bipartite graph. We further split $G^{\prime}$ into $G^{\prime}_{1}$ and $G^{\prime}_{0}$, where $G^{\prime}_{0}$ consists of the isolated vertices in $G^{\prime}$, while $G^{\prime}_{1}$ contains the rest of the vertices of $G^{\prime}$.
Note that, since $G$ is $2K_{2}$-free, $G^{\prime}_{1}$ is a connected graph. As this graph contains an edge, by Claim (2), every vertex of $G^{\prime}_{1}$ has exactly one neighbour in each of $T_{1}$ and $T_{2}$,
and each part of $G^{\prime}_{1}$ has a dominating vertex, i.e. a vertex adjacent to all the vertices in the opposite part.
Write $x$ and $y$ for those dominating vertices, and call their respective parts $X^{\prime\prime}$ and $Y^{\prime\prime}$.
Let $y_{1}$ and $y_{2}$ be the neighbours of $x$ in $T_{1}$ and $T_{2}$ respectively, and similarly let $x_{1}$ and $x_{2}$ be the neighbours of $y$ in those triangles.
By Claim (1), $x_{1}\neq y_{1}$, $x_{2}\neq y_{2}$, $x_{1}$ and $x_{2}$ are not adjacent, and $y_{1}$ and $y_{2}$ are also not adjacent. Finally, write $z_{1}$, $z_{2}$ for the remaining two vertices in $T_{1}$ and $T_{2}$, respectively.
Note that any vertex in $X^{\prime\prime}$ must be adjacent to $y_{1}$ and to $y_{2}$: indeed, if for instance $x^{\prime}\in X^{\prime\prime}$ is adjacent to $y^{\prime}_{1}\neq y_{1}$ in $T_{1}$,
then $y$ is adjacent to neither of $y_{1}$ and $y^{\prime}_{1}$ (by Claim (1)), and so $x,x^{\prime},y,y_{1},y^{\prime}_{1}$ induce a $C_{5}$ disjoint from $T_{2}$, contrary to Claim (4).
Similarly, every vertex in $Y^{\prime\prime}$ is adjacent to $x_{1}$ and to $x_{2}$.
It remains to deal with the vertices in $G^{\prime}_{0}$. Let $w$ be a vertex in $G^{\prime}_{0}$.
We look at two sub-cases:
–
$z_{1}$ and $z_{2}$ are adjacent. In this case, the other edges between the two triangles are $x_{1}y_{2}$ and $x_{2}y_{1}$. In particular, again by Claim (1),
$w$ cannot have neighbours both in $\{x_{1},x_{2}\}$ and in $\{y_{1},y_{2}\}$. Write $G^{\prime}_{0,x}$ for the vertices in $G^{\prime}_{0}$ non-adjacent to both $x_{1}$ and $x_{2}$,
and $G^{\prime}_{0,y}$ for the vertices in $G^{\prime}_{0}$ non-adjacent to both $y_{1}$ and $y_{2}$ (breaking ties arbitrarily).
Then $X^{\prime\prime}\cup G^{\prime}_{0,x}\cup\{x_{1},x_{2}\}$ and $Y^{\prime\prime}\cup G^{\prime}_{0,y}\cup\{y_{1},y_{2}\}$ are independent sets as required.
–
$z_{1}$ and $z_{2}$ are non-adjacent. Note that $w$ is non-adjacent to both $z_{1}$ and $z_{2}$, since any such edge together with $xy$ would induce a $2K_{2}$.
Then $X^{\prime\prime}\cup G^{\prime}_{0}\cup\{z_{1},z_{2}\}$ and $Y^{\prime\prime}\cup\{y_{1},y_{2}\}$ are again independent sets as required.
Theorem 6.
Let $A$ be the class of ($2K_{2},diamond$)-free graphs. Then
•
for $a=3$, we have $R_{A}(a,b)=\lfloor 2.5(b-1)\rfloor+1$;
•
for $a=4$, we have $R_{A}(a,3)=7$, $R_{A}(a,4)=10$ and $R_{A}(a,b)=\lfloor 2.5(b-1)\rfloor+1$ for $b\geq 5$;
•
for $a\geq 5$, we have $R_{A}(a,b)=\max\{\lfloor 2.5(b-1)\rfloor+1,a+b-1\}$, except for $R_{A}(5,4)=10$.
Proof.
As before, we split the analysis into several subclasses of $A$.
For the class $X$ of $(2K_{2}$,diamond)-free graphs containing a $K_{4}$ and $a\geq 5$, we have $R_{X}(a,b)=a+b-1$. Indeed,
every split graph with $a+b-1$ vertices contains either a clique of size $a$ or an independent set of size $b$ and hence $R_{X}(a,b)\leq a+b-1$.
On the other hand, the split graph with a clique $C$ of size $a-1$ and an independent set $I$ of size $b-1$ with a matching between $C$ and $I$ belongs to $X$ and hence
$R_{X}(a,b)\geq a+b-1$.
For the class $Z_{0}$ of bipartite $2K_{2}$-free graphs, we have $R_{Z_{0}}(a,b)=2b-1$, which is easy to see.
For the class $Z_{1}$ of graphs each of which is an extended $C_{5}+K_{1}$, we have $R_{Z_{1}}(a,b)=\lfloor 2.5(b-1)\rfloor+1$.
For an odd $b$,
an extremal graph
is constructed from a $C_{5}$ by replacing each vertex with an independent set of size $(b-1)/2$.
This graph has $\lfloor 2.5(b-1)\rfloor$ vertices, the independence number $b-1$ and the clique number $2<a$.
For an even $b$,
an extremal graph
is constructed from a $C_{5}$ by replacing two adjacent vertices of a $C_{5}$
with independent sets of size $b/2$ and the remaining vertices of the cycle with independent sets of size $b/2-1$.
This again gives in total $\lfloor 2.5(b-1)\rfloor$ vertices, and the independence number $b-1$.
Therefore, in the class $Z=Z_{0}\cup Z_{1}$, we have $R_{Z}(a,b)=\max\{R_{Z_{0}}(a,b),R_{Z_{1}}(a,b)\}=\lfloor 2.5(b-1)\rfloor+1$.
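For small values of $b$, the extremal graphs for $Z_{1}$ described above are small enough to check by brute force. The sketch below (illustration only; the helper names are ours) builds the blow-ups of $C_{5}$ for $b=5$ (odd) and $b=6$ (even) and verifies the vertex count, the clique number and the independence number.

```python
from itertools import combinations

def blowup_c5(sizes):
    """C5 whose i-th vertex is replaced by an independent set of size sizes[i]."""
    verts = [(i, k) for i in range(5) for k in range(sizes[i])]
    # Two vertices are adjacent iff their C5-positions are consecutive mod 5.
    adj = {(u, v) for u in verts for v in verts if (u[0] - v[0]) % 5 in (1, 4)}
    return verts, adj

def max_subset(verts, adj, want_edges):
    """Clique number (want_edges=True) or independence number (False)."""
    best = 0
    for r in range(1, len(verts) + 1):
        if not any(all(((u, v) in adj) == want_edges
                       for u, v in combinations(S, 2))
                   for S in combinations(verts, r)):
            break
        best = r
    return best

for b in (5, 6):
    if b % 2:  # odd b: all five independent sets of size (b-1)/2
        sizes = [(b - 1) // 2] * 5
    else:      # even b: two adjacent sets of size b/2, three of size b/2 - 1
        sizes = [b // 2, b // 2, b // 2 - 1, b // 2 - 1, b // 2 - 1]
    verts, adj = blowup_c5(sizes)
    assert len(verts) == int(2.5 * (b - 1))        # floor(2.5(b-1)) vertices
    assert max_subset(verts, adj, True) == 2       # clique number 2
    assert max_subset(verts, adj, False) == b - 1  # independence number b-1
print("Z1 extremal graphs verified for b = 5, 6")
```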
To compute $R_{Y}(a,b)$, we partition $Y$ into $Y_{1}$, $Y_{2}$ and $Y_{3}$, where $Y_{s}$ consists of the graphs in $Y$ with $s$ triangles. We then have:
•
$R_{Y_{3}}(4,b)=b+6$ for $b\geq 4$. Indeed, the three-triangle configuration (unique up to isomorphism) has 9 vertices and independence number 3, any additional vertices are isolated by Claim (5), and hence a graph in $Y_{3}$ with no independent set of size $b$ has at most $9+(b-4)=b+5$ vertices.
•
$R_{Y_{2}}(4,b)=2b+1$ for $b\geq 3$. To show this, let $G\in Y_{2}$ be a graph on $2b+1$ vertices, with triangles $T_{1}$ and $T_{2}$.
As in Claim (6), $G^{\prime}=G-(T_{1}\cup T_{2})$, $G^{\prime}_{0}$ consists of the isolated vertices in $G^{\prime}$, while $G^{\prime}_{1}$ contains the rest of $G^{\prime}$.
If $G^{\prime}_{1}$ is empty (or in other words, if $G^{\prime}$ has no edges), then $G^{\prime}=G^{\prime}_{0}$ is an independent set with $2b+1-6=2b-5$ vertices. Provided $b\geq 5$, this number is at least $b$.
For $b=3$, the unique vertex in $G^{\prime}$ has at most one neighbour in each of $T_{1}$ and $T_{2}$, so in particular, it has two non-adjacent non-neighbours in the triangles, hence $G$ has an independent set of size 3.
For $b=4$, there are 3 vertices in $G^{\prime}$. Like before, each of them has at most one neighbour in each triangle; if their neighbourhoods do not cover the triangles,
then those three vertices together with a common non-neighbour give an independent set of size 4. If their neighbourhoods do cover the triangles,
then by size constraints the neighbourhoods are disjoint, and each of them is an independent set by Claim (1).
In this case, any two of the vertices together with the neighbourhood of the third form an independent set of size 4.
Now assume that $G^{\prime}$ has an edge. Then by Claim (6) $G^{\prime}$ admits a bipartition $X^{\prime}\cup Y^{\prime}$ such that there exist vertices $x_{1},y_{1}\in T_{1}$ and $x_{2},y_{2}\in T_{2}$
with the property that $X^{\prime}\cup\{x_{1},x_{2}\}$ and $Y^{\prime}\cup\{y_{1},y_{2}\}$ are independent sets. Given such a bipartition, it immediately follows
that $G$ has an independent set of size at least $\left\lceil\frac{2b-5}{2}\right\rceil+2=b$.
Extremal counterexamples, i.e. graphs without a clique of size 4 and without an independent set of size $b$, can be easily constructed,
by making for instance $G^{\prime}$ complete bipartite with $b-3$ vertices in each part and connecting each part to the triangles appropriately.
•
$R_{Y_{1}}(4,b)\leq 2b+1$ for $b\geq 3$. To see why, let $G\in Y_{1}$ be a graph on $2b+1$ vertices, write $T$ for the triangle, and put $G^{\prime}=G-T$.
Like before, $G^{\prime}$ is a $2K_{2}$-free bipartite graph; if it has isolated vertices, one can find a bipartition of $G^{\prime}$ where one of the parts has size at least $b$.
Otherwise, there are vertices $x$ and $y$ dominating each part. Those have neighbours $y^{\prime}$ and $x^{\prime}$ in $T$ respectively; but then by Claim (1),
$y^{\prime}$ is a common non-neighbour of the part containing $y$, and $x^{\prime}$ is a common non-neighbour of the part containing $x$.
Since $G^{\prime}$ has one part with size at least $b-1$, this means $G$ contains an independent set of size $b$.
Putting these together, we have $R_{Y}(4,b)=2b+1$ for $b\geq 3$, except $R_{Y}(4,4)=10$.
Combining the results for the three classes $X$, $Y$ and $Z$, we obtain the desired conclusion of the theorem.
∎
4.5 The class of ($P_{4},C_{4}$,co-claw)-free graphs
We start with a lemma characterizing the structure of graphs in this class, where we use the following well-known fact (see e.g. [9]):
every $P_{4}$-free graph with at least two vertices is either disconnected or the complement of a disconnected graph.
Lemma 9.
Every disconnected ($P_{4},C_{4}$,co-claw)-free graph is a bipartite graph and every
connected ($P_{4},C_{4}$,co-claw)-free graph consists of a bipartite graph plus a number of dominating
vertices, i.e. vertices adjacent to all other vertices of the graph.
Proof.
Let $G$ be a disconnected ($P_{4},C_{4}$,co-claw)-free graph. Then every connected component of $G$ is $K_{3}$-free,
since a triangle in one of them together with a vertex from any other component creates an induced co-claw.
Therefore, every connected component of $G$, and hence $G$ itself, is a bipartite graph.
Now let $G$ be a connected graph. Since $G$ is $P_{4}$-free, $\overline{G}$ is disconnected.
Let $C^{1},\ldots,C^{k}$ $(k\geq 2)$ be co-components of $G$, i.e. components in the complement of $G$.
If at least two of them have more than 1 vertex, then an induced $C_{4}$ arises.
Therefore, all co-components, except possibly one, have size 1, i.e. they are dominating vertices in $G$.
If, say, $C^{1}$ is a co-component of size more than 1, then the subgraph of $G$ induced by $C^{1}$ must be disconnected
and hence it is a bipartite graph.
∎
Theorem 7.
For the class $A$ of ($P_{4},C_{4}$,co-claw)-free graphs and all $a,b\geq 3$,
$$R_{A}(a,b)=a+2b-4.$$
Proof.
Let $G$ be a graph in $A$ with $a+2b-5$ vertices, $2b-2$ of which induce a matching (a 1-regular graph with $b-1$ edges), while the remaining $a-3$ vertices are dominating in $G$.
Then $G$ has neither a clique of size $a$ nor an independent set of size $b$. Therefore, $R_{A}(a,b)\geq a+2b-4.$
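For small $a$ and $b$ the lower-bound construction can be verified exhaustively. The following sketch (our own code and names, not from the paper) builds the matching-plus-dominating-vertices graph and checks its clique and independence numbers, as well as the absence of induced $P_{4}$, $C_{4}$ and co-claws, using degree-sequence fingerprints of 4-vertex graphs.

```python
from itertools import combinations

def build(a, b):
    """a+2b-5 vertices: an induced matching on 2(b-1) of them, a-3 dominating."""
    verts = ([("m", i) for i in range(2 * (b - 1))]
             + [("d", j) for j in range(a - 3)])
    adj = {frozenset({("m", 2 * i), ("m", 2 * i + 1)}) for i in range(b - 1)}
    adj |= {frozenset({d, v}) for d in verts if d[0] == "d"
            for v in verts if v != d}
    return verts, adj

def extremum(verts, adj, want_edges):
    """Clique number (want_edges=True) or independence number (False)."""
    best = 0
    for r in range(1, len(verts) + 1):
        if not any(all((frozenset(p) in adj) == want_edges
                       for p in combinations(S, 2))
                   for S in combinations(verts, r)):
            break
        best = r
    return best

# (edge count, sorted degree sequence) uniquely identifies each forbidden
# 4-vertex graph among all graphs on 4 vertices.
FORBIDDEN = {(3, (1, 1, 2, 2)),   # P4
             (4, (2, 2, 2, 2)),   # C4
             (3, (0, 2, 2, 2))}   # co-claw = K3 + K1

for a, b in [(4, 3), (5, 4)]:
    verts, adj = build(a, b)
    assert len(verts) == a + 2 * b - 5
    assert extremum(verts, adj, True) == a - 1   # no clique of size a
    assert extremum(verts, adj, False) == b - 1  # no independent set of size b
    for S in combinations(verts, 4):             # (P4, C4, co-claw)-freeness
        E = [e for e in map(frozenset, combinations(S, 2)) if e in adj]
        deg = tuple(sorted(sum(v in e for e in E) for v in S))
        assert (len(E), deg) not in FORBIDDEN
print("construction verified for (a, b) in [(4, 3), (5, 4)]")
```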
Conversely, let $G$ be a graph in $A$ with $a+2b-4$ vertices. If $G$ is disconnected, then, by Lemma 9,
it is bipartite and hence at least one part in a bipartition of $G$ has size at least $b$,
i.e. $G$ contains an independent set of size $b$. If $G$ is connected, denote by $C$ the set of dominating vertices in $G$. If $|C|\geq a-1$, then
either $C$ itself (if $|C|\geq a$) or $C$ together with a vertex not in $C$ (if $|C|=a-1$) create a clique of size $a$.
So, assume $|C|\leq a-2$.
Then the graph $G-C$ has at least $2b-2$ vertices and, by Lemma 9, it is bipartite.
If this graph has no independent set of size $b$,
then in any bipartition of this graph each part contains exactly $b-1$ vertices, and each vertex has a neighbour in the opposite part. But then $|C|=a-2$ and therefore $C$ together with
any two adjacent vertices in $G-C$ create a clique of size $a$.
∎
4.6 The class of (co-diamond,paw,claw)-free graphs
Lemma 10.
Let $G$ be a (co-diamond,paw,claw)-free graph.
•
If $G$ is connected, then it is either a path with at most 5 vertices or a cycle with at most 6 vertices or the complement of a graph of vertex degree at most 1.
•
If $G$ has two connected components, then either both components are complete graphs or one of the components is a single vertex and the other is the complement of a graph of vertex degree at most 1.
•
If $G$ has at least 3 connected components, then $G$ is edgeless.
Proof.
Assume first that $G$ is connected. It is known (see e.g. [17]) that every connected paw-free graph is either $K_{3}$-free or complete multipartite, i.e. $\overline{P}_{3}$-free.
If $G$ is $K_{3}$-free, then together with the claw-freeness of $G$ this implies that $G$ has no vertices of degree more than 2, i.e. $G$ is either a path or a cycle. To avoid an induced co-diamond,
a path cannot have more than 5 vertices and a cycle cannot have more than 6 vertices. If $G$ is complete multipartite, then each part has size at most 2, since otherwise an induced claw arises.
In other words, the complement of $G$ is a graph of vertex degree at most 1.
Assume now that $G$ has two connected components. If each of them contains an edge, then both components are cliques, since otherwise two non-adjacent vertices in one of the components with
two adjacent vertices in the other component create an induced co-diamond. If one of the components is a single vertex, then the other is $\overline{P}_{3}$-free (to avoid an induced co-diamond) and
hence is the complement of a graph of vertex degree at most 1 (according to the previous paragraph).
Finally, let $G$ have at least 3 connected components. If one of them contains an edge, then this edge together with two vertices from two other components form an induced co-diamond.
Therefore, every component of $G$ consists of a single vertex, i.e. $G$ is edgeless.
∎
Theorem 8.
For the class $A$ of (co-diamond,paw,claw)-free graphs and for all $a,b\geq 3$,
$R_{A}(a,3)=2a-1,$
$R_{A}(a,b)=\max\{2a,b\}$ for $b\geq 4$,
except for the following four numbers $R_{A}(3,3)=6$, $R_{A}(3,4)=R_{A}(3,5)=R_{A}(3,6)=7$.
Proof.
We start with the case $b=3$. Since $C_{5}$ belongs to $A$, $R_{A}(3,3)=6$, which covers the first of the four exceptional cases.
Let $a\geq 4$. The graph $2K_{a-1}$ with $2a-2$ vertices has neither cliques of size $a$ nor independent sets of size $3$, and hence $R_{A}(a,3)\geq 2a-1$.
Conversely, let $G\in A$ be a graph with $2a-1\geq 7$ vertices. If $G$ is connected, then according to Lemma 10 $G$ is the complement of a graph of vertex degree at most 1 (a path or a cycle is excluded, since $G$ has more than 6 vertices),
and hence $G$ has a clique of size $a$. If $G$ has two connected components both of which are cliques, then one of them has size at least $a$.
If $G$ has two connected components one of which is a single vertex, then either the second component has a couple of non-adjacent vertices, in which case an independent set of size 3 arises,
or the second component is a clique of size more than $a$. If $G$ has at least 3 connected components, then it contains an independent set of size more than $3$.
Therefore, $R_{A}(a,3)\leq 2a-1$ and hence $R_{A}(a,3)=2a-1$ for $a\geq 4$.
From now on, $b\geq 4$. Consider the last three exceptional cases, i.e. let $a=3$ and $4\leq b\leq 6$.
The graph $C_{6}$ that belongs to our class has neither a clique of size 3 nor an independent set of size $b\geq 4$ and hence $R_{A}(a,b)\geq 7$ in these cases.
Conversely, let $G\in A$ be a graph with at least 7 vertices. If $G$ is connected, then it is the complement of a graph of vertex degree at most 1 and hence contains a clique of size 3. If $G$ has two
connected components each of which is a clique, then one of them has size at least 3. If $G$ has two components one of which is a single vertex, then the other component has at least 6 vertices and
also contains a clique of size 3. If $G$ has at least 3 connected components, then $G$ has an independent set of size $4\leq b\leq 6$. Therefore, $R_{A}(a,b)=7$ for $a=3$ and $4\leq b\leq 6$.
In the rest of the proof we assume that either $a\geq 4$ or $b\geq 7$. Denote $m=\max\{2a,b\}$. If $m=2a$, then
the graph $\overline{(a-1)K_{2}}+K_{1}$ with $2a-1$ vertices has neither a clique of size $a$ nor an independent set of size $b$ (its independence number is 3). If
$m=b$, then the edgeless graph with $b-1$ vertices has neither cliques of size $a$ nor independent sets of size $b$. Therefore, $R_{A}(a,b)\geq m$.
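The graph $\overline{(a-1)K_{2}}+K_{1}$ from the lower bound above is small enough to check directly for small $a$. Below is a brute-force sanity check (illustration only; the names are ours): the complement of a perfect matching on $2(a-1)$ vertices plus one isolated vertex has $2a-1$ vertices, clique number $a-1$ and independence number $3$.

```python
from itertools import combinations

def co_matching_plus_k1(a):
    """Complement of (a-1)K2, together with one isolated vertex."""
    verts = list(range(2 * (a - 1))) + ["iso"]
    # Vertices 2i and 2i+1 are the matched (hence non-adjacent) partners.
    adj = {frozenset({u, v})
           for u, v in combinations(range(2 * (a - 1)), 2)
           if u // 2 != v // 2}
    return verts, adj

def extremum(verts, adj, want_edges):
    """Clique number (want_edges=True) or independence number (False)."""
    best = 0
    for r in range(1, len(verts) + 1):
        if not any(all((frozenset(p) in adj) == want_edges
                       for p in combinations(S, 2))
                   for S in combinations(verts, r)):
            break
        best = r
    return best

for a in (4, 5):
    verts, adj = co_matching_plus_k1(a)
    assert len(verts) == 2 * a - 1
    assert extremum(verts, adj, True) == a - 1  # no clique of size a
    assert extremum(verts, adj, False) == 3     # no independent set of size 4
print("lower-bound graph verified for a in (4, 5)")
```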
Conversely, let $G$ be a graph with at least $m\geq 7$ vertices. If $G$ is connected, then it is the complement of a graph of vertex degree at most 1 and hence contains a clique of size $a$.
If $G$ has two connected components each of which is a clique, then one of them has size at least $a$. If $G$ has two components one of which is a single vertex, then the other component has at least $2a-1$ vertices and
also contains a clique of size $a$. If $G$ has at least 3 connected components, then $G$ has an independent set of size $b$. Therefore, $R_{A}(a,b)=m$.
∎
5 Bipartite Ramsey numbers
Let $G=(A,B,E)$ be a bipartite graph given together with a bipartition $A\cup B$ of its vertex set into two independent
sets. We call $A$ and $B$ the parts of $G$. The graph $G$ is complete bipartite,
also known as a biclique, if every vertex of $A$ is adjacent to every vertex of $B$. A biclique with parts of size $n$ and $m$
is denoted by $K_{n,m}$. By $b(G)$ we denote the biclique number of $G$, i.e. the maximum $p$ such that $G$ contains $K_{p,p}$
as an induced subgraph.
Given a bipartite graph $G=(A,B,E)$, we denote by $\widetilde{G}$ the bipartite complement of $G$, i.e. the bipartite graph on the same vertex set in which two vertices $a\in A$ and $b\in B$
are adjacent if and only if they are not adjacent in $G$. We refer to the bipartite complement of a biclique as co-biclique and denote by
$a(G)$ the maximum $q$ such that $\widetilde{G}$ contains $K_{q,q}$ as an induced subgraph.
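On small instances, the parameters $b(G)$ and $a(G)$ can be computed by exhaustive search directly from the definitions. A minimal sketch (the function names are ours, not the paper's); note that since the two parts are independent sets, a complete bipartite subgraph respecting the parts is automatically induced.

```python
from itertools import combinations

def biclique_number(A, B, edges):
    """Largest p such that K_{p,p} is an induced subgraph of (A, B, edges)."""
    best = 0
    for p in range(1, min(len(A), len(B)) + 1):
        if any(all((x, y) in edges for x in SA for y in SB)
               for SA in combinations(A, p) for SB in combinations(B, p)):
            best = p
        else:
            break  # no K_{p,p} implies no K_{p+1,p+1}
    return best

def bipartite_complement(A, B, edges):
    return {(x, y) for x in A for y in B} - set(edges)

# Example: P4 = a1 - b1 - a2 - b2 viewed as a bipartite graph.
A, B = ["a1", "a2"], ["b1", "b2"]
E = {("a1", "b1"), ("a2", "b1"), ("a2", "b2")}
assert biclique_number(A, B, E) == 1                              # b(G) = 1
assert biclique_number(A, B, bipartite_complement(A, B, E)) == 1  # a(G) = 1
```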
The notion of bipartite Ramsey numbers is an adaptation of the notion of Ramsey numbers to bipartite graphs and
it can be defined as follows.
Definition 1.
The bipartite Ramsey number $R^{b}(p,q)$ is the minimum number $n$ such that
for every bipartite graph $G$ with at least $n$ vertices in each of the parts, either $G$ contains $K_{p,p}$ or $\widetilde{G}$ contains $K_{q,q}$ (or both).
It is known (see e.g. [8]) that $R^{b}(p,p)\geq 2^{p/2}$, and hence bipartite Ramsey numbers are generally non-linear.
However, similarly to the non-bipartite case, they may become linear when restricted to a specific class $X$ of bipartite graphs.
We denote bipartite Ramsey numbers restricted to a class $X$ by $R^{b}_{X}(p,q)$ and say that
$R^{b}_{X}(p,q)$ are linear in $X$ if there is a constant $k$ such that $R^{b}_{X}(p,q)\leq k(p+q)$ for all $p,q$.
Similarly to the non-bipartite case, we will say that graphs in a class $X$ of bipartite graphs have
linear bipartite homogeneous subgraphs if there exists a constant $c=c(X)$ such that
$\max\{a(G),b(G)\}\geq c\cdot|V(G)|$ for every $G\in X$.
The following proposition can be proved by analogy with Proposition 1.
Proposition 2.
Let $X$ be a class of bipartite graphs. Then graphs in $X$ have linear bipartite homogeneous subgraphs if and only if bipartite Ramsey numbers are linear in $X$.
Some classes of bipartite graphs with linear bipartite homogeneous subgraphs have been revealed recently in [4],
where the authors consider bipartite graphs that do not contain a fixed bipartite graph as
an induced subgraph respecting the parts. The subgraph containment relation respecting the parts can be thought of as
the containment of colored bipartite graphs, where a colored bipartite graph is a bipartite graph given with a fixed
bipartition of its vertices into two independent sets of black and white vertices. A colored bipartite graph $H$ is said to be
an induced subgraph of a colored bipartite graph $G$ if there exists an isomorphism between $H$ and an induced subgraph of $G$
that preserves colors.
A number of related results appeared also in [13], where the authors study zero-one matrices that do not contain a fixed matrix as a submatrix.
Primarily, they are interested in forbidden submatrices $P$ that guarantee the existence of a square
homogeneous submatrix of linear size in matrices avoiding $P$, where homogeneous means a submatrix with all its entries equal.
The problems studied in [13] can be interpreted as questions about homogeneous bipartite subgraphs in
colored and (vertex-)ordered bipartite graphs
which do not contain a fixed forbidden colored and ordered bipartite subgraph.
In this case, the notion of graph containment must preserve not only colors but also vertex order.
In the next sections, we extend some of the results obtained in [4] using the language of Ramsey numbers.
5.1 Classes with non-linear bipartite Ramsey numbers
According to Lemma 1, classes of graphs without short cycles have non-linear Ramsey numbers.
A similar result holds for bipartite graphs, which can be shown via standard probabilistic arguments. For the sake of completeness,
we provide formal proofs below.
We start with a result, which is an adaptation to the bipartite setting of the classical proof by Erdős of the existence of high chromatic number graphs without short cycles.
Lemma 11.
Let $k\geq 4$ be a natural number.
Then for any sufficiently large $n$, there exists a bipartite graph $G=(A,B,E)$
with $n$ vertices in each of the parts such that neither $G$ contains cycles of length at most $k$, nor
$\widetilde{G}$ contains $K_{s,s}$ with $s=o(n)$.
Proof.
Let $n$ be a natural number and let $N=2n$.
We set $\varepsilon=\frac{1}{2k}$, $p=(2N)^{\varepsilon-1}$, and consider the random bipartite graph $G(2N,p)$
(i.e. the probability space of bipartite graphs with two parts $A$ and $B$ each of size $N$ such that every
pair of vertices $a\in A,b\in B$ is connected by an edge independently with probability $p$).
Let $Y$ be a random variable equal to the number of cycles of length at most $k$ in $G(2N,p)$.
The number of potential cycles of length $i$ is at most $\frac{1}{2}(i-1)!{2N\choose i}\leq(2N)^{i}$, and each
of them is present with probability $p^{i}$. Hence
$$\mathbb{E}[Y]\leq\sum_{i=4}^{k}(2N)^{i}p^{i}=\sum_{i=4}^{k}(2N)^{\varepsilon i}.$$
Since $(2N)^{\varepsilon i}=o(N)$ for all $i\leq k$, we conclude $\mathbb{E}[Y]=o(N)$.
Hence, for every sufficiently large $N$, we have $\mathbb{E}[Y]<\frac{N}{2}$, and therefore, by Markov’s inequality,
$$P\left[Y\geq N\right]<\frac{1}{2}.$$
Now we estimate the maximum size of a co-biclique in $G(2N,p)$, i.e. $a(G(2N,p))$.
Let us set $s=\lceil\frac{3}{p}\ln N\rceil$. Then again from Markov’s inequality, we have
$$P\left[a(G(2N,p))\geq s\right]\leq{N\choose s}{N\choose s}(1-p)^{s^{2}}\leq N^{2s}e^{-ps^{2}}=e^{s(2\ln N-ps)},$$
which tends to zero as $N$ goes to infinity. Thus again, for $N$ sufficiently large, we have
$$P\left[a(G(2N,p))\geq s\right]<\frac{1}{2}.$$
The above conclusions imply that there exists a graph $G=(A,B,E)$ with $Y<N$ and $a(G)<s$.
Now we want to destroy all of the $Y$ short cycles by removing one vertex from each of them.
In order to guarantee that the resulting bipartite graph has many vertices in each of the parts we
destroy half of the cycles by removing vertices from $A$, and the other half by removing vertices from $B$.
In this way we remove at most $\frac{N}{2}$ vertices from each of $A$ and $B$,
and hence we obtain a graph $G^{\prime}=(A^{\prime},B^{\prime},E^{\prime})$ with at least $\frac{N}{2}=n$ vertices in each of the parts
such that $G^{\prime}$ contains neither cycles of length at most $k$, nor the bipartite complement of $K_{s,s}$
with $s=\lceil\frac{3}{p}\ln N\rceil=\lceil 3(2N)^{1-\varepsilon}\ln N\rceil=o(N)$.
By removing some of the vertices from $G^{\prime}$ we can obtain a bipartite graph with the same properties, but with exactly
$n$ vertices in each of the parts.
∎
From this lemma and Proposition 2 we derive the following conclusion.
Corollary 1.
For every $k\geq 4$, bipartite Ramsey numbers are not linear in the class of bipartite graphs without cycles of length at most $k$.
Theorem 9.
Let $X$ be a class of bipartite graphs defined by a finite set $M$ of bipartite forbidden induced subgraphs.
If $M$ does not contain a forest or the bipartite complement of a forest, then bipartite Ramsey numbers are not linear in $X$.
Proof.
If $M$ does not contain a forest, then every graph in $M$ contains a cycle. Let $k$ be the size of a largest induced cycle in graphs in $M$,
which is a finite number, since $M$ is finite. Then $X$ contains all bipartite graphs without cycles of length at most $k$, and hence
bipartite Ramsey numbers are not linear in $X$ by Corollary 1.
If $M$ does not contain the bipartite complement of a forest, then bipartite Ramsey numbers are not linear in $X$,
since they are linear in $X$ if and only if they are linear in the class of bipartite complements of graphs in $X$.
∎
This result is analogous to one half of Theorem 2. Unfortunately, there is no obvious analogue for the second half.
In the non-bipartite case, the second half deals with $P_{3}$-free graphs and their complements.
Every $P_{3}$-free graph is a disjoint union of cliques, and the most natural analogue of this class in the bipartite case
is the class of $P_{4}$-free bipartite graphs, which are disjoint unions of bicliques. However, bipartite Ramsey numbers
are linear in this class, which is not difficult to see. In the absence of any other natural obstacles for linearity
in the bipartite case, we propose the following conjecture.
Conjecture 4.
Let $X$ be a class of bipartite graphs defined by a finite set $M$ of bipartite forbidden induced subgraphs.
Then bipartite Ramsey numbers in $X$ are linear if and only if $M$ contains a forest and the bipartite complement of a forest.
We note that an analogous conjecture, in the context of homogeneous submatrices, was proposed in [13]. In the next section, we consider some classes of bipartite graphs excluding a forest and the bipartite complement of a forest,
and show that bipartite Ramsey numbers are linear for them.
5.2 Some classes with linear bipartite Ramsey numbers
First, we look at some classes defined by a single bipartite forbidden induced subgraph $H$, which is simultaneously a forest and the bipartite complement of a forest.
The following theorem characterizes all graphs $H$ of this form, where $F_{p,q}$ denotes the graph
represented in Figure 2 and $S_{1,2,3}$ is a tree obtained from the claw by subdividing one of its edges once and another edge twice.
Implicitly, without a proof, this characterization was given in [1]. It also appeared recently in [4].
Theorem 10.
A bipartite graph $H$ is a forest and the bipartite complement of a forest if and only if $H$ is an induced subgraph of a $P_{7}$ or of an $S_{1,2,3}$ or of a graph $F_{p,q}$.
The results in [4] and [13] imply that for any natural numbers $p$ and $q$, $F_{p,q}$-free bipartite graphs have linear bipartite homogeneous subgraphs. Hence, by Proposition 2, bipartite Ramsey numbers are linear in the class of $F_{p,q}$-free bipartite graphs. In the next section, we prove that bipartite Ramsey numbers are linear in the class of $S_{1,2,3}$-free bipartite graphs. This leaves an intriguing open question of whether $P_{7}$-free bipartite graphs have linear Ramsey numbers or not. The structural characterization of the latter graph class from [16] may be helpful in answering this question.
We note that $P_{6}$ is symmetric with respect to bipartition and it is an induced subgraph of $S_{1,2,3}$. Therefore our result from the next section implies that colored $P_{6}$-free
bipartite graphs have linear bipartite homogeneous subgraphs, which resolves one of the four open cases from [4].
5.2.1 $S_{1,2,3}$-free bipartite graphs
We start with some definitions. In what follows, $G_{1}=(A_{1},B_{1},E_{1})$ and $G_{2}=(A_{2},B_{2},E_{2})$ are two bipartite graphs with disjoint vertex sets.
•
The disjoint union is the operation that creates out of $G_{1}$ and $G_{2}$ the bipartite graph $G=(A_{1}\cup A_{2},B_{1}\cup B_{2},E_{1}\cup E_{2})$.
•
The join is the operation that creates out of $G_{1}$ and $G_{2}$ the bipartite graph which is the bipartite complement of the disjoint union
of $\widetilde{G}_{1}$ and $\widetilde{G}_{2}$.
•
The skew join is the operation that creates out of $G_{1}$ and $G_{2}$ the bipartite graph $G=G_{1}\oslash G_{2}=(A_{1}\cup A_{2},B_{1}\cup B_{2},E_{1}\cup E_{2}\cup\{ab:a\in A_{1},b\in B_{2}\})$. We say that $G$ is a skew join of $G_{1}$ and $G_{2}$ if either $G=G_{1}\oslash G_{2}$ or $G=G_{2}\oslash G_{1}$.
The three operations define a decomposition scheme, known as the canonical decomposition, which takes a bipartite graph $G$ and partitions it into
graphs $G_{1}$ and $G_{2}$ whenever $G$ is a disjoint union, join, or skew join of $G_{1}$ and $G_{2}$; the scheme is then applied to $G_{1}$ and $G_{2}$ recursively.
Graphs that cannot be decomposed into smaller graphs under this scheme will be called canonically indecomposable.
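The three operations can also be written out directly; the following sketch (our own representation, with bipartite graphs as triples of sets) makes the definitions concrete and checks on a tiny example that the join adds all edges between $A_{1},B_{2}$ and between $A_{2},B_{1}$, while the skew join adds only those between $A_{1}$ and $B_{2}$.

```python
def bipartite_complement(G):
    A, B, E = G
    return (A, B, {(x, y) for x in A for y in B} - set(E))

def disjoint_union(G1, G2):
    (A1, B1, E1), (A2, B2, E2) = G1, G2
    return (A1 | A2, B1 | B2, E1 | E2)

def join(G1, G2):
    # By definition: the bipartite complement of the disjoint union
    # of the bipartite complements.
    return bipartite_complement(
        disjoint_union(bipartite_complement(G1), bipartite_complement(G2)))

def skew_join(G1, G2):
    # G1 (/) G2: all edges of G1 and G2, plus everything between A1 and B2.
    (A1, B1, E1), (A2, B2, E2) = G1, G2
    return (A1 | A2, B1 | B2, E1 | E2 | {(x, y) for x in A1 for y in B2})

# Tiny example on disjoint vertex sets.
G1 = ({"a1"}, {"b1"}, set())
G2 = ({"a2"}, {"b2"}, {("a2", "b2")})
assert join(G1, G2)[2] == {("a1", "b2"), ("a2", "b1"), ("a2", "b2")}
assert skew_join(G1, G2)[2] == {("a2", "b2"), ("a1", "b2")}
```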
The following lemma from [15] characterizes $S_{1,2,3}$-free bipartite graphs containing a $P_{7}$.
There, the author calls a graph prime if any two distinct vertices have distinct neighbourhoods.
Lemma 12.
A prime canonically indecomposable bipartite $S_{1,2,3}$-free graph $G$ that contains a $P_{7}$ must be either a path or a cycle or the bipartite complement of either a path or a cycle.
Theorem 11.
Let $G$ be a canonically indecomposable $S_{1,2,3}$-free bipartite graph that contains a $P_{7}$. If $G$ has at least $4n$
vertices in each part of the bipartition, then $G$ contains a $K_{n,n}$ or a $\widetilde{K}_{n,n}$.
Proof.
From Lemma 12 it follows that either $G$ or its bipartite complement
must be either a path or a cycle with some vertices duplicated (since we no longer assume that $G$ is prime).
Hence, $G=(A,B,E)$ or its bipartite complement must admit a partition $A=A_{1}\cup A_{2}\cup\ldots\cup A_{s}$, $B=B_{1}\cup B_{2}\cup\ldots\cup B_{s}$
such that:
•
$A_{i},B_{i}$ are non-empty for all $i\leq s-1$, and at most one of $A_{s}$ and $B_{s}$ is empty
•
For any $i\leq s-1$, $A_{i}$ is joined to $B_{j}$ if $j\in\{i,i+1\}$ and co-joined to $B_{j}$ otherwise
•
$A_{s}$ is joined to $B_{s}$ and $B_{1}$, and co-joined to $B_{j}$ for $j\notin\{1,s\}$
Consider first the case when there exists some $i$ such that $|A_{i}|\geq n$. In this case, if $|B_{i}\cup B_{i+1}|\geq n$,
we obtain a biclique $K_{n,n}$ induced by subsets of $A_{i}$ and $B_{i}\cup B_{i+1}$.
On the other hand, if $|B_{i}\cup B_{i+1}|<n$, then we obtain a $\widetilde{K}_{n,n}$
induced by subsets of $A_{i}$ and $B\backslash(B_{i}\cup B_{i+1})$.
Hence, if there exists some $i$ such that $|A_{i}|\geq n$, then $G$ contains either a $K_{n,n}$ or a $\widetilde{K}_{n,n}$.
The argument when there exists some $i$ such that $|B_{i}|\geq n$ is analogous.
So assume now that $|A_{i}|<n$ and $|B_{i}|<n$ for all $i$. Consider the smallest $k$ such that $|A_{1}\cup A_{2}\cup\ldots\cup A_{k}|\geq n$.
If $|B_{k+2}\cup B_{k+3}\cup\ldots\cup B_{s}|\geq n$, then we have a $\widetilde{K}_{n,n}$ induced by subsets of $A_{1}\cup A_{2}\cup\ldots\cup A_{k}$
and $B_{k+2}\cup B_{k+3}\cup\ldots\cup B_{s}$. Otherwise, $|B_{2}\cup B_{3}\cup\ldots\cup B_{k}|=|B|-|B_{1}|-|B_{k+1}|-|B_{k+2}\cup\ldots\cup B_{s}|>4n-n-n-n=n$, and also, by the minimality of $k$, $|A_{k+1}\cup A_{k+2}\cup\ldots\cup A_{s}|=|A|-|A_{k}|-|A_{1}\cup A_{2}\cup\ldots\cup A_{k-1}|>4n-n-n=2n$. Hence,
we obtain a $\widetilde{K}_{n,n}$ between subsets of $A_{k+1}\cup A_{k+2}\cup\ldots\cup A_{s}$ and $B_{2}\cup B_{3}\cup\ldots\cup B_{k}$.
∎
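The counting in the proof above can be sanity-checked by brute force on a small blow-up of a path. The following sketch (the function name and blow-up parameters are ours, not from [15]) searches exhaustively for a $K_{n,n}$ or a $\widetilde{K}_{n,n}$:

```python
from itertools import combinations

def has_homogeneous(adj, A, B, n, complement=False):
    """Brute force: does the bipartite graph (A, B, adj) contain a biclique
    K_{n,n} (complement=False) or a co-biclique ~K_{n,n} (complement=True)?"""
    want = not complement  # True: all pairs adjacent; False: none adjacent
    for S in combinations(A, n):
        for T in combinations(B, n):
            if all((b in adj[a]) == want for a in S for b in T):
                return True
    return False

# A path-like structure on s = 4 class pairs, every vertex duplicated t = 2
# times, so each part has 8 = 4n vertices for n = 2; A_i is joined to B_i
# and B_{i+1}, mimicking the partition in the proof.
n, s, t = 2, 4, 2
A = [('a', i, d) for i in range(s) for d in range(t)]
B = [('b', i, d) for i in range(s) for d in range(t)]
adj = {a: {b for b in B if b[1] in (a[1], a[1] + 1)} for a in A}
found = (has_homogeneous(adj, A, B, n) or
         has_homogeneous(adj, A, B, n, complement=True))
```

With these parameters the search succeeds, as the theorem predicts: duplicating any $A_{i}$ already yields a $K_{2,2}$ with $B_{i}$.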
Theorem 12.
Let $X$ be the class of $S_{1,2,3}$-free bipartite graphs. Then $R^{b}_{X}(p,q)\leq 6(p+q)$.
Proof.
Let $G=(A,B,E)$ be a bipartite graph in $X$ that has $6n$ vertices in each part. If $G_{0}^{\prime}=G$ is canonically indecomposable,
then by Theorem 11 we can find a $K_{n,n}$ or a $\widetilde{K}_{n,n}$ in $G$. So assume $G_{0}^{\prime}$ is a disjoint union, a join, or a skew-join of two non-empty graphs $G_{1}=(A_{1},B_{1},E_{1})$ and $G_{1}^{\prime}=(A_{1}^{\prime},B_{1}^{\prime},E_{1}^{\prime})$. Without loss of generality
we may assume that $|A_{1}|\leq|A_{1}^{\prime}|$.
Inductively, if $G_{k}^{\prime}$ for some $k\in\mathbb{N}$ is not canonically indecomposable, then $G_{k}^{\prime}$ is a disjoint union, a join, or a skew-join of two non-empty
graphs $G_{k+1}=(A_{k+1},B_{k+1},E_{k+1})$ and $G_{k+1}^{\prime}=(A^{\prime}_{k+1},B^{\prime}_{k+1},E^{\prime}_{k+1})$.
Again, without loss of generality we may assume that $|A_{k+1}|\leq|A_{k+1}^{\prime}|$.
Consider first the case when the procedure stops with a canonically indecomposable graph $G_{k}^{\prime}$ such that $|A_{k}^{\prime}|\geq 4n$.
If $|B_{k}^{\prime}|\geq 4n$, then by Theorem 11,
$G$ contains a $K_{n,n}$ or a $\widetilde{K}_{n,n}$. On the other hand, if $|B_{k}^{\prime}|<4n$,
then we have $|B_{1}\cup B_{2}\cup\ldots\cup B_{k}|\geq 2n$ and each vertex in $B_{1}\cup B_{2}\cup\ldots\cup B_{k}$ is either joined or co-joined to
the set $A_{k}^{\prime}$. Hence we can find a $K_{n,n}$ or $\widetilde{K}_{n,n}$ induced by subsets of $B_{1}\cup B_{2}\cup\ldots\cup B_{k}$ and $A_{k}^{\prime}$.
Now consider the case when the procedure stops with a canonically indecomposable graph $G_{k}^{\prime}$ such that $|A_{k}^{\prime}|<4n$. As $A=A_{1}\cup A_{2}\cup\ldots\cup A_{k}\cup A_{k}^{\prime}$ and $|A|=6n$, it follows that $|A_{1}\cup A_{2}\cup\ldots\cup A_{k}|>2n$. Hence, in this case we can pick the smallest $p$ such that $|A_{1}|+|A_{2}|+\ldots+|A_{p}|\geq 2n$. From the fact that
$|A_{1}|+|A_{2}|+\ldots+|A_{p}|+|A_{p}^{\prime}|=6n$ and the minimality of $p$, it follows that $|A_{p}|+|A_{p}^{\prime}|\geq 4n$. By construction $|A_{p}^{\prime}|\geq|A_{p}|$, hence $|A_{p}^{\prime}|\geq 2n$.
First, let us consider the case when $|B_{p}^{\prime}|\geq n$. Then, by construction, each vertex of $A_{1}\cup A_{2}\cup\ldots\cup A_{p}$
is either joined or co-joined to $B_{p}^{\prime}$. Since $|A_{1}\cup A_{2}\cup\ldots\cup A_{p}|\geq 2n$, it is clear that
we will find either a $K_{n,n}$ or $\widetilde{K}_{n,n}$ in the bipartite graph induced by $A_{1}\cup A_{2}\cup\ldots\cup A_{p}$ and $B_{p}^{\prime}$.
Now, let us consider the case when $|B_{p}^{\prime}|<n$. Then each vertex of $B_{1}\cup B_{2}\cup\ldots\cup B_{p}$ is either
joined or co-joined to $A_{p}^{\prime}$. Since $|A_{p}^{\prime}|\geq 2n$ and $|B_{1}\cup B_{2}\cup\ldots\cup B_{p}|>5n$,
we can find either a $K_{n,n}$ or $\widetilde{K}_{n,n}$ in the bipartite graph induced by $B_{1}\cup B_{2}\cup\ldots\cup B_{p}$ and $A_{p}^{\prime}$.
Hence we have shown that $R_{X}^{b}(n,n)\leq 6n$ for all $n\in\mathbb{N}$. It now follows easily that for any $p,q\in\mathbb{N}$,
we have $R_{X}^{b}(p,q)\leq R_{X}^{b}\left(\max\{p,q\},\max\{p,q\}\right)\leq 6\max\{p,q\}\leq 6(p+q)$.
∎
5.2.2 Exact values of bipartite Ramsey numbers for $P_{2}+P_{3}$-free bipartite graphs
Finding exact values of Ramsey numbers is much harder than providing bounds.
Similarly, finding tight bounds on the size of homogeneous subgraphs is a very difficult task.
In [4], such bounds have been given only for classes where the only forbidden induced subgraph has two vertices in each part of the bipartition.
In this section, we consider the class of bipartite graphs where the only forbidden induced subgraph, $P_{2}+P_{3}$, has two vertices in one of the parts and three in the other.
This class is a subclass of $S_{1,2,3}$-free bipartite graphs, and hence the bipartite Ramsey numbers are linear in this class.
Now we refine this conclusion by deriving exact values of the bipartite Ramsey numbers for the class of $P_{2}+P_{3}$-free bipartite graphs.
Let $G=(B,W,E)$ be a $P_{2}+P_{3}$-free bipartite graph given together with a bipartition of its vertex set into a set $B$ of
black and a set $W$ of white vertices. Also, let $x$ and $y$ be two vertices of $G$ of the same color.
Definition 2.
A private neighbour of $x$ with respect to $y$ is a vertex of $G$ adjacent to $x$ and non-adjacent to $y$.
We say $x$ and $y$ are incomparable if neither $N(x)\subseteq N(y)$ nor $N(y)\subseteq N(x)$, i.e. if both
$x$ and $y$ have private neighbours with respect to each other.
From the $P_{2}+P_{3}$-freeness of $G$ we immediately make the following conclusion.
Claim 1.
If $x$ and $y$ are incomparable, then each of $x$ and $y$ has exactly one private neighbour.
Definition 3.
On the set $B$ of black vertices we define a binary relation $R_{B}$ such that $(x,y)\in R_{B}$ if and only if either
$x$ and $y$ are incomparable or $N(x)=N(y)$. Similarly, we define a binary relation $R_{W}$ on the set $W$ of white vertices.
Lemma 13.
$R_{B}$ and $R_{W}$ are equivalence relations.
Proof.
We prove the lemma for $R_{B}$. Reflexivity and symmetry are obvious. To prove transitivity, consider three vertices $x,y,z\in B$
such that $(x,y)\in R_{B}$ and $(y,z)\in R_{B}$. We want to show that $(x,z)\in R_{B}$.
If $N(x)=N(y)$ or $N(y)=N(z)$, transitivity is obvious, and we additionally get that $N(x)=N(y)=N(z)$, since otherwise
an induced $P_{2}+P_{3}$ arises (if, say, $N(x)=N(y)$ while $y$ and $z$ are incomparable, then $x,y,z$ together with the private neighbours of $y$ and $z$ with respect to each other induce a $P_{2}+P_{3}$).
So assume instead that both pairs
$x,y$ and $y,z$ are incomparable.
Let $x_{y}$ be a private neighbour of $x$ with respect to $y$ and
let $y_{x}$ be a private neighbour of $y$ with respect to $x$. To avoid an induced $P_{2}+P_{3}$, we conclude that
$z$ is adjacent either to both $x_{y}$ and $y_{x}$ or to none of them.
If $z$ is adjacent neither to $x_{y}$ nor to $y_{x}$, then $x_{y}$ is a private neighbour of $x$ with respect to $z$.
Also, the private neighbour $z_{y}$ of $z$ with respect to $y$ is different from $x_{y}$ and is non-adjacent to $x$ (since otherwise
$x,y,z,y_{x},z_{y}$ induce a $P_{2}+P_{3}$). Therefore, $x$ and $z$ are incomparable, which proves transitivity. We observe that
the vertices $x,y,z$ and $x_{y},y_{x},z_{y}$ induce a matching, and hence $x_{y},y_{x},z_{y}$
are also pairwise incomparable.
We can reduce the case where $z$ is adjacent to both $x_{y}$ and $y_{x}$ to the previous one by passing to the (bipartite) complement, and noting that $P_{2}+P_{3}$ and $2K_{2}$ are their own complements.
∎
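Definition 3 and Lemma 13 translate directly into a small partitioning routine; the following sketch (function names ours) is correct precisely because the relation is transitive on $P_{2}+P_{3}$-free graphs:

```python
def related(adj, x, y):
    """(x, y) is in the relation R: equal neighbourhoods, or incomparable
    (each vertex has a private neighbour with respect to the other)."""
    Nx, Ny = adj[x], adj[y]
    return Nx == Ny or bool(Nx - Ny and Ny - Nx)

def classes(adj):
    """Greedy partition of one colour class into R-equivalence classes.
    Valid only when R is an equivalence relation, i.e. when the graph is
    P2+P3-free (Lemma 13); adj maps each vertex to its neighbourhood set."""
    result = []
    for v in adj:
        for C in result:
            if related(adj, v, C[0]):  # transitivity: one representative suffices
                C.append(v)
                break
        else:
            result.append([v])
    return result
```

For example, in an induced matching all vertices of one colour are pairwise incomparable and form a single class, while comparable vertices with distinct neighbourhoods are split apart.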
Lemma 14.
For any two equivalence classes $X$ and $Y$ of vertices of the same color,
•
either $N(x)\subset N(y)$ for all pairs $x\in X$ and $y\in Y$,
•
or $N(x)\supset N(y)$ for all pairs $x\in X$ and $y\in Y$.
Proof.
If $x$ and $y$ belong to different equivalence classes $X$ and $Y$, then by definition $N(x)\subset N(y)$ or $N(x)\supset N(y)$.
Assume without loss of generality that $N(x)\subset N(y)$. Then $N(x)\subset N(y^{\prime})$ for any vertex $y^{\prime}\in Y$, since otherwise
$N(y^{\prime})\subset N(x)\subset N(y)$, in which case $y$ and $y^{\prime}$ are not equivalent. In turn this implies that $N(x^{\prime})\subset N(y^{\prime})$
for any vertex $x^{\prime}\in X$.
∎
From the proof of Lemma 13, any equivalence class contains either vertices with the same neighbourhood or pairwise incomparable vertices. We will refer to those as type 1 and type 2 classes respectively.
Without loss of generality we will assume that any equivalence class of size 1 is of type 1.
Moreover, for each equivalence class $X_{B}\subseteq B$ of type 2
there is a corresponding equivalence class $X_{W}\subseteq W$ of the same type and of the same size such that $X_{B}\cup X_{W}$ induces
either a matching or a co-matching. We call $(X_{B},X_{W})$ a block in $G$.
Lemma 15.
Let $X$ be an equivalence class and let $v$ be a vertex of the opposite color not belonging to the block containing $X$ (if $X$ is of type 2).
Then $v$ is either complete or anti-complete to $X$.
Proof.
If $X$ is of type 1, then $v$ is either complete or anti-complete to $X$ by definition. If $X$ is of type 2, then
$v$ cannot distinguish two vertices of $X$, since otherwise a $P_{2}+P_{3}$ arises (remember that any two vertices of $X$ belong to an induced $2K_{2}$).
∎
Lemma 14 allows us to order the equivalence classes $B_{1},\ldots,B_{k}$ of $B$ so that $i<j$ implies $N(x)\supset N(y)$
for all pairs $x\in B_{i}$ and $y\in B_{j}$. Similarly, we order the equivalence classes $W_{1},\ldots,W_{p}$ of $W$ so that $i<j$ implies that $N(x)\subset N(y)$
for all pairs $x\in W_{i}$ and $y\in W_{j}$. Then we order the vertices within equivalence classes arbitrarily. In this way,
we produce a linear order of $B$ and a linear order of $W$ satisfying the following properties:
•
if a vertex $x\in B$ in an equivalence class of type 1 is adjacent to a vertex $y\in W$,
then $x$ is adjacent to all the white vertices following $y$ in the order;
•
if a vertex $y\in W$ in an equivalence class of type 1 is adjacent to a vertex $x\in B$,
then $y$ is adjacent to all the black vertices preceding $x$ in the order;
•
if $x$ is a black vertex in a block $(X_{B},X_{W})$, then $x$ is adjacent to all the white vertices that appear after $X_{W}$
and non-adjacent to all the white vertices that appear before $X_{W}$;
•
if $y$ is a white vertex in a block $(X_{B},X_{W})$, then $y$ is adjacent to all the black vertices that appear before $X_{B}$ and
non-adjacent to all the black vertices that appear after $X_{B}$.
Also, without loss of generality, we assume that in a block $(X_{B},X_{W})$ the order of $X_{B}$ is consistent with the order of $X_{W}$
(according to the bijection defined by the matching or co-matching between $X_{B}$ and $X_{W}$).
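The ordering just described can be sketched in code. Between distinct classes, neighbourhoods are strictly nested (Lemma 14), and by Claim 1 all vertices within one class have equal degree, so sorting by degree realises the class order; this is a sketch under those assumptions (ties within a class are broken arbitrarily, and the within-block consistency condition is not enforced here):

```python
def order_parts(adjB, adjW):
    """Order black vertices so that larger neighbourhoods come first and
    white vertices so that larger neighbourhoods come last.  Degree is a
    valid proxy for neighbourhood containment in P2+P3-free graphs:
    neighbourhoods of distinct classes are strictly nested, and degrees
    within a single equivalence class coincide."""
    B = sorted(adjB, key=lambda v: -len(adjB[v]))  # larger N(x) first
    W = sorted(adjW, key=lambda v: len(adjW[v]))   # larger N(y) last
    return B, W
```

On this ordering the four bullet properties above can then be checked directly for any given $P_{2}+P_{3}$-free input.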
Theorem 13.
For the class $A$ of $P_{2}+P_{3}$-free bipartite graphs and for all $p,q\geq 2$, $R^{b}_{A}(p,q)=\max\{p,q\}+p+q-2$.
Proof.
To prove that $R^{b}_{A}(p,q)\geq\max\{p,q\}+p+q-2$, assume, without loss of generality, that $q=\max\{p,q\}$ (if $p=\max\{p,q\}$, the proof is similar).
Let $G=(B,W,E)$ be a $P_{2}+P_{3}$-free bipartite graph with $|B|=|W|=2q+p-3$
such that $B=B_{0}\cup B_{1}$, $W=W_{1}\cup W_{2}$, where $|B_{0}|=|W_{2}|=p-2$, $B_{1}\cup W_{1}$ is an induced matching of size $2q-1$,
$B_{0}$ is complete to $W$, while $W_{2}$ is complete to $B$.
Assume $G$ contains a biclique $K_{p,p}$. Then this biclique contains at least two vertices in $B_{1}$ and at least two vertices
in $W_{1}$. But then $B_{1}\cup W_{1}$ is not an induced matching. This contradiction shows that $G$ is $K_{p,p}$-free.
Assume $G$ contains a co-biclique $\widetilde{K}_{q,q}$. This co-biclique cannot contain vertices of $B_{0}$ or $W_{2}$ (since these vertices
dominate the opposite part of the graph). But then we obtain a contradiction to the assumption that the size of the
matching $B_{1}\cup W_{1}$ is $2q-1$. Therefore, $G$ is $\widetilde{K}_{q,q}$-free. This proves the inequality
$R^{b}_{A}(p,q)\geq\max\{p,q\}+p+q-2$.
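The lower-bound construction can be checked by brute force for small parameters; here is a sketch for $p=q=3$ (the helper function and names are ours):

```python
from itertools import combinations

def contains(adj, A, B, n, want):
    """want=True: search for a biclique K_{n,n};
    want=False: search for a co-biclique ~K_{n,n}."""
    return any(all((b in adj[a]) == want for a in S for b in T)
               for S in combinations(A, n) for T in combinations(B, n))

# The construction for p = q = 3: parts of size 2q + p - 3 = 6,
# B0 and W2 of size p - 2 = 1, an induced matching B1 u W1 of size 2q - 1 = 5,
# B0 complete to W and W2 complete to B.
p = q = 3
B0, W2 = ['b*'], ['w*']
B1 = ['b%d' % i for i in range(2*q - 1)]
W1 = ['w%d' % i for i in range(2*q - 1)]
adj = {b: {w} | set(W2) for b, w in zip(B1, W1)}   # matching edges + W2
adj.update({b: set(W1) | set(W2) for b in B0})     # B0 complete to W
no_kpp = not contains(adj, B0 + B1, W1 + W2, p, True)
no_cobq = not contains(adj, B0 + B1, W1 + W2, q, False)
```

Both checks succeed, confirming that this $6$-by-$6$ graph avoids $K_{3,3}$ and $\widetilde{K}_{3,3}$.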
To prove the reverse inequality, consider an arbitrary $P_{2}+P_{3}$-free bipartite graph $G=(B,W,E)$ with $|B|=|W|=\max\{p,q\}+p+q-2$.
Without loss of generality we assume that the vertices of $G$ are ordered as described above.
Denote by
•
$B_{1}$ the set of the first $p$ vertices of $B$,
•
$B_{3}$ the set of the last $q$ vertices of $B$,
•
$B_{2}=B-(B_{1}\cup B_{3})$,
•
$W_{1}$ the set of the first $q$ vertices of $W$,
•
$W_{3}$ the set of the last $p$ vertices of $W$,
•
$W_{2}=W-(W_{1}\cup W_{3})$.
Since $p,q\geq 2$, we have that $|B_{2}|=|W_{2}|=\max\{p,q\}-2\geq 0$, i.e. $B_{1}$ and $B_{3}$ are disjoint and $W_{1}$ and $W_{3}$
are disjoint.
If $B_{1}$ is complete to $W_{3}$ or $W_{1}$ is anti-complete to $B_{3}$, then $G$ contains $K_{p,p}$ or $\widetilde{K}_{q,q}$, respectively.
Therefore, we assume there is a pair $w_{1}\in W_{1}$, $b_{3}\in B_{3}$ of adjacent vertices and a pair $b_{1}\in B_{1}$, $w_{3}\in W_{3}$ of non-adjacent vertices.
If the vertices $w_{1}$ and $b_{3}$ do not belong to the same block, then every black vertex that appears before $b_{3}$
is adjacent to every white vertex that appears after $w_{1}$, in which case an induced $K_{p,p}$ arises. Therefore, we assume
that $w_{1}$ and $b_{3}$ belong to the same block. Similarly, we assume that $b_{1}$ and $w_{3}$ belong to the same block.
It is not difficult to see that all four vertices belong to the same block. We denote this block by $T$.
In what follows we assume that $T$ is an induced matching (the case when $T$ is a co-matching is symmetric).
If $|T|\geq 2q$ (i.e. if $T$ contains at least $2q$ edges), then $G$ contains an induced $\widetilde{K}_{q,q}$.
Therefore, from now on we assume that $|T|\leq 2q-1$.
Since each of $B_{1}$ and $B_{3}$ contains a vertex of $T$, we conclude that all vertices of $B_{2}$ belong to $T$. Hence the number of
black vertices of $T$ that appear before $B_{3}$ is at least $\max\{p,q\}-1$. Similarly, the number of white vertices of $T$ that appear after
$W_{1}$ is at least $\max\{p,q\}-1$.
Therefore, $|T|\geq 2\max\{p,q\}-1$ (at least $\max\{p,q\}-1$ edges before $w_{1}b_{3}$,
at least $\max\{p,q\}-1$ edges after $w_{1}b_{3}$ plus the edge $w_{1}b_{3}$).
Combining $|T|\geq 2\max\{p,q\}-1\geq 2q-1$ and $|T|\leq 2q-1$,
we conclude that $|T|=2q-1$.
If there exists at least one black vertex that appears after $T$, then this vertex together with the last $q-1$ black vertices in $T$ and the first
$q$ white vertices in $T$ would induce a $\widetilde{K}_{q,q}$. Similarly, if there is at least one white vertex before $T$, then
an induced $\widetilde{K}_{q,q}$ can be easily found.
Therefore, we assume that there are at least
$$\max\{p,q\}+p+q-2-(2q-1)=\max\{p,q\}+p-q-1\geq p-1$$
black vertices before $T$ and at least $p-1$ white vertices after $T$. These vertices together with the edge $w_{1}b_{3}$
create an induced $K_{p,p}$.
∎
References
[1]
P. Allen, Forbidden induced bipartite graphs.
J. Graph Theory, 60(3) (2009), 219–241.
[2]
N. Alon, J. H. Spencer, The probabilistic method. John Wiley & Sons, 2004.
[3]
A. Atminas, V. Lozin, V. Zamaraev,
Linear Ramsey numbers.
Lecture Notes in Computer Science, 10979 (2018) 26–38.
[4]
M. Axenovich, C. Tompkins, L. Weber,
Large homogeneous subgraphs in bipartite graphs with forbidden induced subgraphs. (2019),
arXiv:1903.09725.
[5]
R. Belmonte, P. Heggernes, P. van ’t Hof, A. Rafiey, R. Saei,
Graph classes and Ramsey numbers.
Discrete Appl. Math. 173 (2014), 16–27.
[6]
Z. Blázsik, M. Hujter, A. Pluhár, Z. Tuza,
Graphs with no induced $C_{4}$ and $2K_{2}$.
Discrete Math., 115 (1993) 51–55.
[7]
M. Chudnovsky, P. Seymour,
Extending Gyárfás-Sumner conjecture.
J. Combin. Theory, B, 105 (2014) 11–16.
[8]
D. Conlon,
A new upper bound for the bipartite Ramsey problem.
J. Graph Theory, 58(4) (2008) 351–356.
[9]
D.G. Corneil, H. Lerchs, B.L. Stewart,
Complement reducible graphs.
Discrete Appl. Math., 3 (1981) 163–174.
[10]
P. Erdős, A. Hajnal,
Ramsey-type theorems.
Discrete Appl. Math. 25 (1989) 37–52.
[11]
S. Foldes, P.L. Hammer, Split graphs.
Congressus Numerantium, No. XIX, (1977) 311–315.
[12]
R.L. Graham, V. Rödl, A. Ruciński,
On graphs with linear Ramsey numbers.
J. Graph Theory, 35(3) (2000) 176–192.
[13]
D. Korándi, J. Pach, I. Tomon, Large homogeneous submatrices. (2019),
arXiv:1903.06608v2.
[14]
L. Lovász, Kneser’s conjecture, chromatic number, and homotopy.
J. Combin. Theory, A. 25(3) (1978) 319–324.
[15]
V. Lozin,
Bipartite graphs without a skew star.
Discrete Math., 257 (2002) 83–100.
[16]
V. Lozin, V. Zamaraev,
The structure and the number of $P_{7}$-free bipartite graphs.
European J. Combinatorics, 65 (2017) 143–153.
[17]
S. Olariu, Paw-free graphs.
Information Processing Letters, 28 (1988) 53–54.
[18]
F.P. Ramsey, On a problem of formal logic.
Proceedings of the London Mathematical Society, 30 (1930), 264–286.
[19]
E. Scheinerman, D. Ullman, Fractional graph theory.
A rational approach to the theory of graphs. Dover Publications, Inc., Mineola, NY, 2011. xviii+211 pp.
[20]
R. Steinberg, C.A. Tovey,
Planar Ramsey numbers.
J. Combin. Theory, B 59 (1993) 288–296. |
Hamiltonian Formalism for dynamics of particles in MOG
Sohrab Rahvar${}^{1}$
$1$ Department of Physics, Sharif University of Technology, P.O.
Box 11155-9161, Tehran, Iran
[email protected]
Abstract
MOG is a modified gravity theory designed to replace dark matter. In this theory, in addition to the metric tensor, there is a massive vector gravity field; each particle carries a charge proportional to its inertial mass and couples to the vector field through its four-velocity.
In this work, we present the Hamiltonian formalism for the dynamics of particles in this theory. The advantage of the Hamiltonian formalism is a better understanding and analysis of the dynamics of massive and massless particles: the massive particles deviate from the geodesics of space-time, while photons follow the geodesics. We also study the dynamics of particles in the Newtonian and post-Newtonian regimes for observational purposes. An important result of the Hamiltonian formalism is that, while lensing on large scales is compatible with the observations,
the deflection angle in stellar-size lensing is larger than in General Relativity. This result can rule out the theory unless we introduce a screening mechanism that changes the effective gravitational constant near compact objects like stars.
1 Introduction
The dynamics of galaxies and the large-scale structure of the Universe show that a significant amount of matter in the Universe is dark (Bertone et al., 2005). The observational evidence for the existence of dark matter started with the measurement of the rotation curves of spiral galaxies (Rubin & Ford, 1970). We have a list of candidates for the dark matter; however, observations in recent years have ruled out some of them. For instance, microlensing observations in the direction of the Large and Small Magellanic Clouds ruled out Massive Astrophysical Compact Halo Objects (MACHOs) as dark matter candidates (Milsztajn, 2002; Tisserand et al., 2007; Wyrzykowski et al., 2011).
There are also experiments for the detection of non-baryonic candidates for the dark matter, such as axions, sterile neutrinos, weakly interacting massive particles (WIMPs) and supersymmetric particles (Overduin & Wesson, 2004), but so far no evidence for dark matter particles has been reported. Another candidate for dark matter is Primordial Black Holes (PBHs), which could have formed in the early universe as a result of quantum fluctuations (Zel’dovich & Novikov, 1966; Hawking, 1971). Various observations exclude them as the dark matter candidate, except for two narrow windows around lunar masses and tens of solar masses (Carr & Kühnel, 2020). On the other hand, PBHs might be the sources of the large-mass black holes ($m>50m_{\odot}$) detected by LIGO in gravitational-wave events (Khalouei et al., 2021). Also, lunar-mass black holes may collide with the Earth; however, even if the halo were made entirely of PBHs, their collision rate would be one per billion years, with a weak signature on the Earth (Rahvar, 2021). So PBHs remain a possible candidate for dark matter.
In recent years, the lack of detection of dark matter candidates motivated the study of modifications of the gravity law, such as MOND (Milgrom, 1983; Skordis & Złośnik, 2021). One of these models is the so-called MOdified Gravity (MOG) (Moffat, 2006), where, in addition to the metric, there are also vector and scalar sectors of gravity. In this theory, each object, in addition to its gravitational mass, has a gravitational charge proportional to its inertial mass, which couples to a massive vector field. As a result, at long distances from a point-like source of gravity the massive vector field fades and we recover Einstein's general relativity. In this case, the gravitational constant $G$ is tuned to be large at large scales to compensate for the dark matter, while on small scales the repulsive vector field weakens the gravitational strength and we have an effective gravity with a smaller gravitational constant $G_{N}$.
The observational tests of this theory in the weak-field approximation for the dynamics of galaxies (Moffat & Rahvar, 2013) and clusters of galaxies (Moffat & Rahvar, 2014) have been investigated. The comparison of observational data with the predictions of MOG shows that the dynamics of these structures can be interpreted without the need for dark matter, even on cosmological scales (Davari & Rahvar, 2020, 2021). One of the challenging problems in testing modified gravity models is gravitational lensing by large-scale structures. Since the mass of the photon is zero, while massive particles follow the modified geodesic world lines, it is not obvious how photons should couple to the vector field. The wave-optics approach to electromagnetic propagation has been used to study this problem (Rahvar & Moffat, 2019). The comparison with observations of gravitational lensing on large scales confirms that strong lensing by galaxies can be interpreted without the need for dark matter (Moffat et al., 2018).
In this work, we introduce the Hamiltonian formalism for the dynamics of massive and massless particles in MOG. This approach resolves the ambiguity in the dynamics of particles, especially for the massless ones. We show that the massless particles, unlike the massive ones, follow the geodesics of space-time. In Section (2) we provide the action for this theory and the dynamics of particles in the Hamiltonian formalism. In Section (3) we introduce the field equations in MOG. In Section (4) we derive the equations of motion of particles in the weak-field approximation, with emphasis on the gravitational lensing of massless particles. In Section (5) we extend our calculation to the post-Newtonian limit and consider the perihelion precession of Mercury in MOG. Section (6) provides the conclusion.
2 Field equation
In the simplified form of MOG theory, gravity is given by the metric $g_{\mu\nu}$ and a vector field $\phi_{\mu}$, and the overall action can be written as (Moffat, 2006)
$$S=S_{g}+S_{\phi}+S_{M},$$
(1)
where the action associated with the metric is
$$S_{g}=\frac{c^{4}}{16\pi G}\int(R+2\Lambda)\sqrt{-g}d^{4}x,$$
(2)
and the action associated to the vector field is
$$S_{\phi}=\frac{1}{4\pi}\int\omega\Big{[}-\frac{1}{4}B^{\mu\nu}B_{\mu\nu}+\frac{1}{2}\mu^{2}\phi_{\mu}\phi^{\mu}+V_{\phi}(\phi_{\mu}\phi^{\mu})\Big{]}\sqrt{-g}\,d^{4}x-\int J_{\mu}\phi^{\mu}\sqrt{-g}\,d^{4}x,$$
(3)
where we adopt the $(-,+,+,+)$ signature for the metric,
$\omega$ is a coupling constant, $B_{\mu\nu}=\partial_{\mu}\phi_{\nu}-\partial_{\nu}\phi_{\mu}$ is the Faraday tensor, $J^{\mu}=\kappa\rho u^{\mu}$ is the current, $J_{\mu}\phi^{\mu}$ is the interaction term, and $S_{M}$ is the matter action. The field equations are obtained by varying the action with respect to the metric and vector fields.
On the other hand, for a test particle of mass $m$ interacting with the metric and the vector part of gravity, the action is
given by
$$S=-mc^{2}\int d\tau+q_{5}\omega\int\phi_{\mu}dx^{\mu}.$$
(4)
The main assumption of this theory is that the charge of the particle in the second term is proportional to its inertial mass, i.e. $q_{5}=\kappa m$. We write this action in terms of the coordinate time in curved space as follows:
$$S=-mc^{2}\int(-\frac{1}{c^{2}}g_{\mu\nu}\frac{dx^{\mu}}{dt}\frac{dx^{\nu}}{dt})^{1/2}dt+m\kappa\omega\int\phi_{\mu}\frac{dx^{\mu}}{dt}dt.$$
(5)
The Lagrangian corresponding to this action is
$$L=-mc(-g_{ij}\dot{x}^{i}\dot{x}^{j}-2g_{0i}c\dot{x}^{i}-c^{2}g_{00})^{1/2}+m\kappa\omega(\phi_{i}\dot{x}^{i}+\phi_{0}c),$$
(6)
where the Latin indices correspond to the spatial part of the metric. The canonical momentum of the particle, from the definition $p_{i}=\frac{\partial L}{\partial\dot{x}^{i}}$, is obtained as
$$p_{i}=\frac{mc(g_{ij}\dot{x}^{j}+g_{0i}c)}{(-g_{ij}\dot{x}^{i}\dot{x}^{j}-2g_{0i}c\dot{x}^{i}-c^{2}g_{00})^{1/2}}+m\kappa\omega\phi_{i}.$$
(7)
According to the definition of Hamiltonian as $H=p_{i}\dot{x}^{i}-L$, the Hamiltonian of a particle is given by
$$H=-mc^{2}\frac{g_{00}c+g_{0i}\dot{x}^{i}}{(-g_{ij}\dot{x}^{i}\dot{x}^{j}-2g_{0i}c\dot{x}^{i}-c^{2}g_{00})^{1/2}}-mc\kappa\omega\phi_{0}.$$
(8)
We simplify this equation to write the Hamiltonian in a specific coordinate system by setting $g_{0i}=0$; then we can write the Hamiltonian in
conventional form in terms of coordinates and momenta. Substituting equation (7) in (8), the Hamiltonian simplifies to
$$H=\sqrt{-g_{00}}E-mc\kappa\omega\phi_{0},$$
(9)
where
$$E=c\sqrt{(mc)^{2}+p_{i}p^{i}+(\kappa\omega m)^{2}\phi_{i}\phi^{i}-2\kappa\omega mp_{i}\phi^{i}}.$$
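As a consistency check of equations (7)-(9), one can verify symbolically, in one spatial dimension with the shorthand $g_{00}=-A$, $g_{11}=B$ (our own notation, a sketch rather than part of the derivation), that the Hamiltonian indeed equals $\sqrt{-g_{00}}\,E-mc\kappa\omega\phi_{0}$:

```python
import sympy as sp

m, c, A, B, v, k, w, f0, f1 = sp.symbols(
    'm c A B v kappa omega phi0 phi1', positive=True)
root = sp.sqrt(A*c**2 - B*v**2)     # assume A c^2 > B v^2 (timelike motion)
p = m*c*B*v/root + m*k*w*f1         # canonical momentum, eq. (7)
H = m*c**3*A/root - m*c*k*w*f0      # Hamiltonian, eq. (8) with g00 = -A
# E as defined above; spatial indices raised with g^{11} = 1/B
E2 = c**2*((m*c)**2 + p**2/B + (k*w*m)**2*f1**2/B - 2*k*w*m*p*f1/B)
# compare squares (both sides positive) to avoid nested radicals
assert sp.simplify((H + m*c*k*w*f0)**2 - A*E2) == 0
```

Comparing squares sidesteps nested radicals that `simplify` might otherwise leave unresolved; since both sides are positive, the squared identity is equivalent to equation (9).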
In what follows we obtain the dynamics of massive and massless particles in a generic space-time.
From the Hamilton equations $\dot{p}_{k}=-\frac{\partial H}{\partial x^{k}}$ and $\dot{x}^{k}=\frac{\partial H}{\partial p_{k}}$, the dynamics of a test particle are obtained as
$$\dot{p}_{k}=-E(\sqrt{-g_{00}})_{,k}-\sqrt{-g_{00}}\,E_{,k}+mc\kappa\omega\phi_{0,k}\,,$$
(10)
$$\dot{x}^{k}=\frac{c^{2}}{E}\,(p_{j}-\kappa\omega m\phi_{j})\,g^{jk}\,,$$
(11)
where
$$E_{,k}=\frac{c^{2}}{E}\left(\frac{1}{2}\,p_{i}p_{j}\,g^{ij}{}_{,k}+\frac{1}{2}(\kappa\omega m)^{2}\phi_{i}\phi_{j}\,g^{ij}{}_{,k}+(\kappa\omega m)^{2}\phi_{i,k}\phi^{i}-\kappa\omega m\,p_{i}\left(\phi_{j,k}g^{ij}+\phi_{j}\,g^{ij}{}_{,k}\right)\right)$$
(12)
In the next section we introduce the field equations, then apply them to the Hamiltonian equations to obtain the dynamics of the particles.
3 Solution of field equations
In this section, we review the field equations in MOG.
Varying the action in equation (1) with respect to the metric results in
$$G_{\mu\nu}=\frac{8\pi G}{c^{4}}(T_{\mu\nu}^{(M)}+T_{\mu\nu}^{(\phi)}),$$
(13)
where $G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}$ is the Einstein tensor and the energy-momentum tensor is defined by
$$T_{\mu\nu}^{(M)}+T_{\mu\nu}^{(\phi)}=-\frac{2}{\sqrt{-g}}\frac{\delta\left[(L_{M}+L_{\phi})\sqrt{-g}\right]}{\delta g^{\mu\nu}},$$
(14)
where the energy-momentum tensor of the vector field is
$$T_{\mu\nu}^{(\phi)}=\frac{\omega}{4\pi}(B_{\mu}{}^{\alpha}B_{\nu\alpha}-\frac{1}{4}g_{\mu\nu}B^{\alpha\beta}B_{\alpha\beta})-\frac{\mu^{2}\omega}{4\pi}(\phi_{\mu}\phi_{\nu}-\frac{1}{2}g_{\mu\nu}\phi_{\alpha}\phi^{\alpha}).$$
(15)
The first and second terms of the energy-momentum tensor are of order $\sim(\partial\phi)^{2}$ and $\sim\mu^{2}\phi^{2}$, respectively. In the weak-field approximation, the vector field and the Newtonian gravitational potential are proportional, $|\phi|\propto\Phi_{N}/\sqrt{G}$, so $(\partial\phi)^{2}$ is of the order of the energy density of the gravitational potential, i.e. $\sim\rho v^{2}$ (Moffat & Rahvar, 2013). We can ignore this term compared to the matter rest-mass energy density (i.e. $T^{00}=\rho c^{2}$).
So, on the right-hand side of the modified gravity equation, we ignore the energy-momentum of the vector field, $T_{\mu\nu}^{(\phi)}$, compared to the energy-momentum of matter, $T_{\mu\nu}^{(M)}$.
Now, we vary the action in equation (3) with respect to $\phi^{\nu}$. The result is
$$\phi^{\nu}{}_{;\mu}{}^{\mu}-\mu^{2}\phi^{\nu}=-\frac{4\pi}{\omega}J^{\nu}.$$
(16)
In this work we study static gravitational fields, where the time derivative of $\phi^{\nu}$ is zero. (In cosmology, where all the fields change in time, this argument is not valid; there, due to the isotropy of the Universe, the spatial derivatives are zero instead.) For a static metric, equation (16) simplifies to
$$\phi^{\nu}{}_{,i}{}^{i}+\phi^{\nu}{}_{,}{}^{i}(\ln\sqrt{-g})_{,i}-\mu^{2}\phi^{\nu}=-\frac{4\pi}{\omega}J^{\nu}$$
(17)
Now we discuss the solutions of equations (13) and (17) around the flat space.
4 Dynamics in the weak field approximation
Let us assume the metric in the weak-field approximation, up to order $g_{\mu\nu}\sim(v/c)^{2}$, as
$$ds^{2}=-(1+\frac{2\Phi}{c^{2}})c^{2}dt^{2}+(1-\frac{2\Phi}{c^{2}})\delta_{ij}dx^{i}dx^{j}.$$
(18)
Here, the $2\Phi/c^{2}$ term is a perturbation around the flat space-time. We also assume that $\phi_{\mu}$, the vector sector of gravity, is a perturbation around the flat space. In order to keep only the first-order perturbations, we ignore the higher-order terms containing $g^{2}$, $g\phi$ and $\phi^{2}$ in our calculations. Then the modified Einstein equation simplifies to the Poisson equation
$$\nabla^{2}\Phi=4\pi G\rho$$
(19)
and equation (17) simplifies to
$$\nabla^{2}\phi^{\nu}-\mu^{2}\phi^{\nu}=-\frac{4\pi}{\omega}J^{\nu}.$$
(20)
The solution of this differential equation is
$$\phi^{\mu}(x)=\frac{1}{\omega}\int\frac{e^{-\mu|{{\bf x}-{\bf x^{\prime}}|}}}{|{\bf x}-{\bf x^{\prime}}|}J^{\mu}({\bf x^{\prime}})d^{3}x^{\prime},$$
(21)
where for a distribution of matter the non-zero source currents are $J^{0}=\kappa\rho c$ and $J^{i}=\kappa\rho u^{i}$.
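The kernel appearing in this solution can be checked directly: away from the source, the Yukawa profile $e^{-\mu r}/r$ solves the homogeneous form of equation (20), $\nabla^{2}\phi=\mu^{2}\phi$. A minimal symbolic verification (our own sketch, using sympy):

```python
import sympy as sp

r, mu = sp.symbols('r mu', positive=True)
phi = sp.exp(-mu*r)/r                       # Yukawa kernel from eq. (21)
# radial part of the Laplacian in spherical symmetry
lap = sp.diff(r**2*sp.diff(phi, r), r)/r**2
# away from the source, (nabla^2 - mu^2) phi = 0
assert sp.simplify(lap - mu**2*phi) == 0
```

The delta-function source at the origin then fixes the $4\pi/\omega$ normalization, exactly as in the Coulomb case $\mu\to 0$.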
4.1 Equation of motion for massive particles
We substitute the metric components into the Hamiltonian equation (10); the result is
$$\dot{p}_{k}=m(-\frac{\partial\Phi}{\partial x^{k}}+\kappa\omega\frac{p_{i}}{m}\frac{\partial\phi_{j}}{\partial x^{k}}\delta^{ij}+\kappa\omega c\frac{\partial\phi_{0}}{\partial x^{k}}).$$
(22)
Here we assume that the particles move with non-relativistic velocities (i.e. $v/c\ll 1$). On the other hand,
the velocity of a particle from the Hamilton equation is
$$\dot{x}^{k}=\delta^{ki}(\frac{p_{i}}{m}-\kappa\omega\phi_{i}),$$
(23)
substituting the derivative of equation (23) in (22), we obtain the dynamics of a test mass particle
in the Newtonian approximation as follows:
$$\ddot{x}_{k}=-\frac{\partial}{\partial x^{k}}(\Phi+\kappa\omega c\phi^{0})-\kappa\omega\frac{\partial\phi_{k}}{\partial t}+\kappa\omega[\vec{\dot{x}}\times(\nabla\times{\vec{\phi}})]_{k},$$
(24)
where we replaced $\phi_{0}$ with the contravariant component $\phi^{0}$ in the first term. We can define the effective potential $\Phi_{\mathrm{eff}}=\Phi+\kappa\omega c\,\phi^{0}$, which was also derived in Moffat & Rahvar (2013).
In the Hamiltonian approach, we derived additional magneto-gravity terms in MOG. Let us call $-\kappa\omega\partial_{t}\phi_{k}$ the "EMog term"; it is similar to an electric field induced by the time variation of a magnetic field.
The third term on the right-hand side of this equation, $\kappa\omega\vec{v}\times(\nabla\times{\vec{\phi}})$, is similar to the Lorentz force due to a magnetic field; let us call it the "BMog term".
We note that in equation (24) we recover the Newtonian equation $\ddot{x}_{k}=-\nabla\Phi_{N}$ plus the $\nabla\phi^{0}$ term, where the latter plays the role of dark matter in the large-scale structures. These two terms are of order $v^{2}/r$. On the other hand, the EMog and BMog terms are of order $v^{4}/(rc^{2})$, which is of post-Newtonian order. We will discuss the contribution of these terms to the dynamics of a test particle around small structures such as stars and large structures such as galaxies.
4.1.1 The case for spherical symmetry
Let us assume a point mass object, such as a star, as the source of gravity. From the Poisson equation (19) and equation (20), in the Newtonian regime the potentials are
$$\Phi(r)=-\frac{GM}{r},$$
(25)
$$\phi^{0}(r)=\frac{\kappa c}{\omega}\frac{Me^{-\mu r}}{r},$$
(26)
where $M$ is the mass of the central star and $r$ is the distance from it. The Newtonian (Poisson) term gives an attractive force while the second term produces a repulsive force. For a star with a relative velocity with respect to the coordinate system we may also consider the magnetic component $\phi^{i}$. For the case of $J^{i}=0$ in equation (20), spherical symmetry admits the solution
$$\phi^{r}(r)=\frac{1}{\omega}\frac{cge^{-\mu r}}{r}\hat{e}_{r},$$
(27)
where $g$ is the charge of the magnetic monopole, which we can take to be proportional to the inertial mass as $g=\kappa M$. The solution for the dynamics of a point mass particle in equation (24), up to $(v/c)^{2}$ terms and ignoring the post-Newtonian orders, results in an attractive $1/r^{2}$ force plus a Yukawa-type repulsive force. The repulsive force fades to zero at large distances, and at distances $r<\mu^{-1}$ it weakens the effective gravity. Now we substitute the potentials in the equation of motion of a test particle, equation (24):
$$\ddot{x}^{i}=-\frac{GM}{r}\left(1-\frac{\kappa^{2}c^{2}}{G}e^{-\mu r}(1+\mu r)\right).$$
(28)
In equation (28), at large distances the exponential term goes to zero and the dynamics of a test particle converges to $\ddot{x}^{i}=-GM/r^{2}$. On the other hand, at closer distances $r\ll\mu^{-1}$, the dynamics of a test particle is given by $\ddot{x}^{i}=-(G-\kappa^{2}c^{2})M/r^{2}$. Since at close distances we should recover Newtonian gravity, $G_{N}=G-\kappa^{2}c^{2}$. This means that the gravitational constant of the theory (i.e. $G$) is larger than $G_{N}$ (defined at the smaller scales). Using the convention $\kappa^{2}c^{2}=\alpha G_{N}$, we can rewrite $G$ as $G=G_{N}(1+\alpha)$, where the best fit to spiral galaxies results in $\alpha=8.89\pm 0.34$ (Moffat & Rahvar, 2013). For an extended spherical structure such as an elliptical galaxy we can obtain the effective potential of this structure as
$$\phi_{eff}=-G_{N}\int\frac{\rho(x^{\prime})}{|x-x^{\prime}|}(1+\alpha-\alpha e^{-\mu|x-x^{\prime}|})d^{3}x^{\prime}$$
(29)
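As a minimal numerical sketch (ours, not from the paper), the point-mass acceleration described above (inverse-square attraction with a Yukawa correction) can be coded using the convention $\kappa^{2}c^{2}=\alpha G_{N}$, i.e. $G=G_{N}(1+\alpha)$; the value of $\mu$ used in the checks is purely illustrative:

```python
import math

G_N = 6.674e-11   # Newtonian gravitational constant, m^3 kg^-1 s^-2
ALPHA = 8.89      # best-fit value quoted from Moffat & Rahvar (2013)

def mog_acceleration(r, M, mu, alpha=ALPHA):
    """Magnitude of the radial acceleration of a test particle at distance r
    from a point mass M, with G = G_N (1 + alpha) and a Yukawa correction."""
    G = G_N * (1.0 + alpha)
    yukawa = (alpha / (1.0 + alpha)) * math.exp(-mu * r) * (1.0 + mu * r)
    return G * M / r**2 * (1.0 - yukawa)
```

The two limits reproduce the discussion above: for $\mu r\ll 1$ the acceleration reduces to $G_{N}M/r^{2}$, while for $\mu r\gg 1$ it approaches the enhanced value $G_{N}(1+\alpha)M/r^{2}$.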
4.2 Equation of motion of massless particles
For the dynamics of massless particles, such as photons and neutrinos, we start with the Hamiltonian in equation (9) and set $m=0$,
$$H=\sqrt{-g_{00}}(p_{i}p_{j}g^{ij}c^{2})^{1/2}.$$
(30)
From the Hamiltonian equations (10) and (11), the time derivatives of the momentum and position are obtained as
$$\frac{\dot{p}_{k}}{|p|}=-\frac{\partial\sqrt{-g_{00}}}{\partial x^{k}}c-\frac{1}{2}\sqrt{-g_{00}}\frac{p_{i}p_{j}}{p^{2}}\frac{\partial g^{ij}}{\partial x^{k}}c,$$
(31)
$$\dot{x}^{k}=\frac{p_{j}}{p}g^{jk}\sqrt{-g_{00}}c.$$
(32)
The main difference between massive and massless particles is that, unlike massive particles, which interact with the $\phi_{\mu}$ field and deviate from the geodesics of space-time, massless particles decouple from the vector field and follow the geodesics of the metric. We substitute the weak field approximation of the metric from equation (18) into equations (31) and (32), which results in the equations of motion
$$\frac{\dot{p}_{k}}{|p|}=-\frac{2}{c}\frac{\partial\Phi}{\partial x^{k}},$$
(33)
$$\dot{x}^{k}=\hat{n}^{k}\left(1+\frac{\Phi}{c^{2}}\right)c,$$
(34)
where $\hat{n}^{k}=p^{k}/|p|$ is the unit vector along the light ray.
Equation (33) represents the deflection of light passing close to a gravitational potential. The change in the transverse component of the momentum of the photon relative to the initial momentum gives the deflection angle of light; integrating along a light ray passing close to a mass with impact parameter $b$ results in the deflection angle
$\delta=\frac{4\Phi(b)}{c^{2}}$. For a single lens the deflection angle would be
$$\delta_{MOG}=\frac{4G_{N}M(1+\alpha)}{bc^{2}}$$
(35)
If we compare this deflection angle with the deflection angle of Einstein gravity (i.e. $\delta_{GR}$), we find an enhancement of the deflection angle by the amount $\delta_{MOG}=(1+\alpha)\delta_{GR}$. We note that this enhancement is independent of the mass of the lens and of the impact parameter of the light, which is problematic for stellar mass lenses, where observations agree with General Relativity. One of the solutions to this problem might be a screening mechanism whereby, close to a compact object, the effective $G$ becomes smaller. Screening mechanisms are practical for scalar fields (Khoury, 2010), and one may treat $G$ as in the Brans-Dicke theory to hide the enhancement of $G$ on solar system scales.
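As a quick consistency check (ours, not the paper's), equation (35) with $\alpha=0$ reproduces the classical solar-limb deflection of about $1.75''$, so an unscreened $\alpha=8.89$ would enhance it roughly tenfold:

```python
import math

G_N   = 6.674e-11    # m^3 kg^-1 s^-2
C     = 2.998e8      # speed of light, m/s
M_SUN = 1.989e30     # kg
R_SUN = 6.96e8       # m; impact parameter of a ray grazing the solar limb

delta_gr  = 4.0 * G_N * M_SUN / (R_SUN * C**2)   # GR deflection in radians
delta_arc = math.degrees(delta_gr) * 3600.0      # -> arcseconds, about 1.75''

alpha = 8.89
delta_mog = (1.0 + alpha) * delta_gr             # equation (35)
```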
For a spherically symmetric, static space-time, treating $G$ as a scalar field in this theory, the solution of the field equations results in the following expression for the $\alpha$ parameter (Moffat & Toth, 2009):
$$\alpha=\frac{M}{(\sqrt{M}+{\cal E})^{2}}(\frac{G}{G_{N}}-1),$$
(36)
where ${\cal E}^{2}\gg M_{\odot}$. So for stellar mass objects the $\alpha$ parameter is negligible and we recover gravitational lensing as in General Relativity.
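To illustrate the screening in equation (36), one can evaluate it numerically (units such that $M$ is in solar masses and ${\cal E}$ in $\sqrt{M_{\odot}}$; the numerical value of ${\cal E}$ below is purely hypothetical, while $G/G_{N}=1+8.89$ follows from the best-fit $\alpha$ quoted above):

```python
def alpha_of_mass(M, E, G_over_GN=9.89):
    """Equation (36): alpha for a static spherical source of mass M (solar
    masses); E is an integration constant in units of sqrt(solar mass)."""
    return M / (M**0.5 + E) ** 2 * (G_over_GN - 1.0)

E = 2.5e4  # hypothetical value of the integration constant
```

With this assumed ${\cal E}$, $\alpha$ is of order $10^{-8}$ for one solar mass but approaches $8.89$ for very large masses, reproducing the screening behaviour described above.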
5 MOG Dynamics in post-Newtonian approximation
In this section we extend the approximation for the dynamics of particles in MOG to higher orders in $v/c$ and investigate the observational effects in astronomical systems.
In the standard formalism of general relativity, one of the methods to investigate relativistic effects in slow-moving objects, such as the planets in the solar system, is the post-Newtonian approximation (Weinberg, 1972). In the Newtonian approximation of GR the equation of motion is given by $\dot{v}^{i}=\frac{1}{2}c^{2}\partial^{i}h_{00}$, where the perturbation of the metric is of order $\phi/c^{2}$ or $(v/c)^{2}$. Following the standard convention, we denote the perturbations of the metric by $g^{(N)}_{\mu\nu}$, where $N$ represents their order of magnitude in $(v/c)^{N}$; the Newtonian approximation is thus the perturbation theory up to $N=2$. From equation (6), the Lagrangian of a particle up to the post-Newtonian order is
$$L=mc^{2}(\frac{1}{2}\bar{v}^{2}+\frac{1}{2}g_{00}^{(2)}+\frac{1}{2}g_{00}^{(4)}+g_{0i}^{(3)}\bar{v}^{i}+\frac{1}{2}g^{(2)}_{ij}\bar{v}^{i}\bar{v}^{j})+mc\kappa\omega(\phi_{i}\bar{v}^{i}+\phi_{0}),$$
(37)
where $\bar{v}^{i}$ is the physical velocity normalized to the speed of light. We have seen in the previous section that the “Bmog” and “Emog” terms are also of order $\bar{v}^{4}$, so these two terms can be considered as post-Newtonian terms. Using the Euler-Lagrange equation
$$\frac{d}{dt}\frac{\partial L}{\partial v^{i}}-\frac{\partial L}{\partial x^{i}}=0,$$
(38)
the equation of motion of a test particle is obtained as
$$\frac{d{\bf v}}{dt}=-{\bf\nabla}\phi_{eff}-\kappa\omega\frac{\partial{\vec{\phi}}}{\partial t}+\kappa\omega[{\bf v}\times(\nabla\times{\vec{\phi}})]-\nabla(2\Phi^{2}+\psi)-\frac{1}{c}\frac{\partial{\bf\xi}}{\partial t}+\frac{\bf v}{c}\times(\nabla\times{\bf\xi})+3\frac{\bf v}{c^{2}}\frac{\partial\Phi}{\partial t}+4\frac{{\bf v}}{c}\Big(\frac{\bf v}{c}\cdot\nabla\Big)\Phi-\Big(\frac{v}{c}\Big)^{2}\nabla\Phi,$$
where $\phi_{eff}=\Phi+\kappa\omega c\phi^{0}$; the second and third terms on the right-hand side are the Emog and Bmog terms, and $g_{00}^{(2)}=-2\Phi/c^{2}$, $g_{ij}^{(2)}=-2\delta_{ij}\Phi/c^{2}$, $g_{0i}^{(3)}=\xi_{i}$, and $g_{00}^{(4)}=-2(\Phi/c^{2})^{2}-2\psi$.
For a static space-time, we can ignore the time derivatives of the metric components. Also, for the Bmog term, since the current is zero (i.e. $J^{i}=0$), the monopole solution of equation (20) is $\vec{\phi}\sim\frac{e^{-\mu r}}{r}\hat{e}_{r}$, for which $\nabla\times\vec{\phi}=0$. Then we expect that for the precession of Mercury’s perihelion only the standard post-Newtonian terms of GR matter, and MOG has no extra contribution to this effect. In other words, MOG is compatible with the measurement of Mercury’s perihelion precession.
The effect of the Emog and Bmog terms on galactic dynamics, compared to the dominant term $-\nabla\phi_{eff}$, is smaller by a factor of $(v/c)^{2}$. Assuming velocities of stars inside a galaxy of the order of $v=200$ km/s, we expect the post-Newtonian as well as the Emog and Bmog terms to be six orders of magnitude smaller than the effective-potential term. Since the observational accuracy is not high enough, we may ignore the contribution of these terms in studying galactic dynamics.
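The suppression factor quoted above is simple to verify:

```python
# Order-of-magnitude check: with stellar velocities v ~ 200 km/s, the Emog,
# Bmog, and post-Newtonian terms are suppressed relative to -grad(phi_eff)
# by (v/c)^2, i.e. by roughly six orders of magnitude.
v = 2.0e5          # m/s
c = 2.998e8        # m/s
suppression = (v / c) ** 2   # ~ 4e-7
```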
6 Conclusion
In this work, we presented the Hamiltonian formalism for the dynamics of particles in the MOG theory. The advantage of using the Hamiltonian formalism is that we can investigate the dynamics of the spatial coordinates of particles in terms of the physical time, both for massive and massless particles. For massive particles, in addition to the conventional terms in Moffat & Rahvar (2013), we derived the Emog and Bmog terms, which represent the time variation of the vector field and a Lorentz-like force in the dynamics of particles. These two terms are of the order of post-Newtonian corrections.
We also derived the equation of motion of massless particles in this theory, where it is shown that massless particles, unlike massive particles, do not couple to the vector field. So photons follow the geodesic equations given by the metric of space-time. Hence, taking into account that the gravitational constant of the theory ($G$) is almost one order of magnitude larger than the Newtonian constant $G_{N}$, the deflection angle for large scale structures such as galaxies and clusters of galaxies provides a stronger light deflection, which can be interpreted as dark matter.
For stellar mass lenses, this theory also predicts a stronger deflection angle, unless $G$ is treated as an extra field of the theory.
We also obtained the post-Newtonian approximation of the equation of motion in MOG. For a point mass object like the Sun in the solar system, the Emog and Bmog terms are zero and only the standard post-Newtonian terms contribute to the precession of Mercury’s perihelion. On Galactic scales, while the Emog and Bmog terms are non-zero, their contribution, like that of the post-Newtonian terms, is below the accuracy of current observations and can be ignored.
Acknowledgments
I would like to thank Shant Baghram for his useful comments. I would also like to thank the anonymous referee for comments that improved this work.
This research was supported by Sharif University of Technology’s Office of Vice
President for Research under Grant No. G950214.
Data Availability:
No new data were generated or analysed in support of this research.
References
Bertone
et al. (2005)
Bertone G., Hooper D., Silk J., 2005, Physics Report, 405, 279
Carr &
Kühnel (2020)
Carr B., Kühnel F., 2020, Annual Review of Nuclear and
Particle Science, 70, 355
Davari &
Rahvar (2020)
Davari Z., Rahvar S., 2020, Mon. Not. Roy. Astron. Soc., 496, 3502
Davari &
Rahvar (2021)
Davari Z., Rahvar S., 2021, Mon. Not. Roy. Astron. Soc., 507, 3387
Hawking (1971)
Hawking S., 1971, Mon. Not. Roy. Astron. Soc., 152, 75
Khalouei et al. (2021)
Khalouei E., Ghodsi H., Rahvar S., Abedi J., 2021, Phys.
Rev. D, 103, 084001
Khoury (2010)
Khoury J., 2010, arXiv e-prints, p. arXiv:1011.5909
Milgrom (1983)
Milgrom M., 1983, ApJ, 270, 365
Milsztajn (2002)
Milsztajn A., 2002, Space Science Reviews, 100, 103
Moffat (2006)
Moffat J. W., 2006, JCAP, 0603, 004
Moffat &
Rahvar (2013)
Moffat J. W., Rahvar S., 2013, Mon. Not. Roy. Astron. Soc., 436, 1439
Moffat &
Rahvar (2014)
Moffat J. W., Rahvar S., 2014, Mon. Not. Roy. Astron. Soc., 441, 3724
Moffat &
Toth (2009)
Moffat J. W., Toth V. T., 2009, Classical and Quantum Gravity, 26, 085002
Moffat
et al. (2018)
Moffat J. W., Rahvar S., Toth V. T., 2018, Galaxies, 6, 3
Overduin &
Wesson (2004)
Overduin J. M., Wesson P. S., 2004, Physics Report, 402, 267
Rahvar (2021)
Rahvar S., 2021, Mon. Not. Roy. Astron. Soc., 507, 914
Rahvar &
Moffat (2019)
Rahvar S., Moffat J. W., 2019, Mon. Not. Roy. Astron. Soc., 482, 4514
Rubin &
Ford (1970)
Rubin V. C., Ford Jr. W. K., 1970, Astrophys. J.,
159, 379
Skordis &
Złośnik (2021)
Skordis C., Złośnik T., 2021, Phys. Rev. Lett., 127, 161302
Tisserand
et al. (2007)
Tisserand P., et al., 2007, Astronomy and Astrophysics, 469, 387
Weinberg (1972)
Weinberg S., 1972, Gravitation and Cosmology: Principles and Applications
of the General Theory of Relativity
Wyrzykowski
et al. (2011)
Wyrzykowski L., et al., 2011, Mon. Not. Roy. Astron. Soc., 416, 2949
Zel’dovich &
Novikov (1966)
Zel’dovich Y. B., Novikov I. D., 1966, Astronomicheskii Zhurnal, 43, 758 |
KASCADE-Grande measurements of energy spectra for elemental groups of cosmic rays
W.D. Apel
J.C. Arteaga-Velázquez
K. Bekk
M. Bertaina
J. Blümer
H. Bozdog
I.M. Brancus
E. Cantoni111Present address: Istituto Nazionale di Ricerca Metrologica, Torino, Italy
A. Chiavassa
F. Cossavella222Present address: Max-Planck-Institut Physik, München, Germany
K. Daumiller
V. de Souza
F. Di Pierro
P. Doll
R. Engel
J. Engler
M. Finger
B. Fuchs
D. Fuhrmann
[email protected]
H.J. Gils
R. Glasstetter
C. Grupen
A. Haungs
D. Heck
J.R. Hörandel
D. Huber
T. Huege
K.-H. Kampert
D. Kang
H.O. Klages
K. Link
P. Łuczak
M. Ludwig
H.J. Mathes
H.J. Mayer
M. Melissas
J. Milke
B. Mitrica
C. Morello
J. Oehlschläger
S. Ostapchenko333Present address: University of Trondheim, Norway
N. Palmieri
M. Petcu
T. Pierog
H. Rebel
M. Roth
H. Schieler
S. Schoo
F.G. Schröder
O. Sima
G. Toma
G.C. Trinchero
H. Ulrich
A. Weindl
J. Wochele
M. Wommer
J. Zabierowski
Institut für Kernphysik, KIT - Karlsruher Institut für Technologie, Germany
Universidad Michoacana, Instituto de Física y Matemáticas, Morelia, Mexico
Dipartimento di Fisica, Università degli Studi di Torino, Italy
Institut für Experimentelle Kernphysik, KIT - Karlsruher Institut für Technologie, Germany
National Institute of Physics and Nuclear Engineering, Bucharest, Romania
Osservatorio Astrofisico di Torino, INAF Torino, Italy
Universidade São Paulo, Instituto de Física de São Carlos, Brasil
Fachbereich Physik, Universität Wuppertal, Germany
Department of Physics, Siegen University, Germany
Dept. of Astrophysics, Radboud University Nijmegen, The Netherlands
National Centre for Nuclear Research, Department of Cosmic Ray Physics, Lodz, Poland
Department of Physics, University of Bucharest, Bucharest, Romania
Abstract
The KASCADE-Grande air shower experiment Apel and et al. (KASCADE-Grande
collaboration) (2010) consists of, among others, a large scintillator array for measurements of charged particles, $N_{\mathrm{ch}}$, and of an array of shielded scintillation counters used for muon counting, $N_{\upmu}$.
KASCADE-Grande is optimized for cosmic ray measurements in the energy range 10 PeV to about 2000 PeV, where exploring the composition is of fundamental importance for understanding the transition from galactic to extragalactic origin of cosmic rays.
Following earlier studies of the all-particle and the elemental spectra reconstructed in the knee energy range from KASCADE data Antoni and et al. (KASCADE
collaboration) (2005), we have now extended these measurements to beyond 200 PeV.
By analysing the two-dimensional shower size spectrum $N_{\mathrm{ch}}$ vs. $N_{\upmu}$ for nearly vertical events, we reconstruct the energy spectra of different mass groups by means of unfolding methods over an energy range where the detector is fully efficient. The procedure and its results, which are derived based on the hadronic interaction model QGSJET-II-02 and which yield a strong indication for a dominance of heavy mass groups in the covered energy range and for a knee-like structure in the iron spectrum at around 80 PeV, are presented.
This confirms and further refines the results obtained by other analyses of KASCADE-Grande data, which already gave evidence for a knee-like structure in the heavy component of cosmic rays at about 80 PeV Apel and et al. (KASCADE-Grande
collaboration) (2011).
keywords:
High-energy cosmic rays (HECR), KASCADE-Grande experiment, Extensive air showers (EAS), Cosmic ray energy spectrum and composition
††journal: Astroparticle Physics
1 Introduction
The spectrum of cosmic rays follows roughly a power law behaviour ($\propto E^{\gamma}$, with $\gamma\approx-2.7\ldots-3.3$) over many orders of magnitude in energy, overall appearing rather featureless. However, a few structures are observable. In 1958, Kulikov and Khristiansen Kulikov and Khristiansen (1959) discovered a distinct steepening in the electron size spectrum measured for extensive air showers initiated by cosmic rays, corresponding to a change of the power law slope of the all-particle energy spectrum at a few PeV. Three years later, Peters Peters (1961) concluded that the position of this kink, also called the “knee” of the cosmic ray spectrum, will depend on the atomic number of the cosmic ray particles if their acceleration is correlated with magnetic fields. This would mean that the spectra of lighter and heavier cosmic ray mass groups exhibit knee structures at successively higher energies. About half a century later, EAS-TOP observations Aglietta et al. (2004a, b) and, in a more detailed analysis, the KASCADE experiment Antoni and et al. (KASCADE
collaboration) (2003, 2005) showed that the change of spectral index detected by Kulikov and Khristiansen could be caused by a decrease of the so far quantitatively dominating light component of cosmic rays. More precisely, the KASCADE results Antoni and et al. (KASCADE
collaboration) (2005) have proved that the knee in the all-particle spectrum at about 5 PeV corresponds to a decrease of flux observed for light cosmic ray primaries, only. This result was achieved by means of an unfolding analysis disentangling the convoluted energy spectra of five mass groups from the measured two-dimensional shower size distribution of electrons and muons at observation level.
There are numerous theories about details of the origin, acceleration, and propagation of cosmic rays. Concerning the knee positions of individual primaries, some of the models predict, in contrast to the magnetic rigidity dependence considered by Peters Peters (1961), a correlation with the mass of the particles (e.g. cannonball model Dar and De Rújula (2008)). Hence, it is of great interest to verify whether also the spectra of heavy cosmic ray mass groups exhibit analogous structures and if so, at what energies. The KASCADE-Grande experiment Apel and et al. (KASCADE-Grande
collaboration) (2010), located at Karlsruhe Institute of Technology (KIT), Germany, extends the accessible energy range of KASCADE up to around 2000 PeV, thereby allowing the investigation of cosmic ray energy spectra and composition in the region where the iron knee is expected.
The determination of the energy where the iron knee occurs enables the validation of the various theoretical models. Following this purpose, the KASCADE-Grande measurements have been analysed with straightforward but robust analysis methods yielding an evidence for a steepening in the cosmic ray all-particle spectrum at about 80 PeV Apel and et al. (KASCADE-Grande
collaboration) (2012), which corresponds to a knee-like structure in the heavy component of cosmic rays at about this energy Apel and et al. (KASCADE-Grande
collaboration) (2011). In order to verify and to refine the obtained results, an unfolding technique has been used similar to the one applied to KASCADE data Antoni and et al. (KASCADE
collaboration) (2005); Apel and et al. (KASCADE
collaboration) (2009), but now for the KASCADE-Grande energy range and based on the interaction models QGSJET-II-02 (Ostapchenko, 2006a, b) and FLUKA 2002.4 (Battistoni et al., 2007; Fassò et al., 2001, 2005). The unfolding method used will be outlined, and the results, which yield a strong indication for a dominance of the cosmic ray all-particle spectrum by heavy mass groups in the observed energy range and for a knee in the iron spectrum at about 80 PeV, will be presented in this publication. A more detailed description of the unfolding analysis can be found in Fuhrmann (2012).
2 Outline of the analysis
2.1 Data
The KASCADE-Grande experiment444Located at Karlsruhe Institute of Technology (KIT), $49.1^{\circ}$ N, $8.4^{\circ}$ E. Observation level 110 m a.s.l., corresponding to an average atmospheric depth of $1022\>\mathrm{g}/\mathrm{cm}^{2}$. measures air showers initiated by primary cosmic rays in the energy range555In this work, the upper energy is limited to about 200 PeV since data statistics are too small in the used sample of vertical showers at higher energies. 10 PeV to about 2000 PeV. It consists of a large scintillator array for measurements of charged particles, $N_{\mathrm{ch}}$, and of an array of shielded scintillation counters used for muon counting, $N_{\upmu}$, with a resolution of $\lesssim 15\%$ and $\lesssim 20\%$, respectively. A comprehensive description of the experiment, the data acquisition and the event reconstruction, as well as the achieved experimental resolutions is given in Apel and et al. (KASCADE-Grande
collaboration) (2010, 2012); Fuhrmann (2012).
In Fig. 1, the two-dimensional shower size spectrum, number of charged particles $\log_{10}(N_{\mathrm{ch}}^{\mathrm{rec}})$ vs. number of muons $\log_{10}(N_{\upmu}^{\mathrm{rec}})$, measured with KASCADE-Grande and used as the basis for this analysis, is depicted.
Only events with shower sizes for which the experiment is fully efficient are considered, i.e. $\log_{10}(N_{\mathrm{ch}}^{\mathrm{rec}})\gtrsim 6.0$ and $\log_{10}(N_{\upmu}^{\mathrm{rec}})\gtrsim 5.0$. In order to avoid effects due to the varying attenuation of the shower sizes for different angles of incidence, the data set used is restricted to showers with zenith angles $\theta\leq 18^{\circ}$. Furthermore, a couple of quality cuts are applied (cf. Apel and et al. (KASCADE-Grande
collaboration) (2010, 2012); Fuhrmann (2012)). Finally, the measurement time covers approximately $1\,318$ days, resulting in about $78\,000$ air shower events that passed all quality cuts and contribute to Fig. 1, yielding an exposure of $164\,709$ m${}^{2}$ sr yr.
2.2 Analysis
The analysis’ objective is to compute the primary energy spectra of $N_{\mathrm{nucl}}=5$ cosmic ray mass groups666This number of considered primaries has been found to yield a good compromise between the minimum number of primaries needed to describe the measured data sufficiently well, and the dispersion effects due to the limited resolution of the shower sizes (cf. Section 3 for details)., represented by protons (p), as well as helium (He), carbon (C), silicon (Si), and iron (Fe) nuclei.
The convolution of these sought-after differential fluxes $\mathrm{d}J_{n}/\mathrm{d}\,\mathrm{log}_{10}E$ of the primary cosmic ray nuclei $n$, with $n=1\ldots N_{\mathrm{nucl}}$, having an energy $E$ into the measured number of showers $N_{i}$ contributing to the content of the specific charged particle and muon number bin $\left(\log_{10}(N_{\mathrm{ch}}^{\mathrm{rec}}),\log_{10}(N_{\upmu}^{\mathrm{rec}})\right)_{i}$ in Fig. 1 can be described by an integral equation:
$$\begin{split}N_{i}&=2\uppi A_{\mathrm{f}}T_{\mathrm{m}}\sum_{n=1}^{N_{\mathrm{nucl}}}\intop_{0^{\circ}}^{18^{\circ}}\intop_{-\infty}^{+\infty}\frac{\mathrm{d}J_{n}}{\mathrm{d}\,\mathrm{log}_{10}E}\;p_{n}\;\sin\theta\,\cos\theta\;\mathrm{d}\,\mathrm{log}_{10}E\;\mathrm{d}\theta\;,\\ &\mathrm{with}\;\;\;p_{n}=p_{n}\left(\left(\mathrm{log}_{10}N_{\mathrm{ch}}^{\mathrm{rec}},\mathrm{log}_{10}N_{\upmu}^{\mathrm{rec}}\right)_{i}\ |\ \mathrm{log}_{10}E\right)\;.\end{split}$$
(1)
The sampling area $A_{\mathrm{f}}$ and the measurement time $T_{\mathrm{m}}$ are constants. The factor $2\uppi$ accounts for the integration over the azimuth angle, of which the data do not show any significant dependence. Hence, the integration over the whole solid angle has been reduced to one over the zenith angle range $0^{\circ}\leq\theta\leq 18^{\circ}$.
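As a quick numerical sanity check (ours, not from the cited analysis), the zenith-angle weight in Eq. (1) has the closed form $2\uppi\intop_{0^{\circ}}^{18^{\circ}}\sin\theta\cos\theta\,\mathrm{d}\theta=\uppi\sin^{2}(18^{\circ})\approx 0.30$ sr, which a simple midpoint rule reproduces:

```python
import math

theta_max = math.radians(18.0)

# Closed form of the effective solid-angle weight appearing in Eq. (1).
omega_exact = math.pi * math.sin(theta_max) ** 2

# Same integral evaluated numerically with a midpoint rule over theta.
n = 100_000
h = theta_max / n
omega_num = 2.0 * math.pi * h * sum(
    math.sin((i + 0.5) * h) * math.cos((i + 0.5) * h) for i in range(n)
)
```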
The conditional probabilities $p_{n}$ are originating from a convolution merging the intrinsic shower fluctuations $s_{n}$, the trigger and reconstruction efficiency $\varepsilon_{n}$, as well as the reconstruction resolution and systematic reconstruction effects $r_{n}$:
$$\begin{split}p_{n}\left(\left(\mathrm{log}_{10}N_{\mathrm{ch}}^{\mathrm{rec}},\mathrm{log}_{10}N_{\upmu}^{\mathrm{rec}}\right)_{i}\ |\ \mathrm{log}_{10}E\right)=\\ \intop_{-\infty}^{+\infty}\intop_{-\infty}^{+\infty}s_{n}\ \varepsilon_{n}\ r_{n}\ \mathrm{d}\,\mathrm{log}_{10}N_{\mathrm{ch}}^{\mathrm{tru}}\ \mathrm{d}\,\mathrm{log}_{10}N_{\upmu}^{\mathrm{tru}}\;,\end{split}$$
(2)
$$\begin{split}\mathrm{with}\;\;\;s_{n}&=s_{n}\left(\mathrm{log}_{10}N_{\mathrm{ch}}^{\mathrm{tru}},\mathrm{log}_{10}N_{\upmu}^{\mathrm{tru}}\ |\ \mathrm{log}_{10}E\right)\;,\\ \varepsilon_{n}&=\varepsilon_{n}\left(\mathrm{log}_{10}N_{\mathrm{ch}}^{\mathrm{tru}},\mathrm{log}_{10}N_{\upmu}^{\mathrm{tru}}\right)\;,\\ r_{n}&=r_{n}\left(\mathrm{log}_{10}N_{\mathrm{ch}}^{\mathrm{rec}},\mathrm{log}_{10}N_{\upmu}^{\mathrm{rec}}\ |\ \mathrm{log}_{10}N_{\mathrm{ch}}^{\mathrm{tru}},\mathrm{log}_{10}N_{\upmu}^{\mathrm{tru}}\right)\;.\end{split}$$
(3)
More precisely, $s_{n}$ is the probability that a nucleus $n$, having an energy $E$, induces an air shower containing a specific number of charged particles $\mathrm{log}_{10}N_{\mathrm{ch}}^{\mathrm{tru}}$ and muons $\mathrm{log}_{10}N_{\upmu}^{\mathrm{tru}}$ when arriving at the detection plane. The probability to reconstruct, due to the resolution and possible systematic reconstruction uncertainties, a certain number of charged particles $\mathrm{log}_{10}N_{\mathrm{ch}}^{\mathrm{rec}}$ and muons $\mathrm{log}_{10}N_{\upmu}^{\mathrm{rec}}$, instead of the true ones $\mathrm{log}_{10}N_{\mathrm{ch}}^{\mathrm{tru}}$ and $\mathrm{log}_{10}N_{\upmu}^{\mathrm{tru}}$, is described by $r_{n}$.
Equation (1) can mathematically be understood as a system of coupled integral equations and is classified as a Fredholm integral equation of the first kind. In a straightforward way, this equation can be reformulated in terms of a matrix equation (cf. Fuhrmann (2012) for the comprehensive calculation):
$$\overrightarrow{Y}=\bm{R}\overrightarrow{X}\;\;\;,$$
(4)
with the data vector $\overrightarrow{Y}$, whose elements $y_{i}$ are the cell contents $N_{i}$ in Eq.(1). The elements $x_{i}$ of the vector $\overrightarrow{X}$ represent the values of the sought-after differential energy spectra $\mathrm{d}J_{n}/\mathrm{d}\,\mathrm{log}_{10}E$, for all considered primaries $n$ consecutively. The conditional probabilities $p_{n}$ are included in the so-called transfer or response matrix $\bm{R}$, which relates the primary energy spectra to the measured shower sizes.
There are various methods to solve such an equation, albeit resolvability often does not per se imply uniqueness. It was found that the unfolding algorithm of Gold Gold (1964) yields appropriate and robust solutions. It is an iterative procedure and de facto related to the minimization of a chi-square function. For countercheck purposes, all results are validated in Fuhrmann (2012) by means of two additional algorithms: one is an iterative method applying Bayes’ theorem D’Agostini (1995), which also exhibits high stability, and the other is a regularized unfolding based on a combination of the least-squares method with the principle of reduced cross-entropy Schmelling (1994), which yields slightly poorer results. For more details about the algorithms used, cf. Fuhrmann (2012); Antoni and et al. (KASCADE collaboration) (2005).
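The Gold algorithm amounts to a simple multiplicative update. The following is an illustrative toy sketch (a $3\times 3$ response invented for demonstration, not the actual KASCADE-Grande matrix or the cited implementation):

```python
import numpy as np

def gold_unfold(R, y, n_iter=5000):
    """Gold's iterative unfolding: x_{k+1,i} = x_{k,i} (R^T y)_i / (R^T R x_k)_i.
    The multiplicative update keeps the solution non-negative throughout."""
    RtR, Rty = R.T @ R, R.T @ y
    x = np.full(R.shape[1], y.sum() / R.shape[1])   # flat, positive start
    for _ in range(n_iter):
        x = x * Rty / (RtR @ x)
    return x

# Toy smearing response and a known "true" spectrum to fold and unfold.
R = np.array([[0.7, 0.2, 0.0],
              [0.3, 0.6, 0.3],
              [0.0, 0.2, 0.7]])
x_true = np.array([100.0, 50.0, 20.0])
x_hat = gold_unfold(R, R @ x_true)
```

On this well-conditioned toy problem the iteration recovers the input spectrum; on a nearly singular response, as discussed below, the conditioning of the matrix becomes the limiting factor.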
All these solution strategies have in common that the response matrix $\bm{R}$, and therefore the response function $p_{n}$, have to be known a priori.
3 Determination of the response matrix
The calculation of the matrix elements of the response matrix $\bm{R}$, i.e. the determination of the quantities $s_{n}$, $\varepsilon_{n}$, and $r_{n}$ of Eq.(2), is realized with Monte Carlo simulations. The simulated distributions are parametrized in order to simplify the mathematical integrations, as well as to apply some kind of smoothing necessary due to the limited Monte Carlo statistics. Furthermore, a conditioning is applied to the response matrix. Even though the considered number of primary cosmic ray mass groups is restricted to only five (represented by protons (p), as well as helium (He), carbon (C), silicon (Si), and iron (Fe) nuclei), their simulated distributions for the intrinsic shower fluctuations already overlap to a large extent (cf. Fig. 2, where even the distributions of protons and of iron nuclei are overlapping). This is again worsened by the additional smearing due to the reconstruction resolution. Therefore, the response matrix is almost singular, and hence Eq.(4) states an ill-conditioned problem so that a straightforward solution by a matrix inversion would yield meaningless results. The applied unfolding algorithms, however, allow reliable solutions under the premise that the matrix equation exhibits a minimum level of stability, i.e. given the case that the response matrix is sufficiently conditioned. The so-called condition number of a response matrix is given by the ratio of the largest to the smallest singular value of the matrix777More precisely, since the response matrix is not invertible in this case, a singular value decomposition is performed to compute the pseudoinverse, and, finally, the condition number (see Fuhrmann (2012)).. To ensure the statistical significance of the solution, the condition number should not exceed888The optimal maximum value was determined using test spectra and trying different values. 
$10^{7}$ in our case, requiring that no more than five primary nuclei are taken into account, and that only probabilities $p_{n}$ larger than $10^{-4}$ contribute to the response matrix. On the other hand, investigations based on a Kolmogorov-Smirnov and a chi-square test (analogous to the tests presented in Section 7.1) have shown that at least five primary mass groups are needed in order to describe the measured data sufficiently. Hence, five primary nuclei will be considered in this work (see Fuhrmann (2012) for further details).
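A minimal sketch of such a condition-number check (toy matrices of our own; the cited analysis additionally truncates near-zero singular values when forming the pseudoinverse):

```python
import numpy as np

def condition_number(R):
    """Ratio of the largest to the smallest non-zero singular value of R."""
    s = np.linalg.svd(R, compute_uv=False)
    s = s[s > 0.0]
    return s[0] / s[-1]

# A well-conditioned response versus one with two nearly parallel columns
# (mimicking strongly overlapping shower-size distributions of two primaries).
R_good = np.eye(3)
R_bad = np.array([[1.0, 1.0],
                  [1.0, 1.0 + 1e-9]])
```

With the criterion above, `R_bad` (condition number of order $10^{9}$) would be rejected, while `R_good` poses no problem.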
For the parametrization of the intrinsic shower fluctuations, the development of air showers is simulated by means of CORSIKA Heck et al. (1998), version 6.307, based on the interaction models QGSJET-II-02 (Ostapchenko, 2006a, b) and FLUKA 2002.4 (Battistoni et al., 2007; Fassò et al., 2001, 2005). For each of the five primaries separately, the two-dimensional shower size distribution is simulated and parametrized for thinned999In order to get sufficient simulation statistics within a reasonable computing time, the thinning option (Heck and Knapp, 1998) of CORSIKA was enabled. The selected thinning level of $10^{-6}$ means that in interactions where particles with energies less than $10^{-6}$ of the primary particle energy are generated, only one particle is kept and assigned a weight that accounts for the energy of the neglected particles. It was found that this does not have any significant impact on the shower size distributions used, and hence on the analysis, as detailed investigations in Antoni and et al. (KASCADE
collaboration) (2005) have proved. air showers with cores distributed uniformly over an area slightly larger than the KASCADE-Grande detector field, and with isotropically distributed zenith angles $\leq 18^{\circ}$. The CORSIKA simulations are mono-energetic in order to get sufficiently large statistics for the parametrizations, and to avoid an a priori assumption of a specific index of the power law spectrum of cosmic rays. In order to enhance the quality, instead of fitting the correlated two-dimensional $\mathrm{log}_{10}N_{\mathrm{ch}}^{\mathrm{tru}}$-$\mathrm{log}_{10}N_{\upmu}^{\mathrm{tru}}$-distribution immediately, it is done in two steps. Firstly, the distribution of charged particles is parametrized (cf. example in Fig. 2, left panel) using an appropriate one-dimensional fit function, thereafter the one of muons (cf. example in Fig. 2, right panel).
In the latter case, the two-dimensional fit function describing the $\mathrm{log}_{10}N_{\mathrm{ch}}^{\mathrm{tru}}$-$\mathrm{log}_{10}N_{\upmu}^{\mathrm{tru}}$-distribution is used. Since the parameters of the parametrization of the distribution of the charged particles are known from step one, the two-dimensional fit function can be transferred to a one-dimensional one by means of integration over the charged particle number, such that the remaining parameters describing the muon distribution part can be determined by fitting the one-dimensional muon number distributions. Thereby, the correlation between the number of charged particles and that of muons is considered in the fit procedure (this is explained in more detail in Fuhrmann (2012)). The parameters of the parametrizations determined at the discrete energies are finally interpolated in order to extrapolate the parametrization to a continuum. The simulated energies are: 2 PeV, 5 PeV, 10 PeV, $31.6$ PeV, 100 PeV, 316 PeV, 1000 PeV, and 3160 PeV; the numbers of simulated showers are: 6400, 4800, 3200, 2400, 1600, 1200, 800, and 400, respectively.
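The last step above, interpolating the parameters determined at the discrete simulated energies to a continuum, can be sketched as follows. This is a minimal illustration only: the discrete energies are the ones quoted in the text, but the parameter values (a hypothetical distribution width `sigma_fit`) are made up, and `np.interp` in $\log_{10}E$ is an assumption about the interpolation scheme, not the paper's actual procedure.

```python
import numpy as np

# Discrete simulated energies (PeV) quoted in the text, and a hypothetical
# parametrization parameter (e.g. a shower size distribution width)
# determined at each of them; the values are made up for illustration.
E_sim = np.array([2., 5., 10., 31.6, 100., 316., 1000., 3160.])
sigma_fit = np.array([0.22, 0.19, 0.17, 0.15, 0.135, 0.125, 0.118, 0.113])

def sigma_at(E):
    """Interpolate the fitted parameter in log10(E) to obtain a
    continuous parametrization between the discrete simulated energies."""
    return np.interp(np.log10(E), np.log10(E_sim), sigma_fit)

# Parameter at 50 PeV, between the simulated points at 31.6 and 100 PeV.
val = sigma_at(50.)
```

Interpolating in $\log_{10}E$ rather than in $E$ keeps the eight simulated points roughly equidistant, which is the natural choice for a power-law-like spectrum.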
The efficiency as well as the resolution and systematic uncertainties for the five primaries are simulated using CRES (Cosmic Ray Event Simulation, a program package developed for the detector simulation of KASCADE (Antoni et al., KASCADE collaboration, 2003)), which is based on the GEANT 3.21 detector description and simulation tool (Brun et al., 1987; Giani et al., 1994). A second set of unthinned air showers simulated with CORSIKA (again based on QGSJET-II-02 and FLUKA 2002.4) serves as input for CRES (CRES handles only unthinned showers). Unlike in case of the intrinsic shower fluctuations, where mono-energetic simulations are used, now a continuous energy spectrum following a power law with differential index $-2$ and comprising energies from 0.1 PeV to 3160 PeV is assumed. This spectrum is roughly one order of magnitude harder than the one actually measured, but represents a compromise between sufficient statistics at the highest energies and computing time. Later, the simulated spectrum is reweighted to one with index $-3$. It was found that the obtained parametrizations do not differ significantly if, alternatively, indices of $-2.7$ or $-3.3$ are assumed, so that the exact value of the index is of minor importance. With about $353\,000$ simulated events per primary, the statistics of the simulations are roughly comparable to those of the measured data sample before applying quality cuts.
The combined trigger and reconstruction efficiency, simply called “efficiency”, and its parametrization are depicted in Fig. 3. Full efficiency of the experiment is given for $\log_{10}(N_{\mathrm{ch}}^{\mathrm{tru}})\geq 6.0$ and $\log_{10}(N_{\upmu}^{\mathrm{tru}})\geq 5.0$. Since in this analysis only measured air showers with shower sizes beyond the threshold of full efficiency are considered, the goodness of the parametrization of the efficiency is of minor importance. (However, for the computation of the response matrix the parametrization of the efficiency is necessary, since shower sizes below the threshold of full efficiency are included in order to account for possible migration effects caused by the intrinsic shower fluctuations.)
In order to parametrize the dependence of the resolution of the experiment on the true sizes, a possible bias in the reconstruction of the charged particle and muon numbers must first be corrected by using appropriate correction functions $C_{\mathrm{ch}}^{\mathrm{bias}}$ and $C_{\upmu}^{\mathrm{bias}}$, respectively, determined based on the simulations. The correction is typically of the order of less than $10\%$. The distributions of the remaining deviations between the reconstructed (and bias corrected) and true shower sizes are depicted in Fig. 4 for the charged particle number (left panel) and for the muon number (right panel), in case of discrete exemplary true shower size intervals (corresponding to about 30 PeV to 40 PeV primary energy). Since the resolution does not differ significantly between different primaries, the simulations for the five primary particles can be combined into a mixed-composition set serving for the parametrization, in order to increase statistics.
In Fig. 5, the measured shower size plane is compared to the probabilities given by the final response matrix, taking into account the entire set of parametrizations, i.e. that of the intrinsic shower fluctuations as well as that of the properties of the experiment. Shown are some isolines representing the cells $\left(\log_{10}(N_{\mathrm{ch}}^{\mathrm{rec}}),\log_{10}(N_{\upmu}^{\mathrm{rec}})\right)_{i}$ of the data plane with constant probability (from the inner to the outermost isoline: $0.1$, $0.05$, and $10^{-4}$ probability density). (In case of smaller energies, the widths of the probability distributions are so large that there are no individual probabilities larger than 0.1 or even 0.05, such that the inner isolines are missing in these cases.) For reasons of clarity, only the results for two exemplary primaries are illustrated: protons and iron nuclei. The isolines which correspond to the $\log_{10}(N_{\mathrm{ch}}^{\mathrm{rec}})$-$\log_{10}(N_{\upmu}^{\mathrm{rec}})$ combinations with a probability of $10^{-4}$ represent the smallest probability value still considered in the response matrix after its conditioning. As can be seen, these outer isolines cover almost all measured data; hence, the minimal probability is not set too large.
4 Error propagation
The determination of the elemental energy spectra will be subjected to the influences of different error sources. They can roughly be classified into four categories (cf. Fuhrmann (2012) for details):
1.
Statistical uncertainties due to the limited measurement time:
Due to the limited exposure, the measured data sample will suffer from unavoidable statistical uncertainties, which are expected to be Poisson distributed. These uncertainties will be propagated through the applied unfolding algorithm and are usually amplified thereby. The statistical uncertainties can be determined by means of a frequentist approach: the measured two-dimensional shower size plane is considered as a probability distribution. Based on a random generator, a number of artificial data sets are generated, which are unfolded individually. The spread of the solutions represents a good estimate for the statistical uncertainty due to the limited measurement time.
2.
Systematic bias induced by the unfolding method:
In the context of the convergence properties of the iterative unfolding algorithms, small numbers of iteration steps will on the one hand reduce the amplification of the statistical uncertainties of the data sample, but on the other hand will result in a solution that deviates from the exact one. The situation is similar for the regularized techniques, since the regularization damps oscillations but, conversely, results in a biased solution. In this work, the number of iteration steps, respectively the regularization parameter, is chosen such that an optimal balance between the statistical uncertainties and the systematic bias is achieved. The bias can be estimated based on the principle of the bootstrap methods: the measured two-dimensional shower size plane is unfolded with a certain number of iteration steps. Based on the derived solution and using a random generator, with the response matrix contributing the respective probability distribution, a number of toy data sets can be generated. Unfolding them and comparing the solutions to the original solution yields an estimate for the mean bias induced by the unfolding algorithm for this specific number of iteration steps.
3.
Systematic uncertainties due to the limited Monte Carlo statistics:
Due to limited computing time, only Monte Carlo simulation sets with limited statistics can be generated resulting in an uncertainty in the determination of the response matrix. Furthermore, the conditioning that was applied to the response matrix has systematic impacts, which are, however, small and can be neglected in this work. The systematic uncertainties of the response matrix will finally affect the unfolded solution. The influence of the limited Monte Carlo statistics can be examined by generating further sets of response matrices used for unfolding the measured data set. First, the parameters of the parametrizations can be varied within their statistical precision. However, the effect was comparatively small, meaning that the simulation statistics are basically large enough. Second, the tails of the parametrizations can be varied within the statistical accuracy of the simulated distributions. While from a pure statistical point of view the quality of the fits does not change, from a physical perspective the exact knowledge of the tails is of high importance in order to account for the bin-to-bin migration effects in combination with the steeply falling spectrum of cosmic rays. By varying the parametrizations as extensively as possible given the statistical accuracy, at least a maximal range of systematic uncertainty caused by the limited Monte Carlo statistics can be estimated.
4.
Systematic uncertainties due to the systematic uncertainty in the Monte Carlo simulations:
The Monte Carlo simulations used to compute the response matrix are based on the high energy interaction model QGSJET-II-02 (Ostapchenko, 2006a, b) and the low energy interaction model FLUKA 2002.4 (Battistoni et al., 2007; Fassò et al., 2001, 2005). d’Enterria et al. (2011) compared the first Large Hadron Collider (LHC) data with the predictions of various Monte Carlo event generators, including e.g. the models QGSJET 01 (Kalmykov and Ostapchenko, 1993), QGSJET-II (Ostapchenko, 2006a), SIBYLL 2.1 (Ahn et al., 2009), and EPOS 1.99 (Werner, 2008). They concluded that there is basically a reasonable overall agreement, but they also stated that none of the investigated models can consistently describe all observables measured at the LHC. Nevertheless, it was found that the model QGSJET-II-02 yields results that agree with the data measured with KASCADE-Grande; hence it can be expected that the result is not far off the truth (cf. Section 7.1). A possible deficient description of the contributing physical processes would result in systematic errors in the response matrix, finally leading to a wrong result of the deconvolution. These uncertainties are difficult to quantify, as all models can fail if new physics appears in this energy range. However, in Antoni et al. (KASCADE collaboration, 2005) and Apel et al. (KASCADE-Grande collaboration, 2012) it was shown that the high energy interaction model affects primarily the relative abundances of the mass groups and the absolute scale of the energy assignment, while specific structures in the spectra are conserved. For example, with EPOS 1.99 the energy assigned to an individual event is approximately 10% lower than interpreted with QGSJET-II-02, while for SIBYLL 2.1 the energy is 10% higher (Apel et al., KASCADE-Grande collaboration, 2012, 5008). In addition, it is known that the low energy interaction model has less influence on the final result, as the analyses based on the KASCADE measurements have already proved (Apel et al., KASCADE collaboration, 2009).
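The first two error categories above lend themselves to a compact numerical sketch. The fragment below is a toy illustration only: the response matrix `R`, the data vector `y`, and the Gold-type `unfold` stand-in are assumptions for demonstration, not the actual KASCADE-Grande implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy response matrix (data cells x energy bins) and "measured" data vector.
R = np.array([[0.7, 0.2, 0.0],
              [0.3, 0.6, 0.2],
              [0.0, 0.2, 0.8]])
y = np.array([1000., 800., 300.])

def unfold(y, R, n_iter=20):
    # Gold-type multiplicative iteration as a stand-in unfolding; the small
    # number of steps mimics the bias/variance trade-off discussed above.
    x = np.full(R.shape[1], y.sum() / R.shape[1])
    for _ in range(n_iter):
        x *= (R.T @ y) / (R.T @ (R @ x))
    return x

x_hat = unfold(y, R)

# Category 1 -- statistical uncertainty (frequentist approach): treat the
# measured plane as a probability distribution, draw Poisson-fluctuated
# pseudo-data sets, unfold each, and take the spread of the solutions.
pseudo = np.array([unfold(rng.poisson(y).astype(float), R) for _ in range(200)])
stat_err = pseudo.std(axis=0)

# Category 2 -- unfolding bias (bootstrap principle): generate toy data sets
# around the forward-folded solution, unfold them with the same number of
# iteration steps, and compare the mean toy solution with the original one.
toys = np.array([unfold(rng.poisson(R @ x_hat).astype(float), R) for _ in range(200)])
bias = toys.mean(axis=0) - x_hat
```

The same skeleton extends to category 3 by repeating the unfolding with response matrices varied within their Monte Carlo precision.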
5 Monte Carlo tests
Before the unfolding techniques are applied to the measured data, the whole procedure is tested with simulations. For that purpose, different models for the energy spectra of the elemental groups of cosmic rays can be assumed. Based on these models and using a random generator, a sample of elemental test energy spectra is generated. By means of the random generator, statistical fluctuations are introduced, equivalent to those a measurement suffers from due to the limited measurement time. Applying a random generator once more, this time using the entries of the response matrix as probability distributions, a toy data set (i.e. a two-dimensional shower size spectrum) can be generated from the elemental test energy spectra. The assumed exposure is typically chosen such that the number of entries in the generated two-dimensional toy shower size spectrum is comparable to that of the measurement. The artificial data sets are used to test the unfolding procedure.
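The two-step toy data generation described above can be sketched as follows. The three-bin test spectrum and the toy response matrix are made-up placeholders, not the actual parametrizations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed test spectrum: expected number of true events per energy bin for
# one primary (placeholder values; the paper uses fitted KASCADE spectra).
x_true = np.array([1200., 640., 215.])

# Hypothetical response matrix: column j is the probability distribution
# over the data cells for true bin j (as in the conditioned response matrix).
R = np.array([[0.7, 0.2, 0.0],
              [0.3, 0.6, 0.2],
              [0.0, 0.2, 0.8]])

# Step 1: statistical fluctuations of the "measurement" (limited exposure).
n_true = rng.poisson(x_true)

# Step 2: distribute each true event over the data cells, using the
# response-matrix columns as probability distributions.
y_toy = np.zeros(R.shape[0], dtype=int)
for j, n in enumerate(n_true):
    y_toy += rng.multinomial(n, R[:, j])
```

By construction, every generated true event lands in exactly one data cell, so the toy shower size spectrum conserves the total event count.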
We have performed such tests for a large number of toy energy spectra, generated under different model assumptions, including energy spectra with and without knee-like structures and with varying relative abundances of the primary mass groups. For example, test energy spectra with primaries of equal abundance were used in order to check whether or not the applied unfolding technique favours a specific cosmic ray mass group. Another approach was to consider test energy spectra following a single power law, in order to rule out that possible knee-like structures in the unfolded spectra are caused by the unfolding method itself. In all these test scenarios, equally good results for the unfolding technique were achieved. Within systematic uncertainties, the unfolding results were always compatible with the input spectra. Similar tests were performed by unfolding spectra built up from different numbers of primary mass groups. These tests led to the conclusion that the resolution of our measurements and the fluctuations in the data allow unfolding into five mass groups. Details of such tests are described in Antoni et al. (KASCADE collaboration, 2005) and Fuhrmann (2012).
In the following, the results based on test spectra that are expected to be close to reality will be discussed as an example. Their parameters are determined by fitting the energy spectra measured by the former KASCADE experiment (Antoni et al., KASCADE collaboration, 2005; Bindig, 2010). In Fig. 6,
the generated test spectra are depicted (open symbols) in comparison to the ones obtained by unfolding (filled symbols, based on Gold’s unfolding algorithm (Gold, 1964)) the toy shower size spectrum generated from these test spectra. The error bars are the statistical uncertainties due to the limited measurement time propagated through the unfolding algorithm, while the error bands represent the systematic bias caused by Gold’s unfolding algorithm. Since the response matrix is used for the generation of the toy data set as well as in the unfolding procedure itself, systematic uncertainties of the response matrix do not have any recognizable influence in these particular Monte Carlo tests, and are hence not included in the error bands. Overall, the unfolding method yields good results and successfully reproduces specific structures in the spectra. The artificial wobbling at low energies, especially in case of the spectrum of iron nuclei, can be explained as a systematic relic of the unfolding algorithm, and contributes to the systematic uncertainties. Furthermore, the procedure tends to overestimate the fluxes at higher energies due to the small number of true events, close to zero, in combination with the positive definiteness of Gold’s unfolding algorithm. However, the energy ranges suffering from this problem are explicitly tagged as unreliable by the estimated systematic and/or statistical uncertainties. At lower energies, larger systematic deviations between the original and the unfolded spectra can be observed for the heavier mass groups, represented by silicon and iron nuclei. They are caused by the different convergence rates of Gold’s algorithm below the threshold of full detection efficiency. (The detection efficiency combines the trigger and reconstruction efficiency, which is based on the true shower sizes, with the probability that an air shower contributes to the shower size plane serving as basis for our analysis, i.e. that it passes the cuts $\log_{10}(N_{\mathrm{ch}}^{\mathrm{rec}})\geq 6.0$ and $\log_{10}(N_{\upmu}^{\mathrm{rec}})\geq 5.0$, which are based on the reconstructed shower sizes.) These systematic effects can hardly be estimated in case of real data. Hence, the only possibility is to demand a sufficiently large detection probability, which is realized if the energy is larger than $\log_{10}(E/\mathrm{GeV})\approx 7.0$ for the lighter mass groups, and larger than $\log_{10}(E/\mathrm{GeV})\approx 7.2$ for silicon and iron. For energies below these thresholds, i.e. where the estimation of the systematic uncertainties is not comprehensive due to the missing uncertainty caused by the different convergence properties, no error bands are depicted in Fig. 6. In order to guarantee that the presented spectra are reliable within the given uncertainties, energy ranges below these limits will be omitted completely from the depictions in the main analysis, though they are considered mathematically within the unfolding process itself. Considering all these insights, the unfolding procedure can be applied successfully to the measured shower size spectrum.
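Gold’s algorithm, used throughout this analysis, admits a compact sketch. The update rule below is the standard form of Gold’s iterative deconvolution (Gold, 1964); the small invertible response matrix in the demo is an assumption for illustration, not data from the analysis.

```python
import numpy as np

def gold_unfold(y, R, n_iter=5000, x0=None):
    """Gold's iterative unfolding (Gold, 1964), in its standard form:
        x_{k+1, j} = x_{k, j} * (R^T y)_j / (R^T R x_k)_j .
    The multiplicative update keeps the solution positive; this is the
    'positive definiteness' blamed in the text for possible flux
    overestimation where the true event numbers are close to zero."""
    x = np.full(R.shape[1], y.sum() / R.shape[1]) if x0 is None else np.asarray(x0, float)
    Rty = R.T @ y
    RtR = R.T @ R
    for _ in range(n_iter):
        x = x * Rty / (RtR @ x)
    return x

# Demo on a small invertible toy response matrix: the iteration recovers
# the true spectrum from the exactly forward-folded data y = R @ x_true.
R = np.array([[0.7, 0.2, 0.0],
              [0.3, 0.6, 0.2],
              [0.0, 0.2, 0.8]])
x_true = np.array([1000., 500., 200.])
x_rec = gold_unfold(R @ x_true, R)  # x_rec approaches x_true as the iteration converges
```

In the real analysis the number of iterations is deliberately kept small to balance statistical amplification against bias, as discussed in Section 4.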
6 Results
In Fig. 7, the energy spectra for elemental groups of cosmic rays, determined by applying Gold’s unfolding algorithm (Gold, 1964) to the two-dimensional shower size distribution measured with KASCADE-Grande and shown in Fig. 1, are presented. For a better distinguishability, the spectra of the lighter mass groups, represented by protons as well as helium and carbon nuclei, and those of the heavier mass groups, represented by silicon and iron nuclei, are depicted separately. The error bars represent the statistical uncertainties due to the limited measurement time. The error bands, representing the maximal range of systematic uncertainties, include the bias induced by Gold’s algorithm as well as the uncertainties caused by the uncertainties in the response matrix due to the limited simulation statistics. Possible uncertainties of the interaction models used, i.e. of QGSJET-II-02 and FLUKA 2002.4, cannot be considered (cf. Section 4). As emphasized in Section 5, in the first energy bins, where full detection efficiency is not yet given, the unfolded silicon and iron spectra are subjected to larger systematic distortions. Hence, these data points are not shown. Due to the correlation among the elemental spectra, these distortions cancel out almost completely when computing the sum spectrum, such that it can be shown already one energy bin earlier. The intensity values of the energy spectra are listed in Appendix B.
In the framework of the interaction models used, the cosmic ray composition is dominated by the heavy mass groups in the observed energy range. The spectra of the lighter primaries are rather structureless. There are slight indications for a recovery of protons at higher energies, which agrees with the finding in Apel et al. (KASCADE-Grande collaboration, 2013), where a significant hardening in the cosmic ray spectrum of light primaries was observed. However, this is without statistical significance in this work. In case of the iron spectrum, a distinct knee-like steepening is observed at about 80 PeV. It was verified that the spectrum of iron is not compatible with a single power law: a single power law fit results in a chi-square probability of less than 1% ($\chi^{2}/ndf=18.9/7$). The all-particle spectrum is without significant structures. But one has to keep in mind that the spectra of the lighter primaries suffer from larger uncertainties at higher energies, especially due to a possible strong overestimation of the fluxes caused by the positive definiteness of Gold’s unfolding algorithm. Such overestimations would yield an overestimation of the sum flux at these energies, such that a possible knee structure in the all-particle spectrum at about 80 PeV to 100 PeV might be masked by this effect.
7 Discussion of the results
7.1 Quality of data description
The quality of the unfolding solution itself cannot be judged directly, since the truth is not known. However, whether the determined energy spectra are by and large reliable can be checked indirectly by reviewing the quality of the data description by the solution. For this, the two-dimensional shower size spectrum measured with KASCADE-Grande and represented by the data vector $\overrightarrow{Y}$ can be compared to the data vector predicted by the solution vector $\overrightarrow{X}$, i.e. to the vector given by the forward folding $\bm{R}\overrightarrow{X}$ of the solution according to Eq. (4). (One has to keep in mind that a forward folding is mathematically an exact process, whereas the inversion, i.e. the unfolding procedure based on the algorithms used, is not bias free.) If the interaction models used for the simulations are proper and if the solution is not very far from the truth, the measured data and the ones “simulated” by the forward folding should be in agreement.
Firstly, a Kolmogorov-Smirnov test was applied to compare the measured two-dimensional shower size distribution with the one predicted by the solution, yielding a high probability of 97% for compatibility. It has to be emphasized that small deviations are possible, since the measured data sample suffers from fluctuations due to the limited measurement time, while the forward folded one can be less fluctuating, since a forward folding applies some kind of smoothing.
Additionally, a chi-square test is performed to compare the two-dimensional distributions:
$$\chi^{2}=\frac{1}{M}\sum_{i=1}^{M}\frac{\left(\sum_{j=1}^{N}R_{ij}x_{j}-y_{i}\right)^{2}}{\sigma(y_{i})^{2}}:=\frac{1}{M}\sum_{i=1}^{M}\chi^{2}_{i}\;.$$
(5)
Thereby, $M$ and $N$ are the dimensions of $\overrightarrow{Y}$ and $\overrightarrow{X}$, respectively. The statistical errors $\sigma(y_{i})$ of the data sample $\overrightarrow{Y}$ are assumed to be Poissonian, and hence are set to $\sigma(y_{i})=\sqrt{y_{i}}$. The chi-square test results in a probability corresponding to full compatibility ($\chi^{2}/ndf=0.5$).
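Eq. (5) translates directly into code. The matrices and vectors below are small made-up placeholders; only the formula itself is taken from the text:

```python
import numpy as np

def chi2_per_cell(R, x, y):
    """Per-cell terms of Eq. (5):
        chi2_i = (sum_j R_ij x_j - y_i)^2 / sigma(y_i)^2 ,
    with Poissonian errors sigma(y_i) = sqrt(y_i)."""
    y = np.asarray(y, dtype=float)
    resid = R @ x - y          # forward folding minus measured data
    return resid**2 / y        # assumes y_i > 0 for the cells considered

# Hypothetical toy example (placeholder response matrix, solution, data).
R = np.array([[0.7, 0.2, 0.0],
              [0.3, 0.6, 0.2],
              [0.0, 0.2, 0.8]])
x = np.array([1000., 500., 200.])
y = np.array([760., 590., 340.])

chi2_i = chi2_per_cell(R, x, y)
chi2 = chi2_i.mean()           # the 1/M-normalized chi-square of Eq. (5)
```

Keeping the individual $\chi^{2}_{i}$ terms, rather than only their mean, is what makes the outlier inspection in Fig. 8 possible.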
For further investigations, the distribution of the deviations $\chi^{2}_{i}$, i.e. the $M$ summands of Eq. (5), is illustrated in Fig. 8. Overall, there seems to be a good agreement between the measurement and the prediction by the unfolded solution. Only a few cells exhibit larger $\chi^{2}_{i}$ values. To examine these outliers in more detail, one-dimensional slices of the measured and of the predicted shower size plane are compared. In Fig. 9, left panel, an exemplary slice along the x-axis of the two-dimensional shower size planes for the interval $6.14<\log_{10}(N_{\mathrm{ch}}^{\mathrm{rec}})<6.21$ is depicted. The markers represent the measured data sample, the dashed histogram the data set predicted by the forward folding of the previously unfolded solution. Additionally shown are the contributions of the considered primaries (smooth curves), determined by a forward folding of the respective elemental energy spectra into separate shower size spectra. For the examined slice there are larger deviations between measurement and prediction in the muon number bins $5.21<\log_{10}(N_{\upmu}^{\mathrm{rec}})<5.28$ and $5.49<\log_{10}(N_{\upmu}^{\mathrm{rec}})<5.56$ according to Fig. 8. Following Fig. 9, these deviations can be explained by single statistical excesses in the measured data sample (neglecting such individual statistical excesses, present only due to the limited measurement time, is actually the goal of a good unfolding algorithm), which are not present in the predicted data sample due to the smoothing property of the forward folding. There are no indications so far that the interaction model used (QGSJET-II-02) has problems. For the sake of completeness, slices along the y-axis are shown in Fig. 9, right panel, again exhibiting no incompatibilities.
To conclude, there are no indications so far that the interaction models used, i.e. QGSJET-II-02 and FLUKA 2002.4, have serious deficits in the description of the physics of hadronic interactions at these energies, which, however, does not necessarily mean that these models are accurate in all details. Different interaction models primarily have an impact on the absolute scale of energy and masses, such that model uncertainties can shift the unfolded spectra, possibly resulting in different abundances of the primaries, while specific structures, e.g. knee-like features of the spectra, are less affected by the models.
7.2 Comparison with other analyses or experiments
In Fig. 10,
the KASCADE-Grande energy spectra obtained in this work are compared with those obtained by other KASCADE-Grande analysis methods. The all-particle spectrum, which is the sum of all five unfolded elemental spectra, is compared with that shown in Apel et al. (KASCADE-Grande collaboration, 2012). Furthermore, some elemental spectra are compared, too: the electron-poor energy spectrum presented in Apel et al. (KASCADE-Grande collaboration, 2011) can roughly be compared to the sum of the elemental spectra of silicon and iron of this work (i.e. a heavy composition), and the electron-rich one to the sum of the spectra of protons, helium, and carbon (i.e. a light to intermediate composition). The sum spectra are compatible. However, in case of the elemental spectra the differences are larger, especially in the absolute flux values. The main reason is the technique used in Apel et al. (KASCADE-Grande collaboration, 2011) to divide the measured data sample into an electron-rich and an electron-poor subsample in a rather simple, but robust way. Changes to the separation parameter used affect the number of events assigned to a specific subsample, and hence affect the absolute normalization of the resulting elemental energy spectra. (In Apel et al. (KASCADE-Grande collaboration, 2011), the separation parameter is computed by averaging the results of simulations for carbon and silicon nuclei; hence, the transition between the two subsamples, electron-poor and electron-rich, takes place at a mass group between carbon and silicon.) Thus, the differences in the absolute flux values can be interpreted as different meanings of “light”, “intermediate”, and “heavy” composition in the two compared methods, and are not an indication of inconsistencies. Despite this problem, both the electron-poor spectrum of Apel et al. (KASCADE-Grande collaboration, 2011) and the heavy spectrum of this work exhibit a knee-like steepening at about 80 PeV. Furthermore, both methods give slight indications for a recovery of lighter mass groups at higher energies. This is statistically not significant, but it agrees with the finding in Apel et al. (KASCADE-Grande collaboration, 2013), where a significant hardening in the cosmic ray spectrum of light primaries was observed.
Figure 11 compares the KASCADE-Grande energy spectra obtained in this work with those obtained by other KASCADE-Grande analysis methods or by other experiments. For a better distinguishability, the three intermediate spectra of helium, carbon, and silicon are summed up to one “medium spectrum”. Within the given uncertainties, the KASCADE-Grande all-particle spectra are compatible with those of most of the other experiments. However, at higher energies, the KASCADE-Grande spectrum exhibits a lower intensity compared to earlier experiments, especially GAMMA, Akeno, and Yakutsk. At the highest energies, the KASCADE-Grande statistics are low and the all-particle spectrum is compatible with a single power law. Assuming a single power law fit, our results are in agreement with those reported by HiRes and the Pierre Auger Observatory. Concerning the elemental energy spectra of different mass groups, there is a good agreement with the new QGSJET-II-02 based results (Finger, 2011) of the KASCADE experiment, despite the independent measurement and data analysis. A brief discussion of this KASCADE analysis is given in Appendix A.
Combining the findings of KASCADE and KASCADE-Grande, all elemental spectra exhibit a knee-like structure: for light primaries at about 3 PeV to 5 PeV, for medium ones at about 8 PeV to 10 PeV, and for heavy ones at about 80 PeV.
8 Summary and conclusion
The two-dimensional shower size spectrum of charged particles and muons measured with KASCADE-Grande was unfolded. Based on this analysis, the energy spectra for five primaries representing the chemical composition of cosmic rays have been determined, as well as the all-particle spectrum which is the sum of the elemental spectra. For this analysis, the response matrix of the experiment was computed based on the hadronic interaction models QGSJET-II-02 (Ostapchenko, 2006a, b) and FLUKA 2002.4 (Battistoni et al., 2007; Fassò et al., 2001, 2005).
The all-particle spectrum, which suffers in this work from uncertainties of the contributing elemental spectra and which is structureless within the given uncertainties, agrees with that determined in an alternative analysis of the KASCADE-Grande data (Apel et al., KASCADE-Grande collaboration, 2012), where a small break-off at about 80 PeV was found. (In the energy range from 1 PeV to some hundred PeV, this break-off in the all-particle spectrum is the second one besides the one at about 3 PeV to 5 PeV reported in Finger (2011) based on KASCADE data and also using QGSJET-II-02 as interaction model.) Furthermore, both KASCADE-Grande all-particle spectra are compatible with the findings of most of the other experiments.
The unfolded energy spectra of light and intermediate primaries are rather featureless in the sensitive energy range. There are slight indications for a possible recovery of protons at higher energies, which is, however, statistically not significant. But this finding would agree with the one in Apel et al. (KASCADE-Grande collaboration, 2013), where a significant hardening in the cosmic ray spectrum of light primaries was observed.
The spectrum of iron exhibits a clear knee-like structure at about 80 PeV. The position of this structure is consistent with that of a structure found in spectra of heavy primaries determined by other analysis methods of the KASCADE-Grande data (Apel et al., KASCADE-Grande collaboration, 2011). The energy where this knee-like structure occurs conforms to the one where the break-off in the all-particle spectrum is observed. Hence, the findings in this work and in Apel et al. (KASCADE-Grande collaboration, 2011) demonstrate for the first time experimentally that the heavy knee exists, and that the kink in the all-particle spectrum is presumably caused by this decrease in the flux of heavy primaries. The spectral steepening occurs at an energy where the charge dependent knee of iron is expected, if the knee at about 3 PeV to 5 PeV is assumed to be caused by a decrease in the flux of light primaries (protons and/or helium).
However, there is still uncertainty about whether the applied interaction models, especially the high energy model QGSJET-II-02, are valid in all details. As demonstrated in Antoni et al. (KASCADE collaboration, 2005), it is expected that variations in the interaction models primarily affect the relative abundances of the primaries, and hence assign possible structures in the data to different mass groups, while the structures themselves are rather model independent. Although it was shown that the interaction models used do not seem to exhibit significant weaknesses in describing the data, more certainty can be expected in the near future, when man-made particle accelerators like the LHC reach laboratory energies up to some hundred PeV, and hence allow the interaction models to be optimized in an energy range relevant for KASCADE-Grande.
Acknowledgements
The authors would like to thank the members of the
engineering and technical staff of the KASCADE-Grande
collaboration, who contributed to the success of the experiment.
The KASCADE-Grande experiment is supported
by the BMBF of Germany, the MIUR and INAF of Italy,
the Polish Ministry of Science and Higher Education,
and the Romanian Authority for Scientific Research UEFISCDI
(PNII-IDEI grants 271/2011 and 17/2011).
References
W. Apel et al. (KASCADE-Grande collaboration), The KASCADE-Grande experiment, Nucl. Instrum. Methods A 620 (2010) 202.
T. Antoni et al. (KASCADE collaboration), KASCADE measurements of energy spectra for elemental groups of cosmic rays: Results and open problems, Astropart. Phys. 24 (2005) 1.
W. Apel et al. (KASCADE-Grande collaboration), Kneelike structure in the spectrum of the heavy component of cosmic rays observed with KASCADE-Grande, Phys. Rev. Lett. 107 (2011) 171104.
G. Kulikov, G. Khristiansen, On the size spectrum of extensive air showers, Sov. Phys. JETP 8 (1959) 441.
B. Peters, Primary cosmic radiation and extensive air showers, Il Nuovo Cimento 22 (1961) 800.
M. Aglietta, et al., The cosmic ray primary composition in the “knee” region through the EAS electromagnetic and muon measurements at EAS-TOP, Astropart. Phys. 21 (2004a) 583.
M. Aglietta, et al., The cosmic ray primary composition between $10^{15}$ and $10^{16}$ eV from Extensive Air Showers electromagnetic and TeV muon data, Astropart. Phys. 20 (2004b) 641.
T. Antoni et al. (KASCADE collaboration), The cosmic-ray experiment KASCADE, Nucl. Instrum. Methods A 513 (2003) 490.
A. Dar, A. De Rújula, A theory of cosmic rays, Phys. Rep. 466 (2008) 179.
W. Apel et al. (KASCADE-Grande collaboration), The spectrum of high-energy cosmic rays measured with KASCADE-Grande, Astropart. Phys. 36 (2012) 183.
W. Apel et al. (KASCADE collaboration), Energy spectra of elemental groups of cosmic rays: Update on the KASCADE unfolding analysis, Astropart. Phys. 31 (2009) 86.
S. Ostapchenko, QGSJET-II: Towards reliable description of very high energy hadronic interactions, Nucl. Phys. B (Proc. Suppl.) 151 (2006a) 143.
S. Ostapchenko, Nonlinear screening effects in high energy hadronic interactions, Phys. Rev. D 74 (2006b) 014026.
G. Battistoni, et al., The FLUKA code: Description and benchmarking, in: M. Albrow, R. Raja (Eds.), Proc. Hadronic Shower Simulation Workshop 2006, volume 896, AIP, 2007, p. 31.
A. Fassò, et al., FLUKA: Status and prospective for hadronic applications, in: A. Kling, et al. (Eds.), Proc. MonteCarlo 2000 Conference, Springer-Verlag, Berlin, Germany, 2001, p. 955.
A. Fassò, et al., FLUKA: A multi-particle transport code, Technical report CERN-2005-10, INFN/TC_05/11, SLAC-R-773, SLAC, 2005.
D. Fuhrmann, KASCADE-Grande Measurements of Energy Spectra for Elemental Groups of Cosmic Rays, Ph.D. thesis, University of Wuppertal, Germany, 2012. http://d-nb.info/1022581872/34.
R. Gold, An iterative unfolding method for response matrices, Report ANL-6984, Argonne, USA, 1964.
G. D’Agostini, A multidimensional unfolding method based on Bayes’ theorem, Nucl. Instrum. Methods A 362 (1995) 487.
M. Schmelling, The method of reduced cross-entropy: A general approach to unfold probability distributions, Nucl. Instrum. Methods A 340 (1994) 400.
D. Heck, et al., CORSIKA: A Monte Carlo code to simulate extensive air showers, Report FZKA 6019, Forschungszentrum Karlsruhe, Germany, 1998.
D. Heck, J. Knapp, Upgrade of the Monte Carlo code CORSIKA to simulate extensive air showers with energies $>10^{20}$ eV (corrected version from Sep. 5, 2003), Report FZKA 6097, Forschungszentrum Karlsruhe, Germany, 1998.
R. Brun, et al., GEANT3, Report CERN DD/EE/84-1, CERN, Geneva, Switzerland, 1987.
S. Giani, et al., GEANT detector description and simulation tool, CERN Program Library Long Writeup W5013, CERN, Geneva, Switzerland, 1994.
D. d’Enterria, et al., Constraints from the first LHC data on hadronic event generators for ultra-high energy cosmic-ray physics, Astropart. Phys. 35 (2011) 98.
N. Kalmykov, S. Ostapchenko, The nucleus-nucleus interaction, nuclear fragmentation, and fluctuations of extensive air showers, Phys. At. Nucl. 56 (1993) 346.
E.-J. Ahn, et al., Cosmic ray interaction event generator SIBYLL 2.1, Phys. Rev. D 80 (2009) 094003.
K. Werner, The hadronic interaction model EPOS, Nucl. Phys. B (Proc. Suppl.) 175 (2008) 81.
W. Apel et al. (KASCADE-Grande collaboration), The KASCADE-Grande energy spectrum of cosmic rays and the role of hadronic interaction models, Adv. Space Res. (2013, in press, doi:10.1016/j.asr.2013.05.008).
D. Bindig, Calculation of atmospheric Neutrino- and Muonfluxes with respect to the Cosmic Ray composition, Diploma thesis, University of Wuppertal, Germany, 2010.
W. Apel et al. (KASCADE-Grande collaboration), Ankle-like feature in the energy spectrum of light elements of cosmic rays observed with KASCADE-Grande, Phys. Rev. D 87 (2013) 081101(R).
M. Finger, Reconstruction of energy spectra for different mass groups of high-energy cosmic rays, Ph.D. thesis, Karlsruher Institut für Technologie, Germany, 2011. http://d-nb.info/1014279917/34.
H. Fesefeldt, The Simulation of Hadronic Showers: Physics and Applications, Technical report PITHA 85/02, III. Physikalisches Institut RWTH Aachen, 1985.
Appendix A KASCADE data unfolding based on QGSJET-II
In Fig. 11, the results obtained by an unfolding analysis applied to air showers measured with the KASCADE experiment (Antoni et al., 2003) in the zenith angle range from $0^{\circ}$ to $18^{\circ}$ are depicted.
In this appendix, we briefly discuss the main findings of the corresponding analysis; details can be found in Finger (2011).
The analysis is based on the same method of data unfolding and the same hadronic interaction models (QGSJET-II-02 (Ostapchenko, 2006a, b) and FLUKA 2002.4 (Battistoni et al., 2007; Fassò et al., 2001, 2005)) as the work described in this paper. However, instead of the total number of charged particles $N_{\mathrm{ch}}$ and the total number of muons $N_{\upmu}$, here the electron shower size $N_{\mathrm{e}}$ and the truncated muon number $N_{\upmu}^{\mathrm{trunc}}$ (the number of muons between $40$ m and $200$ m distance from the shower core) are used.
Another difference is that KASCADE covers a lower energy range than KASCADE-Grande, but a reasonable overlap remains.
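The iterative Bayesian unfolding of D’Agostini (1995), one of the algorithms used in these analyses, can be sketched in a few lines of Python. This is an illustrative toy version, not the actual KASCADE analysis code; the response matrix and spectra below are invented values.

```python
import numpy as np

def bayes_unfold(measured, response, n_iter=20):
    """Iterative Bayesian unfolding (D'Agostini 1995), toy version.

    response[j, i] = P(observe bin j | true bin i); columns sum to <= 1.
    """
    n_true = response.shape[1]
    prior = np.full(n_true, measured.sum() / n_true)  # flat starting prior
    for _ in range(n_iter):
        # Bayes' theorem: posterior[j, i] = P(true bin i | observed bin j)
        joint = response * prior                      # shape (n_obs, n_true)
        norm = joint.sum(axis=1, keepdims=True)
        norm[norm == 0] = 1.0
        posterior = joint / norm
        eff = response.sum(axis=0)                    # efficiency per true bin
        prior = (posterior * measured[:, None]).sum(axis=0) / eff
    return prior

# Toy 3-bin example with mild bin-to-bin migration:
R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.2],
              [0.0, 0.1, 0.8]])
true = np.array([1000.0, 500.0, 100.0])
measured = R @ true
unfolded = bayes_unfold(measured, R)
print(unfolded)  # approaches the true spectrum
```

With full efficiency the total event count is conserved at every iteration, and the iteration converges toward the true spectrum for this well-conditioned toy response matrix.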
At an energy of approximately 4 PeV to 5 PeV, a kink in the all-particle flux, the so-called “knee” of the cosmic ray spectrum, can be observed (cf. Fig. 11).
The left panel of Fig. 12 shows the energy spectra of protons, as well as of helium and carbon nuclei. It
can be noticed that, within the framework of the models used, protons are less abundant than helium and carbon nuclei, which is in agreement with the results at higher energies (cf. Fig. 7).
At an energy of about 4 PeV, a kink
in the proton spectrum can be found. The energy spectra of helium and carbon, which
are the most abundant nuclei, indicate an almost equal abundance of both elements, but the fluxes of the two primary particle types differ in their spectral shape.
Whereas the helium spectrum is characterized by a kink at about 7 PeV,
a change of index in the carbon spectrum is compatible with a kink at around 20 PeV. As discussed in Antoni et al. (2005) for other models, the knee positions of protons, helium, and carbon relative to each other are compatible with a rigidity
dependence of the knees. It should be mentioned that, in the case of the
steepening of the carbon spectrum, the statistics in this energy region are poor and the
spectrum is subject to large fluctuations; nevertheless, a general trend can be seen.
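The rigidity argument can be made quantitative with a two-line estimate: if the knees occur at a fixed rigidity, the knee energy scales with the nuclear charge, $E_{\rm knee}(Z)=Z\,E_{\rm knee}({\rm p})$. The sketch below is illustrative only; the 4 PeV proton knee is the approximate value quoted in the text, not a fit result.

```python
# Rigidity-dependent knee: E_knee(Z) = Z * E_knee(proton).
# The 4 PeV proton knee is an illustrative value from the text, not a fit.
E_KNEE_PROTON_PEV = 4.0

charges = {"H": 1, "He": 2, "C": 6, "Si": 14, "Fe": 26}
for element, Z in charges.items():
    print(f"{element:2s} (Z={Z:2d}): expected knee near {Z * E_KNEE_PROTON_PEV:6.0f} PeV")
```

The resulting positions (He near 8 PeV, C near 24 PeV, Fe near 104 PeV) are roughly consistent with the kinks discussed above and with the iron knee at about 80 PeV found with KASCADE-Grande, whereas the low-energy silicon kink falls outside this pattern.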
The right part of Fig. 12 exhibits the energy spectra of silicon and of iron nuclei. The silicon
spectrum reveals a kink at quite low energy, which is not expected when a rigidity dependence is assumed.
Its presence can be attributed to deficiencies in the data description.
An examination of the distribution of the $\chi_{i}^{2}$-deviations (analogous to the examination performed in Section 7.1 for the KASCADE-Grande data) reveals deficiencies
mainly in the medium energy range, especially at the heavy ridge (the definition of the light and heavy ridges is given in Fig. 1 and its caption), which might explain the unexpected course of the silicon spectrum.
But, in general, the distribution of the $\chi_{i}^{2}$-deviations indicates a good overall data description compared to other models.
The spectrum of iron does not exhibit a knee-like feature in the accessible energy range of KASCADE.
Figure 13 shows the measured two-dimensional shower size spectrum of electron and muon numbers and,
additionally, the lines of the most probable values (given by the response matrix) for all nuclei used. Whereas the lines of
helium and carbon, being the most abundant elements at low energies, start almost in the maximum
of the measured size spectrum, the line of protons is located on the left-hand side
of the maximum, which results in a lower abundance of protons. With increasing energy, the most
probable values leave, for all primaries consecutively, the maximum region, leading to a kink in the individual
energy spectra. For silicon, the situation seems to be more complicated. Although the lines start
on the right-hand side of the maximum of the data distribution and converge with increasing
energy towards the maximum, a sharp kink in the silicon spectrum can be found. Solely from
an examination of the course of the most probable values, this change of index is not expected.
With respect to the most probable values of silicon, the values for iron are shifted to the heavy
edge. The deficit of iron at low energies as well as the kink in the silicon spectrum cannot be
clarified by analysing the most probable values only. This reveals the importance of the shower
fluctuations for the results of the analysis.
In summary, the presented KASCADE unfolding analysis, which is based on the high-energy interaction model QGSJET-II-02 (Ostapchenko, 2006a, b) and the low-energy interaction model FLUKA 2002.4 (Battistoni et al., 2007; Fassò et al., 2001, 2005), confirms in particular the earlier finding that the knee in the cosmic ray energy spectrum is caused by a decrease of the flux of the lighter mass groups, as already shown in Antoni et al. (2005), where the KASCADE data were unfolded, inter alia, based on the high-energy interaction model QGSJET 01 (Kalmykov and Ostapchenko, 1993) and the low-energy interaction model GHEISHA (Fesefeldt, 1985), version 2002. The influence of the low-energy interaction model on the resulting spectra is small compared to the systematic uncertainties (shown and discussed in Apel et al. (2009)).
However, as found in Antoni et al. (2005), the choice of the high-energy interaction model affects the relative abundances of the primary mass groups, though not the spectral shapes. Since for the KASCADE-Grande analysis the more sophisticated QGSJET-II-02 model was used instead of QGSJET 01, we repeated the analysis of the KASCADE data based on QGSJET-II-02. Figure 12 can be compared directly with Fig. 14 of Antoni et al. (2005): whereas the structural features of the light components are similar, the spectrum of the iron component changed. This is due to the improved description of the shower development by the new version of the model. The overall description of the data (in terms of $\chi_{i}^{2}$-deviations) is also improved considerably.
Appendix B Differential intensity values
The differential intensity values d$J$/d$E$ of the unfolded energy spectra for elemental groups of cosmic rays (based on KASCADE-Grande measurements and depicted in Fig. 7) and their statistical and systematic uncertainties, $\sigma_{\mathrm{stat.}}$ and $\Delta_{\mathrm{syst.}}$ respectively, are listed in Tables 1 to 6. The results are based on the interaction models QGSJET-II-02 (Ostapchenko, 2006a, b) and FLUKA 2002.4 (Battistoni et al., 2007; Fassò et al., 2001, 2005).
Effect of nearest- and next-nearest neighbor interactions
on the spin-wave velocity
of one-dimensional quarter-filled spin-density-wave conductors
Y. Tomio${}^{1}$ (E-mail: [email protected]),
N. Dupuis${}^{2}$ (E-mail: [email protected]),
and
Y. Suzumura${}^{1}$ (E-mail: [email protected])
${}^{1}$Department of Physics, Nagoya University, Nagoya 464-8602, Japan
${}^{2}$Laboratoire de Physique des Solides, Associé au CNRS,
Université Paris-Sud, 91405 Orsay, France
Abstract
We study spin fluctuations in quarter-filled one-dimensional
spin-density-wave systems in the presence of short-range Coulomb
interactions. By applying a path integral method,
the spin-wave velocity is calculated as a function of on-site
($U$), nearest ($V$) and next-nearest ($V_{2}$) neighbor-site interactions.
With increasing $V$ or $V_{2}$, the pure spin-density-wave state evolves
into a state with coexisting spin- and charge-density waves. The
spin-wave velocity is reduced when several density waves coexist in the
ground state, and may even vanish at large $V$. The effect of
dimerization along the chain is also considered.
PACS numbers: 72.15.Nj, 75.30.Fv
I Introduction
Organic conductors of the tetramethyltetraselenafulvalene (TMTSF) and
tetramethyltetrathiafulvalene (TMTTF) families of salts often
exhibit a density-wave (DW) instability at low temperature.
[1, 2, 3]
Recent experiments have shown that a $2k_{\rm F}$ spin-density wave (SDW) may
coexist with a $4k_{\rm F}$ and/or a $2k_{\rm F}$ charge-density wave
(CDW). [4, 5]
(The quantity $k_{\rm F}$ denotes the one-dimensional Fermi wave vector and
2$k_{\rm F}$ is the nesting wave vector for the SDW.) Furthermore, these
CDW’s seem to be of pure electronic origin without any (significant)
contribution from the lattice.
This unusual ground state can be understood on the basis of a
mean-field theory for a quarter-filled one-dimensional system
in the presence of several kinds of Coulomb interaction.
Within an extended Hubbard model with on-site ($U$) and nearest-neighbor
($V$) interactions, it has been shown that a $4k_{\rm F}$ CDW may coexist
with the $2k_{\rm F}$ SDW when $V$ is strong enough. [6] When the
next-nearest-neighbor interaction ($V_{2}$) is also taken into account,
three different ground states can be stabilized:
[7, 8, 9] (i) a pure $2k_{\rm F}$ SDW at
small $V$ and $V_{2}$, (ii) coexisting $2k_{\rm F}$ SDW and $4k_{\rm F}$ CDW at
large $V$, (iii) coexisting $2k_{\rm F}$ SDW, $2k_{\rm F}$ CDW and
$4k_{\rm F}$ SDW at large $V_{2}$.
Although the SDW instability is driven by the on-site
repulsive interaction $U$, the nearest and next-nearest neighbor
interactions play a crucial role for the appearance of CDW’s.
Following the standard analysis,
[10, 11, 12, 13, 14, 15, 16, 17]
fluctuations around the mean-field ground state have been studied. For
a quarter-filled system, commensurability effects with the underlying
crystal lattice pin the DW’s and produce a gap in the sliding
modes. [18] Surprisingly, this gap vanishes at the
boundary between the pure $2k_{\rm F}$ SDW and the coexisting
2$k_{\rm F}$ SDW and 4$k_{\rm F}$ CDW.
[19] The spin-wave modes have been studied only
within the Hubbard model ($V=V_{2}=0$).[20] The spin-wave
velocity decreases monotonically with increasing $U$, in qualitative
agreement with the exact solution of the one-dimensional Hubbard
model. [21]
In this paper, we study the spin-wave modes in the presence of the nearest and
next-nearest neighbor
interactions ($V,V_{2}\neq 0$). We consider a one-dimensional
system, assuming that long-range order is stabilized by (weak)
interchain coupling. Our analysis is based on a functional integral
formulation [22, 23, 24, 25] which allows a simple
treatment of the spin-wave modes even in the presence of these
interactions. The electron-electron interaction is treated
within (Hartree-Fock) mean-field theory, while the SU(2) spin rotation
symmetry is maintained by introducing a fluctuating spin-quantization
axis in the functional integral. Transverse spin-wave modes then
correspond to fluctuations of the spin-quantization axis around its
mean-field value.
In Secs. II and III, we extend the derivation of Ref. 25 from the
incommensurate to the commensurate case. Previous mean-field
results[8] are recovered within a saddle point
approximation. Then we derive the effective action of the spin-wave
modes and obtain the spin-wave velocity. In Sec. IV, the spin-wave
velocity is calculated as a function of $V,V_{2}$ and the dimerization
along the chain. Section V is devoted to discussion.
II Path integral formulation
We consider a one-dimensional electron system at quarter-filling
with dimerization along the chain. Within the extended Hubbard model,
the Hamiltonian is given by
$$H=H_{0}+H_{I}\,,\qquad(1)$$
$$H_{0}=-\sum_{\sigma,n,n^{\prime}}t_{nn^{\prime}}\psi_{n\sigma}^{\dagger}\psi_{n^{\prime}\sigma}\,,\qquad(2)$$
$$\begin{aligned}
H_{I}&=U\sum_{n}n_{n\uparrow}n_{n\downarrow}+V\sum_{n}(\psi_{n}^{\dagger}\psi_{n})(\psi_{n+1}^{\dagger}\psi_{n+1})+V_{2}\sum_{n}(\psi_{n}^{\dagger}\psi_{n})(\psi_{n+2}^{\dagger}\psi_{n+2})\\
&=-\frac{U}{4}\sum_{n}(\psi_{n}^{\dagger}\sigma_{z}\psi_{n})^{2}+\sum_{n,n^{\prime}}(\psi_{n}^{\dagger}\psi_{n})V_{nn^{\prime}}(\psi_{n^{\prime}}^{\dagger}\psi_{n^{\prime}})\,,\qquad(3)
\end{aligned}$$
where $\psi_{n}=(\psi_{n\uparrow},\,\psi_{n\downarrow})^{t}$,
$n_{n\sigma}=\psi_{n\sigma}^{\dagger}\psi_{n\sigma}$, and $\psi_{n\sigma}^{\dagger}$ is the creation operator of an electron with spin
$\sigma(=\uparrow,\downarrow)$ at the lattice site $n$. The
transfer integral in the kinetic term $H_{0}$ is defined by
$$t_{nn^{\prime}}=\left\{\begin{array}{ll}t-(-1)^{n}t_{\rm d}&{\rm for}\;\;n^{\prime}=n+1\,,\\
t+(-1)^{n}t_{\rm d}&{\rm for}\;\;n^{\prime}=n-1\,,\\
0&{\rm otherwise}\,,\end{array}\right.\qquad(4)$$
where a finite $t_{\rm d}$ is due to the dimerization. The interaction
Hamiltonian is expressed in terms of the Hubbard interaction $U$ and
the density-density interaction $V_{nn^{\prime}}$ defined by
$$V_{nn^{\prime}}=\left\{\begin{array}{ll}U/4&{\rm for}\;\;n^{\prime}=n\,,\\
V/2&{\rm for}\;\;n^{\prime}=n\pm 1\,,\\
V_{2}/2&{\rm for}\;\;n^{\prime}=n\pm 2\,,\\
0&{\rm otherwise}\,,\end{array}\right.\qquad(5)$$
where $V$ ($V_{2}$) is the coupling constant for nearest
(next-nearest) neighbor-site interaction ($U,V,V_{2}\geq 0$).
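To make the definitions in Eqs. (4) and (5) concrete, here is a small Python sketch (not part of the paper) that assembles $t_{nn'}$ and $V_{nn'}$ on a short periodic chain and checks their symmetry. The parameter values are arbitrary illustrations, not fitted to any material.

```python
import numpy as np

def build_couplings(N, t=1.0, t_d=0.1, U=2.0, V=1.0, V2=0.5):
    """Transfer integrals t_{nn'} [Eq. (4)] and density couplings V_{nn'}
    [Eq. (5)] on an N-site periodic chain (toy parameters)."""
    t_mat = np.zeros((N, N))
    v_mat = np.zeros((N, N))
    for n in range(N):
        v_mat[n, n] = U / 4.0
        # nearest neighbours, alternating (dimerized) hopping
        t_mat[n, (n + 1) % N] = t - (-1) ** n * t_d
        t_mat[n, (n - 1) % N] = t + (-1) ** n * t_d
        v_mat[n, (n + 1) % N] = v_mat[n, (n - 1) % N] = V / 2.0
        v_mat[n, (n + 2) % N] = v_mat[n, (n - 2) % N] = V2 / 2.0
    return t_mat, v_mat

t_mat, v_mat = build_couplings(8)
# t_{nn'} is symmetric: hopping n -> n+1 carries t - (-1)^n t_d, while the
# reverse hop (n+1) -> n carries t + (-1)^{n+1} t_d, which is the same number.
print(np.allclose(t_mat, t_mat.T), np.allclose(v_mat, v_mat.T))
```

The symmetry check makes explicit that the alternating signs in Eq. (4) describe one dimerized bond pattern, not an asymmetric hopping.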
In order to derive the effective action for the spin-wave modes,
we write the partition function $Z$ as a path integral:
$$Z=\int{\cal D}\psi^{\dagger}{\cal D}\psi~{}e^{-{\cal S}[\psi^{\dagger},\psi]}\,,\qquad(6)$$
$${\cal S}=\int d\tau\left[\sum_{n}\psi_{n}^{\dagger}\left(\partial_{\tau}-\mu\right)\psi_{n}+H[\psi^{\dagger},\psi]\right]\,,\qquad(7)$$
where the action ${\cal S}$ is a function of the Grassmann variable
$\psi$. $\tau$ is a Matsubara time varying between $0$ and
$1/T$. Following Refs. 23 and 25, we now introduce the
new field $\phi$ defined by
$$\psi_{n}=R_{n}\phi_{n}\,,\qquad R_{n}\sigma_{z}R_{n}^{\dagger}=\mbox{\boldmath$\sigma$}\cdot{\bf n}_{n}\,,\qquad(8)$$
where $R_{n}$ is an SU(2)/U(1) unitary matrix and ${\bf n}_{n}$ is a
unit vector which gives the direction of the spin-quantization axis at
site $n$ and time $\tau$ for the field $\phi$.
Substituting Eq. (8) into Eq. (7),
the action is rewritten as ${\cal S}={\cal S}_{0}+{\cal S}_{I}$, where
$${\cal S}_{0}=\int d\tau\left\{\sum_{n}\phi_{n}^{\dagger}(\partial_{\tau}-\mu+R_{n}^{\dagger}\partial_{\tau}R_{n})\phi_{n}-\sum_{n,n^{\prime}}\phi_{n}^{\dagger}R_{n}^{\dagger}t_{nn^{\prime}}R_{n^{\prime}}\phi_{n^{\prime}}\right\}\,,\qquad(9)$$
$${\cal S}_{I}=\int d\tau\left\{-\frac{U}{4}\sum_{n}\rho_{sn}^{2}+\sum_{n,n^{\prime}}\rho_{cn}V_{nn^{\prime}}\rho_{cn^{\prime}}\right\}\,.\qquad(10)$$
$\rho_{cn}=\phi_{n}^{\dagger}\phi_{n}$
and $\rho_{sn}=\phi_{n}^{\dagger}\sigma_{z}\phi_{n}$
are the charge- and spin-density operators.
The quantities $\sigma_{x},\sigma_{y}$ and $\sigma_{z}$ are Pauli matrices.
Note that ${\cal S}_{I}$ is invariant under the transformation $\psi\to\phi$,
since the interaction is invariant with respect to
spin rotations. It is convenient to rewrite the action as
$$\begin{aligned}
{\cal S}&=\int d\tau\left\{\sum_{n}\phi_{n}^{\dagger}(\partial_{\tau}-\mu-A_{0n})\phi_{n}-\sum_{n,n^{\prime}}\phi_{n}^{\dagger}t_{nn^{\prime}}\exp\!\left(-i\int_{n}^{n^{\prime}}dl\,A_{xl}\right)\phi_{n^{\prime}}\right.\\
&\qquad\left.-\frac{U}{4}\sum_{n}\rho_{sn}^{2}+\sum_{n,n^{\prime}}\rho_{cn}V_{nn^{\prime}}\rho_{cn^{\prime}}\right\}\,,\qquad(11)
\end{aligned}$$
where the SU(2) gauge fields $A_{0}$ and $A_{x}$ are defined by
$$A_{0n}\equiv-R^{\dagger}_{n}\partial_{\tau}R_{n}\,,\qquad(2.12{\rm a})$$
$$\exp\!\left(-i\int_{n}^{n+\delta}dl\,A_{xl}\right)\equiv R_{n}^{\dagger}R_{n+\delta}\,,\quad(\delta=\pm 1)\,.\qquad(2.12{\rm b})$$
The lattice spacing is taken as unity. Using the
Stratonovich-Hubbard identity[26], the interaction part of the
action is rewritten as (note that $U,V,V_{2}>0$)
$$\exp\!\left(-\sum_{n,n^{\prime}}\int d\tau\,\rho_{cn}V_{nn^{\prime}}\rho_{cn^{\prime}}\right)=\int{\cal D}\Delta_{c}\exp\!\left(-\sum_{n,n^{\prime}}\int d\tau\,\Delta_{cn}V_{nn^{\prime}}^{-1}\Delta_{cn^{\prime}}+2i\sum_{n}\int d\tau\,\Delta_{cn}\rho_{cn}\right)\,,\qquad(13)$$
$$\exp\!\left(\frac{U}{4}\sum_{n}\int d\tau\,\rho_{sn}^{2}\right)=\int{\cal D}\Delta_{s}\exp\!\left(-\frac{1}{U}\sum_{n}\int d\tau\,\Delta_{sn}^{2}+\sum_{n}\int d\tau\,\Delta_{sn}\rho_{sn}\right)\,,\qquad(14)$$
where
$\Delta_{cn}$ and $\Delta_{sn}$ are (real) auxiliary fields.
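For a single degree of freedom, the identities in Eqs. (13) and (14) reduce to Gaussian integrals; the one-variable version below (not in the original) makes the sign structure explicit. Here $\rho$, $\Delta$, $g$, and $V$ are generic scalars, not the lattice fields of the text.

```latex
% One-variable Hubbard-Stratonovich identity, attractive channel (g > 0):
% completing the square,
%   -\tfrac{1}{g}\Delta^2 + \Delta\rho
%     = -\tfrac{1}{g}\bigl(\Delta - \tfrac{g\rho}{2}\bigr)^2 + \tfrac{g}{4}\rho^2,
% and using \int d\Delta\, e^{-\Delta^2/g} = \sqrt{\pi g}, one finds
e^{\frac{g}{4}\rho^{2}}
  = \frac{1}{\sqrt{\pi g}}\int_{-\infty}^{\infty}\! d\Delta\;
    e^{-\frac{1}{g}\Delta^{2}+\Delta\rho}\,,
% which is the structure of Eq. (14) with g = U.  The repulsive channel is
% decoupled with an imaginary source term,
e^{-V\rho^{2}}
  = \frac{1}{\sqrt{\pi V}}\int_{-\infty}^{\infty}\! d\Delta\;
    e^{-\frac{1}{V}\Delta^{2}+2i\Delta\rho}\,,
% since -\tfrac{1}{V}\Delta^2 + 2i\Delta\rho
%         = -\tfrac{1}{V}(\Delta - iV\rho)^2 - V\rho^2;
% this is the one-variable analogue of Eq. (13).
```

The imaginary source in the charge channel is what produces the factor $2i\Delta_{cn}\rho_{cn}$ and, later, the imaginary mean field $\Delta_{cn}$ of Eq. (2) in Sec. III.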
By using Eqs. (13) and (14),
the final form of the partition function is given by
$$Z=\int{\cal D}\Delta_{c}{\cal D}\Delta_{s}\int{\cal D}{\bf n}\int{\cal D}\phi^{\dagger}{\cal D}\phi~{}e^{-({\cal S}_{0}+{\cal S}_{I})}\,,\qquad(15)$$
$${\cal S}_{0}=\int d\tau\left\{\sum_{n}\phi_{n}^{\dagger}(\partial_{\tau}-\mu-A_{0n})\phi_{n}-\sum_{n,n^{\prime}}\phi_{n}^{\dagger}t_{nn^{\prime}}\exp\!\left(-i\int_{n}^{n^{\prime}}dl\,A_{xl}\right)\phi_{n^{\prime}}\right\}\,,\qquad(16)$$
$${\cal S}_{I}=\int d\tau\left\{\sum_{n}\left[\frac{1}{U}\Delta_{sn}^{2}-\Delta_{sn}\rho_{sn}-2i\Delta_{cn}\rho_{cn}\right]+\sum_{n,n^{\prime}}\Delta_{cn}V_{nn^{\prime}}^{-1}\Delta_{cn^{\prime}}\right\}\,,\qquad(17)$$
where $V_{nn^{\prime}}^{-1}=V_{n^{\prime}n}^{-1}$.
III Effective Action for the Spin-Wave Mode
In this section, we derive the action corresponding to the spin-wave
modes at quarter-filling. First, we reproduce the mean-field result
of Ref. 8 within a saddle-point approximation. Then we
consider transverse spin fluctuations arising from the dynamics of
the spin-quantization axis.
III.1 Mean-field solution
The standard mean-field solution is recovered from a saddle-point
approximation with ${\bf n}=\hat{z}$ at each lattice site. One then has
$R_{n}=1$ and $A_{0}=A_{x}=0$.
By minimizing the free energy with respect to $\Delta_{sn}$ and
$\Delta_{cn}$, we obtain the self-consistent mean-field equations
$$\Delta_{sn}=\frac{U}{2}\left<\rho_{sn}\right>_{\rm MF}\,,\qquad(1)$$
$$\Delta_{cn}=i\sum_{n^{\prime}}V_{nn^{\prime}}\left<\rho_{cn^{\prime}}\right>_{\rm MF}\,.\qquad(2)$$
The average $\left<\;\;\right>_{\rm MF}$ is to be calculated with
the mean-field action
$$\begin{aligned}
{\cal S}_{\rm MF}&=\beta\sum_{n}\frac{1}{U}\Delta_{sn}^{2}+\beta\sum_{n,n^{\prime}}\Delta_{cn}V_{nn^{\prime}}^{-1}\Delta_{cn^{\prime}}\\
&\quad+\int d\tau\left\{\sum_{n}\phi_{n}^{\dagger}\left(\partial_{\tau}-\mu-2i\Delta_{cn}-\Delta_{sn}\sigma_{z}\right)\phi_{n}-\sum_{n,n^{\prime}}\phi_{n}^{\dagger}t_{nn^{\prime}}\phi_{n^{\prime}}\right\}\,.\qquad(3)
\end{aligned}$$
At quarter-filling, the mean-fields $\left<\rho_{sn}\right>_{\rm MF}$ and
$\left<\rho_{cn}\right>_{\rm MF}$ are periodic with a periodicity of four
lattice spacings. They can be written as
$$\left<\rho_{sn}\right>_{\rm MF}=\sum_{m=0}^{3}S_{m}\,e^{imQ_{0}n}\,,\qquad(4)$$
$$\left<\rho_{cn}\right>_{\rm MF}=\sum_{m=0}^{3}D_{m}\,e^{imQ_{0}n}\,,\qquad(5)$$
where $Q_{0}=2k_{\rm F}=\pi/2$.
Since $\left<\rho_{cn}\right>_{\rm MF}$ and
$\left<\rho_{sn}\right>_{\rm MF}$ are real quantities, one finds
$D_{0}=D_{0}^{*},D_{1}=D_{3}^{*},D_{2}=D_{2}^{*}$ and
$S_{0}=S_{0}^{*},S_{1}=S_{3}^{*},S_{2}=S_{2}^{*}$.
In Eqs. (4) and (5),
$S_{0}=0$ due to the absence of ferromagnetism
and $D_{0}=1/2$ for a quarter-filled band.
From Eqs. (1)-(5),
the final form of the mean-field action is obtained as
$$\begin{aligned}
{\cal S}_{\rm MF}&=\beta N\left[-\frac{U}{16}-\frac{U}{2}\left(|D_{1}|^{2}-|S_{1}|^{2}\right)-\frac{U}{4}\left(D_{2}^{2}-S_{2}^{2}\right)-V\left(\frac{1}{4}-D_{2}^{2}\right)-V_{2}\left(\frac{1}{4}-2|D_{1}|^{2}+D_{2}^{2}\right)\right]\\
&\quad+\int d\tau\left\{\sum_{k}\phi_{k}^{\dagger}\left(\partial_{\tau}-\mu+\frac{U}{4}+V+V_{2}-2t\cos k\right)\phi_{k}\right.\\
&\qquad+\left[\sum_{k}\phi_{k}^{\dagger}\left(\frac{U}{2}(D_{1}-S_{1}\sigma_{z})-2V_{2}D_{1}\right)\phi_{k-Q_{0}}+{\rm c.c.}\right]\\
&\qquad\left.+\sum_{k}\phi_{k}^{\dagger}\left(\frac{U}{2}(D_{2}-S_{2}\sigma_{z})-2VD_{2}+2V_{2}D_{2}-2it_{\rm d}\sin k\right)\phi_{k-2Q_{0}}\right\}\,,\qquad(6)
\end{aligned}$$
where
$\phi_{k}=(1/\sqrt{N})\sum_{n}\,e^{-ikn}\phi_{n}$
and $N$ is the number of lattice sites.
The action (6) agrees with the mean-field Hamiltonian
obtained previously by the conventional method. [8]
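Equations (1)–(2) are solved self-consistently: one guesses order parameters, evaluates the averages in the resulting mean-field state, and iterates. The sketch below (not from the paper) shows the generic fixed-point loop with linear mixing, applied to a toy one-parameter gap equation; in the actual problem the `update` step would diagonalize the mean-field Hamiltonian of Eq. (6) and recompute $\left<\rho_{sn}\right>_{\rm MF}$ and $\left<\rho_{cn}\right>_{\rm MF}$.

```python
import numpy as np

def self_consistent_loop(update, x0, tol=1e-10, max_iter=500, mix=0.5):
    """Generic fixed-point iteration with linear mixing, of the kind used
    to solve mean-field equations such as (1)-(2).  `update` maps the
    order parameters to their new values; `mix` damps oscillations."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = mix * update(x) + (1 - mix) * x
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

# Toy gap-like equation Delta = lam * Delta / sqrt(Delta^2 + 1) + 0.3
# (illustrative only; it merely demonstrates the loop structure).
lam = 0.8
sol = self_consistent_loop(lambda d: lam * d / np.sqrt(d**2 + 1) + 0.3,
                           x0=[0.1])
print(sol)
```

Since the toy update map is a contraction (its derivative is bounded by $\lambda<1$), the damped iteration converges to the unique fixed point.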
III.2 Fluctuations
In the long-wavelength limit, collective modes can be separated into
sliding (charge) modes and spin-wave modes. In this paper, we consider
only transverse (acoustic) spin-wave modes (i.e. magnons). These modes
show up in the fluctuations of the unit vector field ${\bf n}$. They
do not couple to the charge modes or to the gapped spin-wave modes. We shall make the
following two approximations: (i) We neglect the coupling to
long-wavelength spin fluctuations [$\Delta_{s}(q)$ with $|q|\ll Q_{0}$]. In the Hubbard model ($V=V_{2}=0$), this coupling is known to
renormalize the spin-wave velocity by the factor $[1-UN(0)]^{1/2}$
in the weak-coupling limit[27]
[$N(0)$ is the density of states at the Fermi
level]; (ii) We also neglect any possible coupling to spin
fluctuations at wave-vector $2Q_{0}+q$ [$\Delta_{s}(2Q_{0}+q)$ with
$|q|\ll Q_{0}$]. [28]
When two SDW’s coexist in the ground-state, our formalism can only
yield the “in-phase” modes where the two spin-density waves
oscillate in phase. It misses the modes where the oscillations are
out-of-phase. [29] These modes are gapped and do not couple
to the “in-phase” modes considered in this paper.
Before proceeding with the spin-wave mode analysis, let us discuss
the limit of validity of our approach. The spin-wave modes will be
obtained by expanding about the (Hartree-Fock) mean-field state. Such
an approach should hold (at least qualitatively) as long as the
interaction is smaller than the
bandwidth, i.e. $U,V,V_{2}\lesssim 4t$. Nevertheless, it does not
necessarily break down in the strong-coupling limit. In the context of
the two-dimensional Hubbard model, Schrieffer et al. have shown
that an RPA analysis of the fluctuations about the mean-field state in
the limit $U\gg t$ agrees with the conclusions obtained from the
Heisenberg model with exchange constant
$J=4t^{2}/U$. [30]
Another limitation of our approach comes from the analysis of the
fluctuations of the unit vector ${\bf n}$. As will become clear below,
the main assumption is that ${\bf n}$ is a slowly varying field, thus
allowing a gradient expansion. Whereas this assumption is perfectly
valid in the weak-coupling limit ($U,V,V_{2}\lesssim 4t$), it breaks
down in the strong-coupling limit. In the latter, one should write
${\bf n}_{n}={\bf n}^{\rm slow}_{n}+\cos(n\pi/2){\bf L}_{n}$ where
${\bf n}^{\rm slow}_{n}$ is a slowly varying field and ${\bf L}_{n}$ a small perpendicular component (${\bf L}_{n}\cdot{\bf n}^{\rm slow}_{n}=0$ and $|{\bf L}_{n}|\ll|{\bf n}^{\rm slow}_{n}|\simeq 1$). [31, 23] The effective
action of the spin-wave modes,
$S_{\rm eff}[{\bf n}^{\rm slow}]$, is then obtained by integrating out
both the fermions and the (small) transverse component ${\bf L}_{n}$.
For $V=V_{2}=0$, this allows one to interpolate
smoothly between the weak-coupling regime and the strong-coupling
regime which is well described by the Heisenberg model. [23]
Long-wavelength transverse spin fluctuations correspond to
fluctuations of the SU(2) gauge fields $A_{0}$ and $A_{x}$
[Eqs. (2.12a) and (2.12b)] which are rewritten as
$$A_{0n}=\sum_{\nu=x,y,z}A_{0n}^{\nu}\sigma_{\nu}\,,\qquad(7)$$
$$A_{xn}=\sum_{\nu=x,y,z}A_{xn}^{\nu}\sigma_{\nu}\,.\qquad(8)$$
From Eqs. (11), (6), (7), and (8), we
write the action of the spin degrees of freedom as
$$\begin{aligned}
{\cal S}&={\cal S}_{\rm MF}-\sum_{n}\int d\tau\,\phi_{n}^{\dagger}A_{0n}\phi_{n}\\
&\quad-\sum_{n,n^{\prime}}\int d\tau\,\phi_{n}^{\dagger}\left[t_{nn^{\prime}}\exp\!\left(-i\int_{n}^{n^{\prime}}dl\,A_{xl}\right)-t_{nn^{\prime}}\right]\phi_{n^{\prime}}\,.\qquad(9)
\end{aligned}$$
To order $O(A_{x}^{2})$, we obtain
$$\begin{aligned}
{\cal S}&={\cal S}_{\rm MF}-\sum_{n}\int d\tau\,\phi_{n}^{\dagger}A_{0n}\phi_{n}-\sum_{n,n^{\prime}}\int d\tau\,t_{nn^{\prime}}\phi_{n}^{\dagger}\left(-\frac{i}{2}(n-n^{\prime})\left(A_{xn}+A_{xn^{\prime}}\right)-\frac{1}{2}A_{xn}^{2}\right)\phi_{n^{\prime}}\\
&={\cal S}_{\rm MF}-\sum_{n}\sum_{\mu=0,x}\sum_{\nu=x,y,z}\int d\tau~{}j_{\mu n}^{\nu}A_{\mu n}^{\nu}+{\cal S}_{x}^{\rm dia}\,,\qquad(10)
\end{aligned}$$
where $j_{xn}^{\nu}$, $j_{0n}^{\nu}$ and ${\cal S}_{x}^{\rm dia}$
are given by
$$j^{\nu}_{xn}=-\frac{i}{2}\sum_{\delta=\pm 1}\delta\left[t_{n,n+\delta}\phi^{\dagger}_{n}\sigma_{\nu}\phi_{n+\delta}+t_{n-\delta,n}\phi^{\dagger}_{n-\delta}\sigma_{\nu}\phi_{n}\right]\,,\qquad(11)$$
$$j_{0n}^{\nu}=\phi_{n}^{\dagger}\sigma_{\nu}\phi_{n}\,,\qquad(12)$$
$${\cal S}_{x}^{\rm dia}=\frac{1}{2}\sum_{n,n^{\prime}}\sum_{\nu,\nu^{\prime}}t_{nn^{\prime}}\int d\tau~{}\phi_{n}^{\dagger}\sigma_{\nu}\sigma_{\nu^{\prime}}\phi_{n^{\prime}}A_{xn}^{\nu}A_{xn}^{\nu^{\prime}}\,.\qquad(13)$$
The second term of Eq. (10)
denotes the coupling of the gauge field $A_{\mu n}^{\nu}$ to the
spin current ($j_{xn}^{\nu}$) and spin density ($j_{0n}^{\nu}$). The
last term of Eq. (10), ${\cal S}_{x}^{\rm dia}$, is the
diamagnetic contribution. [25]
The effective action of the gauge field is obtained by
integrating out the fermions in the partition function.
By substituting Eq. (10) into Eq. (15),
one obtains the effective action up to $O(A^{2})$ as
$${\cal S}_{\rm eff}[A_{\mu}^{\nu}]=\left<{\cal S}_{x}^{\rm dia}\right>_{\rm MF}-\sum_{n,\mu,\nu}\int d\tau\left<j_{\mu n}^{\nu}\right>_{\rm MF}A_{\mu n}^{\nu}-\frac{1}{2}\sum_{n,n^{\prime}}\sum_{\mu,\mu^{\prime},\nu,\nu^{\prime}}\int d\tau\,d\tau^{\prime}\,A_{\mu n}^{\nu}(\tau)\,\Pi_{j_{\mu}^{\nu}j_{\mu^{\prime}}^{\nu^{\prime}}}(n,\tau,n^{\prime},\tau^{\prime})\,A_{\mu^{\prime}n^{\prime}}^{\nu^{\prime}}(\tau^{\prime})\,,\tag{14}$$
where
$$\Pi_{j_{\mu}^{\nu}j_{\mu^{\prime}}^{\nu^{\prime}}}(n,\tau,n^{\prime},\tau^{\prime})=\left<j_{\mu n}^{\nu}(\tau)j_{\mu^{\prime}n^{\prime}}^{\nu^{\prime}}(\tau^{\prime})\right>_{\rm MF}\,,\tag{15}$$
$$\left<{\cal S}_{x}^{\rm dia}\right>_{\rm MF}=\frac{1}{2}\sum_{n,n^{\prime},\nu}t_{nn^{\prime}}\int d\tau\left<\phi_{n}^{\dagger}\phi_{n^{\prime}}\right>_{\rm MF}(A_{xn}^{\nu})^{2}\,.\tag{16}$$
The quantity $\Pi_{j_{\mu}^{\nu}j_{\mu^{\prime}}^{\nu^{\prime}}}$ denotes
the current-current correlation function in the mean-field state.
We note that
$\left<j_{\mu n}^{\nu}\right>_{\rm MF}=0$
in the long-wavelength limit [32]
and that
$A_{\mu n}^{\nu}$ is of the order $O(\nabla)$.
To order $O(\nabla^{2})$, we obtain
$${\cal S}_{\rm eff}=-\frac{1}{2}\sum_{\tilde{q}}\left\{\left<K\right>_{\rm MF}\sum_{\nu=x,y,z}|A_{x}^{\nu}(\tilde{q})|^{2}+\sum_{\mu,\mu^{\prime}(=0,x)}\sum_{\nu,\nu^{\prime}}A_{\mu}^{\nu}(\tilde{q})A_{\mu^{\prime}}^{\nu^{\prime}}(-\tilde{q})\,\Pi_{j_{\mu}^{\nu}j_{\mu^{\prime}}^{\nu^{\prime}}}(\tilde{q})\right\}\,,\tag{17}$$
$$\left<K\right>_{\rm MF}=\left<-\frac{1}{N}\sum_{n,n^{\prime}}t_{nn^{\prime}}\phi_{n}^{\dagger}\phi_{n^{\prime}}\right>_{\rm MF}\,,\tag{18}$$
where $\left<K\right>_{\rm MF}$ is the mean value of the kinetic
energy per site in the mean-field state, $\tilde{q}=(q,i\Omega)$, and
$\Omega$ is a bosonic Matsubara frequency.
The quantity $\Pi_{j_{\mu}^{\nu}j_{\mu^{\prime}}^{\nu^{\prime}}}(\tilde{q})$
is the Fourier transform of Eq. (15)
with respect to $n$ and $\tau$. In Eq. (17), it can be
evaluated at $\tilde{q}=0$ since $A_{\mu}^{\nu}\propto O(\nabla)$.
Note that
$\Pi_{j_{\mu}^{x}j_{\mu^{\prime}}^{y}}=\Pi_{j_{\mu}^{x}j_{\mu^{\prime}}^{z}}=\Pi_{j_{\mu}^{y}j_{\mu^{\prime}}^{z}}=0$
and
$\Pi_{j_{0}^{\nu}j_{x}^{\nu}}(\tilde{q})~{}\big{|}_{\tilde{q}=0}=0$.
Taking the continuum limit $n\rightarrow\xi$ (with $\xi$ a real
continuous variable) and writing
$A^{\nu}_{\mu n}=A^{\nu}_{\mu}(\xi,\tau)$, the effective action
(17) is rewritten as
$${\cal S}_{\rm eff}=-\frac{1}{2}\sum_{\tilde{q}}\sum_{\nu=x,y,z}\left\{\left<K\right>_{\rm MF}|A_{x}^{\nu}(\tilde{q})|^{2}+\sum_{\mu}|A_{\mu}^{\nu}(\tilde{q})|^{2}\,\Pi_{j_{\mu}^{\nu}j_{\mu}^{\nu}}\right\}=-\frac{1}{2}\int d\xi\,d\tau\left\{\left(\left<K\right>_{\rm MF}+\Pi_{j_{x}^{x}j_{x}^{x}}\right)\sum_{\nu=x,y}A_{x}^{\nu\,2}(\xi,\tau)+\Pi_{j_{0}^{x}j_{0}^{x}}\sum_{\nu=x,y}A_{0}^{\nu\,2}(\xi,\tau)\right\}-\frac{1}{2}\sum_{\tilde{q}}\left\{\left(\left<K\right>_{\rm MF}+\Pi_{j_{x}^{z}j_{x}^{z}}\right)|A_{x}^{z}(\tilde{q})|^{2}+\Pi_{j_{0}^{z}j_{0}^{z}}|A_{0}^{z}(\tilde{q})|^{2}\right\}\,,\tag{19}$$
where $\Pi_{j_{\mu}^{\nu}j_{\mu}^{\nu}}\equiv\Pi_{j_{\mu}^{\nu}j_{\mu}^{\nu}}(\tilde{q}=0)$
and $\Pi_{j_{\mu}^{x}j_{\mu}^{x}}=\Pi_{j_{\mu}^{y}j_{\mu}^{y}}$.
Here we note the identities
$\left<K\right>_{\rm MF}+\Pi_{j_{x}^{z}j_{x}^{z}}=0$ and
$\Pi_{j_{0}^{z}j_{0}^{z}}=0$,
which can be deduced from the gauge invariance
of Eq. (II) (Appendix A).
We have verified numerically the validity of these
identities. Finally, noting that
[33]
$$\sum_{\nu=x,y}A_{x}^{\nu\,2}(\xi,\tau)=\frac{1}{4}(\partial_{\xi}{\bf n})^{2}\,,\tag{20}$$
$$\sum_{\nu=x,y}A_{0}^{\nu\,2}(\xi,\tau)=-\frac{1}{4}(\partial_{\tau}{\bf n})^{2}\,,\tag{21}$$
we obtain the following final expression for the effective action of
the spin-wave modes [25, 31] (Appendix A):
$${\cal S}_{\rm eff}=\frac{1}{2}\int d\xi\,d\tau\left\{\chi(\partial_{\tau}{\bf n})^{2}+\rho(\partial_{\xi}{\bf n})^{2}\right\}\,,\tag{22}$$
where $\chi$ and $\rho$ are the uniform transverse spin susceptibility and
the spin stiffness, respectively:
$$\chi=\left<S_{\nu}S_{\nu}\right>_{\tilde{q}=0}^{\rm MF}=\frac{1}{4}\Pi_{j_{0}^{\nu}j_{0}^{\nu}}\,,\quad(\nu=x,y)\,,\tag{23}$$
$$\rho=-\frac{1}{4}\left(\left<K\right>_{\rm MF}+\Pi_{j_{x}^{\nu}j_{x}^{\nu}}\right)\,,\quad(\nu=x,y)\,.\tag{24}$$
From Eq. (22) we deduce the spin-wave velocity
$$v=\left(\frac{\rho}{\chi}\right)^{\frac{1}{2}}\,.\tag{25}$$
In the incommensurate case,
$\Pi_{j_{x}^{\nu}j_{x}^{\nu}}\rightarrow 0$
in the weak coupling limit so that $\rho=-\left<K\right>_{\rm MF}/4$.
[25]
As shown in the next section,
$\Pi_{j_{x}^{\nu}j_{x}^{\nu}}$ gives rise to a contribution of the same order
as $\left<K\right>_{\rm MF}$ in the quarter-filled case when
the on-site interaction $U$ is of the order of the bandwidth.
In Eqs. (23) and (24),
$j_{\mu}^{\nu}$ and $K$ can be expressed as
[$\phi_{k}=(\phi_{k\uparrow},\,\phi_{k\downarrow})^{t}$, $\nu=x,y$]
$$j_{0}^{\nu}(\tilde{q}=0)=\frac{1}{\sqrt{N}}\sum_{k}\phi_{k}^{\dagger}\sigma_{\nu}\phi_{k}\,,\tag{26}$$
$$j_{x}^{\nu}(\tilde{q}=0)=\frac{1}{\sqrt{N}}\sum_{k}\left(2t\sin k\,\phi_{k}^{\dagger}\sigma_{\nu}\phi_{k}-2it_{\rm d}\cos k\,\phi_{k}^{\dagger}\sigma_{\nu}\phi_{k+2Q_{0}}\right)\,,\tag{27}$$
$$K=\frac{1}{N}\sum_{k}\left(-2t\cos k\,\phi_{k}^{\dagger}\phi_{k}-2it_{\rm d}\sin k\,\phi_{k}^{\dagger}\phi_{k+2Q_{0}}\right)\,.\tag{28}$$
IV Spin-Wave Velocity
In this section, we evaluate the spin-wave velocity
at zero temperature ($T=0$). We take $t=1$
and calculate the velocity normalized to its value at
$V=V_{2}=0$ and $t_{\rm d}=0$.
The phase diagram of the present model as a function of $V$ and $V_{2}$
is shown in Fig. 1 for $U=4$ and $t_{\rm d}=0$ (solid curve).[8]
For small $V$ and $V_{2}$, there is a
pure 2$k_{\rm F}$ SDW state (region I). A large $V$ induces a phase with
both a 2$k_{\rm F}$ SDW and 4$k_{\rm F}$ CDW (region II), while in the presence
of a large $V_{2}$ there is coexistence between a 2$k_{\rm F}$ SDW, a
2$k_{\rm F}$ CDW and a 4$k_{\rm F}$ SDW (region III).
The dashed curve denotes the boundary
at which a first order transition occurs between II and III.
The dash-dotted curve shows the phase diagram for $t_{\rm d}=0.1$.
The sliding modes are gapped in all three regions. However, the charge
fluctuations become gapless at the transition
between I and II. We discuss below the spin-wave velocity
[Eq. (25)] as a function of $V$ and $V_{2}$ for both $t_{\rm d}=0$
and $t_{\rm d}\not=0$.
IV.1 $U$ dependence ($V=V_{2}=0$ and $t_{\rm d}=0$)
The spin stiffness $\rho$ and the susceptibility $\chi$ are shown in
Fig. 2(a) as a function of $U$ for $V=V_{2}=0$ and $t_{\rm d}=0$.
Both $\rho$ and $\chi$ are almost constant for small $U$
and decrease monotonically for large $U$.
The inset shows the corresponding
$U$-dependence for $\left<K\right>_{\rm MF}$ and
$\Pi_{j_{x}^{x}j_{x}^{x}}$ which determine $\rho$ [Eq. (24)].
A behavior similar to
the incommensurate case
is seen for $U\lesssim 2$:
$\Pi_{j_{x}^{x}j_{x}^{x}}$ is vanishingly small, and
$\chi$, $\rho$ and $\left<K\right>_{\rm MF}$ are almost constant
with respect to $U$.
The limiting values for small $U$ are given by
$\chi=1/(2\sqrt{2}\pi)\simeq 0.113$,
$\left<K\right>_{\rm MF}=-2\sqrt{2}/\pi\simeq-0.90$, and $v=\sqrt{2}$.
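As a quick arithmetic check of these limiting values (a sketch added here, not part of the original calculation), one can reproduce the quoted numbers in a few lines; since $\Pi_{j_{x}^{x}j_{x}^{x}}$ is vanishingly small in this limit, Eq. (24) reduces to $\rho=-\left<K\right>_{\rm MF}/4$:

```python
import math

# Weak-coupling (small-U) limits at quarter-filling, as quoted in the text
chi = 1.0 / (2.0 * math.sqrt(2.0) * math.pi)   # uniform transverse susceptibility
K_MF = -2.0 * math.sqrt(2.0) / math.pi          # kinetic energy per site

# Pi_{j_x j_x} -> 0 in this limit, so rho = -<K>_MF / 4  [Eq. (24)]
rho = -K_MF / 4.0
v = math.sqrt(rho / chi)                        # spin-wave velocity, Eq. (25)

print(round(chi, 3), round(K_MF, 2), round(v, 4))  # → 0.113 -0.9 1.4142
```

The ratio $\rho/\chi$ equals exactly $2$ in this limit, so $v=\sqrt{2}$ independently of the rounding above.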
The variation of these quantities for $U\gtrsim 2$
comes from the effect of commensurability at quarter-filling.
In Fig. 2(b) (solid curve), we show
the spin-wave velocity $v$ [Eq. (25)], which
is almost independent of $U$ although slightly
suppressed at large $U$.
Here we note that we have neglected the coupling to long-wavelength
spin fluctuations. In the Hubbard model ($V=V_{2}=0$), the
spin-wave velocity $v=(\rho/\chi)^{1/2}$ becomes $(\rho/\chi)^{1/2}[1-2\chi U]^{1/2}$ when this coupling is taken into account within the
RPA. [27] One obtains $1-2\chi U=1-UN(0)$
in the weak-coupling limit
where $N(0)=1/\sqrt{2}\pi$ at quarter-filling.
In Fig. 2(b), we show $v$ and $v[1-2\chi U]^{1/2}$ (dashed
curve). The open circles denote the exact result for the
one-dimensional Hubbard model. [21]
For $U\lesssim 2$, the RPA result
turns out to be a good approximation, while the difference becomes
noticeable at larger $U$.
Nevertheless we use $v=(\rho/\chi)^{1/2}$
as a first step to examine the spin-wave velocity
as a function of $V$ and $V_{2}$.
The present calculation is performed
by choosing $U=4$,
which leads to $v\simeq 1.29$ for $V=V_{2}=0$ and $t_{\rm d}=0$.
IV.2 $V$ dependence ($V_{2}=0$ and $t_{\rm d}=0$)
Now we consider the $V$ dependence of the spin-wave velocity for
$V_{2}=0$, $t_{\rm d}=0$ and $U/t=4$. Contrary to the weak-coupling limit,
which can be studied analytically as in the incommensurate
case,[25] this intermediate coupling regime requires a
numerical calculation.
Figure 3 shows the $V$-dependence of $v$, $\chi$ and $\rho$ (all
quantities are normalized to their value at $V=V_{2}=0$ and $t_{\rm d}=0$).
The arrow indicates the critical value $V_{c}=0.34$ separating regions I
($S_{1}\not=0$) and II ($S_{1},D_{2}\not=0$). In region II ($V>V_{c}$),
both $\rho$ and $\chi$ decrease with increasing $V$. The stronger
decrease of $\rho$ results in a decrease of the spin-wave
velocity. For large $V$, both the spin stiffness
and the spin-wave
velocity vanish. It seems that the decrease of $v$ in region II
mainly comes from the reduction
of kinetic energy due to the formation of the $4k_{\rm F}$-CDW.
Note that the spin-wave velocity is discontinuous at the critical
value $V=V_{c}$. The small jump at $V_{c}$ originates in the discontinuity
of $S_{1}$ and $D_{2}$ (see inset of Fig. 3) which is found only for $t_{\rm d}=0$.
[6]
IV.3 $V_{2}$ dependence ($t_{\rm d}=0$)
In this section, we analyze the $V_{2}$ dependence of the spin-wave
velocity for $U=4$, $t_{\rm d}=0$ and different values of $V$.
Figure 4 shows $v/v^{0}$, $\chi/\chi^{0}$ and $\rho/\rho^{0}$ in the case
$V=0$ (the inset shows $S_{1}$, $D_{1}$ and $S_{2}$ as a function
of $V_{2}$). There is a transition between regions I and III at the critical
value $V_{2c}$. $v/v^{0}$, $\chi/\chi^{0}$ and $\rho/\rho^{0}$ are constant for
$V_{2}<V_{2c}$, and decrease for $V_{2}>V_{2c}$ (note that $v$ actually
slightly increases at large $V_{2}$). However, all these
quantities remain finite in the limit of large $V_{2}$. This is to be
contrasted to the large-$V$ limit (region II) where the spin-wave
velocity vanishes (Fig. 3). Such a behavior can be understood as
follows. For $V_{2}\to\infty$ (region III), the spin- and charge-density
waves in the ground-state are of the type ($\uparrow$,$\downarrow$,0,0)
and (1,1,0,0), respectively. Our numerical calculation shows that this
behavior already shows up for $V_{2}/t\simeq 4$. In this limit ($V_{2}/t\gtrsim 4$),
the one-dimensional chain divides into independent two-site
clusters. For this problem, one can find the exact expression of the
spin-wave velocity (Appendix B):
$$v/v^{0}=\left(\frac{\rho}{\chi}\right)^{1/2}/v^{0}=(t-t_{\rm d})/v^{0}\,.\tag{1}$$
For $U=4$, $v^{0}=1.286$, so that $v/v^{0}=$ 0.777. For $V_{2}/t=4$ and
13, the numerical calculation gives $v/v^{0}=$ 0.763 and 0.776,
respectively, in excellent agreement with the analytical result of the
two-site problem.
Now we consider the $V_{2}$-dependence of $\chi/\chi^{0}$, $\rho/\rho^{0}$
and $v/v^{0}$ for $V=$ 0, 1, 2 and 3. For $V=1$, there is first a transition
from region II to region I, and then a transition from I to III. For
$V=2$ or 3, there is a single transition occurring between II and
III. The ratios $\chi/\chi^{0}$ and $\rho/\rho^{0}$ (inset) exhibit a similar
behavior (Fig. 5(a)). They are constant in region I, and increase
(decrease) in II (III) when $V_{2}$ increases. Figure 5(b) shows the
spin-wave velocity $v/v^{0}$ which turns out to be mainly determined by
$\rho/\rho^{0}$. Except in region I and for large values of $V_{2}$, $v$
varies strongly as a function of $V_{2}$.
Here we comment on the fact that $v$ remains finite at large $V_{2}$.
Within the mean-field treatment, which is expected to
be valid for a moderate coupling between chains, both $\chi$ and
$\rho$ remain finite at large $V_{2}$. On the other hand, for
one-dimensional systems it is known from bosonization that $\chi$
vanishes at large $V_{2}$ due to the formation of a spin
gap. [34]
Thus, we expect our mean-field analysis in
region III of Fig. 1 to break down when the interchain coupling
becomes sufficiently small.
IV.4 Effect of dimerization
Finally, we consider the effect of dimerization on the spin-wave
velocity $v$. Figure 6(a) shows the $V$-dependence for $U=4$, $V_{2}=0$
and $t_{\rm d}=0$ (solid curve), 0.1 (dotted curve), 0.3 (dashed curve) and
0.5 (dash-dotted curve). The effect of dimerization is large in region
I, but rather small in region II. A finite $t_{\rm d}$ increases the band
gap. This induces a suppression of $\Pi_{j_{x}^{x}j_{x}^{x}}$ and $\rho$, and
leads ultimately to a reduction of the spin-wave velocity. We note that
the reduction of $S_{1}$ and $D_{2}$ in region II by the dimerization has
little effect on $v$, since the dependence of $S_{1}$ and $D_{2}$ on
dimerization is very small for $V\lesssim 4$.
Figure 6(b) shows the $V_{2}$-dependence of $v$ for $U=4$, $V=0$ and
$t_{\rm d}=0$ (solid curve), 0.1 (dotted curve), 0.3 (dashed curve) and 0.5
(dash-dotted curve). The effect of dimerization is noticeable in both
regions I and III. The limiting behavior for large $V_{2}$ is given by
Eq. (1). In that limit, the SDW exists for $U>2(t-t_{\rm d})$
and the spin-wave velocity $v$ depends only on $t$ and $t_{\rm d}$.
V Conclusion
In conclusion, the nearest- and next-nearest-neighbor interactions
strongly affect the
spin-wave velocity in the intermediate coupling regime $U\sim 4t$. Our
main results are as follows. (i) In the pure SDW state (region I), the
spin-wave velocity $v$ is independent of the nearest ($V$) and
next-nearest ($V_{2}$) interaction (Fig. 3). (ii) For coexisting
$2k_{\rm F}$ SDW and $4k_{\rm F}$ CDW (region II), $v$ decreases (increases) as
a function of $V$ ($V_{2}$) [Figs. 3 and 5(b)]. It is slightly
discontinuous at the transition between I and II and vanishes (as well
as the spin stiffness) at large $V$ (Fig. 3). (iii) For coexisting
$2k_{\rm F}$ SDW, $2k_{\rm F}$ CDW and $4k_{\rm F}$ SDW (region III), $v$ is
suppressed by $V_{2}$. It tends to a finite value at large
$V_{2}$ [Figs. 4 and 5(b)]. (iv) The dimerization decreases the spin-wave
velocity [Figs. 6(a) and (b)].
As discussed in Sec. III.B, our approach is limited to the weak to
intermediate coupling regime and should hold when $U,V,V_{2}\lesssim 4t$. In the half-filled Hubbard model, a strong coupling is known to
reduce the spin-wave velocity from $v=O(t)$ to $v=O(J)$ (with
$J=4t^{2}/U\ll t$). We also expect a decrease of the spin-wave velocity
in the more general case we have studied when $U,V,V_{2}$ become larger
than $4t$. Therefore, our main conclusion (a reduction of the
spin-wave velocity by the interactions $V,V_{2}$) is likely to be strengthened
by strong coupling effects. The Stoner factor $(1-2\chi U)^{1/2}$,
which arises from the coupling to long-wavelength spin fluctuations,
was not considered in our analysis. It leads to a decrease of
$v$ when the on-site interaction $U$ increases. Whether the Stoner
factor depends on the interactions $V$ and $V_{2}$ remains an open
question.
In the compounds that have been studied experimentally,
[4, 5] the Bechgaard
salts (TMTSF)${}_{2}$PF${}_{6}$ and (TMTSF)${}_{2}$AsF${}_{6}$, and the
Fabre salt (TMTTF)${}_{2}$Br, the electron-electron interaction is
expected to be in the intermediate coupling regime ($U\sim 4t$). Furthermore, estimates by Mila[36] and quantum-chemistry
calculations[37] have revealed a sizable finite-range part of the Coulomb
potential, with the first-neighbor interaction $V$ equal to or even larger than
$U/2$. We therefore think that our conclusions are relevant to the
Bechgaard-Fabre salts studied in Refs. 4
and 5.
Acknowledgments
We (Y.T. and Y.S.) thank H. Sakanaka for useful discussion.
This work was financially supported by
Université Paris–Sud, France and
a Grant-in-Aid
for Scientific Research from the Ministry of Education,
Science, Sports and Culture (Grant No.09640429), Japan.
It was also supported by Core Research for Evolutional Science
and Technology (CREST), Japan Science and Technology Corporation (JST).
Appendix A Derivation of eq. (3.22)
We rewrite Eq. (19) as
$${\cal S}_{\rm eff}=\frac{1}{2}\int d\xi\,d\tau\left\{\chi(\partial_{\tau}{\bf n})^{2}+\rho(\partial_{\xi}{\bf n})^{2}\right\}-\frac{1}{2}\sum_{\tilde{q}}\left\{\left(\left<K\right>_{\rm MF}+\Pi_{j_{x}^{z}j_{x}^{z}}\right)|A_{x}^{z}(\tilde{q})|^{2}+\Pi_{j_{0}^{z}j_{0}^{z}}|A_{0}^{z}(\tilde{q})|^{2}\right\}\,,\tag{1}$$
where $\rho$ and $\chi$ are given by
$$\rho=-\frac{1}{4}\left(\left<K\right>_{\rm MF}+\Pi_{j_{x}^{x}j_{x}^{x}}\right)=-\frac{1}{4}\left(\left<K\right>_{\rm MF}+\Pi_{j_{x}^{y}j_{x}^{y}}\right)\,,\tag{2}$$
and
$$\chi=\left<S_{\nu}S_{\nu}\right>_{\rm MF}=\frac{1}{4}\Pi_{j_{0}^{\nu}j_{0}^{\nu}}\,,\qquad\nu=x,y\,.\tag{3}$$
To show that the second term of Eq. (1) vanishes, we use
the invariance of the action under the gauge transformation
$A_{\mu}^{z}(\xi,\tau)\longrightarrow A_{\mu}^{z}(\xi,\tau)+\frac{1}{2}\partial_{\mu}\Lambda(\xi,\tau)$
($\mu=\xi$ or $\tau$) [33]. This transformation
corresponds to a rotation of
${\bf n}_{\rm MF}=\hat{z}$ around the $\hat{z}$ axis and does not
change the state of the system. The invariance of the action in this
gauge transformation implies
$$-\frac{1}{2}\sum_{\tilde{q}}\biggl\{\left(\left<K\right>_{\rm MF}+\Pi_{j_{x}^{z}j_{x}^{z}}\right)\left[\frac{1}{4}q_{x}^{2}|\Lambda(\tilde{q})|^{2}-iA_{x}^{z}(\tilde{q})q_{x}\Lambda(-\tilde{q})\right]+\Pi_{j_{0}^{z}j_{0}^{z}}\left[\frac{1}{4}\Omega^{2}|\Lambda(\tilde{q})|^{2}+iA_{0}^{z}(\tilde{q})\Omega\Lambda(-\tilde{q})\right]\biggr\}=0\,.\tag{4}$$
Since Eq. (4) should be valid for an arbitrary
function $\Lambda$, we deduce
$$\left<K\right>_{\rm MF}+\Pi_{j_{x}^{z}j_{x}^{z}}=0\,,\tag{5}$$
$$\Pi_{j_{0}^{z}j_{0}^{z}}=0\,,\tag{6}$$
which lead to the vanishing of the second line of Eq. (1).
Equations (5) and (6) can also be obtained
from the U(1) electromagnetic field gauge invariance.
Noting that $\Pi_{j_{\mu}^{z}j_{\mu}^{z}}=\Pi_{j_{\mu}^{0}j_{\mu}^{0}}$,
Eqs. (5) and (6) can be rewritten as
$$\left<K\right>_{\rm MF}+\Pi_{j_{x}^{0}j_{x}^{0}}=0\,,\tag{7}$$
$$\Pi_{j_{0}^{0}j_{0}^{0}}=0\,.\tag{8}$$
We recognize here the components of the polarization tensor for the
usual U(1) electromagnetic gauge field. Equations (7)
and (8)
follow from (electromagnetic) gauge invariance. [35]
Appendix B Limiting case of large $V_{2}$
When $V_{2}\rightarrow\infty$, the mean-field solution in region III
corresponds to that of a half-filled two-site system given by
$$H=-(t-t_{\rm d})\sum_{\sigma}\left(C_{1\sigma}^{\dagger}C_{2\sigma}+{\rm H.c.}\right)+U\left(n_{1\uparrow}n_{1\downarrow}+n_{2\uparrow}n_{2\downarrow}\right)+V\sum_{\sigma,\sigma^{\prime}}n_{1\sigma}n_{2\sigma^{\prime}}\,,\tag{9}$$
where $n_{1\sigma}\,(n_{2\sigma})=C_{1\sigma}^{\dagger}C_{1\sigma}\,(C_{2\sigma}^{\dagger}C_{2\sigma})$
and $C_{1\sigma}^{\dagger}\,(C_{2\sigma}^{\dagger})$ denote
the creation operators of an electron at site 1 (2)
with spin $\sigma$. The mean-field equations are given by
$$\sum_{\sigma}\left<C_{1\sigma}^{\dagger}C_{1\sigma}\right>=\sum_{\sigma}\left<C_{2\sigma}^{\dagger}C_{2\sigma}\right>=1\,,\tag{10}$$
$$\sum_{\sigma}\left<C_{1\sigma}^{\dagger}C_{1\sigma}\right>{\rm sgn}(\sigma)=\Delta\,,\tag{11}$$
$$\sum_{\sigma}\left<C_{2\sigma}^{\dagger}C_{2\sigma}\right>{\rm sgn}(\sigma)=-\Delta\,,\tag{12}$$
where the average $\left<\;\;\right>$
is performed with the mean-field Hamiltonian
$$H_{\rm MF}=-(t-t_{\rm d})\sum_{\sigma}\left(C_{1\sigma}^{\dagger}C_{2\sigma}+{\rm H.c.}\right)+\sum_{\sigma}\left[\left(\frac{U}{2}+V-{\rm sgn}(\sigma)\frac{U}{2}\Delta\right)C_{1\sigma}^{\dagger}C_{1\sigma}+\left(\frac{U}{2}+V+{\rm sgn}(\sigma)\frac{U}{2}\Delta\right)C_{2\sigma}^{\dagger}C_{2\sigma}\right]-\frac{U}{2}-V+\frac{U}{2}\Delta^{2}\,.\tag{13}$$
From Eqs. (10), (11), (12) and
(13),
the self-consistency equation for $\Delta$ is expressed as
$$1=\frac{U/2}{\sqrt{[(U/2)\Delta]^{2}+(t-t_{\rm d})^{2}}}\,,\tag{14}$$
where the chemical potential is $\mu=U/2+V$ at half-filling.
The solution of Eq. (14) is obtained as
$$\Delta=\pm\sqrt{1-\left(\frac{2(t-t_{\rm d})}{U}\right)^{2}}\,,\tag{15}$$
for $U/(t-t_{\rm d})>2$.
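The closed-form solution (15) can be checked numerically. The sketch below (the helper names `gap_rhs` and `solve_gap` are ours, not from the paper) solves the self-consistency equation (14) by bisection, using the fact that its right-hand side decreases monotonically with $\Delta$, and compares against Eq. (15):

```python
import math

def gap_rhs(delta, U, t, td):
    """Right-hand side of the self-consistency equation (14)."""
    return (U / 2.0) / math.sqrt(((U / 2.0) * delta) ** 2 + (t - td) ** 2)

def solve_gap(U, t=1.0, td=0.0, tol=1e-12):
    """Solve 1 = gap_rhs(Delta) by bisection on (0, 1]; assumes U/(t-td) > 2."""
    lo, hi = 1e-12, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # gap_rhs decreases with Delta: rhs > 1 means Delta is still too small
        if gap_rhs(mid, U, t, td) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

U, t, td = 4.0, 1.0, 0.0
delta_num = solve_gap(U, t, td)
delta_exact = math.sqrt(1.0 - (2.0 * (t - td) / U) ** 2)  # Eq. (15)
print(delta_num, delta_exact)  # both ≈ 0.866 for U = 4, t = 1, td = 0
```

For $U=4$ and $t-t_{\rm d}=1$ the bisection converges to $\Delta=\sqrt{3}/2\simeq 0.866$, in agreement with Eq. (15).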
By using Eq. (15), we compute
the uniform transverse spin susceptibility ($\chi^{\prime}$)
and the spin stiffness ($\rho^{\prime}$):
$$\chi^{\prime}\equiv\frac{1}{2}\sum_{n,n^{\prime}=1,2}\left[\frac{1}{4}\left<j^{x}_{0}(n)j^{x}_{0}(n^{\prime})\right>\Big|_{i\Omega=0}\right]=\frac{1}{2U}-\frac{2(t-t_{\rm d})^{2}}{U^{3}}\,,\tag{16}$$
$$\rho^{\prime}=-\frac{1}{4}\left(\left<K^{\prime}\right>+\Pi_{j^{x}_{x}j^{x}_{x}}^{\prime}\right)=(t-t_{\rm d})^{2}\left(\frac{1}{2U}-\frac{2(t-t_{\rm d})^{2}}{U^{3}}\right)\,,\tag{17}$$
where the kinetic energy per site ($\left<K^{\prime}\right>$) and
the spin current-current correlation function ($\Pi_{j^{x}_{x}j^{x}_{x}}^{\prime}$)
are given by
$$\left<K^{\prime}\right>\equiv\frac{1}{2}\left<-(t-t_{\rm d})\sum_{\sigma}\left(C_{1\sigma}^{\dagger}C_{2\sigma}+{\rm H.c.}\right)\right>=-\frac{2(t-t_{\rm d})^{2}}{U}\,,\tag{18}$$
$$\Pi_{j^{x}_{x}j^{x}_{x}}^{\prime}\equiv\frac{1}{2}\sum_{n,n^{\prime}=1,2}\left<j^{x}_{x}(n)j^{x}_{x}(n^{\prime})\right>\Big|_{i\Omega=0}=\frac{8(t-t_{\rm d})^{4}}{U^{3}}\,,\tag{19}$$
with
$$j^{x}_{0}(n)=\sum_{\sigma}C_{n,\sigma}^{\dagger}C_{n,-\sigma}\,,\tag{20}$$
$$j^{x}_{x}(n)=-\frac{i(t-t_{\rm d})}{2}\sum_{\sigma}\left(C_{1,\sigma}^{\dagger}C_{2,-\sigma}-C_{2,\sigma}^{\dagger}C_{1,-\sigma}\right)\,.\tag{21}$$
By noting that $\chi=\chi^{\prime}/2$ and $\rho=\rho^{\prime}/2$, we obtain the
spin-wave velocity of the one-dimensional system [Eq. (1)] in
the limit $V_{2}\to\infty$ as
$$v=\left(\frac{\rho}{\chi}\right)^{1/2}=t-t_{\rm d}\,.\tag{22}$$
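As a cross-check (a sketch with a hypothetical helper name, not from the paper), the closed forms (16)-(19) indeed give $\rho^{\prime}/\chi^{\prime}=(t-t_{\rm d})^{2}$, so the velocity $v=t-t_{\rm d}$ comes out independent of $U$ for any $U>2(t-t_{\rm d})$:

```python
# Sanity check of the two-site limit: with the closed forms (16)-(19)
# quoted above, rho'/chi' = (t - t_d)^2, hence v = t - t_d.
def two_site(U, t, td):
    chi_p = 1.0 / (2.0 * U) - 2.0 * (t - td) ** 2 / U ** 3   # Eq. (16)
    K_p = -2.0 * (t - td) ** 2 / U                            # Eq. (18)
    Pi_p = 8.0 * (t - td) ** 4 / U ** 3                       # Eq. (19)
    rho_p = -(K_p + Pi_p) / 4.0                               # Eq. (17)
    return (rho_p / chi_p) ** 0.5                             # Eq. (22)

# The spin-wave velocity equals t - t_d, e.g. for U = 4, t = 1:
print(two_site(4.0, 1.0, 0.0), two_site(4.0, 1.0, 0.1))  # → 1.0 0.9
```

The overall factors of $1/2$ in $\chi=\chi^{\prime}/2$ and $\rho=\rho^{\prime}/2$ cancel in the ratio, so the same result holds for $v$.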
References
[1]
D. Jérome and H.J. Schulz, Adv. Phys. 31, 299 (1982).
[2]
T. Ishiguro and K. Yamaji,
Organic Superconductors (Springer, Berlin, 1990).
[3]
G. Grüner,
Density Waves in Solids (Addison-Wesley, New York, 1994);
Rev. Mod. Phys. 60, 1129 (1988); 66, 1 (1994).
[4]
J. P. Pouget and S. Ravy,
J. Phys. I 6, 1501 (1996);
Synth. Met. 85, 1523 (1997).
[5]
S. Kagoshima, Y. Saso, M. Maesato, R. Kondo and T. Hasegawa,
Solid State Commun. 110, 479 (1999).
[6]
H. Seo and H. Fukuyama,
J. Phys. Soc. Jpn. 66, 1249 (1997).
[7]
N. Kobayashi, M. Ogata and K. Yonemitsu,
J. Phys. Soc. Jpn. 67, 1098 (1998).
[8]
Y. Tomio and Y. Suzumura,
J. Phys. Soc. Jpn. 69, 796 (2000).
[9]
Y. Tomio and Y. Suzumura,
J. Phys. Chem. Solids 62, 431 (2001).
[10]
P. A. Lee, T. M. Rice, and P. W. Anderson,
Solid State Commun. 14, 703 (1974).
[11]
S. Takada,
J. Phys. Soc. Jpn. 53, 2193 (1984).
[12]
G.C. Psaltakis,
Solid State Commun. 51, 535 (1984).
[13]
K. Maki and A. Virosztek,
Phys. Rev. B 36, 511 (1987).
[14]
K. Maki and A. Virosztek,
Phys. Rev. B 41, 557 (1990).
[15]
K. Maki and A. Virosztek,
Phys. Rev. B 42, 655 (1990).
[16]
Y. Suzumura,
J. Phys. Soc. Jpn. 59, 1711 (1990).
[17]
S. Brazovski and I. Dzyaloshinskii,
Zh. Éksp. Teor. Fiz. 71, 2338 (1976)
[Sov. Phys. JETP 44, 1233 (1976)].
[18]
Y. Suzumura and N. Tanemura,
J. Phys. Soc. Jpn. 64, 2298 (1995).
[19]
Y. Suzumura,
J. Phys. Soc. Jpn. 66, 3244 (1997).
[20]
N. Tanemura and Y. Suzumura,
Prog. Theor. Phys. 96, 869 (1996).
[21]
H. J. Schulz,
Int. J. Mod. Phys. B 5, 57 (1991).
[22]
S. Wen and A. Zee,
Phys. Rev. Lett. 61, 1025 (1988).
[23]
H. J. Schulz,
Phys. Rev. Lett. 65, 2462 (1990);
H. J. Schulz in The Hubbard Model, edited by D. Baeriswyl
${\it et\,al.}$, (Plenum Press, New York, 1995).
[24]
Z. Y. Weng, C. S. Ting and T. K. Lee,
Phys. Rev. B 43, 3790 (1991).
[25]
K. Sengupta and N. Dupuis,
Phys. Rev. B 61, 13493 (2000).
[26]
D. J. Amit and H. Keiter,
J. Low Temp. Phys. 11, 603 (1973).
[27]
D. Poilblanc and P. Lederer,
Phys. Rev. B 37, 9650 (1987); 37, 9672 (1987).
[28]
Fluctuations of $\Delta_{s}$ at wave vectors around $Q_{0}$
correspond to gapped amplitude fluctuations which do not couple to the
transverse spin-wave modes. (When two SDW’s coexist in the
ground-state, there are both in-phase and out-of-phase amplitude
fluctuations.)
[29]
Collective modes in a system with two SDW’s have
been studied by
N. Dupuis and V. M. Yakovenko, Phys. Rev. B 61, 12888 (2000).
[30]
J.R. Schrieffer, X.G. Wen and S.C. Zhang,
Phys. Rev. B 39, 11663 (1989).
[31]
A. Auerbach,
Interacting Electrons and Quantum Magnetism
(Springer Verlag, New York, 1994).
[32]
We neglect here the Berry phase term $S_{\rm Berry}=-\sum_{n}\int d\tau\langle j_{0n}^{z}\rangle A^{z}_{0n}$ which is
expected to be irrelevant when interchain coupling is taken into
account.
[33]
A. M. J. Schakel, cond-mat/9805152
[34]
H. Yoshioka, M. Tsuchiizu and Y. Suzumura,
J. Phys. Chem. Solids 62, 419 (2001).
[35]
See, for instance, E. Fradkin, Field Theories of
Condensed-Matter Systems (Addison-Wesley, Redwood City, CA, 1991).
[36]
F. Mila, Phys. Rev. B 52, 4788 (1995).
[37]
F. Castet, A. Fritsch and L. Ducasse, J. Phys. I
6, 583 (1996). |
Towards 6G Internet of Things: Recent Advances, Use Cases, and Open Challenges
Zakria Qadir, Hafiz Suliman Munawar, Nasir Saeed, and Khoa Le
Zakria Qadir and Khoa Le are with the School of Engineering, Design and Built Environment, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia. Hafiz Suliman Munawar is a PhD student at the University of New South Wales, Kensington, Sydney, NSW 2052, Australia.
Nasir Saeed is with the Department of Electrical Engineering, National University of Technology, Islamabad, Pakistan.
Abstract
Smart services based on the Internet of Everything (IoE) are gaining considerable popularity due to the ever-increasing demands of wireless networks, which calls for next-generation communication systems with enhanced capabilities. Although 5G networks show great potential to support numerous IoE-based services, they are not adequate to meet the complete requirements of the new smart applications. Therefore, there is an increased demand for envisioning 6G wireless communication systems that overcome the major limitations of existing 5G networks.
Moreover, incorporating artificial intelligence in 6G will provide solutions for very complex problems of network optimization. Furthermore, to add value to future 6G networks, researchers are investigating new technologies, such as THz and quantum communications. Future 6G wireless communications must support massive data-driven applications and a rapidly increasing number of users. This paper presents recent advances in 6G wireless networks, including the evolution from 1G to 5G communications, the research trends for 6G, enabling technologies, and state-of-the-art 6G projects.
Index Terms:
6G, Wireless communication, Internet of Everything, Smart cities
I Introduction
The upgrade of mobile communication systems to a more advanced generation usually occurs with every turn of a decade [1]. Following this convention, mobile communication systems entered their fifth generation (5G) in 2020, four decades after their inception in the 1980s. 5G is dubbed by many as the pinnacle of mobile communication technology [2]. 5G and the preceding fourth generation (4G, often known as LTE-Advanced) have built an Internet-of-Things (IoT) enabled, intelligent, service- and application-oriented ecosystem [3].
More prominently, 5G offers a triad of characteristics, namely enhanced Mobile BroadBand (eMBB), massive Machine-Type Communications (mMTC), and ultra-Reliable Low-Latency Communications (uRLLC), which were particularly aimed at overcoming the limitations of 4G [4]. For example, in comparison to 4G, 5G networks are expected to provide a peak data rate of 20 Gbps, 3x spectral efficiency, 100 times improved energy efficiency, and a Gbps user experience with an end-to-end latency of 1 ms [5]. 5G would also support seamless connectivity for devices with mobility of 500 km/h, a connection density of 1 million devices/km${}^{2}$, and an area traffic capacity of 10 Mbps/m${}^{2}$ [6]. The 5G networks have been anticipated to facilitate an extensive range of smart IoE-related services; however, they will not be sufficient to meet the requirements of future smart communities [7].
Since smart cities are automating our surroundings by adding a digital layer on top of the traditional infrastructure, stakeholder demand is increasing rapidly. Appropriate management of this digitization, providing ubiquitous solutions for smart cities, disaster management, and other services, is therefore becoming critically important. Considering the forthcoming developments in wireless technologies, particularly in smart cities, 5G may fail to meet the expectations placed on 6G, for the following reasons:
•
given the rapid growth of IoT devices providing wireless connectivity in smart cities, there is an urgent need for improvements that deliver reliable connectivity in dense networks [18].
•
the introduction of flying cars, extended reality (XR), and telemedicine requires high data rates, low latency, and robustness of cellular networks, which can only be achieved with the envisioned 6G networks shown in Figure 1.
•
future cellular networks are expected to be robust, highly dynamic, complex, and embedded on ultra-large-scale chips. However, the current network architectures of both 4G and 5G are fixed to tackle dedicated tasks only [19]. 6G requires a state-of-the-art dynamic architecture that can optimize itself based on user demands.
Table I describes the symbols used in this article.
I-A Related Surveys
Many studies in recent years have focused on 6G networks, their facilitating technologies, architectures, and open research challenges. For instance, the authors of [8] present a systematic review of 6G wireless communication from a security and privacy perspective using blockchain technology, critically examining the architectural failure modes of a security system. The authors of [9] discuss in depth the role of 6G communication in several IoT applications in the domains of healthcare, industry, autonomous vehicles, and satellite linkage using UAVs.
Reference [10] discusses main system model parameters such as latency, energy consumption, and network mobility. The limitations of existing 5G communications, together with the advances expected from 6G, are highlighted in [11].
In [12], the end-to-end transmission flow is surveyed with a focus on network access and robust routing control. Reference [13] introduces several machine learning applications in the 6G-aided IoT domain, along with blockchain from a privacy and security perspective. In [14], the challenges associated with terrestrial satellite networks are studied with the aim of improving performance parameters such as channel fading, transmission delay, trajectory, and area coverage.
Moreover, the use cases related to the 6G architecture and requirements are broadly categorized in [15]. In [17], the authors study resource allocation problems for next-generation heterogeneous networks from the prospect of 6G.
I-B Main Contributions
Unlike existing works, this survey addresses state-of-the-art 6G wireless communication, recent advances, use cases, and open challenges. A detailed comparison between existing articles and our survey is shown in Table II.
We focus on several important aspects of the envisioned 6G networks, such as robust connectivity, communication latency, edge computing, UAV applications, and security issues. The reviewed literature spans the past five years, focusing on recent trends and future research directions. The main contributions of this survey are summarized as follows:
•
we discuss in detail the important parameters of 6G technologies that were not fully optimized in 5G, including higher data rates, lower latency, improved reliability and accuracy, much higher energy efficiency, AI-IoT-based wireless connectivity, and 3D MIMO-oriented signal coverage.
•
the role of 6G in security and privacy is studied, particularly from the perspective of wireless connectivity.
•
a systematic framework is designed to emphasize the applications of 6G in the domains of the smart home, smart industry, smart fire detection, and smart parking, thus anticipating the smart city concept.
•
an extensive comparison between 6G and the previous communication technologies is carried out to highlight the shortcomings in the previous architectures.
I-C Organization of this paper
The remainder of this paper is organized as follows. Section I compares existing studies and explains why this survey is significant for researchers in the context of 6G. Section II elaborates on the evolution of mobile communication networks from 1G to 6G. Section III examines research and marketing trends in mobile communication networks. Section IV highlights the network requirements for 6G communication. The essential enabling technologies for the 6G network are elaborated in Section V. Finally, we present conclusions in Section VI.
II Evolution of Mobile Communication Networks
Mobile communication networks have bloomed since the first analog communication networks were developed during the 1980s. The considerable advances achieved were not the result of a single step but rather of gradual changes over several generations, each with its own aims, standards, capacities, perspectives, and technologies. It is well established that a new generation is introduced roughly every ten years [20]. The evolution of mobile communication networks from 1G to 6G is depicted in Figure 2, and a detailed comparison of the technologies is presented in Table III.
II-A Cellular Evolution: 1G TO 5G
During the mid-1980s, first-generation (1G) mobile networks were designed to support the wireless transmission of voice based on analog transmission, with a peak data rate of up to 2.4 kbps. However, due to the absence of universal standards, the system suffered from many drawbacks, including poor security, low transmission efficiency, and problematic hand-off [21].
Digital modulation technologies, including Time Division Multiple Access (TDMA) and Code Division Multiple Access (CDMA), were employed to develop the 2G systems, with better voice quality, short message services (SMS), and a data rate of about 64 kbps; the Global System for Mobile Communication (GSM) became the dominant mobile communication standard [22]. The high-speed 3G network was introduced in the early 2000s, providing quick access to the Internet and a data transfer rate of about 2 Mbps, thus offering advanced services (i.e., web browsing, TV streaming, navigational maps, video services, etc.) in comparison to the 1G and 2G networks [23].
The IP-based 4G network was introduced in the late 2000s to 1) improve overall spectral efficiency, 2) decrease latency, 3) provide high-speed downlink data rates of 1 Gbit/s and uplink data rates of about 500 Mbit/s, and 4) accommodate Digital Video Broadcasting (DVB), High Definition TV content, and video chat. The 4G network offers terminal mobility (anywhere, anytime) through automatic roaming across the geographic boundaries of wireless networks. The standards considered for 4G networks include Long Term Evolution-Advanced (LTE-A) and Worldwide Interoperability for Microwave Access (WiMAX) [24]. LTE integrates existing and novel technologies, including coordinated multipoint transmission/reception (CoMP), multiple-input multiple-output (MIMO), and orthogonal frequency division multiplexing (OFDM).
After 4G, commercial deployment of 5G will soon be enabled, as the initial tests, construction of hardware facilities, and standardization procedures are almost complete. The 5G networks are aimed at improving data rates, network reliability, latency, energy efficiency, and massive connectivity [25]. The 5G network offers data rates of up to 10 Gbps thanks to a new spectrum in the microwave band (3.3-4.2 GHz) and involves advanced technologies such as Beam Division Multiple Access (BDMA) and Filter Bank Multi-Carrier (FBMC). To improve the overall performance of the 5G network, several developing technologies, including massive MIMO (for capacity increase), Software Defined Networking (SDN) (for network flexibility), device-to-device (D2D) communication (for spectral efficiency), and Information-Centric Networking (ICN), are integrated into the network to allow rapid deployment of different services [26]. For the 5G network, three usage scenarios are defined by IMT-2020: 1) enhanced mobile broadband (eMBB), 2) ultra-reliable and low latency communications (URLLC), and 3) massive machine-type communications (mMTC).
II-B Vision of 6G Networks
Various global research institutions have focused their attention on 6G networks as 5G networks have entered the commercial deployment phase. The 6G networks aim to enhance performance by providing peak data rates of about 1 Tbps and ultra-low (microsecond-level) latency. Moreover, in comparison to 5G networks, the 6G network is intended to improve capacity by 1000 times through the use of terahertz frequencies and spatial multiplexing. The 6G networks will also provide global coverage through the effective integration of satellite and underwater communication networks [27]. Additionally, three novel service classes are envisioned for 6G networks: ubiquitous mobile ultra-broadband (uMUB), ultrahigh-speed-with-low-latency communications (uHSLLC), and ultrahigh data density (uHDD) [28].
III Marketing, Research Activities, and Trends Towards 6G Networks
As far as communication systems are concerned, a new generation has been introduced every ten years since the first analog systems appeared in the 1980s. Figure 3 shows that worldwide internet usage (GB) is projected to increase considerably, from 7% (in 2020) to 43% (in 2030), as a consequence of population growth over that period [29]. Each generational upgrade brings various improvements in the form of new services and features, and the goals of the 5G and 6G networks are to improve overall network capabilities by a factor of 10-100 compared with previous mobile communication generations. During the last ten years, a phenomenal increase in mobile data traffic has been observed, mainly due to the development and availability of smart devices and machine-to-machine (M2M) communications. This tremendous growth is reflected in Figure 4, which shows that the expected worldwide mobile traffic volume in 2030 will be 700 times that of 2020 [30]. Moreover, the International Telecommunication Union (ITU) predicts that overall mobile data traffic will exceed 5 ZB per month and that the number of mobile subscriptions will reach 17.1 billion by the end of 2030, as shown in Figure 5 [29].
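As a back-of-the-envelope check, the 700-fold traffic increase over the decade 2020-2030 quoted from [30] corresponds to a compound annual growth rate of roughly 92%. The following sketch is illustrative only and is not part of the cited forecast:

```python
# Implied compound annual growth rate (CAGR) for an overall 700x increase
# in worldwide mobile traffic between 2020 and 2030.
def cagr(growth_factor: float, years: int) -> float:
    """Return the annual growth rate implied by a total growth factor."""
    return growth_factor ** (1 / years) - 1

rate = cagr(700, 10)
print(f"Implied annual traffic growth: {rate:.1%}")  # roughly 92% per year
```

Such a rate, sustained for a decade, illustrates why generational upgrades target 10-100x capability improvements rather than incremental gains.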
It is anticipated that the 6G market will grow at an annual rate of approximately 70% between 2015 and 2030, reaching a value of 4.1 billion US dollars by 2030 [3]. Since the 6G networks incorporate various advanced communication infrastructures, including edge computing, cloud computing, and AI, these components will ultimately capture sizable market shares of their own, i.e., up to 1 billion US dollars [31]. AI-based chipsets, another major component of 6G networks, will exceed 240 million units by 2028. Various worldwide organizations have started extensive research projects on 6G mobile communication networks [32]. One of the most important is the 6G Flagship research program, supported by various working bodies, including the Academy of Finland, the VTT Technical Research Centre, Oulu University of Applied Sciences, Nokia, BusinessOulu, Aalto University, InterDigital, and Keysight Technologies [33].
The 6G Flagship research program was initially launched to co-create an ecosystem for 6G innovation and to support the adoption of 5G networks. Its basic aim is to develop a society driven by unlimited, high-speed wireless connectivity. Additionally, to streamline the development of 6G technology, the South Korean government signed an agreement with the University of Oulu, Finland [34]. LG has also established a 6G research laboratory at the Korea Advanced Institute of Science and Technology [35]. SK Telecom, with partners including Samsung, Nokia, and Ericsson, has likewise initiated a joint research project on 6G technologies [32].
Moreover, 6G-based research activities have also been initiated in China, and Huawei has already begun research on 6G networks at its Ottawa-based research center in Canada [36, 37]. Most prominently, at the NYU WIRELESS research center, several faculty members are actively involved in research on core components of 6G networks, including machine learning, quantum nano-devices, communication foundations, and 6G testbeds [38]. Last but not least, the US has announced an active investigation of 6G networks by initiating numerous 6G-based research programs [39].
IV 6G Networks Requirements
In recent years, many studies of 6G applications, facilitating technologies, architectures, and open research challenges have been reported in the literature. Toward this end, [40] introduced applications, facilitating technologies, and open research challenges for 6G, and also addressed performance metrics, 6G driving trends, and new customer services for 6G networks. The concept of AI-empowered 6G wireless networks was introduced in [41], which also elaborated the 6G network design and the applications of 6G to various AI-empowered smart services. Tariq et al. concentrated on the use of 6G, its enabling technologies, and research challenges [42]. Giordani et al. highlighted the development of wireless communication systems on the way to 6G networks, together with some of its use cases [43]. They mainly discussed the key enabling technologies of 6G, their related challenges, and their applications; [43] also presented the concept of intelligence integration in 6G systems.
Furthermore, the key drivers, requirements, design, and enabling technologies of 6G are discussed in [28]. The potential technologies for 6G wireless networks are elucidated in [44], which summarizes the time, frequency, space, and resource usage relevant to 6G networks, along with the important techniques involved in the evolution of 6G wireless networks and the problems their implementation will raise. Additionally, the peak data rate, energy efficiency, connectivity density, user-experienced data rates, and latency of 6G are discussed in [45]. The challenges of making the 6G wireless system intelligent, and the applicable machine learning schemes, are presented in [46]. Akyildiz et al. provide a comprehensive discussion of key enabling technologies of 6G [47]. Intelligent communication environments, their layered structural design, and open research challenges are discussed in [38]. The technology trends in 6G, its applications, its requirements, and the overall concept of 6G are discussed in [48]. A detailed analysis of the existing 6G-based research studies is provided in Table IV. In the following, we discuss some of the key requirements of 6G networks.
IV-A Connectivity
In the near future, societies will become data-driven through the use of prompt, unlimited wireless connections [76]. To extend the utility of the 5G network to various smart applications, different approaches, including the new 5G radio and the simultaneous use of unlicensed and licensed bands, have been considered [21, 77, 78]. However, the anticipated benefits of the 5G networks, in the form of basic smart IoE-based services and short packets for URLLC, show inherent limitations in fully meeting the requirements of future smart-city IoE applications [40, 7]. Thus, the capabilities and key performance indicators of the 5G network are not adequate to meet the increased requirements arising from diverse data-centric and automated processes [43]. Applications such as telemedicine, haptics, and connected autonomous vehicles are envisioned to use long packets with ultra-high reliability and high data rates, violating the short-packet assumption of URLLC as implemented by 5G networks [40]. Another limitation of the 5G network, in view of the demands of next-generation smart industries, is its insufficient connectivity density of $10^{6}$ devices/$\text{km}^{2}$ [79]. Other major shortcomings of the 5G networks are the short mmWave connectivity range, the Gbps-level transmission data rate, signal interruptions, and limited or no coverage in rural/remote areas [7].
IV-B Latency
Low, deterministic latency, which requires the use of deterministic networking (DetNet), is one of the distinguishing features of 5G networks and is used to assure timely and accurate end-to-end delivery. 6G mobile networks will go further, offering time and phase synchronization accuracy better than that of 5G [7]. 6G is therefore expected to emerge as a promising technology that meets the requirements of diverse sectors and improves the quality and perception of life in the near future [80]. Beyond this, the 6G networks will overcome the limitations of 5G while catering to the growing requirements of next-generation smart systems. In comparison to 5G, the 6G networks are intended to provide much higher spectral/energy/cost efficiency, nearly 100% geographical coverage, 10 times lower latency, sub-centimeter geo-location accuracy, a millisecond geo-location update rate, high-level intelligence for full automation, sub-millisecond time synchronization, higher transmission data rates (Tbps), and a connection density that is 100 times higher [79, 47].
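The improvement factors above can be made concrete by applying them to the 5G baselines given in the introduction (1 ms latency, $10^{6}$ devices/km${}^{2}$, 20 Gbps peak rate). The sketch below is for orientation only; the data-rate factor of 50 (20 Gbps to 1 Tbps) is our assumption based on the Tbps target, not a figure from the cited works:

```python
# Illustrative derivation of 6G targets from 5G baselines using the
# improvement factors quoted in this section (sketch, not a standard).
baseline_5g = {
    "latency_ms": 1.0,        # end-to-end latency
    "density_per_km2": 1e6,   # connection density
    "peak_rate_gbps": 20.0,   # peak data rate
}

factors_6g = {
    "latency_ms": 1 / 10,     # "10 times lower latency"
    "density_per_km2": 100,   # "100 times higher connection density"
    "peak_rate_gbps": 50,     # 20 Gbps -> ~1 Tbps (assumed factor)
}

targets_6g = {k: baseline_5g[k] * factors_6g[k] for k in baseline_5g}
print(targets_6g)  # 0.1 ms latency, 1e8 devices/km^2, 1000 Gbps peak rate
```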
IV-C Reliability
The 6G networks are expected to provide 99.9% reliability [81]. Moreover, 6G will use artificial intelligence (AI) as an integral component, which will prove beneficial for optimizing a wide array of wireless network problems [82]. The deployment of 5G networks has shown that softwarization comes at a cost: using commercial off-the-shelf (COTS) servers instead of domain-specific chips in a virtualized radio access network (RAN) implies a large increase in energy consumption, requiring measures to improve energy efficiency. In comparison to 4G networks, 5G networks deliver higher bandwidth at the cost of higher power consumption. It is therefore of extreme importance for 6G networks to adopt a new computing paradigm that leverages the benefits of softwarization without paying its energy costs [79]. Moreover, most 6G use cases will evolve from the emerging functionalities and quality of experience of 5G applications, extended through performance enhancements and new use cases [42]. The use cases of the 5G and 6G networks are compared in Table V.
IV-D Computing Techniques
Important computing technologies, including cloud computing, fog computing, and edge computing, form an integral part of distributed computing and processing, and contribute to lower latency, shorter synchronization times, and overall network resilience. In addition to the short-packet drawback, 6G is highly anticipated to overcome other limitations of the 5G networks by providing higher reliability, lower latency, better system coverage, and higher data rates [40]. Moreover, 6G should follow a human-centric approach, rather than machine-, application-, or data-centric approaches, to meet the mobile communication demands of the coming years [60, 28].
IV-E Coverage
New paradigm shifts will define the essence of the 6G wireless networks. The 6G networks will provide global coverage through integrated space, ground, air, and sea networks. An overview of the 6G architecture is presented in Figure 6 [52]. The coverage and range of wireless communication networks can be extended substantially through the use of satellite communication, UAVs, and maritime communication [83].
IV-F Data Rate
The overall improvement in data rate can be enabled by exploring all spectra, i.e., optical frequency bands, sub-6 GHz, mmWave, and THz. Additionally, the use of artificial intelligence and machine learning techniques in combination with the 6G networks will ultimately enable full applicability, automation, and network management of 6G. AI-based approaches can significantly improve next-generation network performance by dynamically instrumenting the networking, caching, and computing resources.
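The link between wider spectrum and higher data rates can be made concrete with the Shannon capacity formula $C = B \log_2(1 + \mathrm{SNR})$. The sketch below uses illustrative bandwidths and an assumed 10 dB SNR (not figures from the surveyed works) to show how capacity scales roughly linearly with the available bandwidth across the bands mentioned above:

```python
import math

def shannon_capacity_gbps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon channel capacity C = B * log2(1 + SNR), returned in Gbps."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e9

# Representative (assumed) channel bandwidths for each spectrum band.
bands = {"sub-6 GHz": 100e6, "mmWave": 1e9, "THz": 10e9}
for name, bw in bands.items():
    print(f"{name:>10}: {shannon_capacity_gbps(bw, snr_db=10):6.1f} Gbps")
```

At the same SNR, moving from a 100 MHz sub-6 GHz channel to a 10 GHz THz channel multiplies the capacity by 100, which is why spectrum exploration is central to the 6G data-rate targets.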
IV-G Security
Stronger network security needs to be implemented during development, for both the physical and network layers of 6G. The development of the 6G networks will also be boosted considerably by industry verticals, including cloud VR, IoT, industry automation, cellular vehicle-to-everything (C-V2X), area networks for the digital-twin body, energy-efficient wireless network control, and federated learning systems [79]. Security will therefore be of paramount importance in 6G systems.
V Essential Enabling Technologies for 6G Networks
The evolution of mobile networks is based on inheriting the advantages of previous network architectures and adding benefits that meet the requirements of the latest era [44]. Similarly, the 6G network will adopt the benefits of the 5G architecture while concurrently introducing new technologies to meet future demands. The 6G communication systems will thus be mediated by various technologies, some of which are discussed below.
V-A Internet of Things (IoT)
As a major technology for integrating heterogeneous electronic devices with wireless networks, IoT seeks to connect everything to the Internet, establishing a connected environment where data sensing, processing, and communications are conducted automatically without human participation. End users can benefit from IoT data acquired from ubiquitous mobile devices, including sensors, actuators, smartphones, computers, and radio frequency identification (RFID) tags [84]. According to Cisco [85], up to 500 billion IoT devices will be connected to the Internet by 2030. In addition, according to a study by IHS Markit [86], a worldwide leader in critical information, analytics, and solutions, the number of connected IoT devices worldwide will grow at a staggering 12% per year, from roughly 27 billion in 2017 to 125 billion in 2030.
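The IHS Markit projection quoted above can be sanity-checked with simple compound growth (an illustrative calculation on our part, not taken from the cited report):

```python
# Compounding 27 billion devices (2017) at ~12% annual growth up to 2030.
devices = 27e9
for year in range(2017, 2030):  # 13 growth steps, 2017 -> 2030
    devices *= 1.12

print(f"Projected IoT devices in 2030: {devices / 1e9:.0f} billion")
```

The compounding yields roughly 118 billion devices, in the same ballpark as the quoted 125 billion, confirming that the two figures in the report are mutually consistent.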
In this perspective, 6G will be a significant enabler for future IoT networks and applications, as it will provide full-dimensional wireless coverage and integrate all functionality, including sensing, transmission, computation, cognition, and fully automated control. In fact, compared to the 5G mobile network, the next generation 6G mobile network is expected to give massive coverage and enhanced adaptability to support IoT connectivity and service delivery [87].
V-B Artificial Intelligence (AI)
One of the most crucial components of the self-sufficient 6G networks is intelligence, which is being integrated into 6G through AI [88, 89, 90]. AI was not applied in 4G and earlier generations, and only partial or limited AI will appear in 5G networks. Most prominently, the 6G networks will provide full automation through AI, unlocking the full potential of radio signals and enabling the transformation from cognitive radio to intelligent radio [41]. For 6G real-time communications, advances in machine learning/AI lead to highly intelligent networks that improve and simplify real-time data transmission. AI techniques offer numerous benefits, such as increased efficiency, reduced processing delays within the communication steps, efficient solution of complex problems, prompt communication within the BCI, and network selection and handover. Meta-materials, intelligent structures, intelligent networks, intelligent devices, intelligent cognitive radio, self-sustaining wireless networks, and machine learning will provide support for AI-based communication systems [91, 90, 31].
The application of AI-based technology will thus assist in meeting the goals of several 6G services, including uMUB, uHSLLC, mMTC, and uHDD. Recent advances in machine learning allow its application to RF signal processing, spectrum mining, and spectrum mapping, and the combination of machine learning with photonic technologies will further advance AI in 6G networks, shaping a photonics-based cognitive radio system. For channel state estimation and automatic modulation classification, the physical layer can implement AI-based deep-learning encoder-decoder setups, whereas deep-learning-based resource allocation and intelligent traffic prediction and control have been extensively investigated for the data link and transport layers, respectively. An additional advantage of applying machine learning and big data is the determination of the best possible approaches for data transmission between end users through predictive analysis [91, 90, 31].
V-C Integration of Wireless Information and Energy Transfer
One of the most ground-breaking technologies in the 6G network is the integration of Wireless Information and Energy Transfer (WIET), which exploits the same fields and waves as wireless communications. WIET shows great potential for lengthening the battery life of wireless systems and for supporting battery-less devices in the 6G networks [92]. In particular, WIET is envisioned to enable battery-less smart devices and to charge and extend the battery lifetime of wireless networks and other devices [93, 94].
V-D Mobile Edge Computing
The launch of content delivery networks (CDNs) by Akamai in the 1990s was the first step toward edge computing for performance and speed improvement. Edge computing generalizes the CDN concept by utilizing the cloud computing platform. Brian and his co-workers introduced the importance of edge computing to mobile networks in 1997 [95]. Cloud computing began to rise in the mid-2000s and became the most widely used infrastructure for mobile devices, used today by Apple and Google devices. Paramvir Bahl and his colleagues were the first to demonstrate the conceptual groundwork of edge computing, in 2009 [96]. Edge computing is of great importance in creating new computing environments: as a modified version of cloud computing, it brings cloud services closer to the end user and reduces the delay in delivering them. It is a fast-processing paradigm with a very quick response time [33]. The upcoming 6G networks will integrate the current 5G and IoT infrastructures with the help of edge computing hardware, thus supporting the heavy execution of AI algorithms [64].
Therefore, mobility-enhanced edge computing (MEEC) will become an integral part of future 6G machinery due to the immense applications of distributed large-scale clouds. Moreover, the amalgamation of MEC infrastructures with AI methods will enable effective computation not only for big data analytics but also for system control at the edge. Edge-based intelligent computing has emerged to leverage maximum benefit in fulfilling the challenging needs of impending ubiquitous service scenarios based on heterogeneous computation, communication, and high-dimensional intelligent configurations [48]. Various applications of edge computing are illustrated in Figure 7 and elaborated as follows.
V-D1 Real-Time Reporting of Autonomous Cars Accidents
The roads of the future will carry a huge number of autonomous cars. There are six levels of driving automation, from level 0 (no automation) to level 5 (full automation). Such cars are able to handle lane changes and avoid collisions [97]. Roadside units equipped with edge computing can handle the real-time data of such cars. Consider the example of a road accident, where timely reporting is linked to providing first aid to the injured. The key actors are the medical team, the road administrators, the smart police, and public-safety points with edge computing facilities for effective communication. In an emergency, damage can be minimized by providing timely first aid, which requires the incident to be reported promptly. Incidents are reported either by the person involved or by advanced edge-computing-enabled safety points. A severely injured person may be unable to report the accident, which can lead to an adverse outcome; safety points with edge computing will instead detect the incident automatically by applying detection algorithms, resulting in timely reporting. Edge computing is thus necessary for such delicate tasks [3].
V-D2 Smart Forest Fire Detection
The concept of the smart forest originated from the IoT: data about environmental conditions is collected via remote sensing. The foremost objective of smart fire detection is to control wildfires in the forest at an early stage, thereby reducing the damage they cause. Smart forest fire detection based on edge computing assists in the timely reporting of forest fires and can hence be used effectively for fire monitoring [98]. Here, cameras fitted in cars take pictures constantly, and fires are reported by processing the pictures on a server. Delays in processing the pictures may result in a late response to the fire. Edge computing based on image processing reduces this delay, so quick decisions can be made against fires thanks to timely reporting [99]. It also has applications in several rescue activities via the telecommunication process.
V-D3 Smart Parking System
Many activities of day-to-day urban life, such as working and shopping, demand parking at an inexpensive place, so users search for vacant parking slots over the Internet. Conventional parking systems frequently face the challenges of finding vacant parking space and of inefficient parking-space management. Smart parking systems have been introduced to solve these problems: they use machine learning algorithms for the rapid computation of vacant parking space and provide systematic management of the parking area [100]. Image recognition systems enabled by edge computing facilitate smart parking by finding vacant spaces using AI algorithms [100]. To avoid the above inconsistencies, an efficient smart parking system must detect empty spaces in a very short time, and edge computing is helpful in enabling such systems [101].
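A minimal sketch of the slot-occupancy logic such an edge node might run after image recognition is shown below. The slot coordinates, detection format, and overlap threshold are hypothetical illustrations, not details of the cited systems:

```python
# Hypothetical edge-side check: a parking slot is occupied if a detected
# vehicle bounding box overlaps it beyond a threshold (boxes: x1, y1, x2, y2).
def overlap_area(a, b):
    """Area of the intersection of two axis-aligned boxes (0 if disjoint)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def occupied_slots(slots, detections, threshold=0.5):
    """Return indices of slots covered by any detection above the threshold."""
    occupied = set()
    for i, slot in enumerate(slots):
        slot_area = (slot[2] - slot[0]) * (slot[3] - slot[1])
        for det in detections:
            if overlap_area(slot, det) / slot_area >= threshold:
                occupied.add(i)
                break
    return occupied

slots = [(0, 0, 2, 4), (2, 0, 4, 4), (4, 0, 6, 4)]  # three adjacent slots
cars = [(2.2, 0.5, 3.8, 3.5)]                        # one detected vehicle
print(occupied_slots(slots, cars))  # only the middle slot is occupied
```

Running this check locally on the edge node, rather than shipping every camera frame to a remote cloud, is precisely what keeps the vacant-slot response time short.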
V-D4 Smart Home
Currently, an enormous number of services are being moved from the cloud to the edge of the network, because processing data at the edge decreases response time and lowers bandwidth costs for applications such as smart home systems, which aim to improve the living comfort of residents [102]. Various features are designed for smart homes, such as improved surveillance, smart controls, and smart meters. Implementing such smart home ideas, however, demands a multi-layer system capable of making decisions about home automation [103]. Different AI algorithms and IoT devices are therefore brought together so that the multi-layer system can carry out its functions, using both real-time and historical information to make decisions. Several devices already use such systems, including smart TVs, refrigerators, air conditioners, and washing machines. Edge computing is thus very helpful in implementing such smart home ideas with artificial intelligence algorithms [104], and the vision of 6G networks will provide a strong base for the implementation and execution of these smart services.
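A minimal sketch of a decision that combines real-time and historical information, in the spirit of the multi-layer system above; the rule, the 2-degree threshold, and the function name are illustrative assumptions, not part of any cited design.

```python
# Toy smart-home automation rule (illustrative): compare a real-time
# temperature reading against a historical baseline to choose an action.
def ac_decision(current_temp, history):
    baseline = sum(history) / len(history)   # historical average
    return "cool" if current_temp > baseline + 2 else "hold"

print(ac_decision(28, [24, 25, 23, 24]))  # cool
```

A real deployment would replace the moving average with a learned model, but the shape of the decision (live reading vs. learned baseline) is the same.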
V-E Integration of Sensing and Communication
One principal enabler of self-directed wireless networks is the ability to detect dynamically changing environmental states and to allow effective exchange of information among various nodes [105]. In 6G networks, autonomous (self-directed) systems would be supported through the effective integration of sensing and communication across a large number of sensing objects, complex communication resources, multi-level computing resources, and multi-level cache resources, which is a challenging endeavor [31].
V-F Dynamic Network Slicing
Dynamic network slicing is an important aspect that network operators must consider to ensure dedicated execution of virtual networks and to support optimized delivery of services to various users (e.g., vehicles, industries, machines). It thus serves as a core element of 5GB communication systems, mainly for managing multiple users connected to numerous heterogeneous networks. Implementing dynamic network slicing requires software-defined networking and network function virtualization techniques, which help cloud computing manage networks and optimize performance through centrally controlled dynamic steering, traffic flow management, and organized allocation of network resources [31, 106].
V-G Holographic Beamforming
Beamforming is a signal processing procedure in which radio signals are transmitted in precise directions using an array of steered antennas focused on a minimized angular range [94]. It offers a wide range of benefits: higher network efficiency, better coverage and throughput, higher signal-to-interference-plus-noise ratio (SINR), user tracking, and interference prevention and rejection [55, 94]. Holographic beamforming (HBF) is a new method that uses a hologram to steer the beam through the antenna: the RF signals from the radio travel to the back of the antenna, scatter across its front, and are then adjusted according to the desired beam shape and direction [94]. Because HBF is based on Software-Defined Antennas (SDAs), it is quite different from MIMO systems; compared with traditional arrays or MIMO systems, SDAs are smaller, lighter, cheaper, and consume less power [107]. Cost, Size, Weight, and Power (C-SWaP) are major challenges in the design of any communication system, so the use of SDAs in the HBF procedure will lead to highly flexible and efficient signal transmission and reception in 6G networks [94]. In particular, the HBF approach in multi-antenna communication devices is advantageous for transmitting and receiving signals in a highly efficient and flexible manner, indicating important roles for HBF in wireless power transfer, physical-layer security, and augmented network coverage scenarios [55].
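The beam-steering effect that HBF achieves with a hologram can be illustrated with the textbook array factor of a uniform linear array; this is a generic phased-array model, not the SDA hardware itself, and the element count and spacing below are arbitrary assumptions.

```python
import cmath, math

# Array factor (in dB, normalized) of an n-element uniform linear array
# steered toward theta0: phased weights add coherently at the steering
# angle and cancel elsewhere, concentrating energy in one direction.
def array_factor_db(theta_deg, theta0_deg, n=16, d_over_lambda=0.5):
    k = 2 * math.pi * d_over_lambda
    s = sum(cmath.exp(1j * k * m * (math.sin(math.radians(theta_deg))
                                    - math.sin(math.radians(theta0_deg))))
            for m in range(n))
    return 20 * math.log10(abs(s) / n + 1e-12)  # small floor avoids log(0)

# Peak (0 dB) at the steering angle; deep suppression off-axis.
print(round(array_factor_db(30, 30), 1))  # 0.0
```

The same coherent-summation principle underlies both conventional phased arrays and holographic steering; HBF's advantage is realizing the phase pattern with a cheap, low-power hologram instead of per-element RF chains.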
V-H Big Data Analytics
Big data analytics is a highly intricate procedure used to analyze a broad range of massive data sets by revealing hidden patterns, unidentified correlations, and customer dispositions, guaranteeing comprehensive data management. Big data is collected from a variety of sources (e.g., videos, social networks, images, sensors), and big data analytics is effectively deployed to handle and manage these massive amounts of data within 6G communication networks. The deployment of large amounts of data, deep learning protocols, and big data analytics within 6G networks is foreseen to lead to advancements mainly because of their automation and self-optimization properties. End-to-end (E2E) delay reduction is one important example: integrating big data and machine learning assists in performing predictive analysis to determine optimized paths for user data, reducing E2E delays within 6G networks [55].
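A toy sketch of the predictive path selection mentioned above, using a simple moving average of past measurements as the "prediction"; the path names and delay values are made up for illustration, and a real 6G system would use a learned model rather than an average.

```python
# Toy predictive path selection (illustrative): choose the route whose
# predicted end-to-end delay -- here the mean of its delay history -- is
# smallest.
def best_path(delay_history):
    predicted = {path: sum(d) / len(d) for path, d in delay_history.items()}
    return min(predicted, key=predicted.get)

paths = {"A": [12, 14, 13], "B": [9, 11, 10], "C": [20, 18, 19]}
print(best_path(paths))  # B
```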
V-I Backscatter Communication
Interactions between two battery-less devices are enabled by ambient backscatter wireless communication, which relies on available RF signals (e.g., ambient television and cellular transmissions) [108]. Over a short communication range, a reasonable data rate can be obtained, and sensor-based transmission of small monitoring signals can be achieved with negligible power consumption. Because of the impending connectivity of battery-less nodes in backscatter systems, their potential for providing massive connectivity in future 6G networks is highly significant. However, the acquisition of critical requirements (i.e., exact phase and channel state) at the network nodes cannot be neglected; these requirements can be fulfilled using non-coherent backscatter communications, which show great potential for optimizing resource deployment and augmenting services in network devices [49].
V-J Proactive Caching
A major concern for 6G networks is the large-scale deployment of small-cell networks to enhance overall network properties, including coverage, capacity, and mobility management, which will lead to massive downlink traffic overload at the base stations (BSs). Proactive caching overcomes these limitations by reducing access delay and offloading traffic, ultimately enhancing the quality of user experience [109]. To allow fruitful deployment of 6G networks, extensive research should be conducted on the joint optimization of proactive caching, interference management, intelligent coding schemes, and scheduling techniques, all of which are essential for 6G networks.
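A minimal popularity-based sketch of proactive caching; this is illustrative only (real schemes predict future demand rather than merely counting past requests), and the request data is invented.

```python
from collections import Counter

# Toy proactive caching (illustrative): the edge node pre-caches the k
# most-requested items from its request history, so future requests for
# popular content are served locally instead of over the backhaul.
def proactive_cache(history, k):
    return {item for item, _ in Counter(history).most_common(k)}

history = ["a", "b", "a", "c", "a", "b", "d"]
cache = proactive_cache(history, 2)
hits = sum(req in cache for req in ["a", "b", "d"])
print(sorted(cache), hits)  # ['a', 'b'] 2
```

The two hits out of three future requests are exactly the access-delay and traffic-offloading gains the text describes, scaled down to a toy example.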
V-K Unmanned Aerial Vehicles (UAV)
UAVs (drones) are an essential component of 6G communication networks. In many instances, UAVs aim to provide very high data rates and wireless connectivity. UAVs can provide cellular connectivity thanks to on-board BS entities, and they offer additional features that fixed BS infrastructure does not support, including easy deployment, strong line-of-sight links, and degrees of freedom from controllable mobility [42]. Infrastructure based on terrestrial communications has limited practicality and economic feasibility, as it is nearly impossible to provide services during emergencies or natural disasters, whereas UAVs can manage such situations easily. UAVs will therefore open new avenues for wireless communications, facilitating the uMUB, uHSLLC, mMTC, and uHDD requirements of wireless networks [22]. UAVs demonstrate broad applicability, spanning from strengthening network connectivity to fire detection, emergency services, disaster management, monitoring of pollution, parking, and accidents, and security and surveillance. Owing to these facts, UAVs are considered one of the most important technologies for 6G networks. UAV-enabled backscatter communication assisting various communication tasks, including the supply of ambient power and the creation of suitable channel conditions for remote sensors, has been reported elsewhere [110]. The combined application of non-coherent detection systems and UAVs can help create air interfaces that are well suited for 6G networks. However, to realize UAVs that incorporate intelligence into 6G networks, a robust resource allocation procedure based on deep reinforcement learning can be used [55].
V-L Terahertz Communications
Spectral efficiency can be enhanced by widening bandwidths and enabling applications of advanced MIMO technologies [111]. The extensive applications and higher data rates of 5G communications result from their reliance on mmWave frequencies, whereas 6G aims to extend the frequency boundary into the THz range to meet the increasing demands of future communications. THz waves, or sub-millimeter radiation, occupy frequency bands between 0.1 THz and 10 THz, corresponding to wavelengths of 0.03 mm–3 mm [112, 113]. The THz band will form an important component of 6G communication, as the RF band is now exhausted and nearly inadequate for the higher requirements of 6G networks [47, 114]. For cellular communications, the 275 GHz–3 THz range has been designated as the main part of the THz band by the ITU Radiocommunication Sector (ITU-R) [Stoica and de Abreu, 2019]. Adding the THz band (275 GHz–3 THz) to the existing mmWave band (30–300 GHz) would definitely increase the capacity of 6G networks, and since the 275 GHz–3 THz range has not yet been allocated for any global functionality, the desired higher data rates can potentially be achieved there [114]. Indeed, adding the THz band to the existing mmWave band increases the total band capacity by a factor of at least 11.11. Although 300 GHz–3 THz is part of the optical band, it displays properties quite similar to the RF band, because the THz range lies at the boundary of the optical band, positioned immediately above the RF band. This creates both potential and challenges for applying THz frequencies in 6G wireless communications [43]. The two most critical properties of THz communication are its wide applicability for achieving high data rates and its high path loss due to the high frequency [115].
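The capacity-multiplier figure can be checked with back-of-envelope arithmetic, assuming capacity scales linearly with usable bandwidth:

```python
# Bandwidth bookkeeping behind the ~11x capacity-multiplier claim.
mmwave = 300e9 - 30e9   # 270 GHz of existing mmWave spectrum
thz    = 3e12 - 275e9   # 2725 GHz of proposed THz spectrum

ratio = (mmwave + thz) / mmwave
print(round(ratio, 2))  # 11.09
# The survey's 11.11x figure corresponds to taking the combined band as
# reaching all the way to 3 THz: 3000 GHz / 270 GHz = 11.11.
```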
Additionally, utilizing the THz band will allow fast and efficient provision of various 6G services, including uMUB, uHSLLC, and uHDD, ultimately increasing the potential of 6G communications by providing extensive support for wireless sensing, cognition, imaging, positioning, and communication procedures. The shorter THz wavelength allows a large number of antennas to be included, offering hundreds of beams in comparison to the mmWave band [55]. Orbital angular momentum (OAM) multiplexing can be considered to improve overall spectral efficiency, which is accomplished by superimposing multiple electromagnetic waves with highly diverse OAM modes [50]. Moreover, the co-channel aggregated interference and severe propagation loss associated with the mmWave and THz bands can be reduced by forming very narrow beams. The high atmospheric attenuation observed in THz communications can be controlled significantly using highly directional pencil-beam antennas: the gain of a fixed-aperture antenna grows with the square of the frequency, providing an overall improvement in gain and directionality that is definitely advantageous for THz communication systems [116, 117].
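The squared-frequency scaling follows from the standard aperture-antenna gain formula G = 4*pi*A_eff / lambda^2; the short sketch below, with an arbitrary 1 cm^2 aperture (an illustrative assumption, not a value from the survey), confirms the 20 dB gain increase per decade of frequency.

```python
import math

c = 3e8        # speed of light, m/s
A_eff = 1e-4   # effective aperture, m^2 (1 cm^2, illustrative)

# Gain of a fixed-aperture antenna: G = 4*pi*A_eff / lambda^2, in dBi.
def gain_dBi(f_hz):
    lam = c / f_hz
    return 10 * math.log10(4 * math.pi * A_eff / lam**2)

# Moving from 30 GHz to 300 GHz (10x frequency) adds exactly 20 dB,
# since G scales with f^2.
print(round(gain_dBi(300e9) - gain_dBi(30e9), 1))  # 20.0
```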
V-M Optical Wireless Communications (OWC)
Some eminent and well-known OWC technologies, including visible light communication (VLC), light fidelity, optical camera communication, and optical-band FSO communication, are widely used in several applications (e.g., V2X communication, indoor mobile robot positioning, VR, underwater OWC) [55, 117, 118, 119]. In addition to RF-based communications, OWC is also intended for 6G communications, and among OWC technologies, FSO in particular can provide network-to-backhaul/fronthaul connectivity. Due to various complexities and remote geographical locations, optical-fiber-based connectivity as a backhaul network is difficult, and installing optical fiber links for small-cell networks might not offer an economical and reasonable solution. Also, 6G demands a huge density of users, so to manage and control the majority of the access networks, a considerable level of integration of the backhaul and access networks is required [31]. The use of FSO fronthaul/backhaul networks is emerging and will be applied to 5GB communications in the near future [120, 121, 122]. Moreover, FSO systems have transmitter and receiver characteristics similar to those of optical fiber networks, indicating that data transfer in FSO occurs in a truly self-directed and autonomous manner [49].
V-N MIMO-Cell-Free Communication
Large intelligent surfaces (LIS) and intelligent reflecting surfaces (IRS) are two types of intelligent surfaces, both considered promising 6G candidate technologies. In [123], the authors first presented the idea of using antenna arrays as the LIS in large MIMO systems.
Unlike beamforming, which requires a large number of antennas to focus signals, the LIS is electromagnetically active in the external environment and places few constraints on how the antennas are spread; as a result, the LIS can avoid the negative impacts of antenna correlations. However, because the surfaces are active, the LIS consumes a lot of power and is not energy efficient.
The consolidation of different communication technologies and multiple frequencies in 6G networks will allow users to shift effortlessly from one network to another without any manual configuration [43]. The 6G communication networks will see a shift away from both conventional cellular and orthogonal communications toward cell-free and non-orthogonal communications, allowing automatic selection of the best network from the available set of communication technologies. In current networks, moving from one cell to another causes handover failures, delays, and data losses, problems that 6G cell-free communications will eliminate. The use of multi-connectivity, multi-tier hybrid techniques, and heterogeneous radio devices will effectively enable cell-free communication [55, 43].
V-O Blockchain-Security Perspective
Blockchain is a decentralized database based on hash trees; it is tamper-proof and difficult to reverse [124]. Authenticity, data security, and accessibility are all characteristics of blockchain [125]. As a result, blockchain can be used to manage spectrum resources in a 6G context without the need for a centralized authority. Furthermore, blockchain may be used to secure data and privacy, as well as to regulate access. In [126], the authors offer a privacy-preserving blockchain-based approach that combines access policies and encryption technologies to ensure data privacy. For mobile cognitive radio networks [127], the authors use blockchain as a decentralized database to improve accessibility protocols and ensure spectrum allocation.
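The tamper evidence provided by hash trees can be illustrated with a minimal Merkle-root computation; this is a generic sketch of the data structure, not the specific scheme of [124].

```python
import hashlib

# Minimal Merkle root (illustrative): leaves are hashed, then paired and
# re-hashed level by level. Changing any leaf changes the root, which is
# what makes a hash-tree-based ledger tamper-evident.
def merkle_root(leaves):
    level = [hashlib.sha256(x.encode()).digest() for x in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()

r1 = merkle_root(["tx1", "tx2", "tx3"])
r2 = merkle_root(["tx1", "tx2", "tampered"])
print(r1 != r2)  # True: editing any transaction alters the root
```

Verifying a single shared root is what lets decentralized nodes agree on the whole record without a central authority, the property the spectrum-management applications above rely on.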
VI Conclusion
The evolution of each generation of wireless communication networks brings enhancements to existing technologies and adds new features to meet future requirements. Although the 5G communication system displays promising features, it is still not adequate for the ever-growing wireless communication requirements. These factors call for envisioning 6G networks that effectively cater to the demands of the new era of communication systems. Extensive research is being carried out to elucidate important aspects of 6G networks, indicating promising future utility. We have therefore provided a brief overview of 6G networks as a whole: the evolution of communication networks, the marketing and research activities on 6G mobile communication networks, the enabling technologies for 6G networks, and the current state-of-the-art work on 6G communication systems. Besides providing insight into the 6G vision, we have elaborated various technologies that will form the core of 6G, with particular focus on emerging data-rate-improving technologies such as the relatively new spectrum technologies (i.e., THz communication and VLC) and new communication paradigms (i.e., molecular and quantum communication). In the face of a globally accruing digital divide, we believe that this paper can motivate researchers to investigate the enabling technologies for 6G systems and their applications in IoT.
References
[1]
S. Chen, Y.-C. Liang, S. Sun, S. Kang, W. Cheng, and M. Peng, “Vision,
requirements, and technology trend of 6G: How to tackle the challenges of
system coverage, capacity, user data-rate and movement speed,” IEEE
Wireless Communications, vol. 27, no. 2, pp. 218–228, 2020.
[2]
M. H. Alsharif, A. H. Kelechi, M. A. Albreem et al., “Sixth generation
(6G) wireless networks: Vision, research activities, challenges and
potential solutions,” Symmetry, vol. 12, no. 676, p. 4,
2020.
[3]
L. U. Khan, I. Yaqoob, M. Imran et al., “6G wireless systems: A
vision, architectural elements, and future directions,” IEEE Access,
vol. 8, no. 14, pp. 47 029–14 704, 2020.
[4]
G. Karabulut Kurt, M. G. Khoshkholgh, S. Alfattani, A. Ibrahim, T. S. J.
Darwish, M. S. Alam, H. Yanikomeroglu, and A. Yongacoglu, “A vision and
framework for the high altitude platform station (haps) networks of the
future,” IEEE Communications Surveys Tutorials, vol. 23, no. 2, pp.
729–779, 2021.
[5]
S. Basharat, S. Ali Hassan, H. Pervaiz, A. Mahmood, Z. Ding, and M. Gidlund,
“Reconfigurable intelligent surfaces: Potentials, applications, and
challenges for 6G wireless networks,” IEEE Wireless Communications,
pp. 1–8, 2021.
[6]
H. Wang, W. Wang, X. Chen et al., Wireless information and energy
transfer in interference aware massive MIMO systems. IEEE Global Communications Conference, 2014.
[7]
X. You, C.-X. Wang, J. Huang et al., “Towards 6G wireless
communication networks: Vision, enabling technologies, and new paradigm
shifts.” Science China Information Sciences, vol. 64, no. 1, pp.
1–74, 2021.
[8]
V.-L. Nguyen, P.-C. Lin, B.-C. Cheng, R.-H. Hwang, and Y.-D. Lin, “Security
and privacy for 6G: A survey on prospective technologies and
challenges,” IEEE Communications Surveys Tutorials, pp. 1–1, 2021.
[9]
D. C. Nguyen, M. Ding, P. N. Pathirana, A. Seneviratne, J. Li, D. Niyato,
O. Dobre, and H. V. Poor, “6G internet of things: A comprehensive
survey,” IEEE Internet of Things Journal, pp. 1–1, 2021.
[10]
N.-N. Dao, Q.-V. Pham, N. H. Tu, T. T. Thanh, V. N. Q. Bao, D. S. Lakew, and
S. Cho, “Survey on aerial radio access networks: Toward a comprehensive
6G access infrastructure,” IEEE Communications Surveys Tutorials,
vol. 23, no. 2, pp. 1193–1225, 2021.
[11]
C. D. Alwis, A. Kalla, Q.-V. Pham, P. Kumar, K. Dev, W.-J. Hwang, and
M. Liyanage, “Survey on 6G frontiers: Trends, applications, requirements,
technologies and future research,” IEEE Open Journal of the
Communications Society, vol. 2, pp. 836–886, 2021.
[12]
F. Tang, B. Mao, Y. Kawamoto, and N. Kato, “Survey on machine learning for
intelligent end-to-end communication toward 6G: From network access,
routing to traffic control and streaming adaption,” IEEE
Communications Surveys Tutorials, vol. 23, no. 3, pp. 1578–1598, 2021.
[13]
F. Guo, F. R. Yu, H. Zhang, X. Li, H. Ji, and V. C. M. Leung, “Enabling
massive iot toward 6G: A comprehensive survey,” IEEE Internet of
Things Journal, vol. 8, no. 15, pp. 11 891–11 915, 2021.
[14]
X. Fang, W. Feng, T. Wei, Y. Chen, N. Ge, and C.-X. Wang, “5g embraces
satellites for 6G ubiquitous iot: Basic models for integrated satellite
terrestrial networks,” IEEE Internet of Things Journal, vol. 8,
no. 18, pp. 14 399–14 417, 2021.
[15]
S. Aggarwal, N. Kumar, and S. Tanwar, “Blockchain-envisioned uav communication
using 6G networks: Open issues, use cases, and future directions,”
IEEE Internet of Things Journal, vol. 8, no. 7, pp. 5416–5441, 2021.
[16]
A. H. Sodhro, S. Pirbhulal, Z. Luo, K. Muhammad, and N. Z. Zahid, “Toward 6G
architecture for energy-efficient communication in iot-enabled smart
automation systems,” IEEE Internet of Things Journal, vol. 8, no. 7,
pp. 5141–5148, 2021.
[17]
Y. Xu, G. Gui, H. Gacanin, and F. Adachi, “A survey on resource allocation for
5g heterogeneous networks: Current research, future trends, and challenges,”
IEEE Communications Surveys Tutorials, vol. 23, no. 2, pp. 668–695,
2021.
[18]
J. Liang, L. Li, and C. Zhao, “A transfer learning approach for compressed
sensing in 6G-iot,” IEEE Internet of Things Journal, pp. 1–1,
2021.
[19]
B. Mao, Y. Kawamoto, and N. Kato, “Ai-based joint optimization of qos and
security for 6G energy harvesting internet of things,” IEEE Internet
of Things Journal, vol. 7, no. 8, pp. 7032–7042, 2020.
[20]
X. Huang, J. A. Zhang, R. P. Liu et al., “Integrating space and
terrestrial networks with passenger airplanes for the generation
wireless-will it work?” IEEE Vehicular Technology Magazine, vol. 6,
2019.
[21]
M. Agiwal, A. Roy, and N. Saxena, “Next generation 5G wireless networks: A
comprehensive survey,” IEEE Communications Surveys and Tutorials,
vol. 18, no. 3, pp. 1617–1655, 2016.
[22]
S. Li, L. D. Xu, and S. Zhao, “5G internet of things: A survey,”
Journal of Industrial Information Integration, vol. 10, pp. 1–9,
2018.
[23]
S. A. A. Shah, E. Ahmed, M. Imran et al., “5g for vehicular
communications,” IEEE communications magazine, vol. 56, no. 1, pp.
111–117, 2018.
[24]
J. Parikh and A. Basu, “Lte advanced: The 4g mobile broadband technology,”
International Journal of Computer Applications, vol. 13, no. 5, pp.
17–21, 2011.
[25]
M. Shafi, A. F. Molisch, P. J. Smith et al., “5g: A tutorial overview
of standards, trials, challenges, deployment, and practice,” IEEE
journal on selected areas in communications, vol. 35, no. 6, pp. 1201–1221,
2017.
[26]
J. Wu, M. Dong, K. Ota et al., “Big data analysis-based secure cluster
management for optimized control plane in software-defined networks,”
IEEE Transactions on Network and Service Management, vol. 15, no. 1,
pp. 27–38, 2018.
[27]
A. Yastrebova, R. Kirichek, Y. Koucheryavy et al., “Future networks
2030: Architecture and requirements.” 10th International Congress on
Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT),
IEEE., 2018.
[28]
B. Zong, C. Fan, X. Wang et al., “6G technologies: Key drivers,
core requirements, system architectures, and enabling technologies,”
IEEE Vehicular Technology Magazine, vol. 14, no. 3, pp. 18–27, 2019.
[29]
S. P. Rout, “6G wireless communication: Its vision, viability,
application, requirement, technologies, encounters and research.” in
Viability, 11th International Conference on Computing, Communication
and Networking Technologies (ICCCNT), IEEE, 2020.
[30]
M. Jaber, M. A. Imran, R. Tafazolli et al., “5G backhaul challenges
and emerging research directions: A survey,” IEEE Access, vol. 4,
pp. 1743–1766, 2016.
[31]
M. Z. Chowdhury, M. K. Hasan, M. Shahjalal et al., “Optical wireless
hybrid networks: Trends, opportunities, challenges, and research
directions,” IEEE Communications Surveys and Tutorials, vol. 22,
no. 2, pp. 930–966, 2020.
[32]
L. U. Khan, I. Yaqoob, N. H. Tran et al., “Edge-computing-enabled smart
cities: A comprehensive survey,” IEEE Internet of Things Journal,
vol. 7, no. 10, pp. 10 200–10 232, 2020.
[33]
W. Z. Khan, M. Rehman, H. M. Zangoti et al., “Industrial internet of
things: Recent advances, enabling technologies and open challenges,”
Computers and Electrical Engineering, vol. 81, p. 10652.
[34]
Y. Lu and X. Zheng, “6G: A survey on technologies, scenarios,
challenges, and the related issues,” Journal of Industrial Information
Integration, vol. 100158, 2020.
[35]
Y. Lu and X. Ning, “A vision of 6G-5G’s successor,” Journal of
Management Analytics, vol. 7, no. 3, pp. 301–320, 2020.
[36]
Z. Zhang, Y. Xiao, Z. Ma et al., “6G wireless networks: Vision,
requirements, architecture, and key technologies,” IEEE Vehicular
Technology Magazine, vol. 14, no. 3, pp. 28–41, 2019.
[37]
M. Hensmans and G. Liu, “Huawei’s long march to global leadership: Joint
innovation strategy from the periphery to the center,” in Huawei Goes
Global. Springer, 2020, pp. 225–245.
[38]
I. F. Akyildiz, A. Kak, and S. Nie, “6G and beyond: The future of
wireless communications systems,” IEEE Access, vol. 8, no. 10, pp.
33 995–13 403, 2020.
[39]
J. Hayes, “Network-communication 6G and the reinvention of mobile,”
Engineering and Technology, vol. 15, no. 1, pp. 26–29, 2020.
[40]
W. Saad, M. Bennis, and M. Chen, “A vision of 6G wireless systems:
Applications, trends, technologies, and open research problems,” IEEE
network, vol. 34, no. 3, pp. 134–142, 2019.
[41]
K. B. Letaief, W. Chen, Y. Shi et al., “The roadmap to 6G: Ai
empowered wireless networks.” IEEE communications magazine, vol. 57,
no. 8, pp. 84–90, 2019.
[42]
F. Tariq, M. R. Khandaker, K.-K. Wong et al., “A speculative study on
6G.” IEEE Wireless Communications, vol. 27, no. 4, pp. 118–125.
[43]
M. Giordani, M. Polese, M. Mezzavilla et al., “Toward 6G networks:
Use cases and technologies.” IEEE communications magazine, vol. 58,
no. 3, pp. 55–61, 2020.
[44]
P. Yang, Y. Xiao, M. Xiao et al., “6G wireless communications:
Vision and potential techniques,” IEEE network, vol. 33, no. 4, pp.
70–75, 2019.
[45]
L. Zhang, Y.-C. Liang, and D. Niyato, “6G visions: Mobile ultra-broadband,
super internet-of-things, and artificial intelligence,” China
Communications, vol. 16, no. 8, pp. 1–14, 2019.
[46]
N. Kato, B. Mao, F. Tang et al., “Ten challenges in advancing machine
learning technologies toward 6G.” IEEE Wireless Communications,
vol. 27, no. 3, pp. 96–103, 2020.
[47]
I. F. Akyildiz, J. M. Jornet, and C. Han, “Terahertz band: Next frontier for
wireless communications,” Physical Communication, vol. 12, pp.
16–32, 2014.
[48]
S. Chen, Y.-C. Liang, S. Sun et al., “Vision, requirements, and
technology trend of 6G: How to tackle the challenges of system coverage,
capacity, user data-rate and movement speed,” IEEE Wireless
Communications, vol. 27, no. 2, pp. 218–228, 2020.
[49]
S. J. Nawaz, S. K. Sharma, B. Mansoor et al., “Non-coherent and
backscatter communications: Enabling ultra-massive connectivity in 6G
wireless networks.” IEEE Access, vol. 6.
[50]
E. C. Strinati, S. Barbarossa, J. L. Gonzalez-Jimenez et al., “6G:
The next frontier: From holographic messaging to artificial intelligence
using subterahertz and visible light communication,” IEEE Vehicular
Technology Magazine, vol. 14, no. 3, pp. 42–50, 2019.
[51]
M. Salehi and E. Hossain, “On the effect of temporal correlation on joint
success probability and distribution of number of interferers in mobile uav
networks,” IEEE Wireless Communications Letters, vol. 8, no. 6, pp.
1621–1625, 2019.
[52]
T. Huang, W. Yang, J. Wu et al., “A survey on green 6G network:
Architecture and technologies.” IEEE Access, vol. 7, no. 18, pp.
75 758–17 576, 2019.
[53]
D. Elliott, W. Keen, and L. Miao, “Recent advances in connected and automated
vehicles,” Journal of Traffic and Transportation Engineering, vol. 6,
no. 2, pp. 109–131, 2019.
[54]
B. Ji, Y. Li, B. Zhou et al., “Performance analysis of uav relay
assisted iot communication network enhanced with energy harvesting,”
IEEE Access, vol. 7, pp. 38 738–38 747, 2019.
[55]
M. Z. Chowdhury, M. T. Hossan, M. K. Hasan et al., “Integrated
rf/optical wireless networks for improving qos in indoor and transportation
applications,” Wireless Personal Communications, vol. 107, no. 3, pp.
1401–1430, 2019.
[56]
L. Lovén, T. Leppänen, E. Peltonen et al., “EdgeAI: A vision for
distributed, edge-native artificial intelligence in future 6G networks,”
The 1st 6G Wireless Summit, pp. 1–2, 2019.
[57]
F. Clazzer, A. Munari, G. Liva et al., “From 5G to 6G: Has the
time for modern random access come?” Tech. Rep., 2019.
[58]
H. Viswanathan and P. E. Mogensen, “Communications in the 6G era,”
IEEE Access, vol. 8, pp. 57 063–57 074, 2020.
[59]
N. H. Mahmood, H. Alves, O. A. López et al., “Six key features of
machine type communication in 6G,” 2nd 6G Wireless Summit
(6G SUMMIT), IEEE, 2020.
[60]
S. Dang, O. Amin, B. Shihada et al., “From a human-centric perspective:
What might 6G be?”
[61]
H. Zhang, Y. Li, Z. Lv et al., “A real-time and ubiquitous network
attack detection based on deep belief network and support vector machine,”
IEEE/CAA Journal of Automatica Sinica, vol. 7, no. 3, pp. 790–799.
[62]
Y. Zhang, B. Di, P. Wang et al., “Hetmec: Heterogeneous multi-layer
mobile edge computing in the 6G era,” IEEE Transactions on
Vehicular Technology, vol. 69, no. 4, pp. 4388–4400.
[63]
G. Gui, M. Liu, F. Tang et al., “6G: Opening new horizons for
integration of comfort, security, and intelligence,” IEEE Wireless
Communications, vol. 27, no. 5, pp. 126–132, 2020.
[64]
I. Tomkos, D. Klonidis, E. Pikasis et al., “Toward the 6G network
era: Opportunities and challenges.” IT Professional, vol. 22, no. 1,
pp. 34–38, 2020.
[65]
E. Yaacoub and M.-S. Alouini, “A key 6G challenge and
opportunity—connecting the base of the pyramid: A survey on rural
connectivity,” in Proceedings of the IEEE 108(4, 2020, pp. 533–582.
[66]
R. Shafin, L. Liu, V. Chandrasekhar et al., “Artificial
intelligence-enabled cellular networks: A critical path to beyond-5g and
6G.” IEEE Wireless Communications, vol. 27, no. 2, pp. 212–217.
[67]
M. Z. Chowdhury, M. Shahjalal, S. Ahmed et al., “6G wireless
communication systems: Applications, requirements, technologies, challenges,
and research directions,” IEEE Open Journal of the Communications
Society, vol. 1, pp. 957–975, 2020.
[68]
J. H. Kim, “6G and internet of things: a survey,” Journal of
Management Analytics, pp. 1–17, 2021.
[69]
Z. Allam and D. S. Jones, “Future (post-covid) digital, smart and sustainable
cities in the wake of 6G: Digital twins, immersive realities and new
urban economies,” Land Use Policy, vol. 101, no. 10520, p. 1, 2021.
[70]
P. K. Padhi and F. Charrua-Santos, “6G enabled industrial internet of
everything: Towards a theoretical framework,” Applied System
Innovation, vol. 4, no. 1, p. 11, 2021.
[71]
Z. Yang, M. Chen, K.-K. Wong et al., “Federated learning for 6G:
Applications, challenges, and opportunities.” Tech. Rep., 2021.
[72]
Y. Wang, Y. Tian, X. Hei et al., “A novel iov block-streaming service
awareness and trusted verification in 6G.” IEEE Transactions on
Vehicular Technology, vol. 6, 2021.
[73]
A. Shahraki, M. Abbasi, M. Piran et al., “A comprehensive survey on
6G networks: Applications, core services, enabling technologies, and
future challenges. arxiv preprint,” Tech. Rep., 2021.
[74]
A. L. Imoize, O. Adedeji, N. Tandiya et al., “6G enabled smart
infrastructure for sustainable society: Opportunities, challenges, and
research roadmap,” Sensors, vol. 21, no. 5, p. 1709, 2021.
[75]
H. Wang, “Application of data mining technology in quality evaluation of
online teaching based on 6G,” The Educational Review, USA,
vol. 5, no. 2, pp. 27–30, 2021.
[76]
K. David and H. Berndt, “6G vision and requirements: Is there any need for
beyond 5g?” IEEE Vehicular Technology Magazine, vol. 13, no. 3, pp.
72–80, 2018.
[77]
F. Hu, B. Chen, and K. Zhu, “Full spectrum sharing in cognitive radio networks
toward 5G: A survey,” IEEE Access, vol. 6, pp. 15 754–15 776,
2018.
[78]
B. Li, Z. Fei, and Y. Zhang, “Uav communications for 5G and beyond: Recent
advances and future trends,” IEEE Internet of Things Journal, vol. 6,
no. 2, pp. 2241–2263, 2018.
[79]
H. You, Key parameters for 5G mobile communications [ITU-R WP 5D
standardization status]. Seongnam-Si,
South Korea, Tech. Rep: KT. Korea Telecom, 2015.
[80]
S. Nayak and R. Patgiri, “6G communication technology: A vision on
intelligent healthcare,” in Health Informatics: A Computational
Perspective in Healthcare, 2021, pp. 1–18.
[81]
A. Kumari, R. Gupta, and S. Tanwar, Amalgamation of blockchain and IoT
for smart cities underlying 6G communication: A comprehensive
review. Computer Communications,
2021.
[82]
S. Ali, W. Saad, N. Rajatheva et al., “6G white paper on machine
learning in wireless communication networks. arxiv,” 2020, preprint.
[83]
N. Saeed, A. Elzanaty, H. Almorad, H. Dahrouj, T. Y. Al-Naffouri, and M.-S.
Alouini, “Cubesat communications: Recent advances and future challenges,”
IEEE Communications Surveys and Tutorials, vol. 22, no. 3, pp.
1839–1862, 2020.
[84]
M. A. Al-Jarrah, M. A. Yaseen, A. Al-Dweik, O. A. Dobre, and E. Alsusa,
“Decision fusion for iot-based wireless sensor networks,” IEEE
Internet of Things Journal, vol. 7, no. 2, pp. 1313–1326, 2020.
[85]
““internet available: of things 2016,” 2016. [online].
https://www.cisco.com/c/dam/en/us/products/collateral/se/
internetof-things/at-a-glance-c45-731471.pdf.” Cisco.
[86]
““number to 125 of connected billion iot by devices 2030,” will 2021.
surge [online]. available:
https://news.ihsmarkit.com/prviewer/releaseonly/slug/
number-connected-iot-devices-will-surge-125-billion-2030.”
[87]
M. R. Palattella, M. Dohler, A. Grieco, G. Rizzo, J. Torsner, T. Engel, and
L. Ladid, “Internet of things in the 5g era: Enablers, architecture, and
business models,” IEEE Journal on Selected Areas in Communications,
vol. 34, no. 3, pp. 510–527, 2016.
[88]
A. Jagannath, J. Jagannath, and T. Melodia, “Redefining wireless communication
for 6G: Signal processing meets deep learning with deep unfolding,”
IEEE Transactions on Artificial Intelligence, pp. 1–1, 2021.
[89]
R.-A. Stoica and G. T. F. de Abreu, “6G: the wireless communications
network for collaborative and ai applications.” 2019, preprint.
[90]
J. Zhao, “A survey of intelligent reflecting surfaces (irss): Towards 6G
wireless communication networks. arxiv,” 2019, preprint.
[91]
N. H. Mahmood, H. Alves, O. A. López et al., “Six key enablers for
machine type communication in 6G.” Tech. Rep., 2019.
[92]
C.-X. Wang, F. Haider, X. Gao et al., “Cellular architecture and key
technologies for 5g wireless communication networks.” IEEE
communications magazine, vol. 52, no. 2, pp. 122–130, 2014.
[93]
T. Jung, T. Kwon, and C.-B. Chae, “Qoe-based transmission strategies for
multi-user wireless information and power transfer,” ICT Express,
vol. 1, no. 3, pp. 116–120, 2015.
[94]
S. Elmeadawy and R. M. Shubair, “Enabling technologies for 6G future
wireless communications: Opportunities and challenges. arxiv,” 2020,
preprint.
[95]
A. Mitra, S. Biswas, T. Adhikari et al., “Emergence of edge computing:
An advancement over cloud and fog11th international conference on
computing.” IEEE: Communication and
Networking Technologies (ICCCNT), 2020.
[96]
S. George, T. Eiszler, R. Iyengar et al., “Openrtist: End-to-end
benchmarking for edge computing,” IEEE Pervasive Computing, vol. 19,
no. 4, pp. 10–18, 2020.
[97]
S. Liu, L. Liu, J. Tang et al., “Edge computing for autonomous driving:
Opportunities and challenges,” Proceedings of the IEEE, vol. 107,
no. 8, pp. 1697–1716, 2019.
[98]
N. Kalatzis, M. Avgeris, D. Dechouniotis et al., “Edge computing in iot
ecosystems for uav-enabled early fire detection.” IEEE: IEEE International Conference on Smart Computing
(SMARTCOMP),, 2018.
[99]
G. B. Neumann, V. P. D. Almeida, and M. Endler, Smart Forests: fire
detection service. 2018 IEEE symposium on computers and communications
(ISCC). IEEE, 2018.
[100]
H. Bura, N. Lin, N. Kumar et al., An edge based smart parking
solution using camera networks and deep learning. IEEE International Conference on Cognitive Computing (ICCC),
2018.
[101]
R. Ke, Y. Zhuang, Z. Pu et al., A smart, efficient, and reliable
parking surveillance system with edge artificial intelligence on IoT
devices. IEEE Transactions on
Intelligent Transportation Systems., 2020.
[102]
X. Chang, W. Li, C. Xia et al., “From insight to impact: Building a
sustainable edge computing platform for smart homes.” IEEE 24th International Conference on Parallel and
Distributed Systems (ICPADS), 2018.
[103]
T. Chakraborty and S. K. Datta, Home automation using edge computing and
internet of things. IEEE
International Symposium on Consumer Electronics (ISCE), 2017.
[104]
W. Shi, J. Cao, Q. Zhang et al., “Edge computing: Vision and
challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp.
637–646, 2016.
[105]
M. Kobayashi, G. Caire, and G. Kramer, Joint state sensing and
communication: Optimal tradeoff for a memoryless case. IEEE International Symposium on Information Theory (ISIT),
2018.
[106]
X. Shen, J. Gao, W. Wu et al., “Ai-assisted network-slicing based
next-generation wireless networks,” IEEE Open Journal of Vehicular
Technology, vol. 1, pp. 45–66.
[107]
E. J. Black, “Holographic beam forming and mimo,” in Pivotal
Commware. unpublished, 2017.
[108]
V. Liu, A. Parks, V. Talla et al., “Ambient backscatter: Wireless
communication out of thin air,” ACM SIGCOMM Computer Communication
Review, vol. 43, no. 4, pp. 39–50, 2013.
[109]
C. Yi, S. Huang, and J. Cai, “An incentive mechanism integrating joint power,
channel and link management for social-aware d2d content sharing and
proactive caching,” IEEE Transactions on Mobile Computing, vol. 17,
no. 4, pp. 789–802, 2017.
[110]
Y. Zeng, R. Zhang, and T. J. Lim, “Wireless communications with unmanned
aerial vehicles: Opportunities and challenges,” IEEE communications
magazine, vol. 54, no. 5, pp. 36–42, 2016.
[111]
H. Sarieddeen, N. Saeed, T. Y. Al-Naffouri, and M.-S. Alouini, “Next
generation terahertz communications: A rendezvous of sensing, imaging, and
localization,” IEEE Communications Magazine, vol. 58, no. 5, pp.
69–75, 2020.
[112]
S. Hanna, “Technological and regulatory developments for electromagnetic
transmission into the millimeter wave and terahertz wave spectrum,” in
Proceedings of the Future Technologies Conference, 2018.
[113]
I. Siaud and A.-M. Ulmer-Moll, THz Communications: an overview and
challenges., 2019.
[114]
K. Tekbıyık, A. R. Ekti, G. K. Kurt et al., “Terahertz band
communication systems: Challenges, novelties and standardization efforts,”
Physical Communication, vol. 35, p. 10070, 2019.
[115]
S. Mumtaz, J. M. Jornet, J. Aulin et al., “Terahertz communication for
vehicular networks,” IEEE Transactions on Vehicular Technology,
vol. 66, p. 7.
[116]
T. S. Rappaport, Y. Xing, O. Kanhere et al., “Wireless communications
and applications above ghz: Opportunities and challenges for 6G and
beyond.” IEEE Access, vol. 100, pp. 78 729–78 757, 2019.
[117]
M. Z. Chowdhury, M. T. Hossan, A. Islam et al., “A comparative survey
of optical wireless technologies: Architectures and applications,”
IEEE Access, vol. 6, pp. 9819–9840, 2018.
[118]
M. T. Hossan, M. Z. Chowdhury, M. Shahjalal et al., “Human bond
communication with head-mounted displays: Scope, challenges, solutions, and
applications,” IEEE communications magazine, vol. 57, no. 2, pp.
26–32, 2019.
[119]
N. Saeed, A. Celik, T. Y. Al-Naffouri, and M.-S. Alouini, “Underwater optical
wireless communications, networking, and localization: A survey,” Ad
Hoc Networks, vol. 94, p. 101935, 2019.
[120]
A. Douik, H. Dahrouj, T. Y. Al-Naffouri et al., “Hybrid
radio/free-space optical design for next generation backhaul systems,”
IEEE Transactions on Communications, vol. 64, no. 6, pp. 2563–2577,
2016.
[121]
B. Bag, A. Das, I. S. Ansari et al., “Performance analysis of hybrid
fso systems using fso/rf-fso link adaptation,” IEEE Photonics
Journal, vol. 10, no. 3, pp. 1–17, 2018.
[122]
Z. Gu, J. Zhang, Y. Ji et al., “Network topology reconfiguration for
fso-based fronthaul/backhaul in g+ wireless networks.” IEEE Access,
vol. 6, pp. 69 426–69 437.
[123]
S. Hu, F. Rusek, and O. Edfors, “Beyond massive mimo: The potential of data
transmission with large intelligent surfaces,” IEEE Transactions on
Signal Processing, vol. 66, no. 10, pp. 2746–2758, 2018.
[124]
S. Underwood, “Blockchain beyond bitcoin,” Commun. ACM, vol. 59,
no. 11, p. 15–17, Oct. 2016. [Online]. Available:
https://doi.org/10.1145/2994581
[125]
M. W. Akhtar, S. A. Hassan, R. Ghaffar, H. Jung, S. Garg, and M. S. Hossain,
“The shift to 6G communications: vision and requirements,”
Human-centric Computing and Information Sciences, vol. 10, no. 1, pp.
1–27, 2020.
[126]
K. Fan, Y. Ren, Y. Wang, H. Li, and Y. Yang, “Blockchain-based efficient
privacy preserving and data sharing scheme of content-centric network in
5g,” IET communications, vol. 12, no. 5, pp. 527–532, 2018.
[127]
K. Kotobi and S. G. Bilen, “Secure blockchains for dynamic spectrum access: A
decentralized database in moving cognitive radio networks enhances security
and user access,” ieee vehicular technology magazine, vol. 13, no. 1,
pp. 32–39, 2018. |
Stellar Wind Accretion in GX301-2:
Evidence for a High-density Stream
D.A. Leahy and M.Kostka
Department of Physics & Astronomy, University of Calgary,
Calgary, AB, Canada, T2N 1N4
Abstract
The X-ray binary system GX301-2 consists of a neutron star in an
eccentric orbit accreting from the massive early-type star WRAY 977. It has
previously been shown that the X-ray orbital light curve is consistent with the
existence of a gas stream flowing out from WRAY 977 in addition to
its strong stellar wind. Here, X-ray monitoring observations by the
Rossi X-ray Timing Explorer (RXTE) All-Sky Monitor (ASM) and pointed observations
by the RXTE Proportional Counter Array (PCA) over the past decade are analyzed.
We analyze both the flux and column
density dependence on orbital phase. The wind and stream dynamics are calculated
for various system inclinations, companion rotation rates and wind velocities, as
well as parametrized by the stream width and density. These calculations are used
as inputs to determine both the
expected accretion luminosity and the column density along the line-of-sight to the
neutron star. The model luminosity and column density are compared to observed flux
and column density vs. orbital phase, to constrain the properties of the stellar
wind and the gas stream. We find that the change between bright and medium intensity
levels is primarily due to decreased mass loss in the stellar wind, but the change
between medium and dim intensity levels is primarily due to decreased stream density.
The mass-loss rate in the stream exceeds that in the stellar
wind by a factor of $\sim$2.5.
The quality of the model fits is significantly better for lower inclinations,
favoring a mass for WRAY 977 of $\sim 53-62M_{\odot}$.
stars: neutron – stars: individual: GX301-2
– stars: emission line, Be – X-rays: stars
1 Introduction
GX 301-2 (also known as 4U 1223-62) is a pulsar with a 680 s rotation period,
in a 41.5 day eccentric orbit (Sato et al., 1986).
The mass function is $31.8M_{\rm\odot}$, making
the minimum companion mass $35M_{\rm\odot}$ for a $1.4M_{\rm\odot}$ neutron star.
The companion Wray 977 has a B1 Ia+ spectral classification (Kaper et al., 1995),
determined via comparison with the hypergiant $\zeta^{1}$ Sco.
This analysis also yielded an upper limit for the radius of Wray 977
of $75R_{\rm\odot}$, placing it just inside its tidal radius.
The neutron star flares regularly in X-rays approximately 1-2 days before
periastron passage, and several stellar wind
accretion models have been proposed to explain the magnitude of the
flares and their orbital phase dependence
(e.g., Koh et al., 1997; Leahy, 1991; Haberl, 1991).
The modeling by Leahy (1991) and Haberl (1991) was done using TENMA and EXOSAT
observations, respectively, which cover many short data sets spaced irregularly
over orbital phase.
More recently, better orbital phase coverage has been obtained by CGRO/BATSE
(Koh et al., 1997),
which, however, has much lower sensitivity than the previous studies.
The broad band X-ray spectrum of GX301-2 has been studied by TENMA (Leahy & Matsuoka,
1990, Leahy & Matsuoka, 1989a and Leahy & Matsuoka, 1989b) and ASCA measurements
(Saraswat et al., 1996). The latter study illustrates the complexity of the GX301-2
spectrum. Four components are necessary:
i) an absorbed power law with high column density;
ii) a scattered power law with much lower column density; iii) a thermal
component with temperature of 0.8 keV; iv) a set of six emission lines
(including the iron line at 6.4 keV). Of the above, components
ii) and iv) are due to
reprocessing in the stellar-wind gas from Wray 977 that surrounds the X-ray
source. Reprocessed spectra were calculated for a centrally illuminated cloud
by Leahy and Creighton (1993) and for an externally illuminated cloud by Leahy (1999),
including the Comptonized iron line shapes. Later, the Comptonized iron line was
detected in GX301-2 using the Chandra High Energy Transmission Grating (Watanabe et al., 2003),
confirming the high column density (on the order of $10^{24}$ cm${}^{-2}$) and
yielding an upper limit on electron temperature of $\sim$3 eV.
The orbital phase dependence of the X-ray spectrum of GX301-2 was observed by
the PCA on board RXTE (Mukherjee and Paul, 2004). It was concluded that clumpiness in the matter
surrounding the neutron star caused large variability in column density measurements.
Long-term monitoring of GX301-2 has been carried out
by the ASM on board RXTE. Analysis of these observations for the 5 year time period
MJD 50087.2 to MJD 52284.5 was done by Leahy (2002) (henceforth referred to as L02).
In this paper the light curve based on the significantly longer 10 year RXTE/ASM
database is analyzed. In addition we study the flux and column density
measurements made by the RXTE/PCA, as well as column densities derived from the
RXTE/ASM softness ratios. Improved modeling methods are introduced: accurate analytic
description of the stream and inclusion of simultaneous flux and column density
calculations. The inclusion of this much more extensive data and the more realistic
modeling results in significant improvement on constraints on the system properties
of GX301-2 and allows new conclusions to be drawn.
2 RXTE/ ASM and RXTE/ PCA Observations
The ASM on RXTE (Levine et al., 1996)
consists of three scanning shadow cameras (SSCs),
each with a field of view of $6^{\circ}$ by $90^{\circ}$ FWHM. The SSCs are rotated
in a sequence of “dwells”, each with a typical exposure of 90 seconds,
so that most of the sky can be covered in one day. The dwell data are
also averaged for each day to yield a daily-average.
The RXTE/ ASM dwell data and daily-average data were obtained from the
ASM web site.
The data reduction to
obtain the count rates and errors from the satellite observations was
carried out by the RXTE/ASM team, and the procedures are described at
the web site.
The ASM count rates used here include the full energy range band as well as
the three sub-bands 1.3-3.0 keV, 3.0-5.0 keV and 5.0-12.1 keV. The data covered
the time period MJD 50172.6 to MJD 53978.6. The regular outbursts every 41.5 day
orbital cycle are seen in the 5-12 keV count rates, as well as the variability
from cycle to cycle.
The orbital parameters of GX301-2, updated with the BATSE observations
(Koh et al., 1997) and used for the current study are as follows.
$P_{\rm orb}=41.498$ days, $a_{x}\sin i=368.3$ lt-s, eccentricity $e=0.462$,
longitude of periastron $\omega=310^{\circ}$, and time of
periastron passage $T_{0}=$ MJD 48802.79.
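As a concrete illustration, the orbital geometry implied by these elements can be evaluated numerically. The Python sketch below is our illustration, not code from the paper: it solves Kepler's equation for the quoted period, eccentricity, and periastron epoch, and the function names and unit choice (separation in units of the semimajor axis) are ours.

```python
import math

# Orbital elements of GX301-2 quoted above (Koh et al., 1997)
P_ORB = 41.498          # orbital period [days]
ECC = 0.462             # eccentricity
T0 = 48802.79           # time of periastron passage [MJD]

def mean_anomaly(mjd):
    """Mean anomaly in radians at a given MJD."""
    phase = ((mjd - T0) / P_ORB) % 1.0
    return 2.0 * math.pi * phase

def eccentric_anomaly(M, ecc=ECC, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M if ecc < 0.8 else math.pi
    for _ in range(50):
        dE = (E - ecc * math.sin(E) - M) / (1.0 - ecc * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def separation(mjd, a=1.0):
    """Star-pulsar separation in units of the semimajor axis a."""
    E = eccentric_anomaly(mean_anomaly(mjd))
    return a * (1.0 - ECC * math.cos(E))
```

At periastron the separation is $a(1-e)$ and at apastron $a(1+e)$, so with $e=0.462$ the separation varies by the factor $(1+e)/(1-e)\approx2.7$ around the orbit.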
The dwell data are used in the analysis that follows.
The three RXTE/ASM sub-bands and the full energy band were epoch folded.
The resulting orbital light curves
are shown in Fig. 1, with orbital phase zero defined by the
time of periastron passage, $T_{0}$.
GX301-2 shows significant variability, above statistical uncertainties, in
intensity from orbit to orbit.
This is illustrated in Fig. 2 which shows the RXTE/ ASM data over the entire observation
period with timebins equal to one orbital period.
The r.m.s. variability is 0.33 ASM c/s, compared to a mean error of 0.044 ASM c/s: the
variability is real at greater than $7\sigma$ significance.
There is a secular decrease in the
mean flux of $-0.07$ ASM c/s per year.
However the length of the data set is not long enough to establish a long term trend,
and the flux is also consistent with no secular decrease after $\sim$MJD 51200.
The high time-resolution data were tested for long term periodicities
by examining $\chi^{2}$ vs. period for
epoch folding over periods up to 500 days. This showed peaks at N times the
orbital period (with N=1, 2, 3, 4 …): this is due to aliasing of the orbital light
curve. To negate the effect of aliasing, one bin per orbital period was used as input
to the epoch folding. This then yielded peaks at 4 and 8 times the orbital period.
A visual inspection of Figure 2 verifies that this long-term period is real: there
are prominent oscillations around MJD 52000 which have a period $\sim$ 4 times
the orbital period.
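The epoch-folding period search described above can be sketched as follows. This Python fragment is our illustration, not the pipeline actually used: it folds a time series on a trial period and computes a simplified per-bin $\chi^{2}$ of the folded profile against the global mean, which is large when a real modulation exists at the trial period.

```python
import numpy as np

def epoch_fold(t, rate, period, nbins=20):
    """Fold a time series on a trial period; return the binned profile."""
    phase = (t / period) % 1.0
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    return np.array([rate[bins == b].mean() for b in range(nbins)])

def fold_chi2(t, rate, err, period, nbins=20):
    """Chi^2 of the folded profile against a constant (the global mean):
    large values flag a real modulation at the trial period."""
    phase = (t / period) % 1.0
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    mean, chi2 = rate.mean(), 0.0
    for b in range(nbins):
        sel = bins == b
        if not sel.any():
            continue
        m = rate[sel].mean()
        s = np.sqrt((err[sel] ** 2).sum()) / sel.sum()  # error on bin mean
        chi2 += (m - mean) ** 2 / s ** 2
    return chi2
```

As noted in the text, feeding finely sampled data to such a search produces aliased peaks at multiples of the orbital period; rebinning to one point per orbit before folding suppresses them.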
Fig. 3 shows the 5-12.1 keV orbital light curves when the total time period is
divided into three intensity levels based on the average count rate per
orbit: bright (average count rate per orbit greater than 2 c/s); medium (average
count rate per orbit in the range 1.5-2 c/s); and dim (average count rate per orbit
less than 1.5 c/s).
The variability in the shape of the light curve for GX301-2 between bright, medium
and dim levels is primarily due to variability in
the outburst peak near orbital phase 0.9. The medium and dim level folded light
curves are consistent with each other between orbital phases 0.1 to 0.8, and the bright
level light curve is different from medium and dim between
orbital phases 0.3 to 0.55.
Column densities were extracted from the RXTE/PCA spectral fits of Mukherjee and Paul (2004). The
values used were the column densities of the absorbed component, since that
component measures the column density to the neutron star, whereas the column
densities of the scattered component are harder to interpret and represent mean
values to the scattering region. The orbital phase coverage of the RXTE/PCA
column densities is not very uniform, and all column densities are from a
single-orbit observation
of GX301-2. To obtain better orbital coverage and to cover the same multiyear timespan
as the ASM lightcurve observations, an estimate of column density versus orbital phase
was created based on the 3-5 keV to 5-12 keV softness ratio of the ASM observations.
Conversion coefficients from the softness
ratio to column density were determined using NASA’s WebPIMMS software assuming a
power law spectrum with photon index -1.0. Figure 4 shows the derived ASM column
densities compared to the observed PCA column densities.
The main approximation in the calculation of column densities from the ASM softness
ratio is the use of a single power-law spectrum, which is equivalent to ignoring the
scattering contribution to the spectrum.
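The conversion step itself reduces to interpolating a precomputed table of softness ratio versus column density. The sketch below is ours: the table values are placeholders, NOT the WebPIMMS output used in the paper; a real analysis would tabulate the ratio-to-$N_{H}$ mapping with WebPIMMS for the adopted power-law spectrum.

```python
import numpy as np

# Illustrative conversion table: softness ratio (3-5 keV)/(5-12 keV)
# vs. column density N_H [1e22 cm^-2] for an assumed power-law source.
# These numbers are placeholders, not the WebPIMMS values of the paper.
SOFTNESS = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.65])
NH_1E22  = np.array([80.0, 40.0, 20.0, 10.0,  5.0,  2.0])

def softness_to_nh(ratio):
    """Interpolate column density from an ASM softness ratio.

    Softness decreases monotonically with N_H, so we interpolate on
    the table as given (SOFTNESS ascending, NH_1E22 descending)."""
    return float(np.interp(ratio, SOFTNESS, NH_1E22))
```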
3 Model
3.1 Wind Model
The stellar wind velocity and density profiles are considered first.
Both radial and azimuthal velocity components of the wind were included in
this analysis.
The radial wind velocity follows a power law and is taken to be of the form
$v_{w}(r)=v_{o}(1-R_{s}/r)^{\beta}+c_{s}$, with $\beta=1$, $c_{s}$ the speed of sound, and
$v_{o}$ the terminal velocity of the wind (Castor et al., 1975). Conservation of angular
momentum dictates that the azimuthal component of the wind velocity drops off as $1/r$.
The constant stellar angular speed $\omega$ of WRAY 977 is expressed using the
parameter $f$:
$$\omega(f)=f\times\omega_{\rm orb}+(1-f)\times\omega_{\rm per}$$
(1)
where $\omega_{\rm orb}$ is
the average orbital angular velocity ($2\pi/P_{\rm orb}$) and
$\omega_{\rm per}=\omega_{\rm orb}(1+e)^{0.5}/(1-e)^{1.5}=3.06\omega_{\rm orb}$ is
the periastron angular velocity. Thus the primary is taken to be rotating at
some angular velocity between $\omega_{\rm orb}$ and $\omega_{\rm per}$. The large
difference in $\omega_{\rm orb}$ and $\omega_{\rm per}$ is due to the high eccentricity
of the orbit.
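The wind law and the rotation parametrization of Eq. (1) can be written down directly; the short Python sketch below is our illustration (function names and the default sound speed of 20 km/s are our assumptions, not values from the paper).

```python
import math

def wind_speed(r, R_s, v_inf, beta=1.0, c_s=20.0):
    """Radial wind law v_w(r) = v_inf*(1 - R_s/r)**beta + c_s [km/s]."""
    return v_inf * (1.0 - R_s / r) ** beta + c_s

def azimuthal_wind_speed(r, r0, v_phi0):
    """Angular-momentum conservation: v_phi falls off as 1/r."""
    return v_phi0 * r0 / r

def omega_companion(f, P_orb=41.498, ecc=0.462):
    """Companion angular speed of Eq. (1), interpolating between the
    mean orbital and periastron angular velocities [rad/day]."""
    w_orb = 2.0 * math.pi / P_orb
    w_per = w_orb * (1.0 + ecc) ** 0.5 / (1.0 - ecc) ** 1.5
    return f * w_orb + (1.0 - f) * w_per
```

Note that the wind speed equals $c_{s}$ at the stellar surface and tends to $v_{o}+c_{s}$ far away, and that $\omega_{\rm per}/\omega_{\rm orb}=(1+e)^{0.5}/(1-e)^{1.5}\approx3.06$ for $e=0.462$, as quoted in the text.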
Fits of a wind-driven accretion model for GX301-2 were studied by L02 and found to
be unable to fit the “double bump” nature of the light curve. L02 also tested
a wind-plus-disk accretion model for GX301-2, which could not fit the observations.
The conclusion drawn by L02 was that a wind and stream model (Stevens, 1988) is the
best supported model for GX301-2.
3.2 Wind plus stream model
Simplified models describing a simultaneous wind and stream accretion process were
first used by Haberl (1991) (straight line stream) and Leahy (1991) (spiral stream)
to fit the less complete data from EXOSAT and TENMA.
In the model (Stevens, 1988), a stream originates at the point on the surface of
WRAY 977 that is nearest to GX301-2. The stream then bends backwards (with respect
to the direction of orbital velocity of GX301-2) as it travels radially outward from
the primary star.
Here we calculate the stream position at any given orbital phase by integrating
the radial and azimuthal equations of motion. Roughly speaking the stream is like
an Archimedes spiral co-rotating at the orbital angular velocity. However since
the point of origin of the stream follows the neutron star it has a greatly varying
angular velocity, by a factor of $[(1+e)/(1-e)]^{2}=7.4$ for GX301-2. The result is
a stream that changes shape considerably with orbital phase, similar to a
garden sprinkler with an uneven rotational speed.
Animations of the stream can be found at www.iras.ucalgary.ca/$\sim$leahy/. The
stream shape depends on the terminal wind velocity ($v_{o}$), the angular speed
($\omega(f)$) of WRAY 977, the system inclination, and on companion radius (through
the wind law). The model light curve depends also on the stream width and density
and the speed of sound. In order to calculate the full stream shape we started
with a set of terminal wind velocities and stellar angular velocities
($f$) then created a stream for each combination of $v_{o}$ and $f$.
To allow $v_{o}$ and $f$ to be free parameters, the stream position was interpolated in
$f$ and $v_{o}$.
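A much-simplified sketch of this integration is given below, in Python; it is our illustration, not the authors' code. It assumes a constant companion angular speed (the paper lets the launch point follow the neutron star), uses arbitrary but consistent units, and advances each parcel with a simple Euler step: radially with the $\beta=1$ wind law, azimuthally with the conserved specific angular momentum it had at launch.

```python
def stream_locus(t_now, v_inf, R_s, omega, n_parcels=100, dt=0.05):
    """Sketch of the spiral gas-stream locus: (r, phi) of parcels
    launched from just above radius R_s by a (here assumed uniformly
    rotating) launch point, then advanced ballistically.

    Units are arbitrary but consistent: time in days, omega in
    rad/day, v_inf in length/day.  Each parcel keeps its launch
    specific angular momentum L, so its azimuth advances as
    dphi/dt = L/r^2 (the 1/r azimuthal-speed law)."""
    locus = []
    for k in range(1, n_parcels + 1):
        t0 = t_now - 0.5 * k          # one launch every half day
        r = 1.001 * R_s               # start just above the surface
        phi = omega * t0              # azimuth of the launch point
        L = omega * r ** 2            # conserved specific ang. momentum
        t = t0
        while t < t_now:
            r += v_inf * (1.0 - R_s / r) * dt   # radial wind law
            phi += (L / r ** 2) * dt            # ang.-momentum cons.
            t += dt
        locus.append((r, phi))
    return locus
```

Older parcels lie farther out and trail the launch point by a growing angle, which is what produces the backwards-bending Archimedes-spiral shape described above.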
Analysis done by L02 suggests that two crossing points (with orbital phases
$\gamma_{1},\gamma_{2}$) between GX301-2 and the stream exist. While L02 allowed
$\gamma_{1},\gamma_{2}$ to be free parameters, here we constrain them to be
governed by the computed stream shape. The relative velocity of GX301-2 with respect
to the stream and the stream density both play a large role in the intensity
of the X-ray flux.
One stream crossing occurs just before periastron (orbital phase $\sim$0.93). GX301-2
is nearing its peak speed as it overtakes the stream, but the stream is near its
highest density and lowest radial velocity, resulting in a large increase in luminosity.
The second stream crossing is at orbital phase $\sim$0.55. As GX301-2 approaches
apastron it slows to its most leisurely pace, so the stream is able to overtake the
neutron star. Since the radial wind speed is highest and stream density lowest at
apastron, a significantly lower peak in luminosity occurs.
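The dependence of luminosity on local density and relative velocity follows the standard Bondi-Hoyle-Lyttleton scaling. The fragment below is our minimal sketch of that scaling, not the paper's full model (which includes the complete wind and stream geometry).

```python
import math

G = 6.674e-8            # gravitational constant [cgs]
M_NS = 1.4 * 1.989e33   # neutron-star mass [g]
R_NS = 1.0e6            # neutron-star radius [cm]

def bondi_hoyle_luminosity(rho, v_rel):
    """Standard Bondi-Hoyle-Lyttleton accretion luminosity [erg/s]:
    Mdot = pi * R_acc**2 * rho * v_rel with R_acc = 2*G*M/v_rel**2,
    so Mdot ~ rho / v_rel**3, and L = G * M * Mdot / R_NS."""
    r_acc = 2.0 * G * M_NS / v_rel ** 2
    mdot = math.pi * r_acc ** 2 * rho * v_rel
    return G * M_NS * mdot / R_NS
```

The luminosity scales linearly with density and as $v_{\rm rel}^{-3}$, which is why the dense, slow stream encountered near periastron produces the dominant flare while the fast, dilute wind near apastron gives only a weak peak.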
Physically one expects the stream width to increase with radial distance from the
companion star, due to expansion of the higher density, overpressured stream in the
lower density surrounding stellar wind. Here we use a Gaussian density profile with
variable width. The width ($\sigma$) is taken as a power-law function
of distance from the center of the companion star:
$\sigma(r)=\sigma_{0}(r/r_{per})^{-\kappa}$, with $\sigma_{0}$ the angular width
at periastron distance $r_{per}$.
$\kappa<0$ corresponds to a stream with increasing physical width
as the stream propagates outward.
We ensure mass conservation by employing the continuity equation, so the density of
the stream varies with $r$ depending on the value of $\kappa$.
The angular width of the stream (viewed from the companion star)
depends on $(r/r_{per})^{-\kappa-1}$, so $\kappa>-1$ corresponds to a stream with
decreasing angular width as the stream propagates outward.
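The Gaussian profile with a continuity-normalized central density can be sketched as follows; this Python fragment is ours, and in particular the choice of cross-section $A\propto r\,\sigma(r)$ and the overall normalization are our assumptions, not the paper's exact prescription.

```python
import math

def stream_density(r, dphi, ds, sigma0, kappa, r_per, v_w, rho_w):
    """Sketch of the Gaussian stream density profile.

    sigma(r) = sigma0*(r/r_per)**(-kappa) sets the transverse width,
    and the on-axis density follows from continuity,
    rho_c(r) * v_w(r) * A(r) = const, taking the stream cross-section
    A proportional to r*sigma(r).  ds is the central enhancement over
    the wind density at periastron; v_w and rho_w are callables giving
    the wind speed and density at radius r."""
    sig = sigma0 * (r / r_per) ** (-kappa)
    # constant mass flux carried by the stream, fixed at periastron
    flux0 = ds * rho_w(r_per) * v_w(r_per) * r_per * (sigma0 * r_per)
    rho_c = flux0 / (v_w(r) * r * sig * r)      # continuity
    return rho_w(r) + rho_c * math.exp(-dphi ** 2 / (2.0 * sig ** 2))
```

With this normalization the on-axis density at periastron is $(1+ds)$ times the wind density, and the mass flux carried by the stream is the same at every radius.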
3.3 Comparison to Data
GX301-2 shows significant absorption by its stellar wind in soft X-rays
with column densities several times $10^{23}$ cm${}^{-2}$. This shows up well
in Fig. 1 above:
Band 1 and Band 2 data are affected significantly by absorption but Band 3
(5-12.1 keV) is mostly free of absorption effects since the photoelectric
cross-section is very small above 3-4 keV.
This is also confirmed by the consistency in shape of the Band 3 light curve with
the BATSE light curve (Koh et al., 1997), although the RXTE/ ASM light curve here is
of significantly better statistical quality.
Thus the 5-12.1 keV band flux is
taken here to be proportional to the X-ray luminosity of the pulsar.
The comparison of our wind plus stream models to the RXTE/ ASM orbital light
curve is made by $\chi^{2}$ minimization using the non-linear
conjugate gradient method. For each minimization (i.e. fitting), some parameters
were taken as fixed parameters (stellar radius and system inclination) and the
remaining parameters were taken as free parameters.
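The fitting step can be sketched generically; the Python fragment below is our illustration of $\chi^{2}$ minimization by nonlinear conjugate gradients (here via `scipy.optimize.minimize`), with any callable standing in for the actual wind-plus-stream light-curve code, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def chi2(params, phase, flux, err, model):
    """Chi-squared of a model light curve against the folded data."""
    return float(np.sum(((flux - model(phase, params)) / err) ** 2))

def fit_light_curve(phase, flux, err, model, p0):
    """Minimize chi^2 over the free parameters with nonlinear
    conjugate gradients; `model` is any callable standing in for the
    full wind-plus-stream light-curve code."""
    res = minimize(chi2, np.asarray(p0, dtype=float),
                   args=(phase, flux, err, model), method="CG")
    return res.x, res.fun
```

In practice some parameters (stellar radius and inclination) are held fixed for each minimization, exactly as described above, and only the remaining ones enter `p0`.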
The mass-radius constraints on WRAY 977 were discussed in detail in L02. Here
we replot the radius contraints in Fig. 5. We use inclination rather than mass as the
independent variable, since radius and inclination are critical inputs to our
model calculations. Also we have extended the upper limit of $T_{eff}$ to 22500 K
as suggested by the data of Kaper et al. (1995),
and show the most relevant region of the radius vs. inclination diagram. Allowable
companion radii are smaller than those that result in eclipses (i.e. left of the
“Eclipse Limit” line) and smaller than those that overflow the Roche lobe (below
the “Mean Roche Radius” line). These are hard constraints that cannot be violated.
The estimated mass-radius relation and estimated $T_{eff}$ give that the star should
fall in between the $T_{eff}$=20000 K and 22500 K lines.
Thus for our models, we chose allowable inclination and radius pairs falling in
the allowed region from Fig. 5. We find a maximum radius of 78 $R_{\odot}$ consistent
with the maximum of (Kaper et al., 1995), and a minimum radius of 45 $R_{\odot}$.
These correspond to inclinations of 52${}^{\circ}$ and 77.5${}^{\circ}$.
Different pairs of stellar radii and inclination were chosen covering the allowed
region in the radius-inclination plane. These are listed in the first two columns
of Table 1.
For each case of the two fixed parameters, $R_{c}$ and inclination,
the stream model was fit to the ASM data.
The free parameters in the model were terminal wind velocity ($v_{o}$),
stream density contrast (ds), stream angular width in radians ($\sigma_{o}$), stream
width variation parameter $\kappa$, mass-loss rate ($\dot{m}$),
a normalization factor, and the stellar
angular velocity factor ($f$) that determined the rate of rotation of WRAY 977.
Initial fits were done on the light curves and column densities from the full
ASM data set, then fits were
carried out on the three subsets of lightcurve and column density data
for the different intensity levels
(bright, medium and dim). Since the light curves for the three different
intensity levels
were significantly different, we present results for separately fitting the
different intensity levels.
We carried out some joint fits to both light curve and column density, and found no
significant difference from a two-stage procedure: fitting the light curve to
determine all model light-curve parameters, followed by fitting the column density
to determine the wind mass-loss rate.
This is not surprising, as the errors on the light-curve data are very small, whereas
the errors on the column-density data are large (see Fig. 3 and Fig. 4);
thus the light curve completely dominated the determination of the joint parameters.
The two-stage procedure has two advantages: much faster run-time, and
less reliance on the approximate column densities.
4 Results
For each set of fixed parameters (radius $R_{c}$ and inclination) in Table 1,
the best fit parameters for fitting to the ASM light curve and column density data
are listed in Table 1. B, M and D refer to bright, medium and dim intensity levels.
The fit parameters for the light curve fits are base wind velocity ($v_{wo}$),
stellar angular velocity parameter ($f1$), stream width and width
variation parameters ($\sigma_{o}$ and $\kappa$), and stream
central density enhancement at periastron ($ds$).
The $\chi_{L}^{2}$ values for the light curve fits are listed in column 8.
The large $\chi_{L}^{2}$ compared to the number of degrees of freedom (34)
shows that the model does not provide a statistically good fit.
Fig. 6 shows model fits to the ASM lightcurve data (top panel) and ASM column
density data (bottom panel) for bright level for
$R_{c}=75R_{\odot}$, inclination 55${}^{\circ}$. The shape of the light curve is fit
very well: the contributions to $\chi_{L}^{2}$ mainly come from real fluctuations in
the observed light curve at orbital phases $\sim$0.3-0.4 and $\sim$0.6-0.7.
This is likely due to clumps in the stellar wind and stream that we cannot model
currently: Mukherjee and Paul (2004) also noted large fluctuations in their RXTE/PCA observations
which they attributed to clumps in the stellar wind.
There are several trends in the fit results. As one goes to smaller $R_{c}$ and larger
inclination, the best fit wind velocity decreases: from $\sim$600 km/s to
$\sim$300 km/s.
The derived windspeeds are in good agreement with the values estimated from the
optical spectrum of WRAY 977 (Kaper et al. (1995)).
In all cases, for a given $R_{c}$ and inclination, the wind velocity
is highest for bright and lowest for dim.
Also in all cases the best-fit mass-loss
rate is highest for bright and similar for medium and dim, whereas the density
enhancement ratio, $ds$, is smallest for bright and highest for medium.
Instead, one can consider the central density of the stream (proportional to the
density enhancement ratio, column 7, times the mass-loss rate, column 9).
This shows that the central density of the stream is essentially unchanged between
bright and medium levels, so the main change between bright and medium is the
stellar wind mass-loss rate. However, the central density of the stream drops from
medium to dim, whereas the wind mass-loss rate does not change significantly.
Thus the main change between medium and dim is that the stream density is
lower for dim.
The angular rotation rate parameter $f1$ systematically decreases with $R_{c}$.
From the stream model, this is just due to the requirement of
having the neutron star-stream crossings at
the correct observed orbital phases as the position of the base of the wind
(at $R_{c}$) changes.
From equation 1, a value of $f1=1$ corresponds to companion rotation synchronous
with the mean orbital angular velocity,
and a value of $f1=0$ to rotation synchronous with the periastron angular velocity.
Tidal torques in the eccentric orbit would yield an intermediate value, consistent
with the best-fit values in Table 1.
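The two limits quoted for $f1$ suggest a simple interpolation of the companion rotation rate. The sketch below assumes a linear form for equation 1 (which is not reproduced in this section) and uses the GX301-2 eccentricity $e\approx 0.462$ from Koh et al. (1997); both are illustrative assumptions, not the paper's exact model.

```python
import math

# Hedged sketch: assume f1 linearly interpolates the companion's angular
# velocity between the periastron value (f1 = 0) and the mean orbital
# value (f1 = 1), matching the two limits quoted in the text.
P_ORB_S = 41.5 * 86400.0      # 41.5-day orbital period, in seconds
ECC = 0.462                   # GX301-2 eccentricity (Koh et al. 1997)

omega_mean = 2.0 * math.pi / P_ORB_S
# Keplerian angular velocity at periastron, relative to the mean motion
omega_peri = omega_mean * math.sqrt(1.0 + ECC) / (1.0 - ECC) ** 1.5

def companion_omega(f1):
    """Companion rotation rate for a given f1 (assumed linear form)."""
    return omega_peri + f1 * (omega_mean - omega_peri)
```

Since the periastron angular velocity exceeds the mean motion for an eccentric orbit, an intermediate best-fit $f1$ corresponds to a rotation rate between the two extremes, as expected from tidal torques.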
Values of $\kappa$ are in the range -0.4 to 0, thus the stream
physical width grows with $r$, as expected, and the stream angular
width decreases with $r$ ($-1<\kappa<0$).
The stream density enhancement over that of the spherical component of the stellar
wind is typically 25 (with a range of 20 to 30), and the stream angular width
at periastron
is in the range of 22${}^{\circ}$ to 26${}^{\circ}$.
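The sign constraints on $\kappa$ can be checked with a toy power law: if the angular half-width scales as $r^{\kappa}$ with $-1<\kappa<0$, the angular width shrinks with $r$ while the physical width $r\cdot r^{\kappa}=r^{1+\kappa}$ still grows. The functional form and reference width below are illustrative assumptions, not the L02 stream model.

```python
def angular_width_deg(r, kappa, theta0=24.0, r0=1.0):
    """Angular half-width (deg) at radius r (in units of r0); theta0 is an
    illustrative value from the quoted 22-26 degree periastron range."""
    return theta0 * (r / r0) ** kappa

KAPPA = -0.4                  # representative best-fit value (range -0.4 to 0)
r_in, r_out = 1.0, 2.0
th_in = angular_width_deg(r_in, KAPPA)
th_out = angular_width_deg(r_out, KAPPA)
# angular width decreases with r, but r * width (physical width) increases
```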
The column density model gives two gradual peaks (see Fig. 6): one near periastron
and one near orbital phase 0.25. The stream is seen to dominate over the wind component
for the periastron peak but both wind and stream contribute roughly equally for the
peak near phase 0.25. The system inclination can be such that the neutron star
is nearly eclipsed by the companion (see Fig. 5 for $R_{c}$-inclination values where
this occurs, near the “Eclipse Limit” line). In this case the wind contribution to
the column density becomes large near orbital phase 0.25, when the line-of-sight
passes near the companion’s surface, and the $\chi_{N}^{2}$ for the column density fit
becomes large. An example of this is the (48$R_{\odot}$,75${}^{\circ}$) case in Table 1.
Finally, we have calculated the mass-loss rate in the stream and added this as
column 10 in Table 1. It is seen that the stream mass-loss rate is about a factor
of 2 to 2.5 times higher than the mass-loss rate in the wind. This is a dramatic
confirmation of the importance of the stream in the total mass-loss rate from
WRAY 977. It is also consistent with WRAY 977 being close to filling its Roche lobe.
Since this can only occur at lower inclinations without having WRAY 977 too close to
eclipsing, this is another indicator that the system inclination is near the low end
of the allowed range (near $\sim 55^{\circ}$).
5 Conclusion
Long term (10-year) monitoring of GX301-2 with the RXTE/ASM has revealed some new
properties of this high-mass X-ray binary. Secular changes in flux (Fig. 2)
are also accompanied by flux oscillations with a period of four 41.5-day binary
orbits. The orbital light curve is seen to be significantly different between bright,
medium and dim flux levels (Fig.3).
We have constructed an improved stellar wind and stream model for the GX301-2/WRAY 977
binary system, extending the work of L02.
The model is compared to long term lightcurve observations from
the RXTE/ASM and to column densities derived from the RXTE/ASM 3-5 keV to 5-12 keV
softness ratios.
We have demonstrated the necessity of including a stream in the mass
flow from WRAY 977, in addition to a spherically symmetric wind, in order to explain
the observed light curves and column densities.
The timings and amplitudes of the
two peaks in the lightcurve, near orbital phases 0.92 and 0.5, are naturally explained
by accretion onto the neutron star from an Archimedes spiral-type stream. The
quality of the column density data is low due to statistical uncertainties.
Yet the column density data provides the primary constraint on the stellar
wind mass-loss rate.
We have found best fit parameters for a range of radii for WRAY 977 and a range of
system inclinations which are consistent with the physical constraints such as no
eclipse and maximum mean radius not exceeding the mean Roche radius (see Fig. 5).
From the model fits, we find the change between bright and medium intensity
levels is primarily due to decreased mass loss in the stellar wind, but the change
between medium and dim intensity levels is primarily due to decreased stream density.
For any intensity level, the total mass-loss rate in the stream exceeds that in the
wind by a factor of $\sim$2.5, indicating the crucial role of the stream in this
binary system.
The model fits at higher inclinations are significantly worse than those at
lower inclinations.
In Table 1, last column, we list the total $\chi^{2}$ (values for lightcurve and
column density fits summed).
The best fit values of ($R_{c},inc$) can be determined by summing the total $\chi^{2}$
for the three intensity levels B, M and D to yield a total $\chi_{BMD}^{2}$.
This gives, in order of increasing $\chi_{BMD}^{2}$:
($R_{c},inc$)=(75$R_{\odot}$,$55^{\circ}$) with $\chi_{BMD}^{2}=1660$;
(68$R_{\odot}$,$60^{\circ}$) with $\chi_{BMD}^{2}=1820$; and
(62$R_{\odot}$,$60^{\circ}$) with $\chi_{BMD}^{2}=2010$.
Thus the $\chi_{BMD}^{2}$ from our modeling strongly prefers the lowest inclination.
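The ranking described above amounts to summing the three per-level totals and taking the minimum; using the quoted $\chi_{BMD}^{2}$ values:

```python
# chi^2_BMD totals quoted in the text, keyed by (R_c [R_sun], inclination [deg])
chi2_bmd = {
    (75, 55): 1660,
    (68, 60): 1820,
    (62, 60): 2010,
}
ranked = sorted(chi2_bmd, key=chi2_bmd.get)   # best (lowest chi^2) first
best_rc, best_inc = ranked[0]
```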
In Table 2, we have listed
the companion mass vs. inclination for GX301-2. Thus the wind plus stream model fits
to the RXTE/ASM lightcurve and column density data favor $\sim 55-60^{\circ}$ inclination,
or companion mass $\sim 53-62M_{\odot}$.
Acknowledgments
DAL acknowledges support from the
Natural Sciences and Engineering Research Council.
References
Aurière (1982)
Aurière, M. 1982, A&A,
109, 301
Canizares et al. (1978)
Canizares, C. R.,
Grindlay, J. E., Hiltner, W. A., Liller, W., and
McClintock, J. E. 1978, ApJ, 224, 39
Castor et al (1975)
Castor, J.,
Abbott, D. and Klein, R. 1975, ApJ, 195, 157
Djorgovski and King (1984)
Djorgovski, S.,
and King, I. R. 1984, ApJ, 277, L49
Haberl (1991)
Haberl, F. 1991, ApJ, 376, 245
Hagiwara and Zeppenfeld (1986)
Hagiwara, K., and
Zeppenfeld, D. 1986, Nucl.Phys., 274, 1
Harris and van den Bergh (1984)
Harris, W. E.,
and van den Bergh, S. 1984, AJ, 89, 1816
Hénon (1961)
Hénon, M. 1961, Ann.d’Ap., 24, 369
Kaper et al. (1995)
Kaper, L., Lamers, H., Ruymaekers, E., van den Heuvel, E., &
Zuiderwijk, E. 1995, A&A, 300, 446
King (1966)
King, I. R. 1966, AJ, 71, 276
King (1975)
King, I. R. 1975, Dynamics of
Stellar Systems, A. Hayli, Dordrecht: Reidel, 1975, 99
King (1968)
King, I. R., Hedemann, E.,
Hodge, S. M., and White, R. E. 1968, AJ, 73, 456
Koh et al. (1997)
Koh, D., Bildsten, L., Chakrabarty, D. et al. 1997, ApJ, 479, 933
Kron et al. (1984)
Kron, G. E., Hewitt, A. V.,
and Wasserman, L. H. 1984, PASP, 96, 198
Leahy (2002)
Leahy, D. 2002, A&A, 391, 219
Leahy (1999)
Leahy, D., 1999, JRASC., 93, 33
Leahy and Creighton (1993)
Leahy, D., Creighton, J., 1993, MNRAS., 263, 314
Leahy (1991)
Leahy, D., 1991, MNRAS., 250, 310
Leahy & Matsuoka (1990)
Leahy, D. & Matsuoka, M. 1990, ApJ, 355, 627
Leahy & Matsuoka (1989a)
Leahy, D.A., Matsuoka, M., Kawai, N. & Makino, F. 1989, MNRAS, 236, 603
Leahy & Matsuoka (1989b)
Leahy, D.A., Matsuoka, M., Kawai, N. & Makino, F. 1989, MNRAS, 237, 269
Leahy (1983)
Leahy, D.A., Elsner, R., & Weisskopf, M. 1983, ApJ, 272, 256
Levine et al. (1996)
Levine, A., Bradt, H., Cui, W.,et al. 1996, ApJ, 469, L33
Lynden-Bell and Wood (1968)
Lynden-Bell, D.,
and Wood, R. 1968, MNRAS, 138, 495
Mukherjee and Paul (2004)
Mukherjee, U. and Paul, B.
2004, A&A, 427, 567
Newell and O’Neil (1978)
Newell, E. B.,
and O’Neil, E. J. 1978, ApJS, 37, 27
Ortolani et al. (1985)
Ortolani, S., Rosino, L.,
and Sandage, A. 1985, AJ, 90, 473
Parkes et al. (1980)
Parkes, G., Mason, K., Murdin, P. & Culhane, J. 1980, MNRAS, 191, 547
Peterson (1976)
Peterson, C. J. 1976, AJ, 81, 617
Pravdo et al. (1995)
Pravdo, S., Day, C., Angelini, L., et al. 1995,
ApJ, 454, 872
Saraswat et al. (1996)
Saraswat, P., Yoshida, A., Mihara, T. et al. 1996, ApJ, 463, 726
Sato et al. (1986)
Sato, N., Nagase, F., Kawai, N., et al. 1986,
ApJ, 304, 241
Schaller et al. (1992)
Schaller, G., Schaerer, D. & Maeder, G. 1992, A&AS, 96, 269
Spitzer (1985)
Spitzer, L. 1985, Dynamics of
Star Clusters, J. Goodman and P. Hut, Dordrecht: Reidel, 109
Stevens (1988)
Stevens, I. R. 1988 MNRAS, 232, 199
Watanabe et al. (2003)
Watanabe, S., et al. 2003, ApJ, 597, L37 |
Initial Results of a Silicon Sensor Irradiation Study for ILC Extreme Forward Calorimetry
Talk presented at the International Workshop on Future Linear Colliders (LCWS13), Tokyo, Japan, 11-15 November 2013.
Reyer Band
Vitaliy Fadeyev
R. Clive Field
Spencer Key
Tae Sung Kim
Thomas Markiewicz
Forest Martinez-McKinney
Takashi Maruyama
Khilesh Mistry
Ravi Nidumolu
Bruce A. Schumm
[email protected]
Edwin Spencer
Conor Timlin
Max Wilder
Santa Cruz Institute for Particle Physics and the University Of California, 1156 High Street,
Santa Cruz California 95064 USA
SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park California 94025 USA
Abstract
Detectors proposed for the International Linear Collider (ILC)
incorporate a tungsten sampling calorimeter (‘BeamCal’) intended to
reconstruct showers of electrons, positrons and photons
that emerge from the interaction point of the collider
with angles between 5 and 50 milliradians. For the
innermost radius of this calorimeter, radiation doses
at shower-max are expected to reach 100 MRad per year,
primarily due to minimum-ionizing electrons and positrons
that arise in the induced electromagnetic showers
of e+e- ‘beamstrahlung’ pairs produced in the ILC beam-beam interaction. However,
radiation damage to calorimeter sensors may be dominated
by hadrons induced by nuclear interactions of shower photons,
which are much more likely to contribute to the non-ionizing
energy loss that has been observed to damage sensors exposed to
hadronic radiation. We report here on the results of SLAC
Experiment T-506, for which several different types of
silicon diode sensors were exposed to doses of radiation
induced by showering electrons of energy 3.5-10.6 GeV. By embedding
the sensor under irradiation within a tungsten radiator, the exposure
incorporated hadronic species that would potentially contribute to the
degradation of a sensor mounted in a precision sampling calorimeter.
Depending on sensor technology, efficient charge collection
was observed for doses as large as 220 MRad.
keywords:
radiation damage, electromagnetic showers, silicon diode sensors, sampling calorimetry
1 Introduction
Far-forward calorimetry, covering the region between 5 and 50 milliradians
from the on-energy beam axis,
is envisioned as a component of both the ILD [1] and SiD [2]
detector concepts for the proposed International Linear Collider (ILC). The
BeamCal tungsten sampling calorimeter proposed to cover this angular region
is expected to absorb approximately 10 TeV of electromagnetic radiation
per beam crossing from e+e- beamstrahlung pairs, leading to expected annual radiation doses of 100 MRad
for the most heavily-irradiated portions of the instrument.
While the deposited energy is expected to arise primarily from minimum-ionizing
electrons and positrons in the induced electromagnetic showers,
radiation damage to calorimeter sensors may be dominated
by hadrons induced by nuclear interactions of shower photons,
which are much more likely to contribute to the non-ionizing
energy loss that has been observed to damage sensors exposed to
hadronic radiation. We report here on the results of SLAC
Experiment T-506, for which several different types of
silicon diode sensors were exposed to doses of up to 220 MRad
at the approximate maxima of electromagnetic
showers induced in a tungsten radiator by electrons of energy
3.5-10.6 GeV, similar to that of electrons and positrons
from ILC beamstrahlung pairs.
Bulk damage leading to the suppression of the electron/hole
charge-collection efficiency is generally thought to be proportional
to the non-ionizing energy loss (‘NIEL’) component of the energy
deposited by the incident radiation.
Early studies of electromagnetically-induced damage to
solar cells [3, 4, 5]
suggested that p-type bulk sensors were more tolerant to
damage from electromagnetic sources, due to an apparent
departure from NIEL scaling, particularly for electromagnetic particles
of lower incident energy.
Several more-recent studies have
explored radiation tolerance to incident fluxes of electrons.
A study assessing the capacitance vs. bias voltage (CV) characteristics of sensors exposed
to as much as 1 GRad of incident 2 MeV electrons [6]
suggested approximately 35 times less damage to n-type magnetic
Czochralski sensors than that expected from NIEL scaling.
A study of various n-type sensor types exposed to 900 MeV electrons
showed charge-collection loss of as little as 3% for exposures up to
50 MRad [7]; for exposures of 150 MRad, a suppression of damage
relative to NIEL expectations of up to a factor of four was observed [8].
These discrepancies have been attributed to the different types of
defects created by lattice interactions: electrons tend to create point-like defects that are more
benign than the clusters formed due to hadronic interactions.
Finally, in studies of sensors exposed to large doses of hadron-induced
radiation, p-type bulk silicon was found to be more radiation-tolerant
than n-type bulk silicon, an observation that has been attributed to the absence of type inversion and the
collection of an electron-based signal [9, 10].
However, n-type bulk devices have certain advantages, such as a natural inter-electrode
isolation with commonly used passivation materials such as silicon oxide and silicon nitride.
Here, we report on an exploration of the radiation tolerance of silicon sensors,
assessed via direct measurements of the median collected charge deposited
by minimum-ionizing particles,
for four different bulk compositions: p-type and n-type doping of
both magnetic Czochralski and float-zone crystals.
The p-type float-zone sensors were produced by Hamamatsu Photonics while the
remaining types were produced by Micron Corporation.
Sensor strip pitch varied between
50 and 100 $\mu$m, while the bulk thickness
varied between 307 $\mu$m (for the p-type magnetic Czochralski sensors)
and 320 $\mu$m (for the p-type float zone sensors).
The use of these sensors is being explored as an alternative to several
more novel sensor technologies that are currently under
development [1], including GaAs and CVD diamond.
While the radiation dose was initiated by electromagnetic processes
(electrons showering in tungsten), the placement of the sensors near
shower max ensures that the shower incorporates an appropriate
component of hadronic irradiation arising from neutron spallation,
photoproduction, and the excitation of the $\Delta$ resonance.
Particularly for the case that NIEL scaling suppresses
electromagnetically-induced radiation damage, the small
hadronic component of the electromagnetic
shower might dominate the rate of damage to the sensor.
However, the size and effect of this component is difficult to
estimate reliably, and so we choose to study radiation damage in
a configuration that naturally incorporates all components present in
an electromagnetic shower.
2 Experimental Setup
Un-irradiated sensors were subjected to current vs. bias voltage (IV) and CV tests,
the results of which allowed
a subset of them to be selected for irradiation based on their
breakdown voltage (typically above 1000 V for selected sensors) and low level of leakage
current. The sensors were placed on carrier printed-circuit ‘daughter boards’ and wire-bonded to a
readout connector. The material of the daughter boards was milled away in the
region to be irradiated in order to facilitate the
charge collection measurement (described below) and minimize radio-activation.
The median collected charge was measured with the Santa Cruz Institute for Particle Physics (SCIPP)
charge-collection (CC) apparatus (also described below) before irradiation.
The sensors remained mounted to their individual daughter boards throughout irradiation and the followup
tests, simplifying their handling and reducing uncontrolled annealing.
Additionally, this allowed a reverse-bias voltage to be maintained across the sensor during irradiation.
The voltage was kept small (at the level of a few volts) to avoid possible damage of the devices
from a large instantaneous charge during the spill.
Sensors were irradiated with beam provided by the End Station
Test Beam (ESTB) facility at the SLAC National Accelerator Laboratory.
Parameters of the beam provided by the ESTB facility are shown in
Table 1. The beam was incident upon a series of
tungsten radiators, as enumerated in Table 2.
An initial 7 mm-thick tungsten plate
served to initiate the electromagnetic shower.
The small number of
radiation lengths of this initial radiator (2.0) permitted the
development of a small amount of divergence of the shower
relative to the straight-ahead beam direction without
significant development of the largely isotropic hadronic
component of the shower.
This plate was followed by an
open length of approximately 55 cm, which allowed a degree of
spreading of the shower before it impinged upon a second,
significantly thicker radiator (4.0 radiation lengths)
which was followed immediately by the sensor undergoing
irradiation. This was closely followed, in turn, by an
8.0 radiation-length radiator. Immediately surrounding the
sensor by tungsten radiators that both
initiated and absorbed the great majority of the electromagnetic
shower ensured that the sensor would be illuminated by a
flux of hadrons commensurate with that experienced by a calorimeter
sensor close to the maximum of a tungsten-induced shower.
Although initiating the shower significantly upstream of the sensor
promoted a more even illumination of the sensor
than would otherwise have been achieved, the half-width
of the resulting electron-positron fluence distribution
at the sensor plane was less than 0.5 cm. On the other hand,
the aperture of the CC apparatus (to be
described below) was of order 0.7 cm. Thus, in order to
ensure that the radiation dose was well understood over
the region of exposure to the CC apparatus source,
it was necessary to achieve a uniform illumination over
a region of approximately 1 cm${}^{2}$. This was done by
‘rastering’ the detector across the beam spot through
a range of 1 cm in the directions both along
and transverse to the direction of the sensor’s strips,
generating a region of approximately 1 cm${}^{2}$ over which
the illumination was uniform to within $\pm 5$%.
3 Dose Rates
During the 120 Hz operation of the SLAC Linac Coherent Light Source (LCLS),
5-10 Hz of beam was deflected by a pulsed kicker magnet into the End Station transfer line.
The LCLS beam was very stable with respect to both current and energy. Electronic
pickups and ion chambers measured the beam current and beam loss through the
transfer line aperture, ensuring that good transfer efficiency could be established
and maintained. The transfer efficiency was estimated to be ($95\pm 5$)%, although
for the highest energy beams delivered in the final days of T-506, the transfer line
experienced small but persistent beam loss; for this period, the transfer
efficiency was measured to be ($90\pm 10$)%. These transfer factors and their
uncertainties were taken into account in the estimation of dose rates through
the exposed sensors.
To calculate the dose rate through the sensor, it is necessary to determine
the ‘shower conversion factor’ $\alpha$ that provides the mean fluence of minimum-ionizing
particles (predominantly electrons and positrons), in particles per cm${}^{2}$,
per incoming beam electron. This factor is dependent upon the radiator
configuration and incident beam energy, as well as the rastering pattern
used to provide an even fluence across the sensor (as stated above,
the detector was translated continuously across the beam centerline
in a 1 cm${}^{2}$ square pattern).
To estimate $\alpha$, the Electron-Gamma-Shower (EGS) Monte Carlo program [11]
was used to simulate showers through the radiator configuration
and into the sensor. The configuration of Table 2
was input to the EGS program, and a mean fluence profile (particles per
cm${}^{2}$ through the sensor as a function of transverse distance from the nominal
beam trajectory) was accumulated by simulating the showers of 1000
incident electrons of a given energy. To simulate the rastering process,
the center of the simulated profile was then
moved across the face of the sensor in 0.5 mm steps, and an estimated mean fluence
per incident electron as a function of position on the sensor (again, relative to the nominal beam
trajectory) was calculated. This resulted in a mean fluence per incident electron
that was uniform to within a few percent at points 1 mm or more inside the edge of the rastering
region.
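The raster-averaging step can be illustrated in one dimension with an assumed narrow Gaussian fluence profile (the actual EGS profile is not reproduced here): stepping the profile center in 0.5 mm increments across a 1 cm raster produces a plateau that is flat well inside the raster edges.

```python
import numpy as np

SIGMA = 0.05                                 # cm; illustrative profile width
x = np.linspace(-1.5, 1.5, 601)              # cm; positions on the sensor plane
centers = np.arange(-0.5, 0.5001, 0.05)      # raster centers, 0.5 mm steps

def profile(c):
    """Assumed Gaussian fluence profile centered at raster position c."""
    return np.exp(-0.5 * ((x - c) / SIGMA) ** 2)

# Mean fluence per raster position, summed over the raster pattern
mean_fluence = sum(profile(c) for c in centers) / len(centers)

# Check uniformity at points 1 mm or more inside the raster edge
inner = mean_fluence[np.abs(x) <= 0.4]
nonuniformity = (inner.max() - inner.min()) / inner.max()
```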
The value of $\alpha$ used for subsequent irradiation dose estimates was taken to be
the value found at the intersection of the nominal
beam trajectory with the sensor plane. The simulation was repeated for
various values of the incident electron energy, producing the values of
$\alpha$ shown in Table 3.
To convert this number to Rads per nC of delivered charge, a mean
energy loss in silicon of 3.7 MeV/cm was assumed, leading to
a fluence-to-Rad conversion factor of 160 Rad per nC/cm${}^{2}$.
It should be noted that, while this dose rate considers only
the contribution from electrons and positrons, these
two sources dominate the overall energy absorbed by the
sensor. In addition, the BeamCal dose-rate spec of 100 MRad
per year considered only the contribution from electrons
and positrons.
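The quoted conversion factor can be reproduced from the stated 3.7 MeV/cm mean energy loss, taking the density of silicon (2.33 g/cm³, an assumed standard value) and 1 Rad = 10⁻⁵ J/g:

```python
E_CHARGE_C = 1.602e-19        # electron charge [C]
MEV_TO_J = 1.602e-13          # [J per MeV]
DEDX_MEV_CM = 3.7             # quoted mean energy loss in silicon [MeV/cm]
RHO_SI = 2.33                 # density of silicon [g/cm^3]
RAD_J_PER_G = 1e-5            # 1 Rad = 1e-5 J/g

particles_per_nC = 1e-9 / E_CHARGE_C                   # ~6.24e9 MIPs per nC
mev_per_gram = particles_per_nC * DEDX_MEV_CM / RHO_SI # energy deposit per g
rad_per_nC_cm2 = mev_per_gram * MEV_TO_J / RAD_J_PER_G
# rad_per_nC_cm2 comes out near 159, consistent with the quoted 160
```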
In order to accurately estimate the dose rates, it was also necessary to ensure that
the nominal beam trajectory passed through a well-known
and reproducible position on the sensors. A jig attached
to the downstream side of Radiator 3 (see Table 2)
positioned the daughter board carrying the sensor at
a fixed position relative to the radiator configuration.
Each sensor was mounted onto its own daughter board at
a location reproducible to sub-millimeter accuracy. The
desired location of the nominal beam trajectory in the
middle of the 1 cm${}^{2}$ rastering pattern was then transferred
to the upstream face of Radiator 2, which was rigidly attached
to Radiator 3, using a mechanical metrology procedure. A Delrin
pin was attached to the upstream face of Radiator 2 at a
known displacement from the desired beam location, which
was then used to spindle a reticled phosphorescent screen.
The sensor/radiator assembly was then moved to the
center of the rastering pattern, and with Radiator 1
removed, the beam was
steered until it hit the intended place on the
reticled screen. With the beam trajectory thus
established, Radiator 1 was replaced and two upstream
phosphorescent screens were placed in the beamline.
The position of the beam on these screens was
recorded, establishing both the position and
angle of the properly steered beam.
To confirm the adequacy of the dose-calibration simulation
(described above) and this alignment procedure, an
in-situ measurement of the dose was made using a
radiation-sensing field-effect transistor (‘RADFET’) [12]
positioned on a daughter board at the expected
position of the nominal beam trajectory at the
center of the rastering pattern.
Beam was delivered in 150 pC pulses of 4.02 GeV
electrons; a total of 1160 pulses were directed
into the target over a period of four minutes,
during which the sensor was rastered quickly
through its 1 cm${}^{2}$ pattern.
The RADFET was then read out, indicating
a total accumulated dose of 230 kRad,
with an uncertainty of roughly 10%. Making
use of the dose rate calibration of Table 3,
interpolating to the exact incident energy of 4.02 GeV,
and taking into account the ($95\pm 5$)% transfer efficiency
of the ESTB beamline, leads to an expected dose of 250 kRad,
within the $\sim$10% uncertainty of the RADFET measurement.
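The numbers in this cross-check also let one back out the shower conversion factor implied at 4.02 GeV (the actual Table 3 values are not reproduced in this section):

```python
PULSES = 1160
Q_PULSE_NC = 0.150            # 150 pC per pulse, in nC
TRANSFER_EFF = 0.95           # quoted ESTB transfer efficiency
RAD_PER_NC_CM2 = 160.0        # fluence-to-dose conversion from the text
EXPECTED_DOSE_RAD = 250e3     # expected dose quoted above
MEASURED_DOSE_RAD = 230e3     # RADFET reading (~10% uncertainty)

q_on_target_nC = PULSES * Q_PULSE_NC * TRANSFER_EFF    # ~165 nC on target
alpha_implied = EXPECTED_DOSE_RAD / (RAD_PER_NC_CM2 * q_on_target_nC)
# alpha_implied ~ 9.5 MIPs/cm^2 through the sensor per incident beam electron
```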
4 Sensor Irradiation Levels
As mentioned above, four types of sensors were studied:
p-type and n-type doped versions of
both magnetic Czochralski and float-zone crystals.
In what follows, we will use the notation ‘N’ (‘P’)
for n-type (p-type) bulk sensors, and ‘F’ (‘C’)
for float-zone (magnetic Czochralski) crystal technology.
Once a sensor was irradiated with the ESTB, it was placed
in a sub-freezing environment and not irradiated again.
Up to four sensors of each type were irradiated and
chilled until they could be brought back to the University
of California, Santa Cruz campus for the
post-irradiation CC measurement. In
addition, the sub-freezing environment was maintained
both during and after the CC measurement, so
that controlled annealing studies can eventually be done.
Table 4 displays the dose parameters of the
irradiated sensors. The
$(95\pm 5)$% transfer line efficiency has been taken
into account in these estimates. The numeral following
the two letters in the sensor identifier refers to
an arbitrary ordering of sensors assigned during
the sensor selection. Sensors were held at between 0 and 5 C
during irradiation. With the exception of sensor
NC02, which was accidentally annealed for 5 hours at temperatures as high as
130 C, all sensors were transferred to a cold
(below -10 C) environment immediately after irradiation.
All four sensor types were exposed to doses
of approximately 5 and 20 MRad, while an NF sensor
received over 90 MRad and an NC sensor 220 MRad.
CC results for the
irradiated sensors will be presented below.
5 Charge Collection Measurement
The SCIPP CC
apparatus incorporates a ${}^{90}$Sr source that has a secondary $\beta$-decay
with an end-point energy of 2.28 MeV. These $\beta$ particles illuminate
the sensor under study, 64 channels of which are read out by the PMFE ASIC [13],
with a shaping time of 300 nsec. Whenever one of the 64 channels exceeds
a pre-set, adjustable threshold, the time and duration of the excursion
over threshold is recorded. In addition, the $\sim$250 Hz of $\beta$ particles that pass through
the sensor, and subsequently
enter a small (2 mm horizontal by 7 mm vertical) slit, trigger a
scintillator, and the time of excitation of the scintillator is also recorded.
If the slit is properly aligned with the read-out channels of
the sensor, and the sensor is efficient at the set read-out threshold,
a temporal coincidence between the scintillator pulse and
one of the read-out channels will be found in the data stream.
Figure 1 shows a sample coincidence profile
(histogram of the number of coincidences vs. channel
number) for a 150-second run at a given threshold and
reverse bias level for one of the irradiated sensors
(specifically, for the NC01 sensor after 5.1 MRad
of irradiation, applying a 300V reverse bias and a 130 mV threshold).
The integral of the distribution yields an estimate
of the total number of coincidences found during
the run, which, when divided by the number of scintillator
firings (after a small correction for cosmic background events)
yields the median CC level at that threshold
and bias level. This measurement can then be performed as
a function of threshold level, yielding the curve shown in
Figure 2. For this plot, the abscissa
has been converted from voltage (the applied threshold
level) to fC (the PMFE input charge that will fire the threshold
with exactly 50% efficiency) via a prior calibration step
involving measurement of the PMFE response to known
values of injected charge. The point at which the curve
in Figure 2 crosses the 50% level
yields the median CC for the given bias level.
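The 50% crossing extraction can be sketched as a linear interpolation on the calibrated efficiency-vs-threshold curve; the data points below are synthetic illustrations, not T-506 measurements.

```python
import numpy as np

# Synthetic efficiency-vs-threshold scan (threshold already calibrated to fC)
thresholds_fc = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
efficiency = np.array([0.99, 0.97, 0.90, 0.70, 0.40, 0.15, 0.04])

def median_charge(thr, eff, level=0.5):
    """Interpolate the threshold at which the falling curve crosses `level`."""
    i = int(np.argmax(eff < level))           # first point below the level
    x0, x1, y0, y1 = thr[i - 1], thr[i], eff[i - 1], eff[i]
    return x0 + (level - y0) * (x1 - x0) / (y1 - y0)

q_median_fc = median_charge(thresholds_fc, efficiency)
```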
In a prior study of sensors irradiated with hadrons, the SCIPP
apparatus gave median charge results consistent with that of
other charge collection systems used to assess radiation damage in
that study [14].
6 Charge Collection Results
The daughter boards
containing the irradiated sensors were designed
with connectors that allowed them to be attached to the
CC apparatus readout board without handling the sensors.
The median CC was measured as a function of reverse bias voltage for each sensor
both before and after irradiation.
The best performance was observed for the NC (n-type bulk
magnetic Czochralski) sensor type. For the exposures of 5.1 (NC01)
and 18.0 MRad (NC10), no difference in charge collection performance was observed
relative to the pre-irradiation studies of the NC01 and NC10 sensors.
In Figure 3 the median CC
both before and after irradiation is plotted for the NC03 (90 MRad dose)
and NC02 (220 MRad dose) sensors; it should be borne in mind,
though, that the NC02 sensor experienced significant annealing before
the post-irradiation measurement was done. It is seen that,
while the depletion voltage increases significantly with dose, median CC
within 20% of un-irradiated values is maintained
for doses above 200 MRad, although it may require annealing
to maintain efficiency at that level.
Figures 4 through 6 show the results
for the remaining three sensor types (PF, PC and NF) for
irradiation levels up to approximately 20 MRad. Charge collection
remains high for the PC and NF sensors
at this dose level, with the PF sensors showing 10-20%
charge collection loss at 19.7 MRad. While this represents a dose of only
about 20% of the expected annual dose for the most heavily-irradiated
sensors in the BeamCal instrument, it is possible that a period
of controlled annealing may restore some or all of the CC loss
for these sensors. An NF sensor (NF07) with a 91 MRad
exposure remains to be evaluated with the SCIPP CC apparatus.
Table 5 provides a table of maximum median collected
charge, both before and after irradiation, and median
charge loss due to irradiation. Not shown are results for the PC10 (damaged
during handling) and NF07 (still under study) sensors.
7 Summary and Conclusions
We have explored the radiation tolerance of four different types of
silicon diode sensors (n-type and p-type float-zone and magnetic Czochralski bulk sensors),
exposing them to doses as high as 220 MRad
at the approximate maxima of tungsten-induced electromagnetic showers.
We have found all types to be radiation tolerant to 20 MRad, with the
n-type Czochralski sensors exhibiting less than a 20% reduction
in median collected charge for a dose in excess of 200 MRad. This
suggests the possibility of charge collection
sufficient for the operation of a calorimeter exposed to
hundreds of MRad, approaching the specification required for
the most heavily irradiated sensors in the ILC BeamCal instrument. We plan to
follow through with IV and CV studies of the irradiated sensors, as well as
annealing studies on selected sensors.
8 Acknowledgments
We are grateful to Leszek Zawiejski, INP, Krakow for supplying us with the tungsten plates
needed to form our radiator. We also would like to express our gratitude
to the SLAC Laboratory, and particularly the End Station Test Beam delivery
and support personnel, who made the run possible and successful.
Finally, we would like to thank our SCIPP colleague Hartmut Sadrozinski for
the numerous helpful discussions and guidance he provided us.
9 Role of the Funding Source
The work described in this article was supported by the United States Department of Energy,
DOE contract DE-AC02-76SF00515 (SLAC) and grant DE-FG02-04ER41286 (UCSC/SCIPP). The funding agency
played no role in the design, execution, interpretation, or
documentation of the work described herein.
References
(1)
ILD Concept Group, International Large Detector DBD, http://www.linearcollider.org/ILC/physics-detectors/Detectors/
Detailed-Baseline-Design,
Chapter 4 (2012).
(2)
SID Collaboration, SiD Detailed Baseline Design, http://www.linearcollider.org/ILC/physics-detectors/Detectors/
Detailed-Baseline-Design,
Chapter 5 (2013).
(3)
J.R. Carter and R.G. Downing,
‘Charged Particle Radiation Damage in Semiconductors: Effect of Low Energy Protons and High Energy Electrons on
Silicon’, Interim Technical Final Report, TRW Space Technology Laboratories, May 1965.
(4)
T. Noguchi and M. Uesugi, ‘Electron Energy Dependence of Relative Damage Coefficients of Silicon Solar Cells for Space Use’,
Technical Digest of the International PVSEC-5, Kyoto, Japan (1990).
(5)
Geoffrey P. Summers et al., ‘Damage Correlations in Semiconductors Exposed to Gamma, Electron, and Proton Radiations’,
IEEE Transactions on Nuclear Science 40, 1372 (1993).
(6)
J.M. Rafi et al., ‘Degradation of High-Resistivity Float Zone and Magnetic Czochralski
n-type Silicon Detectors Subjected to 2-MeV Electron Irradiation’,
NIM A 604, 258 (2009).
(7)
S. Dittongo et al.,
‘Radiation Hardness of Different Silicon Materials after High-Energy Electron Irradiation’,
NIM A 530, 110 (2004).
(8)
S. Dittongo et al.,
‘Studies of Bulk Damage Induced in Different Silicon Materials by 900 Mev Electron Irradiation’,
NIM A 546, 300 (2005).
(9)
G. Casse et al., ‘First Results on Charge Collection Efficiency of Heavily Irradiated
Microstrip Sensors Fabricated on Oxygenated p-type Silicon’, NIM A 518, 340 (2004).
(10)
G. Casse, ‘Radiation Hardness of p-type Silicon Detectors’, NIM A 612, 464 (2010).
(11)
The Electron Gamma Shower (EGS) Monte Carlo Program, http://rcwww.kek.jp/research/egs/.
(12)
The specific device used was the REM Oxford Ltd. corporation’s REM TOT601B device,
http://www.radfet.com/index.html.
(13)
Hartmut F.-W. Sadrozinski et al.,
‘The Particle Tracking Silicon Microscope PTSM’,
IEEE Transactions on Nuclear Science 51, 2032 (2004).
(14)
K. Hara et al., ‘Testing of bulk radiation damage of n-in-p silicon sensors for very high
radiation environments’, NIM A 636, S83 (2011). |
Optimization of the Measurement-Device-Independent Scarani-Acín-Ribordy-Gisin Protocol
C. Tannous111Tel.: (33) 2.98.01.62.28, E-mail: [email protected] and J. Langlois
Laboratoire des Sciences et Techniques de l’Information, de la Communication et
de la Connaissance, UMR-6285 CNRS, Brest Cedex 3, France
Abstract
Measurement-device-independent (MDI) quantum key distribution (QKD)
is a practically implementable method for transmitting secret keys
between partners performing quantum communication.
SARG04 (Scarani-Acín-Ribordy-Gisin 2004) is a protocol
tailored to resist photon number splitting (PNS) attacks by
eavesdroppers; its MDI-QKD version is reviewed and optimized from the
point of view of secret key bitrate versus communication distance.
We consider the effect of several important factors, such as the error correction function,
the dark count parameter, and the quantum efficiency, in order to achieve the largest
key bitrate over the longest communication distance.
Quantum cryptography, Quantum Information, Quantum Communication
pacs: 03.67.Dd, 03.67.Ac, 03.67.Hk
Version November 19, 2020
While classical cryptography uses two types of keys to
encode and decode messages (secret or symmetric, and public or asymmetric keys),
quantum cryptography uses QKD for transmitting secret keys
between partners, allowing them to encrypt and decrypt their messages.
The principal characteristic of QKD is that it is practically implementable and has already
been deployed commercially by several quantum communication providers, such as SeQureNet in France,
ID Quantique in Switzerland, MagiQ Technologies in the USA, and QuintessenceLabs in Australia.
The second main feature of QKD is that it allows the communicating parties to
detect online eavesdroppers in a straightforward fashion.
In principle, QKD is unconditionally secure; nevertheless, its practical implementation
has many loopholes and consequently has been attacked in many different ways
exploiting one intermediate operation or another during secret key processing, such as the
time-shift attack4 ; attack5 , phase-remapping attack6 ,
detector blinding attack7 ; attack8 , detector dead-time attack9 ,
device calibration attack10 , and laser damage attack11 .
This work is about optimization of SARG04 Scarani MDI-QKD version protocol designed
to fend off photon number splitting (PNS) attacks by considering important factors such as
error correction function types, detector dark counting parameter and quantum efficiency.
It is organized as follows: after reviewing the original four-state SARG04 protocol,
we discuss its MDI version and describe the effects of various parameters
on communication distance and secret key bitrate.
The SARG04 protocol was developed to combat PNS attacks, which are targeted
toward intercepting photons present in the weak coherent pulses (WCP) used for
communication. This stems from the fact that it is presently not possible to exploit
single photons in a pulse commercially. However, progress in developing large-scale methods for
using single photons in a pulse is advancing steadily.
Since SARG04 is very similar to BB84 Scarani , the simplest example
of secret key sharing between a sender and a receiver (Alice and Bob), we first review the BB84 case below.
In the BB84 protocol framework, Alice and Bob use two channels to communicate: one quantum and private to send
polarized single photons and another one classical and public (telephone or Internet)
to send ordinary messages Tannous .
Alice selects two bases in the 2D Hilbert space, each consisting of two
orthogonal states: the $\bigoplus$ basis with $(0,\pi/2)$
linearly polarized photons,
and the $\bigotimes$ basis with $(\pi/4,-\pi/4)$ linearly polarized photons.
Four photon polarization states: $\left|{\rightarrow}\right\rangle,\left|{\uparrow}\right\rangle,\left|{\nearrow%
}\right\rangle,\left|{\searrow}\right\rangle$
are used to transmit quantum data with
$\left|{\nearrow}\right\rangle=\frac{1}{\sqrt{2}}(\left|{\rightarrow}\right%
\rangle+\left|{\uparrow}\right\rangle)$
and $\left|{\searrow}\right\rangle=\frac{1}{\sqrt{2}}(\left|{\rightarrow}\right%
\rangle-\left|{\uparrow}\right\rangle)$.
A message transmitted by Alice to Bob over the quantum channel is a stream of
symbols selected randomly among the four above, and Alice and Bob each choose randomly
one of the two bases $\bigoplus$ or $\bigotimes$
to perform the photon polarization measurement.
Alice and Bob then announce their respective
choices of bases over the public channel without revealing the measurement results.
The raw key is obtained by a process called “sifting”, which consists of retaining only the
results obtained when the bases used for measurement are the same.
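The sifting step just described can be illustrated with a toy simulation (our own sketch; function and variable names are illustrative, and the channel is assumed ideal and noiseless):

```python
import random

def bb84_sift(n_symbols, seed=0):
    """Toy BB84 run: Alice sends random bits in random bases; Bob measures
    in random bases; sifting keeps only positions where the bases match."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_symbols)]
    alice_bases = [rng.choice("+x") for _ in range(n_symbols)]  # '+' = (0, pi/2); 'x' = (pi/4, -pi/4)
    bob_bases   = [rng.choice("+x") for _ in range(n_symbols)]
    # Ideal channel: measuring in the same basis reproduces Alice's bit;
    # in the wrong basis the outcome is random but is discarded anyway.
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    keep = [i for i in range(n_symbols) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

key_alice, key_bob = bb84_sift(1000)
assert key_alice == key_bob  # raw keys agree on the sifted positions
```

Since the two random basis choices coincide with probability 1/2, roughly half of the transmitted symbols survive sifting.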
After key sifting, another process called key distillation Scarani must be performed.
This process entails three steps Scarani : error correction, privacy amplification, and
authentication, in order to counter any information leakage from
photon interception, to detect eavesdropping (via the no-cloning theorem Scarani ), and
to guard against exploitation of the announcements over the public channel.
The basic four-state SARG04 protocol is similar to BB84 but adds a number of steps to
improve it and protect it against PNS attacks. The steps entail introducing
random rotation and filtering of the quantum states. Before we describe it, we introduce
some states and operators Yin using Pauli matrices $\sigma_{X},\sigma_{Y},\sigma_{Z}$:
•
$R=\cos(\frac{\pi}{4})I-i\sin(\frac{\pi}{4})\sigma_{Y}$ is a $\pi/2$ rotation operator about the $Y$ axis,
•
$T_{0}=I$ is the (2$\times$2) identity operator,
•
$T_{1}=\cos(\frac{\pi}{4})I-i\sin(\frac{\pi}{4})\frac{(\sigma_{Z}+\sigma_{X})}{\sqrt{2}}$ is a $\pi/2$ rotation operator around the $(Z+X)$ axis,
•
$T_{2}=\cos(\frac{\pi}{4})I-i\sin(\frac{\pi}{4})\frac{(\sigma_{Z}-\sigma_{X})}{\sqrt{2}}$ is a $\pi/2$ rotation operator around the $(Z-X)$ axis.
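The operators listed above can be checked numerically. The following sketch (our own, using NumPy) verifies that each is unitary and that, being spin-$\pi/2$ rotations, applying any of $R$, $T_{1}$, $T_{2}$ four times yields $-I$ (a full $2\pi$ spinor rotation):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
R  = c * I2 - 1j * s * sy                        # pi/2 rotation about Y
T0 = I2                                          # identity
T1 = c * I2 - 1j * s * (sz + sx) / np.sqrt(2)    # pi/2 rotation about (Z+X)
T2 = c * I2 - 1j * s * (sz - sx) / np.sqrt(2)    # pi/2 rotation about (Z-X)

for U in (R, T0, T1, T2):
    assert np.allclose(U.conj().T @ U, I2)       # all four are unitary
for U in (R, T1, T2):
    # four pi/2 spinor rotations = 2*pi rotation = -1 on spin-1/2
    assert np.allclose(np.linalg.matrix_power(U, 4), -I2)
```

The $-I$ after four applications reflects the double-valuedness of spin-1/2 rotations; it is a global phase and does not affect measurement statistics.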
Alice prepares many pairs of qubits and sends each one of them to Bob after performing a random
rotation over different axes with $T_{l}R^{k}$ where $l\in\{0,1,2\}$ and $k\in\{0,1,2,3\}$.
Upon receiving the qubits, Bob first applies:
•
A random reverse multi-axis rotation $R^{-k^{\prime}}T_{l^{\prime}}^{-1}$,
•
Afterwards, he performs a local filtering operation defined by
$F=\sin(\frac{\pi}{8})\left|{0_{x}}\right\rangle\left\langle{0_{x}}\right|+\cos(\frac{\pi}{8})\left|{1_{x}}\right\rangle\left\langle{1_{x}}\right|$ where
$\{\left|{0_{x}}\right\rangle,\left|{1_{x}}\right\rangle\}$ are the $X$-eigenstate qubits,
i.e., the eigenvectors of $\sigma_{X}$ with eigenvalues $+1$ and $-1$, respectively.
Local filtering enhances the degree of entanglement, and the $\pi/8$ angle helps retrieve Tamaki
one of the maximally entangled EPR Bell states EPR ; Kwiat , i.e., polarization-entangled
photon pair states given by:
$\left|{\psi^{\pm}}\right\rangle=\frac{1}{\sqrt{2}}(\left|{\rightarrow\uparrow}\right\rangle\pm\left|{\uparrow\rightarrow}\right\rangle),\quad\left|{\phi^{\pm}}\right\rangle=\frac{1}{\sqrt{2}}(\left|{\rightarrow\rightarrow}\right\rangle\pm\left|{\uparrow\uparrow}\right\rangle)$.
These states form a complete orthonormal basis of the 4D Hilbert space of all polarization states
of a two-photon system, and the advantage of local filtering is to make Alice and Bob
share pairs in a Bell state, rendering the shared bits unconditionally secure Tamaki .
•
Afterwards, Alice and Bob compare their indices $\{k,l\}$ and $\{k^{\prime},l^{\prime}\}$ via public communication, and keep the qubit pairs with $k=k^{\prime}$ and $l=l^{\prime}$ for which Bob’s filtering operation is successful.
•
They choose some states randomly as test bits, measure them in the $Z$ basis, and compare their results publicly to estimate the bit error rate and the information acquired by the eavesdropper.
•
Finally, they utilize the corresponding Calderbank-Shor-Steane (CSS) code CSS to correct bit and phase errors and perform a final measurement in the $Z$ basis on their qubits to obtain the secret key.
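The local filtering operation used in the steps above can likewise be checked numerically (our own sketch): $F$ is Hermitian but non-unitary, i.e., a probabilistic filter with eigenvalues $\sin(\pi/8)$ and $\cos(\pi/8)$, succeeding on a state $|\psi\rangle$ with probability $\langle\psi|F^{\dagger}F|\psi\rangle\le 1$:

```python
import numpy as np

# X-basis states written in the computational (Z) basis
zero_x = np.array([1, 1]) / np.sqrt(2)    # |0_x>, sigma_X eigenvalue +1
one_x  = np.array([1, -1]) / np.sqrt(2)   # |1_x>, sigma_X eigenvalue -1

s8, c8 = np.sin(np.pi / 8), np.cos(np.pi / 8)
F = s8 * np.outer(zero_x, zero_x) + c8 * np.outer(one_x, one_x)

assert np.allclose(F, F.conj().T)                  # Hermitian
assert not np.allclose(F.conj().T @ F, np.eye(2))  # NOT unitary: a filter
evals = np.sort(np.linalg.eigvalsh(F))
assert np.allclose(evals, [s8, c8])                # eigenvalues sin(pi/8), cos(pi/8)
```

Because the smaller eigenvalue suppresses the $|0_{x}\rangle$ component relative to $|1_{x}\rangle$, the (normalized) post-filter state is rebalanced, which is how the filtering boosts the entanglement shared by Alice and Bob.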
Following Lo et al. Lo2012 , Mizutani et al. Mizutani modified the original
SARG04 protocol by including an intermediate experimental setup run by Charlie,
located at mid-distance between Alice and Bob,
that performs Bell correlation measurements. The setup contains a half beam-splitter and
two polarization beam-splitters, simulating photonic Hadamard and CNOT gates in order to
produce Bell states, as well as photodiode detectors. This additional step helps
discard imperfectly anti-correlated photons and thus reduces transmission error rates.
In addition, Alice and Bob not only choose the photon polarization randomly, but also
use WCP amplitude modulation to generate decoy states in order to confuse the eavesdropper.
The protocol runs as follows:
•
Charlie performs Bell measurement on the incoming photon pulses
and announces to Alice and Bob over the public channel
whether his measurement outcome is successful or not.
When the outcome is successful, he announces the successful event
as being of Type1 or Type2. Type1 denotes coincidence detection events
of $AT$ and $BR$, or of $BT$ and $AR$; Type2 denotes coincidence detection events
of $AT$ and $AR$, or of $BT$ and $BR$. Here $AT,BT$ stand for detection of transmitted $(T)$ photon events
from Alice $(A)$ or Bob $(B)$, linearly polarized at 45${}^{\circ}$, whereas $AR,BR$
stand for detection of reflected $(R)$ photon events polarized at -45${}^{\circ}$.
•
Alice and Bob broadcast $k$ and $k^{\prime}$ over the public channel.
If the measurement outcome is successful with Type1 and $k=k^{\prime}=0,\ldots,3$,
they keep their initial bit values, and Alice flips her bit.
If the measurement outcome is successful with Type2
and $k=k^{\prime}=0,2$, they keep their initial bit values.
In all the other cases, they discard their bit values.
•
After repeating the above operations several times,
Alice and Bob perform error correction, privacy amplification
and authentication as described previously.
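The keep/flip/discard rules above can be summarized as a small decision function (a sketch of our own; the encoding of Charlie’s announcement and of the rotation indices is illustrative):

```python
def sift(event_type, k, k_prime):
    """Return the action Alice and Bob take after Charlie's announcement.

    event_type: 1 or 2 (Charlie's successful Bell-measurement event type)
    k, k_prime: Alice's and Bob's rotation indices, each in {0, 1, 2, 3}
    """
    if k != k_prime:
        return "discard"
    if event_type == 1:                   # any k = k' in 0..3: keep, Alice flips
        return "keep, Alice flips her bit"
    if event_type == 2 and k in (0, 2):   # Type2 kept only for k = k' = 0 or 2
        return "keep"
    return "discard"                      # all other cases

assert sift(1, 3, 3) == "keep, Alice flips her bit"
assert sift(2, 2, 2) == "keep"
assert sift(2, 1, 1) == "discard"
assert sift(1, 0, 2) == "discard"
```

Repeating this decision over many rounds yields the raw key that then enters error correction, privacy amplification, and authentication.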
In the ideal case (no transmission errors, no eavesdropping)
Alice and Bob should discard results pertaining to
measurements done in different bases (or when Bob failed to detect
any photon).
In QKD, Alice and Bob should be able to determine their shared secret key efficiently
as a function of the distance $L$ separating them. Since the secure key is determined after
sifting and distillation, the secure key rate is expressed in bps (bits per signal), given
that Alice sends symbols to Bob to sift and distill, with the remaining bits making up the secret key.
For a Type $i$ event, we define $Q_{i}^{(m,n)}$ as the joint probability
that Alice and Bob emit $m$ and $n$ photons, respectively,
and Charlie announces a successful outcome, and $e_{i,p}^{(m,n)}$
as the corresponding phase error probability. Consequently, the asymptotic key rate for Type $i$ is given as a sum
over partial privacy amplification terms of the form $Q_{i}^{(m,n)}[1-h_{2}(e^{(m,n)}_{i,{p}})]$
and one error correction term $Q_{i}^{tot}f(e_{i}^{tot})h_{2}(e_{i}^{tot})$ related to the total
errors as Mizutani ; GLLP :
$$K_{i}(L)=Q_{i}^{(1,1)}[1-h_{2}(e^{(1,1)}_{i,p})]+Q_{i}^{(1,2)}[1-h_{2}(e^{(1,2)}_{i,p})]+Q_{i}^{(2,1)}[1-h_{2}(e^{(2,1)}_{i,p})]-Q_{i}^{tot}f(e_{i}^{tot})h_{2}(e_{i}^{tot}).$$
(1)
The total probabilities $Q_{i}^{tot}=\sum_{m,n}Q_{i}^{(m,n)}$
and total error rates are given by $e_{i}^{tot}=\sum_{m,n}Q_{i}^{(m,n)}e^{(m,n)}_{i,{b}}/Q_{i}^{tot}$
where $e^{(m,n)}_{i,{b}}$ is the Type $i$ bit error probability and
$h_{2}$ is the binary Shannon entropy Carlson given by $h_{2}(x)=-x\log_{2}(x)-(1-x)\log_{2}(1-x)$.
Moreover, the above asymptotic key rate is obtained in the limit of infinite number of
decoy states Mizutani .
Phase error probabilities are determined from bit error probabilities as depicted in fig. 1
for Types 1 and 2, depending on the numbers $(m,n)$ of photons emitted.
Since Charlie is located midway between Alice and Bob,
the channel transmittance from Alice to Charlie is the same as that from Bob to Charlie.
With $L$ the distance between Alice and Bob,
the channel transmittance $\eta_{T}$ over each half-link is obtained by replacing $L$ by $L/2$, resulting in:
$\eta_{T}=10^{-\alpha{L/20}}$.
For the standard Telecom wavelength Carlson
$\lambda=1.55\mu$m, the loss coefficient with distance is $\alpha$=0.21 dB/km.
The quantum efficiency and the dark count rate of the detectors are taken as
$\eta=0.045$ and $d=8.5\times 10^{-7}$, respectively as in the GYS GYS case.
We compare below the effect of a variable error correction function with respect to a
function fixed at a constant value.
The error correction function is given by Enzer et al. Enzer as:
$f_{e}(x)=1.1581+57.200x^{3}$
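The quantities entering Eq. (1) — the binary entropy $h_{2}$, the half-link transmittance $\eta_{T}$ with $\alpha=0.21$ dB/km, and the Enzer error correction function $f_{e}$ — can be assembled as in the following sketch (our own; the $Q$ and error inputs in the usage example are illustrative placeholders, not values from this work):

```python
import math

def h2(x):
    """Binary Shannon entropy; h2(0) = h2(1) = 0 by convention."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def eta_T(L, alpha=0.21):
    """Alice->Charlie transmittance for total distance L km (Charlie midway)."""
    return 10 ** (-alpha * L / 20)

def f_e(x):
    """Enzer et al. error correction function."""
    return 1.1581 + 57.200 * x**3

def key_rate(Q, e_p, e_b, f=f_e):
    """Asymptotic Type-i rate as in Eq. (1): privacy amplification gains over
    the (1,1), (1,2), (2,1) photon-number terms minus the error correction cost."""
    Q_tot = sum(Q.values())
    e_tot = sum(Q[mn] * e_b[mn] for mn in Q) / Q_tot
    gain = sum(Q[mn] * (1 - h2(e_p[mn])) for mn in [(1, 1), (1, 2), (2, 1)])
    return gain - Q_tot * f(e_tot) * h2(e_tot)

assert h2(0.5) == 1.0
assert eta_T(0) == 1.0
# Error-free toy input: the rate reduces to the sum of the kept gains.
Q = {(1, 1): 1e-4, (1, 2): 5e-5, (2, 1): 5e-5}
zero = {mn: 0.0 for mn in Q}
assert math.isclose(key_rate(Q, zero, zero), 2e-4)
```

With realistic (nonzero) error probabilities, the $h_{2}$ terms eat into both the gains and the correction cost, which is what drives the rate-versus-distance curves shown in the figures.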
In figs. 2 and 3, secret key rates for Type 1 and Type 2 events
are displayed versus distance when the $f_{e}$ function is
considered as variable or fixed at a value of 1.33.
Improving the quality of detection means that dark counting must be substantially reduced
in order to avoid false “clicks” (irrelevant event detections) of the detectors.
In figs. 4 and 5, secret key rates for Type 1 and Type 2 events
are displayed versus distance for different values of the dark count rate, with the error correction
function $f_{e}$ freely varying.
Quantum yield is a parameter that plays an important role in quantum
communications.
In figs. 6 and 7, secret key rates for Type 1 and Type 2 events
are displayed versus distance for different values of the quantum yield $\eta$, with the error correction
function $f_{e}$ freely varying. The value of $\eta$ has been intentionally exaggerated in order to
explore the range of communication distances covered by its variation. It is interesting to note that
the quantum yield acts on communication distance and key bitrate simultaneously, whereas changes in the dark count
rate and the error correction function affect solely the communication distance.
The communication distances and secret key bitrates obtained in this work can be improved
by varying the error correction function, the dark count rate, and the quantum efficiency.
The insight into the SARG04 protocol acquired through optimization leads us to conclude that the most effective way to increase the communication distance substantially is to decrease the dark count rate.
The least sensitive parameter is the error correction function type, and despite exaggerating
the values of the quantum efficiency in order to probe the largest possible range of communication
distances, the dark count rate remains the most promising parameter. Consequently, future research efforts ought to be directed towards reducing it considerably. This improvement relies on developing special algorithms
that discriminate between the different events occurring around the photodetectors, or on
developing materials with selective, specially engineered higher thresholds preventing false
“clicks” triggered by irrelevant events.
References
(1)
B. Qi, C.-H. F. Fung, H.-K. Lo and X. Ma, Quantum Inf. Comput.7, 73 (2007).
(2)
Y. Zhao, C.-H. F. Fung, B. Qi, C. Chen and H.-K Lo, Phys. Rev. A 78, 042333 (2008).
(3)
F. Xu, B. Qi and H.-K. Lo, New J. of Phys. 12, 113026 (2010).
(4)
L. Lydersen, C. Wiechers, C. Wittmann, D. Elser, J. Skaar
and V. Makarov, Hacking commercial quantum cryptography systems by tailored bright illumination. Nature Photon. 4, 686 (2010).
(5)
I. Gerhardt, Q. Liu, A. Lamas-Linares, J. Skaar and C. Kurtsiefer, Full-field implementation of a perfect eavesdropper on a quantum cryptography system. Nature Commun. 2, 349 (2011).
(6)
H. Weier, H. Krauss, M. Rau, M. Fürst, S. Nauerth and H. Weinfurter, New J. Phys. 13, 073024 (2011).
(7)
N. Jain, C. Wittmann, L. Lydersen, C. Wiechers, D. Elser, C. Marquardt, V. Makarov, and G. Leuchs, Device calibration impacts security of quantum key distribution. Phys. Rev. Lett. 107, 110501 (2011).
(8)
A. N. Bugge, S. Sauge, A. M. M. Ghazali, J. Skaar, L. Lydersen and V. Makarov, Laser damage helps the eavesdropper in quantum cryptography Phys. Rev. Lett. 112, 070503 (2014).
(9)
V. Scarani, A. Acín, G. Ribordy and N. Gisin, Phys. Rev. Lett. 92,
057901 (2004); see also: V. Scarani, H. Bechmann-Pasquinucci, N. J. Cerf, M. Dušek,
N. Lütkenhaus and M. Peev, ‘The security of practical quantum key distribution’,
Rev. Mod. Phys. 81, 1301 (2009).
(10)
C Tannous and J Langlois, Eur. J. Phys. 37 013001 (2016).
(11)
H-L Yin, Y Fu, Y-Q Mao and Z-B Chen, Sci. Rep. 6, 29482; doi: 10.1038/srep29482 (2016).
(12)
K. Tamaki and N. Lütkenhaus, Phys. Rev. A 68, 032316 (2004).
(13)
A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47, 777 (1935).
(14)
P. G. Kwiat, K. Mattle, H. Weinfurter and A. Zeilinger,
Phys. Rev. Lett. 75, 4337 (1995).
(15)
A. R. Calderbank and P. W. Shor, Phys. Rev. A 54, 1098 (1996).
(16)
H-K Lo, M Curty and B Qi, Phys. Rev. Lett. 108, 130503 (2012).
(17)
A Mizutani, K Tamaki, R Ikuta, T Yamamoto and N Imoto,
Sci. Rep. 4, 5236, doi: 10.1038/srep05236 (2014).
(18)
H-K Lo, X. Ma and K. Chen, Phys. Rev. Lett. 94, 230504 (2005).
(19)
A. B. Carlson and P. B. Crilly Communication systems:
An Introduction to Signals and Noise in Electrical Communication, 5th Edition, McGraw-Hill, New York (2010).
(20)
C. Gobby, Z. L. Yuan and A. J. Shields, App. Phys. Lett.
84, 3762 (2004).
(21)
D. G Enzer, P. G Hadley, R. J Hughes, C. G Peterson and P. G Kwiat, New Journal of Physics 4, 45 (2002). |
Phase diagram and quantum order by disorder in the Kitaev $K_{1}$-$K_{2}$ honeycomb magnet
Ioannis Rousochatzakis
School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA
Johannes Reuther
Dahlem Center for Complex Quantum Systems and Fachbereich Physik, Freie Universität Berlin, 14195 Berlin, Germany
Helmholtz-Zentrum Berlin für Materialien und Energie, 14109 Berlin, Germany
Ronny Thomale
Institute for Theoretical Physics, University of Würzburg, 97074 Würzburg, Germany
Stephan Rachel
Institute for Theoretical Physics, Technische Universität Dresden, 01062 Dresden, Germany
N. B. Perkins
School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA
(January 13, 2021)
Abstract
We show that the non-abelian Kitaev spin liquid on the honeycomb lattice is extremely fragile against the second neighbor Kitaev coupling $K_{2}$, which has been recently shown to be the dominant perturbation away from the nearest neighbor model in iridate Na${}_{2}$IrO${}_{3}$ and ruthenate $\alpha\!-\!{\rm RuCl}_{3}$. This coupling explains naturally the zig-zag ordering in both compounds without introducing unphysically large long-range Heisenberg exchange terms. The minimal $K_{1}$-$K_{2}$ model that we present here hosts a number of unconventional aspects, such as the fundamentally different role of thermal and quantum fluctuations, which can be traced back to the principle that time reversal symmetry can only act globally in a quantum system.
Introduction.
The search for novel quantum states of matter arising from the interplay of strong electronic correlations, spin-orbit coupling (SOC), and crystal field splitting has recently gained strong impetus in the context of $4d$ and $5d$ transition metal oxides Witczak-Krempa et al. (2014). The layered iridates of the A${}_{2}$IrO${}_{3}$ (A=Na,Li) family Singh and Gegenwart (2010); Singh et al. (2012); Liu et al. (2011); Ye et al. (2012); Choi et al. (2012); Hwan Chun et al. (2015) have been at the center of this search because of the prediction Jackeli and Khaliullin (2009); Chaloupka et al. (2010) that the dominant interactions in these magnets constitute the celebrated Kitaev model on the honeycomb lattice, one of the few exactly solvable models hosting gapped and gapless, non-abelian spin liquids Kitaev (2006). This aspect together with the realization that the Kitaev spin liquid is stable with respect to moderate Heisenberg-like perturbations Chaloupka et al. (2010); Schaffer et al. (2012) has triggered a lot of experimental activity on A${}_{2}$IrO${}_{3}$ and, more recently, on the similar $\alpha-$RuCl${}_{3}$ compound Plumb et al. (2014); Sears et al. (2015); Kubota et al. (2015).
In the layered A${}_{2}$IrO${}_{3}$ magnets, the single-ion ground state (GS) configuration of Ir${}^{4+}$ is an effective pseudospin $J_{\rm eff}\!=\!1/2$ doublet, where spin and orbital angular momenta are intertwined due to the strong SOC. In the original Kitaev-Heisenberg (KH) model proposed by Jackeli and Khaliullin Jackeli and Khaliullin (2009), these entities couple via two competing nearest neighbor (NN) interactions: An isotropic antiferromagnetic (AFM) Heisenberg exchange, $J_{1}$, and a highly anisotropic Kitaev interaction, $K_{1}$, which is strong and ferromagnetic, a fact that is also confirmed by ab-initio quantum chemistry calculations by Katukuri et al Katukuri et al. (2014); Nishimoto et al. . Nevertheless, neither Na${}_{2}$IrO${}_{3}$ nor Li${}_{2}$IrO${}_{3}$ are found to be in the spin liquid state at low temperatures. Instead, Na${}_{2}$IrO${}_{3}$ and Li${}_{2}$IrO${}_{3}$ show, respectively, AFM zigzag and incommensurate (IC) long-range magnetic orders, none of which is actually present in the KH model.
The most natural way to obtain these magnetic states is by including further neighbor Heisenberg couplings Rastelli et al. (1979); Fouet, J. B. et al. (2001), which are non-negligible due to extended nature of the $5d$-orbitals of Ir${}^{4+}$ ions Kimchi and You (2011); Choi et al. (2012). In addition, recent calculations by Sizyuk et al Sizyuk et al. (2014) based on the ab-initio density-functional data of Foyevtsova et al Foyevtsova et al. (2013) have shown that, for Na${}_{2}$IrO${}_{3}$, the next nearest neighbor (NNN) couplings must also include an anisotropic, Kitaev-like coupling $K_{2}$ which turns out to be AFM. More importantly, this coupling is the second dominant interaction after $K_{1}$. It has also been argued Reuther et al. (2014) that $K_{2}$ plays an important role in the stabilization of the IC spiral state in Li${}_{2}$IrO${}_{3}$ and might be deduced from the strong-coupling limit of Hubbard model with topological band structure Shitade et al. (2009); Reuther et al. (2012).
Recent structural Plumb et al. (2014) and magnetic Sears et al. (2015) studies have shown that the layered honeycomb magnet $\alpha\!-\!{\rm RuCl}_{3}$ is another example of a strong SOC Mott insulator, where the Ru${}^{3+}$ ions are again described by effective $J_{\rm eff}\!=\!1/2$ doublets. At low $T$, this magnet exhibits zigzag ordering as in Na${}_{2}$IrO${}_{3}$. Furthermore, the superexchange derivations Shankar et al. ; Sizyuk et al. (2015) based on the ab initio tight-binding parameters show that the NNN coupling $K_{2}$ is again appreciable, however a strong off-diagonal symmetric NN exchange is also present. Note that the signs of $K_{1}$ and $K_{2}$ are reversed in $\alpha\!-\!{\rm RuCl}_{3}$: the NN Kitaev interaction is AFM while the NNN Kitaev interaction is FM.
Motivated by these studies, here we consider the minimal extension of the NN Kitaev model that incorporates the effect of $K_{2}$, the $K_{1}$-$K_{2}$ model. We show that an extremely weak $K_{2}$ is enough to stabilize the zig-zag phases relevant for Na${}_{2}$IrO${}_{3}$ and $\alpha\!-\!{\rm RuCl}_{3}$, without introducing large, second and third neighbor Heisenberg exchange $J_{2}$ and $J_{3}$. While $J_{2}$ and $J_{3}$ are present in these compounds, the key point is that the Kitaev spin liquid is significantly more fragile against $K_{2}$ than $J_{2}$ and $J_{3}$. Thus, in conjunction with the above predictions from superexchange derivations, our findings suggest that any adequate minimal model of these compounds should include the NNN coupling $K_{2}$.
A very striking aspect of the zig-zag phases (shared by all magnetic phases) of the $K_{1}$-$K_{2}$ model is that they are only stabilized for quantum spins and not for classical spins, despite having a strong classical character. Indeed, these phases are Ising-like (with spins pointing along one of the three cubic axes), they are protected by a large excitation gap in the interacting $1/S$ spin-wave spectrum, and the spin lengths are extremely close to their classical value of $1/2$. Yet, these phases cannot be stabilized in the classical limit, in stark contrast to the conventional situation where quantum and thermal fluctuations work in parallel and often lead to the same order-by-disorder phenomena. Instead, this rare situation we encounter here stems from the manifestly different symmetry structure of the classical and quantum Hamiltonians, and the underlying principle that time reversal can only act globally in quantum systems (see below). This aspect has important ramifications for the phase diagram at zero and finite temperatures $T$.
Model & Phase diagram.
The model we consider here is described by the effective spin-1/2 Hamiltonian
$$\mathcal{H}=K_{1}\sum_{\langle ij\rangle}S^{\gamma_{ij}}_{i}S_{j}^{\gamma_{ij}}+K_{2}\sum_{\ll ij\gg}S^{\lambda_{ij}}_{i}S^{\lambda_{ij}}_{j}~{},$$
(1)
where $\langle ij\rangle$ (respectively $\ll\!\!ij\!\!\gg$) label NN (NNN) spins on the honeycomb lattice, $S_{j}^{a}$ defines the $a$th cartesian component of the spin operator at site $j$, and $\gamma_{ij}$ ($\lambda_{ij}$) define the type of Ising coupling for the bond $(ij)$, see Fig. 1. This model interpolates between two well known limits, the exactly solvable Kitaev spin liquid Kitaev (2006) at $K_{2}\!=\!0$, and the triangular Kitaev model at $K_{1}\!=\!0$ Rousochatzakis et al. ; Kimchi and Vishwanath (2014); Becker et al. (2015); Jackeli and Avella . It is easy to see that a finite $K_{2}$ ruins the exact solvability of the NN Kitaev model because the flux operators Kitaev (2006) $W_{p}\!=\!2^{6}S_{1}^{z}S_{2}^{x}S_{3}^{y}S_{4}^{z}S_{5}^{x}S_{6}^{y}$ around hexagons $p$ are no longer conserved.
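This loss of conservation is easy to verify numerically on a single hexagon (our own construction; the hexagon bond labels $y,z,x,y,z,x$ are inferred from the form of $W_{p}$ above, using $S^{a}=\sigma^{a}/2$ so that $W_{p}$ is a product of Pauli matrices): $W_{p}$ commutes with every NN Kitaev term on the hexagon but fails to commute with a NNN Ising term.

```python
import numpy as np
from functools import reduce

pauli = {"x": np.array([[0, 1], [1, 0]], dtype=complex),
         "y": np.array([[0, -1j], [1j, 0]]),
         "z": np.array([[1, 0], [0, -1]], dtype=complex)}

def op(labels):
    """Tensor product over the 6 hexagon sites (0..5); `labels` maps
    site index -> Pauli label, identity elsewhere."""
    return reduce(np.kron, [pauli[labels[i]] if i in labels else np.eye(2)
                            for i in range(6)])

# W_p = 2^6 S1^z S2^x S3^y S4^z S5^x S6^y = sigma^z sigma^x sigma^y sigma^z sigma^x sigma^y
W = op({0: "z", 1: "x", 2: "y", 3: "z", 4: "x", 5: "y"})

# NN hexagon bonds carry labels y,z,x,y,z,x, consistent with the W_p above:
bonds = [((0, 1), "y"), ((1, 2), "z"), ((2, 3), "x"),
         ((3, 4), "y"), ((4, 5), "z"), ((5, 0), "x")]
for (i, j), g in bonds:
    K1_term = op({i: g, j: g})
    assert np.allclose(K1_term @ W - W @ K1_term, 0)  # W_p conserved by K1

# A second neighbor Ising term, e.g. sigma_1^z sigma_3^z, breaks the conservation:
K2_term = op({0: "z", 2: "z"})
assert not np.allclose(K2_term @ W - W @ K2_term, 0)
```

Each NN term anticommutes with $W_{p}$ at both of its sites (two sign flips, hence commutes overall), whereas the NNN term flips the sign at only one site, so the commutator is nonzero.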
In the following we parametrize $K_{1}\!=\!\cos\psi$ and $K_{2}\!=\!\sin\psi$, and take $\psi\!\in\![0,2\pi)$. It turns out that the physics actually remains the same under a simultaneous sign change of $K_{1}$ and $K_{2}$, because this can be gauged away by an operation $H_{yzx}\!=\!\prod_{i\in\text{B}}\mathsf{C}_{2y}(i)\prod_{i\in\text{C}}\mathsf{C}_{2z}(i)\prod_{i\in\text{D}}\mathsf{C}_{2x}(i)$, which is the product of $\pi$-rotations around the $\mathbf{y}$, $\mathbf{z}$, and $\mathbf{x}$ axes, respectively, for the B, C, and D sublattices of Fig. 1 111This symmetry does not exist when Heisenberg couplings are also present, in contrast to the symmetry $H_{xyz}$, see below. This invariance then reduces our study to the first two quadrants of the unit circle of $\psi$.
Figure 2 shows the quantum phase diagram as found by exact diagonalizations (ED) on finite clusters, see numerical data shown in Fig. 3. There are six different regimes as a function of the angle $\psi$: the two quantum spin liquids (QSLs) around the exactly solvable Kitaev points ($\psi\!=\!0$ and $\pi$) and four long-range magnetic regions (I-IV), hosting FM, Neel, stripy, and the zig-zag phases that are relevant for Na${}_{2}$IrO${}_{3}$ (II) and $\alpha\!-\!{\rm RuCl}_{3}$ (IV). Under $H_{yzx}$, the two QSLs map to each other, I maps to III, and II maps to IV.
The QSL regions are extremely narrow: They survive in a window of $\delta\psi\!=\!0.05\pi$ around the exact Kitaev points, which is confirmed by the comparison of ED against large scale pseudofermion functional renormalization group (PFFRG) calculations Reuther and Wölfle (2010); Reuther and Thomale (2011); Reuther et al. (2011a, b). The GS degeneracy at these points 222This is a degeneracy between three out of the four topological sectors and can appear already for finite systems, depending on the cluster geometry and the corresponding structure of the boundary terms in the fermionic description of the problem Kells et al. (2009). is lifted by $K_{2}$. Still, for small enough $|K_{2}|$, the QSLs must be gapless in the thermodynamic limit, because $K_{2}$ respects time reversal symmetry and is therefore not expected Kitaev (2006) to open a gap in the Majorana spectrum 333However, a gap may eventually open at finite $K_{2}$, before the transitions to the magnetically ordered phases.. The magnetic instabilities, which serve as good examples of deconfinement-confinement transitions Fradkin and Shenker (1979); Grignani et al. (1996); Tsuchiizu and Suzumura (1999); Mandal et al. (2011) for the underlying spinons, are of first order, as they are accompanied by finite, abrupt changes 444For finite systems, these are not true jumps because the transitions involve two states that belong to the same (identity) symmetry sector, leading to a very small level anticrossing. in $\langle W_{p}\rangle$, and in the ‘symmetrized’ spin structure factor $\widetilde{\mathcal{S}}(\mathbf{Q})$, defined below.
At $\psi\!=\!0$ and $\pi$, all fluxes $W_{p}$ have a value of $+1$ Kitaev (2006). A finite $K_{2}$ admixes sectors of different $W_{p}$, and so $\langle W_{p}\rangle$ drops continuously as we depart from the exact Kitaev’s points, until it jumps to very low absolute values when we enter the magnetic phases, see Fig. 3 (c). At the same time, the system-size dependence of $\widetilde{\mathcal{S}}(\mathbf{Q})$ shows clearly the short-range (long-range) character of spin-spin correlations inside (outside) the QSL regions, see Fig. 3 (d).
Next, each of the magnetic regions actually hosts twelve degenerate quantum states, some of which are even qualitatively different among themselves (as in III and IV) with very distinct Bragg reflections. In addition, the latter show different ordering wavevectors $\mathbf{Q}^{(\alpha)}$ for different spin components $\alpha$, reflecting the locking between spin and orbital degrees of freedom in this model. In particular, these features stem from the special point group symmetry which involves: i) the double cover $\widetilde{\mathsf{C}}_{6\text{v}}$ of $\mathsf{C}_{6\text{v}}$, ii) the double cover $\widetilde{\mathsf{D}}_{2}$ of the $\mathsf{D}_{2}$ group of global $\pi$ rotations in spin space, and iii) a hidden Khaliullin (2005); Chaloupka et al. (2010); Chaloupka and Khaliullin (2015) symmetry $H_{xyz}$, which is the product of $\pi$-rotations around the $\mathbf{x}$, $\mathbf{y}$, and $\mathbf{z}$ axis, respectively, for the B, C, and D sublattices of Fig. 1. For finite systems, the 12 quantum states are admixed by a finite tunneling, leading to 12 symmetric eigenstates with quantum numbers SM corresponding to the decomposition of the symmetry broken states. All of these states can be readily identified in the low-energy part of the spectra of Fig. 3 (a-b) with the expected quantum numbers and degeneracy SM . So the multiplicity and the symmetry structure of the low-energy spectrum is fully consistent with the states shown in Fig. 2. To probe their physical origin we now take one step back and examine the classical limit first.
Classical limit.
For classical spins, the frustration introduced by the $K_{2}$ coupling differs from that of the pure $K_{1}$ model studied by Baskaran et al. Baskaran et al. (2008). A straightforward classical minimization in momentum space SM gives lines of energy minima instead of a whole branch of minima Baskaran et al. (2008), suggesting a sub-extensive GS manifold structure, in analogy to compass-like models Nussinov and van den Brink (2015) or other special antiferromagnets Rousochatzakis et al. (2015).
We can construct one class of GSs by satisfying one of the three types of Ising bonds. Choosing, for example, the horizontal $zz$-bonds, we align the spins along the $\mathbf{z}$-axis with relative orientations dictated by the signs of $K_{1}$ and $K_{2}$. The energy of the resulting configuration saturates the lower energy bound SM $E_{b}/(NS^{2})\!=\!-|K_{2}|\!-\!|K_{1}|/2$ and is therefore one of the GSs. We can then generate other GSs by noting that $K_{1}$ and $K_{2}$ fix the relative signs of the spin projections $S_{z}$ only within the vertical 2-leg ladders of the lattice (shaded strips in Fig. 1), but do not fix the relative orientation between different ladders, because the ladders couple only via $xx$ and $yy$ Ising interactions, which drop out at the mean-field level. This freedom leads to $2^{n_{\text{lad}}}$ GSs, where $n_{\text{lad}}\!\propto\!\sqrt{N}$ is the number of vertical ladders. This sub-extensive degeneracy stems from non-global, sliding operations Batista and Nussinov (2005); Nussinov and Fradkin (2005); Nussinov et al. (2006); Nussinov and van den Brink (2015) that flip $S_{z}\!\mapsto\!-S_{z}$ for all spins belonging to one vertical ladder. Similarly, we can saturate the $xx$ or the $yy$ bonds, leading to 2-leg ladders running along the diagonal directions of the lattice. In total, this procedure delivers $3\!\times\!2^{n_{\text{lad}}}$ classical GSs.
These states are actually connected in configuration space by valleys formed by other, continuous families of GSs that can be generated by global SO(3) rotations of the discrete states. The degeneracy associated with these valleys is accidental and can therefore be lifted by fluctuations. This is in fact the situation at finite $T$, where thermal fluctuations select one of the three types of discrete GSs, thereby breaking the three-fold symmetry of the model in the combined spin-orbit space. This corresponds to a finite-$T$ nematic phase where spins point along one of the three cubic axes but still sample all of the $2^{n_{\text{lad}}}$ corresponding states, without any long-range magnetic order. To achieve the latter, one needs to spontaneously break all sliding symmetries, and this cannot happen at finite $T$, according to the generalized Elitzur’s theorem of Batista and Nussinov Batista and Nussinov (2005). The sliding symmetries can break spontaneously only at $T\!=\!0$ and in all possible ways, which is reflected in the divergence of the spin structure factor along lines in momentum space.
Quantum spins & Strong-coupling expansion.
Turning to quantum spins, the situation is fundamentally different because the sliding symmetries are absent from the outset: to flip one component of the spin we must combine a $\pi$-rotation in spin space with the time reversal operation. (By contrast, for the square-lattice compass model Douçot et al. (2005); Dorier et al. (2005), a $\pi$-rotation alone is actually enough, because the model involves only two types of Ising couplings, meaning that sliding symmetries exist also for quantum spins.) The latter, however, involves the complex conjugation, which cannot be constrained to act locally on only one ladder. Essentially, this means that the ladders must couple to each other dynamically by virtual quantum-mechanical processes, which in turn opens the possibility for long-range magnetic ordering even at finite $T$.
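The need for an antiunitary operation can be made concrete on a single spin-1/2: no unitary flips $S_{z}$ alone, but a $\pi$-rotation about $\mathbf{x}$ combined with complex conjugation does. A minimal numpy sketch (the helper name `antiunitary_conj` is ours, not from the paper):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Antiunitary operation A = U K, with K = complex conjugation:
# A s A^{-1} = U conj(s) U^dagger.
def antiunitary_conj(U, s):
    return U @ np.conj(s) @ U.conj().T

# U = sx is the pi-rotation about x (up to a phase); together with K it
# flips only the z-component:
assert np.allclose(antiunitary_conj(sx, sx), sx)    # S_x kept
assert np.allclose(antiunitary_conj(sx, sy), sy)    # S_y kept
assert np.allclose(antiunitary_conj(sx, sz), -sz)   # only S_z flipped

# No unitary alone can do this: it would have to commute with both sx and
# sy, hence be proportional to the identity, which leaves sz unchanged.
```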
The natural way to understand the dynamical coupling between the ladders is to perform a perturbative expansion around one of the three strong coupling limits where the above discrete states become true quantum-mechanical GSs. Consider for example the limit where the $xx$ and $yy$ couplings, denoted by $K_{1}^{x(y)}$ and $K_{2}^{x(y)}$, are much smaller than the $zz$ couplings, $K_{1}^{z}$ and $K_{2}^{z}$. Let us also parametrize $K_{1,2}^{x(y)}\!=\!rK_{1,2}^{z}$, $K_{1}^{z}\!=\!\cos\psi$ and $K_{2}^{z}\!=\!\sin\psi$. For $r\!=\!0$ we have $n_{\text{lad}}$ decoupled vertical ladders, and $2^{n_{\text{lad}}}$ quantum GSs. Degenerate perturbation theory SM then shows that the degeneracy is first lifted at fourth order in $r$ via three, loop-four virtual processes that involve: (i) only $K_{1}^{x(y)}$, (ii) only $K_{2}^{x(y)}$, and (iii) both $K_{1}^{x(y)}$ and $K_{2}^{x(y)}$ perturbations, see the top panel of Fig. 4.
The processes (i) give rise to intra-ladder, six-body terms which are nothing else than the flux operators $W_{p}$. As shown by Kitaev Kitaev (2006), these terms can be mapped to the square lattice Toric code Kitaev (2003) which has a gapped spin liquid GS. Next, the processes (ii) and (iii) give rise to effective, NNN inter-ladder couplings of the form $JS_{i}^{z}S_{j}^{z}$, where $i$ and $j$ have the same (ii) or different (iii) sublattice unit cell indices, see top panel of Fig. 4. To fourth-order in $r$, the corresponding couplings $J_{W}$ (i), $J_{1}$ (ii), and $J_{2}$ (iii) read
$$J_{W}=\frac{-\left(K_{1}^{x}K_{1}^{y}\right)^{2}|K_{1}^{z}|}{64\left(|K_{1}^{z}|+2|K_{2}^{z}|\right)^{2}\left(|K_{1}^{z}|+3|K_{2}^{z}|\right)\left(|K_{1}^{z}|+4|K_{2}^{z}|\right)}\,,$$
$$J_{1}=\frac{\left(K_{2}^{x}K_{2}^{y}\right)^{2}}{8\left(|K_{1}^{z}|+2|K_{2}^{z}|\right)^{2}\left(2|K_{1}^{z}|+3|K_{2}^{z}|\right)}\,\mathrm{sgn}(K_{2}^{z})\,,$$
(2)
$$J_{2}=\frac{K_{1}^{x}K_{1}^{y}K_{2}^{x}K_{2}^{y}}{4\left(|K_{1}^{z}|+2|K_{2}^{z}|\right)^{3}}\left[\frac{|K_{1}^{z}|+|K_{2}^{z}|}{2|K_{1}^{z}|+3|K_{2}^{z}|}+\frac{2|K_{2}^{z}|}{|K_{1}^{z}|+4|K_{2}^{z}|}\right]\,.$$
Note that $J_{2}$ is always AFM and competes with $J_{1}$ in the regions I and III of Fig. 2. We also emphasize that there is no $S_{i}^{z}S_{j}^{z}$ coupling when $i$ and $j$ belong to NN ladders. This is actually true to all orders in perturbation theory, because of the above hidden symmetry $H_{xyz}$, which changes the sign of $S_{z}$ on every second vertical ladder (B and C sites of Fig. 1).
The main panel of Fig. 4 shows the behavior of $|J_{W}|/r^{4}$, $2|J_{1}|/r^{4}$, and $J_{2}/r^{4}$ as a function of the angle $\psi$, where the relative factor of $2$ between $|J_{1}|$ and $J_{2}$ accounts for their relative contribution to the total classical energy. Close to the exactly solvable points $\psi\!=\!0$ and $\pi$, the physics is dominated by the flux terms $W_{p}$ which, as mentioned above, lead to the gapped Toric code QSL Kitaev (2003, 2006). The gapless QSL at $r\!=\!1$ is eventually stabilized by off-diagonal processes that necessarily admix states outside the lowest manifold of the $r\!=\!0$ point Schmidt et al. (2008).
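The $\psi$-dependence of these couplings can be checked directly from Eq. (2); a minimal numeric sketch under the parametrization of the text, $K_{1,2}^{x(y)}=rK_{1,2}^{z}$, $K_{1}^{z}=\cos\psi$, $K_{2}^{z}=\sin\psi$ (the function name and the sampled $\psi$ values are our choices):

```python
import numpy as np

def couplings(psi, r=1.0):
    """Fourth-order effective couplings of Eq. (2), with
    K1^{x,y} = r*cos(psi), K2^{x,y} = r*sin(psi), K1^z = cos(psi), K2^z = sin(psi)."""
    K1x = K1y = r * np.cos(psi)
    K2x = K2y = r * np.sin(psi)
    k1, k2 = abs(np.cos(psi)), abs(np.sin(psi))
    JW = -(K1x * K1y)**2 * k1 / (64 * (k1 + 2*k2)**2 * (k1 + 3*k2) * (k1 + 4*k2))
    J1 = (K2x * K2y)**2 / (8 * (k1 + 2*k2)**2 * (2*k1 + 3*k2)) * np.sign(np.sin(psi))
    J2 = (K1x * K1y * K2x * K2y) / (4 * (k1 + 2*k2)**3) * (
        (k1 + k2) / (2*k1 + 3*k2) + 2*k2 / (k1 + 4*k2))
    return JW, J1, J2

# Near psi = 0 the flux term dominates (|JW| -> r^4/64, J1, J2 -> 0) ...
JW0, J10, J20 = couplings(1e-6)
assert abs(JW0) > 2 * abs(J10) and abs(JW0) > J20
# ... while away from the solvable points J1 takes over (factor 2 as in Fig. 4):
JWh, J1h, J2h = couplings(np.pi / 2)
assert 2 * abs(J1h) > abs(JWh) and 2 * abs(J1h) > J2h
```

Note that $K_{1}^{x}K_{1}^{y}$ and $K_{2}^{x}K_{2}^{y}$ are squares in this parametrization, so $J_{2}\geq 0$ (AFM) for all $\psi$, consistent with the text.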
The four magnetic phases I-IV of Fig. 2 are all stabilized by $J_{1}$ which, according to Fig. 4, is the dominant coupling in a wide region away from $\psi\!=\!0$ and $\pi$. Note that there are also two windows (shaded in Fig. 4) at the beginning of regions I and III where the two inter-ladder terms compete and $2|J_{1}|\!<\!J_{2}$. This opens the possibility for two more states (the ones favored by $J_{2}$) in these regions. This scenario is, however, not confirmed by our ED data (spectra and spin structure factors), which show that these phases are eventually preempted by the QSLs and by the phases I and III at higher values of $r$.
We remark here that the 1-loop formulation of PFFRG delivers the $J_{2}$ but not the $J_{1}$ processes because, in a diagrammatic formulation in terms of Abrikosov fermions, the latter relate to $3$-particle vertex contributions, which require a 2-loop formulation. However, for $\psi$ around $0$ and $\pi$, where $J_{1}$ is small, a 1-loop formulation already yields good agreement.
Semiclassical picture.
The magnetic phases of the model can be captured by a standard semiclassical expansion, but this has to go beyond the non-interacting spin-wave level. Indeed, the zero-point energy of the quadratic theory lifts the continuous degeneracy of the problem, but fails to lift the discrete $2^{n_{\text{lad}}}$ degeneracy (the spectrum has lines of zero modes corresponding to the soft classical twists along individual ladders), and does not deliver a finite spin length, in analogy to several frustrated models Khaliullin (2001); Dorier et al. (2005); Jackeli and Avella ; Rousochatzakis et al. (2015). The spurious zero modes are gapped out by spin-wave interactions, leading to the expected anisotropy gap and a finite spin length. The latter (obtained here from a self-consistent treatment of the quartic theory; details will be given elsewhere) is in good agreement with the values extracted from the ‘symmetrized’ spin structure factor $\widetilde{\mathcal{S}}(\mathbf{Q})\!=\!\frac{2}{N}\sum_{\alpha}\sum_{\mathbf{r}\neq 0}e^{i\mathbf{Q}^{(\alpha)}\cdot\mathbf{r}}\langle S^{\alpha}_{0}S^{\alpha}_{\mathbf{r}}\rangle$ (the extra factor of $2$ in this definition accounts for the fact that there are no correlations between NN ladders for finite systems, due to the hidden symmetry $H_{xyz}$, see also SM ), see Fig. 3 (d). Both methods give values very close to the classical value of $1/2$ inside the magnetic regions, showing that these phases are very robust.
Triangular Kitaev points.
At $\psi\!=\!\pm\frac{\pi}{2}$ the system decomposes into two inter-penetrating triangular sublattices, where the $K_{2}$ coupling plays the role of a NN Kitaev coupling. This problem has been studied for both classical Rousochatzakis et al. ; Kimchi and Vishwanath (2014) and quantum spins Becker et al. (2015); Jackeli and Avella . The above analysis for the magnetic phases still holds here, the only difference being that the two legs of each ladder decouple, since they belong to different triangular sublattices. The ordering between the legs belonging to the same sublattice stems from the effective coupling $J_{1}$, which is the only one surviving at $K_{1}\!=\!0$. This coupling connects NNN legs only, leading to twelve states in each sublattice and thus $12^{2}$ states in total, instead of 12 for finite $K_{1}$. The accumulation of such extra states at low energies can be clearly seen in Fig. 3(a-b) at $\psi\!=\!\pm\frac{\pi}{2}$. The origin of the ordering mechanism at the triangular Kitaev points has also been discussed independently in a recent paper by Jackeli and Avella Jackeli and Avella .
Relevance to materials.
According to recent superexchange derivations, in Na${}_{2}$IrO${}_{3}$ the $K_{2}$ coupling is the largest energy scale after the NN coupling $K_{1}$. Here we have shown that such a coupling naturally explains the zig-zag order in this compound without introducing unphysically large longer-range Heisenberg terms. The presence of a large AFM $K_{2}$ can also clarify the puzzle of the large AFM Curie-Weiss temperature in this compound Singh and Gegenwart (2010); Singh et al. (2012); Choi et al. (2012). The fact that the Kitaev spin liquid is significantly more fragile against $K_{2}$ than against isotropic Heisenberg terms indicates that Na${}_{2}$IrO${}_{3}$ is deep inside the magnetically ordered phase.
By contrast, the value of $K_{2}$ is considerably smaller in $\alpha\!-\!{\rm RuCl}_{3}$, due to the absence of the large diffuse $s$-orbitals of the Na${}^{1+}$ ions that mediate the NNN $K_{2}$ coupling in Na${}_{2}$IrO${}_{3}$. This suggests that $\alpha\!-\!{\rm RuCl}_{3}$ can be closer to the spin liquid phase, as has been recently indicated by Raman and neutron scattering experiments Sandilands et al. (2015); Banerjee et al. (2015).
Acknowledgements.
We are grateful to R. Moessner and the Max Planck Institute for the Physics of Complex Systems, Dresden, where a large part of the numerical computations took place. We also thank Craig Price, Oleg Starykh, George Jackeli, Yuriy Sizyuk, Paula Mellado, and Marc Schulz for stimulating discussions. IR and NP acknowledge the support from NSF Grant DMR-1511768. RT was supported by the European Research Council through ERC-StG-336012 and by DFG-SFB 1170. SR was supported by DFG-SFB 1143, DFG-SPP 1666, and by the Helmholtz association through VI-521.
References
Witczak-Krempa et al. (2014)
W. Witczak-Krempa, G. Chen, Y. B. Kim, and L. Balents, Annual Review of Condensed Matter Physics 5, 57 (2014).
Singh and Gegenwart (2010)
Y. Singh and P. Gegenwart, Phys. Rev. B 82, 064412 (2010).
Singh et al. (2012)
Y. Singh, S. Manni, J. Reuther, T. Berlijn, R. Thomale, W. Ku, S. Trebst, and P. Gegenwart, Phys. Rev. Lett. 108, 127203 (2012).
Liu et al. (2011)
X. Liu, T. Berlijn, W.-G. Yin, W. Ku, A. Tsvelik, Y.-J. Kim, H. Gretarsson, Y. Singh, P. Gegenwart, and J. P. Hill, Phys. Rev. B 83, 220403 (2011).
Ye et al. (2012)
F. Ye, S. Chi, H. Cao, B. C. Chakoumakos, J. A. Fernandez-Baca, R. Custelcean, T. F. Qi, O. B. Korneta, and G. Cao, Phys. Rev. B 85, 180403 (2012).
Choi et al. (2012)
S. K. Choi, R. Coldea, A. N. Kolmogorov, T. Lancaster, I. I. Mazin, S. J. Blundell, P. G. Radaelli, Y. Singh, P. Gegenwart, K. R. Choi, S.-W. Cheong, P. J. Baker, C. Stock, and J. Taylor, Phys. Rev. Lett. 108, 127204 (2012).
Hwan Chun et al. (2015)
S. Hwan Chun, J.-W. Kim, J. Kim, H. Zheng, C. C. Stoumpos, C. D. Malliakas, J. F. Mitchell, K. Mehlawat, Y. Singh, Y. Choi, T. Gog, A. Al-Zein, M. M. Sala, M. Krisch, J. Chaloupka, G. Jackeli, G. Khaliullin, and B. J. Kim, Nat. Phys. 10, 1038 (2015).
Jackeli and Khaliullin (2009)
G. Jackeli and G. Khaliullin, Phys. Rev. Lett. 102, 017205 (2009).
Chaloupka et al. (2010)
J. Chaloupka, G. Jackeli, and G. Khaliullin, Phys. Rev. Lett. 105, 027204 (2010).
Kitaev (2006)
A. Kitaev, Annals of Physics 321, 2 (2006).
Schaffer et al. (2012)
R. Schaffer, S. Bhattacharjee, and Y. B. Kim, Phys. Rev. B 86, 224417 (2012).
Plumb et al. (2014)
K. W. Plumb, J. P. Clancy, L. J. Sandilands, V. V. Shankar, Y. F. Hu, K. S. Burch, H.-Y. Kee, and Y.-J. Kim, Phys. Rev. B 90, 041112 (2014).
Sears et al. (2015)
J. A. Sears, M. Songvilay, K. W. Plumb, J. P. Clancy, Y. Qiu, Y. Zhao, D. Parshall, and Y.-J. Kim, Phys. Rev. B 91, 144420 (2015).
Kubota et al. (2015)
Y. Kubota, H. Tanaka, T. Ono, Y. Narumi, and K. Kindo, Phys. Rev. B 91, 094422 (2015).
Katukuri et al. (2014)
V. M. Katukuri, S. Nishimoto, V. Yushankhai, A. Stoyanova, H. Kandpal, S. Choi, R. Coldea, I. Rousochatzakis, L. Hozoi, and J. van den Brink, New Journal of Physics 16, 013056 (2014).
(16)
S. Nishimoto, V. M. Katukuri, V. Yushankhai, H. Stoll, U. K. Roessler, L. Hozoi, I. Rousochatzakis, and J. van den Brink, arXiv:1403.6698.
Rastelli et al. (1979)
E. Rastelli, A. Tassi, and L. Reatto, Physica B+C 97, 1 (1979).
Fouet, J. B. et al. (2001)
J. B. Fouet, P. Sindzingre, and C. Lhuillier, Eur. Phys. J. B 20, 241 (2001).
Kimchi and You (2011)
I. Kimchi and Y.-Z. You, Phys. Rev. B 84, 180407 (2011).
Sizyuk et al. (2014)
Y. Sizyuk, C. Price, P. Wölfle, and N. B. Perkins, Phys. Rev. B 90, 155126 (2014).
Foyevtsova et al. (2013)
K. Foyevtsova, H. O. Jeschke, I. I. Mazin, D. I. Khomskii, and R. Valentí, Phys. Rev. B 88, 035107 (2013).
Reuther et al. (2014)
J. Reuther, R. Thomale, and S. Rachel, Phys. Rev. B 90, 100405 (2014).
Shitade et al. (2009)
A. Shitade, H. Katsura, J. Kuneš, X.-L. Qi, S.-C. Zhang, and N. Nagaosa, Phys. Rev. Lett. 102, 256403 (2009).
Reuther et al. (2012)
J. Reuther, R. Thomale, and S. Rachel, Phys. Rev. B 86, 155127 (2012).
(25)
V. V. Shankar, H.-S. Kim, and H.-Y. Kee, arXiv:1411.6623.
Sizyuk et al. (2015)
Y. Sizyuk, P. Wölfle, and N. B. Perkins, in preparation (2015).
(27)
See Supplemental material at ???? for auxiliary information and technical details on: i) the classical Luttinger-Tisza minimization in momentum space, ii) our ED study (including the symmetry decomposition of the twelve magnetic states in regions I-II, spin-spin correlation profiles in real space, and the definition of $\widetilde{S}(\mathbf{Q})$), iii) momentum space structure factors from PFFRG calculations, and iv) the derivation of the effective Hamiltonian around the strong-coupling limit.
(28)
I. Rousochatzakis, U. K. Roessler, J. van den Brink, and M. Daghofer, arXiv:1209.5895.
Kimchi and Vishwanath (2014)
I. Kimchi and A. Vishwanath, Phys. Rev. B 89, 014414 (2014).
Becker et al. (2015)
M. Becker, M. Hermanns, B. Bauer, M. Garst, and S. Trebst, Phys. Rev. B 91, 155135 (2015).
(31)
G. Jackeli and A. Avella, arXiv:1504.01435.
(32)
This symmetry does not exist when Heisenberg couplings are also present, in contrast to the symmetry $H_{xyz}$, see below.
Reuther and Wölfle (2010)
J. Reuther and P. Wölfle, Phys. Rev. B 81, 144410 (2010).
Reuther and Thomale (2011)
J. Reuther and R. Thomale, Phys. Rev. B 83, 024402 (2011).
Reuther et al. (2011a)
J. Reuther, D. A. Abanin, and R. Thomale, Phys. Rev. B 84, 014417 (2011a).
Reuther et al. (2011b)
J. Reuther, R. Thomale, and S. Trebst, Phys. Rev. B 84, 100406 (2011b).
(37)
This is a degeneracy between three out of the four topological sectors and can appear already for finite systems, depending on the cluster geometry and the corresponding structure of the boundary terms in the fermionic description of the problem Kells et al. (2009).
(38)
However, a gap may eventually open at finite $K_{2}$, before the transitions to the magnetically ordered phases.
Fradkin and Shenker (1979)
E. Fradkin and S. H. Shenker, Phys. Rev. D 19, 3682 (1979).
Grignani et al. (1996)
G. Grignani, G. Semenoff, and P. Sodano, Phys. Rev. D 53, 7157 (1996).
Tsuchiizu and Suzumura (1999)
M. Tsuchiizu and Y. Suzumura, Phys. Rev. B 59, 12326 (1999).
Mandal et al. (2011)
S. Mandal, S. Bhattacharjee, K. Sengupta, R. Shankar, and G. Baskaran, Phys. Rev. B 84, 155121 (2011).
(43)
For finite systems, these are not true jumps because the transitions involve two states that belong to the same (identity) symmetry sector, leading to a very small level anticrossing.
Khaliullin (2005)
G. Khaliullin, Progress of Theoretical Physics Supplement 160, 155 (2005).
Chaloupka and Khaliullin (2015)
J. Chaloupka and G. Khaliullin, arXiv:1502.02587 (2015).
Baskaran et al. (2008)
G. Baskaran, D. Sen, and R. Shankar, Phys. Rev. B 78, 115116 (2008).
Nussinov and van den Brink (2015)
Z. Nussinov and J. van den Brink, Rev. Mod. Phys. 87, 1 (2015).
Rousochatzakis et al. (2015)
I. Rousochatzakis, J. Richter, R. Zinke, and A. A. Tsirlin, Phys. Rev. B 91, 024416 (2015).
Batista and Nussinov (2005)
C. D. Batista and Z. Nussinov, Phys. Rev. B 72, 045137 (2005).
Nussinov and Fradkin (2005)
Z. Nussinov and E. Fradkin, Phys. Rev. B 71, 195120 (2005).
Nussinov et al. (2006)
Z. Nussinov, C. D. Batista, and E. Fradkin, International Journal of Modern Physics B 20, 5239 (2006).
Kitaev (2003)
A. Kitaev, Annals of Physics 303, 2 (2003).
Schmidt et al. (2008)
K. P. Schmidt, S. Dusuel, and J. Vidal, Phys. Rev. Lett. 100, 057208 (2008).
Khaliullin (2001)
G. Khaliullin, Phys. Rev. B 64, 212405 (2001).
Dorier et al. (2005)
J. Dorier, F. Becca, and F. Mila, Phys. Rev. B 72, 024448 (2005).
Sandilands et al. (2015)
L. J. Sandilands, Y. Tian, K. W. Plumb, Y.-J. Kim, and K. S. Burch, Phys. Rev. Lett. 114, 147201 (2015).
Banerjee et al. (2015)
A. Banerjee, C. Bridges, J.-Q. Yan, A. Aczel, L. Li, M. Stone, G. Granroth, M. Lumsden, Y. Yiu, J. Knolle, D. Kovrizhin, S. Bhattacharjee, R. Moessner, D. Tennant, D. Mandrus, and S. Nagler, arXiv:1504.08037 (2015).
Kells et al. (2009)
G. Kells, J. K. Slingerland, and J. Vala, Phys. Rev. B 80, 125415 (2009).
Douçot et al. (2005)
B. Douçot, M. V. Feigel’man, L. B. Ioffe, and A. S. Ioselevich, Phys. Rev. B 71, 024505 (2005).
Supplemental material
In this Supplemental material we provide auxiliary information, technical details, and derivations. Specifically, Sec. A deals with the Luttinger-Tisza minimization of the classical energy in momentum space. Sec. B gives details about our finite-size ED study, including the symmetry analysis of the low-energy spectra in regions I and II of the phase diagram (B.3), GS spin-spin correlation profiles (B.4), and the definition of the ‘symmetrized’ spin structure factor $\widetilde{\mathcal{S}}(\mathbf{Q})$.
In Sec. C we provide results from the pseudofermion functional renormalization group (PFFRG) approach.
Finally, in Sec. D we provide the derivation of the effective Hamiltonian around the strong coupling limit of $K_{1,2}^{x(y)}\!=\!0$.
Appendix A Luttinger-Tisza minimization
We choose the primitive vectors of the honeycomb lattice as $\mathbf{t}_{1}\!=\!a\mathbf{y}$ and $\mathbf{t}_{2}\!=\!(-\frac{\sqrt{3}}{2}\mathbf{x}\!+\!\frac{1}{2}\mathbf{y})a$, where $a$ is the lattice constant, see Fig. 1 of the main paper. We also define $\mathbf{t}_{3}\!=\!\mathbf{t}_{1}\!-\!\mathbf{t}_{2}\!=\!(\frac{\sqrt{3}}{2}\mathbf{x}\!+\!\frac{1}{2}\mathbf{y})a$. In the following, we label the Bravais lattice vectors as $\mathbf{R}\!=\!n\mathbf{t}_{1}\!+\!m\mathbf{t}_{2}$, where $n$ and $m$ are integers, and denote the two sites in the unit cell by a sublattice index $i\!=\!1,2$. The total classical energy of the $K_{1}$-$K_{2}$ model reads
$$E=\sum_{\mathbf{R}}K_{1}\left(S_{\mathbf{R},1}^{z}S_{\mathbf{R},2}^{z}+S_{\mathbf{R},1}^{x}S_{\mathbf{R}+\mathbf{t}_{2},2}^{x}+S_{\mathbf{R},1}^{y}S_{\mathbf{R}-\mathbf{t}_{3},2}^{y}\right)+K_{2}\sum_{\mathbf{R},i}\left(S_{\mathbf{R},i}^{z}S_{\mathbf{R}-\mathbf{t}_{1},i}^{z}+S_{\mathbf{R},i}^{x}S_{\mathbf{R}+\mathbf{t}_{3},i}^{x}+S_{\mathbf{R},i}^{y}S_{\mathbf{R}+\mathbf{t}_{2},i}^{y}\right)\,.$$
(3)
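The saturation of the lower bound $E_{b}/(NS^{2})\!=\!-|K_{2}|\!-\!|K_{1}|/2$ and the free per-ladder Ising sign quoted in the main text can be verified numerically from the bond geometry of Eq. (3); a minimal sketch for $K_{1},K_{2}>0$ (only the $zz$-terms contribute for a $\mathbf{z}$-aligned state; the choice of cluster size, couplings, and the indexing of vertical ladders by $m$ are our assumptions):

```python
import numpy as np

# Periodic L x L honeycomb cluster; a site is (n, m, i) with R = n*t1 + m*t2
# and sublattice i = 0, 1. For a state with all spins along the z-axis only
# the zz-terms of Eq. (3) contribute, so the x/y bonds are dropped.
L, K1, K2, S = 6, 1.0, 1.0, 0.5   # L must be even; values are our choice
N = 2 * L * L

def energy_z(Sz):
    E = 0.0
    for n in range(L):
        for m in range(L):
            E += K1 * Sz[n, m, 0] * Sz[n, m, 1]                 # K1 zz-bond
            for i in (0, 1):
                E += K2 * Sz[n, m, i] * Sz[(n - 1) % L, m, i]   # K2 zz-bond along t1
    return E

# Saturate all zz-bonds: antialign within the unit cell, alternate along t1.
# One free Ising sign per vertical ladder (indexed here by m).
signs = np.ones(L)
Sz = np.zeros((L, L, 2))
for n in range(L):
    for m in range(L):
        Sz[n, m, 0] = S * (-1) ** n * signs[m]
        Sz[n, m, 1] = -S * (-1) ** n * signs[m]

E = energy_z(Sz)
assert np.isclose(E / (N * S ** 2), -abs(K2) - abs(K1) / 2)  # bound saturated

# Flipping all spins of a single vertical ladder costs no energy:
Sz2 = Sz.copy()
Sz2[:, 2, :] *= -1
assert np.isclose(energy_z(Sz2), E)
```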
Defining $\mathbf{S}_{\mathbf{R},i}\!=\!\sum_{\mathbf{k}}e^{i\mathbf{k}\cdot\mathbf{R}}\,\mathbf{S}_{\mathbf{k},i}$, we get
$$\begin{aligned}\epsilon\equiv\mathcal{H}/N_{uc}&=K_{1}\sum_{\mathbf{k}}\left[S_{\mathbf{k},1}^{z}S_{-\mathbf{k},2}^{z}+e^{-i\mathbf{k}\cdot\mathbf{t}_{2}}S_{\mathbf{k},1}^{x}S_{-\mathbf{k},2}^{x}+e^{i\mathbf{k}\cdot\mathbf{t}_{3}}S_{\mathbf{k},1}^{y}S_{-\mathbf{k},2}^{y}\right]\\ &\quad+K_{2}\sum_{\mathbf{k},i}\left[\cos(\mathbf{k}\cdot\mathbf{t}_{1})S_{\mathbf{k},i}^{z}S_{-\mathbf{k},i}^{z}+\cos(\mathbf{k}\cdot\mathbf{t}_{3})S_{\mathbf{k},i}^{x}S_{-\mathbf{k},i}^{x}+\cos(\mathbf{k}\cdot\mathbf{t}_{2})S_{\mathbf{k},i}^{y}S_{-\mathbf{k},i}^{y}\right]\\ &=\sum_{\mathbf{k},ij}\sum_{\alpha}S_{\mathbf{k},i}^{\alpha}\,\Lambda^{(\alpha)}_{ij}(\mathbf{k})\,S_{-\mathbf{k},j}^{\alpha}\,,\end{aligned}$$
where $N_{uc}\!=\!N/2$ is the number of unit cells and the matrices $\bm{\Lambda}^{(\alpha)}$ ($\alpha\!=\!x,y,z$) are given by
$$\bm{\Lambda}^{(x)}(\mathbf{k})=\begin{pmatrix}K_{2}\cos(\mathbf{k}\cdot\mathbf{t}_{3})&\frac{K_{1}}{2}e^{-i\mathbf{k}\cdot\mathbf{t}_{2}}\\ \frac{K_{1}}{2}e^{i\mathbf{k}\cdot\mathbf{t}_{2}}&K_{2}\cos(\mathbf{k}\cdot\mathbf{t}_{3})\end{pmatrix},\quad\bm{\Lambda}^{(y)}(\mathbf{k})=\begin{pmatrix}K_{2}\cos(\mathbf{k}\cdot\mathbf{t}_{2})&\frac{K_{1}}{2}e^{i\mathbf{k}\cdot\mathbf{t}_{3}}\\ \frac{K_{1}}{2}e^{-i\mathbf{k}\cdot\mathbf{t}_{3}}&K_{2}\cos(\mathbf{k}\cdot\mathbf{t}_{2})\end{pmatrix},\quad\bm{\Lambda}^{(z)}(\mathbf{k})=\begin{pmatrix}K_{2}\cos(\mathbf{k}\cdot\mathbf{t}_{1})&\frac{K_{1}}{2}\\ \frac{K_{1}}{2}&K_{2}\cos(\mathbf{k}\cdot\mathbf{t}_{1})\end{pmatrix}.$$
To find the classical minimum we need to minimize the energy under the strong constraints $\mathbf{S}_{\mathbf{R},i}^{2}\!=\!S^{2}$, $\forall(\mathbf{R},i)$. The Luttinger-Tisza method Luttinger and Tisza (1946); Bertaut (1961); Litvin (1974); Kaplan and Menyuk (2007) amounts to relaxing the strong constraints to the weaker one $\sum_{\mathbf{R},i}\mathbf{S}_{\mathbf{R},i}^{2}\!=\!NS^{2}$, or equivalently $\sum_{\mathbf{k},i}\mathbf{S}_{\mathbf{k},i}\cdot\mathbf{S}_{-\mathbf{k},i}\!=\!S^{2}$. If we can find a minimum under the weak constraint that also satisfies the strong constraints, then we have solved the problem. To this end, we minimize the function
$$F=\epsilon-\lambda\sum_{\mathbf{k},i}\left(\mathbf{S}_{\mathbf{k},i}\cdot\mathbf{S}_{-\mathbf{k},i}-S^{2}\right)\,,$$
(4)
with respect to $\{S_{-\mathbf{k},i}^{\alpha}\}$, which gives a set of three eigenvalue problems for the $\bm{\Lambda}$ matrices:
$$\sum_{j=1,2}\Lambda^{(\alpha)}_{ij}(-\mathbf{q})\,S_{\mathbf{q},j}^{\alpha}=\lambda\,S_{\mathbf{q},i}^{\alpha},\qquad\alpha=x,y,z\,.$$
(5)
If we can satisfy these three relations (plus the strong constraint) with a single eigenvalue $\lambda$, then $\epsilon\!=\!\lambda S^{2}$. So the energy minimum corresponds to the minimum over the three eigenvalues $\lambda^{(\alpha)}$ of the matrices $\bm{\Lambda}^{(\alpha)}(-\mathbf{k})$, and over the whole Brillouin zone (BZ).
The eigenvalues of these matrices and the corresponding eigenvectors are:
$$\lambda^{(x)}_{\pm}=K_{2}\cos(\mathbf{k}\cdot\mathbf{t}_{3})\pm\frac{1}{2}K_{1},\qquad\lambda^{(y)}_{\pm}=K_{2}\cos(\mathbf{k}\cdot\mathbf{t}_{2})\pm\frac{1}{2}K_{1},\qquad\lambda^{(z)}_{\pm}=K_{2}\cos(\mathbf{k}\cdot\mathbf{t}_{1})\pm\frac{1}{2}K_{1},$$
$$\mathbf{v}^{(x)}_{\pm}\sim\begin{pmatrix}1\\ \pm e^{i\mathbf{k}\cdot\mathbf{t}_{2}}\end{pmatrix},\qquad\mathbf{v}^{(y)}_{\pm}\sim\begin{pmatrix}1\\ \pm e^{-i\mathbf{k}\cdot\mathbf{t}_{3}}\end{pmatrix},\qquad\mathbf{v}^{(z)}_{\pm}\sim\begin{pmatrix}1\\ \pm 1\end{pmatrix}.$$
For $K_{2}$ positive, the minima of $\lambda^{(x)}_{\pm}$, $\lambda^{(y)}_{\pm}$, and $\lambda^{(z)}_{\pm}$ are located on the lines $\mathbf{Q}^{(x)}\!=\!r(\mathbf{G}_{1}\!+\!\mathbf{G}_{2})\!+\!(l\!+\!\frac{1}{2})\mathbf{G}_{2}$, $\mathbf{Q}^{(y)}\!=\!r\mathbf{G}_{1}\!+\!(l\!+\!\frac{1}{2})\mathbf{G}_{2}$, and $\mathbf{Q}^{(z)}\!=\!r\mathbf{G}_{2}\!+\!(l\!+\!\frac{1}{2})\mathbf{G}_{1}$, respectively, where $l$ is any integer and $r\in(-\frac{1}{2},\frac{1}{2})$. For $K_{2}$ negative, the minima are instead located on the lines $\mathbf{Q}^{(x)^{\prime}}\!=\!r(\mathbf{G}_{1}\!+\!\mathbf{G}_{2})\!+\!l\mathbf{G}_{2}$, $\mathbf{Q}^{(y)^{\prime}}\!=\!r\mathbf{G}_{1}\!+\!l\mathbf{G}_{2}$, and $\mathbf{Q}^{(z)^{\prime}}\!=\!r\mathbf{G}_{2}\!+\!l\mathbf{G}_{1}$. Both sets of lines are shown in Fig. 5.
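The line degeneracy of $\bm{\Lambda}^{(z)}$ can be confirmed numerically; a minimal sketch (the units $a=1$ and the couplings $K_{1}=K_{2}=1$ are our choices):

```python
import numpy as np

# Lambda^(z)(k) from the Luttinger-Tisza analysis, with t1 = a*y and a = 1.
t1 = np.array([0.0, 1.0])
K1, K2 = 1.0, 1.0

def lam_z_min(k):
    c = K2 * np.cos(np.dot(k, t1))
    M = np.array([[c, K1 / 2], [K1 / 2, c]])
    return np.linalg.eigvalsh(M)[0]      # lowest eigenvalue

# Analytic result lambda_-^(z) = K2*cos(k.t1) - K1/2, checked at a random k:
rng = np.random.default_rng(0)
k = rng.uniform(-np.pi, np.pi, size=2)
assert np.isclose(lam_z_min(k), K2 * np.cos(np.dot(k, t1)) - K1 / 2)

# For K2 > 0 the global minimum -K2 - K1/2 is attained wherever k.t1 = pi
# (here k_y = pi, any k_x): a whole line of minima, not isolated points.
mins = [lam_z_min(np.array([kx, np.pi])) for kx in np.linspace(-np.pi, np.pi, 7)]
assert np.allclose(mins, -K2 - K1 / 2)
```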
Let us now try to build a ground state from the minima of the above eigenvectors for the case $K_{1,2}\!>\!0$, using the line of minima $\mathbf{Q}^{(z)}$ as follows:
$$\begin{pmatrix}S^{z}_{\mathbf{R},1}\\ S^{z}_{\mathbf{R},2}\end{pmatrix}=\sum_{\{\mathbf{Q}^{(z)}\}}f_{\mathbf{Q}^{(z)}}\,e^{i\mathbf{Q}^{(z)}\cdot\mathbf{R}}\begin{pmatrix}1\\ -1\end{pmatrix}=(-1)^{n}\begin{pmatrix}\xi_{m}\\ -\xi_{m}\end{pmatrix},$$
(6)
where we used the relation $\mathbf{R}\!=\!n\mathbf{t}_{1}\!+\!m\mathbf{t}_{2}$ and defined $\xi_{m}\equiv\int_{-1/2}^{1/2}dr\,f(r)\,e^{i2\pi mr}$, the Fourier transform of the envelope function $f(r)$. We still need to satisfy the spin-length constraint, which imposes the condition that $\xi_{m}$ takes only the values $\pm 1$. This freedom corresponds to the sliding symmetries of flipping individual vertical ladders, and leads to $2^{n_{\text{lad}}}$ degenerate states (where $n_{\text{lad}}$ is the number of vertical ladders), as discussed in the main text.
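That every $\pm 1$ ladder pattern indeed lives entirely on the $\mathbf{Q}^{(z)}$ line can be checked with a lattice Fourier transform; a small sketch (the cluster size and the random sign pattern are our choices):

```python
import numpy as np

# State Sz[n, m] = (-1)^n * xi[m] with arbitrary ladder signs xi[m] = +-1.
# Its lattice Fourier transform must be supported only at k.t1 = pi
# (frequency index L/2 along n), i.e. on the Q^(z) line, for EVERY pattern.
L = 8                                     # must be even
rng = np.random.default_rng(1)
xi = rng.choice([-1.0, 1.0], size=L)
Sz = np.array([[(-1) ** n * xi[m] for m in range(L)] for n in range(L)])

F = np.fft.fft2(Sz)                       # axes: (freq along n, freq along m)
support = np.argwhere(np.abs(F) > 1e-9)
assert np.all(support[:, 0] == L // 2)    # a whole line of wavevectors
```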
Similarly, we can construct another $2\times 2^{n_{\text{lad}}}$ states by using the lines $\mathbf{Q}^{(x)}$ or $\mathbf{Q}^{(y)}$ in momentum space, which correspond to decoupled ladders running along the diagonal directions of the lattice. Altogether, the Luttinger-Tisza minimization delivers the $3\times 2^{n_{\text{lad}}}$ discrete classical ground states discussed in the main text.
Appendix B Technical details about the ED study
B.1 The symmetry group of the Hamiltonian
The full symmetry group of the $K_{1}$-$K_{2}$ model, for half-integer spins, is $\mathcal{T}\times\widetilde{\mathsf{C}}_{6\text{v}}\times\widetilde{\mathsf{D}}_{2}$, which consists of:
1.
The translation group $\mathcal{T}$ generated by translations by the primitive vectors $\mathbf{t}_{1}$ and $\mathbf{t}_{2}$, see Fig. 1 of the main text.
2.
The double cover $\widetilde{\mathsf{C}}_{6\text{v}}$ of the group $\mathsf{C}_{6\text{v}}\subset\mathsf{SO}(3)$ in the combined spin and real space, where the six-fold axis goes through one of hexagon centers. This group is generated by two operations: the six-fold rotation $\mathsf{C}_{6}$ around $[111]$, whose spin part maps the components $(x,y,z)\mapsto(y,z,x)$, and the reflection plane $(1\bar{1}0)$ that passes through the $zz$-bonds of the model, whose spin part maps $(x,y,z)\mapsto(-y,-x,-z)$.
3.
The double cover $\widetilde{\mathsf{D}}_{2}$ of the point group $\mathsf{D}_{2}\subset\mathsf{SO}(3)$, which consists of three $\pi$-rotations $\mathsf{C}_{2x}$, $\mathsf{C}_{2y}$, and $\mathsf{C}_{2z}$ in spin space. The first maps the spin components $(x,y,z)\mapsto(x,-y,-z)$, etc.
B.2 Finite clusters
In our ED study we considered two clusters with periodic boundary conditions, one with 24 and another with 32 sites, with spanning vectors $(2\mathbf{t}_{1}\!-\!4\mathbf{t}_{2},4\mathbf{t}_{1}\!-\!2\mathbf{t}_{2})$ and $(2\mathbf{t}_{1}\!-\!4\mathbf{t}_{2},4\mathbf{t}_{1})$, respectively. These clusters are shown in Fig. 6 (a, c). The 24-site cluster has the full point group symmetry of the infinite lattice, i.e. $\widetilde{\mathsf{C}}_{6\text{v}}\times\widetilde{\mathsf{D}}_{2}$, whereas the 32-site cluster has the lower symmetry $\widetilde{\mathsf{C}}_{2\text{v}}\times\widetilde{\mathsf{D}}_{2}$, where $\widetilde{\mathsf{C}}_{2\text{v}}$ contains the reflection planes $(110)$ and $(1\bar{1}0)$.
Turning to translational symmetry, the allowed momenta for each cluster are shown in Fig. 6(b, d). Both clusters accommodate the three $\mathbf{M}$ points of the Brillouin zone (BZ) and are therefore commensurate with all magnetic states of the phase diagram. The difference between the two clusters is that the three $\mathbf{M}$ points are degenerate for $N\!=\!24$ but not for $N\!=\!32$.
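The statement that both clusters accommodate the three $\mathbf{M}$ points can be verified directly from the spanning vectors; a sketch in exact rational arithmetic, writing $\mathbf{k}=\alpha\mathbf{G}_{1}+\beta\mathbf{G}_{2}$ with $\mathbf{G}_{i}\cdot\mathbf{t}_{j}=2\pi\delta_{ij}$ and taking the three inequivalent $\mathbf{M}$ points at half-integer $(\alpha,\beta)$ (our parametrization):

```python
from fractions import Fraction as Fr

# A momentum k = alpha*G1 + beta*G2 is allowed on a periodic cluster iff
# k.T is a multiple of 2*pi for each spanning vector T = p*t1 + q*t2,
# i.e. iff p*alpha + q*beta is an integer.
def allowed(alpha, beta, spanning):
    return all((p * alpha + q * beta).denominator == 1 for p, q in spanning)

M_points = [(Fr(1, 2), Fr(0)), (Fr(0), Fr(1, 2)), (Fr(1, 2), Fr(1, 2))]
span24 = [(2, -4), (4, -2)]   # spanning vectors 2t1-4t2 and 4t1-2t2
span32 = [(2, -4), (4, 0)]    # spanning vectors 2t1-4t2 and 4t1

assert all(allowed(a, b, span24) for a, b in M_points)
assert all(allowed(a, b, span32) for a, b in M_points)
```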
In our ED study we have exploited: i) translations, ii) the $\mathsf{C}_{2}$ subgroup of the full $\mathsf{C}_{6\text{v}}$ point group (which is equivalent to the inversion $I$ in real space through the hexagon centers), and iii) the global spin inversion which maps the local $S_{z}$ basis states $|\!\uparrow\rangle\mapsto|\!\downarrow\rangle$. The latter is described by $\prod_{i}\sigma_{i}^{x}$, which is nothing else than the global $\pi$-rotation $\mathsf{C}_{2x}$ in spin space, divided by a phase factor $i^{N}$. Consequently, the energy eigenstates are labeled by: i) the momentum $\mathbf{k}$, ii) the parity under $\mathsf{C}_{2}$ (‘e’ for even, ‘o’ for odd), and iii) the parity under $S_{z}$ spin inversion (‘Sze’ for even, ‘Szo’ for odd).
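The phase relation between $\prod_{i}\sigma_{i}^{x}$ and the global rotation $\mathsf{C}_{2x}$ can be checked directly for small even $N$ (both clusters used here have even $N$); a minimal numpy sketch (the helper `kron_all` and the tested sizes are ours):

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)

def kron_all(mats):
    return reduce(np.kron, mats)

# Single-spin pi-rotation about x: exp(-i*pi*sx/2) = cos(pi/2) - i*sin(pi/2)*sx
rot = np.cos(np.pi / 2) * np.eye(2) - 1j * np.sin(np.pi / 2) * sx
assert np.allclose(rot, -1j * sx)

# Hence, for even N, C2x = (-i)^N prod_i sigma_i^x, i.e.
# prod_i sigma_i^x = C2x / i^N, as used to label the ED eigenstates.
for N in (2, 4):
    C2x = kron_all([rot] * N)
    prod_sx = kron_all([sx] * N)
    assert np.allclose(prod_sx, C2x / 1j ** N)
```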
B.3 Symmetry spectroscopy of classical phases
Here we derive the symmetry decomposition of the twelve magnetic states of regions I and II of the phase diagram. As explained in the main paper, the other two regions, III and IV, map to I and II, respectively, by the hidden symmetry $H_{yxz}$ combined with a simultaneous change of sign of $K_{1}$ and $K_{2}$.
B.3.1 Phase I
In the following, $|\text{str},\alpha^{\bm{\beta}}\rangle$ denotes the stripy state with FM ladders running along the direction of the $\alpha$-bonds, and the spins pointing along $\bm{\beta}$ in spin space. The twelve magnetic states of region I of the phase diagram can be split into four groups:
$$\mathcal{S}_{1}=\{|\text{str},x^{\mathbf{z}}\rangle,|\text{str},y^{\mathbf{x}}\rangle,|\text{str},z^{\mathbf{y}}\rangle\},\qquad\overline{\mathcal{S}}_{1}=\{|\text{str},x^{-\mathbf{z}}\rangle,|\text{str},y^{-\mathbf{x}}\rangle,|\text{str},z^{-\mathbf{y}}\rangle\},$$
$$\mathcal{S}_{2}=\{|\text{str},y^{\mathbf{z}}\rangle,|\text{str},z^{\mathbf{x}}\rangle,|\text{str},x^{\mathbf{y}}\rangle\},\qquad\overline{\mathcal{S}}_{2}=\{|\text{str},y^{-\mathbf{z}}\rangle,|\text{str},z^{-\mathbf{x}}\rangle,|\text{str},x^{-\mathbf{y}}\rangle\}.$$
Table 2 shows how these twelve states transform under some of the symmetry operations of the group. Let us first examine the translation group. We have, $\forall\bm{\beta}$:
$$\mathcal{T}_{\mathbf{t}_{1}}\cdot|\text{str},x^{\bm{\beta}}\rangle=|\text{str},x^{-\bm{\beta}}\rangle,\qquad\mathcal{T}_{\mathbf{t}_{2}}\cdot|\text{str},x^{\bm{\beta}}\rangle=|\text{str},x^{-\bm{\beta}}\rangle,$$
$$\mathcal{T}_{\mathbf{t}_{1}}\cdot|\text{str},y^{\bm{\beta}}\rangle=|\text{str},y^{-\bm{\beta}}\rangle,\qquad\mathcal{T}_{\mathbf{t}_{2}}\cdot|\text{str},y^{\bm{\beta}}\rangle=|\text{str},y^{\bm{\beta}}\rangle,$$
$$\mathcal{T}_{\mathbf{t}_{1}}\cdot|\text{str},z^{\bm{\beta}}\rangle=|\text{str},z^{\bm{\beta}}\rangle,\qquad\mathcal{T}_{\mathbf{t}_{2}}\cdot|\text{str},z^{\bm{\beta}}\rangle=|\text{str},z^{-\bm{\beta}}\rangle.$$
Thus $\frac{1}{\sqrt{2}}\left(|\text{str},x^{\bm{\beta}}\rangle+|\text{str},x^{-\bm{\beta}}\rangle\right)$ transforms as $\mathbf{k}=0$ ($\bm{\Gamma}$ point) and $\frac{1}{\sqrt{2}}\left(|\text{str},x^{\bm{\beta}}\rangle-|\text{str},x^{-\bm{\beta}}\rangle\right)$ transforms as $\mathbf{k}=\frac{1}{a}(-\frac{\pi}{\sqrt{3}},\pi)\equiv\mathbf{M}_{x}$.
Similarly, $\frac{1}{\sqrt{2}}\left(|\text{str},y^{\bm{\beta}}\rangle+|\text{str},y^{-\bm{\beta}}\rangle\right)$ transforms as $\mathbf{k}=0$ and $\frac{1}{\sqrt{2}}\left(|\text{str},y^{\bm{\beta}}\rangle-|\text{str},y^{-\bm{\beta}}\rangle\right)$ transforms as $\mathbf{k}=\frac{1}{a}(\frac{\pi}{\sqrt{3}},\pi)\equiv\mathbf{M}_{y}$, while $\frac{1}{\sqrt{2}}\left(|\text{str},z^{\bm{\beta}}\rangle+|\text{str},z^{-\bm{\beta}}\rangle\right)$ transforms as $\mathbf{k}=0$ and $\frac{1}{\sqrt{2}}\left(|\text{str},z^{\bm{\beta}}\rangle-|\text{str},z^{-\bm{\beta}}\rangle\right)$ transforms as $\mathbf{k}=\frac{1}{a}(\frac{2\pi}{\sqrt{3}},0)\equiv\mathbf{M}_{z}$. Altogether:
$$\{|\text{str},x^{\mathbf{z}}\rangle,|\text{str},x^{-\mathbf{z}}\rangle\}\to\bm{\Gamma}\oplus\mathbf{M}_{x},\qquad\{|\text{str},y^{\mathbf{x}}\rangle,|\text{str},y^{-\mathbf{x}}\rangle\}\to\bm{\Gamma}\oplus\mathbf{M}_{y},\qquad\{|\text{str},z^{\mathbf{y}}\rangle,|\text{str},z^{-\mathbf{y}}\rangle\}\to\bm{\Gamma}\oplus\mathbf{M}_{z},$$
$$\{|\text{str},x^{-\mathbf{y}}\rangle,|\text{str},x^{\mathbf{y}}\rangle\}\to\bm{\Gamma}\oplus\mathbf{M}_{x},\qquad\{|\text{str},y^{-\mathbf{z}}\rangle,|\text{str},y^{\mathbf{z}}\rangle\}\to\bm{\Gamma}\oplus\mathbf{M}_{y},\qquad\{|\text{str},z^{-\mathbf{x}}\rangle,|\text{str},z^{\mathbf{x}}\rangle\}\to\bm{\Gamma}\oplus\mathbf{M}_{z}.$$
Next, let us examine the parities with respect to the $\mathsf{C}_{2}$ rotation in real space and the $\mathsf{C}_{2x}$ rotation in spin space. It is easy to see that the first symmetry is not broken by any of the twelve states, while the second is broken when $\bm{\beta}=\pm\mathbf{y}$ or $\pm\mathbf{z}$. So all twelve states are even with respect to $\mathsf{C}_{2}$, the $\bm{\beta}=\pm\mathbf{x}$ states are even with respect to $\mathsf{C}_{2x}$, while the $\bm{\beta}=\pm\mathbf{y}$ and $\pm\mathbf{z}$ states must decompose into both even and odd parities with respect to $\mathsf{C}_{2x}$. Altogether:
$$\{|\text{str},x^{\mathbf{z}}\rangle,|\text{str},x^{-\mathbf{z}}\rangle\}\to\bm{\Gamma}.e.\text{Sz}e\oplus\mathbf{M}_{x}.e.\text{Sz}o,\qquad\{|\text{str},x^{-\mathbf{y}}\rangle,|\text{str},x^{\mathbf{y}}\rangle\}\to\bm{\Gamma}.e.\text{Sz}e\oplus\mathbf{M}_{x}.e.\text{Sz}o,$$
$$\{|\text{str},y^{\mathbf{x}}\rangle,|\text{str},y^{-\mathbf{x}}\rangle\}\to\bm{\Gamma}.e.\text{Sz}e\oplus\mathbf{M}_{y}.e.\text{Sz}e,\qquad\{|\text{str},y^{-\mathbf{z}}\rangle,|\text{str},y^{\mathbf{z}}\rangle\}\to\bm{\Gamma}.e.\text{Sz}e\oplus\mathbf{M}_{y}.e.\text{Sz}o,$$
$$\{|\text{str},z^{\mathbf{y}}\rangle,|\text{str},z^{-\mathbf{y}}\rangle\}\to\bm{\Gamma}.e.\text{Sz}e\oplus\mathbf{M}_{z}.e.\text{Sz}o,\qquad\{|\text{str},z^{-\mathbf{x}}\rangle,|\text{str},z^{\mathbf{x}}\rangle\}\to\bm{\Gamma}.e.\text{Sz}e\oplus\mathbf{M}_{z}.e.\text{Sz}e~.$$
(7)
‘Extra’ degeneracy at the $\mathbf{M}$ points for $N=24$.
The above quantum numbers for the $\mathbf{M}$ points are fully consistent with what we find in the low-energy spectra of Fig. 3 (a) of the main paper. For the symmetric, $N\!=\!24$ cluster, the three $\mathbf{M}$ points are degenerate due to the six-fold symmetry. However, we see that the two sets of $\mathbf{M}$ points are also degenerate with respect to each other, i.e. we have a six-fold degeneracy. This extra degeneracy comes from the $\widetilde{\mathsf{D}}_{2}$ symmetry in spin space. To see this, let us relabel the spin-inversion part of Eq. (7) using the actual IRs of the group $\widetilde{\mathsf{D}}_{2}$ (see Table 1, right), instead of the parity with respect to $\mathsf{C}_{2x}$ (which contains less information about the state):
$$\{|\text{str},x^{\mathbf{z}}\rangle,|\text{str},x^{-\mathbf{z}}\rangle\}\to\bm{\Gamma}.e.\text{A}\oplus\mathbf{M}_{x}.e.\text{B}_{1},\qquad\{|\text{str},x^{-\mathbf{y}}\rangle,|\text{str},x^{\mathbf{y}}\rangle\}\to\bm{\Gamma}.e.\text{A}\oplus\mathbf{M}_{x}.e.\text{B}_{2},$$
$$\{|\text{str},y^{\mathbf{x}}\rangle,|\text{str},y^{-\mathbf{x}}\rangle\}\to\bm{\Gamma}.e.\text{A}\oplus\mathbf{M}_{y}.e.\text{B}_{3},\qquad\{|\text{str},y^{-\mathbf{z}}\rangle,|\text{str},y^{\mathbf{z}}\rangle\}\to\bm{\Gamma}.e.\text{A}\oplus\mathbf{M}_{y}.e.\text{B}_{1},$$
$$\{|\text{str},z^{\mathbf{y}}\rangle,|\text{str},z^{-\mathbf{y}}\rangle\}\to\bm{\Gamma}.e.\text{A}\oplus\mathbf{M}_{z}.e.\text{B}_{2},\qquad\{|\text{str},z^{-\mathbf{x}}\rangle,|\text{str},z^{\mathbf{x}}\rangle\}\to\bm{\Gamma}.e.\text{A}\oplus\mathbf{M}_{z}.e.\text{B}_{3}~.$$
(8)
We see that the two states belonging to a given $\mathbf{M}$ point transform differently under $\widetilde{\mathsf{D}}_{2}$, so the Hamiltonian does not couple the two states. Yet, these states are mapped to each other by one of the reflection planes of $\widetilde{\mathsf{C}}_{6\text{v}}$, so they must be degenerate, leading to an overall six-fold degeneracy at the $\mathbf{M}$ points.
Degeneracies at the $\bm{\Gamma}$ point for $N=24$. The little group of the $\bm{\Gamma}$ point is the full point group $\widetilde{\mathsf{C}}_{6\text{v}}\times\widetilde{\mathsf{D}}_{2}$. However, all of the above six states that belong to the $\bm{\Gamma}$ point belong to the identity IR of $\widetilde{\mathsf{D}}_{2}$, so it is enough to decompose them with respect to the $\widetilde{\mathsf{C}}_{6\text{v}}$ part of the little group. To this end we use the well known formula from group theory Tinkham (2003)
$$m_{\alpha}=\frac{1}{|\widetilde{\mathsf{C}}_{6\text{v}}|}\sum_{{\bf g}\in\widetilde{\mathsf{C}}_{6\text{v}}}\chi^{\alpha}({\bf g})X({\bf g})^{\ast}~,$$
(9)
which gives the number of times $m_{\alpha}$ that the $\alpha$-th IR of $\widetilde{\mathsf{C}}_{6\text{v}}$ appears in the decomposition of the $6\times 6$ representation formed by the six states belonging to the $\bm{\Gamma}$ point. Here $X({\bf g})$ gives the character of this representation, while $\chi^{\alpha}({\bf g})$ is the character of the $\alpha$-th IR of $\widetilde{\mathsf{C}}_{6\text{v}}$, see Table 1 (left). From Table 2 it follows that $X({\bf g})$ is finite only for the elements $E$, $\widetilde{E}$, $\mathsf{C}_{2}$, and $\widetilde{\mathsf{C}}_{2}$, and using the characters of Table 1 (left) we find that the only finite $m_{\alpha}$ are the following: $m_{A_{1}}=m_{A_{2}}=1$, $m_{E_{2}}=2$, namely
$$6\bm{\Gamma}\to\text{A}_{1}\oplus\text{A}_{2}\oplus 2\text{E}_{2}~{}.$$
(10)
i.e. we expect two singlets and two doublets. All states are found in the low-energy spectra shown in Fig. 3 (a) of the main paper, where the degeneracy of the E${}_{2}$ levels has been confirmed numerically.
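As a cross-check of Eq. (10), the reduction formula (9) can be evaluated directly. The sketch below uses the ordinary (single-valued) $\mathsf{C}_{6\text{v}}$ character table; this suffices here because $X({\bf g})$ is nonzero only for $E$ and $\mathsf{C}_{2}$, and we assume the double-group partners $\widetilde{E}$, $\widetilde{\mathsf{C}}_{2}$ contribute identically for single-valued IRs:

```python
# Reduction of the 6x6 representation at the Gamma point onto the
# irreps of C6v, via m_alpha = (1/|G|) sum_g chi_alpha(g) X(g)*.
# Classes and characters are the standard C6v table; X follows the text:
# nonzero only on E and C2, where all six states are invariant.

classes = ['E', '2C6', '2C3', 'C2', '3sv', '3sd']
sizes   = [1, 2, 2, 1, 3, 3]                 # class sizes, |G| = 12
chi = {                                      # standard C6v character table
    'A1': [1,  1,  1,  1,  1,  1],
    'A2': [1,  1,  1,  1, -1, -1],
    'B1': [1, -1,  1, -1,  1, -1],
    'B2': [1, -1,  1, -1, -1,  1],
    'E1': [2,  1, -1, -2,  0,  0],
    'E2': [2, -1, -1,  2,  0,  0],
}
X = [6, 0, 0, 6, 0, 0]           # character of the six Gamma-point states

order = sum(sizes)
m = {a: sum(n * c * x for n, c, x in zip(sizes, cc, X)) // order
     for a, cc in chi.items()}

# 6*Gamma -> A1 + A2 + 2 E2, matching Eq. (10)
assert m == {'A1': 1, 'A2': 1, 'B1': 0, 'B2': 0, 'E1': 0, 'E2': 2}
```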
B.3.2 Phase II
Here we denote by $|\text{zig},\alpha\alpha^{\prime\bm{\beta}}\rangle$ the zigzag state with FM lines formed by consecutive $\alpha$ and $\alpha^{\prime}$ type of bonds, and the spins pointing along $\bm{\beta}$ in spin space. The twelve magnetic states of region II can be split into four groups:
$$\mathcal{S}_{3}=\{|\text{zig},yz^{\mathbf{z}}\rangle,|\text{zig},zx^{\mathbf{x}}\rangle,|\text{zig},xy^{\mathbf{y}}\rangle\},\qquad\overline{\mathcal{S}}_{3}=\{|\text{zig},yz^{-\mathbf{z}}\rangle,|\text{zig},zx^{-\mathbf{x}}\rangle,|\text{zig},xy^{-\mathbf{y}}\rangle\},$$
$$\mathcal{S}_{4}=\{|\text{zig},zx^{\mathbf{z}}\rangle,|\text{zig},xy^{\mathbf{x}}\rangle,|\text{zig},yz^{\mathbf{y}}\rangle\},\qquad\overline{\mathcal{S}}_{4}=\{|\text{zig},zx^{-\mathbf{z}}\rangle,|\text{zig},xy^{-\mathbf{x}}\rangle,|\text{zig},yz^{-\mathbf{y}}\rangle\}.$$
Under $\mathsf{T}$ and $\mathsf{C}_{2x}$ in spin space, these states transform analogously to the twelve states of region I, see Eq. (7). The difference is that the present states break the $\mathsf{C}_{2}$ rotation around the hexagon centers, and therefore the decomposition will contain both even and odd parities with respect to $\mathsf{C}_{2}$. Specifically,
$$\{|\text{zig},yz^{\mathbf{z}}\rangle,|\text{zig},yz^{-\mathbf{z}}\rangle\}\to\bm{\Gamma}.e.\text{Sz}e\oplus\mathbf{M}_{x}.o.\text{Sz}o,\qquad\{|\text{zig},yz^{-\mathbf{y}}\rangle,|\text{zig},yz^{\mathbf{y}}\rangle\}\to\bm{\Gamma}.e.\text{Sz}e\oplus\mathbf{M}_{x}.o.\text{Sz}o,$$
$$\{|\text{zig},zx^{\mathbf{x}}\rangle,|\text{zig},zx^{-\mathbf{x}}\rangle\}\to\bm{\Gamma}.e.\text{Sz}e\oplus\mathbf{M}_{y}.o.\text{Sz}e,\qquad\{|\text{zig},zx^{-\mathbf{z}}\rangle,|\text{zig},zx^{\mathbf{z}}\rangle\}\to\bm{\Gamma}.e.\text{Sz}e\oplus\mathbf{M}_{y}.o.\text{Sz}o,$$
$$\{|\text{zig},xy^{\mathbf{y}}\rangle,|\text{zig},xy^{-\mathbf{y}}\rangle\}\to\bm{\Gamma}.e.\text{Sz}e\oplus\mathbf{M}_{z}.o.\text{Sz}o,\qquad\{|\text{zig},xy^{-\mathbf{x}}\rangle,|\text{zig},xy^{\mathbf{x}}\rangle\}\to\bm{\Gamma}.e.\text{Sz}e\oplus\mathbf{M}_{z}.o.\text{Sz}e~.$$
(11)
In analogy with region I, for the symmetric 24-site cluster, the six states belonging to the $\mathbf{M}$ points are degenerate due to the additional $\widetilde{\mathsf{D}}_{2}$ symmetry, and the six states belonging to the $\bm{\Gamma}$ point decompose as in (10), namely $6\bm{\Gamma}\to\text{A}_{1}\oplus\text{A}_{2}\oplus 2\text{E}_{2}$. Again, all states are found in the low-energy spectra shown in Fig. 3 (a) of the main paper.
B.4 Spin-spin correlation profiles
Figure 7 shows the GS expectation values of the spin-spin correlations $\langle S_{i}^{\alpha}S_{j}^{\alpha}\rangle$, in the three channels $\alpha\!=\!x$, $y$ and $z$, as calculated for the $N\!=\!32$ cluster, inside the first QSL phase and slightly outside (magnetic phase I). The results show clearly the ultra short-range nature of the correlations inside the QSL region, and the long-range nature outside. In addition, the data demonstrate the anisotropic character of the correlations, whereby different spin components $\alpha$ are correlated along different directions of the lattice.
B.5 ‘Symmetrized’ spin structure factor and spin length
Here we discuss the ‘symmetrized’ spin structure factor $\widetilde{S}(\mathbf{Q})$ and explain the overall normalization factor that we use to extract the spin length. As we discuss in the main text, NN ladders do not couple, due to the symmetry $H_{xyz}$, and so the quantum ground state of a finite cluster contains both relative orientations of the two sets of ladders $L_{1}$ and $L_{2}$ with equal amplitude. As a result, the spin-spin correlations between two spins belonging to $L_{1}$ and $L_{2}$, respectively, vanish for any finite cluster. If we wish to extract the local spin lengths from the ground-state spin-spin correlation data, we can calculate the ‘symmetrized’ spin structure factor for one of the two subsets of ladders only, say $L_{1}$:
$$\mathcal{S}_{1}(\mathbf{Q})=\frac{1}{N_{1}^{2}}\sum_{\alpha}\sum_{\mathbf{r},\mathbf{r}^{\prime}\in L_{1}}\langle S^{\alpha}_{\mathbf{r}}S^{\alpha}_{\mathbf{r}^{\prime}}\rangle e^{i\mathbf{Q}^{(\alpha)}\cdot(\mathbf{r}-\mathbf{r}^{\prime})},$$
(12)
where $N_{1}=N/2$ is the number of sites in the sublattice $L_{1}$, and $\mathbf{Q}^{(\alpha)}$ is the ordering wavevector corresponding to the spin component $\alpha\in\{x,y,z\}$.
By translation symmetry,
$$\langle S^{\alpha}_{\mathbf{r}}S^{\alpha}_{\mathbf{r}^{\prime}}\rangle=\langle S^{\alpha}_{\mathbf{r}+\bm{\delta}}S^{\alpha}_{\mathbf{r}^{\prime}+\bm{\delta}}\rangle\;\Rightarrow\;\mathcal{S}_{1}(\mathbf{Q})=\frac{1}{N_{1}}\sum_{\alpha}\sum_{\mathbf{r}\in L_{1}}\langle S^{\alpha}_{0}S^{\alpha}_{\mathbf{r}}\rangle e^{i\mathbf{Q}^{(\alpha)}\cdot\mathbf{r}},$$
(13)
where we have chosen a reference site $\mathbf{r}^{\prime}=0$. The local spin length $m$ is then given by $m^{2}=\frac{2}{N}\mathcal{S}_{1}(\mathbf{Q})$.
By contrast, the corresponding ‘symmetrized’ spin structure factor of the full lattice $\mathcal{S}(\mathbf{Q})$, defined by
$$\mathcal{S}(\mathbf{Q})=\frac{1}{N^{2}}\sum_{\alpha}\sum_{\mathbf{r},\mathbf{r}^{\prime}\in L_{1}\cup L_{2}}\langle S^{\alpha}_{\mathbf{r}}S^{\alpha}_{\mathbf{r}^{\prime}}\rangle e^{i\mathbf{Q}^{(\alpha)}\cdot(\mathbf{r}-\mathbf{r}^{\prime})},$$
(14)
would give in the present case
$$\mathcal{S}(\mathbf{Q})=\frac{1}{2}\mathcal{S}_{1}(\mathbf{Q}),$$
(15)
and the corresponding local spin lengths would be off by a factor of $\sqrt{2}$.
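The factor of $1/2$ in Eq. (15), and hence the $\sqrt{2}$ mismatch in the spin length, can be verified with synthetic correlation data in which the $L_{1}$-$L_{2}$ cross-correlations vanish. A minimal one-dimensional sketch (the correlation data and wavevector are made up for illustration, and a single spin channel is used):

```python
import numpy as np

rng = np.random.default_rng(0)
N1 = 8                    # sites per ladder subset; N = 2*N1 in total
N = 2 * N1

# Synthetic translation-invariant correlations C(r) within L1;
# L2 carries an identical block by symmetry, and the L1-L2 cross
# correlations are zero, as dictated by the H_xyz argument above.
C = rng.normal(size=N1)
Q = 2 * np.pi * 3 / N1    # some commensurate wavevector (illustrative)

r = np.arange(N1)
phase = np.exp(1j * Q * (r[:, None] - r[None, :]))
corr_block = C[(r[:, None] - r[None, :]) % N1]

S1 = (corr_block * phase).sum() / N1**2          # Eq. (12): L1 only
S_full = 2 * (corr_block * phase).sum() / N**2   # Eq. (14): two blocks, no cross terms

assert np.isclose(S_full, 0.5 * S1)              # the factor of Eq. (15)
```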
Appendix C Pseudofermion functional renormalization group (PFFRG) approach
In addition to ED, we studied the $K_{1}$-$K_{2}$ honeycomb model using the pseudofermion functional renormalization group (PFFRG) approach. Rewriting the spin operators in terms of Abrikosov auxiliary fermions, the resulting fermionic model can be treated efficiently with a one-loop functional renormalization group procedure. This technique calculates diagrammatic contributions to the spin-spin correlation function to infinite order in the exchange couplings, including terms in different interaction channels: the inclusion of direct particle-hole terms ensures the correct treatment of the large-spin limit $S\rightarrow\infty$, while the crossed particle-hole and particle-particle terms lead to exact results in the large-$N$ limit. This allows us to study the competition between magnetic ordering tendencies and quantum fluctuations in an unbiased way. For details we refer the reader to Ref. [Reuther and Wölfle, 2010].
The PFFRG method calculates the static spin-structure factor as given by
$$\chi^{\alpha\beta}(\mathbf{k})=\int_{0}^{\infty}d\tau\,\langle T_{\tau}\{S^{\alpha}(-\mathbf{k},\tau)S^{\beta}(\mathbf{k},0)\}\rangle\,,$$
(16)
with
$$S^{\alpha}(\mathbf{k},\tau)=\frac{1}{\sqrt{N}}\sum_{i}e^{-i\mathbf{k}\cdot\mathbf{r}_{i}}e^{H\tau}S_{i}^{\alpha}e^{-H\tau}\,,$$
(17)
where $\tau$ denotes the imaginary time and $T_{\tau}$ is the corresponding time-ordering operator. Since it can treat large system sizes (calculations for the $K_{1}$-$K_{2}$ model are performed for a spin cluster with 265 sites), the PFFRG yields results close to the thermodynamic limit. Fig. 8 shows three representative plots of the momentum-resolved spin-structure factor $\chi^{zz}(\mathbf{k})$ in the Kitaev spin-liquid phase in the vicinity of $\psi=0$. While in the exact Kitaev limit $\psi=0$ the PFFRG reproduces the well-known nearest-neighbor correlations, as indicated by a single-harmonic profile of the spin-structure factor, deviations from $\psi=0$ lead to longer-range correlations and a richer spin-structure factor.
Appendix D Strong-coupling expansion
Here we provide some technical details on the derivation of the effective model around the strong coupling limit of $K_{1}^{x(y)}\!=\!K_{2}^{x(y)}\!=\!0$. In this limit we have $n_{\text{lad}}$ decoupled vertical ladders (which are the ladders made of the $zz$-bonds), leading to a sub-extensive ground state degeneracy. The ordering pattern within each individual vertical ladder is fixed (up to a global sign) by the signs of $K_{1}^{z}$ and $K_{2}^{z}$.
The GS degeneracy is lifted by the transverse perturbations $K_{1}^{x(y)}$ and $K_{2}^{x(y)}$, which give rise to effective couplings between the ladders (or more accurately between NNN ladders, as discussed in the main paper). These couplings can be found by degenerate perturbation theory. Let us denote by $H_{0}$ the sum of all $K_{1}^{z}$ and $K_{2}^{z}$ interactions and by $V$ the sum of all remaining terms of the model.
In the following we define the strong coupling parameter $r$ to be the ratio between $K_{1,2}^{x(y)}$ and $K_{1,2}^{z}$, as in the main text.
It is easy to see that the degeneracy is first lifted at fourth order in $V$, and the corresponding effective Hamiltonian is described by the standard expression
$$\mathcal{H}_{\text{eff}}=PVRVRVRVP\,,$$
(18)
where $P$ is the projector onto the ground-state manifold at $V\!=\!0$, $R\!=\!\frac{1-P}{E_{0}-H_{0}}$ is the resolvent, $E_{0}\!=\!(-|K_{1}|/2-|K_{2}|)N$ is the ground-state energy at $V\!=\!0$, and $N$ is the number of sites. By expanding the different terms of $V$ in (18) we get three types of four-step loop processes, which involve: i) only the NN perturbations $K_{1}^{x(y)}$ (Sec. D.1), ii) only the NNN perturbations $K_{2}^{x(y)}$ (Sec. D.2), and iii) both $K_{1}^{x(y)}$ and $K_{2}^{x(y)}$ perturbations (Sec. D.3).
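Eq. (18) is straightforward to evaluate numerically once $P$ and $R$ are known. The following generic sketch (a random Hermitian perturbation on a toy Hilbert space, not the actual spin model) illustrates the construction and two basic consistency properties of the effective Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(1)
d, g = 10, 3                    # toy Hilbert-space dimension, GS degeneracy

# H0: diagonal, with a g-fold degenerate ground level E0
E0 = -2.0
energies = np.concatenate([np.full(g, E0), rng.uniform(1.0, 3.0, d - g)])
H0 = np.diag(energies)

# P: projector onto the ground manifold; R = (1 - P)/(E0 - H0): resolvent,
# defined to vanish on the ground manifold itself.
P = np.diag((energies == E0).astype(float))
rdiag = np.where(energies == E0, 0.0,
                 np.divide(1.0, E0 - energies,
                           out=np.zeros_like(energies),
                           where=(energies != E0)))
R = np.diag(rdiag)

V = rng.normal(size=(d, d))
V = (V + V.T) / 2               # Hermitian perturbation

H_eff = P @ V @ R @ V @ R @ V @ R @ V @ P    # Eq. (18), fourth order

assert np.allclose(H_eff, H_eff.T)           # H_eff is Hermitian
assert np.allclose(H_eff, P @ H_eff @ P)     # and acts within the GS manifold
```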
D.1 Effective terms arising from $K_{1}^{x(y)}$ only (Toric code terms)
The $K_{1}^{x(y)}$ perturbations give rise to intra-ladder, six-body terms of the form $J_{W}\hat{W}_{p}$, where $\hat{W}_{p}$ is Kitaev’s Kitaev (2006) flux operator:
$$\hat{W}_{p}=2^{6}S_{1}^{z}S_{2}^{y}S_{3}^{x}S_{4}^{z}S_{5}^{y}S_{6}^{x}~{},$$
(19)
where $1$-$6$ label clockwise the six sites of the hexagon plaquette $p$, as shown in Fig. 9 (A). To find $J_{W}$ in fourth order in $r$, it suffices to consider one hexagon only. Let us denote the local configuration of this hexagon in any of the ground states at $r\!=\!0$ by $|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle$, with the spin projections $S_{j}^{z}\!=\!\frac{1}{2}\sigma_{j}$, and $\sigma_{1}\sigma_{6}\!=\!\sigma_{3}\sigma_{4}\!=\!-\text{sgn}(K_{1}^{z})$. In this case, the perturbation $V\!=\!A$ can be written as (see Fig. 9):
$$A=A_{a}+A_{b}+A_{c}+A_{d}=K_{1}^{x}S_{1}^{x}S_{6}^{x}+K_{1}^{y}S_{1}^{y}S_{2}^{y}+K_{1}^{x}S_{3}^{x}S_{4}^{x}+K_{1}^{y}S_{4}^{y}S_{5}^{y}\,,$$
(20)
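As a quick numerical sanity check, the flux operator of Eq. (19) is the Pauli string $\sigma_{1}^{z}\sigma_{2}^{y}\sigma_{3}^{x}\sigma_{4}^{z}\sigma_{5}^{y}\sigma_{6}^{x}$ on the six sites of the hexagon; a short sketch confirms that it squares to the identity, so its eigenvalues are $\pm 1$:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    """Tensor product of single-site operators over the hexagon sites 1..6."""
    return reduce(np.kron, ops)

# W_p = 2^6 S1^z S2^y S3^x S4^z S5^y S6^x = sz sy sx sz sy sx, Eq. (19)
W = kron_all([sz, sy, sx, sz, sy, sx])

assert np.allclose(W @ W, np.eye(2**6))   # W_p^2 = 1: eigenvalues are +/-1
assert np.allclose(W, W.conj().T)         # W_p is Hermitian
assert np.isclose(np.trace(W), 0)         # equal-dimension +/-1 sectors
```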
and Eq. (18) contains 24 terms in total, which have the form
$$\mathcal{H}_{\text{eff}}^{(A,dcba)}=PA_{d}RA_{c}RA_{b}RA_{a}P,\qquad\text{etc.}$$
In the following we define $\mu\!=\!(K_{1}^{x}K_{1}^{y})^{2}$ and use the relations $S^{x}|\sigma\rangle\!=\!\frac{1}{2}|\!-\sigma\rangle$ and $S^{y}|\sigma\rangle\!=\!\frac{i\sigma}{2}|\!-\sigma\rangle$. The energy excitations of various intermediate states are
$$\Delta_{12}=\Delta_{16}=\Delta_{34}=\Delta_{45}=-|K_{1}^{z}|-2|K_{2}^{z}|,$$
$$\Delta_{26}=\Delta_{35}=-|K_{1}^{z}|-|K_{2}^{z}|,$$
$$\Delta_{1635}=\Delta_{1235}=\Delta_{2634}=\Delta_{2645}=-|K_{1}^{z}|-3|K_{2}^{z}|,$$
$$\Delta_{1634}=\Delta_{1245}=-2|K_{1}^{z}|-4|K_{2}^{z}|,$$
$$\Delta_{1234}=\Delta_{1645}=-|K_{1}^{z}|-4|K_{2}^{z}|.$$
Let us first consider the terms of the type
$$\mathcal{H}_{\text{eff}}^{(A,dcba)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle=\frac{\mu\,\sigma_{1}\sigma_{2}\sigma_{4}\sigma_{5}}{4^{4}\Delta_{45}\Delta_{35}\Delta_{1235}}|\sigma_{1},-\sigma_{2},-\sigma_{3},\sigma_{4},-\sigma_{5},-\sigma_{6}\rangle\,.$$
The final state is not the same as the initial one, but belongs to the unperturbed manifold of states, so this is a valid process. The operator that does the job is:
$$\frac{\mu}{4\Delta_{45}\Delta_{35}\Delta_{1235}}S_{1}^{z}S_{2}^{y}S_{3}^{x}S_{4}^{z}S_{5}^{y}S_{6}^{x}=\frac{\mu}{2^{8}\Delta_{45}\Delta_{35}\Delta_{1235}}\hat{W}_{p}\,.$$
This result can also be found directly by writing
$$\mathcal{H}_{\text{eff}}^{(A,dcba)}=\mathcal{H}_{\text{eff}}^{(A,abcd)}=\mu\,PS_{1}^{x}S_{6}^{x}RS_{1}^{y}S_{2}^{y}RS_{3}^{x}S_{4}^{x}RS_{4}^{y}S_{5}^{y}P\to\frac{\mu}{D_{1}}\underbrace{(S_{1}^{x}S_{1}^{y})}_{\frac{i}{2}S_{1}^{z}}S_{2}^{y}S_{3}^{x}\underbrace{(S_{4}^{x}S_{4}^{y})}_{\frac{i}{2}S_{4}^{z}}S_{5}^{y}S_{6}^{x}=-\frac{\mu}{2^{8}D_{1}}\hat{W}_{p}~,$$
with $D_{1}\!=\!\Delta_{45}\Delta_{35}\Delta_{1235}$.
Similarly
$$\mathcal{H}_{\text{eff}}^{(A,cdba)}=\mathcal{H}_{\text{eff}}^{(A,abdc)}=\mathcal{H}_{\text{eff}}^{(A,dcab)}=\mathcal{H}_{\text{eff}}^{(A,bacd)}=-\mathcal{H}_{\text{eff}}^{(A,cdab)}=-\mathcal{H}_{\text{eff}}^{(A,badc)}=\frac{\mu}{2^{8}D_{1}}\hat{W}_{p}~,$$
where we used $\Delta_{34}\Delta_{35}\Delta_{1235}=\Delta_{45}\Delta_{35}\Delta_{1235}=\Delta_{45}\Delta_{35}\Delta_{1635}=\Delta_{34}\Delta_{35}\Delta_{1635}=D_{1}$. So the eight processes $\{abcd,bacd,abdc,badc\}$ and $\{dcba,dcab,cdba,cdab\}$ cancel each other out.
Next come the processes:
$$\mathcal{H}_{\text{eff}}^{(A,acbd)}=\mathcal{H}_{\text{eff}}^{(A,dbca)}=\mu\,PS_{4}^{y}S_{5}^{y}RS_{1}^{y}S_{2}^{y}RS_{3}^{x}S_{4}^{x}RS_{1}^{x}S_{6}^{x}P\to\frac{\mu}{D_{2}}\underbrace{(S_{1}^{y}S_{1}^{x})}_{-\frac{i}{2}S_{1}^{z}}S_{2}^{y}S_{3}^{x}\underbrace{(S_{4}^{y}S_{4}^{x})}_{-\frac{i}{2}S_{4}^{z}}S_{5}^{y}S_{6}^{x}=-\frac{\mu}{2^{8}D_{2}}\hat{W}_{p}~,$$
with $D_{2}\!=\!\Delta_{16}\Delta_{1634}\Delta_{2634}$. Similarly,
$$\mathcal{H}_{\text{eff}}^{(A,cabd)}=\mathcal{H}_{\text{eff}}^{(A,dbac)}=\mathcal{H}_{\text{eff}}^{(A,acdb)}=\mathcal{H}_{\text{eff}}^{(A,bdca)}=\mathcal{H}_{\text{eff}}^{(A,cadb)}=\mathcal{H}_{\text{eff}}^{(A,bdac)}=-\frac{\mu}{2^{8}D_{2}}\hat{W}_{p},$$
where we used $\Delta_{34}\Delta_{1634}\Delta_{2634}=\Delta_{16}\Delta_{1634}\Delta_{1635}=\Delta_{34}\Delta_{1634}\Delta_{1635}=D_{2}$. These eight processes $\{cabd,acbd,cadb,acdb\}$ and $\{dbac,dbca,bdac,bdca\}$ give the same contribution and, thus, do not cancel out.
Finally, there are the processes
$$\mathcal{H}_{\text{eff}}^{(A,cbad)}=\mathcal{H}_{\text{eff}}^{(A,dabc)}=\mu\,PS_{4}^{y}S_{5}^{y}RS_{1}^{x}S_{6}^{x}RS_{1}^{y}S_{2}^{y}RS_{3}^{x}S_{4}^{x}P\to\frac{\mu}{D_{3}}\underbrace{(S_{1}^{x}S_{1}^{y})}_{\frac{i}{2}S_{1}^{z}}S_{2}^{y}S_{3}^{x}\underbrace{(S_{4}^{y}S_{4}^{x})}_{-\frac{i}{2}S_{4}^{z}}S_{5}^{y}S_{6}^{x}=+\frac{\mu}{2^{8}D_{3}}\hat{W}_{p}~,$$
with $D_{3}\!=\!\Delta_{34}\Delta_{1234}\Delta_{2634}$. Similarly
$$\mathcal{H}_{\text{eff}}^{(A,bcad)}=\mathcal{H}_{\text{eff}}^{(A,dacb)}=\mathcal{H}_{\text{eff}}^{(A,cbda)}=\mathcal{H}_{\text{eff}}^{(A,adbc)}=\mathcal{H}_{\text{eff}}^{(A,bcda)}=\mathcal{H}_{\text{eff}}^{(A,adcb)}=+\frac{\mu}{2^{8}D_{3}}\hat{W}_{p}~,$$
where we used $\Delta_{12}\Delta_{1234}\Delta_{2634}=\Delta_{34}\Delta_{1234}\Delta_{1235}=\Delta_{12}\Delta_{1234}\Delta_{1235}=D_{3}$. So the eight processes $\{cbad,bcad,cbda,bcda\}$ and $\{dabc,dacb,adbc,adcb\}$ also do not cancel out.
Altogether:
$$\mathcal{H}_{\text{eff}}^{(A)}=8\mathcal{H}_{\text{eff}}^{(A,acbd)}+8\mathcal{H}_{\text{eff}}^{(A,cbad)}=\frac{\mu}{2^{5}}\Big(\frac{1}{D_{3}}-\frac{1}{D_{2}}\Big)\hat{W}_{p}=\frac{\mu(\Delta_{1634}-\Delta_{1234})}{2^{5}\Delta_{34}\Delta_{1634}\Delta_{1635}\Delta_{1234}}\hat{W}_{p}~.$$
We have $\Delta_{1634}-\Delta_{1234}\!=\!-|K_{1}^{z}|$, and therefore
$$\boxed{\mathcal{H}_{\text{eff}}^{(A)}=J_{W}\hat{W}_{p}},\qquad\boxed{J_{W}=\frac{-\mu|K_{1}^{z}|}{2^{6}(|K_{1}^{z}|+2|K_{2}^{z}|)^{2}(|K_{1}^{z}|+3|K_{2}^{z}|)(|K_{1}^{z}|+4|K_{2}^{z}|)}}~.$$
(21)
For $K_{2}^{z}\!=\!0$ we get $J_{W}\!=\!-\frac{(K_{1}^{x}K_{1}^{y})^{2}}{2^{6}|K_{1}^{z}|^{3}}$, which agrees with the result obtained by Kitaev Kitaev (2006).
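The boxed expression (21) and its Kitaev limit can be checked with a few lines of code (the function name and sample couplings are ours, for illustration):

```python
def J_W(K1x, K1y, K1z, K2z):
    """Eq. (21): flux-term coupling from the K1^{x,y} perturbations."""
    mu = (K1x * K1y) ** 2
    a1, a2 = abs(K1z), abs(K2z)
    return -mu * a1 / (2**6 * (a1 + 2*a2)**2 * (a1 + 3*a2) * (a1 + 4*a2))

# Kitaev limit K2^z = 0: J_W = -(K1^x K1^y)^2 / (2^6 |K1^z|^3)
K1x, K1y, K1z = 0.3, 0.4, -1.0
kitaev = -(K1x * K1y) ** 2 / (2**6 * abs(K1z) ** 3)
assert abs(J_W(K1x, K1y, K1z, 0.0) - kitaev) < 1e-15
```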
D.2 Effective terms arising from $K_{2}^{x(y)}$ only.
Consider three consecutive ladders in the honeycomb lattice. We will show that the $K_{2}^{x(y)}$ terms give rise to an effective NNN inter-ladder coupling of the form $J_{1}S_{1}^{z}S_{7}^{z}$, see Fig. 9 (B). In this case, the perturbation $V\!=\!B$ is given by (see Fig. 9):
$$B=B_{a}+B_{b}+B_{c}+B_{d}=K_{2}^{x}S_{1}^{x}S_{3}^{x}+K_{2}^{y}S_{3}^{y}S_{7}^{y}+K_{2}^{x}S_{5}^{x}S_{7}^{x}+K_{2}^{y}S_{1}^{y}S_{5}^{y}~.$$
(22)
Again, Eq. (18) gives 24 relevant contributions.
In the following we define $\lambda\!=\!(K_{2}^{x}K_{2}^{y})^{2}$, and use the relation $\sigma_{3}\sigma_{5}\!=\!-\text{sgn}(K_{2}^{z})$. We also introduce the excitation energies of various intermediate virtual states:
$$\Delta_{13}=\Delta_{17}=\Delta_{15}=\Delta_{37}=\Delta_{57}=-|K_{1}^{z}|-2|K_{2}^{z}|,\qquad\Delta_{35}=-|K_{1}^{z}|-|K_{2}^{z}|,\qquad\Delta_{1357}=-2|K_{1}^{z}|-3|K_{2}^{z}|~.$$
We find:
$$\mathcal{H}_{\text{eff}}^{(B,abcd)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle=+\frac{\lambda\sigma_{1}\sigma_{7}}{4^{4}\Delta_{13}\Delta_{17}\Delta_{15}}\,\text{sgn}(K_{2}^{z})\,|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle~,$$
$$\mathcal{H}_{\text{eff}}^{(B,abdc)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle=-\frac{\lambda\sigma_{1}\sigma_{7}}{4^{4}\Delta_{13}\Delta_{17}\Delta_{57}}\,\text{sgn}(K_{2}^{z})\,|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle~,$$
$$\mathcal{H}_{\text{eff}}^{(B,bacd)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle=-\frac{\lambda\sigma_{1}\sigma_{7}}{4^{4}\Delta_{37}\Delta_{17}\Delta_{15}}\,\text{sgn}(K_{2}^{z})\,|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle~,$$
$$\mathcal{H}_{\text{eff}}^{(B,badc)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle=+\frac{\lambda\sigma_{1}\sigma_{7}}{4^{4}\Delta_{37}\Delta_{17}\Delta_{57}}\,\text{sgn}(K_{2}^{z})\,|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle~.$$
So the eight terms coming from $\{abcd,abdc,bacd,badc\}$ cancel out, and the same is true for their inverse processes $\{dcba,cdba,dcab,cdab\}$. Next come the processes:
$$H_{\text{eff}}^{(B,cbda)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle=-\frac{\lambda\sigma_{1}\sigma_{7}}{4^{4}\Delta_{13}\Delta_{35}\Delta_{57}}\,\text{sgn}(K_{2}^{z})\,|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle~,$$
$$H_{\text{eff}}^{(B,cbad)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle=+\frac{\lambda\sigma_{1}\sigma_{7}}{4^{4}\Delta_{15}\Delta_{35}\Delta_{57}}\,\text{sgn}(K_{2}^{z})\,|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle~,$$
and similarly $H_{\text{eff}}^{(B,bcad)}\!=\!-H_{\text{eff}}^{(B,cbad)}$, and $H_{\text{eff}}^{(B,bcda)}\!=\!-H_{\text{eff}}^{(B,cbda)}$. So the processes coming from $\{cbad,cbda,bcad,bcda\}$ cancel out, and the same is true for their inverse processes $\{dabc,adbc,dacb,adcb\}$.
The only finite contributions then come from the remaining eight processes: $\{acbd,cabd,acdb,cadb\}$ and their inverses $\{dbca,dbac,bdca,bdac\}$.
Here $H_{\text{eff}}^{(B,acbd)}=H_{\text{eff}}^{(B,cabd)}=H_{\text{eff}}^{(B,acdb)}=H_{\text{eff}}^{(B,cadb)}$, so there is no cancellation. We have:
$$\mathcal{H}_{\text{eff}}^{(B,dbca)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle=-\frac{\lambda\sigma_{1}\sigma_{7}}{4^{4}\Delta_{13}\Delta_{1357}\Delta_{15}}\,\text{sgn}(K_{2}^{z})\,|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6},\sigma_{7}\rangle~.$$
In total, the effective term arising from the NNN perturbations $K_{2}^{x(y)}$ is
$$\boxed{\mathcal{H}_{\text{eff}}^{(B)}=8\mathcal{H}_{\text{eff}}^{(B,dbca)}=J_{1}S_{1}^{z}S_{7}^{z}},\qquad\boxed{J_{1}=\frac{(K_{2}^{x}K_{2}^{y})^{2}}{8(|K_{1}^{z}|+2|K_{2}^{z}|)^{2}(2|K_{1}^{z}|+3|K_{2}^{z}|)}\,\text{sgn}(K_{2}^{z})}~.$$
(23)
For $K_{1}^{z}=0$ we get $J_{1}=\frac{(K_{2}^{x}K_{2}^{y})^{2}}{96\,|K_{2}^{z}|^{3}}\,\text{sgn}(K_{2}^{z})$, in agreement with the result obtained by Jackeli and Avella for the triangular lattice case.
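Eq. (23) and its $K_{1}^{z}=0$ limit can be checked in the same way as the flux coupling (again, the function name and sample couplings are illustrative):

```python
import math

def J_1(K2x, K2y, K1z, K2z):
    """Eq. (23): NNN inter-ladder coupling from the K2^{x,y} perturbations."""
    lam = (K2x * K2y) ** 2
    a1, a2 = abs(K1z), abs(K2z)
    return lam / (8 * (a1 + 2*a2)**2 * (2*a1 + 3*a2)) * math.copysign(1, K2z)

# Limit K1^z = 0: J_1 = (K2^x K2^y)^2 / (96 |K2^z|^3) * sgn(K2^z)
K2x, K2y, K2z = 0.2, 0.5, -0.8
expected = (K2x * K2y) ** 2 / (96 * abs(K2z) ** 3) * (-1)
assert abs(J_1(K2x, K2y, 0.0, K2z) - expected) < 1e-12
```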
D.3 Effective terms arising from mixed $K_{1}^{x(y)}$ and $K_{2}^{x(y)}$ perturbations.
Finally, we consider the perturbations due to mixed $K_{1}^{x(y)}$ and $K_{2}^{x(y)}$ terms. Figure 9 (C-H) shows the six minimal loops that contribute to an effective coupling of the form $J_{2}S_{1}^{z}S_{4}^{z}$, between sites $1$ and $4$. In the following we define $\kappa\!=\!K_{1}^{x}K_{1}^{y}K_{2}^{x}K_{2}^{y}$, and introduce the excitation energies of various intermediate virtual states:
$$\Delta_{12}=\Delta_{16}=\Delta_{14}=\Delta_{46}=\Delta_{24}=-|K_{1}^{z}|-2|K_{2}^{z}|,$$
$$\Delta_{26}=\Delta_{35}=-|K_{1}^{z}|-|K_{2}^{z}|,\qquad\Delta_{23}=\Delta_{56}=-2|K_{2}^{z}|,$$
$$\Delta_{1246}=\Delta_{1345}=-2|K_{1}^{z}|-3|K_{2}^{z}|,\qquad\Delta_{1234}=\Delta_{1456}=-|K_{1}^{z}|-4|K_{2}^{z}|.$$
Let us discuss the different processes C-H of Fig. 9 one by one.
D.3.1 C & D processes
The perturbation $V\!=\!C$ described by the loops of type C of Fig. 9 splits as
$$C=C_{a}+C_{b}+C_{c}+C_{d}=K_{1}^{y}S_{1}^{y}S_{2}^{y}+K_{2}^{y}S_{2}^{y}S_{4}^{y}+K_{2}^{x}S_{4}^{x}S_{6}^{x}+K_{1}^{x}S_{6}^{x}S_{1}^{x}.$$
(24)
Substituting (24) into (18), we get twenty-four contributions. We have
$$\mathcal{H}_{\text{eff}}^{(C,dcba)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle=\mathcal{H}_{\text{eff}}^{(C,abcd)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle=\frac{-\kappa\sigma_{1}\sigma_{4}}{4^{4}\Delta_{12}\Delta_{14}\Delta_{16}}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle~.$$
We also find $\mathcal{H}_{\text{eff}}^{(C,dcab)}=\mathcal{H}_{\text{eff}}^{(C,cdba)}=\mathcal{H}_{\text{eff}}^{(C,dcba)}$. So all eight processes $\{dcba,dcab,cdba,cdab\}$ and $\{abcd,bacd,abdc,badc\}$ give the same contribution.
Next come the processes of the type
$$\mathcal{H}_{\text{eff}}^{(C,dbca)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle=\mathcal{H}_{\text{eff}}^{(C,acbd)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle=\frac{\kappa\sigma_{1}\sigma_{4}}{4^{4}\Delta_{12}\Delta_{1246}\Delta_{16}}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle~,$$
and $\mathcal{H}_{\text{eff}}^{(C,dbac)}\!=\!\mathcal{H}_{\text{eff}}^{(C,bdca)}\!=\!\mathcal{H}_{\text{eff}}^{(C,dbca)}$. So all eight processes $\{dbca,dbac,bdca,bdac\}$ and $\{acbd,cabd,acdb,cadb\}$ give the same contribution.
Finally there are the processes of the type:
$$\displaystyle\mathcal{H}_{\text{eff}}^{(C,cbda)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle=\mathcal{H}_{\text{eff}}^{(C,adbc)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle=-\frac{\kappa\sigma_{1}\sigma_{4}}{4^{4}\Delta_{12}\Delta_{26}\Delta_{46}}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle\,.$$
Here, however, $\mathcal{H}_{\text{eff}}^{(C,cbad)}\!=\!-\mathcal{H}_{\text{eff}}^{(C,cbda)}$, and similarly $\mathcal{H}_{\text{eff}}^{(C,bcda)}\!=\!-\mathcal{H}_{\text{eff}}^{(C,cbda)}$. As a result, the last eight processes $\{cbda,\!bcda,\!cbad,\!bcad\}$ and $\{adbc,\!adcb,\!dabc,\!dacb\}$ cancel out. So the total contribution from the $C$ loops of Fig. 9 (C) is:
$$\displaystyle\mathcal{H}_{\text{eff}}^{(C)}=8\mathcal{H}_{\text{eff}}^{(C,abcd)}+8\mathcal{H}_{\text{eff}}^{(C,dbca)}=\frac{\kappa\left(\Delta_{12}-\Delta_{1246}\right)}{32\Delta_{12}^{3}\Delta_{1246}}\sigma_{1}\sigma_{4}\,,$$
where $\Delta_{12}\!-\!\Delta_{1246}\!=\!|K_{1}^{z}|\!+\!|K_{2}^{z}|\!>\!0$. So the coupling is AFM.
Finally, by symmetry, $\mathcal{H}_{\text{eff}}^{(D)}\!=\!\mathcal{H}_{\text{eff}}^{(C)}$.
D.3.2 E & F processes
These processes give rise to an overall constant, so they can be ignored.
D.3.3 G & H processes
Here the corresponding perturbation can be written as
$$G=G_{a}+G_{b}+G_{c}+G_{d}=K_{1}^{y}S_{1}^{y}S_{2}^{y}+K_{2}^{y}S_{2}^{y}S_{4}^{y}+K_{1}^{x}S_{3}^{x}S_{4}^{x}+K_{2}^{x}S_{1}^{x}S_{3}^{x}\,.$$
(25)
We have
$$\displaystyle\mathcal{H}_{\text{eff}}^{(G,dcba)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle=\mathcal{H}_{\text{eff}}^{(G,abcd)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle=\frac{-\kappa\sigma_{1}\sigma_{4}}{4^{4}\Delta_{12}^{3}}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle\,,$$
where we used the relation $\Delta_{13}\!=\!\Delta_{14}\!=\!\Delta_{12}$. Similarly, one can show that $\mathcal{H}_{\text{eff}}^{(G,dcba)}\!=\!\mathcal{H}_{\text{eff}}^{(G,cdba)}\!=\!\mathcal{H}_{\text{eff}}^{(G,dcab)}$, so the eight processes $\{dcba,dcab,cdba,cdab\}$ and $\{abcd,bacd,abdc,badc\}$ give the same contribution.
Next come the processes of the type:
$$\displaystyle\mathcal{H}_{\text{eff}}^{(G,dbca)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle=\mathcal{H}_{\text{eff}}^{(G,acbd)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle=\frac{\kappa\sigma_{1}\sigma_{4}}{4^{4}\Delta_{12}\Delta_{1234}\Delta_{13}}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle\,.$$
Again, $\mathcal{H}_{\text{eff}}^{(G,dbca)}\!=\!\mathcal{H}_{\text{eff}}^{(G,dbac)}\!=\!\mathcal{H}_{\text{eff}}^{(G,bdca)}$. So all eight processes $\{dbca,dbac,bdca,bdac\}$ and $\{acbd,cabd,acdb,cadb\}$ give the same contribution.
Finally there are the processes of the type:
$$\displaystyle\mathcal{H}_{\text{eff}}^{(G,cbda)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle=\mathcal{H}_{\text{eff}}^{(G,adbc)}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle=-\frac{\kappa\sigma_{1}\sigma_{4}}{4^{4}\Delta_{12}\Delta_{23}\Delta_{34}}|\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6}\rangle\,.$$
Similarly, $\mathcal{H}_{\text{eff}}^{(G,cbad)}\!=\!\mathcal{H}_{\text{eff}}^{(G,bcda)}\!=\!-\mathcal{H}_{\text{eff}}^{(G,cbda)}$. So here $\{cbda,cbad,bcda,bcad\}$ and $\{adbc,dabc,adcb,dacb\}$ cancel out.
Altogether
$$\displaystyle\mathcal{H}_{\text{eff}}^{(G)}=8\mathcal{H}_{\text{eff}}^{(G,dcba)}+8\mathcal{H}_{\text{eff}}^{(G,dbca)}=\frac{\kappa\left(\Delta_{12}-\Delta_{1234}\right)}{32\Delta_{12}^{3}\Delta_{1234}}\sigma_{1}\sigma_{4}\,,$$
where $\Delta_{12}-\Delta_{1234}\!=\!2|K_{2}^{z}|\!>\!0$. So $\mathcal{H}_{\text{eff}}^{(G)}$ is also AFM. Finally, by symmetry, $\mathcal{H}_{\text{eff}}^{(H)}\!=\!\mathcal{H}_{\text{eff}}^{(G)}$.
D.3.4 Final result
$$\boxed{\mathcal{H}_{\text{eff}}^{(C-H)}=2\mathcal{H}_{\text{eff}}^{(C)}+2\mathcal{H}_{\text{eff}}^{(G)}=J_{2}S_{1}^{z}S_{4}^{z}}\,,\qquad\boxed{J_{2}=-\frac{\kappa}{4\Delta_{12}^{3}}\left[\frac{|K_{1}^{z}|+|K_{2}^{z}|}{2|K_{1}^{z}|+3|K_{2}^{z}|}+\frac{2|K_{2}^{z}|}{|K_{1}^{z}|+4|K_{2}^{z}|}\right]}\,.$$
(26)
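As a consistency check of the boxed result, $J_{2}$ can be compared numerically against the sum $2\mathcal{H}_{\text{eff}}^{(C)}+2\mathcal{H}_{\text{eff}}^{(G)}$, using $S^{z}=\sigma/2$. The following Python sketch (our illustration, with arbitrary coupling values; not part of the derivation) encodes both forms:

```python
# Hypothetical numerical check of Eq. (26): J2 versus 2*(H^C + H^G).
# K1z, K2z are the Kitaev couplings; kappa = K1x K1y K2x K2y is a free prefactor here.

def deltas(K1z, K2z):
    """Excitation energies of the intermediate virtual states used below."""
    d12 = -abs(K1z) - 2 * abs(K2z)        # = Delta_14 = Delta_16
    d1246 = -2 * abs(K1z) - 3 * abs(K2z)
    d1234 = -abs(K1z) - 4 * abs(K2z)
    return d12, d1246, d1234

def J2(kappa, K1z, K2z):
    """Boxed expression for J2."""
    d12, _, _ = deltas(K1z, K2z)
    bracket = ((abs(K1z) + abs(K2z)) / (2 * abs(K1z) + 3 * abs(K2z))
               + 2 * abs(K2z) / (abs(K1z) + 4 * abs(K2z)))
    return -kappa / (4 * d12 ** 3) * bracket

def J2_from_loops(kappa, K1z, K2z):
    """Same quantity rebuilt from the C and G loop sums; S^z = sigma/2 gives
    a factor 1/4 in the sigma1*sigma4 coefficient, hence the overall 4."""
    d12, d1246, d1234 = deltas(K1z, K2z)
    hC = kappa * (d12 - d1246) / (32 * d12 ** 3 * d1246)
    hG = kappa * (d12 - d1234) / (32 * d12 ** 3 * d1234)
    return 4 * 2 * (hC + hG)
```

For any positive couplings the two expressions agree and $J_{2}>0$, confirming the AFM sign of the coupling.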
Large Two-loop Effects in the Higgs Sector as New Physics Probes
Sichun Sun
[email protected]
Jockey Club Institute for Advanced Study, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
(November 20, 2020)
Abstract
We consider a simple Higgs portal model in a beyond-the-Standard-Model scenario: an extra real gauge-singlet scalar that couples to the Higgs. We calculate the higher-loop corrections to the cross section of the Higgsstrahlung process $e^{+}e^{-}\rightarrow Zh$, along with the tri-Higgs coupling and the wave-function renormalization. We find a noticeable contribution to the total Higgsstrahlung cross section, especially from two-loop diagrams. We also find that the correction to the tri-Higgs coupling becomes complex when this extra scalar is lighter than half of the center-of-mass energy, indicating a new source of CP violation. In the region where this extra scalar is lighter than a few hundred GeV, we argue that the higher-loop calculation is a more reliable approach than the effective field theory calculation.
I introduction
Since the Higgs discovery Chatrchyan et al. (2012); Aad et al. (2012), studies of Higgs properties have been a focus of high energy physics. The Higgs-related couplings and processes have now become our major guide to new physics beyond the Standard Model (BSM) and will be explored further in the coming years.
Many couplings of the Higgs have already been constrained by the Large Hadron Collider (LHC), including the Higgs couplings to fermions, gluons, and electroweak gauge bosons. All of them are within 20$\%$ of the SM prediction ATLAS Collaboration (2015); Khachatryan et al. (2015); Henning et al. (2014); Dawson and Heinemeyer (2002); Peskin (2013); Robens and Stefaniak (2015). The LHC Run 2 and the high-luminosity upgrade program will improve the current measurements and add new ones. Among them, the tri-Higgs coupling is an anticipated new type of interaction. Studies show that at the high-luminosity LHC an accuracy of $30\%$ to $50\%$ can be achieved ATLAS Collaboration (2013); Baglio et al. (2013); Goertz et al. (2013); Barger et al. (2014), and around $10\%$ at proposed future hadron colliders Yao (2013); Barr et al. (2015).
However, due to the overwhelming hadronic backgrounds, it is hard to push Higgs measurements at hadron machines below the few-percent level, because of both systematic and theoretical errors. Moreover, Higgs coupling measurements at the LHC depend on Higgs decay rates and are therefore model dependent, since the total width of the Higgs is not a direct observable. To measure the Higgs couplings with better precision, proposed future high-energy $e^{+}e^{-}$ machines, e.g., the ILC, CEPC, and FCC-ee (TLEP), are needed. Experimentally, the events at those leptonic machines are very clean, and SM precision measurements can be obtained without hadronic processes. Moreover, the cross section of $e^{+}e^{-}\rightarrow Zh$ has been shown to be a robust, model-independent measurement through Z tagging Hagiwara et al. (2000). These facts make the cross section of $e^{+}e^{-}\rightarrow Zh$ a good observable to probe physics beyond the SM. In Table I we show a list of Higgsstrahlung constraints for different future $e^{+}e^{-}$ collider proposals.
There are extensive previous studies on anomalous Higgs couplings and their impact on the Higgsstrahlung process Beneke et al. (2014); Hagiwara and Stong (1994); Gounaris et al. (1996); Kilian et al. (1996); Gonzalez-Garcia (1999); Kile and Ramsey-Musolf (2007); Fleischer and Jegerlehner (1983); Kniehl (1992); Carena et al. (1995); Aoki et al. (1982); Pomarol and Riva (2014); Elias-Miró et al. (2014); McCullough (2014); Craig et al. (2013, 2015); Englert and McCullough (2013). A complete one-loop SM calculation was done more than 20 years ago by A. Denner Denner et al. (1992); Denner (1993). While QCD cross-section calculations have gone beyond NNLO, the Higgsstrahlung process has rarely been pushed beyond one loop, owing to the perturbative nature of electroweak physics and the lack of precision measurements. Here we consider a well-studied BSM model, adding $H^{\dagger}H\Phi^{2}$ to the Lagrangian with $\Phi$ a singlet scalar. This model belongs to the Higgs portal class, and $\Phi$ could also couple to dark matter Patt and Wilczek (2006); Burgess et al. (2001); Englert et al. (2011); Djouadi et al. (2012); Chacko et al. (2014); Greljo et al. (2013); Cline et al. (2013). It can also change the electroweak symmetry breaking dynamics Grojean et al. (2005); Katz and Perelstein (2014); Curtin et al. (2014), achieving a first-order phase transition. This simple model can be embedded into many UV-completed models, including supersymmetric Cohen et al. (1996); Dimopoulos and Giudice (1995) and stringy ones.
In this paper, we study this simple yet well-motivated SM extension and go beyond one-loop effects. Previous attempts went beyond the one-loop level using an effective field theory approach Barger et al. (2003); Elias-Miro et al. (2013); Falkowski and Riva (2015). Those approaches are valid only when the new particles are heavier than a few hundred GeV, the scale that lepton colliders are probing, because the effective theory is obtained by integrating out heavy, short-distance physics. The lighter-scalar region is experimentally viable in some cases and phenomenologically interesting, e.g., serving as a dark matter portal as in Chacko et al. (2014); Greljo et al. (2013); Cline et al. (2013).
The paper is organized as follows: in Section II we lay out the theoretical framework of this model and present the possible CP-violating tri-Higgs coupling correction and the Higgs wavefunction correction to the SM couplings. In Section III we present the calculation of the Higgsstrahlung process up to two-loop level in this BSM scenario. The contributions divide into three groups of diagrams, and part of them can be reduced to one-loop diagrams; dedicated packages are used to evaluate the loop integrals, and the results are semi-numerical. In Section IV we briefly discuss the impact on electroweak baryogenesis. We conclude in Section V.
II The theoretical framework
In this section, we outline the general framework of our discussion and present two results for later use: the correction to the tri-Higgs coupling and the Higgs wavefunction renormalization. In particular, we treat the small-mass region of the extra scalar carefully.
II.1 The Higgs potential and the correction to the couplings
In this work, we assume a single SM Higgs doublet H with a general renormalizable tree-level Higgs potential, responsible for electroweak symmetry breaking:
$$\displaystyle V_{0}=-\mu^{2}H^{2}+\lambda H^{4}$$
(1)
We substitute $H=(H^{+},(v+h+iA^{0})/\sqrt{2})$ and go to unitary gauge, which sets $H^{+}$ and $A^{0}$ to zero. Note that we do this only for the tree-level and one-loop calculations; we switch to 't Hooft-Feynman gauge for the two-loop calculation later. In unitary gauge we have:
$$\displaystyle V_{0}=\frac{1}{2}m_{h}^{2}h^{2}+\frac{m_{h}^{2}}{2v}h^{3}+\frac{1}{8}\Big(\frac{m_{h}}{v}\Big)^{2}h^{4}$$
(2)
$$\displaystyle\phantom{V_{0}}=\frac{1}{2}m_{h}^{2}h^{2}+\frac{1}{3!}3m_{h}^{2}v^{-1}h^{3}+\frac{1}{4!}3m_{h}^{2}v^{-2}h^{4}$$
(3)
Eq. (3) makes the symmetry factors explicit for the Feynman-diagram calculations below. In our parametrization, the Higgs acquires a VEV $\langle h\rangle=v=\mu/\sqrt{\lambda}\approx 246$ GeV, and the tree-level Higgs mass is $m_{h}=\sqrt{2}\mu\approx 125$ GeV. The coefficients of the Higgs potential are then $\lambda\approx 0.13$ and $\mu\approx 90$ GeV.
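The quoted numbers follow directly from the tree-level relations $m_{h}=\sqrt{2}\mu$ and $v=\mu/\sqrt{\lambda}$. A quick check (our sketch, using the rounded inputs $v=246$ GeV and $m_{h}=125$ GeV):

```python
import math

# Tree-level relations of Eqs. (1)-(3); v and m_h are the measured inputs.
v = 246.0    # GeV, Higgs vev
m_h = 125.0  # GeV, Higgs mass

lam = m_h**2 / (2 * v**2)   # quartic coupling, since m_h^2 = 2*lambda*v^2
mu = m_h / math.sqrt(2)     # mass parameter, since m_h = sqrt(2)*mu
```

This reproduces $\lambda\approx 0.13$ and $\mu\approx 88$-$90$ GeV, and $\mu/\sqrt{\lambda}$ returns $v$ exactly, as it must.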
We add an extra singlet scalar $\Phi$ with a $Z_{2}$ symmetry under which $\Phi\rightarrow-\Phi$. It couples to the Higgs field as:
$$\displaystyle L\supset-\kappa H^{2}\Phi^{2}-\frac{1}{2}\mu_{\Phi}^{2}\Phi^{2}$$
(4)
After electroweak symmetry breaking, and taking the SM Higgs part, this becomes:
$$\displaystyle L\supset-\frac{1}{2!\,2!}2\kappa h^{2}\Phi^{2}-\frac{1}{2}2\kappa vh\Phi^{2}-\frac{1}{2}(\mu_{\Phi}^{2}+\kappa v^{2})\Phi^{2}$$
(5)
We call this the "$Z_{2}$-symmetric SM+S" scenario. It has been discussed in many contexts, including naturalness, electroweak symmetry breaking, baryogenesis, and WIMP dark matter, and it can arise from various BSM scenarios. In this work we adopt the simplest, most general form of the model and focus on the Higgs-related higher-loop effects. These corrections feed directly into the precision Higgsstrahlung cross section and can contribute noticeably to electroweak baryogenesis (EWBG).
The relevant parameters in this model are $\mu_{\Phi}$ and $\kappa$. Here we assume that:
$$\displaystyle m_{\Phi}^{2}=\mu_{\Phi}^{2}+\kappa v^{2}>0$$
(6)
From now on we treat $m_{\Phi}$ as the physical mass and work with it rather than $\mu_{\Phi}$. We also focus on the region $m_{\Phi}>m_{h}/2$, since lighter masses are strongly constrained by Higgs branching-ratio data.
II.2 The Correction to tri-Higgs coupling
We start with the one-loop effect induced by the extra scalar in this model. The tri-Higgs coupling has drawn much attention since the discovery of the Higgs boson ATLAS Collaboration (2013); Baglio et al. (2013); Goertz et al. (2013); Barger et al. (2014), as it represents a qualitatively new interaction: the self-coupling of a spin-0 particle.
We show all the one-loop and two-loop Feynman diagrams in Fig. 5. Note that the correction to the tri-Higgs coupling induced by the extra particle is not a constant: it depends on the 4-momentum of the Higgs at the $H\Phi\Phi$ vertex. Although it is sometimes useful to get an estimate by integrating $\Phi$ out, we cannot do so unless $\Phi$ is much heavier than the energy of the process. The loop integral does, however, reduce to a constant when the Higgs line attached to the $H\Phi\Phi$ vertex is on shell, i.e., when $s=m_{h}^{2}$, as happens in graphs (c) and (d) of Fig. 5.
Here we define the vertex function with the tree level value and the loop correction as in Fig 1:
$$\displaystyle V(s,m_{\Phi},m_{h})=-3\frac{m_{h}^{2}}{v}i+\frac{1}{2}(-2\kappa i)(-2\kappa vi)\int\frac{d^{4}k}{(2\pi)^{4}}\frac{i}{k^{2}-m_{\Phi}^{2}}\frac{i}{(k+p)^{2}-m_{\Phi}^{2}}$$
(7)
$$\displaystyle=-3\frac{m_{h}^{2}}{v}i-\frac{\kappa^{2}vi}{8\pi^{2}}\int_{0}^{1}dx\,\log\!\left(\frac{m_{\Phi}^{2}-x(1-x)s}{m_{\Phi}^{2}-x(1-x)4m_{\Phi}^{2}}\right)$$
(8)
Defining the function $\delta_{h}(m_{\Phi},s)$ via the corrected SM tri-Higgs coupling $-(1+\delta_{h}\kappa^{2})\frac{3m^{2}_{h}}{v}h^{3}$, we have:
$$\displaystyle\delta_{h}(m_{\Phi},s)=\frac{V(s,m_{\Phi},m_{h})/V_{SM}-1}{\kappa^{2}}=\frac{v^{2}}{24\pi^{2}m_{h}^{2}}\int_{0}^{1}dx\,\log\!\left(\frac{m_{\Phi}^{2}-x(1-x)s}{m_{\Phi}^{2}-x(1-x)4m_{\Phi}^{2}}\right)$$
(9)
We plot $\delta_{h}(s)$ as a function of $m_{\Phi}$ in Fig. 2; we use this result to simplify some two-loop diagrams later. It also shows that for much lighter $m_{\Phi}$ the correction to the tri-Higgs coupling grows dramatically.
II.3 The wave function renormalization
The extra scalar field $\Phi$ renormalizes the Higgs field at one loop, and there are no other one-loop corrections involving this scalar. One can then rescale $h\rightarrow(1-\delta Z_{h}/2)h$, which rescales all Higgs couplings: the cubic and quartic self-couplings and the couplings to weak gauge bosons and fermions. This rescaling is physical. Corrections to one-loop processes due to $Z_{h}$ count as two-loop effects by the $1/(4\pi)^{2}$ power counting. The relevant one-loop integrals can be generated and evaluated with FeynCalc, FeynArts, and LoopTools Hahn (2001); Hahn and Perez-Victoria (1999). We define:
$$\displaystyle\delta Z_{H}=-\operatorname{Re}\frac{\partial\Sigma^{H}(k^{2})}{\partial k^{2}}\Big|_{k^{2}=M^{2}_{H}}$$
(10)
with
$$i\Sigma=\frac{1}{2}(-i2\kappa v)^{2}\int\frac{d^{4}k}{(2\pi)^{4}}\frac{i}{k^{2}-m_{\Phi}^{2}}\frac{i}{(k+p)^{2}-m_{\Phi}^{2}}=\frac{1}{2}(-i2\kappa v)^{2}B(p^{2},m_{\Phi}^{2},m_{\Phi}^{2})\frac{i}{16\pi^{2}}$$
(11)
$$\displaystyle\delta Z_{H}=-\operatorname{Re}\!\left[\frac{1}{2}(-i2\kappa v)^{2}\frac{1}{16\pi^{2}}\,\partial B(p^{2},m_{\Phi}^{2},m_{\Phi}^{2})/\partial p^{2}\Big|_{p^{2}=M^{2}_{H}}\right]$$
(12)
Here $B(p^{2},m_{\Phi}^{2},m_{\Phi}^{2})$ is defined and evaluated in FeynCalc. We plot the value of $\delta Z_{h}$ in Fig. 3. At tree level, the $HZZ$ coupling becomes $ieg_{\mu\nu}M_{w}\frac{1}{s_{w}c_{w}^{2}}(1+\frac{1}{2}\delta Z_{H})$; the $\delta Z_{H}$ correction enters all Higgs-related couplings as $\frac{1}{2}\delta Z_{H}$ per Higgs line. We list all the vertex corrections in Table 2.
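Since $\partial B_{0}/\partial p^{2}$ is UV finite, Eq. (12) can also be evaluated without LoopTools, using the Feynman-parameter representation $\partial B_{0}/\partial p^{2}=\int_{0}^{1}dx\,x(1-x)/[m_{\Phi}^{2}-x(1-x)p^{2}]$, valid for $p^{2}<4m_{\Phi}^{2}$. A Python sketch under those assumptions (our implementation, not the FeynCalc one):

```python
import math

def dB0_dp2(p2, m2, n=20001):
    """Feynman-parameter form of dB0/dp^2 for equal internal masses m2.
    Valid below threshold, p2 < 4*m2, where the denominator never vanishes."""
    total = 0.0
    for k in range(n):
        x = (k + 0.5) / n
        total += x * (1 - x) / (m2 - x * (1 - x) * p2)
    return total / n

def delta_Z_H(kappa, m_phi, m_h=125.0, v=246.0):
    """Eq. (12): delta Z_H = -Re[(1/2)(-2i kappa v)^2 dB0/dp^2 / (16 pi^2)]."""
    prefac = 0.5 * (-(2 * kappa * v) ** 2)   # (1/2)(-2i kappa v)^2 = -2 kappa^2 v^2
    return -prefac / (16 * math.pi**2) * dB0_dp2(m_h**2, m_phi**2)
```

As expected, $\delta Z_{H}$ is largest for a light scalar and falls off rapidly as $\Phi$ decouples.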
All the one-loop diagrams, including the SM wavefunction-renormalization correction, are calculated in Denner (1993). We plot the combined two-loop effects in Section IV, after discussing the other diagrams.
III Two-loop effect in Higgsstrahlung and beyond
III.1 Some nontrivial loop integrals
We show all the diagrams up to two-loop level involving the $\Phi$ scalar in Fig. 5. Some of them reduce to one-loop diagrams plus corrections: graphs (c) and (d) can be evaluated from graphs (a) and (b) with a constant tri-Higgs coupling correction.
Graphs (e), (f), and (g), however, cannot be reduced to one-loop diagrams. Note that when computing them at two-loop level in 't Hooft-Feynman gauge, there are in principle nonstandard Higgs-sector vertices involving the eaten Goldstone bosons, e.g., $H^{+}H^{-}\Phi\Phi$ and $A^{0}A^{0}\Phi\Phi$; those vertices do not enter these diagrams in our case. In 't Hooft-Feynman gauge the gauge boson propagator simplifies to $-ig_{\mu\nu}/(k^{2}-M^{2}_{z})$, which simplifies the whole calculation.
One could integrate out the extra scalar when $\Phi$ is much heavier than the energy of the process, typically the proposed energies of the ILC and TLEP of a few hundred GeV. Here we do not make that assumption.
We choose to evaluate the loop diagrams directly. We define the loop integrals as follows:
$$\displaystyle E_{e}(s,p,m_{\Phi},m_{h},m_{z})=\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\frac{d^{4}k_{2}}{(2\pi)^{4}}\,\frac{i}{k_{1}^{2}-m_{h}^{2}}\frac{-i}{(s-k_{1})^{2}-m_{z}^{2}}\frac{i}{k_{2}^{2}-m_{\Phi}^{2}}\frac{i}{(p-k_{1})^{2}-m_{h}^{2}}\frac{i}{(p-k_{1}+k_{2})^{2}-m_{\Phi}^{2}}$$
(13)
$$\displaystyle E_{f}(s,p,m_{\Phi},m_{h},m_{z})=\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\frac{d^{4}k_{2}}{(2\pi)^{4}}\,\frac{i}{k_{1}^{2}-m_{h}^{2}}\frac{i}{k_{2}^{2}-m_{\Phi}^{2}}\frac{i}{(k_{1}-k_{2})^{2}-m_{\Phi}^{2}}\frac{i}{(p-k_{1})^{2}-m_{h}^{2}}\frac{i}{(p-k_{1}+k_{2})^{2}-m_{\Phi}^{2}}$$
(14)
$$\displaystyle F_{g}(s,p,m_{\Phi},m_{h},m_{z})=\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\frac{d^{4}k_{2}}{(2\pi)^{4}}\,\frac{i}{k_{1}^{2}-m_{h}^{2}}\frac{-i}{(s-k_{1})^{2}-m_{z}^{2}}\frac{i}{k_{2}^{2}-m_{\Phi}^{2}}\frac{i}{(k_{1}-k_{2})^{2}-m_{\Phi}^{2}}\frac{i}{(p-k_{1})^{2}-m_{h}^{2}}\frac{i}{(p-k_{1}+k_{2})^{2}-m_{\Phi}^{2}}$$
(15)
Here $p$ is the outgoing 4-momentum of the Higgs, and $s$ is the center-of-mass energy of the process.
Techniques for higher-loop diagrams are widely studied, especially in SM QCD calculations and in formal work on scattering amplitudes. The analytic approach proceeds through the reduction of amplitudes to master integrals Smirnov et al. (2011); Smirnov (2014), and is more involved than the numerical calculation.
A main challenge in computing Feynman diagrams is the evaluation of the integrals over loop momenta. For multi-loop diagrams containing several mass scales we cannot realistically hope for an analytic solution, so we resort to numerical integration. The general steps are as follows:
1. Combine all propagators using a single set of Feynman parameters $x_{i}$, then shift and rescale the loop momenta to reach the standard form of the denominator.
2. Perform the Gaussian integrals over the momenta.
3. Map the Feynman parameters to the hypercube and decompose the integral into separate pieces with IR and/or UV singularities. This step is often called "sector decomposition".
4. Regularize all the singularities and finish the integral.
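The splitting-and-rescaling of step 3 can be illustrated on a toy integral (our example, unrelated to $E_{e}$, $E_{f}$, $F_{g}$): $\int_{0}^{1}\!\int_{0}^{1}dx\,dy/(x+y)=2\ln 2$, whose integrand blows up at the origin. Splitting at $x=y$ and rescaling $y=xt$ in the sector $x>y$ makes the integrand smooth:

```python
import math

def direct(n=400):
    """Naive midpoint evaluation of I = int_0^1 int_0^1 dx dy / (x+y).
    Converges slowly because of the integrable singularity at the origin."""
    s = 0.0
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) / n, (j + 0.5) / n
            s += 1.0 / (x + y)
    return s / n**2

def sector_decomposed(n=4000):
    """Sector x > y with y = x*t: the Jacobian x cancels the 1/x, leaving
    int_0^1 dt / (1+t) = ln 2 per sector; two symmetric sectors give 2 ln 2."""
    s = sum(1.0 / (1.0 + (k + 0.5) / n) for k in range(n))
    return 2 * s / n
```

The sector-decomposed form converges essentially immediately, while the naive grid only slowly approaches $2\ln 2$; this is the payoff of steps 3-4 in realistic multi-loop integrals.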
Steps 3 and 4 can be very complicated and contain many singularities; however, they are highly mechanical and can be done by computer. Here we evaluate our three integrals directly, as functions of $p$ and $s$, using FIESTA 3.0. FIESTA stands for "Feynman Integral Evaluation by a Sector decomposiTion Approach"; it is based on the sector decomposition approach to the numerical evaluation of Feynman integrals originally applied by G. Heinrich and T. Binoth Binoth and Heinrich (2004a, b).
III.2 Two-loop contributions in Higgsstrahlung and beyond
The regularized differential cross section including higher-order corrections in the $e^{+}e^{-}\rightarrow Zh$ process is:
$$\displaystyle\frac{d\sigma}{d\Omega}=\sum_{\sigma,\lambda}\frac{1}{4}(1+2\sigma P^{-})(1-2\sigma P^{+})\frac{\beta}{64\pi^{2}s}\Big\{|M_{0}^{\sigma,\lambda}|^{2}+2\operatorname{Re}\big[M_{0}^{\sigma,\lambda}(\delta M^{\sigma,\lambda})^{*}\big]\Big\}$$
(16)
Here $\sigma=\pm\frac{1}{2}$ and $\lambda=0,\pm 1$ are the helicities of the incoming electrons and the outgoing bosons, and $P^{\pm}$ are the degrees of polarization of the incoming fermions, such that a purely right- or left-handed electron corresponds to $1$ and $-1$, respectively. For an unpolarized beam, $P^{\pm}=0$.
$$\beta=\frac{1}{4E^{2}}\sqrt{[4E^{2}-(M_{Z}+M_{H})^{2}][4E^{2}-(M_{Z}-M_{H})^{2}]}$$
(17)
E is the beam energy, $S=(p_{1}+p_{2})^{2}=4E^{2}$. $g_{e}^{\sigma}$ is the coupling of the Z-boson to left- and right-handed electrons.
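Eq. (17) is the standard two-body phase-space factor, equal to $2|\mathbf{p}_{Z}|/\sqrt{s}$ with $|\mathbf{p}_{Z}|$ the Z three-momentum. A small helper (our sketch, with illustrative mass values) shows it vanishing at the $Zh$ threshold and approaching 1 at high energy:

```python
import math

def beta(sqrt_s, m_z=91.19, m_h=125.0):
    """Two-body phase-space factor of Eq. (17), with E = sqrt(s)/2."""
    E = sqrt_s / 2.0
    arg = (4 * E**2 - (m_z + m_h)**2) * (4 * E**2 - (m_z - m_h)**2)
    if arg <= 0:
        return 0.0          # at or below the Zh threshold
    return math.sqrt(arg) / (4 * E**2)
```

At $\sqrt{s}=250$ GeV this gives $\beta\approx 0.50$, rising toward 1 as $\sqrt{s}$ grows.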
Notice that at the tree level $ee\rightarrow ZH$:
$$M_{0}^{\sigma,\lambda}=e^{2}g_{e}^{\sigma}\frac{M_{z}}{s_{w}c_{w}}\frac{1}{S-M_{z}^{2}}P_{1}^{\sigma,\lambda}$$
(18)
$P_{1}^{\sigma,\lambda}$ contains the polarization information of Z boson, for the definition see Denner et al. (1992); Denner (1993).
$$\displaystyle P_{1}^{\sigma,\lambda}=\begin{cases}\sqrt{2}E(\cos\theta\mp 2\sigma)&\text{for }\lambda=\pm 1\,,\\[4pt]\dfrac{4E^{2}+M_{Z}^{2}-M_{H}^{2}}{2M_{Z}}\sin\theta&\text{for }\lambda=0\,.\end{cases}$$
(19)
The $\delta M^{\sigma,\lambda}$ here stands for the contribution of the higher loop diagrams to the invariant matrix element.
III.2.1 Two-loop results from wavefunction renormalization
Due to the extra scalar's wavefunction renormalization, the tree-level $HZZ$ coupling becomes $ieg_{\mu\nu}M_{w}\frac{1}{s_{w}c_{w}^{2}}(1+\frac{1}{2}\delta Z_{H})$. The corresponding correction to the matrix element is:
$$\displaystyle\delta M^{\sigma,\lambda}_{\text{one-loop}}=\frac{1}{2}\delta Z_{H}M_{0}^{\sigma,\lambda}$$
(20)
The resulting correction to the tree-level cross section is:
$$\frac{\delta\sigma_{\kappa\neq 0,\text{one-loop}}}{\sigma_{SM}^{ZH}}(s,m_{\Phi})=\delta Z_{H}$$
(21)
This is the complete one-loop correction from the extra scalar, since the scalar couples only to the Higgs sector.
For the one-loop diagrams, the $\delta Z_{H}$ correction enters all Higgs-related couplings as $\frac{1}{2}\delta Z_{H}$ per Higgs line. That gives rise to:
$$\displaystyle\delta M^{\sigma,\lambda}_{\text{two-loop}}=\sum_{i}\frac{n^{i}_{h}}{2}\delta Z_{H}\,\delta M_{i,\text{one-loop}}^{\sigma,\lambda}$$
(22)
The index $i$ runs over all the SM one-loop diagrams (around 50, as calculated in Denner (1993)), and $n^{i}_{h}$ is the number of Higgs lines in each diagram. Rather than computing them one by one, we approximate the total correction using Eq. (16):
$$\frac{\delta\sigma^{\kappa\neq 0}_{\text{two-loop, wavefunc}}}{\sigma^{ZH}_{SM}}(s,m_{\Phi})\simeq a_{h}\,\delta Z_{H}\,\frac{\delta\sigma_{\text{SM,one-loop}}}{\sigma^{ZH}_{SM}}$$
(23)
Here $a_{h}$ is the average number of Higgs lines per one-loop diagram, and $\delta\sigma_{\text{SM,one-loop}}/\sigma^{ZH}_{SM}$ is the one-loop SM correction relative to the tree-level result; this ratio is around $5\%$ for center-of-mass energies below 1 TeV, as calculated in Denner et al. (1992); Denner (1993). The equations above give the two-loop correction coming from the Higgs wavefunction renormalization. We plot the combined two-loop effects in Section IV, after discussing the other diagrams.
III.2.2 Two-loop results from diagram e,f,g
Using the loop integrals defined above, the two-loop diagram corrections to the matrix element read:
$$\displaystyle\delta M^{\sigma,\lambda}_{e}=i\frac{1}{2}e^{3}g^{\sigma}_{e}\Big(\frac{m_{z}}{s_{w}c_{w}}\Big)^{2}(2\kappa i)(2\kappa vi)\frac{1}{s-m_{z}^{2}}E_{e}(s,p,m_{\Phi},m_{h},m_{z})P^{\sigma,\lambda}_{1}$$
(24)
$$\displaystyle\delta M^{\sigma,\lambda}_{f}=\frac{1}{4}\frac{e^{3}g^{\sigma}_{e}}{2s_{w}^{2}c_{w}^{2}}(-2\kappa vi)^{3}\frac{1}{s-m_{z}^{2}}E_{f}(s,p,m_{\Phi},m_{h},m_{z})P^{\sigma,\lambda}_{1}$$
(25)
$$\displaystyle\delta M^{\sigma,\lambda}_{g}=ie^{3}g^{\sigma}_{e}\Big(\frac{m_{z}}{s_{w}c_{w}}\Big)^{2}(-2\kappa vi)^{3}\frac{1}{s-m_{z}^{2}}F_{g}(s,p,m_{\Phi},m_{h},m_{z})P^{\sigma,\lambda}_{1}$$
(26)
The Feynman-diagram symmetry factors are $S=2$ for graph (e), $S=4$ for graph (f), and $S=1$ for graph (g) in Fig. 5.
III.2.3 Complete two-loop contributions
So we arrive at:
$$\displaystyle\frac{\delta\sigma_{\kappa\neq 0,\text{two-loop}}}{\sigma^{Zh}_{SM}}(s,m_{\Phi})=\frac{4e\kappa^{2}}{s_{w}c_{w}}\Big[-m_{z}viE_{e}+\frac{i\kappa v^{3}}{2m_{z}}E_{f}-4v^{3}\kappa m_{z}F_{g}\Big]+\delta\sigma_{\delta_{h}}(s,m_{\Phi})+a_{h}\delta Z_{H}\frac{\delta\sigma^{\text{one-loop}}}{\sigma^{Zh}_{SM}}$$
(27)
The first three terms are the contributions from graphs (e), (f), and (g); $\delta\sigma_{\delta_{h}}(s,m_{\Phi})$ collects the contributions from graphs (c) and (d), obtained from the tri-Higgs coupling corrections; and the last piece includes all the corrections to the one-loop SM diagrams. We use the result of McCullough (2014) to evaluate $\delta\sigma_{\delta_{h}}(s,m_{\Phi})$. We have:
$$\displaystyle\delta\sigma_{\delta_{h}}=0.013\,\delta_{h}\kappa^{2},\qquad a_{h}\delta Z_{H}\frac{\delta\sigma^{\text{one-loop}}}{\sigma^{Zh}_{SM}}\sim 0.07\,\delta Z_{H},\qquad\text{for}\quad\sqrt{s}=250\text{ GeV}$$
(28)
$$\displaystyle\delta\sigma_{\delta_{h}}=-0.002\,\delta_{h}\kappa^{2},\qquad a_{h}\delta Z_{H}\frac{\delta\sigma^{\text{one-loop}}}{\sigma^{Zh}_{SM}}\sim 0.05\,\delta Z_{H},\qquad\text{for}\quad\sqrt{s}=500\text{ GeV}$$
(29)
Choosing the center-of-mass energies $\sqrt{s}=250$ and $500$ GeV and combining the results, we can plot the dependence of the correction to the total cross section on the mass of the extra scalar $\Phi$. Note that this is the two-loop contribution; the one-loop result enters only through wavefunction renormalization, $\delta\sigma_{\text{wavefunction}}/\sigma_{SM}=\delta Z_{h}$, as plotted in Fig. 3. Over most of this mass range the two-loop contribution has the opposite sign to the one-loop contribution and is comparable in size.
IV The connection to Electroweak Baryogenesis
There have been extensive studies of the constraints on a first-order electroweak phase transition with a single-scalar extension of the SM Curtin et al. (2014); Katz and Perelstein (2014). This is sometimes called the "nightmare scenario", because it is hard to probe and admits only a few narrow parameter regions where a strong electroweak phase transition can be achieved. According to the two-loop results presented in this paper, a precision measurement of the Higgsstrahlung cross section can further reduce the allowed parameter space, especially for a gauge-singlet scalar. Our results show that to correctly constrain those parameters through Higgsstrahlung, one needs to include the two-loop effects when the scalar mass is below a few hundred GeV. See Curtin et al. (2014) for more discussion of the EWBG parameter space. This parameter region will be further constrained by virtual $h^{*}\rightarrow\Phi\Phi$ production, tri-Higgs coupling measurements, and Higgsstrahlung measurements at next-generation colliders.
The loop correction to the tri-Higgs coupling provides an interesting new source for the CP violation needed in electroweak baryogenesis. This effect arises from a single Higgs doublet plus an extra real scalar, rather than from the standard two-Higgs-doublet mechanism Cohen et al. (1993). Whether this new CP-violation source suffices for EWBG remains an open question.
The two-loop thermal correction to the Higgs potential due to the extra scalar is beyond the scope of this paper; however, we expect it to be sizable as well and to place nontrivial bounds on the electroweak phase transition.
V Conclusions
In this paper, we calculated the two-loop contribution to the Higgsstrahlung process in a particular BSM scenario: the SM extended by a real singlet scalar coupled to the Higgs with a $Z_{2}$ symmetry. We divided the two-loop contributions into three parts: the tri-Higgs coupling correction, the wavefunction renormalization, and the remaining diagrams. The two-loop correction partly cancels the one-loop correction, especially when the mass of the extra scalar is below the Higgs mass. Next-generation lepton colliders will measure the Higgsstrahlung cross section to a precision at the level of this two-loop effect. The loop effects of this real scalar also induce a new CP phase in the tri-Higgs coupling when its mass is smaller than half the center-of-mass energy.
Although these loop calculations remain valid if the real scalar is charged under electroweak $SU(2)$ or color, additional loop diagrams then enter the Higgsstrahlung process; their contributions are not necessarily larger than the ones presented here. Furthermore, a colored extra scalar faces additional constraints from the Higgs production cross section in gluon fusion, and an electroweakly charged extra scalar can modify $BR(h\rightarrow\gamma\gamma)$, which is potentially a powerful observable. In any case, the Higgs couplings to all SM particles are modified regardless of the quantum numbers of the extra scalar, as shown in Section II. In the "nightmare scenario", where the scalar singlet is considered a possible dark matter candidate, very few observables other than the Higgsstrahlung cross section can probe it.
Acknowledgements.
I would like to thank Lian-tao Wang and Tao Liu for their early involvement in this work and Hua-xing Zhu for introducing the FIESTA program. I thank Jan Hajer for commenting on the draft. This work was supported by the CRF Grants of the Government of the Hong Kong SAR under HUKST4/CRF/13G.
References
Chatrchyan et al. (2012): S. Chatrchyan et al. (CMS), Phys. Lett. B716, 30 (2012), eprint 1207.7235.
Aad et al. (2012): G. Aad et al. (ATLAS), Phys. Lett. B716, 1 (2012), eprint 1207.7214.
ATLAS Collaboration (2015): ATLAS Collaboration, ArXiv e-prints (2015), eprint 1507.04548.
Khachatryan et al. (2015): V. Khachatryan et al. (CMS), Eur. Phys. J. C75, 212 (2015), eprint 1412.8662.
Henning et al. (2014): B. Henning, X. Lu, and H. Murayama (2014), eprint 1404.1058.
Dawson and Heinemeyer (2002): S. Dawson and S. Heinemeyer, Phys. Rev. D66, 055002 (2002), eprint hep-ph/0203067.
Peskin (2013): M. E. Peskin, in Community Summer Study 2013: Snowmass on the Mississippi (CSS2013), Minneapolis, MN, USA, July 29-August 6, 2013 (2013), eprint 1312.4974, URL http://www.slac.stanford.edu/econf/C1307292/docs/submittedArxivFiles/1312.4974.pdf.
Robens and Stefaniak (2015): T. Robens and T. Stefaniak, Eur. Phys. J. C75, 104 (2015), eprint 1501.02234.
ATLAS Collaboration (2013): ATLAS Collaboration, ATL-PHYS-PUB-2013-001 (2013).
Baglio et al. (2013): J. Baglio, A. Djouadi, R. Gröber, M. M. Mühlleitner, J. Quevillon, and M. Spira, JHEP 4, 151 (2013), eprint 1212.5581.
Goertz et al. (2013): F. Goertz, A. Papaefstathiou, L. L. Yang, and J. Zurita, JHEP 6, 16 (2013), eprint 1301.3492.
Barger et al. (2014): V. Barger, L. L. Everett, C. B. Jackson, and G. Shaughnessy, Phys. Lett. B728, 433 (2014), eprint 1311.2931.
Yao (2013): W. Yao, ArXiv e-prints (2013), eprint 1308.6302.
Barr et al. (2015): A. J. Barr, M. J. Dolan, C. Englert, D. E. F. de Lima, and M. Spannowsky, JHEP 2, 16 (2015), eprint 1412.7154.
Hagiwara et al. (2000): K. Hagiwara, S. Ishihara, J. Kamoshita, and B. A. Kniehl, Eur. Phys. J. C14, 457 (2000), eprint hep-ph/0002043.
Beneke et al. (2014): M. Beneke, D. Boito, and Y.-M. Wang, JHEP 11, 028 (2014), eprint 1406.1361.
Hagiwara and Stong (1994): K. Hagiwara and M. L. Stong, Z. Phys. C62, 99 (1994), eprint hep-ph/9309248.
Gounaris et al. (1996): G. J. Gounaris, F. M. Renard, and N. D. Vlachos, Nucl. Phys. B459, 51 (1996), eprint hep-ph/9509316.
Kilian et al. (1996): W. Kilian, M. Kramer, and P. M. Zerwas, Phys. Lett. B381, 243 (1996), eprint hep-ph/9603409.
Gonzalez-Garcia (1999): M. C. Gonzalez-Garcia, Int. J. Mod. Phys. A14, 3121 (1999), eprint hep-ph/9902321.
Kile and Ramsey-Musolf (2007): J. Kile and M. J. Ramsey-Musolf, Phys. Rev. D76, 054009 (2007), eprint 0705.0554.
Fleischer and Jegerlehner (1983): J. Fleischer and F. Jegerlehner, Nucl. Phys. B216, 469 (1983).
Kniehl (1992): B. A. Kniehl, Z. Phys. C55, 605 (1992).
Carena et al. (1995): M. Carena, J. R. Espinosa, M. Quiros, and C. E. M. Wagner, Phys. Lett. B355, 209 (1995), eprint hep-ph/9504316.
Aoki et al. (1982): K. I. Aoki, Z. Hioki, M. Konuma, R. Kawabe, and T. Muta, Prog. Theor. Phys. Suppl. 73, 1 (1982).
Pomarol and Riva (2014): A. Pomarol and F. Riva, JHEP 01, 151 (2014), eprint 1308.2803.
Elias-Miró et al. (2014): J. Elias-Miró, C. Grojean, R. S. Gupta, and D. Marzocca, JHEP 05, 019 (2014), eprint 1312.2928.
McCullough (2014): M. McCullough, Phys. Rev. D90, 015001 (2014), eprint 1312.3322.
Craig et al. (2013): N. Craig, C. Englert, and M. McCullough, Phys. Rev. Lett. 111, 121803 (2013), eprint 1305.5251.
Craig et al. (2015): N. Craig, M. Farina, M. McCullough, and M. Perelstein, JHEP 1503, 146 (2015), eprint 1411.0676.
Englert and McCullough (2013): C. Englert and M. McCullough, JHEP 1307, 168 (2013), eprint 1303.1526.
Denner et al. (1992): A. Denner, J. Kublbeck, R. Mertig, and M. Bohm, Z. Phys. C56, 261 (1992).
Denner (1993): A. Denner, Fortsch. Phys. 41, 307 (1993), eprint 0709.1075.
Patt and Wilczek (2006): B. Patt and F. Wilczek (2006), eprint hep-ph/0605188.
Burgess et al. (2001): C. P. Burgess, M. Pospelov, and T. ter Veldhuis, Nucl. Phys. B619, 709 (2001), eprint hep-ph/0011335.
Englert et al. (2011): C. Englert, T. Plehn, D. Zerwas, and P. M. Zerwas, Phys. Lett. B703, 298 (2011), eprint 1106.3097.
Djouadi et al. (2012): A. Djouadi, O. Lebedev, Y. Mambrini, and J. Quevillon, Phys. Lett. B709, 65 (2012), eprint 1112.3299.
Chacko et al. (2014): Z. Chacko, Y. Cui, and S. Hong, Phys. Lett. B732, 75 (2014), eprint 1311.3306.
Greljo et al. (2013): A. Greljo, J. Julio, J. F. Kamenik, C. Smith, and J. Zupan, JHEP 11, 190 (2013), eprint 1309.3561.
Cline et al. (2013): J. M. Cline, K. Kainulainen, P. Scott, and C. Weniger, Phys. Rev. D88, 055025 (2013), eprint 1306.4710.
Grojean et al. (2005): C. Grojean, G. Servant, and J. D. Wells, Phys. Rev. D71, 036001 (2005), eprint hep-ph/0407019.
Katz and Perelstein (2014): A. Katz and M. Perelstein, JHEP 1407, 108 (2014), eprint 1401.1827.
Curtin et al. (2014): D. Curtin, P. Meade, and C.-T. Yu, JHEP 1411, 127 (2014), eprint 1409.0005.
Cohen et al. (1996): A. G. Cohen, D. B. Kaplan, and A. E. Nelson, Phys. Lett. B388, 588 (1996), eprint hep-ph/9607394.
Dimopoulos and Giudice (1995): S. Dimopoulos and G. F. Giudice, Phys. Lett. B357, 573 (1995), eprint hep-ph/9507282.
Barger et al. (2003): V. Barger, T. Han, P. Langacker, B. McElrath, and P. Zerwas, Phys. Rev. D67, 115001 (2003), eprint hep-ph/0301097.
Elias-Miro et al. (2013): J. Elias-Miro, J. R. Espinosa, E. Masso, and A. Pomarol, JHEP 11, 066 (2013), eprint 1308.1879.
Falkowski and Riva (2015): A. Falkowski and F. Riva, JHEP 02, 039 (2015), eprint 1411.0669.
Asner et al. (2013): D. M. Asner et al., in Community Summer Study 2013: Snowmass on the Mississippi (CSS2013), Minneapolis, MN, USA, July 29-August 6, 2013 (2013), eprint 1310.0763, URL http://inspirehep.net/record/1256491/files/arXiv:1310.0763.pdf.
Dawson et al. (2013): S. Dawson et al., in Community Summer Study 2013: Snowmass on the Mississippi (CSS2013), Minneapolis, MN, USA, July 29-August 6, 2013 (2013), eprint 1310.8361, URL http://inspirehep.net/record/1262795/files/arXiv:1310.8361.pdf.
Baer et al. (2013): H. Baer, T. Barklow, K. Fujii, Y. Gao, A. Hoang, S. Kanemura, J. List, H. E. Logan, A. Nomerotski, M. Perelstein, et al. (2013), eprint 1306.6352.
The CEPC-SPPC Study Group (2015): M. A. et al. (The CEPC-SPPC Study Group), CEPC-SPPC Preliminary Conceptual Design Report, IHEP-CEPC-DR-2015-01, IHEP-EP-2015-01, IHEP-TH-2015-01 (2015).
Bicer et al. (2014): M. Bicer et al. (TLEP Design Study Working Group), JHEP 01, 164 (2014), eprint 1308.6176.
Hahn (2001): T. Hahn, Comput. Phys. Commun. 140, 418 (2001), eprint hep-ph/0012260.
Hahn and Perez-Victoria (1999): T. Hahn and M. Perez-Victoria, Comput. Phys. Commun. 118, 153 (1999), eprint hep-ph/9807565.
Smirnov et al. (2011): A. V. Smirnov, V. A. Smirnov, and M. Tentyukov, Comput. Phys. Commun. 182, 790 (2011), eprint 0912.0158.
Smirnov (2014): A. V. Smirnov, Comput. Phys. Commun. 185, 2090 (2014), eprint 1312.3186.
Binoth and Heinrich (2004a): T. Binoth and G. Heinrich, Nucl. Phys. B680, 375 (2004), eprint hep-ph/0305234.
Binoth and Heinrich (2004b): T. Binoth and G. Heinrich, Nucl. Phys. B693, 134 (2004), eprint hep-ph/0402265.
Cohen et al. (1993): A. G. Cohen, D. B. Kaplan, and A. E. Nelson, Annu. Rev. Nucl. Part. Sci. 43, 27 (1993), eprint hep-ph/9302210.
Scalable Bayesian Inverse Reinforcement Learning
Alex J. Chan & Mihaela van der Schaar
Department of Applied Mathematics and Theoretical Physics
University of Cambridge
Cambridge, UK
{ajc340,mv472}@cam.ac.uk
Abstract
Bayesian inference over the reward presents an ideal solution to the ill-posed nature of the inverse reinforcement learning problem. Unfortunately, current methods generally do not scale well beyond the small tabular setting due to the need for an inner-loop MDP solver; even non-Bayesian methods that do scale often require extensive interaction with the environment to perform well, making them inappropriate for high-stakes or costly applications such as healthcare. In this paper we introduce our method, Approximate Variational Reward Imitation Learning (AVRIL), which addresses both of these issues by jointly learning, in a completely offline manner, an approximate posterior distribution over the reward that scales to arbitrarily complicated state spaces alongside an appropriate policy, through a variational approach to said latent reward. Applying our method to real medical data alongside classic control simulations, we demonstrate Bayesian reward inference in environments beyond the scope of current methods, as well as task performance competitive with focused offline imitation learning algorithms.
1 Introduction
Applications in complicated and high-stakes environments often mean operating in the minimal possible setting: no access to knowledge of the environment dynamics or intrinsic reward, and no ability to interact and test policies. In this case, learning and inference must be done solely on the basis of logged trajectories from a competent demonstrator, showing only the states visited and the action taken in each case.
Clinical decision making is an important example of this: there is great interest in learning policies from medical professionals, but it is completely impractical and unethical to deploy policies on patients mid-training. Moreover, in this area we are interested not only in the policies, but also in knowledge of the demonstrator’s preferences and goals.
While imitation learning (IL) generally deals with the problem of producing appropriate policies to match a demonstrator, the added layer of understanding motivations is usually approached through inverse reinforcement learning (IRL): first learning the assumed underlying reward driving the demonstrator, then learning a policy that is optimal with respect to that reward using some forward reinforcement learning (RL) technique.
Composing the RL and IRL procedures in order to perform IL yields apprenticeship learning (AL), which introduces its own challenges, particularly in the offline setting.
Notably, for any given set of demonstrations there are (infinitely) many rewards for which the actions would be optimal (Ng et al., 2000). Max-margin (Abbeel & Ng, 2004) and max-entropy (Ziebart et al., 2008) methods heuristically differentiate plausible rewards, at the cost of potentially dismissing the true reward for not possessing desirable qualities. On the other hand, a Bayesian approach to IRL (BIRL) is more conceptually satisfying: taking a probabilistic view of the reward, we are interested in the posterior distribution given the demonstrations (Ramachandran & Amir, 2007), which accounts for all possibilities. BIRL is not without its own drawbacks, though; as noted in Brown & Niekum (2019), current formulations are inappropriate for modern complicated environments: they assume linear rewards and small, solvable environments, and require repeated, inner-loop calls to forward RL.
The main contribution of this paper is a method for advancing BIRL beyond these obstacles, allowing approximate reward inference using an arbitrarily flexible class of functions, in any environment, without costly inner-loop operations, and importantly, entirely offline. This leads to our algorithm AVRIL, depicted in figure 1, which represents a framework for jointly learning a variational posterior distribution over the reward alongside an imitator policy in an auto-encoder-like manner. In what follows we review modern methods for offline IRL/IL (Section 2), with a focus on the Bayesian IRL approach and the issues it faces in challenging environments. We then address these issues by introducing our contributions (Section 3), and demonstrate the gains of our algorithm on real medical data and simulated control environments, showing notably that Bayesian reward inference is now possible in such settings (Section 4). Finally, we wrap up with some concluding thoughts and directions (Section 5). Code for AVRIL and our experiments is made available at https://github.com/XanderJC/scalable-birl and https://github.com/vanderschaarlab/mlforhealthlabpub.
2 Approaching Apprenticeship and Imitation Offline
Preliminaries.
We consider the standard Markov decision process (MDP) environment, with states $s\in\mathcal{S}$, actions $a\in\mathcal{A}$, transitions $T\in\Delta(\mathcal{S})^{\mathcal{S}\times\mathcal{A}}$, rewards $R\in\mathbb{R}^{\mathcal{S}\times\mathcal{A}}$ (we define a state-action reward here, as is usual in the literature; extensions to a state-only reward are simple, and indeed can be preferable, as we will see later), and discount $\gamma\in[0,1]$.
For a policy $\pi\in\Delta(\mathcal{A})^{\mathcal{S}}$ let $\rho_{\pi}(s,a)=\mathbb{E}_{\pi,T}[\sum_{t=0}^{\infty}\gamma^{t}\mathbbm{1}_{\{s_{t}=s,a_{t}=a\}}]$ be the induced unique occupancy measure alongside the state-only occupancy measure $\rho_{\pi}(s)=\sum_{a\in\mathcal{A}}\rho_{\pi}(s,a)$.
Despite this full environment model, the only information available to us is the MDP$\backslash RT$: we have no access to either the underlying reward or the transitions, and our lack of knowledge of the transitions is strong in the sense that we are also unable to simulate the environment to sample them.
The learning signal is then given by access to $m$-many trajectories of some demonstrator assumed to be acting optimally w.r.t. the MDP, following a policy $\pi_{D}$, making up a data set $\mathcal{D}_{raw}=\{(s_{1}^{(i)},a_{1}^{(i)},\ldots,s_{\tau^{(i)}}^{(i)},a_{\tau^{(i)}}^{(i)})\}_{i=1}^{m}$ where $s_{t}^{(i)}$ is the state and $a_{t}^{(i)}$ is the action taken at step $t$ during the $i$th demonstration, and $\tau^{(i)}$ is the (max) time horizon of the $i$th demonstration. Given the Markov assumption though it is sufficient and convenient to consider the demonstrations simply as a collection of $n$-many state, action, next state, next action tuples such that $\mathcal{D}=\{(s_{i},a_{i},s^{\prime}_{i},a^{\prime}_{i})\}_{i=1}^{n}$ with $n=\sum_{i=1}^{m}(\tau^{(i)}-1)$.
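As a concrete illustration of this re-indexing (the function and variable names here are ours, not from the paper), the conversion from raw trajectories $\mathcal{D}_{raw}$ to the tuple data set $\mathcal{D}$ might look like:

```python
def flatten_trajectories(raw_trajectories):
    """Turn m trajectories of (state, action) pairs into the tuple
    data set D = {(s, a, s', a')}, with n = sum_i (tau_i - 1) entries."""
    data = []
    for traj in raw_trajectories:
        for t in range(len(traj) - 1):
            s, a = traj[t]
            s_next, a_next = traj[t + 1]
            data.append((s, a, s_next, a_next))
    return data
```

Each consecutive pair of steps within a trajectory yields one tuple, so a length-$\tau$ trajectory contributes $\tau-1$ entries.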
Apprenticeship through rewards.
Typically AL proceeds by first inferring an appropriate reward function with an IRL procedure (Ng et al., 2000; Ramachandran & Amir, 2007; Rothkopf & Dimitrakakis, 2011; Ziebart et al., 2008), before running forward RL to obtain an appropriate policy. This allows easy mix-and-match procedures, swapping in different standard RL and IRL methods depending on the situation. These algorithms, though, depend either on knowledge of $T$ in order to solve exactly or on the ability to perform roll-outs in the environment, with little previous work focusing on the entirely offline setting. One simple solution is to attempt to learn the dynamics (Herman et al., 2016), though without a large supply of diverse demonstrations or a small environment this becomes impractical given imperfections in the model. Alternatively, Klein et al. (2011) and Lee et al. (2019) attempt off-policy feature matching through least-squares temporal difference learning and deep neural networks to uncover appropriate feature representations.
Implicit-reward policy learning.
Recent work has often forgone an explicit representation of the reward. Moving within the maximum-entropy RL framework (Ziebart, 2010; Levine, 2018), Ho & Ermon (2016) noted that the full procedure (RL $\circ$ IRL) can be interpreted equivalently as the minimisation of some divergence between occupancy measures of the imitator and demonstrator:
$$\displaystyle\underset{\pi}{\operatorname*{arg\,min}}\{\psi^{*}(\rho_{\pi}-\rho_{\pi_{D}})-H(\pi)\},$$
(1)
with $H(\pi)$ being the discounted causal entropy (Bloem & Bambos, 2014) of the policy and $\psi^{*}$ the Fenchel conjugate of a chosen regulariser on the form of the reward. These are typically optimised in an adversarial fashion (Goodfellow et al., 2014) and given the focus on evaluating $\rho_{\pi}$ this often requires extensive interaction with the environment, otherwise banking on approximations over a replay buffer (Kostrikov et al., 2018) or a reformulation of the divergence to allow for off-policy evaluation (Kostrikov et al., 2019). Bear in mind that optimal policies within the maximum-entropy framework are parameterised by a Boltzmann distribution:
$$\displaystyle\pi(a|s)=\frac{\exp(Q(s,a))}{\sum_{b\in\mathcal{A}}\exp(Q(s,b))},$$
(2)
with $Q(s,a)$ the soft $Q$-function, defined recursively via the soft Bellman-equation:
$$\displaystyle Q(s,a)\triangleq R(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim\rho_{\pi}}\Big{[}\underset{a^{\prime}}{\text{soft max }}Q(s^{\prime},a^{\prime})\Big{]}.$$
(3)
Then for a learnt parameterised policy given in terms of $Q$-values from a function approximator $Q_{\theta}$, we can obtain an implied reward given by:
$$\displaystyle R_{Q_{\theta}}(s,a)=Q_{\theta}(s,a)-\gamma\mathbb{E}_{s^{\prime}\sim\rho_{\pi}}\Bigg{[}\log\Bigg{(}\sum_{a^{\prime}\in\mathcal{A}}\exp(Q_{\theta}(s^{\prime},a^{\prime}))\Bigg{)}\Bigg{]}.$$
(4)
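Equations (2)-(4) can be checked numerically in a small tabular MDP by iterating the soft Bellman equation to a fixed point and then inverting it to recover the reward. The sketch below is illustrative and uses our own naming; for simplicity the expectation over next states is taken under the transition kernel:

```python
import numpy as np

def soft_q_iteration(R, T, gamma, n_iter=500):
    """Fixed-point iteration of the soft Bellman equation (3).
    R: (S, A) reward array; T: (S, A, S) transition probabilities."""
    Q = np.zeros_like(R)
    for _ in range(n_iter):
        V = np.log(np.exp(Q).sum(axis=1))  # soft max over actions
        Q = R + gamma * (T @ V)            # expectation over next states
    return Q

def boltzmann_policy(Q):
    """Maximum-entropy optimal policy of equation (2)."""
    e = np.exp(Q - Q.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def implied_reward(Q, T, gamma):
    """Equation (4): the reward implied by a soft Q-function."""
    V = np.log(np.exp(Q).sum(axis=1))
    return Q - gamma * (T @ V)
```

Note that `implied_reward(soft_q_iteration(R, T, gamma), T, gamma)` recovers `R` up to convergence error, which is the round-trip relationship the text describes.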
A number of algorithms make use of this fact: Piot et al. (2014) and Reddy et al. (2019) work by essentially placing a sparsity prior on this implied reward, encouraging it towards zero, and thus incorporating subsequent-state information.
Alternatively, Jarrett et al. (2020) show that even simple behavioural cloning (Bain & Sammut, 1995) is implicitly maximising some reward, with the approximation that the expectation over states is taken with respect to the demonstrator rather than the learnt policy. They then attempt to rectify part of this approximation using the properties of the energy-based model implied by the policy (Grathwohl et al., 2019).
The problem with learning an implicit reward in an offline setting is that it remains just that: implicit, only able to be evaluated at points seen in the demonstrations, and even then only approximately. Thus even if its consideration improves the imitator policy’s performance, it offers no real improvement for interpretation.
2.1 Bayesian Inverse Reinforcement Learning
We are then resigned to directly reasoning about the underlying reward, bringing us back to the question of IRL, and in particular BIRL for a principled approach to reasoning under uncertainty. Given a prior over possible functions, having seen some demonstrations, we calculate the posterior over the function via a theoretically simple application of Bayes’ rule.
Ramachandran & Amir (2007) defines the likelihood of an action at a state as a Boltzmann distribution with inverse temperature and respective state-action values, yielding a probabilistic demonstrator policy given by:
$$\displaystyle\pi_{D}(a|s,R)=\frac{\exp(\beta Q_{R}^{\pi_{D}}(s,a))}{\sum_{b\in\mathcal{A}}\exp(\beta Q_{R}^{\pi_{D}}(s,b))},$$
(5)
where $\beta\in[0,\infty)$ represents the confidence in the optimality of the demonstrator. Note that despite similarities, moving forward we are no longer within the maximum-entropy framework and $Q_{R}^{\pi}(s,a)$ now denotes the traditional, not soft (as in equation 3), state-action value ($Q$-value) function given a reward $R$ and policy $\pi$, such that $Q_{R}^{\pi}(s,a)=\mathbb{E}_{\pi,T}[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t})|s_{0}=s,a_{0}=a]$.
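Under this model the log-likelihood of a demonstration set is a sum of softmax log-probabilities. A minimal sketch (our own naming, taking the $Q$-values as given):

```python
import numpy as np

def demo_log_likelihood(Q, demos, beta):
    """Log-likelihood of (state, action) demonstrations under the
    Boltzmann-rational model of equation (5).
    Q: (S, A) state-action values; beta: optimality confidence."""
    bQ = beta * np.asarray(Q)
    log_Z = np.log(np.exp(bQ).sum(axis=1))  # per-state normaliser
    return sum(bQ[s, a] - log_Z[s] for s, a in demos)
```

As $\beta\to 0$ the demonstrator is modelled as uniformly random, while large $\beta$ concentrates the likelihood on arg-max actions.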
Unsurprisingly this yields an intractable posterior distribution leading to a Markov chain Monte Carlo (MCMC) algorithm based on a random grid-walk to sample from the posterior.
Issues in complex and unknown environments.
This original formulation, alongside extensions that consider maximum-a-posteriori inference (Choi & Kim, 2011) and multiple rewards (Choi & Kim, 2012; Dimitrakakis & Rothkopf, 2011), suffer from three major drawbacks that make them impractical for modern, complicated, and model-free task environments.
1.
The reward is a linear combination of state features. Naturally this is a very restrictive class of functions and assumes access to carefully hand-crafted features of the state space.
2.
The cardinality of the state-space is finite, $|\mathcal{S}|<\infty$. Admittedly this can be relaxed in practical terms, although it does mean the rapid-mixing bounds derived by Ramachandran & Amir (2007) do not hold at all in the infinite case. For finite approximations they scale at $\mathcal{O}(|\mathcal{S}|^{2})$, rapidly becoming vacuous and causing BIRL to inherit the usual MCMC difficulties of assessing convergence and sequential computation (Gamerman & Lopes, 2006).
3.
The requirement of an inner-loop MDP solve. Most importantly, at every step a new reward is sampled and the likelihood of the data must then be evaluated. This requires calculating the $Q$-values of the policy with respect to the reward, in other words running forward RL. While not an insurmountable problem in the simple cases where everything is known and can be quickly solved with a procedure guaranteed to converge correctly, this becomes an issue in the realm where only deep function approximation works adequately (i.e. the non-tabular setting). DQN training, for example, easily stretches into hours (Mnih et al., 2013) and would have to be repeated thousands of times, making the approach completely untenable.
We have seen that even in the most simple setting the problem of exact Bayesian inference over the reward is intractable, and the above limitations of the current MCMC methods are not trivial to overcome. Consequently very little work has been done in the area and open challenges remain. Levine et al. (2011) addressed linearity through a Gaussian process approach, allowing a significantly more flexible, non-linear representation, though introducing issues of its own, namely the computational complexity of inverting large matrices (Rasmussen, 2003).
More recently, Brown & Niekum (2019) have presented the only current solution to the inner-loop problem by introducing an alternative formulation of the likelihood, based on human-recorded pairwise preferences over demonstrations, which significantly reduces the complexity of likelihood calculation but necessitates that such preferences are available. They certainly cannot always be assumed to be, and while very effective for the given task, the approach is not appropriate in the general case.
One of the key aspects of our contribution is that we deal with all three of these issues while also not requiring any additional information.
The usefulness of uncertainty.
On top of the philosophical consistency of Bayesian inference, there are a number of very good reasons for wanting a measure of uncertainty over any uncovered reward that are not available from more traditional IRL algorithms. First and foremost, the (epistemic) uncertainty revealed by Bayesian inference tells us a lot about which areas of the state-space we really cannot say anything about, because we have seen no demonstrations there, potentially informing future data collection if that is possible (Mindermann et al., 2018).
Additionally in the cases we are mostly concerned about (e.g. medicine) we have to be very careful about letting algorithms pick actions in practice and we are interested in performing safe or risk-averse imitation, for which a degree of confidence over learnt rewards is necessary. Brown et al. (2020) for example use a distribution over reward to optimise a conditional value-at-risk instead of expected return so as to bound potential downsides.
3 Approximate Variational Reward Imitation Learning
A variational Bayesian approach.
In this section we detail our method, AVRIL, for efficiently learning an imitator policy and performing reward inference simultaneously. Unlike the previously mentioned methods, that take sampling or MAP-based approaches to $p(R|\mathcal{D})$, we employ variational inference (Blei et al., 2017) to reason about the posterior. Here we posit a surrogate distribution $q_{\phi}(R)$, parameterised by $\phi$, and aim to minimise the Kullback-Leibler (KL) divergence to the posterior, resulting in an optimisation objective:
$$\displaystyle\underset{\phi}{\min}\{D_{\mathrm{KL}}(q_{\phi}(R)||p(R|\mathcal{D}))\}.$$
(6)
This divergence is as troublesome to evaluate as the posterior itself, leading to the auxiliary objective of the Evidence Lower BOund (ELBO):
$$\displaystyle\mathcal{F}(\phi)=\mathbb{E}_{q_{\phi}}\big{[}\log p(\mathcal{D}|R)\big{]}-D_{KL}\big{(}q_{\phi}(R)||p(R)\big{)},$$
(7)
where it can be seen that maximisation of $\mathcal{F}$ over $\phi$ is equivalent to the minimisation in (6).
Generally we are agnostic towards the form of both the prior and the variational distribution; for simplicity we assume a Gaussian process prior over $R$ with mean zero and unit variance, alongside the variational posterior distribution given by $q_{\phi}$ such that:
$$\displaystyle q_{\phi}(R)=\mathcal{N}(R;\mu,\sigma^{2}),$$
(8)
where $\mu,\sigma^{2}$ are the outputs of an encoder neural network taking $s$ as input and parameterised by $\phi$. Note that for the algorithm we will describe these choices are not a necessity; more expressive distributions can easily be substituted if appropriate. Maintaining the assumption of Boltzmann rationality on the part of the demonstrator, our objective takes the form:
$$\displaystyle\mathcal{F}(\phi)=\mathbb{E}_{q_{\phi}}\Bigg{[}\sum_{(s,a)\in\mathcal{D}}\log\frac{\exp(\beta Q_{R}^{\pi_{D}}(s,a))}{\sum_{b\in\mathcal{A}}\exp(\beta Q_{R}^{\pi_{D}}(s,b))}\Bigg{]}-D_{KL}\big{(}q_{\phi}(R)||p(R)\big{)}.$$
(9)
The most interesting (and problematic) part of this objective as ever centres on the evaluation of $Q_{R}^{\pi_{D}}(s,a)$. Notice that what is really required here is an expression of the $Q$-values as a smooth function of the reward such that with samples of $R$ we could take gradients w.r.t. $\phi$. Of course there is little hope of obtaining this simply, by itself it is a harder problem than that of forward RL which only attempts to evaluate the $Q$-values for a specific $R$ and already in complicated environments has to rely on function approximation and limited guarantees.
A naive approach would be to sample $\hat{R}$ and then approximate the $Q$-values with a second neural network, solving offline over the batched data using a least-squares TD/$Q$-learning algorithm, as sampling-based BIRL methods are forced to do. This is in fact doubly inappropriate for our setting: not only does it require a solve as an inner loop, but, importantly, differentiating through the solving operation is extremely impractical, requiring backpropagation through a number of gradient updates that grows essentially unbounded with the complexity of the environment.
A further approximation.
This raises an important question: is it possible to jointly optimise a policy and variational distribution only once, instead of requiring a repeated solve? This is theoretically suspect: the $Q$-values are defined on a singular reward, constrained as $R(s,a)=\mathbb{E}_{s^{\prime},a^{\prime}\sim\pi,T}[Q_{R}^{\pi}(s,a)-\gamma Q_{R}^{\pi}(s^{\prime},a^{\prime})]$, so we cannot learn a single standard $Q$-function that reflects the entire distribution.
But can we learn a policy that reflects the expected reward, using a second policy neural network $Q_{\theta}$? We cannot simply optimise $\theta$ alongside $\phi$ to maximise the ELBO, though, as that completely ignores the fact that the learnt policy is intimately related to the distribution over the reward.
Our solution to ensure that they behave as intended is to constrain $q_{\phi}$ and $Q_{\theta}$ to be consistent with each other; specifically, the reward implied by the policy must be sufficiently likely under the variational posterior (equivalently, its negative log-likelihood must be sufficiently low). Thus we arrive at a constrained optimisation objective given by:
$$\displaystyle\underset{\phi,\theta}{\max}\,\sum_{(s,a)\in\mathcal{D}}\log\frac{\exp(\beta Q_{\theta}(s,a))}{\sum_{b\in\mathcal{A}}\exp(\beta Q_{\theta}(s,b))}-D_{KL}\big{(}q_{\phi}(R)||p(R)\big{)},$$
(10)
$$\displaystyle\text{subject to}\,\mathbb{E}_{\pi,T}[-\log q_{\phi}(Q_{\theta}(s,a)-\gamma Q_{\theta}(s^{\prime},a^{\prime}))]<\epsilon.$$
with $\epsilon$ reflecting the strength of the constraint. Rewriting (10) as a Lagrangian under the KKT conditions (Karush, 1939; Kuhn & Tucker, 1951), and given complimentary slackness, we obtain a practical objective function:
$$\displaystyle\mathcal{F}(\phi,\theta,\mathcal{D})=\sum_{(s,a,s^{\prime},a^{\prime})\in\mathcal{D}}$$
$$\displaystyle\log\frac{\exp(\beta Q_{\theta}(s,a))}{\sum_{b\in\mathcal{A}}\exp(\beta Q_{\theta}(s,b))}-D_{KL}\big{(}q_{\phi}(R(s,a))||p(R(s,a))\big{)}$$
$$\displaystyle+\lambda\log q_{\phi}(Q_{\theta}(s,a)-\gamma Q_{\theta}(s^{\prime},a^{\prime})).$$
(11)
Here the KL divergence between processes is approximated over a countable set, and $\lambda$ is introduced to control the strength of the constraint.
On the implementation.
Optimisation is simple, as both networks maximise the same objective and gradients are easily obtained through backpropagation while remaining amenable to mini-batching, allowing the use of any standard gradient-based stochastic optimisation scheme. We reiterate, though, that AVRIL represents a framework for doing BIRL rather than a specific model, since $Q_{\theta}$ and $q_{\phi}$ represent arbitrary function approximators.
So far we have presented both as neural networks, but this need not be the case. Their advantage is flexibility and ease of training, but they remain inherently black-box; it is perfectly possible to swap in any particular function approximator if the task requires it. Using simple linear models, for example, may slightly hurt performance but allows more insight.
Despite the specific focus on infinite state-spaces, AVRIL can still be applied in the tabular setting by simply representing the policy and variational distribution with multi-dimensional tensors. Having settled on their forms, the objective (11) is calculated simply, and the joint gradient with respect to $\theta$ and $\phi$ is straightforwardly returned using any standard auto-diff package. The whole process is summarised in Algorithm 1.
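As an illustrative sketch of this tabular case (our own naming and simplifications, with the Gaussian $q_{\phi}$ stored as per-state-action mean and log-variance tables), the negated objective (11) might be written as:

```python
import numpy as np

def avril_loss(Q, mu, log_var, data, beta=1.0, lam=1.0, gamma=0.99):
    """Negative of the AVRIL objective (11) in the tabular setting.
    Q: (S, A) policy logits / Q-values; mu, log_var: (S, A) parameters
    of the Gaussian variational posterior q_phi over R;
    data: iterable of (s, a, s_next, a_next) tuples."""
    loss = 0.0
    for s, a, s2, a2 in data:
        # behavioural-cloning term: Boltzmann log-likelihood of the action
        bc = beta * Q[s, a] - np.log(np.exp(beta * Q[s]).sum())
        # KL( q_phi || N(0, 1) ) at the visited state-action pair
        var = np.exp(log_var[s, a])
        kl = 0.5 * (-log_var[s, a] - 1.0 + var + mu[s, a] ** 2)
        # TD-implied reward must be likely under q_phi (lambda-weighted)
        r_td = Q[s, a] - gamma * Q[s2, a2]
        log_q = -0.5 * (np.log(2 * np.pi) + log_var[s, a]
                        + (r_td - mu[s, a]) ** 2 / var)
        loss -= bc - kl + lam * log_q
    return loss
```

In practice the paper's $Q_{\theta}$ and $q_{\phi}$ are function approximators trained by auto-diff; this table-based version only makes the three terms of the objective concrete.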
We can now see how AVRIL avoids the issues outlined in section 2.1. Our form of $q_{\phi}(R)$ is flexible and easily accommodates a non-linear reward given a neural architecture; this also removes any restriction on $\mathcal{S}$, or at least allows any state space commonly tackled within the IL/RL literature. Additionally, we have a single objective for which all parameters are maximised simultaneously: there are no inner loops, costly or otherwise, meaning training is faster than the MCMC methods by a factor roughly equal to the number of samples they require.
The generative model view.
Ultimately a policy represents a generative model for the behavioural data we observe while executing in an environment. Ho & Ermon (2016) explicitly make use of this fact by casting the problem in the GAN framework (Goodfellow et al., 2014). Our method is more analogous (though not identical) to a VAE (Kingma & Welling, 2013): given the graphical model in figure 2, the reward can be seen as a latent representation of the policy. Our approach takes the observed data and amortises the inference, encoding over the state space. While the policy does not act as a decoder in precisely taking the encoded reward and outputting a policy, it does take the reward and state information and translate them into actions and therefore behaviour. This approach has its advantages: first, a meaningful interpretation of the latent reward (which is non-existent in adversarial methods); and second, we forgo the practical difficulties of alternating min-max optimisation (Kodali et al., 2017) while maintaining a generative view of the policy.
Temporal consistency through reward regularisation.
Considering only the first term of (3) yields the standard behavioural cloning setup (where the logit outputs of a classification network can be interpreted as the $Q$-values), as it removes the reward from the equation and simply matches actions to states.
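Concretely, this first term is just a softmax cross-entropy over the $Q$-logits; a minimal sketch in our own notation:

```python
import numpy as np

def bc_loss(q_values, actions, beta=1.0):
    """Negative log-likelihood of demonstrated actions under a
    Boltzmann policy whose logits are the (scaled) Q-values."""
    logits = beta * q_values                        # (batch, n_actions)
    log_z = np.log(np.exp(logits).sum(axis=1))      # softmax normaliser
    log_pi = logits[np.arange(len(actions)), actions] - log_z
    return -log_pi.mean()

# Logits agree with the demonstrated actions -> small loss
q = np.array([[2.0, 0.0], [0.0, 2.0]])
print(bc_loss(q, np.array([0, 1])))
```

Flipping the demonstrated actions (so they disagree with the logits) makes the loss much larger, which is exactly the behavioural-cloning signal.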
AVRIL can then be seen as a policy-learning method regularised by the need for the reward implied by the logits to be consistent. Note that this does not induce any necessary bias, since the logits normally contain an extra degree of freedom allowing them to be arbitrarily shifted by a constant. This freedom is now explicitly constrained by giving the logits additional meaning: they represent $Q$-values.
This places great importance on the KL term: since every parameterisation of a policy has an associated implied reward, the KL regularises these towards the prior, preventing the reward from overfitting to the policy and becoming uninformative.
Depending on the chosen prior, it can also double as a regularising term in a similar manner to previous reward-regularisation methods (Piot et al., 2014; Reddy et al., 2019), encouraging the reward to be close to zero:
Proposition 1 (Reward Regularisation). Assume that the constraint in (10) is satisfied, i.e. $\mathbb{E}_{q_{\phi}}[R(s,a)]=\mathbb{E}_{\pi,T}[Q_{\theta}(s,a)-\gamma Q_{\theta}(s^{\prime},a^{\prime})]$; then, given a standard normal prior $p(R)=\mathcal{N}(R;0,1)$, the KL divergence yields a sparsity regulariser on the implied reward:
$$\displaystyle\mathcal{L}_{reg}=\sum_{(s,a,s^{\prime},a^{\prime})\in\mathcal{D}}\frac{1}{2}\big{(}Q_{\theta}(s,a)-\gamma Q_{\theta}(s^{\prime},a^{\prime})\big{)}^{2}+g(\mathrm{Var}_{q_{\phi}}[R(s,a)]).$$
(12)
Proof. See the appendix; it follows immediately from the fact that the divergence evaluates as $D_{KL}\big{(}q_{\phi}(R(s,a))||p(R(s,a))\big{)}=\frac{1}{2}(-\log(\mathrm{Var}_{q_{\phi}}[R(s,a)])-1+\mathrm{Var}_{q_{\phi}}[R(s,a)]+\mathbb{E}_{q_{\phi}}[R(s,a)]^{2})$. $\square$
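This closed form for the KL of a Gaussian from the standard normal can be sanity-checked numerically; the sketch below compares it against a direct numerical integration of the KL integrand (helper names are ours):

```python
import numpy as np

def kl_closed_form(mu, var):
    # D_KL(N(mu, var) || N(0, 1)) as used in the proposition
    return 0.5 * (-np.log(var) - 1.0 + var + mu ** 2)

def kl_numeric(mu, var, n=200001):
    # Riemann-sum approximation of integral q(x) log(q(x)/p(x)) dx
    x = np.linspace(-20.0, 20.0, n)
    dx = x[1] - x[0]
    q = np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    p = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
    integrand = np.where(q > 0, q * (np.log(q + 1e-300) - np.log(p + 1e-300)), 0.0)
    return float((integrand * dx).sum())

print(kl_closed_form(0.7, 0.5), kl_numeric(0.7, 0.5))
```

Both evaluate to approximately 0.3416, confirming the expression used in the regulariser.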
This allows AVRIL to inherit the benefit of these methods while also explicitly learning a reward that can be queried at any point. We also retain the choice of whether the reward is state-only or state-action. So far this choice has been arbitrary, but it is important to note that a state-only reward is a necessary and sufficient condition for a reward fully disentangled from the dynamics (Fu et al., 2018). Thus, by learning such a reward, and given the final term of (3) that directly connects one-step rewards to the policy, the policy (not the reward) is forced to account for the dynamics of the system, ensuring temporal consistency in a way that BC, for example, simply cannot. Alternatively, with a state-action reward, some of the temporal information inevitably leaks out of the policy and into the reward, ultimately to the detriment of the policy, but potentially allowing a more interpretable (or useful) form of reward depending on the task at hand.
4 Experiments
Experimental setup.
We are primarily concerned with medical environments, which is exactly where the issue of learning without interaction is most crucial: one simply cannot let a policy sample treatments for a patient in order to learn more about the dynamics. It is also where a level of interpretability in what has been learnt is important, since the consequences of actions can greatly impact human lives. As such, we focus our evaluation on learning on a real-life healthcare problem, with demonstrations taken from the Medical Information Mart for Intensive Care (MIMIC-III) dataset (Johnson et al., 2016). The data contain trajectories of patients in intensive care, recording their condition and therapeutic interventions at one-day intervals. We evaluate the ability of the methods to learn a medical policy in both the two- and four-action settings: specifically, whether the patient should be placed on a ventilator, and the decision for ventilation in combination with antibiotic treatment. These represent the two most common, and important, clinical interventions recorded in the data.
Without a recorded notion of reward, performance is measured by action matching against a held-out test set of demonstrations, with cross-validation.
Alongside the healthcare data, and to demonstrate generalisability, we provide additional results on standard control environments of varying complexity from the RL literature: CartPole, a classic control task of balancing a pole upright on a moving cart; Acrobot, which aims to swing a two-link chain above a given height; and LunarLander, guiding a landing module to a safe touchdown on the lunar surface.
In these settings, given sufficient demonstration data, all benchmarks are capable of reaching demonstrator-level performance, so we test the algorithms' sample complexity in the low-data regime by measuring their performance when given access to a select number of trajectories, which we adjust, replicating the setup of Jarrett et al. (2020).
With access to a simulation through the OpenAI gym (Brockman et al., 2016), we measure performance by deploying the learnt policies live and calculating their average return over 300 episodes.
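The evaluation protocol is a plain average-return loop; a sketch with a gym-like reset/step interface (we substitute a stub environment so the snippet is self-contained; in the experiments this would be the actual OpenAI gym environment):

```python
import numpy as np

def evaluate(policy, env, episodes=300):
    """Average return of a policy over live roll-outs,
    matching the 300-episode protocol described above."""
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done = env.step(policy(obs))
            total += reward
        returns.append(total)
    return float(np.mean(returns))

# Stub environment with a gym-like (reset/step) interface, for illustration:
class StubEnv:
    def reset(self):
        self.t = 0
        return 0
    def step(self, action):
        self.t += 1
        return 0, 1.0, self.t >= 10   # obs, reward, done

print(evaluate(lambda obs: 0, StubEnv()))  # → 10.0
```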
Benchmarks.
We test our method (AVRIL) against a number of benchmarks from the offline IRL/IL setting:
Deep Successor Feature Network (DSFN) (Lee et al., 2019), an offline adaptation of max-margin IRL that generalises past the linear methods using a deep network with least-squares temporal-difference learning, the only other method that produces both a reward and policy;
Reward-regularized Classification for Apprenticeship Learning (RCAL) (Piot et al., 2014), where an explicit regulariser on the sparsity of the implied reward is introduced in order to account for the dynamics information;
ValueDICE (VDICE) (Kostrikov et al., 2019), an adversarial imitation-learning method, adapted for the offline setting by removing the replay regularisation;
Energy-based Distribution Matching (EDM) (Jarrett et al., 2020), the state-of-the-art in offline imitation learning;
and finally the standard example of Behavioural Cloning (BC).
To provide evidence that we are indeed learning an appropriate reward we show an ablation of our method on the MIMIC data:
we take the reward learnt by AVRIL and use it as the ‘true’ reward to train a $Q$-network offline and obtain a policy (A-RL).
Note that we have not included previous BIRL methods, for the reasons explained in section 2.1: training a network just once in these environments takes on the order of minutes, and repeating this sequentially thousands of times is simply not practical.
To aid comparison, all methods share the same network architecture of two hidden layers of 64 units with ELU activation functions, and are trained using Adam (Kingma & Ba, 2014) with individually tuned learning rates. Further details on the experimental setup and the implementation of benchmarks can be found in the appendix.
Evaluation.
Across all tasks, AVRIL learns an appropriate policy that performs strongly, being competitive in every case and in places beating all of the other benchmarks. The results for our healthcare example are given in table 1, with AVRIL performing very strongly, achieving the highest accuracy and precision in both tasks.
The results for the control environments are shown in figure 3.
AVRIL performs competitively and is easily capable of reaching demonstrator level performance in the samples given for these tasks, though not always as quickly as some of the dedicated offline IL methods.
Reward insight.
Remember, though, that task performance is not exactly our goal. Rather, the key aspect of AVRIL is inference over the unseen reward, gaining information about the preferences of the agent that black-box policy methods cannot. In the previous experiments our reward encoder was a neural network for maximum flexibility, and the performance of A-RL shows that we learn a representation of the reward that can be used to relearn in the environment very effectively, albeit not quite to the standard of AVRIL itself.
Note this also reflects an original motivation for AVRIL: off-policy RL on top of a learnt reward suffers. In figure 4 we explore how to gain more insight from the learnt reward using different parameterisations. The top graph shows how a learnt state-action reward changes as a function of blood-oxygen level for an otherwise healthy patient: as it drops below average, the reward for ventilating the patient becomes much higher (note this is the average for patients in the ICU, not across the general population). While this is intuitive, we still have to query a neural network repeatedly over the state space to gain insight; the bottom graph of figure 4 therefore presents a simpler but perhaps more useful representation. In this case we learn a state-only reward as before, but as a linear model. This is not a strong constraint on the policy, which remains free to be a non-linear neural network, yet it lets us read off what the model considers high value in the environment by plotting the relative model coefficients for each covariate. We see, for example, that the biggest impact on the estimated quality of a state comes from blood pressure, well known as an important indicator of health (Hepworth et al., 1994) that is strongly affected by trauma and infection.
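A minimal sketch of the linear state-only idea: fit interpretable coefficients to an implied per-state reward target. The data here are synthetic and the fitting route (least squares) is our simplification; in the paper the target comes from the variational posterior:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "patient" covariates and an implied per-state reward target.
X = rng.normal(size=(200, 3))            # states: 3 covariates
true_w = np.array([2.0, -0.5, 0.0])      # covariate 0 dominates
r_implied = X @ true_w + 0.01 * rng.normal(size=200)

# Linear state-only reward model: coefficients are directly
# interpretable as the relative importance of each covariate.
w, *_ = np.linalg.lstsq(X, r_implied, rcond=None)
print(np.round(w, 2))
```

Plotting these coefficients gives exactly the kind of per-covariate importance summary shown in the bottom graph of figure 4.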
Gridworld ground-truth comparison
While environments like MIMIC are the main focus of this work, they do not lend themselves to inspection of the uncovered reward, as the ground truth is simply not available to us. We therefore demonstrate on a toy gridworld environment, in order to clearly see the effect of learning a posterior distribution over the reward. In this (finite) example both the encoder and decoder are represented by tensors, but otherwise the procedure remains the same. Figure 5 plots scaled heat-maps of: a) the ground-truth reward; b) the relative state occupancy of the expert demonstrations, obtained using value iteration; c) the reward posterior mean; and d) the reward standard deviation. Interestingly, the standard deviation of the learnt reward essentially resembles the complement of the state occupancy, revealing the epistemic uncertainty in the parts of the state space where no demonstrations have been seen.
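The expert occupancy in b) comes from value iteration; for concreteness, a minimal value-iteration sketch on a 1-D gridworld (our toy example, not the paper's exact grid):

```python
import numpy as np

# 1-D "gridworld": 5 states in a line, actions move left/right,
# reward 1 for entering the rightmost state, discount gamma = 0.9.
n, gamma = 5, 0.9
R = np.zeros(n); R[-1] = 1.0

def step(s, a):                       # deterministic transitions
    return min(max(s + (1 if a == 1 else -1), 0), n - 1)

V = np.zeros(n)
for _ in range(200):                  # value iteration to convergence
    V = np.array([max(R[step(s, a)] + gamma * V[step(s, a)]
                      for a in (0, 1)) for s in range(n)])

policy = [int(np.argmax([R[step(s, a)] + gamma * V[step(s, a)]
                         for a in (0, 1)])) for s in range(n)]
print(V.round(3), policy)             # greedy expert always moves right
```

Rolling out this greedy policy yields the expert trajectories whose state occupancy is compared against the reward posterior's standard deviation.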
5 Conclusions
We have presented a novel algorithm, Approximate Variational Reward Imitation Learning, addressing the scalability issues that prevent current Bayesian IRL methods from being used in large and unknown environments. We show that it performs strongly on real and toy data for learning imitation policies completely offline and, importantly, recovers a reward that is effective for retraining policies while also offering useful insight into the preferences of the demonstrator.
Of course, this still represents an approximation, and there is room for further, more exact methods, or for guarantees on the maximum divergence.
We have focused on obtaining appropriate uncertainty over the reward, as well as imitation, in high-stakes environments; in these settings it is crucial that learnt policies avoid catastrophic failure, and so how exactly to use this uncertainty to achieve truly safe imitation (or indeed better-than-demonstrator apprenticeship) is of increasing interest.
Acknowledgements
AJC would like to acknowledge and thank Microsoft Research for its support through its PhD Scholarship Program with the EPSRC.
This work was additionally supported by the Office of Naval Research (ONR) and the NSF (Grant number: 1722516).
We would like to thank all of the anonymous reviewers on OpenReview, alongside the many members of the van der Schaar lab, for their input, comments, and suggestions at various stages that have ultimately improved the manuscript.
References
Abbeel & Ng (2004)
Pieter Abbeel and Andrew Y Ng.
Apprenticeship learning via inverse reinforcement learning.
In Proceedings of the twenty-first international conference on
Machine learning, pp. 1, 2004.
Bain & Sammut (1995)
Michael Bain and Claude Sammut.
A framework for behavioural cloning.
In Machine Intelligence 15, pp. 103–129, 1995.
Blei et al. (2017)
David M Blei, Alp Kucukelbir, and Jon D McAuliffe.
Variational inference: A review for statisticians.
Journal of the American statistical Association, 112(518):859–877, 2017.
Bloem & Bambos (2014)
Michael Bloem and Nicholas Bambos.
Infinite time horizon maximum causal entropy inverse reinforcement
learning.
In 53rd IEEE Conference on Decision and Control, pp. 4911–4916. IEEE, 2014.
Brockman et al. (2016)
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman,
Jie Tang, and Wojciech Zaremba.
Openai gym, 2016.
Brown et al. (2020)
Daniel Brown, Scott Niekum, and Marek Petrik.
Bayesian robust optimization for imitation learning.
Advances in Neural Information Processing Systems, 33, 2020.
Brown & Niekum (2019)
Daniel S Brown and Scott Niekum.
Deep bayesian reward learning from preferences.
arXiv preprint arXiv:1912.04472, 2019.
Choi & Kim (2011)
Jaedeug Choi and Kee-Eung Kim.
Map inference for bayesian inverse reinforcement learning.
In Advances in Neural Information Processing Systems, pp. 1989–1997, 2011.
Choi & Kim (2012)
Jaedeug Choi and Kee-Eung Kim.
Nonparametric bayesian inverse reinforcement learning for multiple
reward functions.
In Advances in Neural Information Processing Systems, pp. 305–313, 2012.
Dimitrakakis & Rothkopf (2011)
Christos Dimitrakakis and Constantin A Rothkopf.
Bayesian multitask inverse reinforcement learning.
In European workshop on reinforcement learning, pp. 273–284. Springer, 2011.
Fu et al. (2018)
Justin Fu, Katie Luo, and Sergey Levine.
Learning robust rewards with adversarial inverse reinforcement
learning.
In International Conference on Learning Representations, 2018.
Gamerman & Lopes (2006)
Dani Gamerman and Hedibert F Lopes.
Markov chain Monte Carlo: stochastic simulation for Bayesian
inference.
CRC Press, 2006.
Goodfellow et al. (2014)
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley,
Sherjil Ozair, Aaron Courville, and Yoshua Bengio.
Generative adversarial nets.
In Advances in neural information processing systems, pp. 2672–2680, 2014.
Grathwohl et al. (2019)
Will Grathwohl, Kuan-Chieh Wang, Joern-Henrik Jacobsen, David Duvenaud,
Mohammad Norouzi, and Kevin Swersky.
Your classifier is secretly an energy based model and you should
treat it like one.
In International Conference on Learning Representations, 2019.
Hepworth et al. (1994)
Joseph T Hepworth, Sherry Garrett Hendrickson, and Jean Lopez.
Time series analysis of physiological response during icu visitation.
Western journal of nursing research, 16(6):704–717, 1994.
Herman et al. (2016)
Michael Herman, Tobias Gindele, Jörg Wagner, Felix Schmitt, and Wolfram
Burgard.
Inverse reinforcement learning with simultaneous estimation of
rewards and dynamics.
In Artificial Intelligence and Statistics, pp. 102–110.
PMLR, 2016.
Hill et al. (2018)
Ashley Hill, Antonin Raffin, Maximilian Ernestus, Adam Gleave, Anssi
Kanervisto, Rene Traore, Prafulla Dhariwal, Christopher Hesse, Oleg Klimov,
Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor,
and Yuhuai Wu.
Stable baselines.
https://github.com/hill-a/stable-baselines, 2018.
Ho & Ermon (2016)
Jonathan Ho and Stefano Ermon.
Generative adversarial imitation learning.
In Advances in neural information processing systems, pp. 4565–4573, 2016.
Jarrett et al. (2020)
Daniel Jarrett, Ioana Bica, and Mihaela van der Schaar.
Strictly batch imitation learning by energy-based distribution
matching.
Advances in Neural Information Processing Systems, 33, 2020.
Johnson et al. (2016)
Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-Wei, Mengling Feng,
Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and
Roger G Mark.
MIMIC-III, a freely accessible critical care database.
Scientific data, 3(1):1–9, 2016.
Karush (1939)
William Karush.
Minima of functions of several variables with inequalities as side
constraints.
M. Sc. Dissertation. Dept. of Mathematics, Univ. of Chicago,
1939.
Kingma & Ba (2014)
Diederik P Kingma and Jimmy Ba.
Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980, 2014.
Kingma & Welling (2013)
Diederik P Kingma and Max Welling.
Auto-encoding variational bayes.
arXiv preprint arXiv:1312.6114, 2013.
Klein et al. (2011)
Edouard Klein, Matthieu Geist, and Olivier Pietquin.
Batch, off-policy and model-free apprenticeship learning.
In European Workshop on Reinforcement Learning, pp. 285–296. Springer, 2011.
Kodali et al. (2017)
Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira.
On convergence and stability of gans.
arXiv preprint arXiv:1705.07215, 2017.
Kostrikov et al. (2018)
Ilya Kostrikov, Kumar Krishna Agrawal, Debidatta Dwibedi, Sergey Levine, and
Jonathan Tompson.
Discriminator-actor-critic: Addressing sample inefficiency and reward
bias in adversarial imitation learning.
In International Conference on Learning Representations, 2018.
Kostrikov et al. (2019)
Ilya Kostrikov, Ofir Nachum, and Jonathan Tompson.
Imitation learning via off-policy distribution matching.
International Conference on Learning Representations (ICLR),
2019.
Kuhn & Tucker (1951)
HW Kuhn and AW Tucker.
Nonlinear programming.
In Proceedings of the Second Berkeley Symposium on Mathematical
Statistics and Probability. The Regents of the University of California,
1951.
Lee et al. (2019)
Donghun Lee, Srivatsan Srinivasan, and Finale Doshi-Velez.
Truly batch apprenticeship learning with deep successor features.
International Joint Conference on Artificial Intelligence
(IJCAI), 2019.
Levine (2018)
Sergey Levine.
Reinforcement learning and control as probabilistic inference:
Tutorial and review.
arXiv preprint arXiv:1805.00909, 2018.
Levine et al. (2011)
Sergey Levine, Zoran Popovic, and Vladlen Koltun.
Nonlinear inverse reinforcement learning with gaussian processes.
In Advances in Neural Information Processing Systems, pp. 19–27, 2011.
Mindermann et al. (2018)
Sören Mindermann, Rohin Shah, Adam Gleave, and Dylan Hadfield-Menell.
Active inverse reward design.
arXiv preprint arXiv:1809.03060, 2018.
Mnih et al. (2013)
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis
Antonoglou, Daan Wierstra, and Martin Riedmiller.
Playing atari with deep reinforcement learning.
arXiv preprint arXiv:1312.5602, 2013.
Ng et al. (2000)
Andrew Y Ng, Stuart J Russell, et al.
Algorithms for inverse reinforcement learning.
In ICML, volume 1, p. 2, 2000.
Piot et al. (2014)
Bilal Piot, Matthieu Geist, and Olivier Pietquin.
Boosted and reward-regularized classification for apprenticeship
learning.
In Proceedings of the 2014 international conference on
Autonomous agents and multi-agent systems, pp. 1249–1256. International
Foundation for Autonomous Agents and Multiagent Systems, 2014.
Raffin (2018)
Antonin Raffin.
Rl baselines zoo.
https://github.com/araffin/rl-baselines-zoo, 2018.
Ramachandran & Amir (2007)
Deepak Ramachandran and Eyal Amir.
Bayesian inverse reinforcement learning.
In IJCAI, volume 7, pp. 2586–2591, 2007.
Rasmussen (2003)
Carl Edward Rasmussen.
Gaussian processes in machine learning.
In Summer School on Machine Learning, pp. 63–71. Springer,
2003.
Reddy et al. (2019)
Siddharth Reddy, Anca D Dragan, and Sergey Levine.
Sqil: Imitation learning via reinforcement learning with sparse
rewards.
In International Conference on Learning Representations, 2019.
Rothkopf & Dimitrakakis (2011)
Constantin A Rothkopf and Christos Dimitrakakis.
Preference elicitation and inverse reinforcement learning.
In Joint European conference on machine learning and knowledge
discovery in databases, pp. 34–48. Springer, 2011.
Schulman et al. (2017)
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.
Proximal policy optimization algorithms.
arXiv preprint arXiv:1707.06347, 2017.
Ziebart (2010)
Brian D Ziebart.
Modeling Purposeful Adaptive Behavior with the Principle of
Maximum Causal Entropy.
PhD thesis, University of Washington, 2010.
Ziebart et al. (2008)
Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey.
Maximum entropy inverse reinforcement learning.
In AAAI, volume 8, pp. 1433–1438, 2008.
Appendix A Experimental Setup
Expert Demonstrators.
Demonstrations are produced by running pre-trained, hyperparameter-optimised agents taken from the RL Baselines Zoo (Raffin, 2018) in OpenAI Stable Baselines (Hill et al., 2018). For Acrobot and LunarLander these are DQNs (Mnih et al., 2013), while CartPole uses PPO2 (Schulman et al., 2017). Trajectories were then sub-sampled, keeping every 20th step in Acrobot and CartPole, and every 5th step in LunarLander.
Testing setup.
For the control environments, algorithms were presented with (1, 3, 7, 10, 15) trajectories uniformly sampled from a pool of 1000 expert trajectories. Each algorithm was then trained until convergence and tested by performing 300 live roll-outs in the simulated environment, recording the average accumulated reward per episode. This whole process was repeated 10 times, each time with different initialisations and sampled trajectories.
Implementations.
All methods are neural network based and so in experiments they share the same architecture of 2 hidden layers of 64 units each connected by exponential linear unit (ELU) activation functions.
Publicly available code was used in the implementations of a number of the benchmarks, specifically:
•
VDICE (Kostrikov et al., 2019):
https://github.com/google-research/google-research/tree/master/value_dice
•
DSFN (Lee et al., 2019):
https://github.com/dtak/batch-apprenticeship-learning
•
EDM (Jarrett et al., 2020):
https://github.com/wgrathwohl/JEM
Note that VDICE was originally designed for continuous actions with a Normal distribution output, which we adapt for our experiments by replacing it with a Gumbel-softmax.
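A minimal sketch of the Gumbel-softmax relaxation used in this adaptation (temperature, shapes, and seed are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    """Differentiable relaxation of sampling from a categorical
    distribution: perturb logits with Gumbel noise, then softmax."""
    u = rng.uniform(1e-12, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))               # Gumbel(0, 1) noise
    y = (logits + g) / tau
    e = np.exp(y - y.max())               # numerically stable softmax
    return e / e.sum()

sample = gumbel_softmax(np.array([2.0, 0.5, 0.1]), tau=0.5)
print(sample)   # a point on the simplex; low tau concentrates it on one action
```

As the temperature `tau` decreases, samples approach one-hot vectors, giving discrete actions while keeping the objective differentiable.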
Appendix B Proofs
Proof of proposition 1.
Assuming the constraint is satisfied, we are maximising the following objective:
$$\displaystyle\mathcal{F}(\phi,\theta)=\sum_{(s,a,s^{\prime},a^{\prime})\in\mathcal{D}}\log\frac{\exp(\beta Q_{\theta}(s,a))}{\sum_{b\in\mathcal{A}}\exp(\beta Q_{\theta}(s,b))}-D_{KL}\big{(}q_{\phi}(R(s,a))||p(R(s,a))\big{)}$$
(13)
which is equivalent to minimising the negative value
$$\displaystyle-\mathcal{F}(\phi,\theta)=\sum_{(s,a,s^{\prime},a^{\prime})\in\mathcal{D}}\underbrace{-\log\frac{\exp(\beta Q_{\theta}(s,a))}{\sum_{b\in\mathcal{A}}\exp(\beta Q_{\theta}(s,b))}}_{\mathcal{L}_{BC}}+\underbrace{D_{KL}\big{(}q_{\phi}(R(s,a))||p(R(s,a))\big{)}}_{\mathcal{L}_{reg}},$$
(14)
with the first term $\mathcal{L}_{BC}$ being the negative log-likelihood of the data, i.e. the classic behavioural cloning objective. Given a standard Gaussian prior, the KL divergence of a Gaussian with mean $\mu$ and variance $\sigma^{2}$ from the prior is $\frac{1}{2}(-\log(\sigma^{2})+\sigma^{2}-1+\mu^{2})$ (Kingma & Welling, 2013). Then, with our prior $p(R(s,a))=\mathcal{N}(R;0,1)$, the KL term evaluates as:
$$\displaystyle\mathcal{L}_{reg}$$
$$\displaystyle=\sum_{(s,a,s^{\prime},a^{\prime})\in\mathcal{D}}D_{KL}\big{(}q_{\phi}(R(s,a))||p(R(s,a))\big{)}$$
(15)
$$\displaystyle=\sum_{(s,a,s^{\prime},a^{\prime})\in\mathcal{D}}\frac{1}{2}(-\log(\mathrm{Var}_{q_{\phi}}[R(s,a)])+\mathrm{Var}_{q_{\phi}}[R(s,a)]-1+\mathbb{E}_{q_{\phi}}[R(s,a)]^{2})$$
(16)
$$\displaystyle=\sum_{(s,a,s^{\prime},a^{\prime})\in\mathcal{D}}\frac{1}{2}(\mathbb{E}_{q_{\phi}}[R(s,a)]^{2})+g(\mathrm{Var}_{q_{\phi}}[R(s,a)])$$
(17)
$$\displaystyle=\sum_{(s,a,s^{\prime},a^{\prime})\in\mathcal{D}}\frac{1}{2}\big{(}Q_{\theta}(s,a)-\gamma Q_{\theta}(s^{\prime},a^{\prime})\big{)}^{2}+g(\mathrm{Var}_{q_{\phi}}[R(s,a)])$$
(18)
since by assumption $\mathbb{E}_{q_{\phi}}[R(s,a)]=\mathbb{E}_{\pi,T}[Q_{\theta}(s,a)-\gamma Q_{\theta}(s^{\prime},a^{\prime})]$, with the expectation approximated over samples in the data, and where we define $g(\mathrm{Var}_{q_{\phi}}[R(s,a)])=\frac{1}{2}(-\log(\mathrm{Var}_{q_{\phi}}[R(s,a)])+\mathrm{Var}_{q_{\phi}}[R(s,a)]-1)$. $\square$
Private Algebras In Quantum Information And Infinite-Dimensional Complementarity
Jason Crann
${}^{1}$School of Mathematics & Statistics, Carleton University, Ottawa, ON, Canada H1S 5B6
${}^{2}$Université Lille 1 - Sciences et Technologies, UFR de Mathématiques, Laboratoire de Mathématiques Paul Painlevé
- UMR CNRS 8524, 59655 Villeneuve d’Ascq Cédex, France
[email protected]
,
David W. Kribs
${}^{1}$Department of Mathematics & Statistics, University of Guelph, Guelph, ON, Canada N1G 2W1
${}^{2}$Institute for Quantum Computing, University of Waterloo, Waterloo, ON, Canada N2L 3G1
[email protected]
,
Rupert H. Levene
School of Mathematical Sciences, University College Dublin, Belfield, Dublin 4, Ireland
[email protected]
and
Ivan G. Todorov
Pure Mathematics Research Centre, Queen’s University Belfast, Belfast BT7 1NN, United Kingdom
[email protected]
(Date: 14 August 2015)
Abstract.
We introduce a generalized framework for private quantum codes using
von Neumann algebras and the structure of commutants. This leads
naturally to a more general notion of complementary channel, which
we use to establish a generalized complementarity theorem between
private and correctable subalgebras that applies to both the finite and
infinite-dimensional settings. Linear bosonic channels are considered and
specific examples of Gaussian quantum channels are given to illustrate the new framework together
with the complementarity theorem.
1. Introduction
One of the most basic notions in quantum privacy is that of a private quantum code. Arising initially as the quantum
analogue of the classical one-time pad, such codes were first called private quantum channels and investigated for optimal encryption
schemes [2, 10]. The subject has grown considerably over the past decade and a half, with related applications in quantum secret
sharing [13, 14] and the terminology “private quantum subsystems” taking hold as part of work on the theory of private shared
reference frames [4, 5]. In recent years, focus in the subject has turned to investigating relevant properties of completely
positive maps. This has led to connections established with quantum error correction [22], discussed in more detail below, as well as
algebraic conditions characterizing private subsystems and new, surprisingly simple examples that suggest private subsystems are more
ubiquitous than previously thought [19, 20]. These more recent works, along with [12], have also suggested deeper connections with the theory of
operator algebras, opening up the possibility of extending the subject to infinite-dimensional Hilbert spaces.
From a different but related direction, throughout the development of quantum theory, the notion of complementarity
has played a fundamental role in the interpretation of quantum measurements,
providing, for instance, the theoretical basis behind quantum state tomography. At the level of
quantum channels, an appropriate notion of complementarity has been formulated and shown to be vital for understanding
their overall structure [15, 23]. An underlying feature of complementarity is the trade-off between
information and disturbance. For finite-dimensional quantum channels,
this trade-off was quantified in [25], and was used to establish
a complementarity theorem between private and correctable
subsystems for a channel and its complementary channel [22].
As there is a more general framework for (infinite-dimensional)
quantum error correction at the level of von Neumann algebras [6, 7, 8, 9],
a natural question is to seek a generalized notion of private quantum codes that is also viable in the infinite-dimensional
setting, and for which a suitable complementarity theorem holds. Using von Neumann algebras
and the structure of commutants, in this paper we introduce a generalized framework
for private quantum codes which may be seen as the complementary analogue of so-called operator
algebra error correction, resulting in a natural notion of “private algebras”. This in turn leads to a more
general notion of complementary channel, and we establish a generalized complementarity
theorem for arbitrary dimensions in the new framework. As a corollary, we also obtain
a structure theorem for correctable subalgebras that generalizes a
finite-dimensional result [21]. We finish by illustrating the framework and concepts for infinite-dimensional
linear bosonic channels and a specific class of Gaussian quantum channels [16, 18].
The outline of the paper is as follows. In Section 2, we
discuss the necessary preliminaries on infinite-dimensional channels
and von Neumann algebras. We then introduce our
generalized framework for private quantum codes in Section 3,
and discuss their basic properties and examples. Section 4 contains the
generalized complementarity theorem and its aforementioned application.
In Section 5, we study
explicit examples of linear bosonic and Gaussian quantum channels
which illustrate the new framework along with the complementarity
theorem. We end with a conclusion summarizing the results of the paper
and an outlook on future work.
2. Preliminaries
Let $S$ be a (not necessarily finite-dimensional) Hilbert space. We
assume that the inner product is linear in the second variable
and denote by $\mathcal{B}(S)$ (resp. $\mathcal{T}(S)$) the space of all bounded linear
(resp. trace class) operators on $S$.
There is a canonical isometric isomorphism between the Banach space
dual $\mathcal{T}(S)^{*}$ of $\mathcal{T}(S)$ and $\mathcal{B}(S)$ via the trace:
$$\langle T,\rho\rangle:=\mathop{\rm Tr}\nolimits(T\rho),\quad T\in\mathcal{B}(S),\ \rho\in\mathcal{T}(S).$$
Thus, $\mathcal{T}(S)$ can be identified with the space of normal (i.e. weak* continuous) linear
functionals on $\mathcal{B}(S)$, where,
if $|\eta\rangle\in S$ and $\langle\xi|$ belongs to the dual $S^{*}$ of $S$, the rank one operator
$|\eta\rangle\langle\xi|\in\mathcal{T}(S)$ corresponds to the vector functional given by
$\omega_{\xi,\eta}(X)=\langle\xi|X|\eta\rangle$, $X\in\mathcal{B}(S)$.
We denote by $\mathcal{S}(S)$ the set of all states on $S$; thus, an
element $\rho\in\mathcal{T}(S)$ belongs to $\mathcal{S}(S)$ precisely when $\rho$
is positive (that is, $\langle X,\rho\rangle=\mathop{\rm Tr}\nolimits(X\rho)\geq 0$ whenever
$X\geq 0$) and $\langle I,\rho\rangle=\mathop{\rm Tr}\nolimits(\rho)=1$.
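In finite dimensions these statements are easy to verify concretely; a small numerical illustration with a $2\times 2$ density matrix of our choosing (the pairing, the state conditions, and the rank-one/vector-functional correspondence):

```python
import numpy as np

# The pairing <T, rho> = Tr(T rho) between observables and trace-class
# operators, illustrated on a finite-dimensional Hilbert space.
rho = np.array([[0.7, 0.1], [0.1, 0.3]], dtype=complex)   # a density matrix
T = np.array([[1.0, 2.0], [2.0, -1.0]], dtype=complex)    # an observable

pairing = np.trace(T @ rho).real
eigs = np.linalg.eigvalsh(rho)        # rho is a state: eigenvalues >= 0
print(pairing, eigs, np.trace(rho).real)

# The rank-one operator |eta><xi| corresponds to the vector functional
# omega_{xi,eta}(X) = <xi| X |eta>:
xi, eta = np.array([1.0, 0.0]), np.array([0.0, 1.0])
omega = np.trace(T @ np.outer(eta, xi.conj()))
print(omega.real)                     # equals <xi| T |eta>
```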
If $S$ and $S^{\prime}$ denote the respective input and output systems of a dynamical quantum process, then,
in the Schrödinger picture, states in $\mathcal{T}(S)$ evolve under a completely positive trace preserving (CPTP) map to states
in $\mathcal{T}(S^{\prime})$. In the Heisenberg picture, which will be adopted in this paper,
observables in $\mathcal{B}(S^{\prime})$ evolve under a normal
(i.e. weak*-weak* continuous) unital completely positive (NUCP)
map $\mathcal{E}$ to observables in $\mathcal{B}(S)$.
As a normal map, $\mathcal{E}$ has a unique pre-adjoint $\mathcal{E}_{*}:\mathcal{T}(S)\rightarrow\mathcal{T}(S^{\prime})$ which is a
CPTP map describing the corresponding evolution of states.
Suppose that in the above scenario, one wished, or had the ability, to
measure only a certain subset $\mathcal{O}$ of observables on the output
space $S^{\prime}$. The results of the measurements will then be governed by
the spectral projections of the corresponding elements in $\mathcal{O}$,
which, by general spectral theory, lie in the von Neumann algebra $M$
generated by $\mathcal{O}$. Thus, the relevant dynamics is encoded in the
restriction of $\mathcal{E}$ to $M$, that is, in a NUCP map $\mathcal{E}:M\rightarrow\mathcal{B}(S)$. As such mappings are natural objects of
study in operator algebra theory, and include the class of classical
information channels, we will adopt this more general framework in
this paper. The remainder of this section will be devoted to a
brief overview of the relevant concepts; for details, we refer the
reader to [27, 28].
A von Neumann algebra on a Hilbert space $S$ is a
$*$-subalgebra $M$ of $\mathcal{B}(S)$ with unit $1_{M}=I_{S}\in M$
which is closed in the strong operator topology.
For a subset $L\subseteq\mathcal{B}(S)$, its commutant is the
subspace
$$L^{\prime}:=\{X\in\mathcal{B}(S)\mid XT=TX,\ \mbox{ for all }T\in L\}.$$
Von Neumann’s bicommutant theorem states that a unital $*$-subalgebra
$M$ of $\mathcal{B}(S)$ is a von Neumann algebra if and only if $M^{\prime\prime}:=(M^{\prime})^{\prime}$
coincides with $M$. As $(L^{\prime})^{\prime\prime}=L^{\prime}$ for any subset
$L\subseteq\mathcal{B}(S)$, the commutant $M^{\prime}$ of a von Neumann algebra
$M$ is again a von Neumann algebra on $S$.
Another distinguishing feature of a von Neumann algebra $M$ is that it is (isometrically isomorphic to) the dual of a unique Banach space $M_{*}$,
called the predual of $M$, which consists of all weak* continuous linear functionals on $M$. For example, $M=\mathcal{B}(S)$ is a von Neumann algebra with $M_{*}=\mathcal{T}(S)$.
We will denote by $\mathcal{S}(M)$ the set of normal states on $M$, that is,
the positive elements $\rho$ of $M_{*}$ satisfying $\langle I_{S},\rho\rangle=1$.
If $M$ and $N$ are von Neumann algebras,
a bounded linear map $\mathcal{E}:M\rightarrow N$ is said to be normal if it is weak*-weak* continuous.
In this case, $\mathcal{E}$ has a unique pre-adjoint $\mathcal{E}_{*}:N_{*}\rightarrow M_{*}$ satisfying
$$\langle X,\mathcal{E}_{*}(\rho)\rangle=\langle\mathcal{E}(X),\rho\rangle,\quad X\in M,\ \rho\in N_{*}.$$
Moreover, $\mathcal{E}$ is a NUCP map if and only if $\mathcal{E}_{*}$ is completely positive and
$\mathcal{E}_{*}(\mathcal{S}(N))\subseteq\mathcal{S}(M)$.
Given two Hilbert spaces $S$ and $S^{\prime}$, we denote by $S\otimes S^{\prime}$
their Hilbertian tensor product. For operators $X\in\mathcal{B}(S)$ and
$Y\in\mathcal{B}(S^{\prime})$, as usual we denote by $X\otimes Y$ the (unique)
operator in $\mathcal{B}(S\otimes S^{\prime})$ with $(X\otimes Y)(\xi\otimes\eta)=X\xi\otimes Y\eta$, $\xi\in S$, $\eta\in S^{\prime}$. If $M\subseteq\mathcal{B}(S)$
and $N\subseteq\mathcal{B}(S^{\prime})$ are von Neumann algebras, the weak* closed
linear span $M\bar{\otimes}N$ of $\{X\otimes Y\mid X\in M,\ Y\in N\}$
is a von Neumann subalgebra of $\mathcal{B}(S\otimes S^{\prime})$. In particular,
$\mathcal{B}(S)\bar{\otimes}\mathcal{B}(S^{\prime})=\mathcal{B}(S\otimes S^{\prime})$. If $\rho\in M_{*}$
and $\omega\in N_{*}$, then there exists a (unique) element
$\rho\otimes\omega\in(M\bar{\otimes}N)_{*}$ such that
$$\langle X\otimes Y,\rho\otimes\omega\rangle=\langle X,\rho\rangle\langle Y,\omega\rangle,\quad X\in M,\ Y\in N.$$
Thus, we have a natural embedding of the algebraic tensor product $M_{*}\odot N_{*}$
into $(M\bar{\otimes}N)_{*}$; its image is norm dense in $(M\bar{\otimes}N)_{*}$.
Given a Hilbert space $S$ and a von Neumann algebra $M$, a quantum channel is a NUCP map
$\mathcal{E}:M\to\mathcal{B}(S)$. (This is the dual viewpoint of how channels are typically presented
in quantum information theory as CP trace-preserving maps, but is more natural in the operator algebra
setting.)
Note that a quantum channel $\mathcal{E}$ is automatically completely bounded (see e.g. [27]).
We denote by $\|\Phi\|_{\mathop{\rm cb}}$ the c.b. norm of a completely bounded map $\Phi$.
In the case $M=\mathbb{C}$, an important example is the depolarizing channel
$\mathcal{D}:\mathbb{C}\to\mathcal{B}(S)$ given by $\mathcal{D}(\lambda)=\lambda I$.
It is straightforward to check that $\mathcal{D}_{*}:\mathcal{T}(S)\to\mathbb{C}$ coincides with the trace.
If $\mathcal{F}:N\rightarrow\mathcal{B}(S^{\prime})$ is another quantum channel on the von Neumann algebra $N$, then there exists a (unique)
quantum channel $\mathcal{E}\otimes\mathcal{F}:M\bar{\otimes}N\rightarrow\mathcal{B}(S\otimes S^{\prime})$ such that
$$(\mathcal{E}\otimes\mathcal{F})(X\otimes Y)=\mathcal{E}(X)\otimes\mathcal{F}(Y),\quad X\in M,\ Y\in N.$$
Channels can similarly be tensored in the Schrödinger picture, and
$(\mathcal{E}\otimes\mathcal{F})_{*}=\mathcal{E}_{*}\otimes\mathcal{F}_{*}$.
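In finite dimensions the tensor construction is transparent in Kraus form: the Kraus operators of $\mathcal{E}\otimes\mathcal{F}$ are the products $K_i\otimes L_j$. The sketch below is our illustration (hypothetical names `apply`, `E_kraus`, `F_kraus`); it tensors a qubit phase flip with a qubit bit flip and checks $(\mathcal{E}\otimes\mathcal{F})(X\otimes Y)=\mathcal{E}(X)\otimes\mathcal{F}(Y)$.

```python
import numpy as np

# Illustrative sketch: tensoring two channels given in Kraus form.
Z = np.diag([1.0, -1.0])
Sx = np.array([[0.0, 1.0], [1.0, 0.0]])
E_kraus = [np.eye(2) / np.sqrt(2), Z / np.sqrt(2)]    # phase flip
F_kraus = [np.eye(2) / np.sqrt(2), Sx / np.sqrt(2)]   # bit flip

def apply(kraus, T):   # Heisenberg-picture action: sum_i K_i^* T K_i
    return sum(Ki.conj().T @ T @ Ki for Ki in kraus)

# Kraus operators of the tensor channel are all products K_i (x) L_j:
EF_kraus = [np.kron(Ki, Lj) for Ki in E_kraus for Lj in F_kraus]

A = np.array([[1, 2], [2, 0.5]])
B = np.array([[0, 1j], [-1j, 3]])
lhs = apply(EF_kraus, np.kron(A, B))                  # (E (x) F)(A (x) B)
rhs = np.kron(apply(E_kraus, A), apply(F_kraus, B))   # E(A) (x) F(B)
assert np.allclose(lhs, rhs)
assert np.allclose(apply(EF_kraus, np.eye(4)), np.eye(4))  # tensor channel is unital
```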
Stinespring’s theorem for normal maps
asserts that if $\mathcal{E}:M\rightarrow\mathcal{B}(S)$ is a quantum
channel, then there exist a Hilbert space $H$, a normal unital
$*$-homomorphism $\pi:M\rightarrow\mathcal{B}(H)$ and an isometry $V:S\rightarrow H$ such that
(1)
$$\mathcal{E}(X)=V^{*}\pi(X)V,\quad X\in M.$$
We refer to the triple $(\pi,V,H)$ as a Stinespring triple for
$\mathcal{E}$, and to identity
(1) as a Stinespring representation of $\mathcal{E}$.
Such a Stinespring representation is unique up to
a conjugation by a partial isometry
in the following sense: if $(\pi_{1},V_{1},H_{1})$ and $(\pi_{2},V_{2},H_{2})$ are
Stinespring triples for $\mathcal{E}$, then there is a partial isometry
$U:H_{1}\rightarrow H_{2}$ such that
(2)
$$UV_{1}=V_{2},\quad U^{*}V_{2}=V_{1}\ \ \text{and}\ \ U\pi_{1}(X)=\pi_{2}(X)U$$
for all $X\in M$. If $(\pi_{1},V_{1},H_{1})$ yields a minimal
Stinespring representation, meaning that the linear span of
$\pi_{1}(M)V_{1}S$ is a dense subspace of $H_{1}$, then we will call
$(\pi_{1},V_{1},H_{1})$ a minimal Stinespring triple for $\mathcal{E}$. In this case, the map $U$ above
is necessarily an isometry,
and any two minimal Stinespring representations for $\mathcal{E}$ are
unitarily equivalent.
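For channels on matrix algebras a Stinespring triple can be written down explicitly from a Kraus decomposition. The following sketch is our illustration, assuming the standard finite-dimensional construction $H=S\otimes\mathbb{C}^{r}$, $\pi(X)=X\otimes I_{r}$ and $V\xi=\sum_{i}K_{i}\xi\otimes e_{i}$; it checks that $V$ is an isometry and that $\mathcal{E}(X)=V^{*}\pi(X)V$.

```python
import numpy as np

# Illustrative sketch: a Stinespring triple (pi, V, H) for a channel
# E(X) = sum_i K_i^* X K_i, with H = S (x) C^r and pi(X) = X (x) I_r.
g = 0.25
K = [np.array([[1, 0], [0, np.sqrt(1 - g)]]),   # amplitude damping (example)
     np.array([[0, np.sqrt(g)], [0, 0]])]
r = len(K)

# V xi = sum_i K_i xi (x) e_i, assembled as a (2r x 2) matrix:
V = sum(np.kron(Ki, np.eye(r)[:, [i]]) for i, Ki in enumerate(K))

assert np.allclose(V.conj().T @ V, np.eye(2))   # V is an isometry (sum K_i^* K_i = I)
X = np.array([[0.3, 1 - 2j], [1 + 2j, -1.0]])
pi_X = np.kron(X, np.eye(r))                    # the representation pi(X)
stinespring = V.conj().T @ pi_X @ V             # V^* pi(X) V
kraus = sum(Ki.conj().T @ X @ Ki for Ki in K)   # sum_i K_i^* X K_i
assert np.allclose(stinespring, kraus)          # E(X) = V^* pi(X) V
```

When the $K_i$ are linearly independent, this triple is minimal in the sense recalled above.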
3. Private Quantum Codes via Commutant Structures
We now introduce our generalized notion of privacy for quantum channels.
Given Hilbert spaces $S$ and $S^{\prime}$ and a bounded operator $T:S^{\prime}\to S$,
we write $\mathcal{C}_{T}:\mathcal{B}(S^{\prime})\to\mathcal{B}(S)$, $\mathcal{C}_{T}(X)=TXT^{*}$ for
conjugation by $T$. Clearly, if $T$ is a partial isometry then $\mathcal{C}_{T}$ is a
quantum channel from $\mathcal{B}(S^{\prime})$ into $\mathcal{B}(TT^{*}S)$.
For a Hilbert space $S$, we let $\mathcal{P}(S)$ denote the set of projections in $\mathcal{B}(S)$.
Definition 3.1.
Let $S$ be a Hilbert space, let $M$ be a von Neumann algebra, and
let $\mathcal{E}:M\rightarrow\mathcal{B}(S)$ be a quantum channel.
If $P\in\mathcal{P}(S)$, a von Neumann subalgebra
$N\subseteq\mathcal{B}(PS)$ is called private for $\mathcal{E}$ with respect to $P$
if
$$\mathcal{C}_{P}\circ\mathcal{E}(M)\subseteq N^{\prime}.$$
Given $\varepsilon>0$,
we say that $N$ is $\varepsilon$-private for $\mathcal{E}$ with respect to $P$
if there exists a quantum channel $\mathcal{F}:M\rightarrow\mathcal{B}(S)$ such that
$$\lVert\mathcal{E}-\mathcal{F}\rVert_{\mathop{\rm cb}}<\varepsilon$$
and $N$ is private for $\mathcal{F}$ with respect to $P$. If $P=I$,
we simply say that $N\subseteq\mathcal{B}(S)$ is
private (resp. $\varepsilon$-private) for $\mathcal{E}$.
Remark.
The definition of a private subalgebra is motivated by the notion of
an operator private subsystem [4, 5, 19, 20, 22].
Recall that, if
$S,A,B$ and $S^{\prime}$ are finite-dimensional Hilbert spaces with $S=(A\otimes B)\oplus(A\otimes B)^{\perp}$ and $\mathcal{E}:\mathcal{B}(S^{\prime})\rightarrow\mathcal{B}(S)$ is a UCP map with pre-adjoint $\mathcal{E}_{*}:\mathcal{T}(S)\rightarrow\mathcal{T}(S^{\prime})$, then $B$ is called an operator private subsystem for $\mathcal{E}$
if $\mathcal{E}_{*}\circ(\mathcal{C}_{P})_{*}=\mathcal{F}_{*}\otimes\mathop{\rm Tr}\nolimits$ for some quantum
channel $\mathcal{F}:\mathcal{B}(S^{\prime})\rightarrow\mathcal{B}(A)$, where $P$ is the
projection from $S$ onto $A\otimes B$ [22]. Assuming $P\rho P=\sum_{i=1}^{n}\rho_{i}^{A}\otimes\rho_{i}^{B}$, where $\rho_{i}^{A}$
(resp. $\rho_{i}^{B}$) are elements of $\mathcal{T}(A)$ (resp. $\mathcal{T}(B)$), we
have
$$\displaystyle\langle\mathcal{C}_{P}\circ\mathcal{E}(T),\rho\rangle=\langle T,\mathcal{E}_{*}\circ(\mathcal{C}_{P})_{*}(\rho)\rangle=\langle T,\mathcal{E}_{*}(P\rho P)\rangle$$
$$\displaystyle=\sum_{i=1}^{n}\langle T,\mathcal{E}_{*}(\rho_{i}^{A}\otimes\rho_{i}^{B})\rangle=\sum_{i=1}^{n}\langle T,(\mathcal{F}_{*}\otimes\mathop{\rm Tr}\nolimits)(\rho_{i}^{A}\otimes\rho_{i}^{B})\rangle$$
$$\displaystyle=\sum_{i=1}^{n}\langle T,\mathcal{F}_{*}(\rho_{i}^{A})\rangle\langle I_{B},\rho_{i}^{B}\rangle=\sum_{i=1}^{n}\langle\mathcal{F}(T)\otimes I_{B},\rho_{i}^{A}\otimes\rho_{i}^{B}\rangle$$
$$\displaystyle=\langle\mathcal{F}(T)\otimes I_{B},P\rho P\rangle=\langle P(\mathcal{F}(T)\otimes I_{B})P,\rho\rangle.$$
Thus,
$$\mathcal{C}_{P}\circ\mathcal{E}(\mathcal{B}(S^{\prime}))\subseteq\mathcal{B}(A)\otimes I_{B}=(I_{A}\otimes\mathcal{B}(B))^{\prime}=N^{\prime},$$
where $N:=I_{A}\otimes\mathcal{B}(B)=\{I_{A}\otimes Y:Y\in\mathcal{B}(B)\}$, a
von Neumann subalgebra of $\mathcal{B}(PS)$.
Conversely, if $\mathcal{C}_{P}\circ\mathcal{E}(\mathcal{B}(S^{\prime}))\subseteq N^{\prime}=\mathcal{B}(A)\otimes I_{B}$, then for every $T\in\mathcal{B}(S^{\prime})$ there exists $X_{T}\in\mathcal{B}(A)$ such that $\mathcal{C}_{P}\circ\mathcal{E}(T)=X_{T}\otimes I_{B}$. Set $\mathcal{F}(T)=X_{T}$. It is easy to check that this defines a UCP map $\mathcal{F}\colon\mathcal{B}(S^{\prime})\to\mathcal{B}(A)$ and that its pre-adjoint
$\mathcal{F}_{*}:\mathcal{T}(A)\rightarrow\mathcal{T}(S^{\prime})$ satisfies $\mathcal{E}_{*}\circ(\mathcal{C}_{P})_{*}=\mathcal{F}_{*}\otimes\mathop{\rm Tr}\nolimits$. Thus, $B$ is an operator private subsystem if and only
if $N\cong\mathcal{B}(B)$ is a private subalgebra for $\mathcal{E}$ with respect
to $P$.
The choice of the term “private” is justified by
the fact that any information stored in the operator private subsystem
$B$ completely decoheres under the action of $\mathcal{E}_{*}$ [2, 5].
From the Heisenberg perspective, observables on the output system evolve under $\mathcal{E}$ to observables having uniform statistics with respect to the subsystem $B$
in the sense that the expected value of a measurement of $\mathcal{E}(T)$ in the state $\rho\in\mathcal{T}(A\otimes B)$ solely depends on the marginal state $\mathop{\rm Tr}\nolimits_{B}(\rho)\in\mathcal{T}(A)$.
In the more general setting of private subalgebras, not all information about observables in the subalgebra $N\subseteq\mathcal{B}(PS)$ is lost under the action of
$\mathcal{E}:M\rightarrow\mathcal{B}(S)$, just the quantum information. Indeed, the only obtainable information about $N$ after an application of the channel is the classical information
contained in its center $\mathcal{Z}(N)=N\cap N^{\prime}$. Thus, we recover the usual sense of privacy when $N$ is a factor, meaning $\mathcal{Z}(N)=\mathbb{C}I$.
If $N$ is a factor of type I, then $N\cong I_{A}\otimes\mathcal{B}(B)$ for some Hilbert spaces $A$ and $B$ [28].
This induces a decomposition $S=(A\otimes B)\oplus(A\otimes B)^{\perp}$ and it follows that $B$ is an operator private subsystem for $\mathcal{E}$.
Hence, operator private subsystems are precisely the private type I factors.
Examples.
An immediate class of examples of private subalgebras arises from normal conditional expectations.
If $S$ is a Hilbert space and $\mathcal{E}:\mathcal{B}(S)\rightarrow N^{\prime}$ is a normal conditional expectation, that is, a weak*-weak* continuous projection of norm one,
where $N\subseteq\mathcal{B}(S)$ is a von Neumann subalgebra,
then trivially, $N$ is private for the quantum channel $\mathcal{E}$. Some concrete examples are the following.
(i)
Deletion channels: $\mathcal{E}(T)=\langle T,\rho\rangle I$, for some $\rho\in\mathcal{S}(S)$; in this case $N=\mathcal{B}(S)$.
(ii)
Uniform phase-flips on $n$-qubits:
$$\mathcal{E}(T)=\frac{1}{2^{n}}\sum_{(s_{1},\cdots,s_{n})\in\mathbb{Z}_{2}^{n}}Z_{(s_{1},\cdots,s_{n})}TZ_{(s_{1},\cdots,s_{n})}^{*},$$
where $Z_{(s_{1},\cdots,s_{n})}=\otimes_{i=1}^{n}Z_{s_{i}}$ with $Z_{0}=I$ and $Z_{1}=\left(\begin{smallmatrix}1&0\\ 0&-1\end{smallmatrix}\right)$; in this case, $N=N^{\prime}=\otimes_{i=1}^{n}\Delta_{2}$, where $\Delta_{2}$ is the diagonal subalgebra of $M_{2}(\mathbb{C})$.
(iii)
Uniform bit-flips on $n$-qubits:
$$\mathcal{E}(T)=\frac{1}{2^{n}}\sum_{(s_{1},\cdots,s_{n})\in\mathbb{Z}_{2}^{n}}X_{(s_{1},\cdots,s_{n})}TX_{(s_{1},\cdots,s_{n})}^{*},$$
where $X_{(s_{1},\cdots,s_{n})}=\otimes_{i=1}^{n}X_{s_{i}}$ with $X_{0}=I$ and $X_{1}=\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right)$; in this case, $N=N^{\prime}=\otimes_{i=1}^{n}\mathcal{C}_{2}$, where $\mathcal{C}_{2}$ is the subalgebra of circulant matrices in $M_{2}(\mathbb{C})$.
The latter two examples fall under a general class of conditional expectations arising from compact group representations:
if $\pi:G\rightarrow\mathcal{B}(H_{\pi})$ is a unitary representation of a compact group, then $\mathcal{E}:\mathcal{B}(H_{\pi})\rightarrow\mathcal{B}(H_{\pi})$ defined by
$$\mathcal{E}(T)=\int_{G}\pi(s)T\pi(s)^{*}\,dh(s)$$
where $h$ is the normalized Haar measure on $G$, is a conditional expectation onto $\pi(G)^{\prime}$, so that $N=\pi(G)^{\prime\prime}$ in this case. A similar class of examples
was considered in [5].
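Example (ii) can be made completely concrete for $n=1$. The sketch below (our illustration) verifies that the uniform phase flip is the conditional expectation onto the diagonal algebra $\Delta_{2}$, so that $N=\Delta_{2}$ is private for it in the sense of Definition 3.1 (with $P=I$): every output commutes with $N$.

```python
import numpy as np

# Illustrative sketch of example (ii) with n = 1: the uniform phase flip
# is the normal conditional expectation onto the diagonal algebra Delta_2.
Z = np.diag([1.0, -1.0])

def E(T):                       # average over the group {I, Z}
    return (T + Z @ T @ Z.conj().T) / 2

T = np.array([[1, 2 + 1j], [2 - 1j, 3]])
out = E(T)
assert np.allclose(out, np.diag(np.diag(T)))   # E kills the off-diagonal entries
assert np.allclose(E(out), out)                # E is idempotent (a projection)
# Privacy: E(B(S)) lies in N' (here N = N' = Delta_2), so outputs
# commute with every element of the diagonal algebra:
D = np.diag([5.0, -7.0])
assert np.allclose(out @ D, D @ out)
```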
4. Complementarity with Correctable Subalgebras
In finite dimensions, a perfect duality exists between operator private and correctable subsystems: a subsystem is correctable for a channel $\mathcal{E}$ if and only if it is private
for any complementary channel $\mathcal{E}^{c}$ [22]. Using the continuity of the Stinespring representation [25],
an approximate version of the complementarity theorem was also established [22].
In this section, we generalize the notion of complementarity to quantum channels of the form $\mathcal{E}:M\rightarrow\mathcal{B}(S)$, and in this new framework,
extend the complementarity theorem and its approximate version.
Definition 4.1.
Let $S$ be a Hilbert space,
let $M$ be a von Neumann algebra, and let $\mathcal{E}:M\rightarrow\mathcal{B}(S)$ be a quantum channel.
Given a Stinespring triple
$(\pi,V,H)$ for $\mathcal{E}$, we define the complementary channel of $\mathcal{E}$
with respect to $(\pi,V,H)$ to be
the NUCP map $\mathcal{E}^{c}_{\pi,V,H}:\pi(M)^{\prime}\rightarrow\mathcal{B}(S)$ given by
$$\mathcal{E}^{c}_{\pi,V,H}(X)=V^{*}XV,\quad X\in\pi(M)^{\prime}.$$
We also say that $\mathcal{E}^{c}_{\pi,V,H}$ is a complementary channel of $\mathcal{E}$.
Remark 4.2.
Suppose that $(\pi_{1},V_{1},H_{1})$ and $(\pi_{2},V_{2},H_{2})$ are Stinespring triples
for $\mathcal{E}$, and let
$\mathcal{F}_{1}=\mathcal{E}^{c}_{\pi_{1},V_{1},H_{1}}$ and $\mathcal{F}_{2}=\mathcal{E}^{c}_{\pi_{2},V_{2},H_{2}}$.
By the uniqueness of the Stinespring representation,
there exists a partial isometry $U:H_{1}\to H_{2}$ satisfying identities (2).
It follows that, if $Y\in\pi_{1}(M)^{\prime}$ and $X\in M$ then
$$\displaystyle\mathcal{C}_{U}(Y)\pi_{2}(X)$$
$$\displaystyle=UYU^{*}\pi_{2}(X)=UY\pi_{1}(X)U^{*}=U\pi_{1}(X)YU^{*}$$
$$\displaystyle=\pi_{2}(X)UYU^{*}=\pi_{2}(X)\mathcal{C}_{U}(Y);$$
thus,
$\mathcal{C}_{U}(\pi_{1}(M)^{\prime})\subseteq\pi_{2}(M)^{\prime}$ and,
similarly, $\mathcal{C}_{U^{*}}(\pi_{2}(M)^{\prime})\subseteq\pi_{1}(M)^{\prime}$.
Hence the maps $\mathcal{F}_{2}\circ\mathcal{C}_{U}$ and $\mathcal{F}_{1}\circ\mathcal{C}_{U^{*}}$
are well-defined; by (2),
$\mathcal{F}_{1}=\mathcal{F}_{2}\circ\mathcal{C}_{U}$ and $\mathcal{F}_{2}=\mathcal{F}_{1}\circ\mathcal{C}_{U^{*}}$.
Let $\mathcal{E}:M\rightarrow\mathcal{B}(S)$ be a quantum channel and
suppose that $(\pi,V,H)$ is a Stinespring triple for $\mathcal{E}$ with $\pi$ faithful.
Let $\mathcal{E}^{c}=\mathcal{E}^{c}_{\pi,V,H}$, and note that
$(\mathop{\rm id}\nolimits_{\pi(M)^{\prime}},V,H)$ is a Stinespring triple for $\mathcal{E}^{c}$
(here $\mathop{\rm id}\nolimits_{\pi(M)^{\prime}}:\pi(M)^{\prime}\to\pi(M)^{\prime}$ is the identity map).
Letting $\mathcal{E}^{cc}:\pi(M)\to\mathcal{B}(S)$ be the complement of $\mathcal{E}^{c}$
with respect to this Stinespring triple, we have that
$$\mathcal{E}^{cc}(\pi(X))=\mathcal{E}(X),\ \ \ X\in M.$$
Identifying $M$ with $\pi(M)$, we see that $\mathcal{E}^{cc}=\mathcal{E}$; thus, the
generalized notion of complementarity is involutive, as expected.
A specific example of a Stinespring triple for $\mathcal{E}$
whose corresponding normal representation is faithful can be obtained as follows.
Let $M_{1}\subseteq\mathcal{B}(H_{1})$ and $M_{2}\subseteq\mathcal{B}(H_{2})$ be
von Neumann algebras. The amplification-induction theorem [28, Theorem IV.5.5]
states that for every normal $*$-homomorphism
$\pi$ from $M_{1}$ onto $M_{2}$, there exists a Hilbert space $H_{3}$, a projection $P\in M_{1}^{\prime}\bar{\otimes}\mathcal{B}(H_{3})$
and a unitary $U:H_{2}\to P(H_{1}\otimes H_{3})$ such that
$$\pi(X)=U^{*}P(X\otimes I_{H_{3}})PU,\ \ X\in M_{1}.$$
Viewing $PU$ as an isometry $W:H_{2}\to H_{1}\otimes H_{3}$, we have
(3)
$$\pi(X)=W^{*}(X\otimes I_{H_{3}})W,\ \ X\in M_{1}.$$
Now suppose that $M\subseteq\mathcal{B}(S^{\prime})$ is a von Neumann algebra,
$\mathcal{E}:M\to\mathcal{B}(S)$ is a quantum channel and
$(\pi,V,H)$ is a Stinespring triple for $\mathcal{E}$ (with $\pi$ not necessarily faithful).
Since the image $\pi(M)$ is a von Neumann algebra on $H$ [28],
the amplification-induction theorem allows us to write
(4)
$$\mathcal{E}(X)=\widetilde{V}^{*}(X\otimes I_{H_{3}})\widetilde{V},\ \ X\in M,$$
where $\widetilde{V}=WV$ is the composition of the Stinespring isometry $V:S\rightarrow H$
and the isometry $W:H\rightarrow S^{\prime}\otimes H_{3}$ from the representation of $\pi$
as in equation (3).
Note that if $M=\mathcal{B}(S^{\prime})$, then $M^{\prime}=\mathbb{C}I_{S^{\prime}}$ and $P=I_{S^{\prime}}\otimes P^{\prime}\in I_{S^{\prime}}\bar{\otimes}\mathcal{B}(H_{3})$ for some $P^{\prime}\in\mathcal{P}(H_{3})$, so we may view $W$ as a unitary from $H$ onto $S^{\prime}\otimes P^{\prime}H_{3}$.
Equation (4) then becomes the usual Stinespring representation
of a quantum channel $\mathcal{E}:\mathcal{B}(S^{\prime})\rightarrow\mathcal{B}(S)$,
and its corresponding complement $\mathcal{E}^{c}:I_{S^{\prime}}\bar{\otimes}\mathcal{B}(P^{\prime}H_{3})\to\mathcal{B}(S)$
is the usual complementary channel as studied in the literature.
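In the Kraus picture just described, the usual complementary channel has the explicit form $\mathcal{E}^{c}(Y)=V^{*}(I\otimes Y)V=\sum_{i,j}Y_{ij}K_{i}^{*}K_{j}$ for $Y$ acting on the environment $\mathbb{C}^{r}$. The following sketch (our illustration, using the qubit phase flip) checks this formula against the Stinespring isometry directly.

```python
import numpy as np

# Illustrative sketch: the usual complementary channel computed from the
# Stinespring isometry V built out of Kraus operators (H = S' (x) C^r).
Z = np.diag([1.0, -1.0])
K = [np.eye(2) / np.sqrt(2), Z / np.sqrt(2)]    # qubit phase flip
r = len(K)
V = sum(np.kron(Ki, np.eye(r)[:, [i]]) for i, Ki in enumerate(K))

def complement(Y):              # E^c(Y) = V^* (I (x) Y) V
    return V.conj().T @ np.kron(np.eye(2), Y) @ V

Y = np.array([[0.5, 0.5], [0.5, 0.5]])          # an environment observable
direct = sum(Y[i, j] * K[i].conj().T @ K[j]
             for i in range(r) for j in range(r))
assert np.allclose(complement(Y), direct)       # E^c(Y) = sum_ij Y_ij K_i^* K_j
assert np.allclose(complement(np.eye(r)), np.eye(2))   # E^c is unital
```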
Lemma 4.3.
Let $S$ and $S^{\prime}$ be Hilbert spaces, $M\subseteq\mathcal{B}(S^{\prime})$ be a von
Neumann algebra, $\mathcal{E}:M\rightarrow\mathcal{B}(S)$ be a quantum channel
and $W:S\to S$ be a partial isometry. If $\mathcal{E}^{c}$ is a complementary channel of $\mathcal{E}$, then $\mathcal{C}_{W}\circ\mathcal{E}^{c}$ is a complementary channel of $\mathcal{C}_{W}\circ\mathcal{E}$.
Proof.
Suppose that $\mathcal{E}^{c}$ is associated with the Stinespring triple $(\pi,V,H)$ of $\mathcal{E}$.
Then
$$\mathcal{E}(X)=V^{*}\pi(X)V,\quad X\in M$$
and
$$\mathcal{E}^{c}(Y)=V^{*}YV,\quad Y\in\pi(M)^{\prime}.$$
Thus,
$$\mathcal{C}_{W}\circ\mathcal{E}(X)=WV^{*}\pi(X)VW^{*},\quad X\in M,$$
and hence $(\pi,VW^{*},H)$ is a Stinespring triple for $\mathcal{C}_{W}\circ\mathcal{E}$.
The claim is now immediate.
∎
Before proceeding to the complementarity theorem,
we recall the operator algebra formalism of quantum error correction [7, 8, 9].
Definition 4.4.
Let $S$ be a Hilbert space, $M$ be a von Neumann algebra, and
$\mathcal{E}:M\rightarrow\mathcal{B}(S)$ be a quantum channel.
If $P\in\mathcal{P}(S)$, a von Neumann subalgebra $N\subseteq\mathcal{B}(PS)$ is said to be
correctable for $\mathcal{E}$ with respect to $P$ if there exists a quantum channel $\mathcal{R}:N\rightarrow M$
such that
$$\mathcal{C}_{P}\circ\mathcal{E}\circ\mathcal{R}=\mathop{\rm id}\nolimits_{N}.$$
Given $\varepsilon>0$,
we say that $N$ is $\varepsilon$-correctable for $\mathcal{E}$ with respect to $P$
if there exists a quantum channel $\mathcal{R}:N\rightarrow M$ such that
$$\lVert\mathcal{C}_{P}\circ\mathcal{E}\circ\mathcal{R}-\mathop{\rm id}\nolimits_{N}\rVert_{\mathop{\rm cb}}<\varepsilon.$$
If $P=I$, we simply say that $N\subseteq\mathcal{B}(S)$ is
correctable (resp. $\varepsilon$-correctable) for $\mathcal{E}$.
The above definition unifies the notions of correctable
and noiseless (meaning correctable, but with no active correction required) subspaces and subsystems under one umbrella, allowing
for a general treatment of quantum error correction using the language
of operator algebras. As mentioned in [9], correctable
subsystems correspond to correctable von Neumann algebras of type I,
analogous to the situation above for operator private subsystems.
Note that the channel $\mathcal{R}$ in Definition 4.4 (called the
recovery channel) has a slightly more general form than the one
usually studied in the literature (namely, a NUCP map $\mathcal{R}:\mathcal{B}(S^{\prime})\rightarrow\mathcal{B}(S)$ satisfying $\mathcal{C}_{P}\circ\mathcal{E}\circ\mathcal{R}=\mathcal{C}_{P}|_{N}$). The reason is to keep in line with our general picture of
quantum channels as NUCP maps whose domain can be a general von Neumann algebra.
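The private/correctable duality discussed in this section can be seen numerically in a minimal example. The sketch below is our illustration (the recovery map $\mathcal{R}$ is an ad hoc choice of ours, not taken from the text): for the qubit phase flip, the diagonal algebra $N=\Delta_{2}$ is private for $\mathcal{E}$, and a recovery channel on the complementary channel $\mathcal{E}^{c}$ satisfies $\mathcal{E}^{c}\circ\mathcal{R}=\mathrm{id}_{N}$, as the complementarity theorem predicts.

```python
import numpy as np

# Illustrative sketch: private for E  <->  correctable for E^c,
# for the qubit phase flip with Kraus operators I/sqrt(2), Z/sqrt(2).
Z = np.diag([1.0, -1.0])
Sx = np.array([[0.0, 1.0], [1.0, 0.0]])
K = [np.eye(2) / np.sqrt(2), Z / np.sqrt(2)]

def E(T):                                   # the phase-flip channel
    return sum(Ki.conj().T @ T @ Ki for Ki in K)

def Ec(Y):                                  # its complementary channel
    return sum(Y[i, j] * K[i].conj().T @ K[j]
               for i in range(2) for j in range(2))

# Privacy: E(B(S)) consists of diagonal matrices, hence lies in N' for
# N = Delta_2 (the diagonal algebra):
T = np.array([[1, 4j], [-4j, 2]])
assert np.allclose(E(T), np.diag(np.diag(T)))

# Recovery (our ad hoc choice): R(diag(a, b)) = a Q+ + b Q-, where
# Q+- = (I +- Sx)/2 are orthogonal rank-one projections, so R is a
# unital *-homomorphism on Delta_2, hence a quantum channel.
Qp, Qm = (np.eye(2) + Sx) / 2, (np.eye(2) - Sx) / 2
def R(D):
    return D[0, 0] * Qp + D[1, 1] * Qm

for D in (np.diag([1.0, 0.0]), np.diag([0.25, 0.75])):
    assert np.allclose(Ec(R(D)), D)         # E^c o R = id on Delta_2
```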
Lemma 4.5.
Let $M$ be a von Neumann algebra and let $\mathcal{E}:M\to\mathcal{B}(S)$ be a quantum channel. If $\varepsilon>0$ and $N\subseteq\mathcal{B}(S)$ is a von Neumann algebra which is $\varepsilon$-correctable
(respectively, correctable) for some particular complement of $\mathcal{E}$, then $N$ is $\varepsilon$-correctable (respectively, correctable)
for every complement of $\mathcal{E}$.
Proof.
Let $(\pi_{0},V_{0},H_{0})$ and $(\pi,V,H)$ be Stinespring triples
for $\mathcal{E}$, and denote the corresponding complements by $\mathcal{E}_{0}^{c}$
and $\mathcal{E}^{c}$. Suppose that $N$ is $\varepsilon$-correctable for
$\mathcal{E}_{0}^{c}$; we will show that the same is true of $\mathcal{E}^{c}$. There is a quantum channel $\mathcal{R}_{0}:N\to\pi_{0}(M)^{\prime}$ with
$\lVert\mathcal{E}_{0}^{c}\circ\mathcal{R}_{0}-\mathop{\rm id}\nolimits_{N}\rVert_{\mathop{\rm cb}}<\varepsilon$. By
Remark 4.2, there is a partial isometry $U:H_{0}\to H$ so
that
(5)
$$U\pi_{0}(M)^{\prime}U^{*}\subseteq\pi(M)^{\prime},\quad\mathcal{E}_{0}^{c}=\mathcal{E}^{c}\circ\mathcal{C}_{U}\ \ \mbox{ and }\ \ \mathcal{E}^{c}=\mathcal{E}_{0}^{c}\circ\mathcal{C}_{U^{*}}.$$
Fix a normal state
$\omega\in N_{*}$ and define a quantum channel $\mathcal{R}:N\to\pi(M)^{\prime}$
by
$$\mathcal{R}(T)=U\mathcal{R}_{0}(T)U^{*}+\langle T,\omega\rangle(1-UU^{*}),\quad T\in N.$$
(The second term is required to ensure that $\mathcal{R}$ is
unital.) Since $U^{*}U$ is a projection, we have $\mathcal{C}_{U^{*}}\circ\mathcal{R}=\mathcal{C}_{U^{*}}\circ\mathcal{C}_{U}\circ\mathcal{R}_{0}$ and so, by (5),
$$\mathcal{E}^{c}\circ\mathcal{R}=\mathcal{E}_{0}^{c}\circ\mathcal{C}_{U^{*}}\circ\mathcal{R}=\mathcal{E}_{0}^{c}\circ\mathcal{C}_{U^{*}}\circ\mathcal{C}_{U}\circ\mathcal{R}_{0}=\mathcal{E}_{0}^{c}\circ\mathcal{R}_{0}.$$
Hence $\lVert\mathcal{E}^{c}\circ\mathcal{R}-\mathop{\rm id}\nolimits_{N}\rVert_{\mathop{\rm cb}}=\lVert\mathcal{E}_{0}^{c}\circ\mathcal{R}_{0}-\mathop{\rm id}\nolimits_{N}\rVert_{\mathop{\rm cb}}<\varepsilon$ and so $N$ is $\varepsilon$-correctable for $\mathcal{E}^{c}$. The assertion with correctability in place of
$\varepsilon$-correctability is proven by replacing “less than
$\varepsilon$” with “equal to zero” in the preceding.
∎
The following elementary lemma will be used to obtain quantum
channels from (not necessarily unital) normal completely positive
maps.
Lemma 4.6.
Let $S$ be a Hilbert space and
let $M$ and $N$ be von Neumann algebras with $N\subseteq\mathcal{B}(S)$. If
${\mathcal{F}}:M\to N$ is a normal completely positive contractive map,
then there is a quantum channel $\widetilde{\mathcal{F}}:M\to N$ with $\|\widetilde{\mathcal{F}}-\mathcal{E}\|_{\mathop{\rm cb}}\leq 2\|\mathcal{F}-\mathcal{E}\|_{\mathop{\rm cb}}$ for any quantum channel $\mathcal{E}:M\to\mathcal{B}(S)$.
Proof.
Let $\omega\in M_{*}$ be a normal state and set $A=1_{N}-{\mathcal{F}}(1_{M})$.
Since $\mathcal{F}$ is contractive and positive, ${\mathcal{F}}(1_{M})$ is a positive contraction and so $A\geq 0$.
Let $\widetilde{\mathcal{F}}$ be the map defined by
$\widetilde{\mathcal{F}}(X)={\mathcal{F}}(X)+\langle X,\omega\rangle A$, $X\in M$. Then
$\widetilde{\mathcal{F}}$ is unital by construction, and as it is the sum
of two normal completely positive maps into $N$, we see that
$\widetilde{\mathcal{F}}$ is a quantum channel from $M$ into $N$. The map $\widetilde{\mathcal{F}}-\mathcal{F}$ is completely positive, so it attains its (completely
bounded) norm at $1_{M}$ [27]; hence,
$$\lVert\widetilde{\mathcal{F}}-\mathcal{F}\rVert_{\mathop{\rm cb}}=\|\langle 1_{M},\omega\rangle A\|=\|A\|=\|(\mathcal{E}-\mathcal{F})(1_{M})\|\leq\lVert\mathcal{F}-\mathcal{E}\rVert_{\mathop{\rm cb}}.$$
Thus $\lVert\widetilde{\mathcal{F}}-\mathcal{E}\rVert_{\mathop{\rm cb}}\leq\lVert\widetilde{\mathcal{F}}-\mathcal{F}\rVert_{\mathop{\rm cb}}+\lVert\mathcal{F}-\mathcal{E}\rVert_{\mathop{\rm cb}}\leq 2\lVert\mathcal{F}-\mathcal{E}\rVert_{\mathop{\rm cb}}$.
∎
The next theorem is one of the central results of the
paper. It generalizes the main results of
both [22] and [6], which correspond to the special case that $S^{\prime}$ is
finite dimensional and $M=\mathcal{B}(S^{\prime})$. In the proof, we will use
results from [24]; the latter paper is concerned with the
continuity of the Stinespring representation for completely positive
maps defined on C*-algebras. By Stinespring’s theorem for normal maps,
it is straightforward to verify that the results we will need
remain valid in the case of normal completely positive maps defined on
von Neumann algebras.
Theorem 4.7.
Let $S$ and $S^{\prime}$ be Hilbert spaces, $M\subseteq\mathcal{B}(S^{\prime})$ be a von Neumann algebra, $\mathcal{E}:M\rightarrow\mathcal{B}(S)$ be a quantum channel and
$P\in\mathcal{P}(S)$.
If a von Neumann subalgebra $N\subseteq\mathcal{B}(PS)$ is $\varepsilon$-private (respectively, $\varepsilon$-correctable) for $\mathcal{E}$ with respect to $P$
then it is $2\sqrt{\varepsilon}$-correctable (respectively, $8\sqrt{\varepsilon}$-private) for any complement
of $\mathcal{E}$ with respect to $P$.
In particular, $N$ is private (respectively, correctable) for $\mathcal{E}$ with respect to $P$ if and only if it is correctable (respectively, private)
for any complement of $\mathcal{E}$ with respect to $P$.
Proof.
Without loss of generality we may suppose that $P=I$; indeed,
$N\subseteq\mathcal{B}(PS)$ is $\varepsilon$-private (respectively,
$\varepsilon$-correctable) for $\mathcal{E}$ with respect to $P$ if and only if it is
$\varepsilon$-private (respectively, $\varepsilon$-correctable) for $\mathcal{C}_{P}\circ\mathcal{E}$. The general statement now follows from Lemma 4.3,
according to which $\mathcal{C}_{P}\circ\mathcal{E}^{c}$ is complementary to $\mathcal{C}_{P}\circ\mathcal{E}$.
We first consider one of the implications in the case $\varepsilon=0$.
Namely, suppose that $N$ is private for $\mathcal{E}$, so that $\mathcal{E}(M)\subseteq N^{\prime}$, and hence $N=N^{\prime\prime}\subseteq\mathcal{E}(M)^{\prime}$.
Let $\mathcal{E}^{c}$ be the complement of $\mathcal{E}$ with respect to a minimal
Stinespring triple $(\pi,V,H)$ for $\mathcal{E}$. It follows from
Arveson’s commutant lifting theorem [3, Theorem 1.3.1] that there
exists a normal *-homomorphism $\rho:\mathcal{E}(M)^{\prime}\rightarrow\pi(M)^{\prime}$
such that $\rho(X)V=VX$ for all $X\in\mathcal{E}(M)^{\prime}$ (see also [28, IV.3.6]).
Consider the quantum channel $\mathcal{R}:=\rho|_{N}:N\to\pi(M)^{\prime}$. Since $\mathcal{E}$ is
unital, $V$ is an isometry and hence
$$\mathcal{E}^{c}(\mathcal{R}(T))=V^{*}\rho(T)V=V^{*}VT=T$$
for all $T\in N$.
Thus, $N$ is correctable for $\mathcal{E}^{c}$. By
Lemma 4.5, $N$ is correctable for any
complement of $\mathcal{E}$.
Now suppose that $N$ is $\varepsilon$-private for $\mathcal{E}$, so that $N$
is private for some channel $\mathcal{F}:M\rightarrow\mathcal{B}(S)$ with
$\lVert\mathcal{E}-\mathcal{F}\rVert_{\mathop{\rm cb}}<\varepsilon$. By [24, Proposition 6],
there is a common normal representation
$\pi:M\rightarrow\mathcal{B}(H)$ with Stinespring triples
$(\pi,V_{\mathcal{E}},H)$ and $(\pi,V_{\mathcal{F}},H)$ for $\mathcal{E}$ and $\mathcal{F}$, respectively, so that
$$\lVert V_{\mathcal{E}}-V_{\mathcal{F}}\rVert\leq\sqrt{\lVert\mathcal{E}-\mathcal{F}\rVert_{\mathop{\rm cb}}}<\sqrt{\varepsilon}.$$
Let $\mathcal{E}^{c}:\pi(M)^{\prime}\rightarrow\mathcal{B}(S)$ and $\mathcal{F}^{c}:\pi(M)^{\prime}\rightarrow\mathcal{B}(S)$ be the corresponding
complementary channels. It follows from [24, Proposition 3]
that
$$\lVert\mathcal{E}^{c}-\mathcal{F}^{c}\rVert_{\mathop{\rm cb}}\leq 2\lVert V_{\mathcal{E}}-V_{\mathcal{F}}\rVert<2\sqrt{\varepsilon}.$$
Since $N$ is private for $\mathcal{F}$, it is correctable for $\mathcal{F}^{c}$ by
the previous paragraphs, so there exists a channel
$\mathcal{R}:N\rightarrow\pi(M)^{\prime}$ such that $\mathcal{F}^{c}\circ\mathcal{R}=\mathop{\rm id}\nolimits_{N}$. Hence,
$$\lVert\mathcal{E}^{c}\circ\mathcal{R}-\mathop{\rm id}\nolimits_{N}\rVert_{\mathop{\rm cb}}=\lVert(\mathcal{E}^{c}-\mathcal{F}^{c})\circ\mathcal{R}\rVert_{\mathop{\rm cb}}<2\sqrt{\varepsilon}$$
as $\lVert\mathcal{R}\rVert_{\mathop{\rm cb}}=1$. Thus, $N$ is
$2\sqrt{\varepsilon}$-correctable for $\mathcal{E}^{c}$. By
Lemma 4.5, the same is true of any other
complement of $\mathcal{E}$.
Conversely, suppose that $N$ is $\varepsilon$-correctable for $\mathcal{E}$,
so that $\lVert\mathcal{E}\circ\mathcal{R}-\mathop{\rm id}\nolimits_{N}\rVert_{\mathop{\rm cb}}<\varepsilon$ for some quantum channel
$\mathcal{R}:N\to M$. Again by [24, Proposition 6], there
exists a common normal representation $\pi:M\rightarrow\mathcal{B}(H)$ and
Stinespring triples $(\pi,V_{\mathcal{E}\mathcal{R}},H)$ and $(\pi,V_{\mathop{\rm id}\nolimits},H)$
for $\mathcal{E}\circ\mathcal{R}$ and $\mathop{\rm id}\nolimits_{N}$, respectively, so that
$$\lVert V_{\mathcal{E}\mathcal{R}}-V_{\mathop{\rm id}\nolimits}\rVert\leq\sqrt{\lVert\mathcal{E}\circ\mathcal{R}-\mathop{\rm id}\nolimits_{N}\rVert_{\mathop{\rm cb}}}<\sqrt{\varepsilon}.$$
By the amplification-induction theorem,
there exist Hilbert spaces $H_{\mathcal{E}},H_{\mathcal{R}}$ and isometries
$V_{\mathcal{E}}:S\to S^{\prime}\otimes H_{\mathcal{E}}$ and
$V_{\mathcal{R}}:S^{\prime}\to S\otimes H_{\mathcal{R}}$ such that
$$\mathcal{E}(X)=V_{\mathcal{E}}^{*}(X\otimes I_{H_{\mathcal{E}}})V_{\mathcal{E}},\quad X\in M,$$
and
$$\mathcal{R}(T)=V^{*}_{\mathcal{R}}(T\otimes I_{H_{\mathcal{R}}})V_{\mathcal{R}},\quad T\in N.$$
Thus,
$$\mathcal{E}\circ\mathcal{R}(T)=V_{\mathcal{E}}^{*}(V^{*}_{\mathcal{R}}\otimes I_{H_{\mathcal{E}}})(T\otimes I_{H_{\mathcal{R}}}\otimes I_{H_{\mathcal{E}}})(V_{\mathcal{R}}\otimes I_{H_{\mathcal{E}}})V_{\mathcal{E}},\ \ T\in N,$$
and, by Remark 4.2, there exists a partial isometry
$U:H\rightarrow S\otimes H_{\mathcal{R}}\otimes H_{\mathcal{E}}$ such that
$UV_{\mathcal{E}\mathcal{R}}=(V_{\mathcal{R}}\otimes I_{H_{\mathcal{E}}})V_{\mathcal{E}}$,
$U\pi(T)=(T\otimes I_{H_{\mathcal{R}}}\otimes I_{H_{\mathcal{E}}})U$ for all $T\in N$,
and
(6)
$$\mathcal{C}_{U^{*}}(N^{\prime}\bar{\otimes}\mathcal{B}(H_{\mathcal{R}})\bar{\otimes}\mathcal{B}(H_{\mathcal{E}}))\subseteq\pi(N)^{\prime}.$$
Moreover,
(7)
$$\lVert(V_{\mathcal{R}}\otimes I_{H_{\mathcal{E}}})V_{\mathcal{E}}-UV_{\mathop{\rm id}\nolimits}\rVert=\lVert UV_{\mathcal{E}\mathcal{R}}-UV_{\mathop{\rm id}\nolimits}\rVert\leq\lVert V_{\mathcal{E}\mathcal{R}}-V_{\mathop{\rm id}\nolimits}\rVert<\sqrt{\varepsilon}.$$
Let $\mathcal{R}^{c}:N^{\prime}\bar{\otimes}\mathcal{B}(H_{\mathcal{R}})\to\mathcal{B}(S^{\prime})$ be the complement
of $\mathcal{R}$ with respect to the Stinespring triple $(T\mapsto T\otimes I_{H_{\mathcal{R}}},V_{\mathcal{R}},S\otimes H_{\mathcal{R}})$, and define normal
completely positive maps
$\mathcal{F},\mathcal{R}^{\flat}:N^{\prime}\bar{\otimes}\mathcal{B}(H_{\mathcal{R}})\bar{\otimes}\mathcal{B}(H_{\mathcal{E}})\to\mathcal{B}(S)$ by $\mathcal{F}=\mathcal{C}_{V^{*}_{\mathop{\rm id}\nolimits}}\circ\mathcal{C}_{U^{*}}$ and $\mathcal{R}^{\flat}=\mathcal{C}_{V^{*}_{\mathcal{E}}}\circ(\mathcal{R}^{c}\otimes\mathop{\rm id}\nolimits_{\mathcal{B}(H_{\mathcal{E}})})$.
By (7) and [24, Proposition 3],
$\lVert\mathcal{F}-\mathcal{R}^{\flat}\rVert_{\mathop{\rm cb}}<2\sqrt{\varepsilon}$.
Since $(\pi,V_{\mathop{\rm id}\nolimits},H)$ is a Stinespring triple for $\mathop{\rm id}\nolimits_{N}$,
the uniqueness of the Stinespring representation
(see (2)) implies that
there exists a partial isometry $W:H\to S$ satisfying $WV_{\mathop{\rm id}\nolimits}=I_{S}$,
$V_{\mathop{\rm id}\nolimits}=W^{*}$ and
$W\pi(T)=TW$, for $T\in N$. Thus,
$V_{\mathop{\rm id}\nolimits}^{*}\pi(N)^{\prime}V_{\mathop{\rm id}\nolimits}\subseteq N^{\prime}$ (see Remark 4.2)
and (6) shows that the image of $\mathcal{F}$ lies in $N^{\prime}$.
By Lemma 4.6, there is a quantum channel
$$\widetilde{\mathcal{F}}:N^{\prime}\bar{\otimes}\mathcal{B}(H_{\mathcal{R}})\bar{\otimes}\mathcal{B}(H_{\mathcal{E}})\to N^{\prime}\quad\text{with}\quad\lVert\widetilde{\mathcal{F}}-\mathcal{R}^{\flat}\rVert_{\mathop{\rm cb}}<4\sqrt{\varepsilon}.$$
Since the range of $\mathcal{R}$ lies in $M$, we trivially have that $M^{\prime}$ is private for $\mathcal{R}$.
By the first part of the proof, $M^{\prime}$
is correctable for $\mathcal{R}^{c}$,
so there is a quantum channel
$\mathcal{G}:M^{\prime}\rightarrow N^{\prime}\bar{\otimes}\mathcal{B}(H_{\mathcal{R}})$ satisfying $\mathcal{R}^{c}\circ\mathcal{G}=\mathop{\rm id}\nolimits_{M^{\prime}}$.
We have
(8)
$$\mathcal{R}^{\flat}\circ(\mathcal{G}\otimes\mathop{\rm id}\nolimits_{\mathcal{B}(H_{\mathcal{E}})})=\mathcal{C}_{V_{\mathcal{E}}^{*}}|_{M^{\prime}\bar{\otimes}\mathcal{B}(H_{\mathcal{E}})}=\mathcal{E}^{c},$$
where $\mathcal{E}^{c}:M^{\prime}\bar{\otimes}\mathcal{B}(H_{\mathcal{E}})\rightarrow\mathcal{B}(S)$ is
the complement of $\mathcal{E}$ with respect to the Stinespring triple $(T\mapsto T\otimes I_{H_{\mathcal{E}}},V_{\mathcal{E}},S^{\prime}\otimes H_{\mathcal{E}})$. By (8) and the fact that $\mathcal{G}\otimes\mathop{\rm id}\nolimits_{\mathcal{B}(H_{\mathcal{E}})}$ is a complete contraction,
$$\lVert\widetilde{\mathcal{F}}\circ(\mathcal{G}\otimes\mathop{\rm id}\nolimits_{\mathcal{B}(H_{\mathcal{E}})})-\mathcal{E}^{c}\rVert_{\mathop{\rm cb}}\leq\lVert\widetilde{\mathcal{F}}-\mathcal{R}^{\flat}\rVert_{\mathop{\rm cb}}<4\sqrt{\varepsilon}.$$
Since the range of $\widetilde{\mathcal{F}}$ is contained in $N^{\prime}$, the von Neumann algebra $N$ is
$4\sqrt{\varepsilon}$-private for $\mathcal{E}^{c}$.
Finally, if $\mathcal{E}^{\sharp}:\pi^{\sharp}(M)^{\prime}\rightarrow\mathcal{B}(S)$ is another complement to $\mathcal{E}$, then there exists a partial isometry
$U^{\sharp}:H^{\sharp}\rightarrow S^{\prime}\otimes H_{\mathcal{E}}$ satisfying $\mathcal{E}^{\sharp}=\mathcal{E}^{c}\circ\mathcal{C}_{U^{\sharp}}$.
Then
$$\lVert\widetilde{\mathcal{F}}\circ(\mathcal{G}\otimes\mathop{\rm id}\nolimits_{\mathcal{B}(H_{\mathcal{E}})})\circ\mathcal{C}_{U^{\sharp}}-\mathcal{E}^{\sharp}\rVert<4\sqrt{\varepsilon}.$$
Applying Lemma 4.6 to the normal completely positive contraction
$\mathcal{Q}=\widetilde{\mathcal{F}}\circ(\mathcal{G}\otimes\mathop{\rm id}\nolimits_{\mathcal{B}(H_{\mathcal{E}})})\circ\mathcal{C}_{U^{\sharp}}$, we obtain a quantum channel $\widetilde{\mathcal{Q}}:\pi^{\sharp}(M)^{\prime}\to N^{\prime}$ satisfying
$\lVert\widetilde{\mathcal{Q}}-\mathcal{E}^{\sharp}\rVert_{\mathop{\rm cb}}<8\sqrt{\varepsilon},$
so $N$ is $8\sqrt{\varepsilon}$-private for $\mathcal{E}^{\sharp}$.
∎
Applications of Theorem 4.7 to
Gaussian quantum channels will be given in the next section.
In the remainder of the present section, we give two illustrations of this result.
The first one relates to
discrete Schur multipliers; we refer the reader to [27] for
the relevant background.
Example 4.8.
Let $X$ be a non-empty countable set and $(\delta_{x})_{x\in X}$ be
the standard orthonormal basis of $\ell_{2}(X)$. We identify every
element of $\mathcal{B}(\ell_{2}(X))$ with its corresponding (possibly
infinite) matrix $[T_{x,y}]_{x,y\in X}$, where $T_{x,y}=\langle T\delta_{y},\delta_{x}\rangle$, $x,y\in X$. Any collection of unit vectors
$(|\psi_{x}\rangle)_{x\in X}$ in the Hilbert space $H=\ell_{2}(X)$ defines a
correlation matrix $C:=[\langle\psi_{y}|\psi_{x}\rangle]_{x,y\in X}$, which in
turn yields a NUCP map $\Phi:\mathcal{B}(\ell_{2}(X))\to\mathcal{B}(\ell_{2}(X))$
via Schur multiplication:
$$\Phi(T)=[\langle\psi_{y}|\psi_{x}\rangle T_{x,y}]_{x,y\in X},\ \ T\in\mathcal{B}(\ell_{2}(X)).$$
By abuse of notation, we denote by $\ell_{\infty}(X)$ the von Neumann
subalgebra of diagonal matrices in $\mathcal{B}(\ell_{2}(X))$. It is
straightforward to verify that $\Phi(D_{1}TD_{2})=D_{1}\Phi(T)D_{2}$ for all
$D_{1},D_{2}\in\ell_{\infty}(X)$ and all $T\in\mathcal{B}(\ell_{2}(X))$, i.e., that $\Phi$ is an $\ell_{\infty}(X)$-bimodule map. Thus,
$\ell_{\infty}(X)$ is correctable for $\Phi$ and, by Theorem
4.7, it is private for any complement $\Phi^{c}$ of $\Phi$.
In particular, the range of any complement of $\Phi$ is contained in a
commutative von Neumann algebra, reflecting the well-known fact that complements of discrete Schur
multipliers are entanglement breaking (see [23]).
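To make Example 4.8 concrete, here is a small numerical sketch (the finite dimension and the vectors are invented for illustration): the correlation matrix built from unit vectors is a Gram matrix with unit diagonal, so the Schur multiplier $\Phi$ is completely positive and unital, and it acts as the identity on the diagonal algebra $\ell_{\infty}(X)$, which is exactly the bimodule/correctability property used above.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # finite stand-in for the countable set X

# Unit vectors |psi_x> (columns); their Gram matrix C is the correlation matrix.
psi = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
psi /= np.linalg.norm(psi, axis=0)
C = (psi.conj().T @ psi).T          # C[x, y] = <psi_y | psi_x>

def Phi(T):
    """Schur (entrywise) multiplication by the correlation matrix C."""
    return C * T

# C is positive semidefinite (a Gram matrix) with unit diagonal,
# so Phi is completely positive and unital.
assert np.min(np.linalg.eigvalsh(C)) >= -1e-10
assert np.allclose(np.diag(C), 1.0)

# The l_infty(X)-bimodule property: Phi(D1 T D2) = D1 Phi(T) D2 for diagonal
# D1, D2; in particular Phi fixes every diagonal matrix, so the diagonal
# algebra is correctable for Phi (the correction is the inclusion map).
D1, D2 = (np.diag(rng.normal(size=d)) for _ in range(2))
T = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
assert np.allclose(Phi(D1 @ T @ D2), D1 @ Phi(T) @ D2)
assert np.allclose(Phi(D1), D1)
```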
We next present an application of Theorem 4.7 by
generalizing the main result in [21] concerning the structure of
correctable subsystems for finite-dimensional channels as generalized
multiplicative domains. In [21, Theorem 11], a one-to-one
correspondence was established between correctable subsystems $B$ of a
finite-dimensional channel $\mathcal{E}:\mathcal{B}(S)\rightarrow\mathcal{B}(S)$ and
generalized multiplicative domains $\mathop{\rm MD}\nolimits_{\pi}(\mathcal{E})$, where the
latter is defined relative to a projection $P\in\mathcal{P}(S)$, a
C*-subalgebra $N\subseteq\mathcal{B}(PS)$ and a representation
$\pi:N\rightarrow\mathcal{B}(S)$, to be
$$\displaystyle\mathop{\rm MD}\nolimits_{\pi}(\mathcal{E}):=\big{\{}T\in N\mid\pi(T)(\mathcal{E}_{*}\circ(\mathcal{C}_{P})_{*}(R))=\mathcal{E}_{*}\circ(\mathcal{C}_{P})_{*}(TR)\\
\displaystyle\text{and }\hskip 2.0pt(\mathcal{E}_{*}\circ(\mathcal{C}_{P})_{*}(R))\pi(T)=\mathcal{E}_{*}\circ(\mathcal{C}_{P})_{*}(RT),\mbox{ for all }R\in N\big{\}}.$$
Specifically, if $S=(A\otimes B)\oplus(A\otimes B)^{\perp}$, then $B$ is
correctable if and only if $I_{A}\otimes\mathcal{B}(B)=\mathop{\rm MD}\nolimits_{\pi}(\mathcal{E})$ for some
representation $\pi\colon I_{A}\otimes\mathcal{B}(B)\to\mathcal{B}(S)$. In the Heisenberg picture, $T\in\mathop{\rm MD}\nolimits_{\pi}(\mathcal{E})$ if and only if
$$\langle(\mathcal{C}_{P}\circ\mathcal{E}(X))T,R\rangle=\langle\mathcal{C}_{P}\circ\mathcal{E}(X\pi(T)),R\rangle$$
and
$$\langle T(\mathcal{C}_{P}\circ\mathcal{E}(X)),R\rangle=\langle\mathcal{C}_{P}\circ\mathcal{E}(\pi(T)X),R\rangle$$
for all $R\in N$ and $X\in\mathcal{B}(S)$.
Corollary 4.9.
Let $S$ and $S^{\prime}$ be Hilbert spaces, $M\subseteq\mathcal{B}(S^{\prime})$ be a von Neumann algebra, $\mathcal{E}:M\rightarrow\mathcal{B}(S)$ be a quantum channel, and $P\in\mathcal{P}(S)$. A von Neumann subalgebra $N\subseteq\mathcal{B}(PS)$ is correctable for
$\mathcal{E}$ with respect to $P$ if and only if there exists a normal
representation $\pi:N\rightarrow M$ such that
(9)
$$(\mathcal{C}_{P}\circ\mathcal{E}(X))T=\mathcal{C}_{P}\circ\mathcal{E}(X\pi(T))\quad\text{and}\quad T(\mathcal{C}_{P}\circ\mathcal{E}(X))=\mathcal{C}_{P}\circ\mathcal{E}(\pi(T)X)$$
for all $T\in N$ and $X\in\mathcal{B}(S)$.
Proof.
As in the proof of Theorem 4.7,
it suffices to consider the case $P=I_{S}$.
If there exists a normal representation $\pi:N\to M$ satisfying (9),
then by taking $X=I$ in (9) and using the fact that
$\mathcal{E}$ is unital, we see that $N$ is correctable for $\mathcal{E}$.
Conversely, if $N$ is correctable for $\mathcal{E}$, then by Theorem 4.7, $N$ is private
for any complement $\mathcal{E}^{c}$ of $\mathcal{E}$.
Taking a Stinespring representation for $\mathcal{E}$ of the form
$\mathcal{E}(X)=V^{*}(X\otimes I_{H})V$, $X\in M$ (see (4)),
the corresponding complement $\mathcal{E}^{c}:M^{\prime}\bar{\otimes}\mathcal{B}(H)\to\mathcal{B}(S)$ has range in
$N^{\prime}$, so $N$ is a von Neumann subalgebra of
$\mathcal{E}^{c}(M^{\prime}\bar{\otimes}\mathcal{B}(H))^{\prime}$. Taking a minimal Stinespring triple $(\pi^{c},V^{c},H^{c})$ for $\mathcal{E}^{c}$,
it follows by Arveson’s commutant lifting theorem
[3, Theorem 1.3.1] that there exists a normal representation $\pi^{\prime}:\mathcal{E}^{c}(M^{\prime}\bar{\otimes}\mathcal{B}(H))^{\prime}\to\pi^{c}(M^{\prime}\bar{\otimes}\mathcal{B}(H))^{\prime}$
satisfying $\pi^{\prime}(Y)V^{c}=V^{c}Y$ for all $Y\in\mathcal{E}^{c}(M^{\prime}\bar{\otimes}\mathcal{B}(H))^{\prime}$.
By the uniqueness of the Stinespring representation, there exists an isometry $W:H^{c}\rightarrow S^{\prime}\otimes H$
such that $WV^{c}=V$, $V^{c}=W^{*}V$, $W\pi^{c}(X^{\prime})=X^{\prime}W$ for all $X^{\prime}\in M^{\prime}\bar{\otimes}\mathcal{B}(H)$,
and
$$W\pi^{c}(M^{\prime}\bar{\otimes}\mathcal{B}(H))^{\prime}W^{*}\subseteq(M^{\prime}\bar{\otimes}\mathcal{B}(H))^{\prime}=M\otimes I_{H}.$$
Let $\pi^{\prime\prime}:M\otimes I_{H}\to M$ be the ${}^{*}$-isomorphism defined by
$\pi^{\prime\prime}(X\otimes 1)=X$ and note that, since $W$ is an isometry, $\mathcal{C}_{W}\circ\pi^{\prime}$ is a
normal *-homomorphism.
Thus,
$\pi:=\pi^{\prime\prime}\circ\mathcal{C}_{W}\circ\pi^{\prime}|_{N}:N\to M$ is a normal representation satisfying
$$\displaystyle\mathcal{E}(X\pi(T))$$
$$\displaystyle=V^{*}((X\pi(T))\otimes I_{H})V=V^{*}(X\otimes I_{H})(\pi(T)\otimes I_{H})V$$
$$\displaystyle=V^{*}(X\otimes I_{H})W\pi^{\prime}(T)W^{*}V=V^{*}(X\otimes I_{H})W\pi^{\prime}(T)V^{c}$$
$$\displaystyle=V^{*}(X\otimes I_{H})WV^{c}T=V^{*}(X\otimes I_{H})VT=\mathcal{E}(X)T$$
for all $X\in M$ and $T\in N$. Similarly, $T\mathcal{E}(X)=\mathcal{E}(\pi(T)X)$ for all $X\in M$ and $T\in N$.∎
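A minimal finite-dimensional illustration of condition (9) (all matrices here are invented for the example, with $P=I$ and $N=M=\mathcal{B}(\mathbb{C}^{d})$): for a unitary channel $\mathcal{E}(X)=U^{*}XU$, the representation $\pi(T)=UTU^{*}$ satisfies both identities, and indeed $\pi$ itself inverts $\mathcal{E}$.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

# A random unitary U via QR, defining the unitary channel E(X) = U* X U.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(A)

def E(X):
    return U.conj().T @ X @ U

def pi(T):
    # Candidate representation for condition (9): pi(T) = U T U*
    return U @ T @ U.conj().T

# Check E(X pi(T)) = E(X) T and E(pi(T) X) = T E(X) on random matrices.
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
T = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
assert np.allclose(E(X @ pi(T)), E(X) @ T)
assert np.allclose(E(pi(T) @ X), T @ E(X))

# Here pi is also a correction channel: E(pi(T)) = T.
assert np.allclose(E(pi(T)), T)
```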
Remark 4.10.
Corollary 4.9 implies that the correction channel $\mathcal{R}$ may always be taken to be a ${}^{*}$-homomorphism, a
fact previously observed in the case $M=\mathcal{B}(S^{\prime})$ for a separable
Hilbert space $S^{\prime}$ [9, Proposition 4.4].
5. Private Algebras for Linear Bosonic Quantum Channels
In this section we begin our analysis of private algebras and
generalized complementarity for linear bosonic quantum channels,
focusing mainly on the subclass of Gaussian channels. Such channels
arise naturally in the dynamics of open bosonic systems described by
quadratic Hamiltonians (see [29] and the references therein).
We begin with a short review of the relevant machinery, adopting the
notation of [18], to which we refer the reader for details.
Let $\mathbb{R}^{2n}$ represent the phase space of a system of $n$ bosonic modes. We will write vectors in $\mathbb{R}^{2n}$ as $z=(x_{1},y_{1},x_{2},y_{2},\cdots,x_{n},y_{n})$,
where $x=(x_{1},...,x_{n})$ and $y=(y_{1},...,y_{n})$ are vectors in $\mathbb{R}^{n}$ describing the positions and momenta of the $n$ modes.
Let $U,V:\mathbb{R}^{n}\to\mathcal{B}(L_{2}(\mathbb{R}^{n}))$ be the strongly continuous unitary representations
given by
$$V_{x}\psi(s)=e^{i\langle x,s\rangle}\psi(s)\ \ \text{and}\ \ U_{y}\psi(s)=\psi(s+y)$$
for $\psi\in L_{2}(\mathbb{R}^{n})$ and $s\in\mathbb{R}^{n}$. These one-parameter groups satisfy the Weyl form of the canonical commutation relations (CCR):
$$U_{y}V_{x}=e^{i\langle x,y\rangle}V_{x}U_{y},\quad x,y\in\mathbb{R}^{n}.$$
Composing the two, we obtain the Weyl representation
$W:\mathbb{R}^{2n}\to\mathcal{B}(L_{2}(\mathbb{R}^{n}))$ given by
$$W(z)=e^{\frac{i}{2}\langle x,y\rangle}V_{x}U_{y},\quad z\in\mathbb{R}^{2n}.$$
Let
$$\Delta_{n}=\bigoplus_{i=1}^{n}\begin{pmatrix}0&1\\
-1&0\end{pmatrix}$$
and, writing $z^{\prime}=(x_{1}^{\prime},y_{1}^{\prime},\dots,x_{n}^{\prime},y_{n}^{\prime})$, let
$$\Delta(z,z^{\prime})=\langle z,\Delta_{n}(z^{\prime})\rangle=\sum_{i=1}^{n}(x_{i}y_{i}^{\prime}-x_{i}^{\prime}y_{i})$$
be the canonical symplectic form on $\mathbb{R}^{2n}$.
The Weyl representation $W$ satisfies the Weyl–Segal form of the CCR:
(10)
$$W(z+z^{\prime})=e^{\frac{i}{2}\Delta(z,z^{\prime})}W(z)W(z^{\prime}),\quad z,z^{\prime}\in\mathbb{R}^{2n}.$$
The linear transformations $T:\mathbb{R}^{2n}\to\mathbb{R}^{2n}$
which preserve the symplectic form $\Delta$,
in the sense that
$$\Delta(Tz,Tz^{\prime})=\Delta(z,z^{\prime}),\quad z,z^{\prime}\in\mathbb{R}^{2n},$$
are called symplectic transformations. These form a subgroup of
$\mathop{\rm GL}(2n,\mathbb{R})$ denoted by $\mathop{\rm Sp}(2n,\mathbb{R})$. Note that, by
(10), $[W(z),W(z^{\prime})]=0$ if and only if
$\Delta(z,z^{\prime})\in 2\pi\mathbb{Z}$, where as usual $[X,Y]=XY-YX$ is the
commutator of two operators $X$ and $Y$.
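As a quick numerical illustration (the dimensions and matrices below are chosen only for the example), $T\in\mathop{\rm Sp}(2n,\mathbb{R})$ exactly when $T^{t}\Delta_{n}T=\Delta_{n}$:

```python
import numpy as np

# Delta_n for coordinates z = (x_1, y_1, ..., x_n, y_n):
# block-diagonal copies of J = [[0, 1], [-1, 0]].
n = 2
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
Delta = np.kron(np.eye(n), J)

def is_symplectic(T, tol=1e-12):
    # T preserves the form iff T^t Delta_n T = Delta_n
    return np.allclose(T.T @ Delta @ T, Delta, atol=tol)

def symp_form(z, zp):
    # Delta(z, z') = <z, Delta_n z'> = sum_i (x_i y'_i - x'_i y_i)
    return z @ Delta @ zp

# A phase-space rotation in each mode is symplectic ...
theta = 0.3
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
T = np.kron(np.eye(n), R)
assert is_symplectic(T)

# ... and indeed leaves Delta(z, z') invariant on sample vectors,
z, zp = np.arange(1.0, 5.0), np.arange(5.0, 9.0)
assert np.isclose(symp_form(T @ z, T @ zp), symp_form(z, zp))

# while a generic diagonal scaling is not symplectic.
assert not is_symplectic(np.diag([2.0, 3.0, 1.0, 1.0]))
```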
By (10) and the Stone–von Neumann theorem,
given any $T\in\mathop{\rm Sp}(2n,\mathbb{R})$, there exists a unitary $U_{T}\in\mathcal{B}(L_{2}(\mathbb{R}^{n}))$ such
that
(11)
$$W(Tz)=U_{T}^{*}W(z)U_{T}$$
for all $z\in\mathbb{R}^{2n}$.
An important feature of the Weyl representation $W$ is that it allows
one to study the statistical properties of quantum states via a
“non-commutative characteristic function”. Specifically, given a state
$\rho\in\mathcal{T}(L_{2}(\mathbb{R}^{n}))$, we let $\varphi_{\rho}(z)=\mathop{\rm Tr}\nolimits(\rho W(z))$, for $z\in\mathbb{R}^{2n}$.
This characteristic function $\varphi_{\rho}$ determines the operator
$\rho$ via the following inversion formula:
$$\rho=\frac{1}{(2\pi)^{n}}\int_{\mathbb{R}^{2n}}\varphi_{\rho}(z)W(-z)\,dz,$$
where the integral converges to $\rho$ in the weak operator topology
by [17, Corollary 5.3.5].
A state $\rho\in\mathcal{T}(L_{2}(\mathbb{R}^{n}))$ is said to be Gaussian if its characteristic function is of the form
$$\varphi_{\rho}(z)=\exp\left(i\langle m,z\rangle-\tfrac{1}{2}\alpha(z,z)\right)$$
where $m\in\mathbb{R}^{2n}$ is a vector, called the mean of $\rho$, and $\alpha$ is a symmetric bilinear form on $\mathbb{R}^{2n}$ known as the covariance matrix of $\rho$.
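Not every pair $(m,\alpha)$ arises from a state: a classical fact (see the book cited as [18]) is that $\alpha$ is the covariance matrix of a quantum state precisely when the complex matrix $\alpha+\tfrac{i}{2}\Delta_{n}$ is positive semidefinite. A one-mode numerical sketch of this condition, using the thermal covariance $\alpha=(N_{0}+\tfrac{1}{2})I$ that reappears in Example 5.2 below:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # Delta_1 for a single mode

def is_valid_covariance(alpha, Delta=J, tol=1e-12):
    # alpha + (i/2) Delta is Hermitian; the state condition is that it is PSD.
    M = alpha + 0.5j * Delta
    return np.min(np.linalg.eigvalsh(M)) >= -tol

# Thermal covariance alpha = (N0 + 1/2) I is admissible exactly when N0 >= 0:
assert is_valid_covariance(0.5 * np.eye(2))        # N0 = 0 (vacuum, pure)
assert is_valid_covariance(1.7 * np.eye(2))        # N0 = 1.2
assert not is_valid_covariance(0.3 * np.eye(2))    # would need N0 = -0.2 < 0
```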
A linear bosonic channel is a quantum channel $\mathcal{E}:\mathcal{B}(L_{2}(\mathbb{R}^{n}))\to\mathcal{B}(L_{2}(\mathbb{R}^{n}))$ for which there exists $\ell\in\mathbb{N}$, a
state $\rho_{E}\in\mathcal{T}(L_{2}(\mathbb{R}^{\ell}))$ in an $\ell$-mode bosonic
environment and a symplectic block matrix
$$T=\begin{pmatrix}K&L\\
K_{E}&L_{E}\end{pmatrix}\in\mathop{\rm Sp}(2(n+\ell),\mathbb{R})$$
where $K$ is $2n\times 2n$ and $L_{E}$ is
$2\ell\times 2\ell$, so that if $U_{T}\in\mathcal{B}(L_{2}(\mathbb{R}^{n+\ell}))$ is the unitary
associated by (11) with $T$, then
the pre-adjoint of $\mathcal{E}$ has the form
$$\mathcal{E}_{*}(\rho)=\mathop{\rm Tr}\nolimits_{E}(U_{T}(\rho\otimes\rho_{E})U_{T}^{*}),\quad\rho\in\mathcal{T}(L_{2}(\mathbb{R}^{n}))$$
where the partial trace is taken over the tensor
factor $E=L_{2}(\mathbb{R}^{\ell})$ of $L_{2}(\mathbb{R}^{n+\ell})=L_{2}(\mathbb{R}^{n})\otimes L_{2}(\mathbb{R}^{\ell})$.
Using the block decomposition of $T$, one may easily
verify (see [18, §12.4.1]) that
$$\mathcal{E}(W(z))=f(z)W(Kz),\quad\text{where}\quad f(z)=\varphi_{\rho_{E}}(K_{E}z),\quad z\in\mathbb{R}^{2n}.$$
If $f$ is the characteristic function of a Gaussian state, then $\mathcal{E}$ is called a Gaussian channel.
In this case, the environment state $\rho_{E}$ in the representation of $\mathcal{E}_{*}$ is a Gaussian state.
One immediately obtains private subalgebras if $K:\mathbb{R}^{2n}\to\mathbb{R}^{2n}$ does not have full rank. Indeed, if $R\subseteq\mathbb{R}^{2n}$ denotes the image of $K$,
then it is clear that
$\mathcal{E}(\mathcal{B}(L_{2}(\mathbb{R}^{n})))\subseteq W(R)^{\prime\prime}$,
where the double commutant $W(R)^{\prime\prime}$ coincides with
the von Neumann subalgebra of $\mathcal{B}(L_{2}(\mathbb{R}^{n}))$
generated by $\{W(z)\mid z\in R\}$.
Let
$$R^{\Delta}:=\{z\in\mathbb{R}^{2n}\mid\Delta(z,z^{\prime})=0\ \mbox{for all }z^{\prime}\in R\}$$
be the symplectic complement of $R$.
By the CCR (10),
$[W(z),W(z^{\prime})]=0$ if $\Delta(z,z^{\prime})=0$,
and it follows that $W(R^{\Delta})\subseteq W(R)^{\prime}$.
Hence,
$$\mathcal{E}(\mathcal{B}(L_{2}(\mathbb{R}^{n})))\subseteq W(R^{\Delta})^{\prime}=(W(R^{\Delta})^{\prime\prime})^{\prime},$$
and we have the following result.
Proposition 5.1.
Let $\mathcal{E}:\mathcal{B}(L_{2}(\mathbb{R}^{n}))\to\mathcal{B}(L_{2}(\mathbb{R}^{n}))$ be a linear bosonic channel, and let $R$ be the range of the matrix $K$ with
symplectic complement $R^{\Delta}$. Then the von Neumann algebra $W(R^{\Delta})^{\prime\prime}$ is private for $\mathcal{E}$.
Example 5.2.
For a simple example with $n=1$, let $S=L_{2}(\mathbb{R})$ and consider the class of single
mode Gaussian channels $\mathcal{E}:\mathcal{B}(S)\to\mathcal{B}(S)$ satisfying
$$\mathcal{E}(W(z))=f(z)W(Kz),\quad z=(x,y)\in\mathbb{R}^{2}$$
where
$$K=\begin{pmatrix}1&0\\
0&0\end{pmatrix}\quad\text{and}\quad f(z)=\exp\big{(}-\tfrac{1}{2}\alpha(x^{2}+y^{2})\big{)},$$
with $\alpha=N_{0}+\tfrac{1}{2}$ for some non-negative real number $N_{0}$.
This class is known as $A_{2}$ in Holevo’s classification of single
mode Gaussian channels [16]. In this case, the range of $K$ is
$R=\mathbb{R}\times\{0\}$ and $R^{\Delta}=R$, so
$$W(R)^{\prime\prime}=\{V_{x}\mid x\in\mathbb{R}\}^{\prime\prime}=L_{\infty}(\mathbb{R})$$
is private for $\mathcal{E}$, where we
canonically identify $L_{\infty}(\mathbb{R})$ with the (abelian) von Neumann
subalgebra of $\mathcal{B}(S)$ consisting of multiplication operators by
essentially bounded functions.
By Theorem 4.7, $L_{\infty}(\mathbb{R})$ is a correctable
subalgebra for any complementary channel $\mathcal{E}^{c}$ of $\mathcal{E}$. Let
us show this explicitly by computing a correction channel $\mathcal{R}$
for one particular complement $\mathcal{E}^{c}$. First, one may easily
verify that the pre-adjoint $\mathcal{E}_{*}:\mathcal{T}(S)\to\mathcal{T}(S)$ can be
represented as
$$\mathcal{E}_{*}(\rho)=\mathop{\rm Tr}\nolimits_{E}(U_{T}(\rho\otimes\rho_{E})U_{T}^{*}),\quad\rho\in\mathcal{T}(S),$$
where $E$ is a copy of $L_{2}(\mathbb{R})$ and $\rho_{E}\in\mathcal{T}(E)$ is the
Gaussian state with characteristic function $\varphi_{\rho_{E}}=f$, and
$T\in\mathop{\rm Sp}(4,\mathbb{R})$ is given by the block matrix
$$T=\begin{pmatrix}K&-I\\
I&K^{\prime}\end{pmatrix}$$
where $I$ is the $2\times 2$ identity matrix and $K^{\prime}=\left(\begin{smallmatrix}0&0\\
0&1\end{smallmatrix}\right)$.
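One can verify directly that this block matrix is symplectic, and that $K^{\prime}$ is the symplectic adjoint of $K$ in the sense of Remark 5.3 below. A short numerical check (numpy used purely for illustration):

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
Delta2 = np.kron(np.eye(2), J)            # Delta for two modes (system + environment)

K  = np.array([[1.0, 0.0], [0.0, 0.0]])
Kp = np.array([[0.0, 0.0], [0.0, 1.0]])   # K' from the example
T = np.block([[K, -np.eye(2)], [np.eye(2), Kp]])

# T is symplectic: T^t Delta_2 T = Delta_2, so T lies in Sp(4, R).
assert np.allclose(T.T @ Delta2 @ T, Delta2)

# K' = Delta^{-1} K^t Delta (symplectic adjoint); for one mode Delta^{-1} = -J.
assert np.allclose(Kp, -J @ K.T @ J)

# T sends (0, z) with z = (x, y) to (-z, (0, y)), as used in the computation
# of the complement E^c below.
x, y = 0.7, -1.3
out = T @ np.array([0.0, 0.0, x, y])
assert np.allclose(out, [-x, -y, 0.0, y])
```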
The state $\rho_{E}$ is the Gibbs thermal state with mean photon
number $N_{0}$, and is pure if and only if $N_{0}=0$. Thus,
let $E^{\prime}$ be another copy of $L_{2}(\mathbb{R})$, and let $|\psi\rangle\in E\otimes E^{\prime}$
be a canonical purification of $\rho_{E}$, that is, $\rho_{E}=\mathop{\rm Tr}\nolimits_{E^{\prime}}(|\psi\rangle\langle\psi|)$. Then
$$\mathcal{E}_{*}(\rho)=\mathop{\rm Tr}\nolimits_{E\otimes E^{\prime}}((U_{T}\otimes I_{E^{\prime}})(\rho\otimes|\psi\rangle\langle\psi|)(U_{T}^{*}\otimes I_{E^{\prime}})),\quad\rho\in\mathcal{T}(S),$$
so we can obtain a complement $\mathcal{E}^{c}:\mathcal{B}(E\otimes E^{\prime})\to\mathcal{B}(S)$ whose pre-adjoint $\mathcal{E}^{c}_{*}$ is given by
$$\mathcal{E}^{c}_{*}(\rho)=\mathop{\rm Tr}\nolimits_{S}((U_{T}\otimes I_{E^{\prime}})(\rho\otimes|\psi\rangle\langle\psi|)(U_{T}^{*}\otimes I_{E^{\prime}})),\quad\rho\in\mathcal{T}(S).$$
For $H$ a Hilbert space of the form $L^{2}(\mathbb{R}^{n})$, let us
denote the corresponding Weyl representation $W\colon\mathbb{R}^{2n}\to\mathcal{B}(H)$ by $W_{H}$. For
$z,z^{\prime}\in\mathbb{R}^{2}$ and $\rho\in\mathcal{T}(S)$, we have
$$\displaystyle\langle\mathcal{E}^{c}(W_{E\otimes E^{\prime}}(z,z^{\prime})),\rho\rangle$$
$$\displaystyle=\mathop{\rm Tr}\nolimits\big{(}(U_{T}\otimes I_{E^{\prime}})(\rho\otimes|\psi\rangle\langle\psi|)(U_{T}^{*}\otimes I_{E^{\prime}})(I_{S}\otimes W_{E\otimes E^{\prime}}(z,z^{\prime}))\big{)}$$
$$\displaystyle=\mathop{\rm Tr}\nolimits\big{(}(\rho\otimes|\psi\rangle\langle\psi|)(U_{T}^{*}\otimes I_{E^{\prime}})W_{S\otimes E\otimes E^{\prime}}(0,z,z^{\prime})(U_{T}\otimes I_{E^{\prime}})\big{)}$$
$$\displaystyle=\mathop{\rm Tr}\nolimits\big{(}(\rho\otimes|\psi\rangle\langle\psi|)W_{S\otimes E\otimes E^{\prime}}(T(0,z),z^{\prime})\big{)}$$
$$\displaystyle=\mathop{\rm Tr}\nolimits\big{(}(\rho\otimes|\psi\rangle\langle\psi|)W_{S\otimes E\otimes E^{\prime}}(-z,(0,y),z^{\prime})\big{)}$$
$$\displaystyle=\mathop{\rm Tr}\nolimits\big{(}|\psi\rangle\langle\psi|W_{E\otimes E^{\prime}}((0,y),z^{\prime})\big{)}\cdot\langle W_{S}(-z),\rho\rangle.$$
Since $\rho\in\mathcal{T}(S)$
was arbitrary, it follows that
$$\mathcal{E}^{c}(W_{E\otimes E^{\prime}}(z,z^{\prime}))=\mathop{\rm Tr}\nolimits(|\psi\rangle\langle\psi|W_{E\otimes E^{\prime}}((0,y),z^{\prime}))W_{S}(-z),\quad z,z^{\prime}\in\mathbb{R}^{2}.$$
Given the above structure of $\mathcal{E}^{c}$, it is clear that the map
$$\mathcal{R}:L_{\infty}(\mathbb{R})\to\mathcal{B}(E\otimes E^{\prime}),\quad\mathcal{R}(W_{S}(x,0))=W_{E\otimes E^{\prime}}((-x,0),0),\quad x\in\mathbb{R}$$
defines a quantum
channel satisfying $\mathcal{E}^{c}\circ\mathcal{R}=\mathop{\rm id}\nolimits_{L_{\infty}(\mathbb{R})}$.
Remark 5.3.
The symplectic matrix $T$ in the preceding example is not unique. Indeed, any symplectic block matrix of the form
$$\begin{pmatrix}K&\ast\\
I&\ast\end{pmatrix}$$
will do, as only the first column is relevant for the description of $\mathcal{E}$. In general,
if $A,B:\mathbb{R}^{n}\to\mathbb{R}^{n}$ satisfy $\Delta=A^{t}\Delta A+B^{t}\Delta B$ (so that the map
$z\mapsto Az\oplus Bz$ is a symplectic embedding), then the matrix
$$\begin{pmatrix}A&\ast\\
B&\ast\end{pmatrix}$$
can be completed to an element of $\mathop{\rm Sp}(2n,\mathbb{R})$ (see [18, Theorem 12.30]). In particular, when $[A,B]=0$, which is the case in the above example, there is a canonical choice for matrices $C,D\in M_{n}(\mathbb{R})$ turning
$$\begin{pmatrix}A&C\\
B&D\end{pmatrix}$$
into a symplectic matrix, namely $C=-B^{\prime}$ and $D=A^{\prime}$, where
$B^{\prime}=\Delta^{-1}B^{t}\Delta$ and $A^{\prime}=\Delta^{-1}A^{t}\Delta$ are the symplectic adjoints of $A$ and $B$, respectively.
This is precisely how we chose $T$ above, and since the structure of $K^{\prime}=\left(\begin{smallmatrix}0&0\\
0&1\end{smallmatrix}\right)$ was crucial in determining the recovery channel $\mathcal{R}$ (and the overall structure
of $\mathcal{E}^{c}$), the above example may be a glimpse of a deeper connection between complementarity and symplectic duality.
6. Conclusion
In this paper, we generalized the formalism of private subspaces and
private subsystems to the setting of von Neumann algebras using commutant structures, introduced a generalized
framework for studying complementarity of quantum channels, and established a
general complementarity theorem between operator private and correctable
subalgebras. This new framework is particularly amenable to the important class of linear bosonic channels, and our preliminary investigations suggest a deeper connection between
complementarity and symplectic duality. Moreover, since symplectic geometry has played a decisive role in the development of quantum error correcting codes [11], it is
natural to develop such a formalism for private quantum codes via complementarity in both the finite and infinite-dimensional settings.
These and related questions are currently being pursued and will appear in future work.
References
[2]
Ambainis, A., Mosca, M., Tapp, A., de Wolf, R.,
Private quantum channels,
41st Annual Symposium on Foundations of Computer Science (Redondo Beach, CA, 2000), 547-553, IEEE Comput. Soc. Press, Los Alamitos, CA, 2000.
[3]
Arveson, W. B.,
Subalgebras of $C^{*}$-algebras,
Acta Math. 123 (1969), 141-224.
[4]
Bartlett, S. D., Hayden, P., Spekkens, R. W.,
Random subspaces for encryption based on a private shared Cartesian frame,
Phys. Rev. A 72 (5), (2005) 052329.
[5]
Bartlett, S. D., Rudolph, T., Spekkens, R. W.,
Decoherence-full subsystems and the cryptographic power of a private shared reference frame,
Phys. Rev. A 70 (3), (2004) 032307.
[6]
Bény, C.,
Conditions for the approximate correction of algebras,
TQC 2009, LNCS 5906 (2009), pp. 66-75.
[7]
Bény, C., Kempf, A., Kribs, D. W.,
Generalization of quantum error correction via the Heisenberg picture,
Phys. Rev. Lett. 98 (2007), 100502.
[8]
Bény, C., Kempf, A., Kribs, D. W.,
Quantum error correction of observables,
Phys. Rev. A 76 (2007), 042303.
[9]
Bény, C., Kempf, A., Kribs, D. W.,
Quantum error correction on infinite-dimensional Hilbert spaces,
J. Math. Phys. 50 (2009), no. 6, 062108, 24 pp.
[10]
Boykin, P.O., Roychowdhury, V.
Optimal encryption of quantum bits,
Phys. Rev. A 67 (2003), 042317.
[11]
Calderbank, A. R., Rains, E. M., Shor, P. W., Sloane, N. J. A.,
Quantum error correction and orthogonal geometry,
Phys. Rev. Lett. 78 (1997), no. 3, 405-408.
[12]
Church, A., Kribs, D.W., Pereira, R., Plosker, S.,
Private quantum channels, conditional expectations, and trace vectors,
Quant. Inf. & Comp. 11 (2011), 774-783.
[13]
Crepeau, C., Gottesman, D., Smith, A.,
Secure multi-party quantum computing,
34th Annual Symposium on Theory of Computing (ACM, Montreal) (2002), 643.
[14]
Cleve, R., Gottesman, D., Lo, H.-K.,
How to share a quantum secret,
Phys. Rev. Lett. 83 (1999), 648.
[15]
Holevo, A.S.,
On complementary channels and the additivity problem,
Probab. Theory and Appl. 51 (2005), 133-143.
[16]
Holevo, A. S.,
One-mode quantum Gaussian channels: structure and quantum capacity,
Prob. Inf. Trans. 43 (2007), no. 1, 1-11.
[17]
Holevo, A. S.,
Probabilistic and Statistical Aspects of Quantum Theory,
Scuola Normale Superiore Pisa, 2011.
[18]
Holevo, A. S.,
Quantum Systems, Channels, Information. A Mathematical Introduction,
De Gruyter Studies in Mathematical Physics, 16. De Gruyter, Berlin, 2012.
[19]
Jochym-O’Connor, T., Kribs, D. W., Laflamme, R., Plosker, S.,
Quantum subsystems: exploring complementarity of quantum privacy and error correction,
Phys. Rev. A 90, (2014) 032305.
[20]
Jochym-O’Connor, T., Kribs, D. W., Laflamme, R., Plosker, S.,
Private quantum subsystems,
Phys. Rev. Lett. 111, (2013) 030502.
[21]
Johnston, N., Kribs, D. W.,
Generalized multiplicative domains and quantum error correction,
Proc. Amer. Math. Soc. 139 (2011), 627-639.
[22]
Kretschmann, D., Kribs, D. W., Spekkens, R. W.,
Complementarity of private and correctable subsystems in quantum cryptography and error correction,
Phys. Rev. A 78 (3), (2008) 032330.
[23]
King, C., Matsumoto, K., Nathanson, M., Ruskai, M. B.,
Properties of conjugate channels with applications to additivity and multiplicativity,
Markov Process. Related Fields 13, (2007) 391-423.
[24]
Kretschmann, D., Schlingemann, D., Werner, R. F.,
A continuity theorem for Stinespring’s dilation,
J. Funct. Anal. 255 (2008) 1889-1904.
[25]
Kretschmann, D., Schlingemann, D., Werner, R. F.,
The information-disturbance tradeoff and the continuity of Stinespring’s representation,
IEEE Trans. Inf. Theory, 54 (4), (2008) pp. 1708-1717.
[26]
Lindblad, G.,
A general no-cloning theorem,
Lett. Math. Phys. 47 (1999), 189-196.
[27]
Paulsen, V. I.,
Completely bounded maps and operator algebras,
Cambridge University Press, 2002.
[28]
Takesaki, M.,
Theory of Operator Algebras I,
Encyclopedia of Mathematical Sciences 124, Springer-Verlag Berlin–Heidelberg–New York (2002).
[29]
Weedbrook, C., et al.,
Gaussian quantum information,
Rev. Mod. Phys. 84 (2012), 621-669. |
Extending low energy effective field theory with a complete set of dimension-7 operators
[10pt]
Yi Liao ${}^{a,c}[email protected],
Xiao-Dong Ma ${}^{b}[email protected],
Quan-Yu Wang ${}^{a}[email protected],
${}^{a}$ School of Physics, Nankai University, Tianjin 300071, China
${}^{b}$ Department of Physics, National Taiwan University, Taipei 10617, Taiwan
${}^{c}$ Center for High Energy Physics, Peking University, Beijing 100871, China
Abstract
We present a complete and independent set of dimension-7 (dim-7) operators in the low energy effective field theory (LEFT), in which the dynamical degrees of freedom are the five standard model quarks and all of the neutral and charged leptons. All operators are non-Hermitian and are classified according to the baryon number ($\Delta B$) and lepton number ($\Delta L$) that they violate. Including Hermitian-conjugated operators, there are in total $3168$, $750$, $588$, $712$ operators with $(\Delta B,\Delta L)=(0,0),~{}(0,\pm 2),~{}(\pm 1,\mp 1),~{}(\pm 1,\pm 1)$, respectively. We perform the tree-level matching with the standard model effective field theory (SMEFT) up to dimension-7 operators in both LEFT and SMEFT. As a phenomenological application we study the effective neutrino-photon interactions due to dim-7 lepton number violating operators, which are induced and much enhanced at one loop from dim-6 operators that in turn are matched from dim-7 SMEFT operators. We compare the cross sections of various neutrino-photon scattering processes with their counterparts in the standard model and highlight the new features. Finally we illustrate how these effective interactions could arise from an ultraviolet completion.
1 Introduction
While neutrino mass and dark matter provide evidence for physics beyond the standard model (SM), persistent searches for new heavy particles have hitherto yielded a null result. In this circumstance effective field theory (EFT) offers an appropriate and universal approach to quantify unknown effects of possibly very heavy new particles on the interactions of SM particles at relatively low energies. In this framework, i.e., the standard model effective field theory (SMEFT), the SM provides the leading interactions, which are augmented by an infinite tower of effective interactions involving operators of higher and higher dimension that are more and more suppressed by the heavy-particle masses. Precise measurements of, and severe constraints on, these effective interactions will shed light on the possible form of new physics.
Suppose that a certain new physics scale $\Lambda_{\rm NP}$ is significantly higher than the electroweak scale $\Lambda_{\rm EW}\sim 10^{2}~{}{\rm GeV}$ and that there are no particles other than the SM ones with a mass around or below $\Lambda_{\rm EW}$. The effective field theory between the scales $\Lambda_{\rm NP}$ and $\Lambda_{\rm EW}$ is then the SMEFT, which includes all SM fields and satisfies the complete gauge symmetry $SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}$. Since it is an EFT at low energy compared to $\Lambda_{\rm NP}$, it can be organized by the dimensions of the operators involved in effective interactions. The bases of complete and independent operators are now known at dimension 5 (dim-5) [1], dimension 6 [2, 3], dimension 7 [4, 5], and dimension 8 [6, 7, 8, 9], and the one-loop renormalization of those basis operators due to the SM interactions has been accomplished up to dimension 7 in Refs. [10, 11, 12, 13, 14, 15, 16, 17, 18, 5, 19]. As the dimension of operators goes up further, the number of basis operators increases very rapidly [7]; for recent efforts on basis operators of even higher dimensions, see for instance Refs. [7, 20, 21, 22, 23, 24]. On the other hand, if there are new particles that have a mass less than $\Lambda_{\rm EW}$ and are most likely a singlet under the SM gauge group, such as sterile neutrinos, they must be incorporated into the EFT framework, thereby extending the regime of SMEFT [25, 26, 27, 28].
Since most measurements are made below the electroweak scale, it is necessary to develop EFTs below $\Lambda_{\rm EW}$. By integrating out the heavy particles in SM, i.e., the weak gauge bosons $W^{\pm},~{}Z$, the Higgs boson $h$, and the top quark $t$, we arrive at the so-called low energy effective field theory (LEFT). It thus includes all other SM fields as its dynamical degrees of freedom including five quarks, all neutral and charged leptons, and respects the gauge symmetry $SU(3)_{C}\times U(1)_{\rm EM}$. It has been successfully applied in flavor physics; for a review, see for instance, Ref. [29]. In recent years LEFT has been systematically developed. The classification of its basis operators up to dimension 6 and their tree-level and one-loop matching to the SMEFT also up to dimension 6 have been made in Refs. [30, 31]. (We note in passing that the basis of dim-6 operators in LEFT extended with light sterile neutrinos has been worked out recently [32, 33].) The complete one-loop renormalization of those basis operators has been accomplished in Ref. [34]. In this work we will push this systematic investigation one step further by building the basis of dim-7 operators in LEFT and matching the effective interactions at tree level between SMEFT and LEFT both to dim-7 operators.
The outline of this paper is as follows. We first establish in section 2 the basis of dim-7 operators in the LEFT, and then perform the tree-level matching between the SMEFT and the LEFT in section 3 by incorporating new terms due to dim-7 operators in the SMEFT, the LEFT, or both. As a simple yet interesting application, we study in section 4 the lepton number violating neutrino-photon interactions arising from dim-7 operators, calculate various scattering cross sections, and compare them with the SM results. We also show a few examples of ultraviolet completion of a dim-7 operator in the SMEFT that enters the above neutrino-photon interactions. Our main results are summarized in section 5.
2 The basis of dim-7 operators in LEFT
In the LEFT with which we are working, electroweak symmetry breaking has already taken place, so that the gauge group is $SU(3)_{C}\times U(1)_{\rm EM}$. We have also integrated out the heavy particles with a mass of order $\Lambda_{\rm EW}$, i.e., the weak gauge bosons $W^{\pm},~{}Z$, the Higgs boson $h$, and the top quark $t$. The dynamical degrees of freedom are then the $n_{f}=3$ down-type quarks ($d,~{}s,~{}b$) and the $n_{f}=3$ neutral ($\nu_{1,2,3}$) and charged ($e,~{}\mu,~{}\tau$) leptons, the $n_{u}=2$ up-type quarks ($u,~{}c$), the photon ($A_{\mu}$), and the eight gluons ($G_{\mu}^{A}$). Although we work with chiral fields ($\psi_{L,R}$), we assume they are already in their mass eigenstates. This means that any factors of quark and lepton mixing matrix elements are hidden in the Wilson coefficients of higher-dimensional operators. We usually label the fermion fields by the indices $p,~{}r,~{}s,~{}t$, i.e., $\nu_{p},~{}e_{ip},~{}u_{ip},~{}d_{ip}$ with chirality $i=L,~{}R$, which appear in the same order in an operator and its Wilson coefficient. For specific applications these indices assume a generation value or a flavor name interchangeably.
The bases of dim-5 and dim-6 operators have been established in Ref. [30]. In the following we do the same for dim-7 operators. First of all, Lorentz symmetry restricts dim-7 operators to the following possible classes:
$$\displaystyle\psi^{2}X^{2},$$
$$\displaystyle\psi^{4}D_{\mu},$$
$$\displaystyle\psi^{2}XD_{\mu}^{2},$$
$$\displaystyle\psi^{2}D_{\mu}^{4},$$
(1)
where the gauge covariant derivative is $D_{\mu}=\partial_{\mu}-ieQA_{\mu}-ig_{s}T^{A}G_{\mu}^{A}$, with $Q$ and $T^{A}$ being the electric charge and color generators and $e$ and $g_{s}$ the gauge couplings, and $X=F_{\mu\nu},~{}G_{\mu\nu}^{A}$ are the gauge field strengths. There are no purely bosonic operators made out of $X$ and $D_{\mu}$, because Lorentz invariance requires an even number of $D_{\mu}$ factors, which cannot lead to an odd-dimensional operator. Now we show that the operators in the classes $\psi^{2}XD_{\mu}^{2}$ and $\psi^{2}D_{\mu}^{4}$ are actually reducible to those in the other two classes $\psi^{2}X^{2}$ and $\psi^{4}D_{\mu}$, plus the dim-5 and dim-6 ones already covered in [30], by use of the equations of motion (EoM) and integration by parts (IBP). As an example, consider the following reduction of an operator in the class $\psi^{2}XD_{\mu}^{2}$:
$$\displaystyle(\overline{\psi_{1}}\overleftarrow{D_{\mu}})(D_{\nu}\psi_{2})X^{%
\mu\nu}\xlongequal[]{\rm IBP}-{1\over 2}\overline{\psi_{1}}[D_{\mu},D_{\nu}]%
\psi_{2}X^{\mu\nu}-\overline{\psi_{1}}(D_{\nu}\psi_{2})D_{\mu}X^{\mu\nu}+%
\framebox{T}\xlongrightarrow[]{\rm EoM}\psi^{2}X^{2}+\psi^{4}D_{\mu},$$
(2)
where the commutator $[D_{\mu},D_{\nu}]$ is proportional to a field strength tensor $X_{\mu\nu}$, and T stands for the total derivative terms, which can be discarded as redundant.
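The step labeled EoM above relies on the fact that the commutator of covariant derivatives reduces to a field strength. This can be checked symbolically; the sketch below (assuming SymPy is available; abelian $U(1)$ part only, and only two spacetime coordinates for brevity) verifies $[D_{0},D_{1}]\psi=-ieF_{01}\psi$:

```python
import sympy as sp

# Two coordinates suffice to exhibit the commutator structure.
x0, x1 = sp.symbols('x0 x1', real=True)
e = sp.symbols('e')                                         # U(1) gauge coupling
A = [sp.Function('A0')(x0, x1), sp.Function('A1')(x0, x1)]  # gauge field components
psi = sp.Function('psi')(x0, x1)                            # matter field
xs = [x0, x1]

def D(mu, f):
    """Abelian covariant derivative D_mu f = (d_mu - i e A_mu) f."""
    return sp.diff(f, xs[mu]) - sp.I * e * A[mu] * f

# Commutator acting on psi; the ordinary-derivative terms cancel identically.
comm = sp.expand(D(0, D(1, psi)) - D(1, D(0, psi)))
F01 = sp.diff(A[1], x0) - sp.diff(A[0], x1)   # field strength F_01

# [D_0, D_1] psi = -i e F_01 psi
assert sp.simplify(comm + sp.I * e * F01 * psi) == 0
```

The non-abelian case works the same way, with an extra $-ig_{s}[T^{A}G^{A}_{\mu},T^{B}G^{B}_{\nu}]$ commutator term building up the full $G^{A}_{\mu\nu}$.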
Now we are left with the classes $\psi^{2}X^{2}$ and $\psi^{4}D_{\mu}$ to examine further. Working with chiral fermion fields, we see that the monomial operators in these classes are all non-Hermitian, owing to the particular Lorentz structures that can be formed. So in the following we work out one half of them, while the other half can be obtained by Hermitian conjugation. We start with the class $\psi^{2}X^{2}$, which may take the following forms for generic chiral fermion fields $\psi_{1,2}$:
$$\displaystyle{\cal O}_{\psi F1}=(\overline{\psi_{1}}\psi_{2})F_{\mu\nu}F^{\mu%
\nu},$$
$$\displaystyle{\cal O}_{\psi F2}=(\overline{\psi_{1}}\psi_{2})F_{\mu\nu}\tilde{%
F}^{\mu\nu},$$
$$\displaystyle{\cal O}_{\psi FG1}=(\overline{\psi_{1}}T^{A}\psi_{2})F_{\mu\nu}G%
^{A\mu\nu},$$
$$\displaystyle{\cal O}_{\psi FG2}=(\overline{\psi_{1}}T^{A}\psi_{2})F_{\mu\nu}%
\tilde{G}^{A\mu\nu},$$
$$\displaystyle{\cal O}_{\psi FG3}=(\overline{\psi_{1}}T^{A}\sigma_{\mu\nu}\psi_%
{2})F^{\mu\alpha}G^{A\nu}_{\alpha},$$
$$\displaystyle{\cal O}_{\psi G1}=(\overline{\psi_{1}}\psi_{2})G^{A}_{\mu\nu}G^{%
A\mu\nu},$$
$$\displaystyle{\cal O}_{\psi G2}=(\overline{\psi_{1}}\psi_{2})G^{A}_{\mu\nu}%
\tilde{G}^{A\mu\nu},$$
$$\displaystyle{\cal O}_{\psi G3}=d_{ABC}(\overline{\psi_{1}}T^{A}\psi_{2})G^{B}%
_{\mu\nu}G^{C\mu\nu},$$
$$\displaystyle{\cal O}_{\psi G4}=d_{ABC}(\overline{\psi_{1}}T^{A}\psi_{2})G^{B}%
_{\mu\nu}\tilde{G}^{C\mu\nu},$$
$$\displaystyle{\cal O}_{\psi G5}=f_{ABC}(\overline{\psi_{1}}\sigma_{\mu\nu}T^{A%
}\psi_{2})G^{B\mu\alpha}G^{C\nu}_{\alpha},$$
(3)
where the field strength dual is $\tilde{X}^{\mu\nu}=\epsilon^{\mu\nu\rho\sigma}X_{\rho\sigma}/2$, $f_{ABC}$ is the structure constant of $SU(3)$, and $d_{ABC}$ is the symmetric invariant appearing in the anticommutator of generators in the fundamental representation, $\{T^{A},T^{B}\}=\delta^{AB}/3+d^{ABC}T^{C}$. The other possible operators either vanish or can be reduced to the above ones,
$$\displaystyle(\overline{\psi_{1}}\sigma_{\mu\nu}\psi_{2})X^{\mu\alpha}X^{\nu}_%
{~{}\alpha}=0,$$
$$\displaystyle(\overline{\psi_{1}}\sigma_{\mu\nu}\psi_{2})X^{\mu\alpha}\tilde{X%
}^{\nu}_{~{}\alpha}=0,~{}~{}~{}X=F,G^{A},$$
$$\displaystyle(\overline{\psi_{1}}T^{A}\sigma_{\mu\nu}P_{\pm}\psi_{2})F^{\mu%
\alpha}\tilde{G}^{A\nu}_{~{}~{}~{}\alpha}=\pm i{\cal O}_{\psi FG3},$$
$$\displaystyle f_{ABC}(\overline{\psi_{1}}\sigma_{\mu\nu}T^{A}P_{\pm}\psi_{2})G%
^{B\mu\alpha}\tilde{G}^{C\nu}_{~{}~{}~{}\alpha}=\pm i{\cal O}_{\psi G5},$$
$$\displaystyle f_{ABC}(\overline{\psi_{1}}T^{A}\psi_{2})G^{B}_{\mu\nu}G^{C\mu%
\nu}=0,$$
$$\displaystyle f_{ABC}(\overline{\psi_{1}}T^{A}\psi_{2})G^{B}_{\mu\nu}\tilde{G}%
^{C\mu\nu}=0,$$
(4)
where the chiral projectors $P_{\pm}=(1\pm\gamma^{5})/2$ are understood to appear also in ${\cal O}_{\psi FG3}$ and ${\cal O}_{\psi G5}$ on the right-hand side of equation (4), and the reduction makes use of the following identities,
$$\displaystyle\sigma_{\mu\nu}P_{\pm}=\mp\frac{i}{2}\epsilon_{\mu\nu\rho\sigma}%
\sigma^{\rho\sigma}P_{\pm},$$
$$\displaystyle\epsilon_{\mu\nu\rho\sigma}\epsilon^{\alpha\beta\gamma\sigma}=-g^%
{[\alpha}_{\mu}g^{\beta}_{\nu}g^{\gamma]}_{\rho},$$
(5)
with $[\dots]$ indicating antisymmetrization of the enclosed indices. With equation (3) it is easy to identify the relevant fields $\psi_{1,2}$ and find the complete set of operators in this class. These operators conserve baryon number ($\Delta B=0$) but can either conserve lepton number ($\Delta L=0$) or violate it by two units ($\Delta L=\pm 2$); they are displayed in tables 1 and 2, respectively.
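The identities in equation (5), and the vanishing of the $\sigma_{\mu\nu}X^{\mu\alpha}X^{\nu}_{~{}\alpha}$ contractions in equation (4), can be checked numerically with explicit Dirac matrices. The sketch below is a convention-dependent check (assuming NumPy; Dirac representation, metric $\mathrm{diag}(+,-,-,-)$, $\epsilon^{0123}=+1$ — the overall sign of the duality relation flips with the $\epsilon$ sign convention):

```python
import numpy as np
from itertools import permutations

# Dirac representation, metric g = diag(+,-,-,-), eps^{0123} = +1.
I2, Z2 = np.eye(2), np.zeros((2, 2))
pauli = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
gam = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)] + \
      [np.block([[Z2, sk], [-sk, Z2]]) for sk in pauli]
g5 = 1j * gam[0] @ gam[1] @ gam[2] @ gam[3]
g = np.diag([1., -1., -1., -1.])

# sigma^{mu nu} = (i/2) [gamma^mu, gamma^nu]
sig = [[0.5j * (gam[m] @ gam[n] - gam[n] @ gam[m]) for n in range(4)]
       for m in range(4)]

def perm_sign(p):
    return np.sign(np.prod([p[j] - p[i] for i in range(4) for j in range(i + 1, 4)]))

eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = perm_sign(p)

# Duality: sigma^{mu nu} gamma5 = (i/2) eps^{mu nu rho tau} sigma_{rho tau};
# projecting with P_pm = (1 pm gamma5)/2 gives the first identity of eq. (5),
# up to the overall sign fixed by the epsilon convention.
for m in range(4):
    for n in range(4):
        rhs = 0.5j * sum(eps[m, n, r, t] * g[r, r] * g[t, t] * sig[r][t]
                         for r in range(4) for t in range(4))
        assert np.allclose(sig[m][n] @ g5, rhs)

# Contraction: eps_{mu nu rho tau} eps^{alpha beta gamma tau} equals minus the
# antisymmetrized deltas; eps with all indices lowered is -eps since det g = -1.
d = np.eye(4)
gen = (np.einsum('am,bn,cr->mnrabc', d, d, d) + np.einsum('an,br,cm->mnrabc', d, d, d)
     + np.einsum('ar,bm,cn->mnrabc', d, d, d) - np.einsum('am,br,cn->mnrabc', d, d, d)
     - np.einsum('an,bm,cr->mnrabc', d, d, d) - np.einsum('ar,bn,cm->mnrabc', d, d, d))
assert np.allclose(np.einsum('mnrt,abct->mnrabc', -eps, eps), -gen)

# Vanishing of sigma_{mu nu} X^{mu a} X^{nu}_a for any antisymmetric X:
# the bilinear in X is symmetric in (mu, nu), so the contraction is zero.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)); X = X - X.T      # generic antisymmetric X
S = np.einsum('ma,ab,nb->mn', X, g, X)            # X^{mu a} X^{nu}_a
contr = sum(g[m, m] * g[n, n] * S[m, n] * sig[m][n]
            for m in range(4) for n in range(4))
assert np.allclose(S, S.T) and np.allclose(contr, np.zeros((4, 4)))
```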
For the class $\psi^{4}D_{\mu}$, there are two possible Lorentz structures,
$$\displaystyle(\overline{\psi_{1}}\sigma^{\mu\nu}\psi_{2})(\overline{\psi_{3}}%
\gamma_{[\mu}\overleftrightarrow{D_{\nu]}}\psi_{4}),$$
$$\displaystyle(\overline{\psi_{1}}\gamma^{\mu}\psi_{2})(\overline{\psi_{3}}i%
\overleftrightarrow{D_{\mu}}\psi_{4}),$$
(6)
where $\overline{A}\overleftrightarrow{D_{\mu}}B=\overline{A}D_{\mu}B-\overline{A}\overleftarrow{D_{\mu}}B$. However, the two structures are not independent: the tensor structure can be reduced to the vector one plus dim-6 operators (boxed as dim-6 below) with the aid of the EoM, IBP, and the Fierz identities (FI):
$$\displaystyle(\overline{\psi_{1}}\sigma^{\mu\nu}\psi_{2})(\overline{\psi_{3}}%
\gamma_{[\mu}i\overleftrightarrow{D_{\nu]}}\psi_{4})$$
(7)
$$\displaystyle\xlongequal{\rm IBP}$$
$$\displaystyle+2iD_{\nu}(\overline{\psi_{1}}\sigma^{\mu\nu}\psi_{2})(\overline{%
\psi_{3}}\gamma_{\mu}\psi_{4})+4(\overline{\psi_{1}}\sigma^{\mu\nu}\psi_{2})(%
\overline{\psi_{3}}\gamma_{\mu}iD_{\nu}\psi_{4})+\framebox{\rm T}$$
$$\displaystyle\xlongequal{\rm EoM}$$
$$\displaystyle-2(\overline{\psi_{1}}i\overleftrightarrow{D_{\mu}}\psi_{2})(%
\overline{\psi_{3}}\gamma^{\mu}\psi_{4})+4(\overline{\psi_{1}}\gamma^{\mu}%
\gamma^{\nu}\psi_{2})(\overline{\psi_{3}}\gamma_{\mu}iD_{\nu}\psi_{4})+%
\framebox{dim-6}$$
$$\displaystyle\xlongequal[\rm EoM]{\rm FI,~{}IBP}$$
$$\displaystyle\begin{cases}-2(\overline{\psi_{1}}i\overleftrightarrow{D_{\mu}}P%
_{\pm}\psi_{2})(\overline{\psi_{3}}\gamma^{\mu}P_{\pm}\psi_{4})-4(\overline{%
\psi_{1}}i\overleftrightarrow{D_{\mu}}P_{\pm}\psi_{4})(\overline{\psi_{3}}%
\gamma^{\mu}P_{\pm}\psi_{2})+\framebox{T}+\framebox{dim-6},\\
-2(\overline{\psi_{1}}i\overleftrightarrow{D_{\mu}}P_{\pm}\psi_{2})(\overline{%
\psi_{3}}\gamma^{\mu}P_{\mp}\psi_{4})+\framebox{dim-6}.\end{cases}$$
In the second step we have used the relation $\sigma^{\mu\nu}=i\gamma^{\mu}\gamma^{\nu}-ig^{\mu\nu}=ig^{\mu\nu}-i\gamma^{\nu}\gamma^{\mu}$, and in the last step we have distinguished between the two cases in which $\psi_{2,4}$ have the same or opposite chiralities in order to apply the FIs:
$$\displaystyle(\overline{\psi_{1}}\gamma^{\mu}\gamma^{\nu}P_{\pm}\psi_{2})(%
\overline{\psi_{3}}\gamma_{\mu}iD_{\nu}P_{\pm}\psi_{4})=$$
$$\displaystyle-2(\overline{\psi_{1}}iD_{\mu}P_{\pm}\psi_{4})(\overline{\psi_{3}%
}\gamma^{\mu}P_{\pm}\psi_{2}),$$
$$\displaystyle(\overline{\psi_{1}}\gamma^{\mu}\gamma^{\nu}P_{\pm}\psi_{2})(%
\overline{\psi_{3}}\gamma_{\mu}iD_{\nu}P_{\mp}\psi_{4})=$$
$$\displaystyle 2(\overline{\psi_{1}}P_{\pm}\psi_{2})(\overline{\psi_{3}}i\not{D%
}P_{\mp}\psi_{4})+2(\overline{\psi_{1}}i\not{D}P_{\mp}\psi_{4})(\overline{\psi%
_{3}}P_{\pm}\psi_{2}).$$
(8)
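The gamma-matrix relation quoted in the second step can be verified numerically in an explicit representation; the following sketch (assuming NumPy; Dirac representation, metric $\mathrm{diag}(+,-,-,-)$) checks $\sigma^{\mu\nu}=i\gamma^{\mu}\gamma^{\nu}-ig^{\mu\nu}=ig^{\mu\nu}-i\gamma^{\nu}\gamma^{\mu}$ for all index pairs:

```python
import numpy as np

# Dirac matrices in the Dirac representation, metric g = diag(+,-,-,-).
I2, Z2 = np.eye(2), np.zeros((2, 2))
pauli = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
gam = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)] + \
      [np.block([[Z2, sk], [-sk, Z2]]) for sk in pauli]
g = np.diag([1., -1., -1., -1.])
I4 = np.eye(4)

for m in range(4):
    for n in range(4):
        sig_mn = 0.5j * (gam[m] @ gam[n] - gam[n] @ gam[m])
        # Both forms follow from {gamma^mu, gamma^nu} = 2 g^{mu nu}.
        assert np.allclose(sig_mn, 1j * gam[m] @ gam[n] - 1j * g[m, n] * I4)
        assert np.allclose(sig_mn, 1j * g[m, n] * I4 - 1j * gam[n] @ gam[m])
```

The relation is representation independent, since it only uses the Clifford algebra; the explicit representation merely makes the check concrete.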
Therefore, the tensor structure can be discarded in favor of the vector one in equation (6) when working out all possible operators. For a given field configuration $(\psi_{1},~{}\psi_{2},~{}\psi_{3},~{}\psi_{4})$ fulfilling gauge invariance, one may form several apparently different operators. However, we find that only one of them is independent, by use of IBP, EoM, and the following Fierz transformations [5]:
$$\displaystyle-(\overline{\psi_{1}}\gamma^{\mu}P_{\pm}\psi_{2})(\overline{\psi_%
{3}}P_{\pm}\psi_{4})=$$
$$\displaystyle(\overline{\psi_{1}}\gamma^{\mu}P_{\pm}\psi_{3}^{C})(\overline{%
\psi_{2}^{C}}P_{\pm}\psi_{4})+(\overline{\psi_{1}}\gamma^{\mu}P_{\pm}\psi_{4})%
(\overline{\psi_{3}}P_{\pm}\psi_{2}),$$
$$\displaystyle-(\overline{\psi_{1}}\gamma^{\mu}P_{\pm}\psi_{2})(\overline{\psi_%
{3}}P_{\mp}\psi_{4})=$$
$$\displaystyle(\overline{\psi_{1}}P_{\mp}\psi_{3}^{C})(\overline{\psi_{4}^{C}}%
\gamma^{\mu}P_{\pm}\psi_{2})+(\overline{\psi_{1}}P_{\mp}\psi_{4})(\overline{%
\psi_{3}}\gamma^{\mu}P_{\pm}\psi_{2}),$$
(9)
where the charge-conjugated field is defined as $\psi^{C}=C\bar{\psi}^{{\rm T}}$, with the matrix $C$ satisfying the relations $C^{{\rm T}}=C^{\dagger}=-C$ and $C^{2}=-1$, so that $(\psi^{C})^{C}=\psi$. Considering the above reduction, for a given configuration of fields $\psi_{1,2,3,4}\in\{u_{L/R},d_{L/R},e_{L/R},\nu\}$, one can write down the corresponding gauge invariant operator. The final complete set of operators in this class is shown in the remaining parts of tables 1 and 2, according to their lepton and baryon numbers.
In tables 1 and 2 we also show the number of each operator for general numbers $n_{u}$ of up-type quarks and $n_{f}$ of down-type quarks, neutral leptons, and charged leptons. Compared with the dim-7 operators in the SMEFT [4, 5] and its sterile neutrino extension $\nu$SMEFT [28], which only have $(\Delta L,\Delta B)=(2,0),~{}(1,-1)$, the dim-7 operators in the LEFT have the additional sectors $(\Delta L,\Delta B)=(0,0),~{}(1,1)$. In counting independent operators in each sector we have taken into account symmetries in their flavor indices. In the sector with $(\Delta L,\Delta B)=(0,0)$, only the operators ${\cal O}_{eeD1}^{prst}$ and ${\cal O}_{eeD2}^{prst}$ have flavor symmetries: they are antisymmetric under $p\leftrightarrow s$ and $r\leftrightarrow t$, respectively, up to dim-6 terms by EoM, thus reducing the number of their independent components. In the sector with $(\Delta L,\Delta B)=(2,0)$, the operators ${\cal O}_{\nu F1,2}^{pr}$ and ${\cal O}_{\nu G1,2}^{pr}$ are symmetric under $p\leftrightarrow r$; ${\cal O}_{e\nu D1,2}^{prst}$, ${\cal O}_{d\nu D1,2}^{prst}$, and ${\cal O}_{u\nu D1,2}^{prst}$ are antisymmetric in the neutrino indices $s,~{}t$; and ${\cal O}_{\nu\nu D}^{prst}$ is totally antisymmetric in the neutrino indices $r,~{}s,~{}t$. In the sector with $(\Delta L,\Delta B)=(1,-1)$, the operators ${\cal O}_{u\nu dD1,2}^{prst}$ and ${\cal O}_{dedD2,3}^{prst}$ are symmetric under $s\leftrightarrow t$, while ${\cal O}_{dedD1,4}^{prst}$ are totally symmetric in the indices $p,~{}s,~{}t$ of the three down-type quark fields, up to dim-6 terms by EoM. In the last sector with $(\Delta L,\Delta B)=(1,1)$, the operators ${\cal O}_{u\nu dD3,4}^{prst}$ and ${\cal O}_{deuD1,2,3,4}^{prst}$ are all symmetric under $s\leftrightarrow t$. We have confirmed our count of independent operators by the Hilbert series method of Ref. [7].
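The flavor-symmetry reductions above amount to standard index counting: a symmetric (antisymmetric) pair of indices ranging over $n$ flavors contributes $\binom{n+1}{2}$ ($\binom{n}{2}$) independent components, and a totally symmetric (antisymmetric) triple contributes $\binom{n+2}{3}$ ($\binom{n}{3}$). A small sketch of this counting for a few of the cases quoted above (component counts only, not the full table totals, which also depend on the remaining free indices of each operator):

```python
from math import comb

def sym2(n):  return comb(n + 1, 2)   # symmetric pair of flavor indices
def asym2(n): return comb(n, 2)       # antisymmetric pair
def sym3(n):  return comb(n + 2, 3)   # totally symmetric triple
def asym3(n): return comb(n, 3)       # totally antisymmetric triple

n_f = 3   # flavors of neutrinos / charged leptons / down-type quarks

# O_{nu F1,2}^{pr}: symmetric under p <-> r  ->  6 components each
assert sym2(n_f) == 6
# O_{e nu D1,2}^{prst}: antisymmetric in the neutrino indices s, t
# -> 3 antisymmetric (s,t) pairs x 9 free (p,r) = 27 components each
assert asym2(n_f) * n_f**2 == 27
# O_{nu nu D}^{prst}: totally antisymmetric in r, s, t -> 1 triple x 3 (p) = 3
assert asym3(n_f) * n_f == 3
# O_{dedD1,4}^{prst}: totally symmetric in the down-quark indices p, s, t
# -> 10 symmetric triples x 3 (lepton index r) = 30 components each
assert sym3(n_f) * n_f == 30
```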
For the SM case with $n_{u}=2,~{}n_{f}=3$, and including Hermitian conjugates of the operators, there are in total $3168|^{\Delta L=0}_{\Delta B=0}+750|^{\Delta L=\pm 2}_{\Delta B=0}+588|^{\Delta L=\pm 1}_{\Delta B=\mp 1}+712|^{\Delta L=\pm 1}_{\Delta B=\pm 1}=5218$ independent dim-7 operators in the LEFT.
3 Matching with the SMEFT up to dimension 7
Although the SMEFT is defined above the electroweak scale $\Lambda_{\rm EW}$ and is closer to the new physics at the scale $\Lambda_{\rm NP}$, we have to employ the LEFT, defined below $\Lambda_{\rm EW}$, when dealing with low energy processes. The new physics information parameterized in the SMEFT is then inherited by the LEFT through matching conditions and renormalization group effects. Previously, the tree-level matching from the SMEFT effective interactions up to dim-6 operators to the LEFT, also up to dim-6 operators, was done in Ref. [30]. In this section we extend this matching to the dim-7 operators in both the SMEFT and the LEFT, based on the basis of dim-7 operators in the LEFT described in section 2 and the basis of dim-7 operators in the SMEFT established in Ref. [5] and further refined in Ref. [37]. This result is necessary for a consistent study of new physics effects at low energy beyond the leading order.
The matching is done by integrating out the SM heavy particles $W^{\pm},~{}Z,~{}h,~{}t$ from the SMEFT in the electroweak symmetry broken phase. Since the effective interactions of higher-dimensional operators in the SMEFT are suppressed by more powers of $\Lambda_{\rm NP}$, which is much larger than $\Lambda_{\rm EW}$, we work to linear order in them. Then the effective interaction of a dim-$m$ ($m\geq 5$) operator in the SMEFT can induce an effective interaction in the LEFT of a dim-$n$ operator, with the correspondence of Wilson coefficients:
$$\displaystyle\textrm{SMEFT: }C_{\rm SMEFT}^{{\rm dim}-m}\sim{1\over\Lambda_{%
\rm NP}^{m-4}}$$
$$\displaystyle\Rightarrow$$
$$\displaystyle\textrm{LEFT: }C_{\rm LEFT}^{{\rm dim}-n}\sim{1\over\Lambda_{\rm
NP%
}^{m-4}\Lambda_{\rm EW}^{n-m}},$$
(10)
where we do not display the SM couplings. Since $t$ couples either to another heavy particle ($W^{\pm}$) or in pairs to the heavy neutral particles ($Z,~{}h$), it cannot contribute to the tree-level matching up to dimension 7. Apart from its couplings to the heavy particles ($W^{\pm},~{}Z,~{}t$), $h$ couples only very weakly to the light fermions. We therefore ignore these small Yukawa couplings, so that the Higgs doublet field $H$ can simply be replaced by its vacuum expectation value (vev) $v/\sqrt{2}$ for the purpose of the matching calculation. This leaves us with only the integration out of the weak gauge bosons $W^{\pm},~{}Z$. Inspection of the effective interactions from the dim-6 and dim-7 operators in the SMEFT shows that a single $W^{\pm}$ or $Z$ propagator is required to connect an SMEFT vertex to an SM vertex to arrive at an LEFT operator up to dim-7.
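The power counting in equation (10) can be made concrete with a toy numerical estimate (illustrative only: SM couplings and $O(1)$ Wilson coefficients are set to unity, and the scale choices are hypothetical):

```python
def left_coeff_scale(m, n, lam_np, lam_ew):
    """Schematic size of a dim-n LEFT coefficient induced by a dim-m SMEFT
    operator, as in eq. (10): 1 / (Lambda_NP^(m-4) * Lambda_EW^(n-m))."""
    return 1.0 / (lam_np ** (m - 4) * lam_ew ** (n - m))

lam_np, lam_ew = 1.0e4, 1.0e2   # GeV: 10 TeV new physics, electroweak scale

# A dim-7 SMEFT operator matched onto a dim-6 LEFT operator picks up a factor
# Lambda_EW relative to the dim-7 -> dim-7 matching (n - m = -1 vs 0):
r = left_coeff_scale(7, 6, lam_np, lam_ew) / left_coeff_scale(7, 7, lam_np, lam_ew)
assert abs(r - lam_ew) < 1e-6 * lam_ew
```

This is only the parametric scaling; the actual coefficients carry the gauge couplings, CKM factors, and numerical prefactors listed in the matching results below equation (10).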
We adopt the Warsaw basis [3] for the dim-6 operators in the SMEFT, and for the dim-7 operators the basis in Ref. [37], which refines our previous basis [5] and is reproduced in table 3. The bases of dim-5 and dim-6 operators in the LEFT are taken from Ref. [30], while the basis of dim-7 operators is listed in tables 1 and 2. Our matching results are recorded as follows. While the matching to dim-7 operators in the LEFT is new, the matching results up to dim-6 operators in the LEFT are to be added to those in Ref. [30] when both baryon and lepton numbers match.
$\scriptscriptstyle\blacksquare$ Matching from dim-5/7 operators in SMEFT to dim-3 operators in LEFT
$$\displaystyle{\cal O}_{\nu}^{pr}=$$
$$\displaystyle(\overline{\nu^{C}_{p}}\nu_{r}),$$
$$\displaystyle C_{\nu}^{pr}=$$
$$\displaystyle+\frac{1}{2}C_{5}^{pr}v^{2}+\frac{1}{4}C_{LH}^{pr}v^{4},$$
(11)
where $C_{5}^{pr}$ is the Wilson coefficient of the dim-5 Weinberg operator ${\cal O}_{5}=\epsilon_{ij}\epsilon_{mn}(\overline{L^{C,i}}L^{m})H^{j}H^{n}$.
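As a rough numerical illustration of the dim-5 term in equation (11) (an assumption-laden estimate: $C_{5}\sim 1/\Lambda_{\rm NP}$ with an $O(1)$ numerator, the dim-7 term dropped, $v\simeq 246$ GeV, and the overall normalization of the induced Majorana mass taken at face value):

```python
v = 246.0            # GeV, Higgs vev
lam_np = 1.0e14      # GeV, assumed scale of lepton number violation
C5 = 1.0 / lam_np    # dim-5 Weinberg-operator coefficient with O(1) numerator

# Dim-5 contribution to C_nu in eq. (11): (1/2) C_5 v^2, a seesaw-like scale
# for the induced Majorana mass (normalization conventions vary by O(1)).
m_nu_gev = 0.5 * C5 * v ** 2
m_nu_ev = m_nu_gev * 1.0e9
assert 0.25 < m_nu_ev < 0.35   # ~0.3 eV for these assumed inputs
```

This reproduces the familiar statement that lepton number violation near $10^{14}$ GeV yields sub-eV Majorana neutrino masses.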
$\scriptscriptstyle\blacksquare$ Matching from dim-7 operators in SMEFT to dim-5 operators in LEFT
$$\displaystyle{\cal O}_{\nu\gamma}^{pr}=$$
$$\displaystyle(\overline{\nu^{C}_{p}}\sigma_{\mu\nu}\nu_{r})F^{\mu\nu},$$
$$\displaystyle C_{\nu\gamma}^{pr}=$$
$$\displaystyle+\frac{1}{4}ev^{2}\Big{(}2C_{LHB}^{pr}+C_{LHW}^{rp}-C_{LHW}^{pr}%
\Big{)},$$
(12)
where the dim-5 Majorana neutrino dipole moment operator vanishes for identical flavors.
$\scriptscriptstyle\blacksquare$ Matching from dim-7 operators in SMEFT to dim-6 operators in LEFT
•
Operators with $(\Delta L,\Delta B)=(2,0)$:
$$\displaystyle{\cal O}_{e\nu 1}^{S,prst}=(\overline{e_{Rp}}e_{Lr})(\overline{%
\nu^{C}_{s}}\nu_{t}),$$
$$\displaystyle C_{e\nu 1}^{S,prst}=-{\sqrt{2}v\over 8}\big{(}2C_{\bar{e}LLLH}^{%
prst}+C_{\bar{e}LLLH}^{psrt}+s\leftrightarrow t\big{)},$$
$$\displaystyle{\cal O}_{e\nu 2}^{S,prst}=(\overline{e_{Lp}}e_{Rr})(\overline{%
\nu^{C}_{s}}\nu_{t}),$$
$$\displaystyle C_{e\nu 2}^{S,prst}=-{\sqrt{2}v\over 2}\big{(}C_{LeHD}^{sr}%
\delta^{tp}+C_{LeHD}^{tr}\delta^{sp}\big{)},$$
$$\displaystyle{\cal O}_{e\nu}^{T,prst}=(\overline{e_{Rp}}\sigma_{\mu\nu}e_{Lr})%
(\overline{\nu^{C}_{s}}\sigma^{\mu\nu}\nu_{t}),$$
$$\displaystyle C_{e\nu}^{T,prst}=+{\sqrt{2}v\over 32}\big{(}C_{\bar{e}LLLH}^{psrt}-C_{\bar{e}LLLH}^{ptrs}\big{)},$$
$$\displaystyle{\cal O}_{d\nu}^{S,prst}=(\overline{d_{Rp}}d_{Lr})(\overline{\nu^%
{C}_{s}}\nu_{t}),$$
$$\displaystyle C_{d\nu}^{S,prst}=-{\sqrt{2}v\over 4}V_{xr}\big{(}C_{\bar{d}QLLH%
1}^{pxst}+C_{\bar{d}QLLH1}^{pxts}\big{)},$$
$$\displaystyle{\cal O}_{d\nu}^{T,prst}=(\overline{d_{Rp}}\sigma_{\mu\nu}d_{Lr})%
(\overline{\nu^{C}_{s}}\sigma^{\mu\nu}\nu_{t}),$$
$$\displaystyle C_{d\nu}^{T,prst}=-{\sqrt{2}v\over 4}V_{xr}\big{(}C_{\bar{d}QLLH%
2}^{pxst}-C_{\bar{d}QLLH2}^{pxts}\big{)},$$
$$\displaystyle{\cal O}_{u\nu}^{S,prst}=(\overline{u_{Lp}}u_{Rr})(\overline{\nu^%
{C}_{s}}\nu_{t}),$$
$$\displaystyle C_{u\nu}^{S,prst}=+{\sqrt{2}v\over 4}\big{(}C_{\bar{Q}uLLH}^{%
prst}+C_{\bar{Q}uLLH}^{prts}\big{)},$$
$$\displaystyle{\cal O}_{du\nu\,e1}^{S,prst}=(\overline{d_{Rp}}u_{Lr})(\overline%
{\nu^{C}_{s}}e_{Lt}),$$
$$\displaystyle C_{du\nu\,e1}^{S,prst}=+{\sqrt{2}v\over 2}C_{\bar{d}QLLH1}^{prts},$$
$$\displaystyle{\cal O}_{du\nu\,e2}^{S,prst}=(\overline{d_{Lp}}u_{Rr})(\overline%
{\nu^{C}_{s}}e_{Lt}),$$
$$\displaystyle C_{du\nu\,e2}^{S,prst}=+{\sqrt{2}v\over 2}V_{xp}^{*}C_{\bar{Q}%
uLLH}^{xrts},$$
$$\displaystyle{\cal O}_{du\nu\,e}^{T,prst}=(\overline{d_{Rp}}\sigma_{\mu\nu}u_{%
Lr})(\overline{\nu^{C}_{s}}\sigma^{\mu\nu}e_{Lt}),$$
$$\displaystyle C_{du\nu\,e}^{T,prst}=-{\sqrt{2}v\over 2}C_{\bar{d}QLLH2}^{prts},$$
$$\displaystyle{\cal O}_{du\nu\,e1}^{V,prst}=(\overline{d_{Lp}}\gamma_{\mu}u_{Lr%
})(\overline{\nu^{C}_{s}}\gamma^{\mu}e_{Rt}),$$
$$\displaystyle C_{du\nu\,e1}^{V,prst}=+{\sqrt{2}v\over 2}V_{rp}^{*}C_{LeHD}^{st},$$
$$\displaystyle{\cal O}_{du\nu\,e2}^{V,prst}=(\overline{d_{Rp}}\gamma_{\mu}u_{Rr%
})(\overline{\nu^{C}_{s}}\gamma^{\mu}e_{Rt}),$$
$$\displaystyle C_{du\nu\,e2}^{V,prst}=+{\sqrt{2}v\over 2}C_{\bar{d}uLeH}^{prst},$$
(13)
where $V_{pr}$ is the CKM matrix arising from the SM charged current weak interactions. These matching results can contribute to nuclear neutrinoless double $\beta$ decays and LNV meson decays via a long-distance mechanism [19, 35, 36, 37].
•
Operators with $(\Delta L,\Delta B)=(-1,1)$:
$$\displaystyle{\cal O}_{\nu\,dud1}^{S,prst}=\epsilon_{\alpha\beta\gamma}(%
\overline{\nu_{p}}d_{Rr}^{\alpha})(\overline{u^{\beta C}_{Rs}}d_{Rt}^{\gamma}),$$
$$\displaystyle C_{\nu\,dud1}^{S,prst}=+{\sqrt{2}v\over 2}C_{\bar{L}dud\tilde{H}%
}^{prst},$$
$$\displaystyle{\cal O}_{\nu\,dud2}^{S,prst}=\epsilon_{\alpha\beta\gamma}(%
\overline{\nu_{p}}d_{Rr}^{\alpha})(\overline{u^{\beta C}_{Ls}}d_{Lt}^{\gamma}),$$
$$\displaystyle C_{\nu\,dud2}^{S,prst}=-{\sqrt{2}v\over 2}V_{xt}C_{\bar{L}dQQ%
\tilde{H}}^{prsx},$$
$$\displaystyle{\cal O}_{eddd1}^{S,prst}=\epsilon_{\alpha\beta\gamma}(\overline{%
e_{Lp}}d_{Rr}^{\alpha})(\overline{d^{\beta C}_{Rs}}d_{Rt}^{\gamma}),$$
$$\displaystyle C_{eddd1}^{S,prst}=+{\sqrt{2}v\over 2}C_{\bar{L}dddH}^{prst},$$
$$\displaystyle{\cal O}_{eddd2}^{S,prst}=\epsilon_{\alpha\beta\gamma}(\overline{%
e_{Rp}}d_{Lr}^{\alpha})(\overline{d^{\beta C}_{Rs}}d_{Rt}^{\gamma}),$$
$$\displaystyle C_{eddd2}^{S,prst}=-{\sqrt{2}v\over 2}V_{xr}C_{\bar{e}Qdd\tilde{%
H}}^{pxst},$$
$$\displaystyle{\cal O}_{eddd3}^{S,prst}=\epsilon_{\alpha\beta\gamma}(\overline{%
e_{Lp}}d_{Rr}^{\alpha})(\overline{d^{\beta C}_{Ls}}d_{Lt}^{\gamma}),$$
$$\displaystyle C_{eddd3}^{S,prst}=-{\sqrt{2}v\over 4}V_{xs}V_{yt}\big{(}C_{\bar%
{L}dQQ\tilde{H}}^{prxy}-C_{\bar{L}dQQ\tilde{H}}^{pryx}\big{)}.$$
(14)
These operators can induce ordinary nucleon decays, such as $p\to\nu\pi^{+}$ [5] and $n\to e\pi^{+}$, which change baryon and lepton numbers by one unit while keeping their sum conserved.
$\scriptscriptstyle\blacksquare$ Matching from dim-7 operators in SMEFT to dim-7 operators in LEFT
•
Operators with $(\Delta L,\Delta B)=(2,0)$:
$$\displaystyle{\cal O}_{\nu\nu D}^{prst}=(\overline{\nu_{p}}\gamma^{\mu}\nu_{r}%
)\left(\overline{\nu^{C}_{s}}i\overleftrightarrow{\partial_{\mu}}\nu_{t}\right),$$
$$\displaystyle C_{\nu\nu D}^{prst}=-\delta^{pr}C_{LX}^{st},$$
$$\displaystyle{\cal O}_{e\nu D1}^{prst}=(\overline{e_{Lp}}\gamma^{\mu}e_{Lr})%
\left(\overline{\nu^{C}_{s}}i\overleftrightarrow{\partial_{\mu}}\nu_{t}\right),$$
$$\displaystyle C_{e\nu D1}^{prst}=+2\left({1\over 2}-s_{W}^{2}\right)\delta^{pr%
}C_{LX}^{st}$$
$$\displaystyle+\left[\delta^{pt}\left(2C_{LHW}^{sr}+C_{LDH1}^{sr}\right)-s%
\leftrightarrow t\right],$$
$$\displaystyle{\cal O}_{e\nu D2}^{prst}=(\overline{e_{Rp}}\gamma^{\mu}e_{Rr})%
\left(\overline{\nu^{C}_{s}}i\overleftrightarrow{\partial_{\mu}}\nu_{t}\right),$$
$$\displaystyle C_{e\nu D2}^{prst}=-2s_{W}^{2}\delta^{pr}C_{LX}^{st}$$
$$\displaystyle{\cal O}_{d\nu D1}^{prst}=(\overline{d_{Lp}}\gamma^{\mu}d_{Lr})%
\left(\overline{\nu^{C}_{s}}i\overleftrightarrow{\partial_{\mu}}\nu_{t}\right),$$
$$\displaystyle C_{d\nu D1}^{prst}=+2\left({1\over 2}-{1\over 3}s_{W}^{2}\right)%
\delta^{pr}C_{LX}^{st},$$
$$\displaystyle{\cal O}_{d\nu D2}^{prst}=(\overline{d_{Rp}}\gamma^{\mu}d_{Rr})%
\left(\overline{\nu^{C}_{s}}i\overleftrightarrow{\partial_{\mu}}\nu_{t}\right),$$
$$\displaystyle C_{d\nu D2}^{prst}=-{2\over 3}s_{W}^{2}\delta^{pr}C_{LX}^{st},$$
$$\displaystyle{\cal O}_{u\nu D1}^{prst}=(\overline{u_{Lp}}\gamma^{\mu}u_{Lr})%
\left(\overline{\nu^{C}_{s}}i\overleftrightarrow{\partial_{\mu}}\nu_{t}\right),$$
$$\displaystyle C_{u\nu D1}^{prst}=-2\left({1\over 2}-{2\over 3}s_{W}^{2}\right)%
\delta^{pr}C_{LX}^{st},$$
$$\displaystyle{\cal O}_{u\nu D2}^{prst}=(\overline{u_{Rp}}\gamma^{\mu}u_{Rr})\left(\overline{\nu^{C}_{s}}i\overleftrightarrow{\partial_{\mu}}\nu_{t}\right),$$
$$\displaystyle C_{u\nu D2}^{prst}=+{4\over 3}s_{W}^{2}\delta^{pr}C_{LX}^{st},$$
$$\displaystyle{\cal O}_{due\nu D1}^{prst}=(\overline{d_{Lp}}\gamma^{\mu}u_{Lr})\left(\overline{e_{Ls}^{C}}i\overleftrightarrow{D_{\mu}}\nu_{t}\right),$$
$$\displaystyle C_{due\nu D1}^{prst}=+2V_{rp}^{*}\big{(}2C_{LHW}^{ts}+C_{LDH1}^{ts}\big{)},$$
$$\displaystyle{\cal O}_{due\nu D2}^{prst}=(\overline{d_{Rp}}\gamma^{\mu}u_{Rr})%
\left(\overline{e_{Ls}^{C}}i\overleftrightarrow{D_{\mu}}\nu_{t}\right),$$
$$\displaystyle C_{due\nu D2}^{prst}=-2C_{\bar{d}uLDL}^{prts},$$
(15)
where $s_{W}=\sin\theta_{W}$ and $c_{W}=\cos\theta_{W}$, with $\theta_{W}$ being the weak mixing angle, and the following shorthand is used,
$$\displaystyle C_{LX}^{st}=2s_{W}^{2}C_{LHB}^{st}+c_{W}^{2}(C_{LHW}^{st}-C_{LHW%
}^{ts}).$$
(16)
•
Operators with $(\Delta L,\Delta B)=(1,-1)$:
$$\displaystyle{\cal O}_{u\nu dD2}^{prst}=$$
$$\displaystyle\epsilon_{\alpha\beta\gamma}(\overline{u_{Lp}^{\alpha}}\gamma^{\mu}\nu_{r})(\overline{d_{Rs}^{\beta}}i\overleftrightarrow{D_{\mu}}d_{Rt}^{\gamma C}),$$
$$\displaystyle C_{u\nu dD2}^{prst}=$$
$$\displaystyle-C_{\bar{L}QdDd}^{rpst*},$$
$$\displaystyle{\cal O}_{dedD2}^{prst}=$$
$$\displaystyle\epsilon_{\alpha\beta\gamma}(\overline{d_{Lp}^{\alpha}}\gamma^{\mu}e_{Lr})(\overline{d_{Rs}^{\beta}}i\overleftrightarrow{D_{\mu}}d_{Rt}^{\gamma C}),$$
$$\displaystyle C_{dedD2}^{prst}=$$
$$\displaystyle-V_{xp}^{*}C_{\bar{L}QdDd}^{rxst*},$$
$$\displaystyle{\cal O}_{dedD4}^{prst}=$$
$$\displaystyle\epsilon_{\alpha\beta\gamma}(\overline{d_{Rp}^{\alpha}}\gamma^{\mu}e_{Rr})(\overline{d_{Rs}^{\beta}}i\overleftrightarrow{D_{\mu}}d_{Rt}^{\gamma C}),$$
$$\displaystyle C_{dedD4}^{prst}=$$
$$\displaystyle-C_{\bar{e}dddD}^{rpst*}.$$
(17)
$\scriptscriptstyle\blacksquare$ Matching from dim-6 operators in SMEFT to dim-7 operators in LEFT
The operators involved in this matching all conserve baryon and lepton numbers. We use the following shorthands:
$$\displaystyle C_{eX}^{st}=c_{W}C_{eW}^{st}+s_{W}C_{eB}^{st},~{}~{}C_{dX}^{st}=%
c_{W}C_{dW}^{st}+s_{W}C_{dB}^{st},~{}~{}C_{uX}^{st}=c_{W}C_{uW}^{st}-s_{W}C_{%
uB}^{st}.$$
(18)
•
Operators in the class $(\bar{L}\gamma^{\mu}L)(\bar{L}iD_{\mu}R)$:
$$\displaystyle{\cal O}_{\nu eD}^{prst}=(\overline{\nu_{p}}\gamma^{\mu}\nu_{r})(%
\overline{e_{Ls}}i\overleftrightarrow{D_{\mu}}e_{Rt}),$$
$$\displaystyle C_{\nu eD}^{prst}=-\frac{\sqrt{2}}{m_{Z}}\delta^{pr}C_{eX}^{st}-%
\frac{2\sqrt{2}}{m_{W}}\delta^{sr}C_{eW}^{pt},$$
$$\displaystyle{\cal O}_{\nu dD}^{prst}=(\overline{\nu_{p}}\gamma^{\mu}\nu_{r})(%
\overline{d_{Ls}}i\overleftrightarrow{D_{\mu}}d_{Rt}),$$
$$\displaystyle C_{\nu dD}^{prst}=-\frac{\sqrt{2}}{m_{Z}}\delta^{pr}V^{*}_{xs}C_%
{dX}^{xt},$$
$$\displaystyle{\cal O}_{\nu uD}^{prst}=(\overline{\nu_{p}}\gamma^{\mu}\nu_{r})(%
\overline{u_{Ls}}i\overleftrightarrow{D_{\mu}}u_{Rt}),$$
$$\displaystyle C_{\nu uD}^{prst}=+\frac{\sqrt{2}}{m_{Z}}\delta^{pr}C_{uX}^{st},$$
$$\displaystyle{\cal O}_{eeD1}^{prst}=(\overline{e_{Lp}}\gamma^{\mu}e_{Lr})(%
\overline{e_{Ls}}i\overleftrightarrow{D_{\mu}}e_{Rt}),$$
$$\displaystyle C_{eeD1}^{prst}=+\frac{2\sqrt{2}}{m_{Z}}\left({1\over 2}-s_{W}^{%
2}\right)\delta^{pr}C_{eX}^{st},$$
$$\displaystyle{\cal O}_{edD1}^{prst}=(\overline{e_{Lp}}\gamma^{\mu}e_{Lr})(%
\overline{d_{Ls}}i\overleftrightarrow{D_{\mu}}d_{Rt}),$$
$$\displaystyle C_{edD1}^{prst}=+\frac{2\sqrt{2}}{m_{Z}}\left({1\over 2}-s_{W}^{%
2}\right)\delta^{pr}V^{*}_{xs}C_{dX}^{xt},$$
$$\displaystyle{\cal O}_{euD1}^{prst}=(\overline{e_{Lp}}\gamma^{\mu}e_{Lr})(%
\overline{u_{Ls}}i\overleftrightarrow{D_{\mu}}u_{Rt}),$$
$$\displaystyle C_{euD1}^{prst}=-\frac{2\sqrt{2}}{m_{Z}}\left({1\over 2}-s_{W}^{%
2}\right)\delta^{pr}C_{uX}^{st},$$
$$\displaystyle{\cal O}_{deD1}^{prst}=(\overline{d_{Lp}}\gamma^{\mu}d_{Lr})(%
\overline{e_{Ls}}i\overleftrightarrow{D_{\mu}}e_{Rt}),$$
$$\displaystyle C_{deD1}^{prst}=+\frac{2\sqrt{2}}{m_{Z}}\left({1\over 2}-{1\over
3%
}s_{W}^{2}\right)\delta^{pr}C_{eX}^{st},$$
$$\displaystyle{\cal O}_{ddD1}^{prst}=(\overline{d_{Lp}}\gamma^{\mu}d_{Lr})(%
\overline{d_{Ls}}i\overleftrightarrow{D_{\mu}}d_{Rt}),$$
$$\displaystyle C_{ddD1}^{prst}=+\frac{2\sqrt{2}}{m_{Z}}\left({1\over 2}-{1\over
3%
}s_{W}^{2}\right)\delta^{pr}V^{*}_{xs}C_{dX}^{xt},$$
$$\displaystyle{\cal O}_{duD1}^{prst}=(\overline{d_{Lp}}\gamma^{\mu}d_{Lr})(%
\overline{u_{Ls}}i\overleftrightarrow{D_{\mu}}u_{Rt}),$$
$$\displaystyle C_{duD1}^{prst}=-\frac{2\sqrt{2}}{m_{Z}}\left({1\over 2}-{1\over
3%
}s_{W}^{2}\right)\delta^{pr}C_{uX}^{st},$$
$$\displaystyle{\cal O}_{duD2}^{prst}=(\overline{d_{Lp}}\gamma^{\mu}d_{Lr}][%
\overline{u_{Ls}}i\overleftrightarrow{D_{\mu}}u_{Rt}),$$
$$\displaystyle C_{duD2}^{prst}=-\frac{2\sqrt{2}}{m_{W}}V_{sr}V^{*}_{xp}C_{uW}^{%
xt},$$
$$\displaystyle{\cal O}_{ueD1}^{prst}=(\overline{u_{Lp}}\gamma^{\mu}u_{Lr})(%
\overline{e_{Ls}}i\overleftrightarrow{D_{\mu}}e_{Rt}),$$
$$\displaystyle C_{ueD1}^{prst}=-\frac{2\sqrt{2}}{m_{Z}}\left({1\over 2}-{2\over
3%
}s_{W}^{2}\right)\delta^{pr}C_{eX}^{st},$$
$$\displaystyle{\cal O}_{udD1}^{prst}=(\overline{u_{Lp}}\gamma^{\mu}u_{Lr})(%
\overline{d_{Ls}}i\overleftrightarrow{D_{\mu}}d_{Rt}),$$
$$\displaystyle C_{udD1}^{prst}=-\frac{2\sqrt{2}}{m_{Z}}\left({1\over 2}-{2\over
3%
}s_{W}^{2}\right)\delta^{pr}V^{*}_{xs}C_{dX}^{xt},$$
$$\displaystyle{\cal O}_{udD2}^{prst}=(\overline{u_{Lp}}\gamma^{\mu}u_{Lr}][%
\overline{d_{Ls}}i\overleftrightarrow{D_{\mu}}d_{Rt}),$$
$$\displaystyle C_{udD2}^{prst}=-\frac{2\sqrt{2}}{m_{W}}V^{*}_{rs}C_{dW}^{pt},$$
$$\displaystyle{\cal O}_{uuD1}^{prst}=(\overline{u_{Lp}}\gamma^{\mu}u_{Lr})(%
\overline{u_{Ls}}i\overleftrightarrow{D_{\mu}}u_{Rt}),$$
$$\displaystyle C_{uuD1}^{prst}=+\frac{2\sqrt{2}}{m_{Z}}\delta^{pr}\left({1\over
2%
}-{2\over 3}s_{W}^{2}\right)C_{uX}^{st},$$
$$\displaystyle{\cal O}_{\nu eduD}^{prst}=(\overline{\nu_{p}}\gamma^{\mu}e_{Lr})%
(\overline{d_{Ls}}i\overleftrightarrow{D_{\mu}}u_{Rt}),$$
$$\displaystyle C_{\nu eduD}^{prst}=+\frac{2\sqrt{2}}{m_{W}}\delta^{pr}V^{*}_{ws%
}C_{uW}^{wt},$$
$$\displaystyle{\cal O}_{e\nu udD}^{prst}=(\overline{e_{Lp}}\gamma^{\mu}\nu_{r})%
(\overline{u_{Ls}}i\overleftrightarrow{D_{\mu}}d_{Rt}),$$
$$\displaystyle C_{e\nu udD}^{prst}=+\frac{2\sqrt{2}}{m_{W}}\delta^{pr}C_{dW}^{%
st},$$
$$\displaystyle{\cal O}_{du\nu eD1}^{prst}=(\overline{d_{Lp}}\gamma^{\mu}u_{Lr})%
(\overline{\nu_{s}}i\overleftrightarrow{D_{\mu}}e_{Rt}),$$
$$\displaystyle C_{du\nu eD1}^{prst}=+\frac{2\sqrt{2}}{m_{W}}V^{*}_{rp}C_{eW}^{%
st}.$$
(19)
•
Operators in the class $(\bar{R}\gamma^{\mu}R)(\bar{L}iD_{\mu}R)$:
$$\displaystyle{\cal O}_{eeD2}^{prst}=$$
$$\displaystyle(\overline{e_{Rp}}\gamma^{\mu}e_{Rr})(\overline{e_{Ls}}i%
\overleftrightarrow{D_{\mu}}e_{Rt}),$$
$$\displaystyle C_{eeD2}^{prst}=$$
$$\displaystyle-\frac{2\sqrt{2}}{m_{Z}}s_{W}^{2}\delta^{pr}C_{eX}^{st},$$
$$\displaystyle{\cal O}_{edD2}^{prst}=$$
$$\displaystyle(\overline{e_{Rp}}\gamma^{\mu}e_{Rr})(\overline{d_{Ls}}i%
\overleftrightarrow{D_{\mu}}d_{Rt}),$$
$$\displaystyle C_{edD2}^{prst}=$$
$$\displaystyle-\frac{2\sqrt{2}}{m_{Z}}s_{W}^{2}\delta^{pr}V^{*}_{xs}C_{dX}^{xt},$$
$$\displaystyle{\cal O}_{euD2}^{prst}=$$
$$\displaystyle(\overline{e_{Rp}}\gamma^{\mu}e_{Rr})(\overline{u_{Ls}}i%
\overleftrightarrow{D_{\mu}}u_{Rt}),$$
$$\displaystyle C_{euD2}^{prst}=$$
$$\displaystyle+\frac{2\sqrt{2}}{m_{Z}}s_{W}^{2}\delta^{pr}C_{uX}^{st},$$
$$\displaystyle{\cal O}_{deD2}^{prst}=$$
$$\displaystyle(\overline{d_{Rp}}\gamma^{\mu}d_{Rr})(\overline{e_{Ls}}i%
\overleftrightarrow{D_{\mu}}e_{Rt}),$$
$$\displaystyle C_{deD2}^{prst}=$$
$$\displaystyle-\frac{2\sqrt{2}}{m_{Z}}{1\over 3}s_{W}^{2}\delta^{pr}C_{eX}^{st},$$
$$\displaystyle{\cal O}_{ddD2}^{prst}=$$
$$\displaystyle(\overline{d_{Rp}}\gamma^{\mu}d_{Rr})(\overline{d_{Ls}}i%
\overleftrightarrow{D_{\mu}}d_{Rt}),$$
$$\displaystyle C_{ddD2}^{prst}=$$
$$\displaystyle-\frac{2\sqrt{2}}{m_{Z}}{1\over 3}s_{W}^{2}\delta^{pr}V^{*}_{xs}C%
_{dX}^{xt},$$
$$\displaystyle{\cal O}_{duD3}^{prst}=$$
$$\displaystyle(\overline{d_{Rp}}\gamma^{\mu}d_{Rr})(\overline{u_{Ls}}i%
\overleftrightarrow{D_{\mu}}u_{Rt}),$$
$$\displaystyle C_{duD3}^{prst}=$$
$$\displaystyle+\frac{2\sqrt{2}}{m_{Z}}{1\over 3}s_{W}^{2}\delta^{pr}C_{uX}^{st},$$
$$\displaystyle{\cal O}_{ueD2}^{prst}=$$
$$\displaystyle(\overline{u_{Rp}}\gamma^{\mu}u_{Rr})(\overline{e_{Ls}}i%
\overleftrightarrow{D_{\mu}}e_{Rt}),$$
$$\displaystyle C_{ueD2}^{prst}=$$
$$\displaystyle+\frac{2\sqrt{2}}{m_{Z}}{2\over 3}s_{W}^{2}\delta^{pr}C_{eX}^{st},$$
$$\displaystyle{\cal O}_{udD3}^{prst}=$$
$$\displaystyle(\overline{u_{Rp}}\gamma^{\mu}u_{Rr})(\overline{d_{Ls}}i%
\overleftrightarrow{D_{\mu}}d_{Rt}),$$
$$\displaystyle C_{udD3}^{prst}=$$
$$\displaystyle+\frac{2\sqrt{2}}{m_{Z}}{2\over 3}s_{W}^{2}\delta^{pr}V^{*}_{xs}C%
_{dX}^{xt},$$
$$\displaystyle{\cal O}_{uuD2}^{prst}=$$
$$\displaystyle(\overline{u_{Rp}}\gamma^{\mu}u_{Rr})(\overline{u_{Ls}}i%
\overleftrightarrow{D_{\mu}}u_{Rt}),$$
$$\displaystyle C_{uuD2}^{prst}=$$
$$\displaystyle-\frac{2\sqrt{2}}{m_{Z}}{2\over 3}s_{W}^{2}\delta^{pr}C_{uX}^{st}.$$
(20)
4 Low energy neutrino-photon interactions and ultraviolet completion
Among the dim-7 operators in LEFT, the most interesting might be those of the $\psi^{2}X^{2}$ type, which appear in both the $(\Delta L,\Delta B)=(0,0)$ and $(\Delta L,\Delta B)=(2,0)$ sectors. We first note that Ref. [38] listed the subset of dim-7 operators for the $b\rightarrow s$ transition, which however contains an identically vanishing operator $\mathcal{E}^{T}_{L,R}=\bar{b}\sigma^{\nu\rho}P_{L/R}s\,F_{\mu\nu}F^{\mu}_{~{}\rho}=0$. In this section we consider the low energy neutrino-photon ($\nu\gamma$) interactions in the $(\Delta L,\Delta B)=(2,0)$ sector. The leading terms appear at dimension 7:
$$\displaystyle{\cal L}_{\nu\gamma}^{\textrm{LNV}}$$
$$\displaystyle=$$
$$\displaystyle{\cal O}_{\nu F1}^{\alpha\beta}C_{\nu F1}^{\alpha\beta}+{\cal O}_%
{\nu F2}^{\alpha\beta}C_{\nu F2}^{\alpha\beta}+{\rm h.c.},$$
(21)
where the two operators are listed in table 2, and their Wilson coefficients are symmetric in the neutrino flavors $\alpha,~{}\beta$. We note in passing that in Ref. [39] the operators ${\cal O}_{\nu F1/2}$ and ${\cal O}_{\nu G1/2}$ have been used to study coherent elastic neutrino-nucleus scattering. Since the neutrino is electrically neutral, these interactions cannot originate directly from a tree-level matching to the first few high-dimension operators in SMEFT. Instead, they arise as a loop effect of the effective interactions between neutrinos and charged particles in LEFT, which can originate from a tree-level matching to SMEFT. In this work we content ourselves with effective $\nu\gamma$ interactions at energies below the mass of the lightest charged particle, i.e., the electron. We will see that the dominant contribution comes from the dim-6 operators [30] involving the electron:
$$\displaystyle{\cal L}_{\nu e}^{6}$$
$$\displaystyle=$$
$$\displaystyle{\cal O}_{e\nu 1}^{S,ee\alpha\beta}C_{e\nu 1}^{S,ee\alpha\beta}+{%
\cal O}_{e\nu 2}^{S,ee\alpha\beta}C_{e\nu 2}^{S,ee\alpha\beta}+{\cal O}_{e\nu}%
^{T,ee\alpha\beta}C_{e\nu}^{T,ee\alpha\beta}+{\rm h.c.},$$
(22)
where the Wilson coefficients are given in equation (13) in terms of those in SMEFT.
Contracting the two electron lines in any of the vertices in equation (22) and attaching two photons to the contracted electron line yields the effective interactions between two neutrinos and two photons as shown in figure 1, which at energies below the electron mass $m_{e}$ have the form of equation (21), with
$$C_{\nu F1}^{\alpha\beta}={1\over 12\pi m_{e}}\left(C_{e\nu 1}^{S,ee\alpha\beta%
}+C_{e\nu 2}^{S,ee\alpha\beta}\right),~{}~{}~{}C_{\nu F2}^{\alpha\beta}=-{i%
\over 8\pi m_{e}}\left(C_{e\nu 1}^{S,ee\alpha\beta}-C_{e\nu 2}^{S,ee\alpha%
\beta}\right).$$
(23)
The tensor interaction in equation (22) yields a vanishing result because of the Schouten identity,
$$g^{\alpha\beta}\epsilon^{\mu\nu\rho\sigma}+g^{\alpha\mu}\epsilon^{\nu\rho%
\sigma\beta}+g^{\alpha\nu}\epsilon^{\rho\sigma\beta\mu}+g^{\alpha\rho}\epsilon%
^{\sigma\beta\mu\nu}+g^{\alpha\sigma}\epsilon^{\beta\mu\nu\rho}=0,$$
(24)
which is indeed consistent with the absence in table 2 of a neutrino tensor bilinear coupled to a field strength squared. The $1/m_{e}$ factor in equation (23) is not surprising: it is analogous to the $t$-loop contribution to the $h\to\gamma\gamma$ decay amplitude in the heavy-top limit, where $1/m_{t}$ is cancelled by the top Yukawa coupling. There is an additional contribution to the Wilson coefficient $C_{\nu F1}$: when one $H$ in the dim-5 Weinberg operator ${\cal O}_{5}$ assumes its vev and the other $H$ field is connected to the two photons through the SM one-loop diagrams, $C_{\nu F1}$ gains a term proportional to $m_{\nu}/(v^{2}m_{h}^{2})$, which is suppressed by the neutrino mass $m_{\nu}$ and can be safely ignored. Parameterizing by $\Lambda_{\rm NP}^{-3}$ the SMEFT Wilson coefficients entering $C_{e\nu 1(2)}^{S,ee\alpha\beta}$ through the matching conditions in equation (13), one has roughly
$$C_{\nu F1(2)}^{\alpha\beta}\sim{v\over 4\pi m_{e}}\frac{1}{\Lambda_{\rm NP}^{3%
}},$$
(25)
which offers a huge enhancement factor of $\sim 10^{4}-10^{5}$ compared to the effect of a usual dim-7 operator.
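As a quick numerical cross-check of this estimate (a minimal sketch, assuming the standard inputs $v\simeq 246$ GeV and $m_{e}\simeq 0.511$ MeV, which are not spelled out above):

```python
# Rough check of the enhancement factor v/(4*pi*m_e) appearing in Eq. (25),
# relative to a usual dim-7 coefficient ~ Lambda_NP^{-3}.
# Inputs (standard, assumed): Higgs vev v and electron mass m_e in GeV.
import math

v = 246.0          # GeV, electroweak vev
m_e = 0.511e-3     # GeV, electron mass

enhancement = v / (4.0 * math.pi * m_e)
print(f"v/(4*pi*m_e) ~ {enhancement:.2e}")  # ~ 3.8e4, inside the quoted 1e4-1e5 range
assert 1e4 < enhancement < 1e5
```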
With the above enhancement in mind we calculate the cross sections for various $\nu\gamma$ scattering processes. The amplitudes are
$$\displaystyle{\cal A}(\gamma_{\lambda}(k)\nu_{\alpha}(p)\to\gamma_{\lambda^{%
\prime}}(k^{\prime})\bar{\nu}_{\beta}(p^{\prime}))$$
$$\displaystyle=2\alpha_{\rm em}\left[C_{\nu F1}^{\alpha\beta}\left(1-\lambda%
\lambda^{\prime}\right)+iC_{\nu F2}^{\alpha\beta}\left(\lambda-\lambda^{\prime%
}\right)\right](-t)^{3/2},$$
$$\displaystyle{\cal A}(\gamma_{\lambda}(k)\gamma_{\lambda^{\prime}}(k^{\prime})%
\to\nu_{\alpha}(p)\nu_{\beta}(p^{\prime}))$$
$$\displaystyle=2\alpha_{\rm em}\left[C_{\nu F1}^{\alpha\beta*}(1+\lambda\lambda%
^{\prime})+iC_{\nu F2}^{\alpha\beta*}(\lambda+\lambda^{\prime})\right]s^{3/2},$$
$$\displaystyle{\cal A}(\nu_{\alpha}(p)\nu_{\beta}(p^{\prime})\to\gamma_{\lambda%
}(k)\gamma_{\lambda^{\prime}}(k^{\prime}))$$
$$\displaystyle=2\alpha_{\rm em}\left[C_{\nu F1}^{\alpha\beta}(1+\lambda\lambda^%
{\prime})+iC_{\nu F2}^{\alpha\beta}(\lambda+\lambda^{\prime})\right]s^{3/2}.$$
(26)
Here $\lambda,~{}\lambda^{\prime}$ denote the helicities of the photons, $s=(k+k^{\prime})^{2}$, and $t=(k-k^{\prime})^{2}$. We have ignored the tiny masses of the neutrinos and explicitly evaluated their spinor wavefunctions. The crossing symmetry is manifest in the above amplitudes: the first and third amplitudes are related by $(s,\lambda^{\prime})\leftrightarrow(-t,-\lambda^{\prime})$ while the last two are related by $(\lambda,\lambda^{\prime})\leftrightarrow(-\lambda,-\lambda^{\prime})$ and complex conjugate. Denoting the photon energy by $\omega$ and the scattering angle by $\theta$ in the center of mass frame, the differential cross sections are,
$$\displaystyle{d\sigma(\nu_{\alpha}\gamma_{\lambda}\to\bar{\nu}_{\beta}\gamma_{%
\lambda^{\prime}})\over d\cos\theta}=$$
$$\displaystyle{\alpha_{\rm em}^{2}\omega^{4}\over 4\pi}\left|C_{\nu F1}^{\alpha%
\beta}\left(1-\lambda\lambda^{\prime}\right)+iC_{\nu F2}^{\alpha\beta}\left(%
\lambda-\lambda^{\prime}\right)\right|^{2}(1-\cos\theta)^{3},$$
$$\displaystyle{d\sigma(\gamma_{\lambda}\gamma_{\lambda^{\prime}}\to\nu_{\alpha}%
\nu_{\beta})\over d\cos\theta}=$$
$$\displaystyle{\alpha_{\rm em}^{2}\omega^{4}\over\pi}\left|C_{\nu F1}^{\alpha%
\beta}\left(1+\lambda\lambda^{\prime}\right)+iC_{\nu F2}^{\alpha\beta}\left(%
\lambda+\lambda^{\prime}\right)\right|^{2}{2\over 1+\delta_{\alpha\beta}},$$
$$\displaystyle{d\sigma(\nu_{\alpha}\nu_{\beta}\to\gamma_{\lambda}\gamma_{%
\lambda^{\prime}})\over d\cos\theta}=$$
$$\displaystyle{\alpha_{\rm em}^{2}\omega^{4}\over\pi}\left|C_{\nu F1}^{\alpha%
\beta}\left(1+\lambda\lambda^{\prime}\right)+iC_{\nu F2}^{\alpha\beta}\left(%
\lambda+\lambda^{\prime}\right)\right|^{2}.$$
(27)
Upon averaging (summing) over the initial (final) photon helicities, the total cross sections are,
$$\displaystyle\sigma(\nu_{\alpha}\gamma\to\bar{\nu}_{\beta}\gamma)=\frac{4%
\alpha_{\rm em}^{2}\omega^{4}}{\pi}\left(|C_{\nu F1}^{\alpha\beta}|^{2}+|C_{%
\nu F2}^{\alpha\beta}|^{2}\right),$$
$$\displaystyle\sigma(\gamma\gamma\to\nu_{\alpha}\nu_{\beta})=\sigma(\gamma\nu_{%
\alpha}\to\bar{\nu}_{\beta}\gamma){2\over 1+\delta_{\alpha\beta}},$$
$$\displaystyle\sigma(\nu_{\alpha}\nu_{\beta}\to\gamma\gamma)=4\sigma(\gamma\nu_%
{\alpha}\to\bar{\nu}_{\beta}\gamma).$$
(28)
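The total cross sections in equation (28) can be checked against the differential ones in equation (27): a minimal sketch (with arbitrary test values for the Wilson coefficients and the common prefactor $\alpha_{\rm em}^{2}\omega^{4}$ set to 1) that averages over the initial photon helicity, sums over the final one, and integrates over $\cos\theta$:

```python
# Consistency check: sigma(nu gamma -> nubar gamma) of Eq. (28) follows from
# dsigma/dcos(theta) of Eq. (27) after helicity average/sum and angular integration.
import math

C1, C2 = 0.3 + 0.7j, -0.2 + 0.4j   # arbitrary stand-ins for C_{nuF1}, C_{nuF2}
I_ang = 4.0                         # exact integral of (1 - cos)^3 over cos in [-1, 1]

total = 0.0
for lam in (+1, -1):                # average over initial helicity (factor 1/2)
    for lamp in (+1, -1):           # sum over final helicity
        amp2 = abs(C1 * (1 - lam * lamp) + 1j * C2 * (lam - lamp)) ** 2
        total += 0.5 * amp2 / (4 * math.pi) * I_ang

expected = 4.0 / math.pi * (abs(C1) ** 2 + abs(C2) ** 2)   # Eq. (28)
assert abs(total - expected) < 1e-12
```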
There are some salient features in our results when compared to their SM counterparts [40], i.e., $\nu\gamma\to\nu\gamma$, $\gamma\gamma\to\nu\bar{\nu}$, and $\nu\bar{\nu}\to\gamma\gamma$. First, for each pair of similar processes the cross sections vanish for the opposite pairing of photon helicities. For instance, $\nu\gamma_{\lambda}\to\bar{\nu}\gamma_{\lambda^{\prime}}$ here does not occur for identical helicities $\lambda=\lambda^{\prime}$, while $\nu\gamma_{\lambda}\to\nu\gamma_{\lambda^{\prime}}$ in the SM is absent for opposite helicities $\lambda=-\lambda^{\prime}$. The situation for the other two pairs of processes is just reversed. This is an interesting consequence of lepton number being violated or conserved: with one fermion fixed to be a left-handed neutrino in either the initial or final state, the second fermion, which is a left-handed neutrino (right-handed antineutrino) in the SM process, becomes a right-handed (left-handed) neutrino in the process considered here. Thus, in a sense, the flip or non-flip of a photon helicity offers a veto on Dirac or Majorana neutrinos. In addition, $\gamma\nu\to\gamma\bar{\nu}$ cannot take place in the forward direction, while $\gamma\gamma\to\nu\nu$ and $\nu\nu\to\gamma\gamma$ show a purely $s$-wave behavior; these features also differ from the SM processes. Second, our cross sections are proportional to $(v^{2}/m_{e}^{2})\Lambda_{\rm NP}^{-6}$, while the SM ones are typical one-loop processes of order $m_{W}^{-8}\ln^{2}(m_{W}^{2}/m_{e}^{2})$. This results in a different low-energy behavior of the cross sections: our processes behave as $\omega^{4}$, while the SM ones go as $\omega^{6}$. This power counting is consistent with the fact that the effective operators for $\nu\gamma$ interactions start at dimension 7 here and at dimension 8 in the SM. Numerically, in our case $\sigma(\gamma\nu_{\alpha}\to\bar{\nu}_{\beta}\gamma)/\sigma(\gamma\gamma\to\nu_{\alpha}\nu_{\beta})=1~{}(1/2)$ for $\nu_{\alpha}=\nu_{\beta}$ ($\nu_{\alpha}\neq\nu_{\beta}$), in contrast to the SM case $\sigma(\nu\gamma\to\nu\gamma)/\sigma(\gamma\gamma\to\nu\bar{\nu})\sim 15$.
To get some feel about the orders of magnitude of various processes here and in SM, we make a simplifying assumption in our matching conditions shown in equation (13), i.e., the Wilson coefficient $C_{LeHD}^{pr}=0$ in SMEFT, so that the Wilson coefficient $C_{e\nu 2}^{S,prst}=0$ in LEFT while $C_{e\nu 1}^{S,prst}$ gains a contribution from the Wilson coefficient $C_{\bar{e}LLLH}^{prst}$ in SMEFT. To compare with the SM processes, we consider the case with $\nu_{\alpha}=\nu_{\beta}\equiv\nu$, so that effectively we have from equations (13) and (23),
$$C_{\nu F1}^{11}=-\frac{\sqrt{2}v}{16\pi m_{e}}C_{\bar{e}LLLH}^{1111},~{}~{}~{}%
C_{\nu F2}^{11}=i\frac{3\sqrt{2}v}{32\pi m_{e}}C_{\bar{e}LLLH}^{1111},$$
(29)
where the superscript 1 refers to the first generation neutrino and charged lepton. Parameterizing $|C_{\bar{e}LLLH}^{1111}|=\Lambda_{\rm NP}^{-3}$, this gives
$$\sigma(\gamma\nu\to\bar{\nu}\gamma)={13\alpha_{\rm em}^{2}\over 128\pi^{3}}{v^%
{2}\over m_{e}^{2}}{\omega^{4}\over\Lambda_{\rm NP}^{6}}\approx 1.1\times 10^{%
-15}\left({\omega\over m_{e}}\right)^{4}\left({{\rm TeV}\over\Lambda_{\rm NP}}%
\right)^{6}\rm{fb},$$
(30)
and $\sigma(\gamma\gamma\to\nu\nu)=\sigma(\gamma\nu\to\bar{\nu}\gamma)$,
$\sigma(\nu\nu\to\gamma\gamma)=4\sigma(\gamma\nu\to\bar{\nu}\gamma)$. The above cross section is depicted in figure 2 as a function of $\omega/m_{e}$ for three values of the new physics scale, $\Lambda_{\rm NP}=1,~{}10,~{}100~{}{\rm TeV}$. Also shown are the SM cross sections for $\gamma\nu\to\gamma\nu$, $\gamma\gamma\to\nu\bar{\nu}$ [40], and $\gamma\nu\to\gamma\gamma\nu$ [41]. The last process arises from dim-10 operators whose Wilson coefficients are significantly enhanced at one loop by a factor of $1/m_{e}^{4}$, and its cross section behaves as $\omega^{10}$. As one can see from the figure, the LNV $\nu\gamma$ interactions generically yield a much larger cross section than the SM interactions, even for a high new physics scale $\Lambda_{\rm NP}$. We will systematically explore its possible implications in cosmology in future work.
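As a sanity check of the numerical prefactor in equation (30), here is a minimal sketch assuming the standard inputs $\alpha_{\rm em}=1/137.036$, $v=246$ GeV, $m_{e}=0.511$ MeV, and the conversion $1~{\rm GeV}^{-2}=3.894\times 10^{11}$ fb:

```python
# Numerical check of Eq. (30) at omega = m_e and Lambda_NP = 1 TeV.
import math

alpha = 1.0 / 137.036            # fine-structure constant (assumed)
v, m_e = 246.0, 0.511e-3         # GeV (assumed standard values)
Lam = 1000.0                     # GeV, new physics scale (1 TeV)
omega = m_e                      # photon energy in units of m_e

sigma_GeV = (13 * alpha**2 / (128 * math.pi**3)
             * (v / m_e) ** 2 * omega**4 / Lam**6)   # in GeV^-2
sigma_fb = sigma_GeV * 3.894e11  # GeV^-2 -> fb
print(f"sigma(gamma nu -> nubar gamma) ~ {sigma_fb:.2e} fb")  # ~ 1.1e-15 fb, as quoted
assert 0.9e-15 < sigma_fb < 1.3e-15
```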
Before we conclude this section we illustrate by examples how the dim-7 operators ${\cal O}_{\bar{e}LLLH}^{prst}$ in SMEFT that are called for in the above analysis could be generated by ultraviolet completion. The possible tree-level topologies are classified in figure 3. While the topology (a) only involves new scalar fields, the others require both scalar and fermion fields. We notice that gauge anomaly cancellation may demand vector-like fermions which are usually easy to arrange. The electroweak gauge symmetry $SU(2)_{L}\times U(1)_{Y}$ at each vertex then gives two possible solutions to the quantum numbers of the heavy fields in each topology, which are:
$$\displaystyle\textrm{model (a1)}:~{}\Sigma=(3,-1),~{}~{}\varphi=\Big{(}2,-{1%
\over 2}\Big{)},$$
$$\displaystyle\textrm{model (a2)}:~{}\Sigma=(1,-1),~{}~{}\varphi=\Big{(}2,-{1%
\over 2}\Big{)},$$
$$\displaystyle\textrm{model (b1)}:~{}S=(3,-1),~{}~{}\psi=\Big{(}2,-{3\over 2}%
\Big{)},$$
$$\displaystyle\textrm{model (b2)}:~{}S=(1,-1),~{}~{}\psi=\Big{(}2,-{3\over 2}%
\Big{)},$$
$$\displaystyle\textrm{model (c1)}:~{}S=(3,-1),~{}~{}\psi=(3,0),$$
$$\displaystyle\textrm{model (c2)}:~{}S=(1,-1),~{}~{}\psi=(1,0),$$
$$\displaystyle\textrm{model (d1)}:~{}S=\Big{(}2,-{1\over 2}\Big{)},~{}\psi=(3,0),$$
$$\displaystyle\textrm{model (d2)}:~{}S=\Big{(}2,-{1\over 2}\Big{)},~{}\psi=(1,0).$$
(31)
Let us consider model (a2) as an example. The relevant new terms in the Lagrangian are,
$$\displaystyle{\cal L}\supset Y_{\Sigma,pr}\epsilon_{ij}\overline{L^{C,i}_{p}}L%
^{j}_{r}\Sigma^{\dagger}+\lambda_{\Sigma\varphi}\Sigma\varphi^{\dagger}H+Y_{%
\varphi,pr}\epsilon_{ij}\overline{e_{p}}L^{i}_{r}\varphi^{j}+{\rm h.c.},$$
(32)
where $Y_{\Sigma}=-Y_{\Sigma}^{\rm T}$ and $Y_{\varphi}$ are generally complex Yukawa coupling matrices in lepton flavor space, and $\lambda_{\Sigma\varphi}$ is a triple scalar coupling. The diagram (a) in figure 3 and its crossings then lead to the effective interaction $C_{\bar{e}LLLH}^{prst}{\cal O}_{\bar{e}LLLH}^{prst}$. But before we present the Wilson coefficients we must first fix a set of independent operators among ${\cal O}_{\bar{e}LLLH}^{prst}$, which obey nontrivial flavor relations [19]:
$$\displaystyle{\cal O}_{\bar{e}LLLH}^{prst}+{\cal O}_{\bar{e}LLLH}^{ptsr}={\cal O}_{\bar{e}LLLH}^{psrt}+{\cal O}_{\bar{e}LLLH}^{ptrs}={\cal O}_{\bar{e}LLLH}^{pstr}+{\cal O}_{\bar{e}LLLH}^{prts}.$$
(33)
Note that the second equality is actually not independent but can be obtained from the first one, and we include it only for clarity. With three generations, suppose we choose the set to be,
$${\cal O}_{\bar{e}LLLH}^{prrr},~{}{\cal O}_{\bar{e}LLLH}^{prss},~{}{\cal O}_{%
\bar{e}LLLH}^{pssr},~{}{\cal O}_{\bar{e}LLLH}^{p123},~{}{\cal O}_{\bar{e}LLLH}%
^{p132},~{}{\cal O}_{\bar{e}LLLH}^{p213},~{}{\cal O}_{\bar{e}LLLH}^{p231},$$
(34)
where $s\neq r$ assumes values $1,~{}2,~{}3$, then the redundant operators are,
$$\displaystyle{\cal O}^{p321}_{\overline{e}LLLH}$$
$$\displaystyle={\cal O}^{p231}_{\overline{e}LLLH}+{\cal O}^{p132}_{\overline{e}%
LLLH}-{\cal O}^{p123}_{\overline{e}LLLH},$$
$$\displaystyle{\cal O}^{p312}_{\overline{e}LLLH}$$
$$\displaystyle={\cal O}^{p231}_{\overline{e}LLLH}+{\cal O}^{p132}_{\overline{e}%
LLLH}-{\cal O}^{p213}_{\overline{e}LLLH},$$
$$\displaystyle{\cal O}^{prsr}_{\overline{e}LLLH}$$
$$\displaystyle={1\over 2}({\cal O}^{prss}_{\overline{e}LLLH}+{\cal O}^{pssr}_{%
\overline{e}LLLH}).$$
(35)
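These redundancies can be checked mechanically. The sketch below assumes nothing beyond the first flavor relation in equation (33) (generation labels 1, 2, 3 are mapped to indices 0, 1, 2): it encodes the relation for all index choices at fixed $p$, counts the independent flavor structures, and verifies that the first two relations of equation (35) lie in the span of the constraints:

```python
# Flavor relations of O_{eLLLH}^{prst} at fixed p: encode
# O^{rst} + O^{tsr} - O^{srt} - O^{trs} = 0 for all r,s,t in 3 generations.
import itertools
import numpy as np

def idx(r, s, t):
    return 9 * r + 3 * s + t          # flatten (r,s,t), each in {0,1,2}

rows = []
for r, s, t in itertools.product(range(3), repeat=3):
    v = np.zeros(27)
    v[idx(r, s, t)] += 1; v[idx(t, s, r)] += 1
    v[idx(s, r, t)] -= 1; v[idx(t, r, s)] -= 1
    rows.append(v)
R = np.array(rows)

rank = np.linalg.matrix_rank(R)
n_indep = 27 - rank
print("independent flavor structures per p:", n_indep)  # 19, the size of the set in Eq. (34)
assert n_indep == 19

def in_row_space(v):
    return np.linalg.matrix_rank(np.vstack([R, v])) == rank

# Eq. (35), first relation:  O^{321} = O^{231} + O^{132} - O^{123}
d1 = np.zeros(27)
d1[idx(2, 1, 0)] = 1; d1[idx(1, 2, 0)] = -1; d1[idx(0, 2, 1)] = -1; d1[idx(0, 1, 2)] = 1
# Eq. (35), second relation: O^{312} = O^{231} + O^{132} - O^{213}
d2 = np.zeros(27)
d2[idx(2, 0, 1)] = 1; d2[idx(1, 2, 0)] = -1; d2[idx(0, 2, 1)] = -1; d2[idx(1, 0, 2)] = 1
assert in_row_space(d1) and in_row_space(d2)
```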
By integrating out the heavy particles from the Lagrangian, or by first computing amplitudes and then recasting them as effective interactions, we find, upon applying Fierz and other algebraic identities, the following Wilson coefficients for the set of independent operators in equation (34):
$$\displaystyle C_{\bar{e}LLLH}^{prrr}$$
$$\displaystyle=0,$$
$$\displaystyle C_{\bar{e}LLLH}^{prss}$$
$$\displaystyle=-{\lambda_{\Sigma\varphi}\over m_{\Sigma}^{2}m_{\varphi}^{2}}{%
\cal U}_{ps;rs}=-C_{\bar{e}LLLH}^{pssr},$$
$$\displaystyle C_{\bar{e}LLLH}^{p123}$$
$$\displaystyle=2{\lambda_{\Sigma\varphi}\over m_{\Sigma}^{2}m_{\varphi}^{2}}%
\big{[}{\cal U}_{p1;32}-{\cal U}_{p3;12}\big{]},$$
$$\displaystyle C_{\bar{e}LLLH}^{p132}$$
$$\displaystyle=2{\lambda_{\Sigma\varphi}\over m_{\Sigma}^{2}m_{\varphi}^{2}}{%
\cal U}_{p1;23},$$
$$\displaystyle C_{\bar{e}LLLH}^{p213}$$
$$\displaystyle=2{\lambda_{\Sigma\varphi}\over m_{\Sigma}^{2}m_{\varphi}^{2}}%
\big{[}{\cal U}_{p2;31}-{\cal U}_{p3;21}\big{]},$$
$$\displaystyle C_{\bar{e}LLLH}^{p231}$$
$$\displaystyle=2{\lambda_{\Sigma\varphi}\over m_{\Sigma}^{2}m_{\varphi}^{2}}{%
\cal U}_{p2;13},$$
(36)
where $m_{\Sigma,\varphi}$ are the masses of the new heavy scalars and the shortcut ${\cal U}_{pr;st}=(Y_{\varphi})_{pr}(Y_{\Sigma})_{st}$ is used. With the matching condition in equation (13), we obtain the corresponding LEFT Wilson coefficient in equation (22):
$$\displaystyle C_{e\nu 1}^{S,ee\alpha\beta}=$$
$$\displaystyle{\sqrt{2}v\lambda_{\Sigma\varphi}\over 4m_{\Sigma}^{2}m_{\varphi}%
^{2}}\left[(Y_{\varphi})_{1\alpha}(Y_{\Sigma})_{1\beta}+(Y_{\varphi})_{1\beta}%
(Y_{\Sigma})_{1\alpha}\right].$$
(37)
It is interesting that, while $C_{e\nu 1}^{S,eeee}=0$ due to the antisymmetry of the $Y_{\Sigma}$ matrix, $C_{e\nu 1}^{S,ee\alpha\beta}$ generically does not vanish when either (or both) of the neutrino indices $\alpha,~{}\beta$ refers to the second or third generation.
As a second example we consider model (d2) which introduces three vector-like heavy singlet fermions $\psi$ of mass matrix $M_{\psi}$ and one doublet scalar $S$ of mass $m_{S}$. The relevant new Yukawa couplings are,
$$\displaystyle{\cal L}_{\rm yuk}=$$
$$\displaystyle(Y_{H\psi})_{pr}\epsilon_{ij}\bar{\psi}_{p}L_{r}^{i}H^{j}+(Y_{S%
\psi})_{pr}\delta_{ij}\overline{L^{C,i}_{p}}\psi_{r}S^{*j}+(Y_{Se})_{pr}%
\epsilon_{ij}\overline{e_{p}}L_{r}^{i}S^{j}+{\rm h.c.},$$
(38)
where $Y_{H\psi},~{}Y_{S\psi},~{}Y_{Se}$ are complex Yukawa coupling matrices in generation space. Choosing the same set of independent operators in equation (34), the diagram (d) in figure 3 leads to the tree-level result:
$$\displaystyle C_{\bar{e}LLLH}^{prrr}={1\over m_{S}^{2}}{\cal V}_{pr;rr},$$
$$\displaystyle C_{\bar{e}LLLH}^{prss}={1\over m_{S}^{2}}\Big{[}{\cal V}_{pr;ss}%
+{1\over 2}{\cal V}_{ps;rs}\Big{]},$$
$$\displaystyle C_{\bar{e}LLLH}^{pssr}={1\over m_{S}^{2}}\Big{[}{\cal V}_{ps;sr}%
+{1\over 2}{\cal V}_{ps;rs}\Big{]},$$
$$\displaystyle C_{\bar{e}LLLH}^{p123}={1\over m_{S}^{2}}\big{[}{\cal V}_{p1;23}%
-{\cal V}_{p3;21}\big{]},$$
$$\displaystyle C_{\bar{e}LLLH}^{p132}={1\over m_{S}^{2}}\big{[}{\cal V}_{p1;32}%
+{\cal V}_{p3;12}+{\cal V}_{p3;21}\big{]},$$
$$\displaystyle C_{\bar{e}LLLH}^{p213}={1\over m_{S}^{2}}\big{[}{\cal V}_{p2;13}%
-{\cal V}_{p3;12}\big{]},$$
$$\displaystyle C_{\bar{e}LLLH}^{p231}={1\over m_{S}^{2}}\big{[}{\cal V}_{p2;31}%
+{\cal V}_{p3;12}+{\cal V}_{p3;21}\big{]},$$
(39)
with the shortcut ${\cal V}_{pr;st}=(Y_{Se})_{pr}(Y_{S\psi}M_{\psi}^{-1}Y_{H\psi})_{st}$.
Thus the LEFT Wilson coefficient in equation (22) becomes
$$\displaystyle C_{e\nu 1}^{S,eeee}=-{3\sqrt{2}v\over 4m_{S}^{2}}(Y_{Se})_{11}(Y_{S\psi}M_{\psi}^{-1}Y_{H\psi})_{11}.$$
(40)
We see that in this case $C_{e\nu 1}^{S,eeee}$ with identical lepton flavors survives.
5 Conclusion
We have established the basis of dim-7 operators in LEFT, the low energy effective field theory for the SM particles excluding the weak gauge bosons, the Higgs boson, and the top quark. These operators fall into four sectors according to their baryon and lepton numbers. Including Hermitian conjugates of the operators, there are 3168 operators with $(\Delta L,\Delta B)=(0,0)$, 750 operators with $(\Delta L,\Delta B)=(\pm 2,0)$, 588 operators with $(\Delta L,\Delta B)=(\pm 1,\mp 1)$, and 712 operators with $(\Delta L,\Delta B)=(\pm 1,\pm 1)$. We have performed a tree-level matching calculation relating the Wilson coefficients of SMEFT, defined above the electroweak scale, to those of LEFT, which on the one hand incorporates new terms due to dim-7 operators in SMEFT and on the other hand matches onto dim-7 operators in LEFT. As a phenomenological application we have calculated the effective neutrino-photon interaction due to dim-7 operators in LEFT and found several interesting features compared to the SM case. The cross sections for neutrino-photon scattering have a different correlation between the photon helicities. The interaction arises at one loop from dim-6 operators in LEFT and is significantly enhanced at low energy by an inverse electron mass. As a consequence, the cross sections exceed their SM counterparts even for a new physics scale as large as 100 TeV. Finally, we have illustrated with example models how ultraviolet completion could generate the dim-7 SMEFT operators behind these dim-6 operators.
Acknowledgement
This work was supported in part by the Grants No. NSFC-11975130, No. NSFC-11575089, by The National Key Research and Development Program of China under Grant No. 2017YFA0402200, by the CAS Center for Excellence in Particle Physics (CCEPP). Xiao-Dong Ma is supported by the MOST (Grant No. MOST 106-2112-M-002-003-MY3).
References
[1]
S. Weinberg,
Phys. Rev. Lett. 43, 1566 (1979).
[2]
W. Buchmuller and D. Wyler,
Nucl. Phys. B 268, 621 (1986).
[3]
B. Grzadkowski, M. Iskrzynski, M. Misiak and J. Rosiek,
JHEP 1010, 085 (2010)
[arXiv:1008.4884 [hep-ph]].
[4]
L. Lehman,
Phys. Rev. D 90, 125023 (2014)
[arXiv:1410.4193 [hep-ph]].
[5]
Y. Liao and X. D. Ma,
JHEP 1611, 043 (2016)
[arXiv:1607.07309 [hep-ph]].
[6]
L. Lehman and A. Martin,
JHEP 1602, 081 (2016)
[arXiv:1510.00372 [hep-ph]].
[7]
B. Henning, X. Lu, T. Melia and H. Murayama,
JHEP 1708, 016 (2017)
[arXiv:1512.03433 [hep-ph]].
[8]
H. L. Li, Z. Ren, J. Shu, M. L. Xiao, J. H. Yu and Y. H. Zheng,
arXiv:2005.00008 [hep-ph].
[9]
C. W. Murphy,
arXiv:2005.00059 [hep-ph].
[10]
K. S. Babu, C. N. Leung and J. T. Pantaleone,
Phys. Lett. B 319, 191 (1993)
[hep-ph/9309223].
[11]
S. Antusch, M. Drees, J. Kersten, M. Lindner and M. Ratz,
Phys. Lett. B 519, 238 (2001)
[hep-ph/0108005].
[12]
C. Grojean, E. E. Jenkins, A. V. Manohar and M. Trott,
JHEP 1304, 016 (2013)
[arXiv:1301.2588 [hep-ph]].
[13]
J. Elias-Miro, J. R. Espinosa, E. Masso and A. Pomarol,
JHEP 1308, 033 (2013)
[arXiv:1302.5661 [hep-ph]].
[14]
J. Elias-Miro, J. R. Espinosa, E. Masso and A. Pomarol,
JHEP 1311, 066 (2013)
[arXiv:1308.1879 [hep-ph]].
[15]
E. E. Jenkins, A. V. Manohar and M. Trott,
JHEP 1310, 087 (2013)
[arXiv:1308.2627 [hep-ph]].
[16]
E. E. Jenkins, A. V. Manohar and M. Trott,
JHEP 1401, 035 (2014)
[arXiv:1310.4838 [hep-ph]].
[17]
R. Alonso, E. E. Jenkins, A. V. Manohar and M. Trott,
JHEP 1404, 159 (2014)
[arXiv:1312.2014 [hep-ph]].
[18]
R. Alonso, H. M. Chang, E. E. Jenkins, A. V. Manohar and B. Shotwell,
Phys. Lett. B 734, 302 (2014)
[arXiv:1405.0486 [hep-ph]].
[19]
Y. Liao and X. D. Ma,
JHEP 1903, 179 (2019)
[arXiv:1901.10302 [hep-ph]].
[20]
B. Henning, X. Lu, T. Melia and H. Murayama,
JHEP 1710, 199 (2017)
[arXiv:1706.08520 [hep-th]].
[21]
B. Gripaios and D. Sutherland,
JHEP 1901, 128 (2019)
[arXiv:1807.07546 [hep-ph]].
[22]
J. C. Criado,
Eur. Phys. J. C 79, 256 (2019)
[arXiv:1901.03501 [hep-ph]].
[23]
R. M. Fonseca,
Phys. Rev. D 101, 035040 (2020)
[arXiv:1907.12584 [hep-ph]].
[24]
C. B. Marinissen, R. Rahn and W. J. Waalewijn,
arXiv:2004.09521 [hep-ph].
[25]
A. Aparici, K. Kim, A. Santamaria and J. Wudka,
Phys. Rev. D 80, 013010 (2009)
[arXiv:0904.3244 [hep-ph]].
[26]
F. del Aguila, S. Bar-Shalom, A. Soni and J. Wudka,
Phys. Lett. B 670, 399 (2009)
[arXiv:0806.0876 [hep-ph]].
[27]
S. Bhattacharya and J. Wudka,
Phys. Rev. D 94, 055022 (2016)
[arXiv:1505.05264 [hep-ph]].
[28]
Y. Liao and X. D. Ma,
Phys. Rev. D 96, 015012 (2017)
[arXiv:1612.04527 [hep-ph]].
[29]
G. Buchalla, A. J. Buras and M. E. Lautenbacher,
Rev. Mod. Phys. 68, 1125 (1996)
[hep-ph/9512380].
[30]
E. E. Jenkins, A. V. Manohar and P. Stoffer,
JHEP 1803, 016 (2018)
[arXiv:1709.04486 [hep-ph]].
[31]
W. Dekens and P. Stoffer,
arXiv:1908.05295 [hep-ph].
[32]
M. Chala and A. Titov,
arXiv:2001.07732 [hep-ph].
[33]
T. Li, X. D. Ma and M. A. Schmidt,
arXiv:2005.01543 [hep-ph].
[34]
E. E. Jenkins, A. V. Manohar and P. Stoffer,
JHEP 1801, 084 (2018)
[arXiv:1711.05270 [hep-ph]].
[35]
V. Cirigliano, W. Dekens, J. de Vries, M. Graesser and E. Mereghetti,
JHEP 1712, 082 (2017)
[arXiv:1708.09390 [hep-ph]].
[36]
Y. Liao, X. Ma and H. Wang,
JHEP 2001, 127 (2020)
[arXiv:1909.06272 [hep-ph]].
[37]
Y. Liao, X. Ma and H. Wang,
JHEP 2003, 120 (2020)
[arXiv:2001.07378 [hep-ph]].
[38]
G. Chalons and F. Domingo,
Phys. Rev. D 89, 034004 (2014)
[arXiv:1303.6515 [hep-ph]].
[39]
W. Altmannshofer, M. Tammaro and J. Zupan,
JHEP 1909, 083 (2019)
[arXiv:1812.02778 [hep-ph]].
[40]
D. A. Dicus and W. W. Repko,
Phys. Rev. D 48, 5106 (1993)
[arXiv:hep-ph/9305284 [hep-ph]].
[41]
D. A. Dicus and W. W. Repko,
Phys. Rev. Lett. 79, 569 (1997)
[arXiv:hep-ph/9703210 [hep-ph]]. |
Novel spectral broadening from
vector--axial-vector mixing in dense matter
(Talk given by M. Harada at the
YITP workshop on
“Thermal Quantum Field Theory and Their Applications 2009”,
September 3-5, 2009, Yukawa Institute, Kyoto, Japan.
This talk is based on the work done in Ref. VAmix-dense.)
Masayasu Harada
Department of Physics, Nagoya University,
Nagoya, 464-8602, Japan
Chihiro Sasaki
Physik-Department,
Technische Universität München,
D-85747 Garching, Germany
Abstract
In this write-up we summarize the main results of
our recent analysis of the mixing
between the transverse $\rho$ and $a_{1}$ mesons
through a set of $\omega\rho a_{1}$-type interactions
in dense baryonic matter.
In that analysis we showed that
a clear enhancement of the
vector spectral function appears below $\sqrt{s}=m_{\rho}$
for small three-momenta of the $\rho$ meson,
so that the vector spectrum exhibits broadening.
In-medium modifications of hadrons have been extensively
explored in the context of the chiral dynamics of QCD review ; rapp .
Due to the interaction with pions in the heat bath, the vector
and axial-vector current correlators mix.
At low temperatures or densities a low-energy theorem based on
chiral symmetry describes this mixing (V-A mixing) theorem .
Its effects on the thermal vector spectral function have been
studied through the theorem vamix , using chiral reduction
formulas based on a virial expansion chreduction ,
and near the critical temperature in a chiral effective field theory
involving the vector and axial-vector mesons as well as the
pion our .
As a novel effect at finite baryon density,
it has been shown in a holographic QCD model that
a Chern-Simons term leads to a mixing between the vector and
axial-vector fields hqcd .
Unlike at zero density, the V-A mixing at finite density
appears in the tree-level Lagrangian.
This mixing modifies the dispersion relation of the transverse
polarizations and affects the in-medium current correlation
functions independently of the specific model dynamics.
In Ref. VAmix-dense ,
we focused on the V-A mixing at tree level and its consequence
on
the in-medium spectral functions which are the main input
to the experimental observables.
We showed that
the mixing produces
a clear enhancement of the vector spectral function
below $\sqrt{s}=m_{\rho}$,
and that
the vector spectral function
is broadened due to the mixing.
We also discussed its relevance to dilepton measurements.
At finite baryon density a system preserves parity
but violates charge-conjugation invariance.
Chiral Lagrangians thus in general contain
the term
$${\cal L}_{\rho a_{1}}=2C\,\epsilon^{0\nu\lambda\sigma}\mbox{tr}\left[\partial_%
{\nu}V_{\lambda}\cdot A_{\sigma}{}+\partial_{\nu}A_{\lambda}\cdot V_{\sigma}%
\right]\,,$$
(1)
for the vector $V^{\mu}$ and axial-vector $A^{\mu}$ mesons with
the total anti-symmetric tensor $\epsilon^{0123}=1$ and a
parameter $C$.
This mixing results in the dispersion relation hqcd
$$p_{0}^{2}-\bar{p}^{2}=\frac{1}{2}\left[m_{\rho}^{2}+m_{a_{1}}^{2}\pm\sqrt{(m_{%
a_{1}}^{2}-m_{\rho}^{2})^{2}{}+16C^{2}\bar{p}^{2}}\right]\,,$$
(2)
which describes the propagation of a mixture of the transverse $\rho$
and $a_{1}$ mesons with non-vanishing three-momentum $|\vec{p}|=\bar{p}$.
The longitudinal polarizations, on the other hand,
follow the standard dispersion
relation, $p_{0}^{2}-\bar{p}^{2}=m_{\rho,a_{1}}^{2}$.
In the limit $\bar{p}\to 0$, where the mixing vanishes, the
lower sign in Eq. (2) gives $p_{0}=m_{\rho}$ and the upper sign
gives $p_{0}=m_{a_{1}}$. In the following, we call the mode following
the dispersion relation with the lower sign in Eq. (2)
“the $\rho$ meson”,
and that with the upper sign “the $a_{1}$ meson”.
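A minimal numerical sketch of the two branches of Eq. (2), using the standard vacuum masses $m_{\rho}=0.775$ GeV and $m_{a_{1}}=1.26$ GeV (assumed here, not fixed in the text), confirms the $\bar{p}\to 0$ limits and the large-$\bar{p}$ gap $\pm C$:

```python
# Transverse dispersion relations of Eq. (2): the two branches p0(pbar) of the
# mixed transverse rho/a1 system, for mixing strength C (all quantities in GeV).
import math

m_rho, m_a1 = 0.775, 1.26   # assumed standard vacuum masses

def p0(pbar, C, sign):
    """Energy of the transverse mode; sign=-1 -> 'rho', sign=+1 -> 'a1'."""
    root = math.sqrt((m_a1**2 - m_rho**2) ** 2 + 16 * C**2 * pbar**2)
    return math.sqrt(pbar**2 + 0.5 * (m_rho**2 + m_a1**2 + sign * root))

# As pbar -> 0 the mixing switches off and the branches reduce to m_rho, m_a1:
assert abs(p0(0.0, 1.0, -1) - m_rho) < 1e-12
assert abs(p0(0.0, 1.0, +1) - m_a1) < 1e-12
# For C = 1 GeV the rho branch is pushed well below the free dispersion:
assert p0(0.5, 1.0, -1) < math.sqrt(0.5**2 + m_rho**2)
# At very large pbar the transverse branches approach pbar +/- C:
assert abs(p0(50.0, 1.0, +1) - (50.0 + 1.0)) < 5e-3
```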
The mixing strength $C$ in Eq. (1) can be
estimated assuming $\omega$ dominance,
as follows:
the gauged Wess-Zumino-Witten terms in
an effective chiral Lagrangian include
the $\omega$-$\rho$-$a_{1}$ term kaiser , which leads to
the mixing term
$${\cal L}_{\omega\rho a_{1}}=g_{\omega\rho a_{1}}\langle\omega_{0}\rangle%
\epsilon^{0\nu\lambda\sigma}\mbox{tr}\left[\partial_{\nu}V_{\lambda}\cdot A_{%
\sigma}{}+\partial_{\nu}A_{\lambda}\cdot V_{\sigma}\right]\,,$$
(3)
where the $\omega$ field is replaced with its expectation value
given by $\langle\omega_{0}\rangle=g_{\omega NN}\cdot n_{B}/m_{\omega}^{2}$.
With empirical numbers one finds
$C=g_{\omega\rho a_{1}}\langle\omega_{0}\rangle\simeq 0.1$ GeV
at normal nuclear matter density.
As we will show below, this is too small to be of importance
in the correlation functions.
In a holographic QCD approach, on the other hand,
the effects from an infinite tower of the $\omega$-type vector
mesons are summed up to give
$C\simeq 1\,\mbox{GeV}\cdot(n_{B}/n_{0})$
with normal nuclear matter density $n_{0}=0.16$ fm${}^{-3}$ hqcd .
In the following we
assume an actual value of $C$ in QCD
in the range $0.1<C<1$ GeV.
The importance of the higher Kaluza-Klein
(KK) modes, even in vacuum, can be seen in the
context of holographic QCD in the pion electromagnetic
form factor at the photon on-shell point:
it is saturated by the lowest four vector mesons in a top-down
holographic QCD model ss ; matsuzaki2 .
In a hot and dense environment those higher members are modified,
and their masses might decrease somewhat, as indicated by an
in-medium holographic model sstem . This might provide
a strong V-A mixing $C>0.1$ GeV in three-color QCD,
and the dilepton measurements may give a good testing ground.
In Fig. 1, we show the dispersion
relations (2) for the transverse modes, together
with those for the longitudinal modes, for $C=1$ and
$0.5$ GeV.
This shows that, when $C=0.5$ GeV,
there are only small changes for both the $\rho$ and $a_{1}$ mesons,
while there is a substantial change for the $\rho$ meson when $C=1$ GeV.
For very large $\bar{p}$ the longitudinal and transverse
dispersions are in parallel with a finite gap, $\pm C$.
In Fig. 2,
we plot the integrated spectrum over three momentum,
which is a main ingredient in dilepton production rates.
Figure 2 (left) shows a clear enhancement of the spectrum
below $\sqrt{s}=m_{\rho}$ due to the mixing.
This enhancement is strongly
suppressed when the $\rho$ meson moves with a large three-momentum,
as shown in Fig. 2 (right). The upper
bump then emerges more prominently and becomes a clear indication
of the in-medium effect from the $a_{1}$ via the mixing.
As an application of the above in-medium spectrum, we calculate
the production rate of a lepton pair emitted from dense
matter through a decaying virtual photon.
Figure 3 presents the integrated rate at $T=0.1$ GeV
for $C=1$ GeV.
One clearly observes a strong three-momentum dependence and an
enhancement below $\sqrt{s}=m_{\rho}$ due to the Bose
distribution function, which results in a strong spectral broadening.
The total rate is mostly governed by the spectrum at
low momenta $\bar{p}<0.5$ GeV because of the large mixing parameter $C$.
When the density is decreased,
the mixing effect becomes irrelevant, and
consequently the in-medium effect in the low-$\sqrt{s}$ region is reduced
compared with that at higher density.
Calculations performed in hadronic many-body theory in fact
show that the $\rho$ spectral function at low momentum
carries details of the medium modifications riek .
One may have a chance to observe this in heavy-ion collisions
with suitable low-momentum binning at J-PARC, GSI/FAIR and the RHIC
low-energy running.
It is straightforward to introduce the analogous V-A mixings
between $\omega$ and $f_{1}(1285)$ and between $\phi$ and $f_{1}(1420)$.
In Fig. 4 we plot the integrated rate at $T=0.1$ GeV
for several values of the mixing strength $C$, which are phenomenological options.
One observes that the enhancement below $m_{\rho}$ is suppressed
with decreasing mixing strength. The enhancement forms a broad bump
in the low-$\sqrt{s}$ region, and its maximum moves toward $m_{\rho}$.
Similarly, some contributions are seen just below $m_{\phi}$.
This effect starts at threshold $\sqrt{s}=2m_{K}$.
A self-consistent calculation of the spectrum in dense medium
would provide a smooth change, eventually broadening the $\phi$
meson peak somewhat.
Finally, we remark that
the importance of the mixing effect studied here
depends on the coupling strength $C$.
Holographic QCD predicts an extremely strong
mixing $C\sim 1$ GeV at $n_{B}=n_{0}$ which leads to
vector meson condensation at $n_{B}\sim 1.1\,n_{0}$ hqcd .
This may be excluded by known properties of nuclear
matter and therefore in reality the strength $C$ will be
smaller. We have discussed a possible range of $C$ to be
$0.1<C<1$ GeV based on higher excitations and their in-medium
modifications.
The parameter $C$ carries an unknown density dependence.
This will have to be determined in an elaborate treatment of
hadronic matter along with the underlying QCD dynamics.
If $C\sim 0.1$ GeV at $n_{0}$ is preferred, as suggested by
lowest-$\omega$ dominance, the mixing effect is irrelevant there.
However, it becomes more important at higher densities,
e.g. $C=0.3$ GeV at $n_{B}/n_{0}=3$ which leads to
a distinct modification from the spectrum in free space.
We would like to thank the organizers of the workshop
for giving an opportunity to present this work.
We acknowledge stimulating discussions with B. Friman,
N. Kaiser, S. Matsuzaki, M. Rho and W. Weise.
The work of C.S. has been supported in part
by the DFG cluster of excellence “Origin and Structure of the
Universe”.
The work of M.H. has been supported in part by
the JSPS Grant-in-Aid for Scientific Research (c) 20540262
and the Global COE Program of Nagoya University
“Quest for Fundamental Principles in the Universe (QFPU)”
from JSPS and MEXT of Japan.
References
(1)
M. Harada and C. Sasaki,
arXiv:0902.3608 [hep-ph].
(2)
See, e.g.,
V. Bernard and U. G. Meissner,
Nucl. Phys. A 489, 647 (1988);
T. Hatsuda and T. Kunihiro,
Phys. Rept. 247, 221 (1994);
R. D. Pisarski,
hep-ph/9503330;
G. E. Brown and M. Rho,
Phys. Rept. 269, 333 (1996);
F. Klingl, N. Kaiser and W. Weise,
Nucl. Phys. A 624, 527 (1997);
F. Wilczek,
hep-ph/0003183;
G. E. Brown and M. Rho,
Phys. Rept. 363, 85 (2002).
(3)
R. Rapp and J. Wambach,
Adv. Nucl. Phys. 25, 1 (2000);
R. S. Hayano and T. Hatsuda,
arXiv:0812.1702 [nucl-ex];
R. Rapp, J. Wambach and H. van Hees,
arXiv:0901.3289 [hep-ph].
(4)
M. Dey, V. L. Eletsky and B. L. Ioffe,
Phys. Lett. B 252, 620 (1990);
B. Krippa,
Phys. Lett. B 427, 13 (1998).
(5)
E. Marco, R. Hofmann and W. Weise,
Phys. Lett. B 530, 88 (2002);
M. Urban, M. Buballa and J. Wambach,
Phys. Rev. Lett. 88, 042002 (2002).
(6)
J. V. Steele, H. Yamagishi and I. Zahed,
Phys. Lett. B 384, 255 (1996);
Phys. Rev. D 56, 5605 (1997);
K. Dusling, D. Teaney and I. Zahed,
Phys. Rev. C 75, 024908 (2007);
K. Dusling and I. Zahed,
Nucl. Phys. A 825, 212 (2009).
(7)
C. Sasaki, M. Harada and W. Weise,
Prog. Theor. Phys. Suppl. 174, 173 (2008);
Phys. Rev. D 78, 114003 (2008);
Nucl. Phys. A 827, 350C (2009).
(8)
S. K. Domokos and J. A. Harvey,
Phys. Rev. Lett. 99, 141602 (2007).
(9)
N. Kaiser and U. G. Meissner,
Nucl. Phys. A 519, 671 (1990).
(10)
T. Sakai and S. Sugimoto,
Prog. Theor. Phys. 113, 843 (2005);
Prog. Theor. Phys. 114, 1083 (2005).
(11)
M. Harada, S. Matsuzaki and K. Yamawaki,
in preparation.
(12)
K. Peeters, J. Sonnenschein and M. Zamaklar,
Phys. Rev. D 74, 106008 (2006).
(13)
M. Harada and C. Sasaki,
Phys. Rev. D 73, 036001 (2006).
(14)
F. Riek, R. Rapp, T. S. Lee and Y. Oh,
Phys. Lett. B 677, 116 (2009). |
Hollow-core infrared fiber incorporating metal-wire metamaterial
Min Yan
[email protected]
Niels Asger Mortensen
[email protected]
Department of Photonics Engineering (DTU Fotonik),
Technical University of Denmark
DK-2800 Kgs. Lyngby, Denmark
Abstract
Infrared (IR) light is important for short-range wireless communication, thermal sensing, spectroscopy, material processing, medical surgery, astronomy, etc. However, IR light is in general much harder to transport than visible light or microwave radiation. Existing hollow-core IR waveguides usually rely on a layer of metallic coating on the inner wall of the waveguide. Such a metallic layer, though reflective, still absorbs guided light significantly due to its finite Ohmic loss, especially for transverse-magnetic (TM) light. In this paper, we show that a metal-wire based metamaterial can serve as an efficient TM reflector, reducing the propagation loss of the TM mode by two orders of magnitude. By further imposing a conventional metal cladding layer, which reflects specifically transverse-electric (TE) light, we can potentially obtain a low-loss hollow-core fiber. Simulations confirm that the loss values for several low-order modes are comparable to the best results reported so far.
pacs: 42.25.Bs, 78.66.Sq
I Introduction
Waveguides for IR light are most often inferior compared to those for visible light or microwaves. In general, the lack of excellent transparent solids at IR wavelengths calls for hollow-core guidance rather than solid-core guidance relying on total internal reflection (TIR). While the latter solution remains attractive for single-mode light guidance, usually over a short distance, extending such IR fibers to high-power light guidance is difficult due to adverse effects including Fresnel loss, thermal lensing, and low damage threshold power. Without TIR, hollow-core fibers require a highly reflective cladding mirror. At the moment, there are mainly two recognized hollow-core IR waveguiding structures, relying either on reflective metal mirrors Harrington:IRWGReview or on a dielectric photonic band-gap (PBG) material Temelkuran:BraggFiber , the latter first explored for optical wavelengths over the past decade Cregan:PBGFiber . Unlike at microwave wavelengths, metals at IR wavelengths are imperfect mirrors due to finite Ohmic absorption. Hence direct IR waveguiding using a hollow metallic fiber (HMF) does not work well. The same dilemma exists for the transport of terahertz electromagnetic (EM) waves, i.e. far-IR light (refer to Bowden:TeraHertzFiber and references therein). In practice, it is necessary to further refine the HMF by adding a dielectric coating on the metal cladding Harrington:IRWGReview ; Bowden:TeraHertzFiber . Photonic band-gap guidance, on the other hand, remains a potential solution, but the guiding mechanism is inherently band-gap limited and relies heavily on a highly periodic photonic crystal cladding. Bragg fibers with a 700 $\rm\mu m$-diameter core have facilitated hollow-core band-gap guidance of CO${}_{2}$ laser radiation ($10.6{\rm\mu m}$) with a propagation loss of $\sim$1 dB/m Temelkuran:BraggFiber . 
A state-of-the-art HMF with a dielectric coating and the same core size exhibits a similar loss level at this wavelength Harrington:IRWGReview .
In this paper, we show that metamaterials hold promise for entirely new guiding mechanisms, neither of photonic band-gap nature nor relying on classical total internal reflection. In fact, the potential of metal-based metamaterials in particular has already been exploited for novel hollow-core planar waveguide Schwartz:externalRef and fiber Smith:nanolett designs. In the present work, we propose to incorporate a metal-wire metamaterial into a cladding mirror design for hollow-core IR guidance. The guidance mechanism is fundamentally different from previous proposals Schwartz:externalRef ; Smith:nanolett , in which a metamaterial cladding with an effective refractive index between 0 and 1 is deployed to achieve light confinement; it is based instead on the fact that a highly anisotropic medium with different signs in its effective permittivity tensor components is able to totally reflect transverse-magnetic (TM) light. TM-polarized light has previously been considered problematic to propagate in a hollow metallic waveguide, in both slab and fiber geometries, as it suffers significantly higher loss than transverse-electric (TE) light.
The paper is organized as follows. In Section 2 we show why a planar interface between a metal-wire metamaterial and air can serve as a mirror for TM-polarized light at IR frequencies, and state the condition required for such total external reflection. We quantitatively derive the angular reflectance spectrum for a silver-wire medium approximated as a homogeneous medium within an effective medium theory (EMT), proving its superior reflection performance at large incidence angles compared to bulk silver. In Section 3, we study hollow-core IR guidance in cylindrical fibers with the metal-wire medium as cladding. The fiber, which converges to the traditional HMF as the fraction of metal increases, has a drastically improved performance over the HMF, as we support by calculations based both on EMT and on more rigorous full-vectorial finite-element simulations accounting for the mesoscopic structure of the metamaterial cladding. In Section 4, we briefly compare how our proposed metamaterial fiber differs from an HMF with a dielectric inner coating. Finally, discussion and conclusions are presented in Section 5.
II Planar geometry
II.1 Why does it reflect?
The planar geometry serves as a simple yet clear example of why a metal-wire based metamaterial can easily form a reflector for TM light at IR frequencies. Figure 1(a) depicts a semi-infinite metamaterial formed by $z$-oriented metal wires embedded in a host dielectric occupying a half-space, say $x<0$. When the wire diameter and wire separation are much smaller than the operating wavelength, the composite can be conveniently treated as a homogenized medium with an effective permittivity tensor of the diagonal form $\overline{\overline{\varepsilon}}=\mathrm{diag}(\varepsilon_{x},\varepsilon_{y},\varepsilon_{z})$, while for the permeability the medium appears isotropic with $\mu=1$. For uniformly dispersed wires we have $\varepsilon_{x}=\varepsilon_{y}\equiv\varepsilon_{t}$. When the size of the metal wires is close to the skin depth of the metal (tens of nanometers), the Maxwell–Garnett theory (MGT) is adequate, especially at small metal filling fractions, for deriving the effective permittivity tensor of the homogenized metamaterial elser:261102 ; Liu:08:nanowire , i.e.
$$\varepsilon_{t}=\varepsilon_{d}+\frac{f_{m}\varepsilon_{d}(\varepsilon_{m}-\varepsilon_{d})}{\varepsilon_{d}+0.5f_{d}(\varepsilon_{m}-\varepsilon_{d})}$$
(1a)
$$\varepsilon_{z}=f_{m}\varepsilon_{m}+f_{d}\varepsilon_{d},$$
(1b)
where $f_{d}$ and $f_{m}$ represent the filling ratios of the dielectric and metallic materials ($f_{d}+f_{m}=1$), respectively; $\varepsilon_{d}$ and $\varepsilon_{m}$ are the permittivities of the constituent dielectric and metallic materials, respectively.
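Equation (1) is straightforward to evaluate; the minimal sketch below (function name ours) reproduces the homogenized values quoted later in Subsection 2.2 for 20% silver wires in an $n=2.5$ host at 10.6 $\mu$m.

```python
def mgt_permittivity(eps_m, eps_d, f_m):
    """Maxwell-Garnett effective permittivity of a wire medium, Eq. (1)."""
    f_d = 1.0 - f_m
    eps_t = eps_d + f_m * eps_d * (eps_m - eps_d) / (eps_d + 0.5 * f_d * (eps_m - eps_d))
    eps_z = f_m * eps_m + f_d * eps_d
    return eps_t, eps_z

# Silver wires (20% fill) in an n = 2.5 host at 10.6 um:
eps_t, eps_z = mgt_permittivity(-2951 + 1654j, 6.25, 0.20)
print(eps_t)  # ~ 9.39 + 0.007i  (eps_t > 0)
print(eps_z)  # ~ -585.2 + 330.8i (Re eps_z < 0: an indefinite medium)
```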
As shown by Eq. (1), the effective permittivity components are solely determined by $f_{m}$. The component $\varepsilon_{z}$ is usually negative due to unrestricted electron motion in the $z$ direction, while $\varepsilon_{t}$ is usually positive elser:261102 ; Liu:08:nanowire ; PhysRevLett.76.4773 ; Yao:2008:nanowire . Such a metamaterial, with both positive and negative permittivity tensor components, is referred to as an indefinite medium Smith:03:indefiniteMedium .
In Fig. 1(a), we schematically show that a metal-wire medium totally reflects a TM-polarized light (with field components E${}_{x}$, H${}_{y}$, and E${}_{z}$) incident from air, with its wavevector lying in $xz$ plane. Before setting out to investigate the reflection conditions, we show how the total reflection is possible with a rather common indefinite medium fulfilling the very first requirement
$$\varepsilon_{x}>0,\ \mu_{y}>0,\ \varepsilon_{z}<0.\ \ \ \ \ \mathrm{(requirement\ 1)}$$
(2)
The medium chosen has $\varepsilon_{x}=2$, $\mu_{y}=1$, $\varepsilon_{z}=-10$ (a typical effective medium realizable with metal wires). Notice that only these three material parameters are relevant to TM-polarized light.
When a TM wave is propagating in $xz$ plane in a general medium, the dispersion relation of the wave is governed by
$$\frac{k_{z}^{2}}{\varepsilon_{x}}+\frac{k_{x}^{2}}{\varepsilon_{z}}=k_{0}^{2}\mu_{y},$$
(3)
where $k_{0}=\omega/c$ is the free-space wave number. In a waveguide picture where the EM field propagates along the $z$ direction, $k_{z}$ is also called the propagation constant $\beta$, which is required to be real. For an EM wave propagating in an ordinary isotropic dielectric medium, the eligible real $(k_{z},k_{x})$ combinations form a circle in the $k_{z}$-$k_{x}$ plane, as shown in Fig. 1(b) for the case of vacuum. However, for an indefinite medium defined by Eq. (2), Eq. (3) yields a real $k_{x}$ (a hyperbola) only if $k_{z}>k_{c}$, where $k_{c}=k_{0}\sqrt{\varepsilon_{x}\mu_{y}}$, and an imaginary $k_{x}$ otherwise. For waveguiding purposes, we always desire an imaginary $k_{x}$, in order for the field to be evanescent along the lateral $x$ direction (a wave propagating along $z$ while evanescent along $x$ is a surface wave). Figure 1(b) shows the $k_{x}$-$k_{z}$ dispersion relation for the above-mentioned representative indefinite medium. As indicated by Fig. 1(b), incident light from air (red arrow) can easily be matched in its tangential $k$ component ($k_{z}$) by a surface wave in the indefinite medium (solid-dotted arrow). Therefore, the medium reflects TM light with $k$ lying in the $xz$ plane. It follows that a hollow-core slab waveguide with two claddings made of such media is able to propagate TM light.
Based on the dispersion diagram in Fig. 1(b), we state a further requirement for the total reflection to occur at all incidence angles in the $xz$ incidence plane: the hyperbola should not touch the red circle, since otherwise, at large incidence angles, the $k_{z}$ component of the incident light could be matched by a propagating wave in the indefinite medium, i.e., light would transmit into the substrate. This imposes $k_{0}^{2}\varepsilon_{x}\mu_{y}>k_{0}^{2}$, or simply
$$\varepsilon_{x}\mu_{y}>1,\ \ \ \ \ \mathrm{(requirement\ 2)}$$
(4)
which should be imposed on top of requirement 1.
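The two requirements can be checked directly from Eq. (3): solving it for $k_{x}^{2}$ gives $k_{x}^{2}=\varepsilon_{z}(k_{0}^{2}\mu_{y}-k_{z}^{2}/\varepsilon_{x})$, which for the representative medium is negative (imaginary $k_{x}$, evanescent field) for every tangential wave number reachable from air, precisely because $\varepsilon_{x}\mu_{y}>1$. A quick numerical sketch:

```python
import numpy as np

eps_x, mu_y, eps_z = 2.0, 1.0, -10.0  # representative indefinite medium
k0 = 1.0                              # normalized free-space wave number

# k_x^2 in the medium, from Eq. (3): k_z^2/eps_x + k_x^2/eps_z = k0^2 mu_y
kz = np.linspace(0.0, k0, 1001)       # all tangential wave numbers from air
kx2 = eps_z * (k0**2 * mu_y - kz**2 / eps_x)

print(np.all(kx2 < 0))  # True: evanescent at every incidence angle -> total reflection

# A real k_x (propagating wave) appears only beyond k_c = k0*sqrt(eps_x*mu_y) > k0:
kc = k0 * np.sqrt(eps_x * mu_y)
print(kc)
```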
Putting the two requirements together, we obtain a simple sufficient condition for a medium possessing all-angle reflection (in the $xz$ incidence plane) for TM light:
$$\varepsilon_{x}>1,\ \mu_{y}=1,\ \varepsilon_{z}<0.$$
(5)
This condition is rather easy to fulfill with a metal-wire medium. For instance, in the MGT approximation, silver wires in a dielectric host with any permittivity from 2 to 12, and with any metal filling fraction from 10% to 50%, satisfy the above condition at wavelengths of 2 $\mu$m and beyond. This suggests the broad structural and spectral applicability of our proposed reflector and, in turn, of the waveguide designs.
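As a sanity check of this claim at 10.6 $\mu$m (the only wavelength for which a silver permittivity is quoted in this paper), the sweep below confirms via Eq. (1) that $\varepsilon_{t}>1$ and $\mathrm{Re}\,\varepsilon_{z}<0$ hold across hosts with permittivity from 2 to 12 and metal fractions from 10% to 50%; the claimed validity down to 2 $\mu$m would require dispersion data not reproduced here.

```python
import numpy as np

def mgt(eps_m, eps_d, f_m):
    # Maxwell-Garnett effective permittivity, Eq. (1)
    f_d = 1.0 - f_m
    eps_t = eps_d + f_m * eps_d * (eps_m - eps_d) / (eps_d + 0.5 * f_d * (eps_m - eps_d))
    eps_z = f_m * eps_m + f_d * eps_d
    return eps_t, eps_z

eps_ag = -2951 + 1654j  # silver at 10.6 um
ok = True
for eps_d in np.linspace(2.0, 12.0, 21):
    for f_m in np.linspace(0.10, 0.50, 21):
        eps_t, eps_z = mgt(eps_ag, eps_d, f_m)
        ok &= (eps_t.real > 1.0) and (eps_z.real < 0.0)
print(ok)  # True: condition (5) holds over the whole sweep
```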
II.2 How well does it reflect?
Having understood the all-angle reflection for $k$ lying in the $xz$ plane, our next task is to determine how good the reflection is. Using a full-wave analytical technique Hecht:Optics , we examine how the reflectance of such a metamaterial substrate compares to that of plain metal for TM light. In this paper we concentrate primarily on the CO${}_{2}$ laser wavelength, 10.6 ${\rm\mu m}$. This wavelength is important especially because high-power CO${}_{2}$ laser beams can be used for material processing and various medical surgeries. For the wires we consider silver with $\varepsilon=-2951+1654i$ Palik:OpticalConstMetal , embedded in a host dielectric with a refractive index of 2.5, corresponding to $\varepsilon=6.25$. Several materials transparent at this wavelength have an index around this value, such as zinc selenide (ZnSe) and arsenic selenide (As${}_{2}$Se${}_{3}$). We consider a metamaterial in which silver wires take up a volumetric fraction of 20%.
By Eq. (1), the metamaterial under consideration can be treated effectively as a homogeneous medium with $\varepsilon_{t}=9.3876+0.0071i$ and $\varepsilon_{z}=-585.2+330.8i$, i.e. an indefinite medium. Using this approximation, the reflectance of the metamaterial-air interface as a function of incidence angle is computed in Fig. 2(a); the wavevector of the incoming plane wave lies in the $xz$ plane. The same reflectance curve is computed for plain silver. Both reflectance curves are characterized by a principal angle of incidence Hecht:Optics , where the reflectance is at a minimum. By comparison, the reflectance of the indefinite medium is lower than that of plain metal for most incidence angles. However, as clearly shown by the zoomed-in plot in Fig. 2(b), at large incidence angles (greater than 88.6 degrees) the metamaterial reflects better. In a large-core hollow waveguide, the lowest-order modes indeed correspond to near-90-degree incidence angles, as will be confirmed in the next section. Therefore, a metal-wire medium can potentially be used for making a less lossy hollow waveguide than one made of plain metal.
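The comparison in Fig. 2 can be sketched with anisotropic Fresnel formulas. The expression below is our own derivation (conventions may differ from the cited technique): for TM incidence from air, $r=(1-\zeta)/(1+\zeta)$ with $\zeta=k_{x}^{\mathrm{med}}/(k_{x}^{\mathrm{air}}\varepsilon_{z})$ and $(k_{x}^{\mathrm{med}})^{2}=\varepsilon_{z}(k_{0}^{2}-k_{z}^{2}/\varepsilon_{t})$, which reduces to the standard isotropic result when $\varepsilon_{t}=\varepsilon_{z}$.

```python
import numpy as np

def R_TM(theta_deg, eps_t, eps_z, k0=1.0):
    """TM reflectance of an air / uniaxial-medium interface (wires along z,
    interface normal along x). Our derivation; eps_t = eps_z recovers the
    standard isotropic Fresnel result."""
    th = np.deg2rad(theta_deg)
    kz, kxi = k0 * np.sin(th), k0 * np.cos(th)
    kxt = np.sqrt(eps_z * (k0**2 - kz**2 / eps_t) + 0j)
    kxt = np.where(kxt.imag < 0, -kxt, kxt)  # branch decaying into the medium
    zeta = kxt / (kxi * eps_z)
    r = (1 - zeta) / (1 + zeta)
    return np.abs(r)**2

eps_ag = -2951 + 1654j                             # silver at 10.6 um
eps_t, eps_z = 9.3876 + 0.0071j, -585.2 + 330.8j   # homogenized wire medium

for th in (80.0, 88.0, 89.0, 89.5):
    print(th, R_TM(th, eps_t, eps_z), R_TM(th, eps_ag, eps_ag))
```

With these inputs the wire medium out-reflects plain silver near grazing incidence (above roughly 88.6 degrees), while plain silver wins at smaller angles, in line with Fig. 2(b).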
However, the metal-wire medium has difficulty confining TE light, since the metamaterial appears to TE light as if it were a normal dielectric. To remedy the problem, we further add a metal substrate beneath the metamaterial. In Fig. 3, we show the reflectance of such a hybrid substrate. The metamaterial layer is the same as that considered in Fig. 2 and has a thickness of 2 $\mu$m. The reflectance spectrum is compared to that of plain metal, also shown in Fig. 3. It is observed that adding a layer of metamaterial decreases the reflectance only very slightly, with the reflectance still higher than 99.89% when the incidence angle is larger than $87^{\circ}$. From Figs. 2 and 3, one also sees that a plain silver surface reflects TE light much better than TM light, especially at large incidence angles.
III Fiber geometry
III.1 Confinement condition and the proposal
It is relatively straightforward to see that the metal-wire medium discussed in Section 2 can be rolled into a fiber geometry, with the metal wires running along the fiber axis, to propagate TM light in a hollow core [see Fig. 4(a)]. The TM light, in the fiber's native cylindrical coordinates, consists of the E${}_{r}$, H${}_{\theta}$, and E${}_{z}$ field components. To be more rigorous, we re-derive the material requirement for confinement in the cylindrical coordinate system.
For a fiber with an air core, one can derive the most general requirement for achieving TM field confinement (see appendix), expressed in terms of the cladding material parameters, as
$$\frac{\varepsilon_{z}}{\varepsilon_{t}}(k_{0}^{2}\mu_{t}\varepsilon_{t}-\beta^{2})<0.$$
(6)
Here, $\beta$ is the propagation constant of the confined mode, while $\mu_{t}$ and $\varepsilon_{t}$ are cladding material parameters defined by $\varepsilon_{r}=\varepsilon_{\theta}\equiv\varepsilon_{t}$ and $\mu_{r}=\mu_{\theta}\equiv\mu_{t}$.
Similarly, the requirement for TE (with field components H${}_{r}$, E${}_{\theta}$, and H${}_{z}$) confinement is
$$\frac{\mu_{z}}{\mu_{t}}(k_{0}^{2}\varepsilon_{t}\mu_{t}-\beta^{2})<0.$$
(7)
Note that a hollow-core guided mode should have $\beta<k_{0}$.
Apparently there are several combinations of $\varepsilon_{t}$, $\varepsilon_{z}$, $\mu_{t}$ and $\mu_{z}$ which fulfill the inequalities in Eqs. (6) and (7). Bulk metal (i.e. $\varepsilon<0$ and $\mu=1$) is of course a solution. In relation to the metal-wire metamaterial, we identify another set of solutions specifically for TM confinement:
$$\varepsilon_{t}>1,\ \mu_{t}=1,\ \varepsilon_{z}<0.$$
(8)
Notice that Eq. (8) agrees perfectly with Eq. (5). A similar solution exists for TE confinement, namely $\mu_{t}>1$, $\varepsilon_{t}=1$ and $\mu_{z}<0$. It is so far difficult to realize a low-loss metamaterial fulfilling this TE confinement condition. In addition, plain metal is already an excellent TE reflector, as concluded in Subsection 2.2; as will be shown later, the propagation loss of the TE${}_{01}$ mode can easily be 1000 times smaller than that of the TM${}_{01}$ mode in a HMF. Therefore, a more expensive alternative TE reflector is not worthwhile.
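Restricting to real parts for illustration, one can verify that the homogenized wire medium of Section 2 satisfies the TM inequality (6) for every hollow-core propagation constant $\beta<k_{0}$ while failing the TE inequality (7), consistent with the need for an additional metal layer:

```python
import numpy as np

k0 = 1.0
eps_t, eps_z, mu_t, mu_z = 9.3876, -585.2, 1.0, 1.0  # Re parts of the wire medium

beta = np.linspace(0.0, 0.9999 * k0, 500)  # hollow-core modes have beta < k0

tm_confined = (eps_z / eps_t) * (k0**2 * mu_t * eps_t - beta**2) < 0  # Eq. (6)
te_confined = (mu_z / mu_t) * (k0**2 * eps_t * mu_t - beta**2) < 0    # Eq. (7)

print(np.all(tm_confined))  # True: TM is confined for all beta < k0
print(np.any(te_confined))  # False: the wire medium alone cannot confine TE
```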
The fiber structure we propose takes advantage of both a metal-wire medium for reflecting TM light and plain metal for reflecting TE light. A schematic diagram of the fiber is shown in Fig. 4(b). The fiber cladding consists of a thin layer of metamaterial for reflecting TM light and a plain metal layer for reflecting TE light. Notice that TE light is perturbed only very slightly by the presence of the metamaterial layer, as implied by Fig. 3. We refer to this fiber as a hybrid-clad fiber. For practical realizations, the fiber may be coated with an outer jacket for better mechanical stability and ease of handling.
III.2 Numerical results
In our numerical characterization, we first turn our attention to the fiber illustrated in Fig. 4(a), whose cladding is all metamaterial. The metal wires (diameter $d$) in the metamaterial are arranged in annular layers, supported by a dielectric host. In the cases studied, the metamaterial can be approximately considered as a stack of square cells [inset in Fig. 4(a)] with cell separation $\Lambda$. For such stacking, the filling fraction of metal wires can reach up to $\sim 0.785$.
In a HMF with a relatively large core size, there are normally a huge number of propagating modes. Among them, the TE${}_{01}$ mode is recognized as the least lossy mode in a HMF (without dielectric coating) Bowden:TeraHertzFiber ; the first mixed-polarization MP${}_{11}$ mode, traditionally known as the HE${}_{11}$ mode, is most useful for power-delivery applications, due to its close resemblance to a laser output beam in both intensity profile and polarization; modes with TM polarization are the most lossy. Mixed-polarization (MP) modes have both TE and TM polarization components; therefore a MP mode usually has a propagation loss larger than that of a TE mode but smaller than that of a TM mode when the modes under comparison belong to the same mode group (e.g. MP${}_{21}$, TE${}_{01}$, and TM${}_{01}$). In this paper it is of high priority to address the improved performance of the TM modes, especially the TM${}_{01}$ mode, with our metamaterial fiber design. Generally, a fiber with a less lossy TM${}_{01}$ mode will also have a less lossy MP${}_{11}$ mode, as will be shown by our numerical calculations. In addition, the TM${}_{01}$ mode is valuable in its own right for other uses: the mode has an annular beam shape and a radially oriented electric field, which allows it to be focused into a tighter spot (compared to a linearly polarized beam) with a large longitudinal electric field component at the focus. This can be exploited for imaging, material processing, etc. Quabis:tighterFocus .
We consider a fiber with an all-metamaterial cladding and a core diameter of 700 $\mu$m. With a material homogenized in the MGT approximation, we can quite efficiently derive the guided TM${}_{01}$ mode properties, including its loss and effective mode index ($n_{\mathrm{eff}}$). The mode properties are plotted against $f_{m}$, varying from $\sim$0 (completely dielectric) to 1 (completely metal), in Fig. 5. It should be remarked that the accuracy of the MGT might be questionable at large $f_{m}$ values. Nevertheless, the results based on the MGT agree well with rigorous numerical simulations, as will be shown shortly. It is seen from Fig. 5(a) that, as $f_{m}$ decreases, the loss value initially remains close to that of the metal-clad fiber, and then drops sharply when $f_{m}$ becomes less than 0.2. This suggests that the metamaterial cladding can perform better than a full-metal cladding in confining the TM mode.
In practice, the limit where the MGT is valid, i.e. wire diameter comparable to the skin depth, is rather difficult to achieve. Here we numerically simulate realistic fiber structures, taking all mesoscopic geometrical features into account. A finite element method (FEM) has been employed for this purpose. Simulations for a number of realistic structures are summarized in Fig. 5, where we show the loss and $n_{\mathrm{eff}}$ values of the TM${}_{01}$ mode as functions of $f_{m}$ for wire spacings $\Lambda$ of 0.125, 0.25, 0.5, 1, 2, and 4 ${\rm\mu m}$. Notice that the FEM simulations for all curves corresponding to realistic microstructured fibers start from $f_{m}=0.02$ and stop at $f_{m}=0.74$; beyond that filling fraction, the values (dotted portions of the curves) are extrapolated from the simulated data. It is observed that, as $\Lambda$ gets larger, the loss curve shifts away from the limiting MGT curve, downwards to smaller values. This further evidences that the metal-wire based metamaterial makes a superior reflector for TM light compared to plain metal. However, the downward shift in the loss curve is not without limit. In particular, when $\Lambda$ increases to a certain value (beyond 2 $\mu$m in this case), the spacings between two neighboring layers of metal wires become large enough to support resonant modes. These modes experience relatively high loss due to their proximity to the metal wires. The coupling from the TM${}_{01}$ core mode to these cladding resonances results in higher loss for the TM${}_{01}$ mode, manifested by the loss spikes observed in Fig. 5(a). These resonances become especially severe for large $\Lambda$ values. From Fig. 5, it is concluded that one should strike a compromise between the propagation loss and the number of cladding resonances in choosing the right $\Lambda$ (and thereby $d$). 
We should mention that it is difficult for the guided core mode to be resonant with the surface plasmon polaritons (SPPs) supported by individual metal wires, as the SPP guided by a single wire has $n_{\mathrm{eff}}>2.5$, which is significantly larger than the $n_{\mathrm{eff}}$ of the core mode. Figure 5(b) shows that the $n_{\mathrm{eff}}$ value is larger at smaller $f_{m}$ values, generally around 0.99982. The corresponding incidence angle of the guided light can be estimated, through $\theta=\sin^{-1}(\beta/k_{0})=\sin^{-1}(n_{\mathrm{eff}})$, to be $\sim 88.9^{\circ}$. This is consistent with the reflection spectrum shown in Fig. 2, in which an improved reflection by the metal-wire metamaterial is predicted at this particular incidence angle.
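The quoted angle follows directly from $\theta=\sin^{-1}(n_{\mathrm{eff}})$:

```python
import math

n_eff = 0.99982  # typical TM01 effective index read off Fig. 5(b)
theta = math.degrees(math.asin(n_eff))
print(theta)  # ~88.9 degrees: near-grazing incidence on the cladding
```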
We remark that, for all microstructured metamaterial fiber simulations in Fig. 5 (excluding the homogenized fiber in the MGT limit), we used five layers of metal wires in the cladding, beyond which we have an air background. When $\Lambda=0.125\,\mu$m, the metamaterial cladding is as thin as 0.625 $\mu$m (about 17 times smaller than 10.6 $\mu$m). Quite remarkably, such a metamaterial cladding, though very thin, confines TM light exceptionally well. In Fig. 6, we show the field distribution of the guided TM${}_{01}$ mode. It is noticed from Fig. 6(a) that the overall mode field is well guided by the metamaterial cladding. The zoomed-in plot in Fig. 6(b) reveals the detailed field interaction with the metal-wire medium. The partially excited SPPs at the metal wires adjacent to the interface imply the anti-resonant nature of the cladding metamaterial. The cladding field macroscopically resembles an evanescent wave which decays rapidly away from the core. The plot also indicates quantitatively that a couple of metal-wire layers would be sufficient to prevent leakage of TM light. In other words, the metamaterial cladding behaves like a metal for TM light, with a very small (sub-micron) effective skin depth.
The fiber in Fig. 4(a), though supporting TM modes, confines neither TE nor MP modes. The hybrid-clad fiber with two claddings in Fig. 4(b) remedies the problem. Of the two claddings, the inner one is of special importance. Based on our results in Fig. 5, we here focus on a particular metal-wire spacing, $\Lambda=2~{}{\rm\mu m}$. First we investigate the effect of the metamaterial cladding thickness by studying two hybrid-clad fibers: one has a metamaterial thickness of 2 $\mu$m (one layer of metal wires); the other has a metamaterial thickness of 4 $\mu$m (two layers of metal wires). The loss values for the TE${}_{01}$ and TM${}_{01}$ modes guided by the two fibers are shown in Fig. 7, expressed again as functions of $f_{m}$. We also superimposed the loss curve of the TM${}_{01}$ mode guided by a fiber with a pure metamaterial cladding [i.e. the curve marked with “$\Lambda=2\mu$m” in Fig. 5(a)]. According to Fig. 7, at relatively large $f_{m}$ values the TM${}_{01}$ mode loss is hardly affected compared to the value for the metamaterial-clad fiber, even when the hybrid fiber has only one layer of metal wires. At small $f_{m}$, a thin metamaterial cladding layer even helps to reduce the cladding resonance, and therefore the loss of the TM${}_{01}$ mode. It turns out that the lowest loss for the TM${}_{01}$ mode is achieved by the hybrid fiber with only one layer of metal wires. In the limit $f_{m}=1.0$, both hybrid fibers degenerate into a HMF, whose TM${}_{01}$ and TE${}_{01}$ modes have loss values indicated by the black and open stars, respectively, in Fig. 7. Clearly, in such a conventional HMF the TM mode suffers over 1000 times higher loss than the TE mode. This is the key factor restricting the use of such a conventional fiber at this operating wavelength.
By adding a metamaterial layer as an inner cladding, the TE${}_{01}$ mode is found to have a slightly higher loss than in a HMF of the same core size. However, the loss value of the TE${}_{01}$ mode is still below 0.1 dB/m, which is well acceptable for a wide range of applications. No cladding resonance deteriorating the TE propagation has been found in this particular case. However, we point out that, since the inner cladding is transparent to TE waves, it can give rise to constructive wave resonances in the layer; in turn, such resonances can amplify the loss caused by metal-wire absorption. This resonance condition is fulfilled when the metamaterial thickness equals a multiple of half the transverse wavelength of TE light in the medium, which we have confirmed numerically (not shown here).
In Fig. 8, we show how the loss values of the TE${}_{01}$ and TM${}_{01}$ modes guided in a hybrid-clad fiber vary with the fiber core size. As the core radius $R$ increases, the losses decrease roughly as $1/R^{3}$. Such loss dependence on $R$ is also found for other types of hollow-core waveguides, including the HMF Marcatili:hollowFiber and the OmniGuide Johnson:01:BF . In Fig. 8 we also show the loss values of the same two modes guided in a HMF. By comparison, we see that the propagation loss for the TM mode is greatly reduced with our hybrid-clad design (by more than 100 times when the core diameter is moderately large), whereas the loss for the TE mode experiences only a slight increase.
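The $1/R^{3}$ trend gives a quick rule of thumb: doubling the core radius should reduce the loss by roughly a factor of 8. Taking the 0.52 dB/m TM${}_{01}$ loss at a 700 $\mu$m core diameter quoted in the next paragraph as a reference point, and assuming exact cubic scaling (which Fig. 8 supports only approximately):

```python
def scaled_loss(loss_ref, R_ref, R):
    """Extrapolate propagation loss assuming loss ~ 1/R^3 (illustrative)."""
    return loss_ref * (R_ref / R)**3

loss_700 = 0.52                              # dB/m, TM01 at 700 um core diameter
print(scaled_loss(loss_700, 350.0, 700.0))   # ~0.065 dB/m at twice the radius
```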
MP modes have both TE and TM field components, and the two sets of field components are not independent. Through numerous simulations, we have found that a MP mode with a low azimuthal order number generally has a propagation loss in between those of the TE${}_{01}$ and TM${}_{01}$ modes. In addition, the loss of a MP mode is somewhat closer to that of the TE${}_{01}$ mode. For example, for a fiber with a 700 ${\rm\mu m}$ core diameter, the MP${}_{11}$ mode has a loss of 0.11 dB/m, and the MP${}_{21}$ mode has a loss of 0.28 dB/m. From Fig. 8, the losses for the TE${}_{01}$ and TM${}_{01}$ modes are respectively 0.07 dB/m and 0.52 dB/m. It is worth mentioning that the MP${}_{11}$ mode in a HMF with the same core size experiences a loss of 49 dB/m.
IV Comparison to HMF with dielectric coating
It has been a common practice to reduce the propagation loss of a HMF by imposing a dielectric coating on its inner wall Harrington:IRWGReview ; Bowden:TeraHertzFiber . Here we briefly compare the performance of a dielectric-coated HMF with that of a metamaterial fiber. Since TE light sees the metamaterial layer as if it were a dielectric medium, the two fiber types under comparison should be more or less equivalent for guiding TE modes. The major difference lies in their guidance of TM modes. With the materials deployed in the previous sections, we will show that a HMF with a dielectric coating is able to perform slightly better than a metamaterial fiber in guiding the TM${}_{01}$ mode. However, the former has the drawback of being quite sensitive to the dielectric coating thickness, which in turn implies sensitivity to the operating wavelength.
Let us borrow the same dielectric previously used for the metamaterial host ($\varepsilon=6.25$) as the coating for a HMF. The HMF’s outer cladding (silver) starts from a fixed value $r_{2}=350\mu$m. The first cladding interface is located at $r_{1}=r_{2}-d_{c}$, with $d_{c}$ the coating thickness. We examine the propagation loss of the guided TM${}_{01}$ mode as a function of $d_{c}$. The result is shown in Fig. 9. When the coating thickness increases from 0 to 6 $\mu$m, the mode loss undergoes a periodic variation, with values ranging from 0.275 to over 100 $\rm dB/m$. This periodic change in loss is caused by the resonance and antiresonance of the dielectric layer. Therefore, experimentally one has to locate an optimum coating thickness in order for such a fiber to work at minimum loss Bowden:TeraHertzFiber . Now, if we replace the dielectric coating with a metamaterial one, the periodic variation in modal loss can be suppressed. This is shown again in Fig. 9 by two flat curves, which correspond to two hybrid fibers with different wire separations but the same metal filling fraction of 20%. The independence of the TM modal loss from the metamaterial coating thickness is an inherent consequence of the fact that TM waves are evanescent in such a metamaterial. Hence, in practice, the hybrid fiber with a metamaterial coating should be less sensitive to structural parameters, at least for the TM modes. We have also explored the possibility of further reducing the TM${}_{01}$ loss of a hybrid-clad fiber with a third, dielectric coating on top of the metamaterial layer. However, the further reduction in loss is not significant.
We should emphasize that the difference between the two types of fibers is far more complicated than what is presented in Fig. 9. For example, unlike TM modes, TE modes in the hybrid metamaterial fiber should have a periodic loss dependence on the coating thickness. That is, the two types of fibers have similar performance in guiding TE modes. Comparison of the MP modes between the two types of fibers would be more intriguing. Since a MP mode comprises both TE and TM wave components, such modes in both fibers will be sensitive to the coating thickness. Some difference should however be expected because of their different guidance behaviors for the TM wave components. Due to the significantly heavier computing resources required for numerically deriving the MP modes, an explicit comparison of the guidance of MP modes by the two types of fibers is not presented here.
V Discussion and conclusion
Although our presentation has focused on the CO${}_{2}$ wavelength, simulations were also carried out for other fiber structures based on silver and silica materials designed to operate at the 1.55 $\mu$m wavelength. A very similar improvement in TM mode guidance for a hybrid-clad metamaterial fiber compared to a metal-clad fiber is observed. However, due to the inherently large Ohmic absorption of silver at this wavelength, the propagation loss of the TM${}_{01}$ mode in a hybrid-clad fiber, e.g. with a 30 $\mu$m core diameter, though reduced, can still be higher than 100 dB/m, which is hardly useful for practical applications. Hence, with commonly accessible materials at our disposal, the advantage of the proposed metamaterial fiber only becomes obvious at IR frequencies.
In conclusion, we have shown that a metal-wire based metamaterial functions as a TM reflector. The reflectance from a substrate made of such a metamaterial can be better than that from a plain metal at large incidence angles. Numerical simulations show that a hollow fiber with such a metamaterial cladding can propagate a TM mode over 100 times farther than a simple HMF. With a hybrid-clad fiber, which has a thin inner metamaterial cladding and an outer metal cladding, we can achieve low-loss propagation of modes in all categories. Such a fiber can in theory be as compelling as the state-of-the-art fibers, including the Bragg fiber and the more traditional HMF with a dielectric coating, for IR light delivery. We emphasize that the proposed guiding principle, and in turn the fiber structure, is within fabrication capability, at least in the foreseeable future. Our study also suggests that innovations in metamaterial technology can open many other possibilities for designing new EM waveguides, especially for radiation frequencies previously considered problematic to transport.
Acknowledgement
This work is supported by The Danish Council for Strategic Research through the
Strategic Program for Young Researchers (DSF grant 2117-05-0037).
Appendix
We derive the material requirement for wave confinement in a cylindrical hollow fiber, as specified by Eqs. (6) and (7) in the text. We assume that the material parameters of the cladding have the tensor forms
$$\overline{\overline{\varepsilon}}=\left(\begin{array}{ccc}\varepsilon_{r}&0&0\\ 0&\varepsilon_{\theta}&0\\ 0&0&\varepsilon_{z}\end{array}\right),\qquad\overline{\overline{\mu}}=\left(\begin{array}{ccc}\mu_{r}&0&0\\ 0&\mu_{\theta}&0\\ 0&0&\mu_{z}\end{array}\right).$$
(9)
Note that any tensor component can take a negative value. To simplify the analysis, we further assume $\varepsilon_{r}=\varepsilon_{\theta}\equiv\varepsilon_{t}$ and $\mu_{r}=\mu_{\theta}\equiv\mu_{t}$. The harmonic dependence is taken as $\exp(-j\omega t+j\beta z)$. Due to the cylindrical symmetry, the field within the medium is completely characterized by two similar wave equations, one for $\mathrm{H}_{z}$ and the other for $\mathrm{E}_{z}$. The $\mathrm{H}_{z}$ wave equation is
$$\frac{\partial^{2}\mathrm{H}_{z}}{\partial r^{2}}+\frac{1}{r^{2}}\frac{\partial^{2}\mathrm{H}_{z}}{\partial\theta^{2}}+\frac{1}{r}\frac{\partial\mathrm{H}_{z}}{\partial r}+\frac{\mu_{z}}{\mu_{t}}k_{t}^{2}\mathrm{H}_{z}=0,$$
(10)
where $k_{t}^{2}=k_{0}^{2}\mu_{t}\varepsilon_{t}-\beta^{2}$. By separation of variables, $\mathrm{H}_{z}=\Psi(r)\Theta(\theta)$, Eq. (10) can be decomposed into two equations. One of them gives rise to the angular dependence of the field, $\exp{(jm\theta)}$, where $m$ is an integer denoting the angular momentum number. The radial dependence of the field is governed by
$$\frac{\partial^{2}\Psi}{\partial r^{2}}+\frac{1}{r}\frac{\partial\Psi}{\partial r}+\frac{1}{r^{2}}\left(\frac{\mu_{z}}{\mu_{t}}k_{t}^{2}r^{2}-m^{2}\right)\Psi=0.$$
(11)
Equation (11) is a Bessel or a modified Bessel differential equation, depending on the sign of $\frac{\mu_{z}}{\mu_{t}}k_{t}^{2}$. For light confinement in a hollow core, the field must be evanescent in the cladding while $\beta^{2}<k_{0}^{2}$ (as the wave should be propagating in the core). We then find that this condition is fulfilled when
$$\frac{\mu_{z}}{\mu_{t}}(k_{0}^{2}\varepsilon_{t}\mu_{t}-\beta^{2})<0$$
(12)
and the corresponding radial eigenfield in the cladding can be written generally in terms of modified Bessel functions as
$$\Psi=\mathcal{A}I_{m}(\tilde{k}_{t}r)+\mathcal{B}K_{m}(\tilde{k}_{t}r),$$
(13)
where $\tilde{k}_{t}=\sqrt{-\frac{\mu_{z}}{\mu_{t}}k_{t}^{2}}$. A similar analysis can be carried out for the $\mathrm{E}_{z}$ wave equation. The resulting condition for $\mathrm{E}_{z}$ confinement is
$$\frac{\varepsilon_{z}}{\varepsilon_{t}}(k_{0}^{2}\mu_{t}\varepsilon_{t}-\beta^{2})<0.$$
(14)
The general radial wave solution is the same as Eq. (13), except with $\mu$ replaced by $\varepsilon$. The other field components can be written as functions of $\mathrm{E}_{z}$ and $\mathrm{H}_{z}$. Therefore, once the conditions specified by Eqs. (12) and (14) are fulfilled, confinement of the overall mode is ensured.
When $m=0$, the six field components are divided into two unrelated groups giving rise to two types of angularly invariant modes: a TE mode group with field components H${}_{z}$, E${}_{\theta}$, and H${}_{r}$; and a TM mode group with field components E${}_{z}$, H${}_{\theta}$, and E${}_{r}$. Therefore, Eq. (12) is the general guidance condition for TE modes, and Eq. (14) is the general guidance condition for TM modes.
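The sign conditions above are easy to check numerically. The following sketch evaluates Eqs. (12) and (14) for an illustrative uniaxial cladding, a wire-medium-like case with a large negative $\varepsilon_{z}$ and positive $\varepsilon_{t}$ (all numbers are assumptions for illustration), and verifies that the $K_{m}$ term of Eq. (13) decays radially.

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

def confinement(eps_t, eps_z, mu_t, mu_z, n_eff, k0=1.0):
    """Evaluate the TE condition, Eq. (12), and the TM condition, Eq. (14)."""
    beta2 = (n_eff * k0) ** 2
    kt2 = k0 ** 2 * mu_t * eps_t - beta2
    te_confined = (mu_z / mu_t) * kt2 < 0    # Eq. (12)
    tm_confined = (eps_z / eps_t) * kt2 < 0  # Eq. (14)
    return te_confined, tm_confined, kt2

# Wire-medium-like cladding: TM waves evanescent, TE sees a plain dielectric.
te, tm, kt2 = confinement(eps_t=6.25, eps_z=-50.0, mu_t=1.0, mu_z=1.0, n_eff=0.99)
print(te, tm)  # False True

# For the TM case, the radial K_m solution of Eq. (13) decays with r.
ktil = np.sqrt(-(-50.0 / 6.25) * kt2)
print(kv(0, ktil * 1.0) > kv(0, ktil * 2.0))  # True
```

Consistent with the text, this cladding confines TM fields (Eq. (14) holds) while TE fields see only an ordinary dielectric and are not confined by this mechanism alone.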
References
(1)
J. A. Harrington, “A review of IR transmitting, hollow waveguides,”
Fiber and Integrated Optics 19, 211 (2000).
(2)
B. Temelkuran, S. D. Hart, G. Benoit, J. D. Joannopoulos, and Y. Fink,
“Wavelength-scalable hollow optical fibres with large photonic
bandgaps for CO${}_{2}$ laser transmission,” Nature 420, 650
(2002).
(3)
R. F. Cregan, B. J. Mangan, J. C. Knight, T. A. Birks, P. S. J. Russell, P. J.
Roberts, and D. C. Allan, “Single-mode photonic band gap guidance of
light in air,” Science 285(5433), 1537 (1999).
(4)
B. Bowden, J. A. Harrington, and O. Mitrofanov, “Low-loss modes in
hollow metallic terahertz waveguides with dielectric coatings,” Appl. Phys.
Lett. 93, 181104 (2008).
(5)
B. T. Schwartz and R. Piestun, “Waveguiding in air by total external
reflection from ultralow index metamaterials,” Appl. Phys. Lett. 85,
1 (2004).
(6)
E. J. Smith, Z. Liu, Y. Mei, and O. G. Schmidt, “Combined surface
plasmon and classical waveguiding through metamaterial fiber design,” Nano
Lett., DOI:10.1021/nl900550j (2009).
(7)
J. Elser, R. Wangberg, V. A. Podolskiy, and E. E. Narimanov, “Nanowire
metamaterials with extreme optical anisotropy,” Appl. Phys. Lett.
89(26), 261102 (2006).
(8)
Y. Liu, G. Bartal, and X. Zhang, “All-angle negative refraction and
imaging in a bulk medium made of metallic nanowires in the visible region,”
Opt. Express 16, 15439 (2008).
URL http://www.opticsexpress.org/abstract.cfm?URI=oe-16-20-15439.
(9)
J. B. Pendry, A. J. Holden, W. J. Stewart, and I. Youngs, “Extremely
low frequency plasmons in metallic mesostructures,” Phys. Rev. Lett.
76, 4773 (1996).
(10)
J. Yao, Z. Liu, Y. Liu, Y. Wang, C. Sun, G. Bartal, A. M. Stacy, and X. Zhang,
“Optical negative refraction in bulk metamaterials of nanowires,”
Science 321, 930 (2008).
(11)
D. R. Smith and D. Schurig, “Electromagnetic wave propagation in media
with indefinite permittivity and permeability tensors,” Phys. Rev. Lett.
90, 077405 (2003).
(12)
E. Hecht, Optics, 4th ed. (Addison Wesley, 2002).
(13)
E. D. Palik, Handbook of Optical Constants of Solids (Academic Press,
1985).
(14)
S. Quabis, R. Dorn, M. Eberler, O. Glöckl, and G. Leuchs, “Focusing
light to a tighter spot,” Opt. Comm. 179, 1 (2000).
(15)
E. A. J. Marcatili and R. A. Schmeltzer, “Hollow metallic and
dielectric waveguides for long distance optical transmission and lasers,”
Bell Syst. Tech. J. 43, 1783 (1964).
(16)
S. Johnson, M. Ibanescu, M. Skorobogatiy, O. Weisberg, T. Engeness,
M. Soljacic, S. Jacobs, J. Joannopoulos, and Y. Fink, “Low-loss
asymptotically single-mode propagation in large-core OmniGuide fibers,” Opt.
Express 9, 748 (2001).
URL http://www.opticsexpress.org/abstract.cfm?URI=oe-9-13-748.
A Heterogeneous Graph Convolution based Method for Short-term OD Flow Completion and Prediction in a Metro System
Jiexia Ye, Juanjuan Zhao*, Furong Zheng, Chengzhong Xu, IEEE Fellow
*Corresponding Author: Juanjuan Zhao
Jiexia Ye is with Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China (E-mail: {zumri.jiexiaye}@um.edu.mo).
Juanjuan Zhao, Furong Zheng are with Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China (E-mail: {jj.zhao, fr.zheng}@siat.ac.cn).
Chengzhong Xu is with State Key Lab of IOTSC, Department of Computer Science, University of Macau, Macau SAR, China (E-mail: [email protected]).
Abstract
Short-term OD flow prediction (i.e. predicting the number of passengers traveling between stations) is crucial to traffic management in metro systems. Due to the delayed availability of the latest complete OD flows and the complex, high-dimensional spatiotemporal correlations of OD flows, it is more challenging than other time-series traffic prediction tasks.
Existing methods need to be improved because they do not fully utilize the real-time passenger mobility data and do not sufficiently model the implicit correlations between the mobility patterns of stations.
In this paper, we propose a Completion based Adaptive Heterogeneous Graph Convolution Spatiotemporal Predictor. The novelty is mainly reflected in two aspects.
The first is to model real-time mobility evolution by establishing the implicit correlation between the observed OD flows and the prediction-target OD flows in high dimensions, based on a key data-driven insight: the destination distributions of the passengers departing from a station are correlated with those of other stations sharing similar attributes (e.g. geographical location, region function).
The second is to complete the latest incomplete OD flows by estimating the destination distribution of unfinished trips, considering the real-time mobility evolution and the time cost between stations; this completion is the basis of the time-series prediction and improves the model’s dynamic adaptability.
Extensive experiments on two real-world metro datasets demonstrate the superiority of our model over other competitors, with the largest performance improvement being nearly 4%. In addition, the data completion framework we propose can be integrated into other models to improve their performance by up to 2.1%.
Index Terms:
Origin-Destination Matrix Prediction, Data Completion, Metro, Spatiotemporal, Heterogeneous Graph
I Introduction
Rail transit plays a key role in the comprehensive transportation system and has become a popular travel mode because of its safety, speed, punctuality, and large capacity. In recent years, with the expansion of transportation networks and the improvement of accessibility, more and more people choose rail transit for travel, and large-scale passenger flows are becoming increasingly common. Accurate and real-time prediction of the spatiotemporal distribution of passenger flow is a prerequisite for ensuring the safe and efficient operation of rail transit, and has attracted increasing interest.
Short-term passenger flow prediction in urban rail transit (URT) focuses on utilizing historical ridership information to predict future passenger demand from a few minutes to a few hours ahead. It can be divided into three specific tasks, i.e. entry/exit flow prediction, origin-destination (OD) flow prediction, and sectional flow prediction [1]. To date, extensive effort has been devoted to entry/exit demand prediction, which aims to forecast the number of passengers entering or exiting each station [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. By comparison, OD flow prediction, which further forecasts the passenger destination distribution of each origin station, has received much less attention. Although station-level entry/exit prediction is useful, it is still coarse and inadequate for managers to implement network-wide real-time monitoring and proactive operations.
The real-time passenger mobility information provided by OD prediction can better support metro systems for train scheduling, fine-grained abnormal flow warning, and passenger route planning, and it is also an essential input for the sectional passenger flow prediction task. For example, the OD flow is the basis for obtaining the travel demand of each metro line. Managers can design appropriate strategies, shortening or extending the train headway of the corresponding metro line, to meet the dynamic travel demand. In addition, entry/exit prediction can only detect abnormal ridership departing from or destined to each station; it cannot detect fine-grained flows such as the passengers waiting at each platform of a station or the transfer flow at each transfer station, whereas OD demand prediction can provide information to detect such abnormalities. Accurate OD demand can help managers recommend proper routes for passengers to achieve global traffic balance in the whole metro system.
Different from entry/exit prediction in URT, which aims to predict an $N$-dimensional vector of passenger flows departing from or arriving at each station in a metro network with $N$ stations, an OD flow prediction model outputs an OD matrix with a high dimensionality of $N^{2}$. This increases the uncertainty and requires more attention to the various passenger mobility patterns.
Passenger mobility is composed of both periodicity (7-day or 24-hour periodicity) and randomness. The periodicity is mainly due to the constraints of people’s working and living schedules, which can be easily observed from the AM and PM rush hours and the different traffic patterns on weekdays and weekends. The randomness may be caused by passengers’ non-fixed activities (e.g. entertainment, shopping) or by external factors such as celebrations, exhibitions, and weather. However, in most cases it is difficult to obtain the real-time travel plans of passengers or the external factors. Therefore, modeling the randomness is the biggest challenge of OD flow prediction. In general, the randomness exhibits local or global mobility variability.
The local mobility variability occurs between two stations within a period of time. It may be caused, for example, by a celebration party organized by an enterprise, where the OD flow between the enterprise and celebration locations fluctuates. The local mobility evolution can be learned from the latest OD flows due to the recency dependency, as for entry/exit flows. However, it is impractical to feed the OD flows of the last time slots directly into the model, because the finished OD flows (i.e. the OD flows collected before the predicted target time slot) are likely to be incomplete due to unfinished trips, and the full OD flows can only be obtained after all passengers finish their journeys. This real-time data-availability issue must be tackled to capture the mobility dynamics for accurate prediction.
The global flow fluctuation occurs between multiple stations over a period of time. It may be caused by events (e.g. a concert, an athletic meet, the opening of a shopping mall). The mobility fluctuations of different stations may be asynchronous, and their mobility dynamics can be learned from each other. For example, the opening of a shopping mall located at station D may change the passenger mobility patterns at multiple surrounding stations (e.g. A, B, C). To predict the OD flow of the A-D pair, the mobility patterns of the B-D and C-D OD flows can be beneficial. Establishing these spatial correlations helps us select features relevant to the prediction target from the high-dimensional observation data, so as to model the passenger mobility pattern of each station effectively even with insufficient training data. Therefore, how to model the spatial correlations needs to be resolved.
To date, some conventional methods have been adopted for OD matrix prediction by modeling the temporal correlation of time series, such as state space models [14, 15], vector autoregression [16], probabilistic models [17], clustering methods [18], and matrix decomposition [19, 20]. These methods are applicable to scenarios with stable passenger flow, but due to their limited feature-extraction and learning ability, they are not suitable for OD flow prediction with nonlinear and complex spatiotemporal correlations.
Recently, advanced deep learning architectures based on Long Short-Term Memory networks [21, 22, 23], convolutional networks [24, 25, 26], and graph convolutional networks [27, 28] have been proposed to learn the spatiotemporal flow dependency between stations and predict the OD flows of various traffic scenarios. However, most of these methods assume that the latest OD flows are available, or do not sufficiently use the latest incomplete OD flows, making them inapplicable to metro scenarios. Among them, to the best of our knowledge, [24] is the work most related to this research. It proposed CAS-CNN with a channel-wise attention mechanism and split CNN for large OD value prediction. It introduced an inflow/outflow-gated mechanism to address the data-availability challenge, but it considered only the real-time inflows and outflows and entirely discarded the incomplete real-time OD matrices, which may lose accurate real-time mobility information. Besides, its biggest limitation is that it fails to exploit the global spatial dependency between the mobility patterns of stations.
In this paper, we propose a model called Completion based Adaptive Heterogeneous Graph Convolution Spatiotemporal Predictor (C-AHGCSP) for metro OD prediction. It can be used both for recent OD flow completion and for future OD flow prediction. OD flow prediction is a typical high-dimensional time-series problem. We select relevant features from the latest OD flows for the target OD flow prediction, to avoid the curse of dimensionality, based on the insight that the destination distributions of the passengers departing from a station are correlated with those of other stations sharing similar static or dynamic attributes. For example, the groups of passengers distributed around geographically adjacent metro stations are more inclined to show similar mobility patterns than those around distant ones. Metro stations in regions with similar functions (entertainment, education, business, and residence) are likely to have similar mobility patterns. In addition, the mobility patterns of stations with similar flow trends in the latest period may also be similar in the future. These spatial correlations between stations with similar attributes (geographical location, region function, recent flow variation) not only provide mutual enhancement and complementation of mobility prediction, but also help us extract more effective features from the high-dimensional input to predict the target OD flows.
In addition, the time series of the latest OD flows is the basis of OD flow prediction. We convert the latest OD flow completion issue into destination distribution estimation for the unfinished trips. Given a past time slot with incomplete OD flows, our idea is first to use the time series of complete OD flows preceding it to predict its OD flows, so that the destination distribution of all trips (finished and unfinished) can be obtained. The destination distribution of the unfinished trips can then be estimated by additionally considering the time cost between stations.
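A toy numerical sketch of this completion idea for a single origin station (the trip counts below are invented, and the per-OD-pair travel-time weighting described in the text is omitted for brevity): the predicted full OD row gives an estimated destination distribution, and the observed count of unfinished trips is allocated across destinations according to it.

```python
import numpy as np

def complete_row(finished_row, n_unfinished, predicted_row):
    """Complete one origin station's OD row: add the unfinished trips,
    allocated by the destination distribution of the predicted full row."""
    p = np.asarray(predicted_row, dtype=float)
    p = p / p.sum()                      # estimated destination distribution
    return np.asarray(finished_row, dtype=float) + n_unfinished * p

finished = [30.0, 5.0, 0.0]     # trips already completed, by destination
predicted = [40.0, 10.0, 10.0]  # model's estimate of the full OD row
print(complete_row(finished, 15, predicted))  # [40.   7.5  2.5]
```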
Overall, the main contributions of our paper are as follows:
•
We propose an innovative method, C-AHGCSP, both for latest OD flow completion and for OD flow prediction, which can be used to capture the recent and future fine-grained passenger flow distributions over the entire metro network. The OD completion method can also be integrated as a sub-module into existing prediction models to improve their performance.
•
We propose a data completion estimator, composed of a prior estimator and an AHGCSP-based estimator, to estimate the full recent OD matrix sequence by taking advantage of the dynamic mobility patterns, the unfinished trips, and the different time costs of OD pairs.
•
Based on the insight that stations with similar attributes are likely to have similar mobility patterns, we design a module named AHGCSP to learn the global spatiotemporal correlations. This enables our model to extract effective features from the high-dimensional input and improves model robustness even with limited training data. It first organizes the latest OD flows into a sequence of dynamic station graphs with heterogeneous edges (geographical similarity via a Gaussian kernel, region-function similarity via KL divergence, and real-time mobility similarity via a self-attention mechanism). Then a graph-convolution-based adaptive heterogeneous module and an LSTM are used to learn the dynamic spatiotemporal mobility evolution trend.
•
Extensive experiments on large-scale real-world metro datasets demonstrate the superiority of our model over the baselines. We have published our code and one of our datasets at https://github.com/start2020/C-AHGCSP.
II Related Works
Short-term OD flow prediction (i.e. predicting the number of passengers traveling between stations) is crucial to traffic management. In recent years, with the increased availability of data, many researchers have begun to use various methods to predict OD flows in different traffic scenarios, such as road/city OD matrices [29, 18, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40], ride-hailing OD matrices [41, 42, 43], taxi OD matrices [44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54], as well as bus OD matrices [55, 56, 57, 58].
However, the contextual information of the above tasks differs from that of a metro system. First, real-time data availability varies across these tasks. As mentioned in the last section, in the metro scenario the latest OD matrices are incomplete due to unfinished trips before the predicted time slot. In the case of ride-hailing services, however, the latest OD matrix is complete and can be obtained directly when passengers request a ride in the related apps. As for bus systems (except bus rapid transit (BRT)), the full OD matrices are entirely unavailable even when all trips are finished, because only the boarding station is recorded, not the alighting station. The cameras and sensors in a road network can only detect sectional flows instead of the origins or destinations of vehicles. The availability of OD matrices in the taxi OD task is similar to that in URT, but the boarding and alighting points of taxis are not fixed, and previous studies usually divided the whole research region into grid-based origin and destination zones, making the task significantly different from subway systems. These contextual differences make it hard to transfer OD prediction models from other traffic scenarios to the metro scenario.
For the metro scenario, some conventional methods have been adopted for OD matrix prediction by modeling the temporal correlation of time series, such as state space models [14, 15], vector autoregression [16], probabilistic models [17], clustering methods [18], and others [59, 60]. However, these methods build a model for each OD pair, which may lead to inefficiency when the number of OD pairs is very large. In addition, they are only applicable to relatively stable passenger flow prediction due to their limited ability to learn nonlinear correlations. Some studies use matrix decomposition or its variants to predict OD passenger flow [19, 20]. Compared with other conventional methods, matrix-decomposition-based methods, by considering all OD pairs as a whole, can model the stable global correlation between OD pairs. However, these methods are linear techniques in nature and not suitable for modeling the global nonlinear correlation of OD flows. In addition, they require a complex decomposition calculation process, which limits scalability when the number of stations is extremely large. In recent years, advanced deep learning architectures based on Long Short-Term Memory networks [21, 22, 23], convolutional networks [24, 25, 26], and graph convolutional networks [27, 28] have been proposed to solve the metro OD prediction problem. Noursalehi et al. [25] proposed a scalable methodology for real-time OD demand prediction. It extracts local spatial features by a channel-wise attention block with a squeeze-and-excitation layer. Liu et al. [28] proposed a heterogeneous information aggregation machine to take full advantage of historical data, e.g. incomplete OD matrices, unfinished order vectors, and DO matrices, for multi-step OD and DO ridership forecasting. But they all ignored the time-evolving global correlation between passenger mobility patterns in metro OD flows, which is valuable for improving prediction accuracy.
To the best of our knowledge, [24] is the work most related to this research. It proposed CAS-CNN with a channel-wise attention mechanism and split CNN for large OD value prediction. It introduced an inflow/outflow-gated mechanism to address the data-availability challenge, but it considered only the real-time inflow and outflow information and entirely discarded the incomplete real-time OD matrix, which may lose accurate real-time information about the destination distribution of passengers. Besides, the method relies on multiple CNN layers to automatically learn the global spatial correlation between different OD pairs. It may be able to learn the global correlation if sufficient training data is available, but it is not suitable for OD prediction in scenarios where the data source is limited (for instance one month) and the dimension of the OD pairs is very large. This paper aims to make full use of the dynamic observation data and of the complex factors influencing passenger mobility between stations to model the global correlation.
III Preliminary
Terminology. Suppose the metro network in our paper has $N$ stations; we define some concepts (as shown in Table I) to facilitate our discussion.
Problem Formulation. The metro system contains $N$ stations. We aim to predict the OD matrix $\hat{M}_{t^{\prime}}$ at the predicted time slot $t^{\prime}$ from the latest $P$ previous finished OD matrices $[MF_{t^{\prime}-1},\dots,MF_{t^{\prime}-P}]$, other observed trip information TR (e.g. Inflow, Outflow, ODt Matrix, as defined in Table I), and prior knowledge PK (e.g. geographical distance between stations). The problem can be formulated as follows:
$$\hat{M}_{t^{\prime}}=f([MF_{t^{\prime}-1},\dots,MF_{t^{\prime}-P}],\text{TR},\text{PK},\text{G})$$
(1)
where $f$ is a deep learning based mapping function and $[t^{\prime}-{1},\dots,t^{\prime}-{P}]$ is the input time slot sequence.
Loss Function.
Following previous works [24, 16], we adopt the mean square error (MSE) as the loss function:
$$\mathcal{L}=\mathrm{MSE}(M_{t^{\prime}},\hat{M}_{t^{\prime}})=\frac{1}{N\times N}\sum_{i=1}^{N}\sum_{j=1}^{N}(m_{ij}^{t^{\prime}}-\hat{m}_{ij}^{t^{\prime}})^{2}$$
(2)
For large OD values, the difference between ground truth and prediction is amplified by MSE. Thus, the model is trained toward reducing the prediction error of large OD values rather than small ones. This is consistent with the real-world requirement that large OD volumes are more important and should receive more attention in the metro scenario.
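A minimal NumPy sketch of the loss in Eq. (2) over an $N\times N$ OD matrix (the matrices below are toy examples):

```python
import numpy as np

def od_mse(M_true, M_pred):
    """Element-wise MSE over all N*N OD pairs, as in Eq. (2)."""
    M_true = np.asarray(M_true, dtype=float)
    M_pred = np.asarray(M_pred, dtype=float)
    return float(((M_true - M_pred) ** 2).mean())

M_t = np.array([[10.0, 2.0], [3.0, 50.0]])    # ground-truth OD matrix
M_hat = np.array([[12.0, 2.0], [3.0, 45.0]])  # predicted OD matrix
print(od_mse(M_t, M_hat))  # (4 + 0 + 0 + 25) / 4 = 7.25
```

Note how the single large-OD error (50 vs 45) dominates the loss, which is the weighting behavior described above.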
IV Methodology
In this paper, we propose a Completion based Adaptive Heterogeneous Graph Convolution Spatiotemporal Predictor (C-AHGCSP) that makes full use of the real-time and historical observed passenger flows (e.g. the finished OD matrix sequence, inflow/outflow) as well as the factors affecting the global mobility-pattern correlations between stations to predict the OD matrix in the target time slot.
The framework of C-AHGCSP is shown in Figure 1. It contains two modules: an AHGCSP-based OD matrix completion module and an AHGCSP-based OD matrix prediction module. The two modules share the same sub-module, AHGCSP. Given a target time slot, AHGCSP takes the preceding OD matrices as input, learns the spatiotemporal mobility correlations between stations, and outputs the OD matrix at the target time slot. Note that the target time slot may be either the predicted time slot $t^{\prime}$ or a recent time slot whose OD matrix is incomplete.
The completion module aims to complete the latest OD matrices before the predicted time slot $t^{\prime}$. Given a recent time slot with an incomplete OD matrix, it uses AHGCSP together with prior knowledge about the travel time between stations to estimate the destination distribution of unfinished trips.
The prediction module takes the time series of observed full OD matrices and completed OD matrices before the target time slot $t^{\prime}$ as input, and uses AHGCSP to predict the final OD matrix at the target time slot.
IV-A AHGCSP-based OD Matrix Prediction
This section aims to predict the OD matrix at time slot $t^{\prime}$. More specifically, it predicts the destination distribution of passengers starting at each station, based on station attributes (geographical location, region function, etc.). For that, we first construct a dynamic station graph with heterogeneous edges based on the OD matrix at each recent time slot.
Then, we adopt an adaptive spatiotemporal learning method to learn the global mobility spatiotemporal correlations from the graph sequence, and output the OD matrix at time slot $t^{\prime}$.
IV-A1 Heterogeneous Spatial Correlations Construction
We analyze the data and find that some stations share similar passenger travel patterns due to their similar attributes, such as geographical proximity, functional similarity and global dynamicity (as shown in Figure 5). That is beneficial for predicting their passenger destination distributions. We model such mobility correlations from these three perspectives (i.e. geographical proximity, functional similarity and global dynamicity) using appropriate methods based on the available real-time and historical flow information as well as prior geographical information.
Gaussian Kernel based Geographical Proximity: Passengers in geographically close areas tend to share more similar destinations than those in distant areas [61, 42]. From this perspective, the closer two stations are, the more likely they are to learn from each other's passenger travel patterns. Inspired by [42], we define the geographical neighborhood of station $i$ (as shown in Figure 3) as follows:
$$\mathcal{N}_{i}=\left\{s_{j}\mid\operatorname{geo}\left(s_{i},s_{j}\right)\leq\text{D}\right\}$$
(3)
where $\operatorname{geo}\left(s_{i},s_{j}\right)$ is the geographical distance calculated by longitude and latitude of each metro station. D is a distance threshold to determine the range of neighborhood. Further, we measure the geographical proximity between stations using the threshold Gaussian Kernel [62] as follows:
$$\alpha^{g}_{ij}=\left\{\begin{array}[]{ll}\exp\left(-\frac{\operatorname{geo}(s_{i},s_{j})^{2}}{\sigma^{2}}\right),&i=j\text{ or }s_{j}\in\mathcal{N}_{i}\\
0,&i\neq j\text{ and }s_{j}\notin\mathcal{N}_{i}\end{array}\right.$$
(4)
where $\sigma$ is the standard deviation of the distance between all stations and their neighbors and $\alpha^{g}_{ij}\in[0,1]$.
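Eqs. (3)–(4) can be illustrated with the following sketch (the station distances, the threshold D, and $\sigma$ below are toy values, not the paper's data):

```python
import math

def geo_proximity(dist, D, sigma):
    """Eqs. (3)-(4): alpha_ij = exp(-geo(i,j)^2 / sigma^2) if j is i itself
    or within the distance threshold D of i, else 0."""
    n = len(dist)
    # Eq. (3): geographical neighborhood of each station
    neighbors = [{j for j in range(n) if j != i and dist[i][j] <= D}
                 for i in range(n)]
    alpha = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j or j in neighbors[i]:
                alpha[i][j] = math.exp(-dist[i][j] ** 2 / sigma ** 2)
    return alpha

# Three stations: 0 and 1 are 1 km apart, station 2 is far from both
dist = [[0.0, 1.0, 5.0],
        [1.0, 0.0, 4.0],
        [5.0, 4.0, 0.0]]
alpha = geo_proximity(dist, D=2.0, sigma=1.0)
# alpha[0][1] = exp(-1); alpha[0][2] = 0 (outside the threshold)
```

The thresholding keeps the proximity matrix sparse, so only genuinely nearby stations exchange information.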
KL Divergence based Functional Correlation:
The region in which a metro station is located usually has certain functions, e.g. business, education, residential, or entertainment, and these functions have a large impact on the mobility patterns of passengers in that region [63, 7]. Metro stations in regions with similar functions are likely to have similar passenger travel patterns, while those in dissimilar regions show different patterns. Such similarity is beneficial for prediction. According to our observations of the collected data, the historical inflow and outflow patterns of stations within a day can largely distinguish region functions (as shown in Figure 5). Therefore, we propose to measure the functional similarity between any two stations based on flow patterns via KL divergence.
Specifically, we sum up the inflow over all historical days with the same day-of-week attribute (e.g. Monday) to obtain a $T$-dimensional inflow (outflow) vector for each station $i$, denoted as $IV_{i}=[iv_{i}^{1},iv_{i}^{2},\cdots,iv_{i}^{T}]$, where $iv_{i}^{t}$ represents the total number of passengers historically entering station $i$ at $t$. To eliminate the influence of passenger volume scale on region function, the inflow vector is normalized into a probability distribution as $p_{i}=[\frac{iv_{i}^{1}}{\sum_{t=1}^{T}iv_{i}^{t}},\frac{iv_{i}^{2}}{\sum_{t=1}^{T}iv_{i}^{t}},\cdots,\frac{iv_{i}^{T}}{\sum_{t=1}^{T}iv_{i}^{t}}]$. We measure the similarity of the inflow probability distributions of any two stations by KL divergence as follows:
$$SI_{ij}=1-D(p_{i}\|p_{j})=1-\sum_{t=1}^{T}p_{i}^{t}\log\frac{p_{i}^{t}}{p_{j}^{t}}$$
(5)
where $D(p_{i}\|p_{j})$ is the KL divergence score and $SI_{ij}$ is the similarity score of the inflow patterns of metro stations $i$ and $j$. The more similar the inflow patterns of two stations are, the larger $SI_{ij}$ is. Similarly, the similarity score of the outflow patterns of stations $i$ and $j$ is calculated and denoted as $SO_{ij}$. We fuse $SI_{ij}$ and $SO_{ij}$ to measure the functional similarity between two stations as follows:
$$\alpha_{ij}^{f}=wSI_{ij}+(1-w)SO_{ij}$$
(6)
where $\alpha_{ij}^{f}$ is the functional similarity score between station $i$ and station $j$. The larger $\alpha_{ij}^{f}$ is, the more similar functions the two stations share. $w\in\mathbb{R}$ is a trainable coefficient measuring the contribution of the inflow pattern to the functional similarity. We randomly choose some stations to validate the effectiveness of KL divergence in Figure 4.
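Eqs. (5)–(6) amount to the following computation (the flow vectors and the fixed weight `w` below are illustrative; in the model $w$ is trainable):

```python
import math

def kl(p, q):
    """KL divergence D(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def functional_similarity(IV_i, IV_j, OV_i, OV_j, w=0.5):
    """Eqs. (5)-(6): normalize the historical inflow/outflow vectors into
    distributions, score similarity as 1 - KL, and fuse with weight w."""
    norm = lambda v: [x / sum(v) for x in v]
    SI = 1.0 - kl(norm(IV_i), norm(IV_j))   # inflow-pattern similarity
    SO = 1.0 - kl(norm(OV_i), norm(OV_j))   # outflow-pattern similarity
    return w * SI + (1 - w) * SO

# Identical flow patterns => KL = 0 => similarity 1
print(functional_similarity([1, 2, 3], [1, 2, 3], [4, 4], [4, 4]))  # 1.0
```

Normalizing before the divergence is what makes the score insensitive to the absolute passenger volume of a station, as intended above.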
Self-Attention based Real-Time Global Dynamicity:
The geographical graph and functional graph above are primarily based on prior knowledge and historical data; they reflect static spatial correlations in the metro network. The dynamic spatial correlations based on real-time passenger flow are also crucial for prediction, as they contain real-time mobility information that predefined graphs cannot capture. For example, a sudden abnormal event, such as a vehicle defect at some station, is likely to change the passenger mobility of many related stations, and such an impact might last for some time. Therefore, we propose to capture the time-evolving spatial correlations in the whole network based on real-time destination distribution information, i.e. the OD Vector and ODt Vector (defined in Table I). Inspired by the self-attention mechanism in the Transformer [64], we design the dynamic attention score between any two stations as follows:
$$\displaystyle e_{{i}{j}}^{t}$$
$$\displaystyle=f_{key}(D_{i}^{t})^{T}\bm{\cdot}f_{query}(D_{j}^{t})$$
(7)
$$\displaystyle\bar{e}_{{i}{j}}^{t}$$
$$\displaystyle=\bar{f}_{key}(\bar{D}_{i}^{t})^{T}\bm{\cdot}\bar{f}_{query}(\bar{D}_{j}^{t})$$
$$\displaystyle\alpha_{ij}^{dt}$$
$$\displaystyle=w\frac{\exp(e_{ij}^{t})}{\sum_{j=1}^{N}\exp(e_{ij}^{t})}+\bar{w}\frac{\exp(\bar{e}_{ij}^{t})}{\sum_{j=1}^{N}\exp(\bar{e}_{ij}^{t})}$$
where $f_{key}$, $f_{query}$, $\bar{f}_{key}$, $\bar{f}_{query}$ are fully connected layers with the ReLU activation function. $e_{{i}{j}}^{t}$ represents the real-time destination distribution similarity between two stations for passengers entering the stations at $t$. $\bar{e}_{{i}{j}}^{t}$ represents the similarity for passengers who entered before $t$ and leave the metro network at $t$. $w$ and $\bar{w}$ are trainable coefficients that attach different weights to these two similarities. $\alpha_{ij}^{dt}$ is the final dynamic score (as shown in Figure 6).
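One branch of Eq. (7) can be sketched as follows, with the trained fully connected layers replaced by an arbitrary callable (the identity here) purely for illustration:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dynamic_scores(D, f_key, f_query):
    """One branch of Eq. (7): e_ij = f_key(D_i)^T . f_query(D_j),
    followed by a row-wise softmax over j."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    keys = [f_key(d) for d in D]
    queries = [f_query(d) for d in D]
    return [softmax([dot(k, q) for q in queries]) for k in keys]

# Identity projections on two one-hot destination vectors: each station
# attends most strongly to the station with the same destination pattern
D = [[1.0, 0.0], [0.0, 1.0]]
ident = lambda v: v
scores = dynamic_scores(D, ident, ident)
```

In the model the two branches (entering at $t$, exiting at $t$) are computed this way with learned projections and then mixed by $w$ and $\bar{w}$.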
IV-A2 Adaptive Heterogeneous Spatiotemporal Learning
Up to now, three heterogeneous relations have been constructed for any station pair $ij$ at time slot $t$, i.e. $\alpha_{ij}^{dt},\alpha_{ij}^{f},\alpha_{ij}^{g}$, to sufficiently reflect the complex correlations between stations' passenger travel motivations. Intuitively, the correlation strength of any station pair depends on all three relations, but each relation contributes differently, so we assign different weights to them to obtain the fused correlation score as follows:
$$\displaystyle\alpha_{ij}^{t}=w_{d}\alpha_{ij}^{dt}+w_{f}\alpha_{ij}^{f}+w_{g}\alpha_{ij}^{g}$$
(8)
$$\displaystyle h_{ij}^{t}=\frac{\exp(\alpha_{ij}^{t})}{\sum_{j=1}^{N}\exp(\alpha_{ij}^{t})}$$
where $h_{ij}^{t}\in[0,1]$ is the normalization of $\alpha_{ij}^{t}$.
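Eq. (8) is a weighted sum of the three relation scores followed by a row-wise softmax; a minimal sketch with toy score matrices and fixed weights (the weights $w_d, w_f, w_g$ are trainable in the model):

```python
import math

def fuse(alpha_d, alpha_f, alpha_g, w_d, w_f, w_g):
    """Eq. (8): weighted sum of the three relation scores, then a
    row-wise softmax giving the normalized weights h_ij^t."""
    n = len(alpha_d)
    a = [[w_d * alpha_d[i][j] + w_f * alpha_f[i][j] + w_g * alpha_g[i][j]
          for j in range(n)] for i in range(n)]
    H = []
    for row in a:
        m = max(row)
        es = [math.exp(x - m) for x in row]
        s = sum(es)
        H.append([e / s for e in es])
    return H

H = fuse([[1.0, 0.0], [0.0, 1.0]],   # dynamic scores
         [[0.5, 0.5], [0.5, 0.5]],   # functional scores
         [[1.0, 0.0], [0.0, 1.0]],   # geographical scores
         0.3, 0.3, 0.4)
# Each row of H sums to 1; the diagonal dominates for these toy scores
```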
Graph Convolution based Heterogeneous Correlation Extraction
After the correlations among stations have been constructed and fused adaptively based on the spatiotemporal attributes, each station needs to learn useful destination distribution information from related stations. Recently, the Graph Convolution Network (GCN) has achieved state-of-the-art performance in many traffic tasks, such as metro passenger prediction [65] and taxi demand prediction [66]. Following previous works [42], we utilize graph convolution to aggregate the useful passenger mobility information from all other stations for the target station $i$ based on their fused correlation, producing its high-level mobility pattern representation at each input time slot $t$:
$$Y^{l}_{i,t}=\bm{\rho}\Big(\Big(\sum_{j=1}^{N}h_{ij}^{t}D_{j}^{t}\Big)W^{l}+b^{l}\Big)$$
where $Y^{l}_{i,t}$ is the layer-$l$ feature of station $i$ at time slot $t$, $D_{j}^{t}$ is the destination distribution of station $j$ at time slot $t$, $\bm{\rho}$ is the activation function (i.e. ReLU in this paper), and $W^{l},b^{l}$ are the learnable parameters at layer $l$. Following previous works [67], to alleviate the over-smoothing problem, there are only two graph convolutional layers in our model.
Since all stations and time slots share the same graph convolution parameters, we rewrite the related operations in matrix form as $G_{t}=\text{GCN}(H_{t},M_{t})$, where $H_{t}=(h_{ij}^{t})_{N\times N}$ is the fusion matrix for the whole network at time slot $t$ and $M_{t}=[D_{1}^{t},\cdots,D_{N}^{t}]$ is the OD matrix at time $t$, namely the passenger destination distributions of all stations. $G_{t}\in\mathbb{R}^{N\times G}$ is the output of the GCN.
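In this matrix form, one GCN layer is an aggregation by $H_t$ followed by a linear projection and ReLU; a pure-Python sketch with toy weights:

```python
def matmul(A, B):
    """Naive matrix product of two nested-list matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def gcn_layer(H, M, W, b):
    """One layer of G_t = GCN(H_t, M_t): aggregate the destination vectors
    with the fused weights H, project with W, add bias b, and apply ReLU."""
    Z = matmul(matmul(H, M), W)
    return [[max(0.0, z + bk) for z, bk in zip(row, b)] for row in Z]

# With identity H and W and zero bias, the layer returns M unchanged
H = [[1.0, 0.0], [0.0, 1.0]]
M = [[1.0, 2.0], [3.0, 4.0]]
W = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]
print(gcn_layer(H, M, W, b))  # [[1.0, 2.0], [3.0, 4.0]]
```

Stacking two such layers (as the model does) lets a station mix information from its correlated stations' own aggregates.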
LSTM based Temporal Dependency Extraction
We have extracted the heterogeneous correlations on the metro network at each time slot $t$, represented by $G_{t}$. Then we need to extract the consecutive temporal dependency from the latest high-level destination distribution sequence $[G_{t^{\prime}-1},\dots,G_{t^{\prime}-P}]$. Following previous works [42], a Long Short-Term Memory network (LSTM) is utilized to capture the real-time passenger mobility pattern as $\hat{MS}_{t^{\prime}}=\text{LSTM}([G_{t^{\prime}-1},\dots,G_{t^{\prime}-P}])$, where $\hat{MS}_{t^{\prime}}\in\mathbb{R}^{N\times U}$ is the output containing the short-term passenger travel pattern information. Afterward, $\hat{MS}_{t^{\prime}}$ is fed into one fully connected layer with ReLU activation to obtain the final prediction as follows:
$$\hat{M}_{t^{\prime}}=ReLU(\hat{MS}_{t^{\prime}}W_{1}+b_{1})$$
(9)
where $[W_{1},b_{1}]$ are trainable parameters and $W_{1}\in\mathbb{R}^{U\times N}$.
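The LSTM-plus-FC output stage can be illustrated with a minimal scalar LSTM cell (a toy stand-in for the multi-unit LSTM; all weight names and values below are illustrative, not the paper's parameters):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_scalar(xs, w):
    """Minimal scalar LSTM: each gate is a sigmoid/tanh of a weighted sum of
    the current input x and the previous hidden state h (biases omitted)."""
    h = c = 0.0
    for x in xs:
        i = sigmoid(w['i_x'] * x + w['i_h'] * h)    # input gate
        f = sigmoid(w['f_x'] * x + w['f_h'] * h)    # forget gate
        o = sigmoid(w['o_x'] * x + w['o_h'] * h)    # output gate
        g = math.tanh(w['g_x'] * x + w['g_h'] * h)  # candidate state
        c = f * c + i * g
        h = o * math.tanh(c)
    return h

def predict(xs, w, w1, b1):
    """Eq. (9): feed the LSTM summary through a fully connected ReLU layer."""
    return max(0.0, lstm_scalar(xs, w) * w1 + b1)

w0 = {k: 0.0 for k in ('i_x', 'i_h', 'f_x', 'f_h', 'o_x', 'o_h', 'g_x', 'g_h')}
print(predict([1.0, 2.0, 3.0], w0, 1.0, 0.5))  # 0.5 (zero weights: LSTM emits 0)
```

The ReLU on the output layer also guarantees non-negative predicted OD counts, which matches the nature of the target.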
IV-B AHGCSP-based Latest OD Matrix Completion
Unlike the ride-hailing OD prediction task, in the metro scenario the finished OD matrix at each input time slot is likely to be incomplete due to trips not finished before the predicted time slot, and the full OD matrix is unavailable. This incompleteness leads to the sparsity of the finished OD matrix. We expect the latest mobility information to improve the model performance significantly in the metro network, just as in other scenarios, especially for real-time abnormal situations caused by special events.
Different from previous studies, which fed the incomplete OD matrix sequence directly into the model [20, 16, 24], we propose a data complete estimator to estimate the full OD matrix. Specifically, the data complete estimator is composed of a prior estimator and an AHGCSP based estimator. It turns the input data, i.e. the finished OD matrix sequence $[MF_{t^{\prime}-1},\dots,MF_{t^{\prime}-P}]$ at the latest $P$ time slots, into an estimated full OD matrix sequence $[\hat{M}_{t^{\prime}-1},\dots,\hat{M}_{t^{\prime}-P}]$.
IV-B1 Prior Estimator
According to the observations of our data (as shown in Figure 8), not all the finished OD matrices at the input time slots are incomplete, i.e. some are complete or nearly complete. The incompleteness of a finished OD matrix is determined by the time gap (i.e. the distance between the input time slot and the output time slot) and the passengers' travel times. We propose to measure the completeness of a finished OD matrix as follows:
$$C_{t}=\frac{\sum MF_{t}}{\sum M_{t}}=f((t^{\prime}-t-1)\delta,(t^{\prime}-t)\delta,TR_{t})$$
(10)
where $C_{t}\in[0,1]$ measures the matrix completeness, $MF_{t}$ is the finished OD matrix and $M_{t}$ is the full OD matrix. $t$ is the input time slot and $t^{\prime}$ is the output time slot; $g=t^{\prime}-t$ is their time difference, with $g\in[1,\cdots,P]$. $\delta$ is the time granularity. $(t^{\prime}-t-1)\delta$ / $(t^{\prime}-t)\delta$ is the minimum/maximum travel time that a passenger entering the metro network at $t$ can have spent before the target time slot $t^{\prime}$. $TR_{t}=\{tm_{t}^{k}\mid k=1,\cdots,K\}$ is the set of travel times of all trips starting at $t$, with $tm_{t}^{k}$ denoting the travel time of one trip.
For each passenger trip, the travel time is influenced by many factors, such as the chosen route for the OD pair, the waiting time, and the time spent on trains. However, we can expect it to have a maximum and a minimum value, because there exists a shortest travel path between each OD pair and the time passengers are willing to spend on trains is limited, i.e. $TR_{t}\subseteq[\text{MIN}_{TR_{t}},\text{MAX}_{TR_{t}}]$. If $\text{MIN}_{TR_{t}}\geq(t^{\prime}-t)\delta$, then $C_{t}=0$ because no passenger has finished their trip. Conversely, if $\text{MAX}_{TR_{t}}\leq(t^{\prime}-t-1)\delta$, then $C_{t}=1$ because all passengers have finished their journeys before $t^{\prime}$. If neither condition holds, $C_{t}\in(0,1)$. Therefore, to obtain a complete matrix, the input time slot should satisfy $t\leq t^{\prime}-(1+\frac{\text{MAX}_{TR_{t}}}{\delta})$, i.e. $t\leq t^{\prime}-Q$ where $Q=\lfloor\frac{\text{MAX}_{TR}}{\delta}+1\rfloor$. If $P>Q$, the finished matrices $[MF_{t^{\prime}-Q},\cdots,MF_{t^{\prime}-P}]$ are complete. Therefore, we can estimate the full OD matrices from $t^{\prime}-Q$ to $t^{\prime}-P$ as follows:
$$\hat{M}_{t}=MF_{t},t\in[t^{\prime}-Q,\cdots,t^{\prime}-P]$$
(11)
Empirically, as shown in Figure 9, almost all trips are finished within 100 minutes. According to the statistics, 94.2% of trips in the Shenzhen dataset and 88.2% of trips in the Shanghai dataset are finished within 60 minutes. Another observation on the incompleteness of finished OD matrices is shown in Figure 8: the finished OD matrices at slots later than $t^{\prime}-5$ are incomplete, but those in $[t^{\prime}-5,\cdots,t^{\prime}-8]$ are nearly complete in both datasets. Therefore, we set $Q=5$ for our datasets.
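The prior estimator's rule $Q=\lfloor \text{MAX}_{TR}/\delta+1\rfloor$ can be sketched as follows (the 15-minute granularity and 60-minute maximum travel time mirror the statistics above; the matrices are dummies):

```python
import math

def prior_complete(MF, t_prime, delta, max_travel):
    """Prior estimator: finished matrices at slots t <= t' - Q, with
    Q = floor(max_travel / delta + 1), are already complete and kept as-is."""
    Q = math.floor(max_travel / delta + 1)
    return {t: MF[t] for t in MF if t <= t_prime - Q}

# Slots t'-8 .. t'-1 with t' = 10; delta = 15 min, max travel 60 min => Q = 5
MF = {t: [[t]] for t in range(2, 10)}       # dummy 1x1 finished matrices
complete = prior_complete(MF, t_prime=10, delta=15, max_travel=60)
print(sorted(complete))  # [2, 3, 4, 5]  (slots t'-8 .. t'-5)
```

Only the remaining slots ($t^{\prime}-4$ to $t^{\prime}-1$ in this toy setup) need the learned AHGCSP based estimator described next.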
IV-B2 AHGCSP based Estimator
In this section, we estimate the full OD matrix of each time slot $t\in[t^{\prime}-1,\cdots,t^{\prime}-Q+1]$ before the predicted time slot $t^{\prime}$. The full OD matrix can be partitioned and formulated as follows:
$$\footnotesize\left\{\begin{array}[]{ll}M_{t}&=MF_{t}+MD_{t}\\
MD_{t}&=MDP_{t}*ID_{t}\\
ID_{t}&=I_{t}-IF_{t}\end{array}\right.\Rightarrow M_{t}=MF_{t}+MDP_{t}*(I_{t}-IF_{t})$$
(12)
where $MD_{t}$ is the Delayed OD Matrix, $MDP_{t}$ is the Delayed OD Probability Matrix, and $ID_{t}$ is the Delayed Inflow. All these terms are defined in Table I. Since the finished OD matrix $MF_{t}$, the Inflow $I_{t}$, and the Finished Inflow $IF_{t}$ are all available, to estimate $M_{t}$ we only need to estimate $MDP_{t}$.
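Eq. (12) reconstructs the full matrix by distributing each origin's delayed inflow over destinations according to $MDP_t$; a minimal sketch with toy matrices:

```python
def full_od(MF, MDP, I, IF):
    """Eq. (12): M_t = MF_t + MDP_t * (I_t - IF_t). Each origin i's delayed
    inflow I[i] - IF[i] is spread over destinations j by the probability
    MDP[i][j] and added to the finished counts MF[i][j]."""
    n = len(MF)
    return [[MF[i][j] + MDP[i][j] * (I[i] - IF[i]) for j in range(n)]
            for i in range(n)]

# Station 0 has 2 delayed passengers split evenly; station 1 has none delayed
M = full_od(MF=[[1.0, 0.0], [0.0, 1.0]],
            MDP=[[0.5, 0.5], [0.0, 1.0]],
            I=[3.0, 2.0], IF=[1.0, 2.0])
print(M)  # [[2.0, 1.0], [0.0, 1.0]]
```

Note that each reconstructed row sums to that origin's total inflow, so the completion conserves passenger counts per station.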
The simplest way is to use the $MDP_{t}$ at the same time slot of the previous week for the estimation, denoted as $MDP_{t}^{W}$. However, such an estimation only contains historical mobility information and no real-time mobility information. On the other hand, the OD matrix sequence at time slots $[t^{\prime}-Q,\cdots,t^{\prime}-P]$ is almost complete and already available; it contains real-time mobility information, and our module AHGCSP can learn from this sequence for the $MDP_{t}$ estimation.
There may be multiple recent time slots whose OD matrices are incomplete. The closer a recent time slot $t$ is to the predicted time slot $t^{\prime}$, the higher the incompleteness of its OD matrix. Our strategy is an iterative completion method: we first complete the OD matrix at the time slot with the lowest loss rate, combining AHGCSP with prior knowledge of the travel time between stations to estimate the destination distribution of unfinished trips. Afterwards, the completed OD matrices are used as input to complete the OD matrix at the next slot, and so on, until all recent OD matrices are completed.
More specifically, AHGCSP takes an estimated full OD matrix sequence at the previous $K$ time slots as input, defined as $\hat{S}_{t}=[\hat{M}_{t-1},\cdots,\hat{M}_{t-K}]$, and outputs a predicted OD matrix at time slot $t$, defined as $\hat{M}_{t}$. If we normalize each row of $\hat{M}_{t}$, we obtain the destination distribution of all trips starting from each station, called the OD probability matrix $MP_{t}$ as defined in Table I. If we further take the time cost into consideration, we can estimate the destination ratio of unfinished trips. Given the number of passengers of an O-D pair at time slot $t$, the time cost determines the ratio of passengers who have finished or not finished before time slot $t^{\prime}$, denoted as the Delayed OD Ratio Matrix $MDR_{t}$ (defined in Table I). Since travel times are relatively stable in a metro system, we utilize historical trips to calculate $MDR_{t}$.
Based on the above analysis, the $MDP_{t}$ is calculated as follows:
$$\footnotesize\hat{MDP}_{t}=f_{{}_{N}}(f_{{}_{N}}(AHGCSP(\hat{S}_{t}))*MDR_{t}^{W})$$
(13)
where $f_{{}_{N}}$ is the row-wise normalization operation.
After we obtain estimation of $MDP_{t}$, we rewrite the estimation of $M_{t}$ as follows:
$$\footnotesize\hat{M}_{t}=MF_{t}+f_{{}_{N}}(f_{{}_{N}}(AHGCSP(\hat{S}_{t}))*MDR_{t}^{W})*(I_{t}-IF_{t})$$
(14)
where $t\in[t^{\prime}-1,\cdots,t^{\prime}-Q+1]$. Note that the initial $\hat{S}_{t^{\prime}-Q+1}=[\hat{M}_{t^{\prime}-Q},\cdots,\hat{M}_{t^{\prime}-P}]=[MF_{t^{\prime}-Q},\cdots,MF_{t^{\prime}-P}]$, i.e. the output of the prior estimator, and $K=P-Q+1$.
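The iterative completion loop can be sketched as follows, with `ahgcsp` a stand-in callable for the trained module (all data below are toy values):

```python
def row_normalize(M):
    """Normalize each row to sum to 1 (the f_N operation)."""
    out = []
    for row in M:
        s = sum(row)
        out.append([x / s if s else 0.0 for x in row])
    return out

def iterative_complete(MF, S_init, ahgcsp, MDR, I, IF, slots):
    """Walk from the least incomplete slot toward t'-1. At each slot t:
    predict with AHGCSP from the already-completed history, re-weight by
    the delayed ratio MDR, and add the delayed part to the finished matrix."""
    history = list(S_init)          # estimated full matrices, newest first
    completed = {}
    n = len(S_init[0])
    for t in slots:
        MP = row_normalize(ahgcsp(history))                    # OD probabilities
        MDP = row_normalize([[MP[i][j] * MDR[t][i][j] for j in range(n)]
                             for i in range(n)])               # delayed probs
        M_hat = [[MF[t][i][j] + MDP[i][j] * (I[t][i] - IF[t][i])
                  for j in range(n)] for i in range(n)]        # Eq. (14)
        completed[t] = M_hat
        history = [M_hat] + history[:-1]                       # slide the window
    return completed

# Toy run: the AHGCSP stand-in just repeats the latest matrix in the history
ahgcsp = lambda hist: hist[0]
S_init = [[[1.0, 1.0], [1.0, 1.0]]]
MF  = {6: [[0.0, 0.0], [0.0, 0.0]]}
MDR = {6: [[1.0, 1.0], [1.0, 1.0]]}
I   = {6: [2.0, 4.0]}
IF  = {6: [0.0, 0.0]}
out = iterative_complete(MF, S_init, ahgcsp, MDR, I, IF, slots=[6])
print(out[6])  # [[1.0, 1.0], [2.0, 2.0]]  -- delayed inflow split 50/50
```

Sliding each freshly completed matrix into the history is what lets the real-time mobility information propagate from the nearly complete slots toward $t^{\prime}-1$.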
Up to now, we have estimated the full OD matrix sequence at $[t^{\prime}-1,\cdots,t^{\prime}-P]$ using the prior estimator and the AHGCSP based estimator. Afterward, the sequence is fed into AHGCSP to obtain the final prediction $\hat{M}_{t^{\prime}}$.
V Experiments
We conduct a large number of experiments on two real-world metro datasets to answer the following research questions:
RQ1: Does our model have a better performance compared with other baselines on different metro network datasets?
RQ2: Does each component in our model help to improve the model performance?
RQ3: What about the main hyper parameters sensitivity of the model?
RQ4: What is the effectiveness of the proposed data completion method?
V-A Experiment Settings
V-A1 Datasets
Extensive experiments are conducted on two real-world metro datasets from China, i.e. Shenzhen (SZ) and Shanghai (SH). Each dataset contains millions of trip records collected by automatic fare collection (AFC) systems. The datasets have been preprocessed to remove passenger privacy information, complying with the security and privacy policies. Each trip is composed of five elements: the smart card ID, the origin station ID, the entering time, the destination station ID, and the exiting time. Since metro lines have different operation times, to be consistent we restrict our study to the operation period from 7:00 to 23:00. The trips are aggregated at a time granularity of 15 minutes, thus generating $\text{T}=64$ time slots per day. Z-score normalization is applied to the datasets. As for the data split, 70% of each dataset is used for training, 10% for validation and 20% for testing. The details of the datasets and some statistics of the OD values are given in Table III.
V-A2 Baselines
We choose various kinds of approaches as baselines, including traditional methods, deep learning methods and the state-of-the-art method.
(1) HA (Historical Average) is the most widely used model to predict time sequential data. It averages the historical value of an OD pair to predict its future value at the next time slot. HA has no trainable parameters and training process. We directly evaluate HA on the test dataset.
(2) Ridge is a linear regression method with L2 regularizer which tends to assign weights to different input features evenly.
(3) TRMF[68] (Temporal Regularized Matrix Factorization) is a matrix factorization method with autoregression (AR) processes on each temporal factor.
(4) ANN (Artificial Neural Network) is a simple neural network which can extract the linearity and non-linearity of data to some extent. Our ANN has three layers with units $[256,256,N]$, where $N$ is the number of stations. We choose ReLU as the activation function of all layers.
(5) FC-LSTM (Fully Connected Long Short-Term Memory network) is a classical neural network for time series forecasting which can extract the long-term temporal dependency in time series data. Our FC-LSTM has two hidden layers with 256 units and an output layer with $N$ units and ReLU activation. FC-LSTM is adopted to predict the destination distribution of the target station with its historical destination distributions as input.
(6) ConvLSTM [69] is a variant of FC-LSTM obtained by replacing the full connections with convolution operations. Compared with FC-LSTM, which can only extract the long-term temporal pattern from time-series data, ConvLSTM can capture the temporal dependency as well as grid-based spatial dependency. The setting of our ConvLSTM is the same as in previous works [24]: three layers with 8, 8, 1 filters respectively and a $3\times 3$ kernel size for all filters.
(7) GCN[67] (Graph Convolution Network) can extract the spatial dependency through aggregating the features from neighbors in a traffic graph.
(8) CASCNN[24] (Channel-wise Attentive Split Convolutional Neural Network) is a CNN based deep learning model for short-term OD matrix prediction in metro systems. It utilizes split CNN and a channel-wise attention mechanism to process the historical OD matrices of previous days, and it designs an inflow/outflow-gated mechanism to merge the historical OD flow information with real-time inflow/outflow information.
(9) GEML[42] (Grid-Embedding based Multi-task Learning) aims to solve the ride-hailing origin-destination matrix prediction problem with previous origin-destination matrices. Specifically, GEML utilizes GCN to capture the spatial dependency from the geographical and semantic neighborhoods of the target station, and it adopts LSTM to capture the temporal attributes of the passenger destination distribution. GEML has three predictive objectives, i.e. inflow, outflow and OD matrix prediction. To be consistent with the other baselines and our model, we only keep its OD matrix prediction objective.
V-A3 Evaluation Metrics
All model performances are evaluated with three widely applied metrics from previous works [42, 24, 16], i.e. MAE (Mean Absolute Error), RMSE (Root Mean Square Error) and WMAPE (Weighted Mean Absolute Percentage Error).
$$\displaystyle MAE$$
$$\displaystyle=\frac{1}{N\times N}\sum_{i=1}^{N}\sum_{j=1}^{N}\lvert m_{t}^{ij}-\hat{m}_{t}^{ij}\rvert$$
(15)
$$\displaystyle RMSE$$
$$\displaystyle=\sqrt{\frac{1}{N\times N}\sum_{i=1}^{N}\sum_{j=1}^{N}(m_{t}^{ij}-\hat{m}_{t}^{ij})^{2}}$$
$$\displaystyle WMAPE$$
$$\displaystyle=\frac{\sum_{i=1}^{N}\sum_{j=1}^{N}\lvert m_{t}^{ij}-\hat{m}_{t}^{ij}\rvert}{\sum_{i=1}^{N}\sum_{j=1}^{N}m_{t}^{ij}}$$
MAE and RMSE can only be compared within the same dataset because they are sensitive to data scale. WMAPE can be adopted to evaluate model performance across different datasets: it is data-scale independent and avoids the zero-division and over-skewing problems of MAPE. WMAPE measures the prediction of the whole OD matrix rather than the OD-pair level.
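The three metrics of Eq. (15) for a single OD matrix can be computed as follows (toy matrices for illustration):

```python
import math

def metrics(M, M_hat):
    """MAE, RMSE and WMAPE of Eq. (15) over one N x N OD matrix."""
    n = len(M)
    abs_err = sum(abs(M[i][j] - M_hat[i][j])
                  for i in range(n) for j in range(n))
    sq_err = sum((M[i][j] - M_hat[i][j]) ** 2
                 for i in range(n) for j in range(n))
    total = sum(M[i][j] for i in range(n) for j in range(n))
    return abs_err / (n * n), math.sqrt(sq_err / (n * n)), abs_err / total

# Every entry off by 1 on a matrix of 2s: MAE = RMSE = 1, WMAPE = 4/8 = 0.5
print(metrics([[2.0, 2.0], [2.0, 2.0]], [[1.0, 1.0], [1.0, 1.0]]))
```

Dividing the total absolute error by the total ground-truth volume is what makes WMAPE well-defined even when individual OD entries are zero.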
V-A4 Parameter Settings
For all the models, the number of input time slots $P$ is set as 8. The trainable parameters of all deep learning models are tuned on the validation set. For our model, the time slot threshold $Q$ in the prior estimator is set as 5 according to the data incompleteness analysis. The best units of $f_{key}$, $f_{query}$, $\bar{f}_{key}$, $\bar{f}_{query}$ in our real-time global dynamicity module are tuned as $[16,16,16,16]$ on both SH and SZ. The best units of our graph convolution are tuned as $[512,512]$ on SH and $[128,128]$ on SZ. The best units of our LSTM are tuned as $[256,256]$ on both SH and SZ. Note that AHGCSP and the AHGCSP based estimator in our model do not share trainable parameters but, for simplicity, they share the same hyperparameters. All parameters are optimized to minimize the loss function in Equation 2 by the Adam optimizer [70]. The batch size for both datasets is set as 32. A learning rate decay strategy is adopted to get the best result for all deep learning models. The initial learning rate of our model is set as 0.01 and the decay ratio is set as 0.9; the learning rate decays if the validation result does not improve within 15 epochs, down to a minimum of 2.0e-06. The maximum number of epochs is set as 1000. The code is implemented in Python with TensorFlow, and experiments are run on an NVIDIA Tesla V100 GPU (32 GB).
V-B Experimental Analysis
The experiment results in Table II are obtained from the test data and each experiment is repeated ten times to get more convincing results. The best results are highlighted in bold. As we can see from the Table, the standard error of each experiment result is quite small, indicating that the repeated experiments are stable. We visualize the average results of ten experiments on three metrics of our model in Figure 10.
V-B1 Performance Comparison
Table II shows all models' performances on the two metro datasets. Our model C-AHGCSP achieves the best performance on all metrics and all datasets. Specifically, our model improves the prediction accuracy by 3.9% and 3.45% (WMAPE) on the Shenzhen and Shanghai datasets, respectively, compared with GEML, the model with suboptimal performance. Traditional methods like HA, Ridge and TRMF do not perform well and show at least a 10% WMAPE gap compared with deep learning models, because they lack the capacity to deal with a large amount of data and to extract the non-linearity in the OD data. Deep learning methods generally perform better than conventional approaches since they are more suitable for enormous data and non-linearity. ANN has the worst performance among them, perhaps because it cannot extract spatiotemporal correlations. FC-LSTM can capture the long-term dependency in OD flows and its WMAPE is slightly better than ANN's, namely about 1% improvement on the Shenzhen dataset and about 2% on the Shanghai dataset. ConvLSTM can also extract the temporal pattern in OD data; however, it treats the OD matrices as images to model spatial dependency. The experimental results show that such image-based spatial dependency extraction is inappropriate and worsens ConvLSTM's performance compared with FC-LSTM. GCN cannot learn temporal correlations, but it models the metro system as a graph, which is closer to its nature; this is verified by the fact that GCN outperforms ConvLSTM. The performances of FC-LSTM and GCN are quite close, which suggests that both temporal and spatial properties exist in the OD flow pattern.
CASCNN performs better than the simple deep learning methods, e.g. GCN and FC-LSTM, because it takes full advantage of the real-time inflow/outflow information. However, it compacts the OD matrices of previous days to obtain dense information via CNN, pays no attention to the temporal pattern in OD flow, and ignores the graph topology of the metro system. GEML performs better than CASCNN.
GEML can capture spatiotemporal correlations in the traffic network based on a graph. However, it is fed with the incomplete real-time OD data. What is worse, it does not take the functional correlation and real-time global dynamicity into consideration. Compared with GEML, our model completes the recent OD matrices to obtain more real-time passenger flow information, and we simultaneously consider three spatial dependencies (i.e. local and global, static and dynamic spatial properties) in the OD data based on a metro graph. Consequently, C-AHGCSP achieves the best performance among all methods. The predictions of all models and the ground truth are visualized in Figure 11.
V-B2 Model Component Analysis
This section aims to test the contribution of each component of our model. To achieve this goal, we first remove each module of C-AHGCSP to obtain degraded variants as follows:
$\textbf{C-AHGCSP}_{\textbf{Com}}$: The data complete estimator is removed from our model.
$\textbf{C-AHGCSP}_{\textbf{Geo}}$: The Geographical Proximity is removed from the AHGCSP module, namely Equation 8 becomes $\alpha_{ij}^{t}=w_{d}\alpha_{ij}^{dt}+w_{f}\alpha_{ij}^{f}$. Note that the data complete estimator does not change and Geographical Proximity is not removed from the AHGCSP based estimator.
$\textbf{C-AHGCSP}_{\textbf{KL}}$: The KL Divergence based Functional Correlation is removed from AHGCSP module, namely Equation 8 becomes $\alpha_{ij}^{t}=w_{d}\alpha_{ij}^{dt}+w_{g}\alpha_{ij}^{g}$. Note that the data complete estimator does not change and Functional Correlation is not removed from AHGCSP based estimator.
$\textbf{C-AHGCSP}_{\textbf{Dym}}$: The Real-Time Global Dynamicity is removed from the AHGCSP module, namely Equation 8 becomes $\alpha_{ij}^{t}=w_{f}\alpha_{ij}^{f}+w_{g}\alpha_{ij}^{g}$. Note that the data complete estimator does not change and Real-Time Global Dynamicity is not removed from the AHGCSP based estimator.
$\textbf{C-AHGCSP}_{\textbf{GCN}}$: All three spatial dependencies and the graph convolution are removed from the AHGCSP module. The input data is transformed by the data complete estimator and then fed into the LSTM directly, without spatial dependency extraction.
Ablation tests are conducted on these variants to demonstrate the contribution of each part of our model. Due to space limitations, Table IV only shows the ablation results on the Shenzhen dataset. We train all the variants by repeating the experiments 10 times to get stable results, and all performances are obtained on the test dataset. It can be seen from Table IV that no matter which module is removed, the model performance becomes worse, showing that all modules are necessary to improve prediction accuracy. However, their contributions differ. The data completion module contributes the most, i.e. about 2.8%, which proves the effectiveness of completing the full OD matrices. Without this module, our model's performance degrades to 0.5049 in WMAPE, which is only slightly better than GEML's 0.5157. The second largest contributor is the heterogeneous spatial correlation based graph convolution module. If none of the spatial dependencies is captured, the performance decreases to 0.4946 in WMAPE, i.e. a 1.8% gap to our full model; this proves that spatial dependency extraction is important for improving OD prediction accuracy. As for the three kinds of spatial dependency, Real-Time Global Dynamicity contributes the most to the prediction, i.e. 1.45% in WMAPE, while KL Divergence based Functional Correlation contributes 1.28% and Geographical Proximity 1.03%. This indicates that the real-time global dynamicity of passenger mobility has a larger impact on OD flow prediction than the other two spatial dependencies. Nevertheless, all three improve the model performance, as they capture different passenger mobility patterns from different spatial perspectives.
V-B3 Model Efficiency Analysis
This subsection analyzes the time cost of the deep learning baselines and our model. Due to limited space, we only present efficiency results on the SZ dataset; the SH dataset supports similar conclusions. As shown in Table V, ANN is the fastest model, costing only 0.2 seconds per training epoch thanks to its simple network structure. GCN comes next at 0.5 seconds per epoch; compared with ANN, it adds a graph convolution operation on top of the FC layer. FC-LSTM costs 1.4 seconds because it has more trainable parameters, namely 939,638 versus ANN's 338,038. CASCNN, composed mainly of convolution operations and an attention mechanism, requires 2.1 seconds. GEML requires 3.5 seconds per epoch, as it contains both LSTM and graph convolution operations. ConvLSTM needs more time than GEML because it combines LSTM with convolution, and convolution is slower than graph convolution. Our model spends 5.0 seconds per training epoch, nearly the same as ConvLSTM; it contains LSTM, graph convolution, and the data completion process, and therefore needs more time than the other models. However, its time cost is close to ConvLSTM's while its performance is much better, with about a 7% improvement on both datasets.
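The per-epoch timings above can be reproduced with a simple wall-clock wrapper; `train_one_epoch` is a hypothetical stand-in for one full training pass of any of the compared models:

```python
import time

def seconds_per_epoch(train_one_epoch, n_epochs=3):
    """Average wall-clock seconds for one training epoch.

    train_one_epoch: a zero-argument callable standing in for one full
    pass of a model (ANN, GCN, FC-LSTM, ..., C-AHGCSP) over the training set.
    """
    start = time.perf_counter()
    for _ in range(n_epochs):
        train_one_epoch()
    return (time.perf_counter() - start) / n_epochs
```

Averaging over several epochs, as done here, smooths out one-off costs such as the first-batch graph construction.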
V-B4 Hyper-parameter Analysis
This section analyzes the hyperparameter sensitivity of our model on the Shenzhen and Shanghai datasets. While a target parameter is varied, all other parameters are kept at their default values.
Global Dynamicity Dimension refers to the number of units of the FC layer in the Key/Query part of the Global Dynamicity extraction module. The projection dimension of both key and query in the data-driven correlation achieves the best performance at 16 on both SH and SZ, much smaller than the number of metro stations. Such a reduced dimension helps concentrate the sparse OD flow information.
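As a hedged sketch (the weight initialization and feature layout are our assumptions, not the paper's), the key/query projection with a reduced dimension of 16 can look like:

```python
import numpy as np

def global_dynamicity_attention(X, d_proj=16, seed=0):
    """Row-stochastic N x N attention matrix over stations.

    X: (N, T) array of recent flow features per station.  Keys and queries
    are projected into a low d_proj-dimensional space (16 was the best
    setting in the sweep), then combined via scaled dot-product attention.
    """
    rng = np.random.default_rng(seed)
    n_stations, n_feat = X.shape
    w_q = rng.standard_normal((n_feat, d_proj)) / np.sqrt(n_feat)  # stand-in for learned weights
    w_k = rng.standard_normal((n_feat, d_proj)) / np.sqrt(n_feat)
    q, k = X @ w_q, X @ w_k
    scores = q @ k.T / np.sqrt(d_proj)
    scores -= scores.max(axis=1, keepdims=True)  # softmax numerical stability
    a = np.exp(scores)
    return a / a.sum(axis=1, keepdims=True)
```

The resulting matrix can serve as a dynamic, data-driven adjacency for the graph convolution.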
Graph Convolution Units refers to the number of units of the graph convolution layer in the Graph Convolution based Heterogeneous Correlation Extraction module. The model achieves its best performance when the graph convolution units are set to 512 on SH and 128 on SZ.
LSTM Units refers to the number of units of the LSTM layer in the LSTM based Temporal Dependency Extraction module. As the number of LSTM units increases, the model performs better, reaching its best at 256 units on both datasets; beyond that, performance worsens, which may be caused by overfitting.
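The protocol above (vary one hyperparameter, hold the rest at defaults) can be sketched as a grid loop; `train_and_eval` is a hypothetical stand-in that returns validation WMAPE:

```python
def sweep_hyperparameter(train_and_eval, candidates=(64, 128, 256, 512)):
    """Return the candidate value with the lowest validation WMAPE.

    train_and_eval(value): trains the model with the given setting (all
    other hyperparameters kept at defaults) and returns validation WMAPE.
    """
    results = {v: train_and_eval(v) for v in candidates}
    best = min(results, key=results.get)
    return best, results
```

For the LSTM-unit sweep, such a loop would report 256 as the minimizer on both datasets.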
V-B5 Effectiveness of Data Completion
As mentioned in the Model Component Analysis section, the data completion process improves C-AHGCSP performance by about 2.8% in WMAPE on the SZ dataset. Since we have the ground truth of the estimated OD matrices, we calculate the MAE, RMSE, and WMAPE metrics between the estimated OD matrices at $[t-Q+1,\cdots,t-1]$ and their ground truths (namely the full OD matrices), to compare the performance of the AHGCSP based Estimator with the prediction of the OD matrix at $t$. Note that the ground truth can only be collected after all passengers reach their destinations. Table VI shows the performance of the AHGCSP based Estimator on the test data. We observe that the AHGCSP based Estimator performs slightly better than the final-time-slot OD matrix prediction. This is likely because predicting the OD matrices at $[t^{\prime}-Q+1,\cdots,t^{\prime}-1]$ incorporates more prior knowledge (i.e., inflow, finished inflow, and the historical delayed OD ratio matrix) than predicting the OD matrix at the final time slot $t^{\prime}$.
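The three error metrics used in this comparison can be computed from flattened OD matrices as follows (a minimal sketch; treating the matrices as flat lists of entries is our simplification):

```python
import math

def od_metrics(pred, truth):
    """MAE, RMSE, and WMAPE between predicted and true OD entries.

    WMAPE normalizes the total absolute error by the total true flow,
    which keeps the metric stable on the many low-volume OD pairs.
    """
    n = len(truth)
    abs_err = [abs(p - t) for p, t in zip(pred, truth)]
    mae = sum(abs_err) / n
    rmse = math.sqrt(sum(e * e for e in abs_err) / n)
    wmape = sum(abs_err) / sum(abs(t) for t in truth)
    return mae, rmse, wmape
```

For example, `od_metrics([2.0, 3.0], [1.0, 4.0])` yields MAE 1.0, RMSE 1.0, and WMAPE 0.4.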
Given the contribution of the data completion process to our model, a natural research question is whether this data completion estimator can also improve the performance of other models. To answer it, we conduct the following experiments. First, we integrate the data completion process into the deep learning baselines to obtain their variants, i.e., C-ANN, C-FC-LSTM, C-ConvLSTM, C-GCN, and C-GEML; taking C-GCN as an example, its architecture is shown in Figure 13. Then we train and test all the above models, repeating each experiment 10 times, with the same train/val/test splits as in Table II. The experimental results are shown in Table VI.
As we can see from Tables II and VI, the data completion process generally improves the deep learning baselines to different extents on different datasets. For example, C-ANN, C-GCN, C-FC-LSTM, C-ConvLSTM, and C-GEML improve by 1.1%, 1.2%, 1.7%, 0.3%, and 1.9% in WMAPE on SZ, and by 1.2%, 1.3%, 1.8%, 0.5%, and 2.1% on SH. Note that the improvement on SH is slightly larger than that on SZ, perhaps because the SH data are more incomplete and therefore benefit more from the data completion process.
VI Conclusion
This paper proposes C-AHGCSP, a model for short-term OD prediction in rail transit networks. Its main module is an Adaptive Heterogeneous Graph Convolution based Spatiotemporal Predictor, which extracts multiple heterogeneous spatial correlations (i.e., geographical proximity, region functionality, and global dynamicity) with appropriate approaches (i.e., a Gaussian kernel function, KL divergence, and a self-attention mechanism). We also propose a data completion estimator, composed of a prior estimator and an AHGCSP based estimator, to solve the delayed data collection problem in the metro scenario. Extensive experiments on two real-world metro datasets show the superiority of our model over the benchmarks. In addition, our data completion estimator can be adapted and integrated into other models to improve their performance.
Deep Reinforcement Learning for IoT Networks: Age of Information and Energy Cost Tradeoff
Xiongwei Wu${}^{1,2}$, Xiuhua Li${}^{3}$, Jun Li${}^{4}$, P. C. Ching${}^{1}$, H. Vincent Poor${}^{2}$
${}^{1}$Dept. of Electronic Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
${}^{2}$Dept. of Electrical Engineering, Princeton University, Princeton, USA
${}^{3}$School of Big Data & Software Engineering, Chongqing University, Chongqing, China
${}^{4}$School of Electronic & Optical Engineering, Nanjing University of Science and Technology, Nanjing, China
E-mail: {xwwu, pcching}@ee.cuhk.edu.hk; [email protected]; [email protected]; [email protected]
This work was supported in part by the Global Scholarship Programme for Research Excellence from CUHK, and in part by the U.S. National Science Foundation under Grant CCF-1908308.
Abstract
In most Internet of Things (IoT) networks, edge nodes are commonly used as relays that cache sensing data generated by IoT sensors and provide communication services for data consumers. However, a critical issue in IoT sensing is that the data are usually transient, which necessitates temporal updates of cached content items, while frequent cache updates can lead to considerable energy cost and shorten the lifetime of IoT sensors. To address this issue, we adopt the Age of Information (AoI) to quantify data freshness and propose an online cache update scheme to obtain an effective tradeoff between the average AoI and the energy cost. Specifically, we first develop a characterization of the transmission energy consumption at IoT sensors by incorporating a successful transmission condition. Then, we model cache updating as a Markov decision process to minimize the average weighted cost, with judicious definitions of state, action, and reward. Since user preference towards content items is usually unknown and often temporally evolving, we develop a deep reinforcement learning (DRL) algorithm to enable intelligent cache updates. Through trial-and-error exploration, an effective caching policy can be learned without requiring exact knowledge of content popularity. Simulation results demonstrate the superiority of the proposed framework.
I Introduction
In the foreseeable future, billions of electronic devices (e.g., smartphones, healthcare sensors, vehicles, smart cameras, and smart home appliances) are anticipated to connect to the Internet, constituting the Internet of Things (IoT) [1, 2]. Massive numbers of IoT sensors will inevitably generate a tremendous amount of data, which could impose a heavy traffic burden and make wireless networks extremely congested. To cope with these challenges, pushing network resources from the cloud to the edge of wireless networks has been deemed a promising approach for reducing the traffic burden and enhancing user quality of service (QoS) in future IoT networks.
Considerable research attention has been devoted to caching policies to optimize communication performance metrics, e.g., transmission delay, traffic load, and power consumption [3, 4, 5].
However, these studies generally focus on caching multimedia content items, which remain valid once they are produced. In contrast, IoT sensing data cached at edge nodes are transient and become outdated as time goes by. Hence, previous content caching strategies cannot be readily utilized to provide fresh data in IoT services.
To capture the data freshness of transient content items, the Age of Information (AoI) has emerged as an effective performance metric; AoI is defined as the time elapsed since the generation of the IoT sensing data [6, 7]. It is desirable to minimize the average AoI of transient content items when designing caching policies. On the other hand, frequent cache updates cause considerable energy consumption at IoT sensors, which usually have rather limited battery capacity. To accommodate vast numbers of smart devices and extend the lifetime of IoT sensors, efficient cache update schemes need to be developed for fifth-generation (5G) and beyond communications.
Recently, machine learning, e.g., deep reinforcement learning (DRL), has been considered a promising tool for resource management in IoT networks [2]. Some studies have utilized DRL to develop caching policies for IoT sensing. For instance, the study in [8] investigated a caching strategy for the IoT using federated DRL; however, it did not take data freshness into consideration. The authors in [9, 10] studied caching transient content items by minimizing the average AoI plus the cache update cost, where the cost of updating content was measured as the number of transmissions between sensors and edge nodes. This simple measure was also used in [11], which assumed unit energy consumption per transmission, and a similar idea was considered in [12]. Generally, the above-mentioned studies ignore the impact of time-varying content popularity.
This paper considers the scenario in which IoT sensors may produce transient content items of inhomogeneous sizes that must be stored at the edge node via wireless links, while the statistics of user requests for IoT sensing data are uncertain and temporally evolving. We propose an intelligent policy to attain an effective tradeoff between the average AoI and the cache update cost, which is quantified as the transmission energy consumption rather than the number of cache updates.
The main contributions of this work are summarized as follows.
•
We investigate how to preserve data freshness while reducing energy consumption at IoT sensors. In particular, we consider cache updating given the inhomogeneous sizes of IoT sensing data, the characteristics of wireless channels, and time-varying content popularity. To tackle the resulting decision-making problem in such a complex and dynamic environment, we propose a novel DRL-based framework.
•
We develop a characterization of the transmission energy consumption for uploading sensing data from IoT sensors to the edge node via wireless links. In particular, we consider the realistic condition that wireless transmissions are effective only when the received signal-to-noise ratio (SNR) is beyond a certain threshold.
•
We conduct simulations to demonstrate that the proposed deep Q-network (DQN) algorithm can significantly reduce the energy cost while only slightly compromising the average AoI. In the scenario studied, the transmission energy consumption at IoT sensors is reduced by $52.7\%$, whereas the average AoI increases by 2.41 epochs compared with the AoI-oriented results.
The remainder of this paper is organized as follows. Section II introduces the system model. Section III presents the Markov decision process (MDP) problem formulation, and Section IV develops a DRL-based algorithm. Section V presents simulation results, and Section VI concludes the paper.
II System Model
II-A System Operation
As illustrated in Fig. 1, we consider an IoT network in which an edge node (e.g., a small-cell base station) is deployed at the edge of the wireless network and connected to the cloud through fronthaul. There are a total of $F$ randomly distributed IoT sensors. Endowed with caching and computing units, the edge node can serve as a relay between data producers (e.g., IoT sensors) and data consumers (e.g., mobile users). More precisely, the edge node maintains a cache unit that aggregates the sensing data produced by IoT sensors within its coverage; meanwhile, users submit their requests to the edge node and retrieve the desired content items for data analysis. Similar to [13], we consider the scenario in which no direct links exist between IoT sensors and users. Let $\mathcal{F}=\{1,2,\cdots,F\}$ be the index set of the sensors.
In addition, the system operation time is assumed to be slotted into a sequence of epochs, i.e., $t=1,2,\cdots$. The sensing data cached at the edge node may be dynamically updated as time passes, but every content item, generated by a certain sensor, carries a specific content item index and generation epoch. As previously stated, we refer to IoT sensing data as transient content. For instance, content item $f$ (we slightly abuse notation and let $f$ denote the index of either an IoT sensor or the associated content item) with generation epoch $v_{f}^{t}$ means that, at epoch $t$, the cached content item available at the edge node was generated by sensor $f$ at epoch $v_{f}^{t}$; the generation epoch $v_{f}^{t}$ is reset once sensor $f$ is selected to upload a new measurement of content item $f$ to the edge node.
II-B Age of Information
We introduce a QoS metric, i.e., AoI, to capture how fresh a transient content item is. Particularly, the AoI of content item $f$ at epoch $t$, i.e., $o_{f}^{t}$, is defined as how many epochs have elapsed since this content item was generated. Accordingly, it gives rise to the following equation:
$$\displaystyle o_{f}^{t}=\max\{t-v_{f}^{t},1\},\forall f\in\mathcal{F},$$
(1)
which takes values in $\{1,2,\cdots,T_{\max}\}$, where $T_{\max}$ denotes the upper limit [14]. Let $N_{f}^{t}$ be the number of user requests for content item $f$ observed by the edge node at epoch $t$. Thus, the average AoI for satisfying user requests at epoch $t$ is given by:
$$\displaystyle O^{t}=\frac{\sum_{f\in\mathcal{F}}o_{f}^{t}N_{f}^{t}}{\sum_{f\in\mathcal{F}}N_{f}^{t}}.$$
(2)
As cached content items gradually become outdated, reasonable cache updates are needed to serve user requests in the coming epochs. For ease of discussion, we assume that each IoT sensor can upload its sensing data to the edge node within a single epoch. That is, the AoI of a stale content item becomes 1 as soon as the associated sensor is chosen to upload the current measurement of that content item. For instance, as depicted in Fig. 2, when a transient content item is selected for an update at epoch $t_{1}$, the corresponding AoI drops to 1 at epoch $t_{1}+1$; otherwise, the AoI increments by 1 after every epoch.
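The update rule just described, combined with Eqs. (1) and (2), reduces to a few lines (a minimal sketch; `t_max` plays the role of $T_{\max}$):

```python
def step_aoi(aoi, updated, t_max=100):
    """One-epoch AoI transition: reset to 1 on upload, else age by 1, capped at t_max."""
    return [1 if u else min(o + 1, t_max) for o, u in zip(aoi, updated)]

def average_aoi(aoi, requests):
    """Request-weighted average AoI, as in Eq. (2)."""
    return sum(o * n for o, n in zip(aoi, requests)) / sum(requests)
```

For example, an item selected for an update at epoch $t_1$ has AoI 1 at $t_1+1$, while an un-updated item's AoI grows by one per epoch until it saturates at $T_{\max}$.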
II-C Transmission Energy Consumption
To carry out cache updating, IoT sensors need to access the edge node via wireless links, and orthogonal frequency bands are allocated to different IoT sensors to avoid interference. Thus, the received SNR at the edge node for the $f$-th sensor is given by:
$$\displaystyle\gamma_{f}=\frac{P_{f}\chi_{f}^{2}|\kappa_{f}|^{2}}{N_{0}B},\forall f\in\mathcal{F},$$
(3)
where $P_{f}$ denotes the transmission power of the $f$-th sensor; the coefficient $\chi_{f}$ captures the effects of path loss and antenna gain; $\kappa_{f}$ denotes the small-scale fading component; $N_{0}$ is the noise power spectral density; and $B$ denotes the channel bandwidth. We further assume that $|\kappa_{f}|$ follows a Rayleigh distribution with density $x\exp(-x^{2}/2)$. We consider transmissions between IoT sensors and the edge node to be effective under the condition that the received SNR surpasses a pre-specified threshold $\gamma_{th}$. Given the size of the sensing data from the $f$-th sensor as $s_{f}$ bits, we present the average transmission energy consumption in the following proposition.
Proposition 1
The average transmission energy consumption $\bar{E}_{f}$ for the $f$-th IoT sensor ($\forall f\in\mathcal{F})$ to upload its sensing data is given by:
$$\displaystyle\bar{E}_{f}=\frac{\log(2)\times P_{f}s_{f}}{\log(2)\times r_{th}\exp\left(-\frac{\gamma_{th}}{2\beta_{f}}\right)+B\exp\left(\frac{1}{2\beta_{f}}\right)\rho_{f}(\gamma_{th}+1)},$$
(4)
where function $\rho_{f}(\cdot)$ is defined as:
$$\displaystyle\rho_{f}(x)\triangleq\int_{x}^{+\infty}\frac{1}{u}\exp\bigg{(}-\frac{u}{2\beta_{f}}\bigg{)}du,$$
(5)
and $\beta_{f}=P_{f}\chi_{f}^{2}/(N_{0}B)$; $r_{th}$ denotes the data rate threshold, given as follows:
$$\displaystyle r_{th}\triangleq\log_{2}(1+\gamma_{th}).$$
(6)
Proof.
Due to the page limit, we only sketch the basic idea here. Since the edge node would fail to decode information if the received SNR is lower than the required threshold $\gamma_{th}$, one can first compute the corresponding outage probability due to channel fading. The next step is to estimate the average transmission delay by calculating the expected transmission data rates. Finally, the average transmission energy consumption is given by the product of transmission power and average transmission delay.
∎
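The closed form in Proposition 1 can be evaluated numerically. The sketch below uses a simple midpoint quadrature for $\rho_{f}(\cdot)$ and entirely hypothetical parameter values; it is an illustration of the formula, not the authors' implementation.

```python
import math

def rho(x, beta, upper=200.0, n=200000):
    # Midpoint-rule quadrature for rho_f(x) = ∫_x^∞ (1/u) exp(-u/(2β)) du,
    # Eq. (5), truncated at `upper` (the integrand decays exponentially).
    h = (upper - x) / n
    total = 0.0
    for i in range(n):
        u = x + (i + 0.5) * h
        total += math.exp(-u / (2 * beta)) / u
    return total * h

def avg_energy(P, s, B, beta, gamma_th):
    # Eq. (4), with r_th = log2(1 + gamma_th) as in Eq. (6)
    # and beta = P * chi^2 / (N0 * B) folded into the `beta` argument.
    r_th = math.log2(1 + gamma_th)
    denom = (math.log(2) * r_th * math.exp(-gamma_th / (2 * beta))
             + B * math.exp(1 / (2 * beta)) * rho(gamma_th + 1, beta))
    return math.log(2) * P * s / denom

# Hypothetical values: P = 0.1 W, s = 8e8 bits (100 MB), B = 10 MHz.
print(avg_energy(P=0.1, s=8e8, B=1e7, beta=5.0, gamma_th=3.0))
```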
Clearly, owing to the limited battery levels of sensors and the massive connections in IoT networks, it is crucial to find the best choice at each epoch for the cache update so as to obtain an effective tradeoff between AoI and energy cost. Note that the average AoI (see (2)) highly depends on the content popularity distribution, which usually exhibits temporal dynamics; meanwhile, the energy consumption (see (4)) depends on the sizes of the sensing data and the channel statistics, which are usually inhomogeneous among different sensors. Aware of this, we formulate an MDP-based online cache update problem in the following section.
III MDP Problem Formulation
Typically, an MDP can be described by a tuple $(\mathcal{S},\mathcal{A},\mathcal{P},R,\gamma)$, where $\mathcal{S}$ denotes the state space containing all possible states $\boldsymbol{S}$; $\mathcal{A}$ denotes the action space collecting all possible actions $\boldsymbol{A}$; $\mathcal{P}$ collects transition probabilities $Pr\{\boldsymbol{S}^{\prime}|\boldsymbol{S},\boldsymbol{A}\}$; $R$ denotes a reward fed back to the agent after executing an action; and $\gamma\in[0,1)$ is a discount factor. In the online cache update problem, the edge node is anticipated to act as the agent; and we customize the above-mentioned elements in an MDP as follows.
•
State: At each epoch, the edge node has exact knowledge of the AoI of caching content items as well as user requests. Hence, we define the system state as follows:
$$\displaystyle\boldsymbol{S}^{t}=\left(\{o_{f}^{t}\}_{f\in\mathcal{F}},\{N_{f}^{t}\}_{f\in\mathcal{F}}\right).$$
(7)
•
Action: Similar to study [11], let $\{0,1,\cdots,F\}$ be the action space222The proposed framework can be extended to the case where multiple content items are selected at each epoch, at the cost of additional energy consumption and bandwidth occupation.. When action $\boldsymbol{A}^{t}=0$, no content is selected; otherwise, we push the new version of content item $\boldsymbol{A}^{t}\in\mathcal{F}$ into the cache unit. As such, it gives rise to the following relation:
$$\displaystyle o_{f}^{t+1}=(o_{f}^{t}+1)\times\left(1-\mathcal{I}(f,\boldsymbol{A}^{t})\right)+\mathcal{I}(f,\boldsymbol{A}^{t}),\forall f\in\mathcal{F},$$
(8)
where $\mathcal{I}(\cdot,\cdot)$ is an indicator function333When variables $x,y$ are equal, $\mathcal{I}(x,y)=1$; otherwise, $\mathcal{I}(x,y)=0$.
Clearly, the cache update should be carried out after the occurrence of state $\boldsymbol{S}^{t}$, i.e., after the user requests are revealed. Subsequently, the system transitions to a new state $\boldsymbol{S}^{t+1}$ with transition probability $Pr\{\boldsymbol{S}^{t+1}|\boldsymbol{S}^{t},\boldsymbol{A}^{t}\}$.
•
Reward: The agent in an MDP receives a reward signal $R^{t+1}$ along with the appearance of state $\boldsymbol{S}^{t+1}$. Recall that we aim to minimize the average AoI for satisfying user requests during the upcoming epoch while reducing the transmission energy consumption. Hence, we seek to minimize the following weighted cost:
$$\displaystyle C^{t+1}=\frac{\sum_{f\in\mathcal{F}}o_{f}^{t+1}N_{f}^{t+1}}{\sum_{f\in\mathcal{F}}N_{f}^{t+1}}+\eta\bar{E}_{f}|_{f=\boldsymbol{A}^{t}},$$
(9)
where the first term on the right-hand side denotes the average AoI at epoch $t+1$, and $\eta\geq 0$ is a constant that strikes a balance between the average AoI and the energy cost. We set $\bar{E}_{0}=0$ for notational convenience. In accordance with the objective in (9), the reward signal of the proposed framework is designed as follows:
$$\displaystyle R^{t+1}=-C^{t+1},$$
(10)
which is supposed to be received at epoch $t+1$.
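Putting (8)-(10) together, one epoch of the environment can be sketched as follows. Indices, helper names, and sample numbers are illustrative only; items are numbered $1,\dots,F$ and action $0$ means no update, as in the MDP formulation above.

```python
def env_step(aoi, action, next_requests, E_bar, eta=1.0):
    """One MDP transition: apply action A^t, then score it once the
    new requests N^{t+1} arrive.

    `aoi` lists o_f^t for f = 1..F; `E_bar[0] = 0` encodes the
    "no update" action, as stated in the text.
    """
    # Eq. (8): reset the updated item's AoI to 1, age everything else.
    next_aoi = [1 if f == action else o + 1 for f, o in enumerate(aoi, start=1)]
    # Eq. (9): request-weighted average AoI plus weighted energy cost.
    total = sum(next_requests)
    avg_aoi = sum(o * n for o, n in zip(next_aoi, next_requests)) / total
    cost = avg_aoi + eta * E_bar[action]
    return next_aoi, -cost  # Eq. (10): the reward is the negative cost

aoi, reward = env_step([2, 5], action=2, next_requests=[1, 3],
                       E_bar=[0.0, 0.5, 0.8])
print(aoi)  # → [3, 1]
print(reward)
```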
Our aim is then to find an optimal policy $\pi^{*}$ that maximizes the expected discounted cumulative reward, i.e.:
$$\displaystyle\pi^{*}=\arg\max_{\pi}\mathbb{E}[V^{t}|\pi],$$
(11)
where the expectation is over all possibilities of $\{\boldsymbol{S}^{t},\boldsymbol{A}^{t},\boldsymbol{S}^{t+1},R^{t+1}\}$, and the discounted cumulative reward is given by:
$$\displaystyle V^{t}=\sum_{\tau=0}^{\infty}\gamma^{\tau}R^{t+\tau+1}.$$
(12)
Since transition probability, i.e., $Pr\{\boldsymbol{S}^{\prime}|\boldsymbol{S},\boldsymbol{A}\}$, is generally difficult to acquire in practical applications, we move on to propose a DRL-based algorithm to address this problem through trial-and-error interactions with the environment.
IV Proposed DQN Based Algorithm
In this section, we propose an intelligent cache update design for an IoT network by adopting a state-of-the-art RL approach, namely DQN. The key idea of this approach is to utilize deep neural networks (DNNs) to learn the Q-value function:
$$\displaystyle Q(\boldsymbol{S},\boldsymbol{A})=\mathbb{E}[V^{t}|\boldsymbol{S}=\boldsymbol{S}^{t},\boldsymbol{A}=\boldsymbol{A}^{t},\pi^{*}],$$
(13)
which indicates the expected cumulative reward after executing action $\boldsymbol{A}^{t}$ and then following policy $\pi^{*}$; as such, an optimal action can be given by the one attaining the maximum Q-value under current state $\boldsymbol{S}^{t}$, i.e.:
$$\displaystyle\boldsymbol{A}^{*}=\arg\max_{\boldsymbol{A}}Q(\boldsymbol{S}^{t},\boldsymbol{A}).$$
(14)
According to the Bellman optimality equation in [15], we have the following recursive result:
$$\displaystyle Q(\boldsymbol{S}^{t},\boldsymbol{A}^{t})=R^{t+1}+\gamma\max_{\boldsymbol{A}^{\prime}\in\mathcal{A}}Q(\boldsymbol{S}^{t+1},\boldsymbol{A}^{\prime}).$$
(15)
In general, DQN entails maintaining two networks, i.e., a Q-network and a target Q-network [16]. Specifically, the Q-network $Q_{\boldsymbol{\theta}}(\boldsymbol{S},\boldsymbol{A})$ can be constructed as a plain DNN parametrized by $\boldsymbol{\theta}$; readers are referred to [16] for more details. The target Q-network $Q_{\boldsymbol{\theta}^{-}}(\boldsymbol{S},\boldsymbol{A})$ is parametrized by $\boldsymbol{\theta}^{-}$; it has the same architecture as the Q-network and is used to compute target values for network training.
The training procedure is as follows. To aggregate training data, we leverage a Replay Buffer (RB) to store historical experiences, e.g., $\xi^{t}=(\boldsymbol{S}^{t},\boldsymbol{A}^{t},\boldsymbol{S}^{t+1},R^{t+1})$. We assume that the RB can store a total of $N$ experiences; the oldest experience is replaced by a fresh one once the RB is full. Subsequently, at each iteration, we randomly draw a mini-batch of experiences $\Xi_{N}$ from the RB to update $\boldsymbol{\theta}$ by minimizing the following loss:
$$\displaystyle Loss=\mathbb{E}_{\xi^{t}\sim\Xi_{N}}\bigg{[}\big{(}y^{t}-Q_{\boldsymbol{\theta}}(\boldsymbol{S}^{t},\boldsymbol{A}^{t})\big{)}^{2}\bigg{]},$$
(16)
where $y^{t}$ denotes the target value, i.e.:
$$\displaystyle y^{t}=R^{t+1}+\gamma\max_{\boldsymbol{A}^{\prime}}Q_{\boldsymbol{\theta}^{-}}(\boldsymbol{S}^{t+1},\boldsymbol{A}^{\prime}).$$
(17)
Thus, adopting stochastic gradient descent approaches, parameter $\boldsymbol{\theta}$ can be updated as follows444We consider that parameter $\boldsymbol{\theta}$ and state $\boldsymbol{S}$ are vectorized with proper dimensions.:
$$\displaystyle\boldsymbol{\theta}\leftarrow\boldsymbol{\theta}-\alpha\big{(}Q_{\boldsymbol{\theta}}(\boldsymbol{S}^{t},\boldsymbol{A}^{t})-y^{t}\big{)}\nabla_{\boldsymbol{\theta}}Q_{\boldsymbol{\theta}}(\boldsymbol{S}^{t},\boldsymbol{A}^{t}),$$
(18)
where $\alpha$ is the learning rate.
With regard to $\boldsymbol{\theta}^{-}$, it can be updated by $\boldsymbol{\theta}^{-}\leftarrow\boldsymbol{\theta}$ every $T_{0}$ iterations.
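A minimal sketch of the update rules (16)-(18), with the DNN replaced by a linear Q-function $Q_{\theta}(s,a)=\theta_{a}\cdot s$ so that the gradient in (18) is explicit. This illustrates the mechanics only and is not the paper's network; all names are our own.

```python
def q_value(theta, s, a):
    # Linear stand-in for the DNN: Q_theta(s, a) = theta[a] · s.
    return sum(w * x for w, x in zip(theta[a], s))

def dqn_update(theta, theta_minus, batch, alpha=0.01, gamma=0.99, n_actions=3):
    """One pass over a mini-batch of experiences (S^t, A^t, S^{t+1}, R^{t+1})."""
    for s, a, s_next, r in batch:
        # Target value, Eq. (17), computed with the *target* network θ^-.
        y = r + gamma * max(q_value(theta_minus, s_next, ap)
                            for ap in range(n_actions))
        # Gradient step, Eq. (18): for a linear Q-function, ∇_θ Q = s.
        err = q_value(theta, s, a) - y
        theta[a] = [w - alpha * err * x for w, x in zip(theta[a], s)]
    return theta
```

Periodically copying `theta` into `theta_minus` mirrors the $\boldsymbol{\theta}^{-}\leftarrow\boldsymbol{\theta}$ synchronization every $T_{0}$ iterations described above.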
Concerning exploration, we adopt the $\varepsilon$-greedy policy to generate actions: at each epoch, we take an action randomly drawn from $\{0,\cdots,F\}$ with probability $\varepsilon$, and take the optimized action $\boldsymbol{A}^{*}=\arg\max_{\boldsymbol{A}}Q_{\boldsymbol{\theta}}(\boldsymbol{S},\boldsymbol{A})$ with probability $1-\varepsilon$. This is because the success of RL relies on visiting different state-action pairs so as to aggregate sufficient experiences to infer better actions and avoid suboptimal estimates of the Q-value function. Finally, we summarize the proposed DRL-based cache update in Algorithm 1.
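The $\varepsilon$-greedy rule can be sketched as follows; the decay schedule shown matches the one reported later in the simulation setup (start at 0.9, multiply by 0.995 per iteration, floor at 0.05), while the function names are our own.

```python
import random

def select_action(q_values, eps):
    """q_values[a] = Q_theta(S, a) for actions a = 0..F."""
    if random.random() < eps:
        return random.randrange(len(q_values))          # explore
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit

def decay(eps, factor=0.995, floor=0.05):
    # Multiplicative decay with a lower bound, as in Section V.
    return max(eps * factor, floor)

print(select_action([1.0, 3.0, 2.0], eps=0.0))  # → 1 (pure exploitation)
eps = 0.9
for _ in range(1000):
    eps = decay(eps)
print(eps)  # → 0.05 (floor reached)
```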
V Performance Evaluation
V-A Simulation Setup
In this section, we investigate the performance of the proposed algorithm. We consider the following IoT network: one edge node is deployed, covering a circle of radius 100 m; a total of 20 IoT sensors are randomly distributed within the coverage of the edge node; the storage size of each content item is randomly drawn from $[50,100]$ MB. Regarding communications between the edge node and the sensors, the channel bandwidth is set to 10 MHz; large-scale fading is specified by the model used in [17]; the noise power spectral density is -172 dBm/Hz; and the transmission power of the IoT sensors is 20 dBm. We consider at most 100 users requesting content according to Zipf distributions [17], i.e.:
$$\displaystyle p_{f}=\zeta_{f}^{-\kappa}/\sum_{f^{\prime}\in\mathcal{F}}\zeta_{f^{\prime}}^{-\kappa},$$
(19)
where the rank orders of the content items, i.e., $\{\zeta_{f}\}$, evolve randomly according to a certain probability transition matrix, and the skewness factor $\kappa$ is randomly drawn from $\{0.5,1,1.5,2\}$. In the default setup, we consider that the average AoI and the energy consumption are of equal importance, i.e., $\eta=1$.
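A sketch of the request model in (19); the ranks, skewness, and user count below are illustrative values, not the simulated configuration.

```python
import random

def zipf_probs(ranks, kappa):
    # Eq. (19): p_f = zeta_f^(-kappa) / sum_{f'} zeta_{f'}^(-kappa).
    weights = [r ** (-kappa) for r in ranks]
    total = sum(weights)
    return [w / total for w in weights]

def sample_requests(ranks, kappa, n_users, rng=random):
    """Draw each user's request independently from the Zipf popularity."""
    probs = zipf_probs(ranks, kappa)
    counts = [0] * len(ranks)
    for _ in range(n_users):
        f = rng.choices(range(len(ranks)), weights=probs)[0]
        counts[f] += 1
    return counts

probs = zipf_probs([1, 2, 3, 4], kappa=1.0)
print(round(sum(probs), 6))  # → 1.0
counts = sample_requests([1, 2, 3, 4], 1.0, n_users=50, rng=random.Random(0))
print(sum(counts))  # → 50
```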
To implement the algorithm, the Q-network comprises three hidden layers with 512, 256, and 128 neurons, respectively. The learning rate is set to 0.001. In the $\varepsilon$-greedy policy, we take a random action with probability $\varepsilon=0.9$; then, $\varepsilon$ is decayed by a factor of 0.995 every iteration until it reaches 0.05. A mini-batch of 100 experiences is randomly sampled from
the RB that has at most 5000 experiences. We update target Q-network every 100 epochs. The discount factor is 0.99. In addition, the following baselines are considered for algorithm comparisons:
•
Most Popular Update (MPU): At every epoch, we update the content item that receives the highest attention from users, i.e.:
$$\displaystyle\boldsymbol{A}^{t}=\arg\max_{f\in\mathcal{F}}\frac{N_{f}^{t}}{\sum_{f^{\prime}\in\mathcal{F}}N_{f^{\prime}}^{t}}.$$
(20)
•
Oracle Update (OU): We assume that exact knowledge of the upcoming user requests, i.e., $\{N_{f}^{t+1}\}$, is available at the current epoch $t$. Then, we conduct the cache update similarly to the proposed DQN. This scheme simply serves as a baseline here and is not feasible in reality.
•
Random Update (RU): At every epoch, we randomly select an action from the action space, i.e., $\{0,1,\cdots,F\}$. This serves as a lower bound for the proposed algorithm.
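The MPU and RU baselines can be sketched side by side; note that the normalization in (20) does not affect the argmax. Item indices run from 1 to $F$ and action 0 means "no update", as in the MDP formulation; the function names are illustrative.

```python
import random

def mpu_action(requests):
    # Eq. (20): update the most requested item.
    best = max(range(len(requests)), key=requests.__getitem__)
    return best + 1  # shift from 0-based list index to item index in {1..F}

def ru_action(F, rng=random):
    # RU: uniform choice over the action space {0, 1, ..., F}.
    return rng.randrange(F + 1)

print(mpu_action([2, 9, 4]))  # → 2 (item 2 is the most requested)
print(ru_action(5, random.Random(1)))  # some action in {0, ..., 5}
```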
V-B Learning Curves
We first illustrate the learning curves of the proposed algorithm in Fig. 3. For illustrative purposes, we show the moving average results, which are attained by averaging rewards over $N=10000$ epochs, i.e., $\sum_{\tau=t-N+1}^{t}R^{\tau}/N$.
Moreover, the resulting reward is less than zero since it is defined as the negative value of average weighted cost; see (10).
It can be observed that, in the initial stage, the learning curve of the proposed algorithm increases rapidly and surpasses the curve of MPU; after 10000 epochs, it continues to increase; as more states are visited over time, the resulting reward gradually grows, eventually converging to a higher level than those of MPU and RU.
Note that the average reward achieved by DQN is only 0.84 less than the upper bound achieved by OU, whereas it exceeds the results of MPU and RU by 28.8% and 40.4%, respectively. This implies that the proposed DQN algorithm is able not only to track the dynamics of user preferences towards content items but also to adapt to the energy costs of different sensors in physical-layer transmissions. These observations corroborate the remarkable performance of the proposed algorithm.
V-C Tradeoff Between AoI and Energy Cost
To investigate the tradeoff between AoI and energy cost, we vary the factor $\eta$ and plot the average total cost, the average AoI, and the average transmission energy in Figs. 4-6, respectively. As can be seen in Fig. 4, under different $\eta$, the proposed algorithm performs very close to the upper bound and significantly outperforms the other baselines, i.e., MPU and RU. For instance, when $\eta=20$, the achieved average weighted cost is reduced by 54.9% and 44.3% in comparison to MPU and RU, respectively. These findings again demonstrate the advantages of the proposed DRL-based cache update and also validate its robustness with respect to the factor $\eta$.
Clearly, a larger $\eta$ implies that we pay more attention to minimizing energy consumption. As can be seen in Fig. 5, the average energy costs achieved by DQN and OU drop off as $\eta$ becomes larger, while the other baselines only exhibit slight fluctuations, being unaware of the AoI-energy tradeoff. On the other hand, in Fig. 6, we observe a reverse trend; that is, the cached content items become staler as $\eta$ grows. We conjecture that cache updates happen less frequently, so that the sensing data available at the cache unit become outdated. It is worth pointing out that when $\eta=5$, the achieved average AoI (by DQN) degrades by only 2.41, while we see a notable reduction of 52.7% in energy cost in contrast with the case $\eta=0$. If we continue increasing $\eta$, the reduction in energy consumption is quite limited, at the cost of worsening the AoI. Hence, it is necessary to choose a suitable $\eta$ to balance AoI and energy cost in real applications.
VI Conclusion
In this paper, we have put forth a deep reinforcement learning framework for online cache updating in IoT networks under dynamic content popularity.
The objective of this framework is to minimize the weighted average AoI plus energy cost. We have developed a characterization of transmission energy consumption at IoT sensors. Through trial-and-error explorations, the proposed DQN algorithm is capable of adapting to temporal dynamics of user requests as well as the inhomogeneous content size and channel statistics among different IoT sensors. Simulation results have been presented to demonstrate the effectiveness of the proposed design and reveal how transmission energy consideration compromises data freshness.
References
[1]
S. Madakam, R. Ramaswamy, and S. Tripathi, “Internet of things
(IoT): A literature review,” Int. J. Comput. Commun., vol. 3,
no. 05, p. 164, May 2015.
[2]
M. Chen, U. Challita, W. Saad, C. Yin, and M. Debbah, “Artificial neural
networks-based machine learning for wireless networks: A tutorial,”
IEEE Commun. Surveys Tuts., vol. 21, no. 4, pp. 3039–3071,
Fourthquarter 2019.
[3]
X. Wu, Q. Li, X. Li, V. C. Leung, and P. Ching, “Joint long-term cache
updating and short-term content delivery in cloud-based small cell
networks,” IEEE Trans. Commun., vol. 68, no. 5, pp. 3173 – 3186, May
2020.
[4]
X. Wu, Q. Li, V. C. Leung, and P. Ching, “Joint fronthaul multicast and
cooperative beamforming for cache-enabled cloud-based small cell networks: An
MDS codes-aided approach,” IEEE Trans. Wireless Commun., vol. 18,
no. 10, pp. 4970–4982, Oct. 2019.
[5]
X. Li, X. Wang, P.-J. Wan, Z. Han, and V. C. Leung, “Hierarchical edge caching
in device-to-device aided mobile networks: Modeling, optimization, and
design,” IEEE J. Sel. Areas Commun., vol. 36, no. 8, pp. 1768–1785,
June 2018.
[6]
S. Kaul, R. Yates, and M. Gruteser, “Real-time status: How often should one
update?” in Proc. IEEE INFOCOM, Mar. 2012, pp. 2731–2735.
[7]
Y. Sun, E. Uysal-Biyikoglu, R. D. Yates, C. E. Koksal, and N. B. Shroff,
“Update or wait: How to keep your data fresh,” IEEE Trans. Inf.
Theory, vol. 63, no. 11, pp. 7492–7508, Nov. 2017.
[8]
X. Wang, C. Wang, X. Li, V. C. Leung, and T. Taleb, “Federated deep
reinforcement learning for internet of things with decentralized cooperative
edge caching,” IEEE Internet Things J., to appear, 2020.
[9]
M. Ma and V. W. Wong, “A deep reinforcement learning approach for dynamic
contents caching in HetNets,” arXiv preprint arXiv:2004.07911,
2020.
[10]
E. T. Ceran, D. Gündüz, and A. György, “A reinforcement learning
approach to age of information in multi-user networks,” in Proc. IEEE
PIMRC, Sept. 2018, pp. 1967–1971.
[11]
M. Hatami, M. Jahandideh, M. Leinonen, and M. Codreanu, “Age-aware status
update control for energy harvesting IoT sensors via reinforcement
learning,” arXiv preprint arXiv:2004.12684, 2020.
[12]
E. T. Ceran, D. Gündüz, and A. György, “Reinforcement learning to
minimize age of information with an energy harvesting sensor with HARQ and
sensing cost,” in Proc. IEEE INFOCOM Wkshps, Apr. 2019, pp. 656–661.
[13]
D. Niyato, D. I. Kim, P. Wang, and L. Song, “A novel caching mechanism for
Internet of things (IoT) sensing service with energy harvesting,” in
Proc. IEEE ICC, May 2016, pp. 1–6.
[14]
M. A. Abd-Elmagid, H. S. Dhillon, and N. Pappas, “A reinforcement
learning framework for optimizing age-of-information in RF-powered
communication systems,” IEEE Trans. Commun., vol. 68, no. 8, pp.
4747–4760, Aug., 2020.
[15]
R. S. Sutton and A. G. Barto, Reinforcement Learning: An
Introduction. MIT Press, 2018.
[16]
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare,
A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al.,
“Human-level control through deep reinforcement learning,” Nature,
vol. 518, no. 7540, p. 529, Feb. 2015.
[17]
X. Wu, Q. Li, X. Li, V. C. Leung, and P. Ching, “Joint long-term cache
allocation and short-term content delivery in green cloud small cell
networks,” in Proc. IEEE ICC, May 2019.
Photonic Realization of a Quantum Finite Automaton
Carlo Mereghetti${}^{1}$ and Beatrice Palano${}^{2}$
${}^{1}$Dipartimento di Fisica “Aldo Pontremoli”, Università degli Studi di Milano, via Celoria 16, 20133 Milano, Italy
${}^{2}$Dipartimento di Informatica “Giovanni Degli Antoni”, Università degli Studi di Milano, via Celoria 18, 20133 Milano, Italy
Simone Cialdi${}^{3,4}$, Valeria Vento${}^{3}$, Matteo G. A. Paris${}^{3,4}$ and Stefano Olivares${}^{3,4}$
[email protected]
${}^{3}$Dipartimento di Fisica “Aldo Pontremoli”, Università degli Studi di Milano, via Celoria 16, 20133 Milano, Italy
${}^{4}$INFN Sezione di Milano, via Celoria 16, 20133 Milano, Italy
Abstract
We describe a physical implementation of a quantum finite automaton
recognizing a well known family of periodic languages. The realization
exploits the polarization degree of freedom of single photons and
their manipulation through linear optical elements. We use techniques
of confidence amplification to reduce the acceptance error probability
of the automaton. It is worth remarking that the quantum finite automaton we
physically realize is not only interesting per se, but it turns out to be a crucial
building block in many quantum finite automaton design frameworks theoretically settled
in the literature.
I Introduction
Quantum computing is a prolific research area, halfway between physics
and computer science BBBV97 ; BV97 ; Gru99 ; Hir04 ; NC11 . Most likely, its origins can be dated back to
the 1970s, when the first works on quantum information began
to appear (see, e.g., Hol73 ; Ing76 ). In the early 1980s, R.P. Feynman suggested that
the computational power of quantum mechanical processes might be
beyond that of traditional computation models Fe82 . A similar idea was put forth by Y.I. Manin Man80 .
Almost at the same time, P. Benioff proved that such processes are at
least as powerful as Turing machines Be82 . In 1985, D. Deutsch
proposed the notion of a quantum Turing machine as a
physically realizable model for a quantum computer De85 .
The first impressive result witnessing “quantum power” was P. Shor’s algorithm for integer factorization,
which could run in polynomial time on a quantum computer Sho97 . It should be stressed that no
classical polynomial-time factoring algorithm is currently known; indeed, the security of many present-day cryptographic protocols, e.g., RSA and Diffie-Hellman, relies on this fact.
Another relevant progress was made by L. Grover, who proposed a quantum algorithm for searching an
item in an unsorted database containing $n$ items, which runs in time $O(\sqrt{n})$ Gro96 .
These and other theoretical advances naturally drove much attention and effort to the physical realization of quantum computational devices
(see, e.g., CY95 ; DiV00 ; NKST06 ; FL19 ).
While we can hardly expect to see a full-featured quantum computer in the near future, it might be reasonable to
envision classical computing devices incorporating quantum components. Since the physical realization of quantum computational systems has proved to
be an extremely complex task, it is also reasonable to keep quantum components as “small” as possible. Small size quantum devices are
modeled by quantum finite automata, a theoretical model for quantum machines with finite memory.
Indeed, in current implementations of quantum computing, the preparation and initialization of qubits in superposition and/or entangled states is often challenging, making it worthwhile to study quantum computation with restricted memory, which requires less demanding resources, as in the case of quantum finite automata.
The simplest model of a quantum finite automaton, and the most promising from the viewpoint of physical realization, is the so-called measure-once quantum finite automaton BC01 ; BC01a ; BP02 ; MC00 . This model also served as a basis for defining several variants of quantum finite automata introduced and studied in a wealth of contributions (see, e.g., ABG06 ; AW02 ; AY18 ; BMP03 ; BMP10 ; MP01x ; ZQLG12 ). Being the only model considered in the present paper, from now on, for the sake of brevity, we will simply write “quantum finite automaton” instead of “measure-once quantum finite automaton”.
The “hardware” of a (one-way) quantum finite automaton is that of a classical finite automaton. Thus, we have an input tape scanned by a one-way input head moving one position forward at each move, plus a finite basis state control. Some basis states are designated as accepting states. At any given time during the computation, the state of the quantum finite automaton is represented by a complex linear combination of classical basis states, called a superposition. At each step, a unitary transformation associated with the currently scanned input symbol makes the automaton evolve to the next superposition. Superposition dynamics can transfer the complexity of the problem from a large number of sequential steps to a large number of coherently superposed quantum states. At the end of input processing, the automaton is observed in its final superposition. This operation makes the superposition collapse to a particular (classical) basis state with a certain probability. The probability that the automaton accepts the input word is given by the probability of observing (collapsing into) an accepting basis state.
Quantum finite automata exhibit both advantages and disadvantages with respect to their classical (e.g., deterministic or probabilistic) counterparts. Basically, quantum superposition offers some computational advantages on probabilistic superposition. On the other hand, quantum dynamics must be reversible, and this requirement may impose severe computational limitations to finite memory devices. As a matter of fact, it is sometimes impossible to simulate classical finite automata by quantum finite automata.
In fact, as we will discuss in Section II.4, isolated cut point quantum finite automata recognize a proper subclass of regular languages BC01 ; BP02 ; MC00 .
Although weaker from a computational power point of view, quantum finite automata may greatly outperform classical ones when
descriptional power is at stake. In the realm of descriptional complexity HK10 , models of computation are compared on
the basis of their size. In case of finite state machines, a commonly assumed size measure is the number of finite control states.
Most likely, the first contribution explicitly studying the descriptional power of quantum vs. classical finite automata
is AF98 , where an extremely succinct quantum finite automaton is provided, accepting the unary
language $L_{m}={\{a^{k}\!\ \mid\ k\in{\mathbf{N}}\,\mbox{ and }\,k\,{\rm mod}\,m=0\}}$ for any given $m>0$. The construction in AF98 uses as a basic (and sole) module
a quantum finite automaton $\cal A$ for $L_{m}$ with 2 basis states, whose acceptance reliability is then enhanced within a suitable modular
building framework where traditional compositions (i.e., direct products and sums) of quantum systems are performed.
Actually, many (if not all) contributions in the literature aiming to design small size quantum finite automata for several tasks
(see, e.g., AN09 ; BMP03 ; BMP03a ; BMP05 ; BMP06 ; BMP14 ; BMP17 ; MP02 ; MP07 ) use the module $\cal A$ as a crucial building block. In this sense, the language $L_{m}$ and the module $\cal A$ turn out to be “paradigmatic” as tools to build and test size-efficient quantum finite automata. Hence, a physical realization of the module $\cal A$ might be well worth investigating.
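For concreteness, the 2-basis-state automaton $\cal A$ of AF98 acts on $a^{k}$ by $k$ applications of a rotation by $\pi/m$, so $a^{k}$ is accepted with probability $\cos^{2}(k\pi/m)$, which equals 1 exactly when $m$ divides $k$. The minimal numerical sketch below is our illustration, not the authors' code.

```python
import math

def accept_probability(k, m):
    """Acceptance probability of the word a^k by the 2-state automaton
    for L_m: each letter applies a rotation by pi/m in the real plane."""
    theta = math.pi / m
    # Start in the accepting basis state (1, 0); apply the rotation k times.
    alpha, beta = 1.0, 0.0
    for _ in range(k):
        alpha, beta = (math.cos(theta) * alpha - math.sin(theta) * beta,
                       math.sin(theta) * alpha + math.cos(theta) * beta)
    return alpha ** 2  # probability of observing the accepting state

# a^6 is in L_3 (accepted with certainty); a^4 is not.
print(accept_probability(6, 3))  # ≈ 1.0
print(accept_probability(4, 3))  # ≈ cos^2(4π/3) = 0.25
```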
In this paper, we put forward a physical implementation of quantum
finite automata based on the polarization degree of freedom of
single photons and able to recognize a family of periodic languages.
More precisely, due to above stressed centrality in quantum finite automaton design frameworks,
we focus on the physical implementation of the quantum finite automaton $\cal A$ for the language $L_{m}$.
We investigate the performance of our photonic automaton taking into
account the main sources of error and imperfections, e.g. in the
preparation of the initial automaton state. We also use techniques
of confidence amplification to reduce the acceptance error probability
of the automaton.
The paper is structured as follows. In Section II, we provide an almost self-contained
overview of the basic concepts underlying formal language theory and classical finite automata.
Moreover, we quickly address practical impacts of finite automata and the importance of investigating
their size in the light of possible physical implementations of such devices.
Next, we present the notion of a quantum finite automaton together with some basic facts
on its computational and descriptional power. We particularly focus on unary automata, i.e., automata with a single-letter input alphabet,
and emphasize the notion of a language accepted with isolated cut point. In
Section III, we introduce a simple unary language, as a benchmark
upon which to test the descriptional power of classical and quantum finite
automata, namely the language $L_{m}={\{a^{k}\!\ \mid\ k\in{\mathbf{N}}\,\mbox{ and }\,k\,{\rm mod}\,m=0\}}$ for any given $m>0$.
We provide a theoretical definition of a quantum finite
automaton $\cal A$ accepting $L_{m}$ with isolated cut point and 2 basis states, whereas any classical
automaton for $L_{m}$ requires a number of states which grows with $m$.
The photonic implementation of the quantum finite automaton $\cal A$
with 2 basis states is then discussed in
Section IV. There, we start reviewing the standard
quantum formalism used to describe the polarization state of the single
photon, its dynamics and the link with the formalism used in the previous
sections. Then, we explain the working principle of the photonic implementation
of the quantum finite automaton and propose a discrimination strategy to
reduce the acceptance error probability. Section V
describes the experimental apparatus and reports the results we obtained.
Finally, we close the paper with Section VI, where we
draw some concluding remarks and outlooks of our work.
II Preliminaries
II.1 Formal languages and classical finite automata
Formal language theory studies languages from a mathematical point of view, providing formal tools and
methods to analyze language properties. Strictly connected with automata theory, the discipline dates
back to 50’s, and it was originally developed to provide a theoretical basis for natural language processing.
It was soon realized that this theory was relevant to the artificial languages (e.g., programming languages) that had originated in computer science. Since its birth,
formal language theory has established as one of the most prominent area in theoretical computer science.
Its results have a huge impact in numerous fields, to cite just a few: practical computer science, cryptography and security, discrete mathematics and combinatorics, graph theory, mathematical logic, nature-inspired (e.g., quantum, bio, DNA) computational models, physics, and system theory.
The reader may find a lot of excellent textbooks where thoughtful presentations of formal language and automata theory and their applications are presented (see, e.g., HU01 ; HU79 ). In order to keep this paper as self-contained
as possible, we are now to present basic notions and notations of formal language and automata theory, and briefly emphasize those aspects which are relevant to
the present work, i.e., regular languages and finite automata.
An alphabet is any finite set $\Sigma$ of elements called symbols. A word on $\Sigma$ is a sequence
$\omega=\sigma_{1}\sigma_{2}\cdots\sigma_{n}$ with $\sigma_{i}\in\Sigma$
being its $i$-th symbol.
The length of $\omega$, i.e., the number of symbols $\omega$ consists of, is denoted by $|\omega|$.
We let $\varepsilon$ be the empty word satisfying $|\varepsilon|=0$.
The
set of all words (including the empty word) on $\Sigma$ is denoted by $\Sigma^{*}$, and we let
$\Sigma^{+}=\Sigma^{*}\setminus{\left\{\varepsilon\right\}}$.
A language $L$ on $\Sigma$ is any subset of $\Sigma^{*}$, i.e., $L\subseteq\Sigma^{*}$.
If $|\Sigma|=1$ we say that $\Sigma$ is a unary alphabet, and languages on unary alphabets are called unary languages. In case of unary alphabets, we customarily let $\Sigma={\left\{a\right\}}$ so that a unary language is any set $L\subseteq a^{*}$.
The concatenation of the word $x\in\Sigma^{*}$ with the word $y\in\Sigma^{*}$ is the word $xy$ consisting of the sequence of symbols of $x$ immediately followed by
the sequence of symbols of $y$.
For any $\sigma\in\Sigma$ and any positive integer $k$, we let $\sigma^{k}$ be the word obtained by concatenating $k$ times the symbol $\sigma$. We stipulate
that $\sigma^{0}=\varepsilon$.
Several formal tools have been introduced to rigorously define languages. Formal grammars are the main
generative systems for languages. A formal grammar is a quadruple $G=(\Sigma,Q,P,S)$ where $\Sigma$ and
$Q$ are two disjoint finite alphabets of, respectively, terminal and nonterminal symbols, $S\in Q$ is the start symbol, and $P$ is the finite set of production rules or, simply, productions. Productions can be regarded as rewriting rules, typically expressed in the form $\alpha\rightarrow\beta$ with $\alpha\in(\Sigma\cup Q)^{+}$
and $\beta\in(\Sigma\cup Q)^{*}$.
Given $w,z\in(\Sigma\cup Q)^{*}$, we say that $z$ is derived in one step from $w$ in $G$ whenever
$w=x\alpha y$, $z=x\beta y$, and $\alpha\rightarrow\beta$ is a production rule in $P$. Formally, we write $w\Rightarrow_{G}z$.
More generally, $z$ is derived from $w$ in $G$ whenever there is a sequence $w_{0},w_{1},\ldots,w_{n-1},w_{n}\in(\Sigma\cup Q)^{*}$ such that
$w=w_{0}\Rightarrow_{G}w_{1}\Rightarrow_{G}\cdots\Rightarrow_{G}w_{n-1}%
\Rightarrow_{G}w_{n}=z$. Formally, we write $w\Rightarrow_{G}^{*}z$. The language generated by the grammar
$G=(\Sigma,Q,P,S)$ is the set $L(G)\subseteq\Sigma^{*}$ defined as $L(G)={\{\omega\in\Sigma^{*}\ \mid\ S\Rightarrow_{G}^{*}\omega\}}$.
Two grammars $G,G^{\prime}$ are equivalent whenever $L(G)=L(G^{\prime})$.
To help the reader’s intuition, the following example provides a grammar and derives the corresponding generated language.
Example 1.
When listing grammar production rules, we write $\alpha\rightarrow\beta_{1}|\beta_{2}|\cdots|\beta_{n-1}|\beta_{n}$ as a shortcut for
the set of productions $\alpha\rightarrow\beta_{1},\alpha\rightarrow\beta_{2},\ldots,\alpha\rightarrow\beta_{n}$. So, consider the grammar
$G=(\Sigma={\left\{a,b\right\}},Q={\left\{B_{0},\ldots,B_{k}\right\}},P,B_{0})$ where the set $P$ of productions is defined as
$$P={\left\{B_{0}\rightarrow aB_{0}|bB_{0}|bB_{1}\right\}}\cup{\left\{B_{i}\rightarrow aB_{i+1}|bB_{i+1}\,\mbox{ for $1\leq i\leq k-1$},\ B_{k}\rightarrow\varepsilon\right\}}.$$
Let us derive the generated language $L(G)$.
By repeatedly applying the productions $B_{0}\rightarrow aB_{0}|bB_{0}$, from the start-symbol $B_{0}$ we can derive $\alpha B_{0}$, for any $\alpha\in{\left\{a,b\right\}}^{*}$.
Formally, $B_{0}\Rightarrow_{G}^{*}\alpha B_{0}$. At this point, in order to generate a word of terminal symbols only, we must apply the production
$B_{0}\rightarrow bB_{1}$, thus having $B_{0}\Rightarrow_{G}^{*}\alpha B_{0}\Rightarrow_{G}\alpha bB_{1}$. Then, we are left to sequentially apply
the productions $B_{i}\rightarrow aB_{i+1}|bB_{i+1}$, for every $1\leq i\leq k-1$. So,
$B_{0}\Rightarrow_{G}^{*}\alpha B_{0}\Rightarrow_{G}\alpha bB_{1}\Rightarrow_{G%
}^{*}\alpha b\beta B_{k}$, for any $\beta\in{\left\{a,b\right\}}^{*}$ and $|\beta|=k-1$.
By applying the last production $B_{k}\rightarrow\varepsilon$, we get $B_{0}\Rightarrow_{G}^{*}\alpha B_{0}\Rightarrow_{G}\alpha bB_{1}\Rightarrow_{G%
}^{*}\alpha b\beta B_{k}\Rightarrow_{G}\alpha b\beta$.
Thus, the language generated by $G$ can be written as
$$L(G)={\{\omega\in{\left\{a,b\right\}}^{*}\ \mid\ \omega=\alpha b\beta\mbox{ %
and }|\beta|=k-1\}}.$$
In words, $L(G)$ consists of those words on ${\left\{a,b\right\}}$ featuring a symbol $b$ at the $k$-th position from the right.
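The membership condition just derived can be checked mechanically; below is a minimal Python sketch (the function name `in_Lk` and the test words are ours):

```python
def in_Lk(word: str, k: int) -> bool:
    """Membership in L(G): the k-th symbol from the right is 'b',
    i.e., the word factors as alpha + 'b' + beta with |beta| = k - 1."""
    return len(word) >= k and word[-k] == "b"

assert in_Lk("aabab", 3)        # 3rd symbol from the right is 'b'
assert not in_Lk("aaaaa", 3)    # no 'b' at all
assert not in_Lk("ab", 3)       # too short to have a k-th symbol from the right
```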
Originally, four types of grammars were singled out, depending on the form of the productions. The corresponding four classes
of generated languages turn out to be relevant both from a practical and from a theoretical point of view. Precisely, $G=(\Sigma,Q,P,S)$
is a grammar of:
type 0:
whenever productions in $P$ do not have any particular restriction. The class of languages generated by this type of grammars is the class
of recursively enumerable languages.
type 1 or context-sensitive:
whenever every production $\alpha\rightarrow\beta\in P$ satisfies $|\alpha|\leq|\beta|$; the production $S\rightarrow\varepsilon$ is allowed provided $S$ never occurs in the right-hand side of any production in $P$. The class of languages generated by this type of grammars is the class
of context-sensitive languages.
type 2 or context-free:
whenever every production in $P$ is of the form $A\rightarrow\beta$ with $A\in Q$. The class of languages generated by this type of grammars is
the class of context-free languages.
type 3 or regular:
whenever every production is of the form $A\rightarrow\varepsilon$, $A\rightarrow\sigma$, or $A\rightarrow\sigma B$ with $\sigma\in\Sigma$
and $A,B\in Q$.
The class of languages generated by this type of grammars is the class of regular languages. The reader may easily verify that the grammar proposed in Example 1 is a type 3 grammar, and hence the generated language is an example of a regular language.
It can be shown that for any given type $i+1$ grammar, an equivalent type $i$ grammar can be built. Hence, the
class of regular languages is contained in the class of context-free languages, which is contained in the class of context-sensitive languages, which in turn is contained in the
class of recursively enumerable languages. Moreover, this language class hierarchy is proper. In fact:
(i) there exist languages outside the class of recursively enumerable languages,
(ii) there exist recursively enumerable languages that cannot be generated by any context-sensitive grammar,
(iii) the ternary context-sensitive language ${\{a^{n}b^{n}c^{n}\ \mid\ n\in{\mathbf{N}}\}}$ cannot be generated by any context-free grammar,
(iv) the binary context-free language ${\{a^{n}b^{n}\ \mid\ n\in{\mathbf{N}}\}}$ cannot be generated by any regular grammar.
Besides the one in Example 1, further instances of regular languages will be provided below.
This language class hierarchy is usually known as the Chomsky hierarchy, and the whole of formal language and automata theory has developed
around it.
Every level of the hierarchy has been deeply investigated, yielding profound results and widespread applications.
An alternative, equivalent approach to defining the Chomsky hierarchy uses language accepting systems, i.e., roughly speaking, formal computational devices which process
input words and output an accept/reject verdict. For one such device, the corresponding accepted (or recognized) language consists of those input words
that are accepted. According to this point of view:
(i) the class of recursively enumerable languages coincides with the class of languages accepted by Turing machines,
(ii) the class of context-sensitive languages coincides with the class of languages accepted by linear bounded automata,
(iii) the class of context-free languages coincides with the class of languages accepted by nondeterministic pushdown automata,
(iv) the class of regular languages coincides with the class of languages accepted by (several types of) finite automata.
In this paper, we will be concerned with the class of regular languages. In particular, we will focus on the computational model of finite automata
defining them (see (iv) above). For extensive and thoughtful surveys on classical finite automata theory, the reader is referred to, e.g., HU01 ; HU79 ; Pa71 .
Several types of finite automata have been introduced and deeply investigated in the literature. Let us begin with the original and most basic version.
In Figure 1, the “hardware” of a one-way deterministic finite automaton (1dfa, for short RS59 ) $A$ is depicted.
We remark that the other versions of finite automata we are going to overview share the same hardware, but exhibit different dynamics.
We have a read-only input tape consisting of a sequence of cells, each one being able to store an input symbol.
The tape is scanned by an input head that always moves one position to the right at each step. This type of input head motion motivates the designation “one-way”. At any time during the computation of $A$, a finite state control
is in a state from a finite set $Q$. Some of the states in $Q$ are designated as accepting states, while $q_{0}\in Q$ is a designated
initial state.
The computation of $A$ on a word $\omega$ from a given input alphabet $\Sigma$ begins by having: (i) $\omega$ stored symbol-by-symbol left-to-right
in the cells of the input tape, (ii) the input head scanning the leftmost tape cell, (iii) the finite state control being in the state $q_{0}$.
In a move, $A$ reads the symbol below the input head and, depending on such a symbol and the state of the finite state control, it switches
to the next state according to a fixed transition function and moves the input head one position forward. We say that $A$ accepts $\omega$ if and only if it enters
an accepting state after scanning the rightmost symbol of $\omega$, otherwise $A$ rejects $\omega$. The language accepted by $A$ is the set $L(A)\subseteq\Sigma^{*}$
consisting of all the input words accepted by $A$.
Formally, a 1dfa is a quintuple $A=(Q,\Sigma,\delta,q_{0},F)$ where $Q$ is a finite set of states, with $q_{0}\in Q$ being the initial state and $F\subseteq Q$ the set of accepting
states, $\Sigma$ is the input alphabet, and $\delta:Q\times\Sigma\rightarrow Q$ is the transition function defining moves as follows: if $A$ scans the input symbol $\sigma$ by
being in the state $p$ and $\delta(p,\sigma)=q$ holds, then it enters the state $q$ and shifts the input head one position forward. The transition function $\delta$
can be inductively extended from symbols in $\Sigma$ to words in $\Sigma^{*}$ as $\delta:Q\times\Sigma^{*}\!\rightarrow Q$. Namely, for any $q\in Q$ and $\omega\in\Sigma^{*}$, we let
$$\delta(q,\omega)=\left\{\begin{array}[]{ll}q&\mbox{if $\omega=\varepsilon$}\\
\delta(\delta(q,\sigma),\alpha)&\mbox{if $\omega=\sigma\alpha$.}\end{array}\right.$$
Thus, the language accepted by $A$ is the set $L(A)\subseteq\Sigma^{*}$ defined as $L(A)={\{\omega\in\Sigma^{*}\ \mid\ \delta(q_{0},\omega)\in F\}}$.
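The inductive extension of $\delta$ from symbols to words translates directly into code. A small Python sketch of a generic 1dfa simulator follows (the dictionary encoding of $\delta$ and the toy automaton are ours):

```python
def delta_star(delta, q, word):
    """Extended transition function, following the inductive definition:
    delta*(q, eps) = q and delta*(q, sigma alpha) = delta*(delta(q, sigma), alpha)."""
    if word == "":
        return q
    return delta_star(delta, delta[(q, word[0])], word[1:])

def accepts(delta, q0, F, word):
    """A accepts omega iff delta*(q0, omega) is in F."""
    return delta_star(delta, q0, word) in F

# A toy 1dfa over {a, b} accepting exactly the words ending in 'b'.
delta = {("p", "a"): "p", ("p", "b"): "q",
         ("q", "a"): "p", ("q", "b"): "q"}
assert accepts(delta, "p", {"q"}, "aab")
assert not accepts(delta, "p", {"q"}, "ba")
```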
A nice pictorial representation of a 1dfa $A=(Q,\Sigma,\delta,q_{0},F)$ is given by its state (or transition) graph $D_{A}$. Basically, $D_{A}$ is a labelled digraph having $Q$
as the set of its vertices, and labelled directed edges representing moves. Precisely, there exists an edge from vertex $p$ to vertex $q$
with label $\sigma$ if and only if $\delta(p,\sigma)=q$ holds. Vertices are usually drawn as circles in the plane, with labels indicating the corresponding states, while
labelled arrows join adjacent states. The vertex corresponding to the state $q_{0}$ has an incoming arrow, while vertices associated with
accepting states in $F$ are double circled. It is easy to see that the computation of $A$ on the input word $\omega$ can be tracked in $D_{A}$ by following
the unique directed path labelled $\omega$ from the vertex $q_{0}$. So, $A$ accepts $\omega$ if and only if such a path ends in a double circled vertex.
To clarify the above notions, the next example displays a 1dfa accepting a simple unary language. We provide such a 1dfa both in its formal definition as a quintuple and as a
state graph.
Example 2.
The following simple unary language will play an important role throughout the rest of the paper. For any given
integer $m>0$, let
$$L_{m}={\{a^{k}\!\ \mid\ k\in{\mathbf{N}}\,\mbox{ and }\,k\,{\rm mod}\,m=0\}}.$$
(1)
Such a language can be accepted by the 1dfa
$$A=(Q={\left\{q_{0},q_{1},\ldots,q_{m-1}\right\}},\Sigma={\left\{a\right\}},%
\delta,q_{0},F={\left\{q_{0}\right\}}),$$
where, for any $0\leq i\leq m-1$, we set $\delta(q_{i},a)=q_{(i+1)\,{\rm mod}\,m}$. It is easy to see that $\delta(q_{0},a^{k})=q_{k\,{\rm mod}\,m}$, which equals $q_{0}$ if and only if $\,k\,{\rm mod}\,m=0$, if and only if $a^{k}\in L_{m}$. Hence, $L(A)=L_{m}$. The state graph of the 1dfa $A$ is depicted in Figure 2. Due to the unary input alphabet, all edges would have the same label ‘$a$’, which can then be safely omitted.
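The 1dfa $A$ of this example can be simulated directly; a Python sketch (representing the states $q_{0},\ldots,q_{m-1}$ simply by their indices):

```python
def make_Lm_dfa(m):
    """1dfa for L_m: states 0..m-1, delta(i, a) = (i+1) mod m, F = {0}."""
    delta = {(i, "a"): (i + 1) % m for i in range(m)}
    return delta, 0, {0}

def accepts(delta, q0, F, word):
    q = q0
    for sigma in word:          # iterative form of the extended delta
        q = delta[(q, sigma)]
    return q in F

delta, q0, F = make_Lm_dfa(5)
assert accepts(delta, q0, F, "a" * 15)      # 15 mod 5 == 0
assert not accepts(delta, q0, F, "a" * 4)
```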
Let us now turn to the model of a one-way nondeterministic finite automaton (1nfa, for short RS59 ). Formally, a 1nfa is a quintuple
$A=(Q,\Sigma,\delta,q_{0},F)$ in which every component is defined as for 1dfa’s, except the transition function, which is now a mapping
$\delta:Q\times\Sigma\rightarrow{\bf 2}^{Q}$, where ${\bf 2}^{Q}$ denotes the powerset of $Q$, i.e., the set of all subsets of $Q$. Unlike the deterministic case,
at each move $A$ now has several candidates as possible next state. Precisely:
if $A$ scans the input symbol $\sigma$ while
being in the state $p$ and $\delta(p,\sigma)=S$ holds, then it may enter any one of the states in $S$ and shifts the input head one position forward.
Thus, on any input word $\omega$, several computation paths from $q_{0}$ may exist; if at least one of such paths leads to an accepting state, then $A$ accepts $\omega$. More formally, we can inductively extend the transition function $\delta$ to subsets of states and to words as
$\delta:{\bf 2}^{Q}\times\Sigma^{*}\rightarrow{\bf 2}^{Q}$. First of all, we define the extension $\delta:{\bf 2}^{Q}\times\Sigma\rightarrow{\bf 2}^{Q}$ as $\delta(S,\sigma)=\cup_{q\in S}\delta(q,\sigma)$, for any $S\subseteq Q$ and $\sigma\in\Sigma$. Then, for any $S\subseteq Q$ and $\omega\in\Sigma^{*}$, we let
$$\delta(S,\omega)=\left\{\begin{array}[]{ll}S&\mbox{if $\omega=\varepsilon$}\\
\delta(\delta(S,\sigma),\alpha)&\mbox{if $\omega=\sigma\alpha$.}\end{array}\right.$$
Thus, the language accepted by $A$ is the set $L(A)\subseteq\Sigma^{*}$ defined as $L(A)={\{\omega\in\Sigma^{*}\ \mid\ \delta({\left\{q_{0}\right\}},\omega)\cap F%
\neq\emptyset\}}$.
The reader may easily verify that a 1dfa can be seen as a 1nfa where, for any $q\in Q$ and $\sigma\in\Sigma$, we have that $\delta(q,\sigma)$ contains a single state.
The state graph $D_{A}$ of the 1nfa $A=(Q,\Sigma,\delta,q_{0},F)$ can be defined as above for the deterministic case, but now an edge from vertex $p$ to vertex $q$
with label $\sigma$ exists if and only if $q\in\delta(p,\sigma)$ holds. This means that, in general, a vertex may have several outgoing edges with the same
label. Thus, $A$ accepts an input word $\omega$ if and only if there exists a path in $D_{A}$ labelled $\omega$ from $q_{0}$ to a double circled vertex.
The following example presents a 1nfa, given as a state graph, for a binary language.
Example 3.
Consider the binary language in Example 1, for which a type 3 grammar was provided there.
Here, we call that language $E_{k}$; it was defined as
$$E_{k}={\{\omega\in{\left\{a,b\right\}}^{*}\ \mid\ \omega=\alpha b\beta\mbox{ %
and }|\beta|=k-1\}}.$$
(2)
Thus, a word on ${\left\{a,b\right\}}$ is in $E_{k}$ if and only if its $k$-th symbol from the right is ‘$b$’.
In Figure 3, the state graph of a 1nfa accepting $E_{k}$ is depicted. The reader may easily verify
that the accepted language is exactly $E_{k}$. Moreover, she/he may straightforwardly work out
an equivalent formal definition of the 1nfa as a quintuple.
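As a sketch of how such a 1nfa operates, the following Python fragment builds a 1nfa for $E_{k}$ (the state numbering $0,\ldots,k$ is ours and may differ from Figure 3) and simulates it via the subset extension of $\delta$:

```python
def Ek_nfa(k):
    """1nfa for E_k: state 0 loops on both symbols and, on 'b',
    nondeterministically guesses that this 'b' is k-th from the right;
    states 1..k then count the remaining positions. State k accepts."""
    delta = {(0, "a"): {0}, (0, "b"): {0, 1}}
    for i in range(1, k):
        delta[(i, "a")] = {i + 1}
        delta[(i, "b")] = {i + 1}
    delta[(k, "a")] = set()
    delta[(k, "b")] = set()
    return delta, 0, {k}

def nfa_accepts(delta, q0, F, word):
    S = {q0}                       # subset construction on the fly
    for sigma in word:
        S = set().union(*[delta[(q, sigma)] for q in S])
    return bool(S & F)

delta, q0, F = Ek_nfa(3)
assert nfa_accepts(delta, q0, F, "abaa")     # 3rd symbol from the right is 'b'
assert not nfa_accepts(delta, q0, F, "aaab")
```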
We complete our overview of classical models of finite automata by introducing the notion of a one-way probabilistic finite automaton
(1pfa, for short Ra63 ). Formally, a 1pfa is a quintuple $A=(Q,\Sigma,\delta,q_{0},F)$ in which every component is defined as usual,
but now $\delta$ returns a probability distribution for the next state. More precisely, $\delta:Q\times\Sigma\times Q\rightarrow[0,1]$ is defined such that
$\delta(p,\sigma,q)$ is the probability that $A$, being in the state $p$, reaches the state $q$ upon reading the symbol $\sigma$. As usual, the input head is shifted
one position right at each move.
Clearly, for any $p\in Q$ and $\sigma\in\Sigma$, we require that $\sum_{q\in Q}\delta(p,\sigma,q)=1$.
Inductively extending the transition function $\delta$ to words yields $\delta:Q\times\Sigma^{*}\times Q\rightarrow[0,1]$, where $\delta(p,\omega,q)$
is the probability that $A$, being in the state $p$, reaches the state $q$ upon reading the input word $\omega$:
$$\small\delta(p,\omega,q)=\left\{\begin{array}[]{ll}0&\mbox{if $\omega=%
\varepsilon$ and $p\neq q$}\\
1&\mbox{if $\omega=\varepsilon$ and $p=q$}\\
\sum_{s\in Q}\delta(p,\sigma,s)\cdot\delta(s,\alpha,q)&\mbox{if $\omega=\sigma%
\alpha$.}\end{array}\right.$$
Thus, the probability that $A$ accepts the input word $\omega$ is $p_{A}(\omega)=\sum_{q\in F}\delta(q_{0},\omega,q)$, i.e., the probability for $A$
to reach an accepting state from the initial state $q_{0}$ after processing $\omega$. Given a real number $\lambda$, we define the language accepted by
$A$ with cut point $\lambda$ as the set $L_{A,\lambda}={\{\omega\in\Sigma^{*}\ \mid\ p_{A}(\omega)>\lambda\}}$.
A language $L\subseteq\Sigma^{*}$ is said to be accepted by $A$ with
isolated cut point $\lambda$ whenever $L=L_{A,\lambda}$ and there
exists $\rho>0$ such that $|p_{A}(\omega)-\lambda|\geq\rho$
for every $\omega\in\Sigma^{*}$.
The relevance of isolated cut point acceptance
is due to the fact that, in this case, we can arbitrarily reduce the
probability of misclassifying an input word by repeating its parsing a
constant number of times (not depending on the length of the input
word) and taking the majority of the answers Pa71 ; Ra63 .
In our experiment, we will use this fact to reduce the error
probability.
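The majority-vote error reduction can be made quantitative; a Python sketch (assuming each run errs with probability at most $1/2-\rho$, here $1/4$, and an odd number of runs):

```python
from math import comb

def majority_error(p_err, t):
    """Probability that the majority of t independent runs errs,
    when each run is wrong with probability p_err < 1/2 (t odd)."""
    return sum(comb(t, i) * p_err**i * (1 - p_err)**(t - i)
               for i in range(t // 2 + 1, t + 1))

# With isolation radius rho = 1/4, a single run errs with probability <= 1/4;
# a constant number of repetitions already shrinks the error substantially.
assert majority_error(0.25, 1) == 0.25
assert majority_error(0.25, 11) < 0.05
```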
Notice that, besides isolated cut point acceptance, other
probabilistic acceptance modes are widely studied in the literature
(see, e.g., AY18 ; BMP03 ; BMP17 ; Gru00 ).
Without going into details, a state graph $D_{A}$ can also be naturally associated with a 1pfa $A$. Now, edges in $D_{A}$
are labeled by both a symbol and the corresponding
transition probability.
Example 4.
For two primes $m,n$, consider the unary language
$$L_{m\cdot n}={\{a^{k}\!\ \mid\ k\in{\mathbf{N}}\,\mbox{ and }\,k\,{\rm mod}\,%
\,(m\cdot n)=0\}}.$$
(3)
Notice that this is a particular instance of the unary language introduced in Example 2.
We define the set of states $Q={\left\{s,p_{0},\ldots,p_{m-1},q_{0},\ldots,q_{n-1}\right\}}$, and construct the 1pfa
$A=(Q,\Sigma={\left\{a\right\}},\delta,s,F={\left\{s,p_{0},q_{0}\right\}})$
where we set
$$\displaystyle\bullet\ \delta(s,a,p_{1})={\scriptstyle\frac{1}{2}}=\delta(s,a,q_{1}),$$
$$\displaystyle\bullet\ \delta(p_{i},a,p_{(i+1)\,{\rm mod}\,m})={1}=\delta(q_{j},a,q_{(j+1)\,{\rm mod}\,n})\quad\mbox{for }0\leq i\leq m-1\mbox{ and }0\leq j\leq n-1,$$
$$\displaystyle\bullet\ \mbox{any other transition occurs with probability 0.}$$
It is not hard to see that
$$p_{A}(a^{k})=\left\{\begin{array}[]{ll}1&\mbox{if $a^{k}\in L_{m\cdot n}$}\\
\leq{\scriptstyle\frac{1}{2}}&\mbox{otherwise.}\end{array}\right.$$
Thus, the 1pfa $A$ accepts $L_{m\cdot n}$ with cut point ${\scriptstyle\frac{3}{4}}$, isolated by ${\scriptstyle\frac{1}{4}}$. The state graph of the 1pfa $A$ is sketched in Figure
4. As usual, due to the unary input alphabet, we omit the label ‘$a$’ from every edge. Moreover, each edge without an associated probability defines a move occurring with certainty.
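The acceptance probability $p_{A}(a^{k})$ of this 1pfa can be computed in closed form, since after the first move the automaton is trapped in one of the two cycles. A Python sketch of this computation (the function name is ours; `Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction

def pfa_prob(m, n, k):
    """Acceptance probability of the 1pfa of Example 4 on a^k:
    from s, move with probability 1/2 into the mod-m cycle and with
    probability 1/2 into the mod-n cycle; accept in s, p_0, or q_0."""
    if k == 0:
        return Fraction(1)                    # the start state s is accepting
    half = Fraction(1, 2)
    p = half if k % m == 0 else Fraction(0)   # in p_0 iff m divides k
    q = half if k % n == 0 else Fraction(0)   # in q_0 iff n divides k
    return p + q

m, n = 3, 5
for k in range(1, 31):
    if k % (m * n) == 0:
        assert pfa_prob(m, n, k) == 1
    else:
        assert pfa_prob(m, n, k) <= Fraction(1, 2)
```

Since $m$ and $n$ are distinct primes, $p_{A}(a^{k})=1$ exactly when $m\cdot n$ divides $k$, matching the isolated cut point claimed above.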
For the sake of completeness, we point out that two-way finite automata are also considered in the literature.
Very roughly speaking, a two-way finite automaton has the same hardware as a one-way finite automaton, but its input head can move one position
forward or backward, or stand still, at each move. Two-way motion of the input head can be adopted in the three paradigms recalled above, thus leading
to the models of 2dfa’s, 2nfa’s, and 2pfa’s. Formal definitions and properties of two-way finite automata may be
found, e.g., in DS90 ; HU01 ; HU79 ; Kan91 ; Pa71 ; Ra63 ; RS59 .
The computational power of all these (and actually of many other) variants of finite automata has been well established
in the literature over many years of research. As suggested in point (iv) of the automata-based characterization of the Chomsky hierarchy recalled above:
Theorem 5.
The class of languages accepted by ${\left\{1,2\right\}}$dfa’s, ${\left\{1,2\right\}}$nfa’s, or by 1pfa’s with isolated cut point coincides with the class of regular languages.
The class of regular languages is properly contained in the class of languages accepted by isolated cut point 2pfa’s DS90 .
However, when restricting to unary alphabets, even isolated cut point 2pfa’s accept exactly the unary regular languages Kan91 .
Regular languages are of fundamental importance in many applications in computer science. Viewing regular languages through finite automata
has greatly improved the design of compilers and interpreters, parsing and pattern matching algorithms, cryptography and security protocol testing, computer network
protocol testing, model checking, and software validation. It is no exaggeration to say that almost any task in computer science
sooner or later leads to coping with some regular language, which can be fruitfully managed via a suitable finite automaton.
However, besides being a valuable tool in language processing, finite automata represent a formidable theoretical model for those physical systems
which exhibit a predetermined sequence of actions depending on the sequence of events they are presented with. Originally, finite automata were
introduced to describe the electric activity of brain neurons, but they were soon extensively used in the design and analysis of several devices,
such as the control units for vending machines, elevators, traffic lights, combination locks, etc.
Particularly important is the use of finite automata
in VLSI design, namely, in the design of sequential networks, which are the building blocks of modern computers and digital systems. Very roughly speaking,
a sequential network is a Boolean circuit equipped with memory.
Engineering a sequential network typically requires modeling its behavior by a finite automaton whose number of states directly influences the amount of hardware
(i.e., the number of logic gates) employed in the electronic realization of the sequential network. From this point of view, having fewer states in the modeling finite automaton
directly results in employing smaller hardware which, in turn, means lower energy absorption and fewer cooling problems. These latter physical implementation aspects,
as the reader may easily figure out, turn out to be of paramount importance given the current level of miniaturization of digital devices.
These “physical” (and other more theoretical) considerations have led to a well consolidated trend in the literature in which, besides acceptance capabilities, the descriptional power of finite automata is deeply investigated. Within the realm of descriptional complexity HK10 , the size of finite automata is under consideration, and a common measure of finite automaton size is the number of states. In particular, reductions or increases in the number of states are studied when different computational paradigms (e.g., deterministic, nondeterministic, probabilistic, quantum, one-way, two-way) are used to perform a given task with a finite automaton.
Let us quickly recall some very well known results on the descriptional power of different types of finite automata. To this aim, we say that two finite automata $A,A^{\prime}$ are
equivalent whenever $L(A)=L(A^{\prime})$.
It is well known that any $n$-state 1nfa can be converted into an equivalent $2^{n}$-state 1dfa RS59 , and that in general such an exponential size blow-up is unavoidable.
In fact, consider the language $E_{k}$ in Example 3. There, a $k$-state 1nfa accepting $E_{k}$ is sketched. On the other hand, it can be shown that
no 1dfa for $E_{k}$ can have fewer than $2^{k}$ states. A similar exponential gap exists for 1dfa’s vs. 1pfa’s: any $n$-state 1pfa accepting a language with
cut point isolated by $\rho$ can be turned into an equivalent 1dfa with $(1+1/\rho)^{n}$ states Ra63 . Even in this case, the exponential blow-up is in general
“almost unavoidable” (establishing the exact size gap between determinism and probabilism is an open problem).
This can be proved by elaborating on the language $L_{m\cdot n}$ provided in Example 4. Equivalent 1dfa’s for $n$-state 2dfa’s and 2nfa’s can be obtained,
at a cost of no fewer than $n^{n}$ and $2^{n^{2}}$ states, respectively RS59 ; Sh59 .
Following this line of research on the succinctness of different computational paradigms,
we are going to investigate whether and how adopting the quantum paradigm of computation may reduce the number of states of finite automata,
thus providing theoretical foundations for the realization of more succinct devices, with all the potential benefits in terms of miniaturization and energy consumption addressed above.
To this aim, we will be particularly interested in unary one-way finite automata, i.e., automata having a unary input alphabet consisting
of the sole symbol $a$. Clearly, unary one-way finite automata accept unary languages $L\subseteq a^{*}$. Here, we choose to provide a nice and compact matrix presentation of
unary one-way finite automata that will naturally lead to the formalization of the notion of a unary one-way quantum finite automaton.
We recall that a matrix is said to be: boolean whenever its entries are either 0 or 1; stochastic whenever its
entries are reals from the interval $[0,1]$ and each row sums to 1.
Let $A$ be a unary one-way finite automaton with ${\left\{q_{1},q_{2},\ldots,q_{n}\right\}}$ being the set of its states; some of these states are accepting.
Then, $A$
can be formally written as a triple $A=(\zeta,U,\eta)$, where $\eta\in{\left\{0,1\right\}}^{n\times 1}$ is the characteristic
column vector of the accepting states, i.e., $\eta_{i}=1$ if and only if $q_{i}$ is an accepting state,
while $\zeta$ and $U$ have different forms depending on the nature of $A$. Precisely, $A$ is a:
1dfa:
$\zeta\in{\left\{0,1\right\}}^{n}$ is the characteristic row vector of the initial state, and $U$ is an $n\times n$ boolean stochastic transition matrix; hence
$U$ has exactly one 1 per row, with $U_{ij}=1$ if and only if $A$ moves from the state $q_{i}$ to the state $q_{j}$ upon reading $a$, i.e.,
$U_{ij}=1$ if and only if $\delta(q_{i},a)=q_{j}$.
1nfa:
as above, except that $U$ is boolean with $U_{ij}=1$ if and only if $q_{j}\in\delta(q_{i},a)$.
1pfa:
$\zeta\in[0,1]^{n}$ is a stochastic row vector
representing the initial probability distribution of the states111The definition of a 1pfa previously given admits a single initial state $q_{0}$
instead of assigning to each control state the probability of being initial. It can be shown that the two definitions of a 1pfa are actually equivalent from both a computational
and a descriptional point of view.,
$U$ is an $n\times n$ stochastic transition matrix with $U_{ij}$ being
the probability that $A$ moves from the state $q_{i}$ to the state $q_{j}$ upon reading $a$, i.e., $U_{ij}=\delta(q_{i},a,q_{j})$.
The reader may easily work out the matrix presentation for the unary 1dfa and the unary 1pfa defined, respectively, in Example 2 and Example 4.
Let us see how to express the notion of accepted language in this matrix presentation. The situation of a unary one-way finite automaton $A$ at the end of its computation on the input word $a^{k}$ is described by the vector $\zeta U^{k}$,
which has the following meaning (recall that $\eta$ is the characteristic vector of the accepting states of $A$):
$A$ is a 1dfa:
$\zeta U^{k}$ is the characteristic vector of the state reached by $A$ at the end of the computation on $a^{k}$. Thus, the product $\zeta U^{k}\eta$ returns 1 if the reached state is accepting, 0 otherwise. We say that $A$ accepts $a^{k}$ whenever $\zeta U^{k}\eta=1$.
$A$ is a 1nfa:
$\zeta U^{k}$ is the characteristic vector of the set of states reached by $A$ at the end of the computation on $a^{k}$. Thus, the product $\zeta U^{k}\eta$ returns the number of reached accepting states. We say that $A$ accepts $a^{k}$ whenever $\zeta U^{k}\eta\geq 1$.
$A$ is a 1pfa:
$\zeta U^{k}$ is a stochastic vector whose $i$-th component represents the probability that $A$ reaches the state
$q_{i}$ at the end of the computation on $a^{k}$. Thus, the product $p_{A}(a^{k})=\zeta U^{k}\eta$ returns the probability for $A$ to reach an accepting state at the end of the computation on $a^{k}$, i.e., the probability that $A$ accepts $a^{k}$.
If $A$ is a unary 1dfa or
1nfa, then the accepted language is defined as
$$L_{A}={\{a^{k}\ \mid\ k\in{\mathbf{N}}\mbox{ and }\zeta U^{k}\eta\geq 1\}}.$$
(4)
Let $A$ be a unary 1pfa.
The language accepted by $A$ with cut point
$\lambda$ is defined as
$$L_{A,\lambda}={\{a^{k}\!\ \mid\ k\in{\mathbf{N}}\mbox{ and }p_{A}(a^{k})\!>\!%
\lambda\}}.$$
(5)
As recalled above, the unary 1pfa $A$ accepts a unary language $L\subseteq a^{*}$ with isolated cut point $\lambda$ whenever $L=L_{A,\lambda}$ and there
exists $\rho>0$ such that $|p_{A}(a^{k})-\lambda|\geq\rho$ for every $k\in{\mathbf{N}}$.
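The matrix presentation can be tried out on the 1dfa for $L_{m}$ of Example 2; a Python sketch with plain lists standing for vectors and matrices (the helper names are ours):

```python
def mat_vec(zeta, U):
    """Row vector times square matrix."""
    n = len(U)
    return [sum(zeta[i] * U[i][j] for i in range(n)) for j in range(n)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def accepts_matrix(zeta, U, eta, k):
    """1dfa/1nfa acceptance: zeta U^k eta >= 1."""
    v = zeta
    for _ in range(k):
        v = mat_vec(v, U)
    return dot(v, eta) >= 1

# Matrix presentation of the 1dfa for L_m with m = 4:
m = 4
U = [[1 if j == (i + 1) % m else 0 for j in range(m)] for i in range(m)]
zeta = [1] + [0] * (m - 1)    # initial state q_0
eta = [1] + [0] * (m - 1)     # accepting state q_0
assert accepts_matrix(zeta, U, eta, 8)       # 8 mod 4 == 0
assert not accepts_matrix(zeta, U, eta, 6)
```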
For the sake of completeness, we point out that when investigating the descriptional power of unary finite automata,
we get size estimates which are slightly different from those quoted above for finite automata working on general input alphabets.
Thus, e.g., it is known that ${\rm e}^{\Theta(\sqrt{n\log n})}$ states are necessary and sufficient for 1dfa’s to simulate unary 1nfa’s Ch86 .
The same exponential blow-up is proved in MP00 ; MP01
for the simulation of unary 2dfa’s and 2nfa’s by 1dfa’s. A “similar” exponential gap is also
proved for the simulation of unary 1pfa’s by 1dfa’s; however, for this latter simulation the
question should be stated more carefully, and we refer the reader to BP12 ; MP01a for complete details.
Finally, as recalled above, isolated cut point unary 2pfa’s accept all and only the regular languages, but their
exact descriptional power is still an open question.
II.2 Basics of linear algebra
We briefly recall some basic notions of linear algebra (see, e.g., MM88 ) useful for outlining the quantum framework and, in
particular, for defining the model of quantum finite automata.
We denote by ${\mathbf{C}}$ the field of complex numbers. Given a complex number $z\in{\mathbf{C}}$, its conjugate is denoted
by $\overline{z}$, and its modulus by $|z|=\sqrt{z\overline{z}}$. The set of $n\times m$ matrices having entries in ${\mathbf{C}}$
is denoted by ${\mathbf{C}}^{n\times m}$. For matrices $C\in{\mathbf{C}}^{n\times m}$ and $D\in{\mathbf{C}}^{m\times r}$, their product is the matrix
$(CD)_{ij}=\sum_{k=1}^{m}C_{ik}D_{kj}$ in ${\mathbf{C}}^{n\times r}$.
The adjoint of a matrix $M\in{\mathbf{C}}^{n\times m}$ is the matrix
$M^{\dagger}\in{\mathbf{C}}^{m\times n}$ with $M^{\dagger}_{ij}=\overline{M_{ji}}$.
A Hilbert space of dimension $n$ is the linear space ${\mathbf{C}}^{1\times n}$ — in what follows denoted by ${\mathbf{C}}^{n}$ for short —
equipped with sum and product by elements in ${\mathbf{C}}$, in which, for any vectors $\zeta,\xi\in{\mathbf{C}}^{n}$, the inner product ${\left\langle\zeta,\xi\right\rangle}=\zeta\xi^{\dagger}$ is defined.
The norm of a vector $\zeta$ is defined as
$\left\|\zeta\right\|=\sqrt{{\left\langle\zeta,\zeta\right\rangle}}$.
If ${\left\langle\zeta,\xi\right\rangle}=0$, we say that $\zeta$ and $\xi$ are orthogonal.
If $\zeta$ and $\xi$ are orthogonal and $\left\|\zeta\right\|=1=\left\|\xi\right\|$, then $\zeta$ and $\xi$ are said to be orthonormal.
Two subspaces $X,Y$ in ${\mathbf{C}}^{n}$ are
orthogonal if every vector in $X$ is orthogonal to every vector in $Y$;
in this case, the linear space generated by $X\cup Y$ is denoted by
$X\dotplus Y$.
A matrix $M\in{\mathbf{C}}^{n\times n}$ is said to be unitary whenever $MM^{\dagger}=I=M^{\dagger}M$,
where $I\in{\mathbf{C}}^{n\times n}$ is the identity matrix. Equivalently, $M$ is unitary if and only
if it preserves the norm, i.e., $\left\|\zeta M\right\|=\,\left\|\zeta\right\|$ for every $\zeta\in{\mathbf{C}}^{n}$.
The eigenvalues of unitary matrices are
complex numbers of modulus $1$, i.e., they are of the form ${\rm e}^{i\vartheta}$,
for some real $\vartheta$. A matrix ${\cal O}\in{\mathbf{C}}^{n\times n}$ is said to be Hermitian whenever
${\cal O}={\cal O}^{\dagger}$. Let $c_{1},\ldots,c_{s}$ be the eigenvalues of the Hermitian matrix ${\cal O}$
and $E_{1},\ldots,E_{s}$ be the corresponding eigenspaces. It is well known that:
(i) each eigenvalue $c_{k}$ is real,
(ii) $E_{i}$ is orthogonal to $E_{j}$ for every $i\neq j$,
(iii) $E_{1}\dotplus\cdots\dotplus E_{s}={\mathbf{C}}^{n}$.
Each vector $\zeta\in{\mathbf{C}}^{n}$ can be uniquely decomposed as
$\zeta=\zeta_{1}+\cdots+\zeta_{s}$, where $\zeta_{j}\in E_{j}$. The linear
transformation $\zeta\mapsto\zeta_{j}$ is the projector $P_{j}\in{\mathbf{C}}^{n\times n}$ on the
subspace $E_{j}$. It is easy to see that $\sum_{j=1}^{s}P_{j}=I$. A
Hermitian matrix ${\cal O}$ is uniquely determined by its eigenvalues
and its eigenspaces (or, equivalently, by its projectors). In fact, we have
${{\cal O}}=c_{1}P_{1}+\cdots+c_{s}P_{s}$.
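As an illustrative aside (a Python/NumPy sketch, not part of the paper's formalism), the decomposition above can be checked numerically: starting from a small Hermitian matrix, we recover the eigenvalues, the projectors onto the eigenspaces, and the identity ${\cal O}=c_{1}P_{1}+\cdots+c_{s}P_{s}$.

```python
import numpy as np

# Sketch: recover the spectral decomposition O = c_1 P_1 + ... + c_s P_s
# of a Hermitian matrix; the 2x2 example below is illustrative.
def spectral_decomposition(O):
    """Return the distinct eigenvalues c_j and the projectors P_j of O."""
    eigvals, eigvecs = np.linalg.eigh(O)          # eigh: Hermitian eigensolver
    cs, Ps = [], []
    for c in np.unique(np.round(eigvals, 10)):
        V = eigvecs[:, np.isclose(eigvals, c)]    # basis of the eigenspace E_j
        cs.append(float(c))
        Ps.append(V @ V.conj().T)                 # projector onto E_j
    return cs, Ps

O = np.array([[2.0, 1.0], [1.0, 2.0]])            # Hermitian, eigenvalues 1 and 3
cs, Ps = spectral_decomposition(O)
assert np.allclose(sum(Ps), np.eye(2))            # (iii): projectors sum to I
assert np.allclose(sum(c * P for c, P in zip(cs, Ps)), O)
```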
II.3 Axioms of quantum mechanics in brief
Here, we use the elements of linear algebra recalled so far to describe quantum systems (see, e.g., braket ; HU92 for detailed expositions).
Given a set $Q={\left\{q_{1},\ldots,q_{m}\right\}}$ of basis states, every $q_{i}$ can be represented
by its characteristic vector $e_{i}\in{\left\{0,1\right\}}^{m}$ having 1 at $i$-th position and 0 elsewhere.
A quantum state on $Q$ is a
superposition $\zeta\in{\mathbf{C}}^{m}$ of basis states of the form $\zeta=\sum_{k=1}^{m}\alpha_{k}e_{k}$, with
coefficients $\alpha_{k}$ being complex amplitudes satisfying $\left\|\zeta\right\|=1$.
Given an alphabet $\Sigma={\left\{a_{1},\ldots,a_{l}\right\}}$ of events, with every event symbol $a_{i}$
we associate a
unitary transformation $U(a_{i}):{\mathbf{C}}^{m}\rightarrow{\mathbf{C}}^{m}$. An observable is described by a Hermitian matrix ${\cal O}=c_{1}P_{1}+\cdots+c_{s}P_{s}$. Suppose that at a given instant a quantum system is described by
the quantum state $\zeta$. Then, we can operate:
1.
Evolution by the event $a_{j}$. The new state $\xi=\zeta U(a_{j})$ is reached.
This dynamics is reversible, since $\zeta=\xi U^{\dagger}(a_{j})$.
2.
Measurement of ${\cal O}$. Every outcome in
${\left\{c_{1},\ldots,c_{s}\right\}}$ can be obtained. The outcome $c_{j}$ is obtained with
probability ${\left\|\zeta P_{j}\right\|}^{2}=\langle\zeta P_{j},\zeta P_{j}\rangle$, and after such a measurement the state of the quantum system
collapses to the superposition $\zeta P_{j}\left/\right.\left\|\zeta P_{j}\right\|$. The state transformation
induced by a measurement is typically irreversible.
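The two operations above can be sketched in code (an illustrative Python/NumPy snippet, using the paper's row-vector convention $\zeta\mapsto\zeta U$; the angle $\theta$ is an arbitrary example):

```python
import numpy as np

zeta = np.array([1.0, 0.0])                       # quantum state, ||zeta|| = 1
theta = np.pi / 3                                 # arbitrary example angle
U = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])   # a unitary evolution
P = np.diag([1.0, 0.0])                           # projector of the observable

# 1. Evolution: the new state xi = zeta U is reached; reversible via U^dagger.
xi = zeta @ U
assert np.allclose(xi @ U.conj().T, zeta)

# 2. Measurement: outcome probability ||xi P||^2, then collapse.
p = np.linalg.norm(xi @ P) ** 2                   # here cos^2(theta) = 1/4
collapsed = xi @ P / np.linalg.norm(xi @ P)
assert np.isclose(p, np.cos(theta) ** 2)
assert np.isclose(np.linalg.norm(collapsed), 1.0)
```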
II.4 One-way unary quantum finite automata
Several models of one-way (fully) quantum finite automata have been proposed in the
literature; basically, they differ in their measurement policy ABG06 ; AY18 ; BMP03 ; Gru00 . In this paper, we consider the simplest model
of one-way quantum automata, called measure-once BC01 ; BC01a ; BP02 ; MC00 . We
focus on the unary case, i.e., on automata having a single-letter input alphabet $\Sigma=\{a\}$; indeed, the definition of a one-way quantum automaton
on a general alphabet follows straightforwardly.
As done in Section II.1 for classical models of unary one-way finite automata, we now provide a matrix presentation of
unary one-way quantum finite automata.
A unary measure-once one-way quantum finite automaton (1qfa, for short)
with $n$ basis states, some of which are designated as accepting states, is formally defined by the triple $A=(\zeta,U,P)$, where:
•
$\zeta\in{\mathbf{C}}^{n}$, with $\left\|\zeta\right\|=1$, is the initial superposition of basis states,
•
$U\in{\mathbf{C}}^{n\times n}$ is a unitary transition matrix with $U_{ij}$ being the amplitude that
$A$ moves from the basis state $q_{i}$ to the basis state $q_{j}$ upon reading $a$, so that $|U_{ij}|^{2}$ is the
probability of such a transition,
•
$P\in{\mathbf{C}}^{n\times n}$ is the projector onto the accepting subspace, i.e., the
subspace of ${\mathbf{C}}^{n}$ spanned by the accepting basis states. The projector $P$
uniquely determines the observable ${\cal O}=1\cdot P+0\cdot(I-P)$.
At the end of the computation on the input word $a^{k}$, the state of $A$ is described by the final superposition $\zeta U^{k}$.
At this point, the observable ${\cal O}$ is measured, and $A$ is observed in an accepting basis state with probability
$p_{A}(a^{k})=\left\|\zeta U^{k}P\right\|^{2}$. This is the probability that $A$ accepts $a^{k}$.
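The acceptance probability just defined can be computed directly (an illustrative Python/NumPy sketch; the swap matrix below is a toy example, not the automaton studied later):

```python
import numpy as np

# p_A(a^k) = || zeta U^k P ||^2 for a 1qfa A = (zeta, U, P).
def acceptance_probability(zeta, U, P, k):
    final = zeta @ np.linalg.matrix_power(U, k)   # final superposition zeta U^k
    return np.linalg.norm(final @ P) ** 2

# Toy 2-state example: U swaps the basis states, P accepts q_1,
# so a^k is accepted with certainty iff k is even.
zeta = np.array([1.0, 0.0])
U = np.array([[0.0, 1.0], [1.0, 0.0]])
P = np.diag([1.0, 0.0])
assert np.isclose(acceptance_probability(zeta, U, P, 4), 1.0)
assert np.isclose(acceptance_probability(zeta, U, P, 3), 0.0)
```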
The definition of the unary language $L_{A,\lambda}$ accepted by $A$ with cut point $\lambda$, and the notion of a unary language accepted by $A$ with isolated cut point are identical to those provided in Section II.1 for the model of unary 1pfa’s.
The designation “measure-once” given to the model of 1qfa introduced above is due to the fact that the observation for acceptance is performed only once,
at the end of input processing. Throughout the rest of the paper, for the sake of brevity, by 1qfa we will mean
“measure-once 1qfa”, unless otherwise stated.
Several contributions in the literature show that, surprisingly enough, isolated cut point 1qfa’s are less powerful than
classical models of one-way finite automata. In fact, in BP02 ; BC01 ; MC00 the following is proved:
Theorem 6.
The class of languages on general alphabets accepted by isolated cut point 1qfa’s coincides with the class of group languages Pin86 , a proper
subclass of regular languages.
This limitation still remains for more general variants of (fully) quantum finite automata
ABG06 ; AY18 ; BMP03 ; KW97 . To overcome this computational weakness and exactly reach classical acceptance capability,
hybrid models are proposed in the literature, consisting of classical finite automata “embedding” small quantum
finite memory components (see, e.g., AY18 ; BMP14 ; BMP14b ; Hi10 ; MP06 ; ZQLG12 ).
By restricting to unary alphabets, the computational power of isolated cut point 1qfa’s still remains
strictly lower than that of classical devices. On the other hand,
it is proved in BPa09 that the class of unary languages accepted by “measure-many” isolated cut point 1qfa’s
coincides with the class of unary regular languages. Roughly speaking, a measure-many 1qfa AY18 ; BMP03 ; KW97 is defined as a
measure-once 1qfa, but the observation for acceptance is performed at each step along the computation.
III Theoretical design of a small quantum finite automaton
Although computationally weaker, 1qfa’s may greatly outperform classical devices when size — customarily measured
by the number of basis states — is considered
(see, e.g., AG00 ; AF98 ; BC01 ; MP03b ; BMP03b ; BMP05 ; BMP06 ; BMP14a ; BMP17 ; MP07 ; MPP01 ).
To prove this fact, we test the descriptional power of several models of classical and quantum one-way finite automata on the very simple benchmark language
introduced in Example 2: for any given
integer $m>0$, we let the unary language
$$L_{m}={\{a^{k}\!\ \mid\ k\in{\mathbf{N}}\,\mbox{ and }\,k\,{\rm mod}\,\;m=0\}}.$$
(6)
Despite its simplicity, this language proves to be particularly size-consuming on classical models of one-way finite automata, as summarized in the following
Theorem 7.
For any integer $m>0$, let $m=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\cdots p_{s}^{\alpha_{s}}$ be its prime factorization, for distinct primes $p_{i}$ and
positive integers $\alpha_{i}$. To accept the language $L_{m}\/$, the following numbers of states are necessary and sufficient:
(i)
$m$ states on 1{d,n}fa’s.
(ii)
$p_{1}^{\alpha_{1}}+p_{2}^{\alpha_{2}}+\cdots+p_{s}^{\alpha_{s}}$ states on 2{d,n}fa’s and isolated cut point 1pfa’s.
Proof.
(i) In Example 2, an $m$-state 1dfa (which is clearly a particular 1nfa) for $L_{m}$ is provided.
The fact that $m$ states are necessary for any
1{d,n}fa to accept $L_{m}$ can be easily
obtained by using the pumping lemma for regular languages HU01 ; HU79 . (ii) For 2{d,n}fa’s, the result is proved in MP00 .
For 1pfa’s, the result is proved in MPP01 .
∎
By adopting the quantum paradigm, we can obtain isolated cut point 1qfa’s for $L_{m}$ of incredibly small size:
Theorem 8.
For any integer $m>0$, the language $L_{m}$ can be accepted by an isolated cut point
1qfa with two basis states.
Proof.
We define the 1qfa $\cal A$ with 2 basis states as:
$$\displaystyle{\mathcal{A}}=\Bigg{(}$$
$$\displaystyle\zeta=(1,0),$$
$$\displaystyle U_{m}=\left(\begin{array}[]{cc}\cos(\pi/m)&\sin(\pi/m)\\
-\sin(\pi/m)&\cos(\pi/m)\end{array}\right),$$
$$\displaystyle P=\left(\begin{array}[]{cc}1&0\\
0&0\end{array}\right)\Bigg{)}.$$
(7)
One may easily verify that $U_{m}$ is a unitary matrix, and that
$$(U_{m})^{k}=\left(\begin{array}[]{cc}\cos(\pi k/m)&\sin(\pi k/m)\\
-\sin(\pi k/m)&\cos(\pi k/m)\end{array}\right).$$
(8)
Straightforward calculations show that the probability that ${\mathcal{A}}$ accepts
the word $a^{k}$ amounts to
$$\displaystyle p_{{\mathcal{A}}}(a^{k})$$
$$\displaystyle=\left\|\zeta(U_{m})^{k}P\right\|^{2}=\cos^{2}\left(\frac{\pi k}{%
m}\right)$$
$$\displaystyle=\left\{\begin{array}[]{ll}1&\mbox{if }k\,{\rm mod}\,m=0\\
\leq\cos^{2}(\pi/m)&\mbox{otherwise}.\end{array}\right.$$
(9)
In words, our 1qfa ${\mathcal{A}}$ accepts with certainty the words
in $L_{m}$, while the acceptance probability for the words not in $L_{m}$ is
bounded above by $\cos^{2}\left({\pi}/{m}\right)<1$.
So, we can set the cut point $\lambda=[1+\cos^{2}\left({\pi}/{m}\right)]/2$ and isolation
$\rho=[1-\cos^{2}\left({\pi}/{m}\right)]/2$, and conclude that $L_{m}$ is accepted by the
1qfa ${\mathcal{A}}$ with 2 basis states and cut point $\lambda$ isolated by $\rho$.
∎
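A quick numerical check of the proof (an illustrative Python/NumPy sketch, with $m=7$ chosen arbitrarily) confirms that $p_{\mathcal{A}}(a^{k})=\cos^{2}(\pi k/m)$, so words in $L_{m}$ are accepted with certainty while all other words are accepted with probability at most $\cos^{2}(\pi/m)$:

```python
import numpy as np

m = 7                                             # illustrative choice of m
zeta = np.array([1.0, 0.0])
Um = np.array([[np.cos(np.pi / m), np.sin(np.pi / m)],
               [-np.sin(np.pi / m), np.cos(np.pi / m)]])
P = np.diag([1.0, 0.0])

def p_accept(k):
    """Acceptance probability of the word a^k by the 1qfa of Eq. (7)."""
    return np.linalg.norm(zeta @ np.linalg.matrix_power(Um, k) @ P) ** 2

for k in range(3 * m):
    assert np.isclose(p_accept(k), np.cos(np.pi * k / m) ** 2)
    if k % m == 0:
        assert np.isclose(p_accept(k), 1.0)       # words in L_m: certainty
    else:
        assert p_accept(k) <= np.cos(np.pi / m) ** 2 + 1e-12
```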
In Figure 5, we depict the 1qfa $\cal A$ of Eq. (7) in order to highlight the input word $a^{k}$,
the initial automaton state
$\zeta$, the unitary operator $U_{m}$ and the measurement described by the projector $P$.
It is worth noting that the isolation $\rho=[1-\cos^{2}\left({\pi}/{m}\right)]/2$ around the cut point of the 1qfa $\cal A$
of Eq. (7) tends to 0
for $m\rightarrow+\infty$. Hence, the larger $m$ grows, the higher the error probability becomes, i.e., with high probability
$\cal A$ may erroneously accept (reject) words not in $L_{m}$ (words in $L_{m}$). To overcome this lack of precision, several
modular design frameworks have been developed in the literature, aiming to enlarge the cut point isolation at the cost of increasing the number of basis states AN09 ; BMP03 ; BMP03a ; BMP05 ; BMP06 ; BMP14 ; BMP17 ; MP02 ; MP07 .
Within these frameworks, for any desired isolation $\rho>0$, a 1qfa can be theoretically defined, which accepts $L_{m}$
with cut point isolated by $\rho$ and featuring $O(\frac{\log m}{\rho})$ basis states. Although the number of basis states now
depends on $m$, it still remains exponentially lower than the number of states of equivalent classical one-way finite automata displayed
in Theorem 7. In addition, the proposed $O(\frac{\log m}{\rho})$-state 1qfa turns out to be the smallest possible.
In fact, in BMP06 it is proved that any 1qfa accepting $L_{m}$ with cut point isolation $\rho$ must have at least
$\frac{\log m}{\log(1+2/\rho)}$ basis states.
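To make the gap concrete, the back-of-the-envelope comparison below (illustrative numbers, not from the paper) contrasts the $m$ states of a 1dfa with the quantum lower bound $\log m/\log(1+2/\rho)$:

```python
import math

m, rho = 10**6, 0.1                               # illustrative parameters
classical_states = m                              # 1dfa size for L_m (Theorem 7)
quantum_lower_bound = math.log(m) / math.log(1 + 2 / rho)

# Fewer than 5 basis states appear in the lower bound here, against a
# million classical states.
assert quantum_lower_bound < 5
assert classical_states == 10**6
```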
It should be stressed that all the design frameworks proposed in the literature, aiming to build extremely succinct 1qfa’s
not only for $L_{m}$ but also for more general families of languages, use the simple 1qfa $\cal A$ of Eq. (7) as a crucial
building block. Within these frameworks, the 1qfa $\cal A$ is suitably composed in a modular pattern
by using traditional compositions (i.e., direct product and sum of quantum systems),
in order to enhance precision in language recognition. In particular, from this perspective, a physical realization of the 1qfa $\cal A$
is not only interesting per
se, but it may provide a concrete computational component upon which to physically project more sophisticated and precise 1qfa’s by traditional compositions of quantum systems.
IV Photonic implementation of the quantum finite automaton
In this section we describe the physical implementation of the 1qfa $\cal A$ of Eq. (7). The experimental realization
is based on the polarization degree of freedom of single photons and their manipulation through suitable rotators of polarization.
For the sake of clarity, before discussing the physical implementation, we will summarize in the following the basic formalism used
to describe this kind of quantum system.
IV.1 The Dirac formalism
In order to describe the physical implementation of the 1qfa ${{\mathcal{A}}}$ of Eq. (7) accepting the language $L_{m}$, it is useful to review the standard
notation for quantum mechanics introduced by Dirac braket .
This will help the reader to easily pass from the notation used in the previous sections to the one we will use in the following.
In this notation, the state “$\psi$” of a quantum system is described by the symbol $|\psi\rangle$ which is, in general, a
complex column vector in a Hilbert space. In the present work, we are interested in the (linear) polarization state of a single photon,
therefore, only the two basis states $|H\rangle$ and $|V\rangle$, referring to the horizontal ($H$) and vertical ($V$) polarization,
respectively, are needed. Indeed, by the laws of quantum mechanics, any normalized linear combination of these
two vectors represents a quantum state. For instance, a single photon polarized at an angle $\theta$ with respect to the horizontal is described
by the state vector:
$$|\theta\rangle=\cos\theta\,|H\rangle+\sin\theta\,|V\rangle\,.$$
(10)
Since we are in the presence of only two basis states, we can give a geometrical representation of them and of the
corresponding spanned space, as shown in Figure 6(a).
In this formalism, the following correspondence is clear:
$$\displaystyle|H\rangle=\zeta^{{\dagger}}=\left(\begin{array}[]{c}1\\
0\end{array}\right)\!,\mbox{ and }\langle H|=(|H\rangle)^{\dagger}=\zeta=(1,0)\,,$$
(11)
where $\zeta$ is the same state introduced in Eq. (7). Analogously,
we have:
$$\displaystyle|V\rangle=\xi^{\dagger}=\left(\begin{array}[]{c}0\\
1\end{array}\right)\,,\quad\mbox{and}\quad|\theta\rangle=\left(\begin{array}[]%
{c}\cos\theta\\
\sin\theta\end{array}\right)\,.$$
(12)
In Section II.2 we introduced the inner product ${\left\langle\zeta,\xi\right\rangle}=\zeta\xi^{\dagger}$ between the states $\zeta$ and $\xi$.
Using the Dirac formalism we have:
$$\zeta\xi^{\dagger}=\langle H|V\rangle=0\,,$$
(13)
where we also used the orthonormality of the involved states. If we now introduce the projectors:
$$\Pi_{H}=|H\rangle\langle H|=P=\left(\begin{array}[]{cc}1&0\\
0&0\end{array}\right)\,,$$
(14)
where $P$ is the same as in Eq. (7), and
$$\Pi_{V}=|V\rangle\langle V|=Q=\left(\begin{array}[]{cc}0&0\\
0&1\end{array}\right)\,,$$
(15)
given the state $|\theta\rangle$, with $(|\theta\rangle)^{\dagger}=\vartheta$, we have:
$$\displaystyle p_{H}$$
$$\displaystyle=\langle\vartheta P,\vartheta P\rangle$$
$$\displaystyle=\langle\theta|\Pi_{H}|\theta\rangle=|\langle H|\theta\rangle|^{2%
}=\cos^{2}\theta\,,$$
(16a)
$$\displaystyle p_{V}$$
$$\displaystyle=\langle\vartheta Q,\vartheta Q\rangle$$
$$\displaystyle=\langle\theta|\Pi_{V}|\theta\rangle=|\langle V|\theta\rangle|^{2%
}=\sin^{2}\theta\,,$$
(16b)
where we used $\Pi_{J}^{2}=\Pi_{J}$, with $J\in\{H,V\}$, and $\langle a|b\rangle=\overline{\langle b|a\rangle}$.
The geometrical meaning of $p_{H}$ and $p_{V}$ is reported in Figure 6(b) whereas, from the physical point of view,
they correspond to the probability of finding the photon with horizontal or vertical polarization, respectively.
In the context of the polarization of single photons, the analogue of the unitary operator $U_{m}$ defined in
Eq. (7) is the operator $R(\pi/m)$, corresponding to a rotator of polarization that rotates the polarization
of the photons by an amount $\pi/m$. We can write $R(\pi/m)=U_{m}^{\dagger}$.
Hence, the one-step evolution of the state $|H\rangle=\zeta^{{\dagger}}$ reads:
$$R(\pi/m)|H\rangle=(\zeta U_{m})^{\dagger}.$$
(17)
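The correspondence between the two conventions can be verified numerically (an illustrative Python/NumPy sketch; $m=5$ is an arbitrary choice): applying the rotator $R(\pi/m)=U_{m}^{\dagger}$ to the column vector $|H\rangle$ yields the same state as the row-vector evolution $\zeta U_{m}$.

```python
import numpy as np

m = 5                                             # arbitrary example
Um = np.array([[np.cos(np.pi / m), np.sin(np.pi / m)],
               [-np.sin(np.pi / m), np.cos(np.pi / m)]])
zeta = np.array([1.0, 0.0])                       # row vector <H|
ket_H = np.array([[1.0], [0.0]])                  # column vector |H>
R = Um.T                                          # U_m is real, so U_m^dagger = U_m^T

# R(pi/m)|H> = cos(pi/m)|H> + sin(pi/m)|V>, matching the row state zeta U_m.
assert np.allclose((R @ ket_H).ravel(), zeta @ Um)
```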
IV.2 Photonic quantum automaton
In Figure 7, we depict the basic elements of the photonic quantum automaton implementing the
1qfa ${\mathcal{A}}$ of Eq. (7) accepting the language $L_{m}$.
Given the input word $a^{k}$ (see also Figure 5), a single photon, generated in the state $|H\rangle$ is sent through $k$
rotators of polarization, where each rotator applies a rotation of a fixed amount $\pi/m$.
It is worth noting that in order to actually reproduce the computation of a 1qfa,
a single rotation should be applied step by step upon reading each input symbol,
since the input word length is not known in advance.
After the rotators, the single photon is sent to a
polarizing beam splitter (PBS), a device which transmits (reflects) the horizontal (vertical) polarization component of the input state. Since after
the rotators the state of the photon is $|\theta\rangle$, given in Eq. (10), it is detected by the $H$ or $V$ detector (see Figure 7) with the probabilities given in Eqs. (16). It is worth noting that, as expected, $p_{H}(k)$ is equal to
the automaton acceptance probability $p_{{\mathcal{A}}}(a^{k})$ of Eq. (9).
As mentioned in Theorem 8, this kind of automaton accepts with certainty the word $a^{k}$
if $k\,{\rm mod}\,m=0$, but it also has a high probability of erroneously accepting the word if $k\,{\rm mod}\,m=1$. In fact, in this case, $p_{H}(k)$
attains its maximum over the words not in $L_{m}$, namely $\cos^{2}(\pi/m)$.
To reduce the error probability, one can send $M=N_{c}(m)$ copies of the same input word $a^{k}$,
collect the number $N_{c}(k)$ of counts at the detector $H$ and evaluate the ratio:
$$f_{k}=\frac{N_{c}(k)}{N_{c}(m)}\xrightarrow[]{M\gg 1}p_{H}(k)\,.$$
(18)
In this scenario, we let $f_{1}=f_{(k\,{\rm mod}\,m)\,=\,1}$ be the highest frequency less than $f_{0}=f_{(k\,{\rm mod}\,m)\,=\,0}=1$.
That is, $f_{1}$ is the highest frequency for words that are erroneously accepted (those words $a^{k}$ for which $k\,{\rm mod}\,m=1$),
and $f_{0}$ is the
frequency of those words that are correctly accepted (those words $a^{k}$ for which $k\,{\rm mod}\,m=0$). Thus, we can define the threshold frequency:
$$f_{\rm th}=\frac{f_{0}+f_{1}}{2}=\frac{1+f_{1}}{2}\,,$$
(19)
and we use the following
strategy:
$$\left\{\begin{array}[]{l}\mbox{if}~{}f_{k}>f_{\rm th}\Rightarrow a^{k}~{}\mbox%
{is accepted by ${\mathcal{A}}$,}\\
\mbox{if}~{}f_{k}<f_{\rm th}\Rightarrow a^{k}~{}\mbox{is rejected.}\end{array}\right.$$
(20)
It is clear that such a strategy leads to a zero error probability, namely, all and only the words in $L_{m}$ can have $f_{k}>f_{\rm th}$.
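The repetition strategy above can be sketched as a small Monte Carlo experiment (an illustrative Python/NumPy snippet; binomial counting statistics stand in for the actual detection process, and $m$, $M$ are example parameters):

```python
import numpy as np

rng = np.random.default_rng(0)                    # fixed seed for reproducibility
m, M = 5, 10_000                                  # example language and copies
f1 = np.cos(np.pi / m) ** 2                       # highest "wrong" frequency
f_th = (1 + f1) / 2                               # threshold of Eq. (19)

def accepts(k):
    """Send M copies of a^k; accept iff the observed frequency exceeds f_th."""
    p = np.cos(np.pi * k / m) ** 2                # per-copy acceptance probability
    f_k = rng.binomial(M, p) / M                  # observed frequency of H-counts
    return f_k > f_th

assert accepts(10) and accepts(15)                # words in L_5
assert not accepts(7) and not accepts(13)         # words not in L_5
```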
However, in a realistic scenario the number of detected photons is subject to Poisson statistical fluctuations, due to the very nature of the
detection process loudon . So, given the word $a^{k}$, the number of detected counts $N_{c}(k)$ fluctuates according to a
Poisson distribution with mean $\mu_{k}=\langle N_{c}\rangle\cos^{2}(\pi k/m)$, where $\langle N_{c}\rangle$ is the average
number of detected photons obtained for $k\,{\rm mod}\,m=0$.
Thus, it is possible to have a detected frequency $\tilde{f}_{k}=N_{c}(k)/\langle N_{c}\rangle$ which incorrectly satisfies
$$\tilde{f}_{k}>\tilde{f}_{\rm th}=(\tilde{f}_{0}+\tilde{f}_{1})/2\qquad\mbox{(resp., $\tilde{f}_{k}<\tilde{f}_{\rm th}=(\tilde{f}_{0}+\tilde{f}_{1})/2$)}$$
also for a word $a^{k}$ not belonging (resp., belonging) to the language $L_{m}$,
leading to a non null experimental acceptance error probability $p_{\rm err}$.
If we assume $\mu_{1}=\langle N_{c}\rangle\cos^{2}(\pi/m)\gg 1$, the distribution of the detected number of counts for $k\,{\rm mod}\,m=1$ can be approximated by a Gaussian distribution with mean and variance given by the same value $\mu_{1}$. Analogously, for $k\,{\rm mod}\,m=0$ we have a Gaussian distribution with mean and variance equal to $\mu_{0}=\langle N_{c}\rangle$. We can then find a more suitable threshold $N_{\rm th}$ on the detected counts by considering the intersection of the two Gaussians, namely:
$$N_{\rm th}=\langle N_{c}\rangle\big{|}\cos(\pi/m)\big{|}\sqrt{1-\frac{\ln\big{%
[}\cos^{2}(\pi/m)\big{]}}{\langle N_{c}\rangle\sin^{2}(\pi/m)}}.$$
(21)
The corresponding discrimination strategy reads:
$$\left\{\begin{array}[]{l}\mbox{if}~{}N_{c}(k)\geq N_{\rm th}\Rightarrow a^{k}~%
{}\mbox{is accepted by ${\mathcal{A}}$,}\\
\mbox{if}~{}N_{c}(k)<N_{\rm th}\Rightarrow a^{k}~{}\mbox{is rejected.}\end{%
array}\right.$$
(22)
The experimental error probability is thus given by (we consider only the two relevant contributions):
$$\displaystyle p_{\rm err}$$
$$\displaystyle=\int_{-\infty}^{N_{\rm th}}\frac{dx}{\sqrt{2\pi\mu_{0}}}\,\exp%
\left[-\frac{(x-\mu_{0})^{2}}{2\mu_{0}}\right]$$
$$\displaystyle +\int_{N_{\rm th}}^{\infty}\frac{dx}{\sqrt{2\pi\mu_{1}}}%
\,\exp\left[-\frac{(x-\mu_{1})^{2}}{2\mu_{1}}\right]\,,$$
(23)
where $1\ll\langle N_{c}\rangle\cos^{2}(\pi/m)=\mu_{1}<N_{\rm th}<\mu_{0}=\langle N_{%
c}\rangle$.
We note that $p_{\rm err}$ corresponds to the probability of accepting (resp., rejecting) the word $a^{k}$ whenever it should be rejected (resp., accepted).
In Figure 8 we plot the error probability for different values of $m$: as one may expect, the larger $m$ is, the greater the average
number of counts $\langle N_{c}\rangle$ should be in order to achieve a small error probability.
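The behavior in Figure 8 can be reproduced qualitatively by evaluating Eq. (23) with the threshold of Eq. (21) (an illustrative Python sketch using the standard normal CDF via the error function; the values of $m$ and $\langle N_{c}\rangle$ below are examples, not the paper's data):

```python
import math

def gaussian_cdf(x, mu, var):
    """CDF of a Gaussian with mean mu and variance var."""
    return 0.5 * (1 + math.erf((x - mu) / math.sqrt(2 * var)))

def p_err(mean_counts, m):
    """Error probability of Eq. (23), using the threshold N_th of Eq. (21)."""
    mu0 = mean_counts
    mu1 = mean_counts * math.cos(math.pi / m) ** 2
    n_th = mean_counts * abs(math.cos(math.pi / m)) * math.sqrt(
        1 - math.log(math.cos(math.pi / m) ** 2)
        / (mean_counts * math.sin(math.pi / m) ** 2))
    # tail below N_th for accepted words plus tail above N_th for rejected ones
    return gaussian_cdf(n_th, mu0, mu0) + (1 - gaussian_cdf(n_th, mu1, mu1))

# More detected photons means a smaller error probability.
assert p_err(5000, 23) < p_err(500, 23) < 1
```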
V Experimental results
The main elements of our physical implementation of the 1qfa $\cal A$ accepting the language $L_{m}$
are sketched in Figure 7. However, in order to reduce the losses and other sources of noise,
in the actual setup we replace the action of the $k$ polarization rotators on the input word $a^{k}$ with a single rotator applying
an overall rotation of $\theta=\pi k/m$, which “simulates” the whole computation
of the 1qfa: for this reason we will refer to
our system as a photonic quantum simulator q:sim of the quantum automaton.
As mentioned in the previous section, an actual 1qfa does not have a priori knowledge of the
length $k$ of the input word. In fact, it reads the input word symbol by symbol, applying a rotation of $\pi/m$ for each scanned input symbol $a$. In practice, this can be implemented, for instance, by a motorized rotator of polarization,
but this is beyond the scope of the present work.
Nevertheless, it is worth noting that a more advanced technology, e.g., based on integrated optics or
optoelectronics, can be used to realize the very setup of Figure 7.
The experimental setup is shown in Figure 9.
1.
The pump derives from a $405$-nm cw InGaN laser diode, chosen so that we can use silicon detectors, namely the ones with the lowest noise on the market: indeed, these work with maximum quantum efficiency at $810$ nm, which is the wavelength of the photons generated via parametric down conversion (PDC) from a $405$-nm pump.
2.
The laser beam passes through an amplitude modulator, composed of a half-wave plate and a polarizing beam splitter cube (PBS), and then through another half-wave plate that sets the polarization vertical with respect to the optical bench.
3.
The interaction between the pump and a $1$-mm long BBO crystal generates photons at $810$ nm with horizontal polarization, along the surface of a cone, via type-I-eoo PDC: to this purpose the optical axis of the crystal lies on the vertical plane at the phase-matching angle.
4.
The intersection of the cone with the horizontal plane identifies two beams (branches): the signal and the idler. It is possible to finely tune the angle of the outgoing photons by properly rotating the principal axis of the BBO.
5.
Along the signal branch, a polarizer ensures the transmission of the horizontally polarized photons, then a half-wave plate is used to simulate the $k$ polarization rotators and finally another horizontal polarizer transmits the photons to the detector. This last half-wave plate can be manually rotated and is equipped with graduations where a unit corresponds to $4^{\circ}$ in polarization: by considering the working principle of the half-wave plate, this can be obtained by actually rotating the plate by $2^{\circ}$. Therefore, in general, in order to obtain a rotation in polarization of amount
$\theta$, one should rotate the plate by $\theta/2$.
6.
On each branch, photons are finally focused into a multimode fiber and sent to a homemade single-photon counting module, based on an avalanche photodiode operated in Geiger mode with passive quenching quench .
We chose to measure the coincidence counts in order to obtain a better signal-to-noise ratio: indeed, the photodiodes produce a thermal background such that approximately $1$% of the direct counts are dark counts, while the coincidence dark counts are only $0.001\%$ of the coincidence counts.
In Figure 10, we show typical experimental results from our photonic simulator of the 1qfa $\cal A$ for the language $L_{m}$, with $m=5$ (this choice allows us to better highlight the role of the statistical fluctuations of the detected number of photons). In this case, a single rotation of polarization (taking place, e.g., on the input word of length $k=1$) amounts to $\theta=36^{\circ}$, which corresponds to rotating by $9$ units the half-wave plate on the signal branch (see point (5) in the above description of our experimental setup).
Here we only show the interesting results for input words $a^{k}$ of length $k=5$ and $k=1$.
Such two inputs, respectively representing a word in $L_{5}$ and one of the most-prone-to-error-classification
words not in $L_{5}$,
turn out to be critical for testing the accuracy of
the discrimination strategy we use.
Furthermore, in order to highlight the reduction of the statistical fluctuations, we plot
the ratio $N_{c}(k)/\langle N_{c}\rangle$. Each point corresponds to the number of counts at the detector $H$
when the average total number of counts is $\langle N_{c}\rangle=36$, $108$, $479$, and
$1845$, respectively, which can be obtained by varying the pump power through the half-wave plate of the amplitude modulator. We repeated the experiments 50 times, with an acquisition time of $1$ s, for each of the two values of $k$.
It is clear that increasing $\langle N_{c}\rangle$ reduces the relative fluctuations and, thus, the error probability decreases accordingly.
To better appreciate the performance of our photonic simulator, we consider our 1qfa $\cal A$
accepting the language $L_{m}$, with $m=23$. In this case, a single rotation of polarization (taking place, e.g., on the input word of length $k=1$) has
(approximately) $\theta=8^{\circ}$, which corresponds to $2$ units in the half-wave plate’s scale (see point (5) in the above description of our experimental setup).
In the top panel of Figure 11 we report examples of the number of counts $N_{c}(k)$ for just one given experimental run as a function of different values $k$ of the length of the input word $a^{k}$ and for $\langle N_{c}\rangle=18439$. The acquisition time of each point is $10$s. We can see that, due to the statistical fluctuations mentioned above, the automaton sometimes fails to accept the word: this is the case for the input lengths 115 and 229, as
discussed in the figure caption. We remark that in the latter cases we have chosen two particular
experimental runs in which the automaton fails: if we had considered the average over many runs, we would have
found that the automaton always succeeds on average, since the standard deviation, due to the statistical scaling, can be reduced at will. Of course, given a particular run,
the error is independent of $k$, but depends only on the random, statistical fluctuations, which can be
controlled by increasing $\langle N_{c}\rangle$, as we can see in the bottom panel of Figure 11, where we report the results of the photonic simulator
taking the same words of the top panel as inputs but with $\langle N_{c}\rangle=56477$, obtained with an acquisition time of $30$s. This can be also understood by considering Eq. (23): the error probability is reduced from $p_{\rm err}=10.3$% for $\langle N_{c}\rangle=18439$ of the previous case to the current $p_{\rm err}=1.3$%
for $\langle N_{c}\rangle=56477$.
VI Conclusions
We have proposed and demonstrated a photonic realization of a quantum finite
automaton able to recognize a well-known family of unary periodic languages. Our device exploits the polarization degree of freedom of single photons and
their manipulation through linear optical elements. In particular, we have
designed and implemented a one-way quantum finite automaton $\cal A$ accepting
the unary language $L_{m}={\{a^{k}\!\ \mid\ k\in{\mathbf{N}}\,\mbox{ and }\,k\,{\rm mod}\,\,m=0\}}$
with only 2 basis states and isolated cut point. Notice that any classical finite automaton for $L_{m}$
requires a number of states which grows with $m$.
We have implemented the quantum finite automaton $\cal A$ using the polarization degree of freedom of a single photon and
have exploited a discrimination strategy to reduce the acceptance error probability.
It is worth noting that, for the particular one-way quantum finite automaton we considered, we exploited only
the polarization degree of freedom of (quantized) optical fields and photodetection.
Therefore, one can implement a similar automaton
also exploiting polarization of a classical coherent field (a laser beam) and intensity measurements.
Nevertheless, our experiment uses single photons, which are intrinsically quantum objects, and, thus,
it paves the way to more complex quantum finite automata we are planning to address and which exploit genuine quantum resources,
such as entanglement. In fact, the quantum technology employed in our implementation is the same
used in the current quantum information processing setups based on optical states.
Besides being interesting in itself for fundamental reasons,
our physical realization of the one-way quantum finite automaton $\cal A$
provides a concrete implementation of a small quantum computational
component that can be used to physically
build more sophisticated and precise quantum finite automata. Indeed,
several modular design frameworks have been modeled and widely investigated from a theoretical point
of view
AF98 ; AY18 ; AN09 ; BMP03 ; BMP03a ; BMP05 ; BMP06 ; BMP14 ; BMP17 ; MP02 ; MP07 ; MPP01
to build succinct and precise quantum finite
automata performing different tasks, where the module $\cal A$ plays
a crucial role. Within these frameworks, by suitably assembling
a sufficient number of $\cal A$-like modules via traditional compositions of quantum systems (i.e., direct products and sums),
the existence of succinct and precise quantum finite automata has been theoretically shown.
From this perspective, our results are instrumental to a deeper understanding of possible
physical implementations of these design frameworks by means of photonic
technology, and pave the way for the construction of other more powerful
models of quantum finite automata.
Acknowledgements.
M. G. A. Paris is member of GNFM-INdAM. C. Mereghetti and B. Palano are members of GNCS-INdAM.
References
(1)
C.H. Bennett, E. Bernstein, G. Brassard, U. Vazirani.
Strength and weakness of quantum computing.
SIAM Journal on Computing, 26:1510–1523, 1997.
(2)
E. Bernstein, U. Vazirani.
Quantum complexity theory.
SIAM J. Comput., 26:1411–1473, 1997.
A preliminary version appeared in Proc. 25th ACM Symp. on Theory of Computing (STOC), 11–20, 1993.
(3)
J. Gruska.
Quantum Computing.
McGraw-Hill, 1999.
(4)
M. Hirvensalo.
Quantum Computing.
Springer, 2004.
(5)
M.A. Nielsen, I.L. Chuang.
Quantum Computation and Quantum Information.
Cambridge University Press, 2011.
(6)
A.S. Holevo.
Some estimates of the information transmitted by quantum communication channels.
Problemy Peredachi Informatsii, 9:3–11, 1973. English translation in
Problems Inf. Transm., 9:177–183, 1973.
(7)
R.S. Ingarden.
Quantum information theory.
Rep. on Mathematical Physics, 10:43–72, 1976.
(8)
R. Feynman.
Simulating physics with computers.
Int. J. Theoretical Physics, 21:467–488, 1982.
(9)
Y.I. Manin.
Vychislimoe i nevychislimoe (Computable and Noncomputable).
Soviet Radio, 1980. In Russian.
(10)
P. Benioff.
Quantum mechanical Hamiltonian models of Turing machines.
J. Stat. Phys., 29:515–546, 1982.
(11)
D. Deutsch.
Quantum theory, the Church-Turing principle and the universal quantum computer.
Proc. Roy. Soc. London, Ser. A, 400:97–117, 1985.
(12)
P. Shor.
Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer.
SIAM J. Comput., 26:1484–1509, 1997.
A preliminary version appeared in Proc. 35th IEEE Symp. on
Foundations of Computer Science (FOCS), 20–22, 1994.
(13)
L. Grover.
A fast quantum mechanical algorithm for database search.
In: Proc. 28th ACM Symp. on Theory of Computing (STOC), 212–219, 1996.
(14)
I.L. Chuang, Y. Yamamoto.
Simple quantum computer.
Physical Rev. A, 52:3489–3496, 1995.
(15)
D.P. DiVincenzo.
The physical implementation of quantum computation.
Fortschritte der Phys., 48:771–783, 2000.
(16)
M. Nakahara, S. Kanemitsu, M.M. Salomaa, S. Takagi (Eds.).
Physical Realizations of Quantum Computing.
World Scientific, 2006.
(17)
S. Feld, C. Linnhoff-Popien (Eds.).
Quantum technology and optimization problems.
Proc. 1st Int. Workshop QTOP 2019, LNCS 11413, Springer, 2019.
(18)
A. Bertoni, M. Carpentieri.
Regular languages accepted by quantum automata.
Information and Computation, 165:174–182, 2001.
(19)
A. Bertoni, M. Carpentieri.
Analogies and differences between quantum and stochastic automata.
Theoretical Computer Science, 262:69–81, 2001.
(20)
A. Brodsky, N. Pippenger.
Characterizations of 1-way quantum finite automata.
SIAM Journal on Computing, 5:1456–1478, 2002.
A preliminary version appeared in arXiv:quant-ph/9903014, 1999.
(21)
C. Moore, J. Crutchfield.
Quantum automata and quantum grammars.
Theoretical Computer Science, 237:275–306, 2000.
A preliminary version appeared in arXiv:quant-ph/9707031, 1997.
(22)
A. Ambainis, M. Beaudry, M. Golovkins, A. Kikusts, M. Mercer,
D. Thérien.
Algebraic results on quantum automata.
Theory of Computing Systems,
39:165–188, 2006.
(23)
A. Ambainis, J. Watrous.
Two-way finite automata with quantum and classical states.
Theoretical Computer Science, 287:299–311, 2002.
(24)
A. Ambainis, A. Yakaryilmaz.
Automata and quantum computing.
arxiv.org/abs/1507.01988v2, 2018.
(25)
A. Bertoni, C. Mereghetti, B. Palano.
Quantum computing: 1-way quantum automata.
In: Proc. 7th Conf. on Developments in Language Theory (DLT).
LNCS 2710, 1–20, Springer, 2003.
(26)
A. Bertoni, C. Mereghetti, B. Palano.
Trace monoids with idempotent generators and measure-only quantum automata.
Natural Computing, 9:383–395, 2010.
(27)
C. Mereghetti, B. Palano.
Upper bounds on the size of one-way quantum finite automata.
In: Proc. 7th Italian Conf. on Theoretical Computer Science (ICTCS).
LNCS 2202, 123–135, Springer, 2001.
(28)
S. Zheng, D. Qiu, L. Li, J. Gruska.
One-way finite automata with quantum and classical states.
In: H. Bordihn, M. Kutrib, B. Truthe (Eds.), Languages Alive, - Essays Dedicated to Jürgen Dassow on
the Occasion of His 65th Birthday. LNCS 7300, 273–290, Springer, 2012.
(29)
M. Holzer, M. Kutrib.
Descriptional complexity—an introductory survey.
In: C. Martín-Vide (Ed.), Scientific Applications of Language Methods, 1–58,
Imperial College Press, 2010.
(30)
A. Ambainis, R. Freivalds.
1-way quantum finite automata: strengths, weaknesses and
generalizations.
In: Proc. 39th Symp. on Foundations of Computer Science (FOCS), 332–342, 1998.
(31)
A. Ambainis, N. Nahimovs.
Improved constructions of quantum automata.
Theoretical Computer Science, 410:1916–1922, 2009.
(32)
A. Bertoni, C. Mereghetti, B. Palano.
Approximating stochastic events by quantum automata.
In: Proc. ERATO Conf. on Quantum Information Science, 43–44, 2003.
(33)
A. Bertoni, C. Mereghetti, B. Palano.
Small size quantum automata recognizing some regular languages.
Theoretical Computer Science, 340:394–407, 2005.
(34)
A. Bertoni, C. Mereghetti, B. Palano.
Some formal tools for analyzing quantum automata.
Theoretical Computer Science, 356:14–25, 2006.
(35)
M.P. Bianchi, C. Mereghetti, B. Palano.
Complexity of promise problems on classical and quantum automata.
In: C.S. Calude, R. Freivalds, K. Iwama (Eds.),
Computing with New Resources, Essays Dedicated to Jozef Gruska on
the Occasion of his 80th Birthday.
LNCS 8808, 161–175, Springer, 2014.
(36)
M.P. Bianchi, C. Mereghetti, B. Palano.
Quantum finite automata: Advances on Bertoni’s ideas.
Theoretical Computer Science, 664:39–53, 2017.
(37)
C. Mereghetti, B. Palano.
On the size of one-way quantum finite automata with periodic behaviors.
Theoretical Informatics and Applications, 36:277–291, 2002.
(38)
C. Mereghetti, B. Palano.
Quantum automata for some multiperiodic languages.
Theoretical Computer Science, 387:177–186, 2007.
(39)
J.E. Hopcroft, R. Motwani, J.D. Ullman.
Introduction to Automata Theory, Languages, and Computation.
Addison-Wesley, 2001.
(40)
J. Hopcroft, J. Ullman.
Introduction to Automata Theory, Languages, and Computation.
Addison-Wesley, 1979.
(41)
A. Paz.
Introduction to Probabilistic Automata.
Academic Press, New York, London, 1971.
(42)
M. Rabin, D. Scott.
Finite automata and their decision problems.
IBM J. Res. Develop., 3:114–125, 1959.
(43)
M.O. Rabin.
Probabilistic automata.
Information and Control, 6:230–245, 1963.
(44)
J. Gruska.
Descriptional complexity issues in quantum computing.
J. Aut., Lang. and Comb., 5:191–218, 2000.
(45)
C. Dwork, L. Stockmeyer.
A time complexity gap for two-way probabilistic finite-state automata.
SIAM Journal on Computing, 19:1011–1023, 1990.
(46)
J. Kaneps.
Regularity of one-letter languages acceptable by 2-way finite probabilistic automata.
In: Proc. 8th Int. Symp. on Fundamentals of Computation Theory (FCT), LNCS 529, 287–296, Springer, 1991.
(47)
J.C. Shepherdson.
The reduction of two-way automata to one-way automata.
IBM J. Res. Develop., 3:198–200, 1959.
(48)
M. Chrobak.
Finite automata and unary languages.
Theoretical Computer Science, 47:149–158, 1986.
Corrigendum. ibid., 302:497–498, 2003.
(49)
C. Mereghetti, G. Pighizzini.
Two-way automata simulations and unary languages.
J. Automata, Languages and Combinatorics, 5:287–300, 2000.
(50)
C. Mereghetti, G. Pighizzini.
Optimal simulations between unary automata.
SIAM Journal on Computing, 30:1976–1992, 2001.
(51)
M.P. Bianchi, G. Pighizzini.
Normal forms for unary probabilistic automata.
Theoretical Informatics and Applications, 46:495–510, 2012.
(52)
M. Milani, G. Pighizzini.
Tight bounds on the simulation of unary probabilistic automata by deterministic automata.
J. Automata, Languages and Combinatorics, 6:481–492, 2001.
(53)
M. Marcus, H. Minc.
Introduction to Linear Algebra.
The Macmillan Company, 1965. Reprinted by Dover, 1988.
(54)
P.A.M. Dirac.
A new notation for quantum mechanics.
Mathematical Proceedings of the Cambridge Philosophical Society,
35:416–418, 1939.
(55)
R.I.G. Hughes.
The Structure and Interpretation of Quantum Mechanics.
Harvard University Press, 1992.
(56)
J.-E. Pin.
Varieties of Formal Languages.
North Oxford Academic, 1986.
(57)
A. Kondacs, J. Watrous.
On the power of quantum finite state automata.
In: Proc. 38th Symp. on Foundations of Computer Science (FOCS), 66–75, 1997.
(58)
M.P. Bianchi, C. Mereghetti, B. Palano.
On the power of one-way automata with quantum and classical states.
Int. J. Foundations of Computer Science, 26:895–912, 2015.
(59)
M. Hirvensalo.
Quantum automata with open time evolution.
Int. J. Natural Computing Research,
1:70–85, 2010.
(60)
C. Mereghetti, B. Palano.
Quantum finite automata with control language.
Theoretical Informatics and Applications, 40:315–332, 2006.
(61)
M.P. Bianchi, B. Palano.
Behaviours of unary quantum automata.
Fundamenta Informaticae, 104:1–15, 2010.
(62)
F. Ablayev, A. Gainutdinova.
On the lower bounds for one-way quantum automata.
In: Proc. 25th Int. Symp. on Mathematical Foundations of Computer Science (MFCS).
LNCS 1893, 132–140, Springer, 2000.
(63)
A. Bertoni, C. Mereghetti, B. Palano.
Lower bounds on the size of quantum automata accepting unary languages.
In: Proc. 8th Italian Conf. on Theoretical Computer Science (ICTCS).
LNCS 2841, 86–96, Springer, 2003.
(64)
M.P. Bianchi, C. Mereghetti, B. Palano.
Size lower bounds for quantum automata.
Theoretical Computer Science, 551:102–115, 2014.
(65)
C. Mereghetti, B. Palano, G. Pighizzini.
Note on the succinctness of deterministic, nondeterministic,
probabilistic and quantum finite automata.
Theoretical Informatics and Applications, 35:477–490, 2001.
(66)
A. Bertoni, C. Mereghetti, B. Palano.
Golomb rulers and difference sets for succinct quantum automata.
Int. J. Foundations of Computer Science, 14:871–888, 2003.
(67)
R. Loudon.
The Quantum Theory of Light.
Oxford University Press, 2000.
(68)
S. Cialdi, M.A.C. Rossi, C. Benedetti, B. Vacchini, D. Tamascelli, S. Olivares, M.G.A. Paris.
All-optical quantum simulator of qubit noisy channels.
Appl. Phys. Lett., 110:081107, 2017.
(69)
R.G.W. Brown, K.D. Ridley, J.G. Rarity.
Characterization of silicon avalanche photodiodes for photon correlation measurements. 1: Passive quenching.
Appl. Opt., 25:4122–4126, 1986.
Magnetic anisotropy and exchange interactions of two-dimensional FePS${}_{3}$, NiPS${}_{3}$ and MnPS${}_{3}$ from first principles calculations
Thomas Olsen
Computational Atomic-scale Materials Design (CAMD), Department of Physics, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
[email protected]
Abstract
The van der Waals bonded transition metal phosphorous trichalcogenides FePS${}_{3}$, NiPS${}_{3}$ and MnPS${}_{3}$ have recently attracted renewed attention due to the possibility of exfoliating them into their monolayers. Although the three compounds have similar electronic structure, the magnetic structure differs due to subtle differences in exchange and magnetic anisotropy, and the materials thus comprise a unique playground for studying different aspects of magnetism in 2D. Here we calculate the exchange and anisotropy parameters of the three materials from first principles, paying special attention to the choice of Hubbard parameter U. We find a strong dependence on the choice of U and show that the calculated Néel temperature of FePS${}_{3}$ varies by an order of magnitude over commonly applied values of U for the Fe $d$-orbitals. The results are compared with parameters fitted to experimental spin-wave spectra of the bulk materials and we find excellent agreement between the exchange constants when a proper value of U is chosen. However, the anisotropy parameters are severely underestimated by DFT and we discuss possible origins of this discrepancy.
I Introduction
The discovery of ferromagnetic order in two-dimensional (2D) CrI${}_{3}$ Huang et al. (2017) in 2017 has initiated a vast interest in the field of 2D magnetism Burch et al. (2018); Soriano et al. (2020); Sethulakshmi et al. (2019); Gibertini et al. (2019). Subsequently, several other magnetic van der Waals bonded compounds McGuire (2017) have been exfoliated to the monolayer limit and shown to exhibit 2D magnetic order Burch et al. (2018); Bonilla et al. (2018); Fei et al. (2018); Pedersen et al. (2018). It is, however, not obvious that monolayers exfoliated from magnetic van der Waals bonded materials retain the magnetic order in general. The reason is that 2D materials cannot exhibit a spontaneously broken spin-rotational symmetry Mermin and Wagner (1966) and either (weak) interlayer interactions or magnetic anisotropy are thus vital ingredients for magnetic order in van der Waals bonded materials. Typically, only the latter case will result in magnetic order for the isolated monolayer and spin-orbit interactions (which are responsible for magnetic anisotropy) thus comprise a crucial prerequisite for 2D magnetism.
The transition metal thiophosphates MPS${}_{3}$ (M=Fe,Ni,Mn) comprise a particularly interesting class of van der Waals magnets Jernberg et al. (1984); Joy and Vasudevan (1992); Wildes et al. (1998, 2015); Lançon et al. (2016, 2018); Xing et al. (2019); Kang et al. (2020) that exhibit rather distinct magnetic properties in the monolayer limit. In bulk form they all exhibit anti-ferromagnetic order in the individual planes, with Néel temperatures of 123 K, 155 K and 78 K for FePS${}_{3}$, NiPS${}_{3}$ and MnPS${}_{3}$, respectively Joy and Vasudevan (1992). However, only FePS${}_{3}$ has been demonstrated to retain its magnetic order in the case of monolayers, with the Néel temperature being reduced to 104–118 K Wang et al. (2016); Lee et al. (2016). This can be understood from the fact that bulk FePS${}_{3}$ exhibits a strong out-of-plane easy axis Joy and Vasudevan (1992), which breaks the rotational symmetry and allows for magnetic order in the monolayer limit. In contrast, magnetic order in NiPS${}_{3}$ has been shown to persist in bilayers, but disappears for a monolayer Kim et al. (2019a). This is expected from the fact that bulk NiPS${}_{3}$ exhibits an easy plane coinciding with the atomic layers; if this anisotropy is maintained in the monolayer limit, there is a residual rotational symmetry, which suppresses magnetic order as a consequence of the Mermin-Wagner theorem. Finally, bulk MnPS${}_{3}$ exhibits an out-of-plane easy axis and would be expected to exhibit magnetic order in the monolayer limit Wildes et al. (1998). However, to our knowledge there are no reports on the magnetic order (or its absence) in monolayers of MnPS${}_{3}$, although magnetic order has been demonstrated in bilayers Kim et al. (2019b).
From the computational community there has been an intense search for new 2D magnets based on high-throughput first principles calculations Mounet et al. (2018); Miyazato et al. (2018); Haastrup et al. (2018); Torelli et al. (2019, 2020); Botana and Norman (2019); Kabiraj et al. (2020), with various attempts at predicting critical temperatures for magnetic order. Such computations do, however, rely crucially on the accuracy of the applied method. In particular, for methods based on density functional theory (DFT), different choices of exchange-correlation functional may lead to predicted exchange constants that differ by a factor of three Olsen (2019). Moreover, for 2D materials it is crucial to obtain accurate predictions for the magnetic anisotropy, which plays a prominent role in the theory of magnetic order. In this paper we address the accuracy of first principles calculations for exchange parameters and anisotropy constants. The calculations are performed on the three 2D compounds FePS${}_{3}$, NiPS${}_{3}$ and MnPS${}_{3}$, since these materials provide convenient examples of different types of magnetic order and comprise realizations of both easy-axis and easy-plane magnetization. We pay particular attention to the effect of the value of U used in DFT+U calculations and show that different choices can lead to significantly different predictions for the magnetic parameters. Finally, we show that a Heisenberg model including single-ion anisotropy and anisotropic exchange is not able to reproduce the large spin-wave gaps observed for the bulk compounds.
The paper is organized as follows. In Sec. II we provide the basic theoretical framework that allows us to determine exchange and anisotropy parameters from DFT calculations. In Sec. III we summarize the computational details of the calculations, and in Sec. IV we present the results. Sec. V provides a summary and a discussion of the results.
II Theory
Whereas DFT can usually faithfully predict the magnetic ground state of a given material, the thermodynamic properties are largely inaccessible by direct computations. Instead, one is led to define a magnetic model that captures the essential interactions and is simple enough to allow for thermodynamic predictions. For insulators, the Heisenberg model Yosida (1996) has proven highly successful in providing qualitative predictions for phase transitions, and if the model parameters are determined by DFT, the model acquires quantitative predictive power Schmitt et al. (2014); Xiang et al. (2013); Olsen (2017); Torelli and Olsen (2018). For the purpose of investigating critical temperatures in 2D materials we thus consider the model Hamiltonian
$$\displaystyle H=-\frac{1}{2}\sum_{ij}J_{ij}\mathbf{S}_{i}\cdot\mathbf{S}_{j}-\frac{1}{2}\sum_{ij}\lambda_{ij}S_{i}^{z}S_{j}^{z}-A\sum_{i}(S_{i}^{z})^{2},$$
(1)
where the sums run over magnetic atoms in the compound and $\mathbf{S}_{i}$ is the spin operator for site $i$. $J_{ij}$ denotes the isotropic exchange between sites $i$ and $j$, $\lambda_{ij}$ is the anisotropic exchange and $A$ denotes the strength of the single-ion anisotropy. For 2D materials it is vital to include the anisotropy terms due to the Mermin-Wagner theorem. Since the anisotropic exchange and single-ion anisotropy only involve the $z$-component of the spin operators, we have implicitly assumed magnetic isotropy in the $xy$-plane, which is taken to coincide with the atomic plane. We have neglected off-diagonal exchange terms (for example, terms proportional to $S_{i}^{x}S_{j}^{y}$). Such terms may give rise to interesting physical effects such as chiral magnetic interactions and Kitaev terms in the Hamiltonian, but will not be considered here since we expect that these have a minor influence on the critical temperature.
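As a concrete illustration of how spin configurations map onto the model, the classical energy of Hamiltonian (1) for unit-length spins can be evaluated by summing each bond once, which absorbs the factors of $1/2$ in the double sums. The sketch below is our own minimal Python helper, not code used for the calculations reported here; the bond-list and array conventions are assumptions.

```python
import numpy as np

def heisenberg_energy(spins, bonds, J, lam, A):
    """Classical energy of Hamiltonian (1).

    spins : (N, 3) array of unit vectors S_i.
    bonds : iterable of (i, j, shell) with each pair counted ONCE
            (this absorbs the 1/2 prefactors of the double sums).
    J, lam: sequences of per-shell couplings J_n and lambda_n.
    A     : single-ion anisotropy constant.
    """
    E = 0.0
    for i, j, shell in bonds:
        E -= J[shell] * spins[i] @ spins[j]          # isotropic exchange
        E -= lam[shell] * spins[i, 2] * spins[j, 2]  # anisotropic exchange
    E -= A * np.sum(spins[:, 2] ** 2)                # single-ion anisotropy
    return E
```

For example, two spins along $z$ joined by a single bond with $J=2$, $\lambda=0.5$ and $A=0.1$ give $E=-2-0.5-0.2=-2.7$.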
The Heisenberg model (1) can be analyzed, for example, with renormalized spin-wave theory Yosida (1996); Yasuda et al. (2005); Gong et al. (2017); Lado and Fernández-Rossier (2017) or classical Monte Carlo simulations Sarikurt et al. (2018); Torelli and Olsen (2018); Lu et al. (2019). The former comprises a full quantum mechanical treatment that is accurate at low temperatures. However, spin-wave interactions are treated at the mean-field level and may become inaccurate in the vicinity of the critical temperature, where the number of spin waves increases dramatically Yasuda et al. (2005). In contrast, classical Monte Carlo simulations completely neglect quantum effects, but include all correlations in the model. At elevated temperatures (close to the critical temperature in particular) quantum effects tend to be quenched and the classical analysis is expected to become accurate Torelli and Olsen (2018), perhaps with the exception of spin-$1/2$ materials Yasuda et al. (2005). In the present work we have thus applied classical Monte Carlo simulations to extract critical temperatures.
In order to make quantitative predictions for real materials, the parameters in the model (1) need to be determined from first principles calculations. Including $n$-nearest neighbor couplings in the model yields $2n+1$ parameters that can be determined from $2n+2$ DFT calculations involving different spin configurations. Since the anisotropy parameters arise from spin-orbit coupling, one can consider $n+1$ spin configurations without spin-orbit coupling and then obtain $2n+2$ total energies with non-selfconsistent spin-orbit coupling by orienting the exchange-correlation magnetic field parallel and orthogonal to the atomic plane for each configuration.
The third nearest neighbor exchange coupling has previously been shown to be particularly important for the transition metal phosphorous trichalcogenides and we thus consider all interactions up to third nearest neighbors in the model (1). It should be noted that each site has 3 first and third nearest neighbors, while it has 6 second nearest neighbors. In contrast to previous works, we also calculate the anisotropy parameters, which are crucial for obtaining reliable estimates of the critical temperature. We thus consider the four spin configurations shown in Fig. 1, which are used to extract the exchange parameters as
$$\displaystyle J_{1}=$$
$$\displaystyle\frac{1}{4S^{2}}(E_{\mathrm{Stripy}}^{\parallel}-E_{\mathrm{FM}}^{\parallel}+E_{\mathrm{Neel}}^{\parallel}-E_{\mathrm{Zigzag}}^{\parallel})$$
(2)
$$\displaystyle J_{2}=$$
$$\displaystyle\frac{1}{8S^{2}}(E_{\mathrm{Stripy}}^{\parallel}-E_{\mathrm{FM}}^{\parallel}-E_{\mathrm{Neel}}^{\parallel}+E_{\mathrm{Zigzag}}^{\parallel})$$
(3)
$$\displaystyle J_{3}=$$
$$\displaystyle\frac{1}{3S^{2}}(E_{\mathrm{FM}}^{\parallel}-E_{\mathrm{Neel}}^{\parallel})-J_{1}$$
(4)
$$\displaystyle\lambda_{1}=$$
$$\displaystyle\frac{1}{4S^{2}}(\Delta E_{\mathrm{Stripy}}-\Delta E_{\mathrm{FM}}+\Delta E_{\mathrm{Neel}}-\Delta E_{\mathrm{Zigzag}})$$
(5)
$$\displaystyle\lambda_{2}=$$
$$\displaystyle\frac{1}{8S^{2}}(\Delta E_{\mathrm{Stripy}}-\Delta E_{\mathrm{FM}}-\Delta E_{\mathrm{Neel}}+\Delta E_{\mathrm{Zigzag}})$$
(6)
$$\displaystyle\lambda_{3}=$$
$$\displaystyle\frac{1}{3S^{2}}(\Delta E_{\mathrm{FM}}-\Delta E_{\mathrm{Neel}})-\lambda_{1}$$
(7)
$$\displaystyle A=$$
$$\displaystyle\lambda_{2}-\frac{1}{2S^{2}}(\Delta E_{\mathrm{Stripy}}+\Delta E_{\mathrm{Zigzag}}),$$
(8)
where $E_{\alpha}^{\parallel}$ is the total energy per magnetic atom of configuration $\alpha$ with the exchange-correlation magnetic field aligned in the atomic plane, and $\Delta E_{\alpha}=E_{\alpha}^{\perp}-E_{\alpha}^{\parallel}$ is the energy difference per magnetic atom for spin state $\alpha$ between spins aligned out of plane ($E^{\perp}_{\alpha}$) and spins aligned in the plane ($E^{\parallel}_{\alpha}$). We note that these parameters were extracted by mapping total energies to the classical Heisenberg model. It has previously been shown that, for nearest neighbor exchange only, it is possible to map the total energies directly to the quantum mechanical Heisenberg model, which yields exchange couplings that are 3–7% lower than those obtained from the classical model Torelli and Olsen (2020). However, in the present case a full quantum mechanical energy mapping analysis would be non-trivial and we will stick with the classical parameters stated above in the following.
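Eqs. (2)–(8) are simple linear combinations of the eight total energies, so the extraction can be sketched in a few lines of Python; the helper and dictionary keys below are our own hypothetical conventions, not part of any released code.

```python
def heisenberg_parameters(E_par, dE, S):
    """Evaluate Eqs. (2)-(8).

    E_par : dict of in-plane total energies per magnetic atom,
            keyed by 'FM', 'Neel', 'Zigzag', 'Stripy'.
    dE    : dict of energy differences E_perp - E_par for the same keys.
    S     : spin quantum number of the magnetic ion.
    """
    s2 = S ** 2
    J1 = (E_par['Stripy'] - E_par['FM'] + E_par['Neel'] - E_par['Zigzag']) / (4 * s2)
    J2 = (E_par['Stripy'] - E_par['FM'] - E_par['Neel'] + E_par['Zigzag']) / (8 * s2)
    J3 = (E_par['FM'] - E_par['Neel']) / (3 * s2) - J1
    l1 = (dE['Stripy'] - dE['FM'] + dE['Neel'] - dE['Zigzag']) / (4 * s2)
    l2 = (dE['Stripy'] - dE['FM'] - dE['Neel'] + dE['Zigzag']) / (8 * s2)
    l3 = (dE['FM'] - dE['Neel']) / (3 * s2) - l1
    A = l2 - (dE['Stripy'] + dE['Zigzag']) / (2 * s2)
    return {'J1': J1, 'J2': J2, 'J3': J3,
            'lam1': l1, 'lam2': l2, 'lam3': l3, 'A': A}
```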
III Computational details
All DFT calculations were performed with the electronic structure code GPAW using the projector-augmented wave method and a plane wave basis Enkovaara et al. (2010); Hjorth Larsen et al. (2017). Spin-orbit coupling was included non-selfconsistently Olsen (2016) and a direction for the spins was chosen by rotating the spin-dependent mean field along the desired direction. We used the PBE+U functional and a plane wave cutoff of 600 eV. The unit cell in all calculations was chosen as shown in Fig. 1 and the Brillouin zone sampling was done on a $\Gamma$-centered $6\times 12$ grid. In calculations with PBE+U, the Hubbard correction U was applied to the transition metal $d$-orbitals. The structures were relaxed until all forces were below 0.05 eV/Å.
In order to obtain critical temperatures of FePS${}_{3}$ we have performed classical Monte Carlo simulations using the Metropolis algorithm with a $20\times 20$ repetition of the minimal unit cell containing two magnetic sites and periodic boundary conditions. We used 100,000 Monte Carlo steps, where each step involves a random spin flip of all sites in the lattice, and the total energy was extracted from an average over the last 20,000 steps. The heat capacity was then evaluated by finite differences between the energies at neighboring temperatures, and the critical temperature was extracted from a Lorentzian fit in the vicinity of the maximum of the heat capacity.
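A single Monte Carlo step of the kind described above (a random spin update attempted at every site) can be sketched as follows. For brevity the sketch keeps only nearest-neighbor isotropic exchange and single-ion anisotropy, with $k_{B}=1$; the data layout and function names are our own simplifications, not the actual simulation code.

```python
import numpy as np

def metropolis_sweep(spins, neighbors, J1, A, T, rng):
    """One Metropolis step: propose a fresh random direction for every spin.

    spins     : (N, 3) array of unit vectors (updated in place).
    neighbors : neighbors[i] lists the nearest neighbors of site i.
    J1, A     : nearest-neighbor exchange and single-ion anisotropy (kB = 1).
    """
    for i in range(len(spins)):
        new = rng.normal(size=3)
        new /= np.linalg.norm(new)                     # uniform on the sphere
        h = spins[neighbors[i]].sum(axis=0)            # local exchange field
        dE = -J1 * (new - spins[i]) @ h \
             - A * (new[2] ** 2 - spins[i][2] ** 2)    # energy change of the move
        if dE <= 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance
            spins[i] = new
    return spins
```

In the actual simulations this sweep would be repeated 100,000 times, with energies accumulated over the final 20,000 sweeps.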
IV Results
IV.1 Magnetic ground state
The magnetic ground states of FePS${}_{3}$, NiPS${}_{3}$ and MnPS${}_{3}$ are shown in Fig. 2. All of the compounds are anti-ferromagnetic, but only MnPS${}_{3}$ acquires the Néel state where each magnetic site is anti-aligned with all nearest neighbors. In contrast, the ground states of FePS${}_{3}$ and NiPS${}_{3}$ exhibit Zigzag-type ordering (see Fig. 1), where each transition metal atom is aligned with two nearest neighbors and anti-aligned with one nearest neighbor, which indicates ferromagnetic nearest neighbor exchange. We note that only the Ferromagnetic and Néel configurations can be represented in the primitive (non-magnetic) unit cell of the lattice. The magnetic ground states of the three compounds are insensitive to the choice of U in a DFT+U treatment for values of U up to 7 eV. However, as will be shown below, the magnitude of the magnetic interactions depends strongly on U.
The relative values of exchange coupling constants in the classical Heisenberg model (1) can be related to the magnetic ground state. For example, if one neglects the contributions from anisotropy the Néel state will be favored over the FM state if $J_{1}+J_{3}<0$, the Zigzag state is favored over the FM state if $J_{1}+4J_{2}+3J_{3}<0$ and the Striped state is favored over the FM state if $J_{1}+2J_{2}<0$. Moreover, the Zigzag state will be favored over the Néel state if $J_{1}>2J_{2}$, which is the case for FePS${}_{3}$ and NiPS${}_{3}$ as shown below.
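These inequalities follow from the classical energies per site of the four collinear configurations, $E=-\tfrac{1}{2}\sum_{n}J_{n}z_{n}\sigma_{n}$, with coordination numbers $z_{1}=z_{3}=3$, $z_{2}=6$ and $\sigma_{n}$ the average spin correlation on the $n$-th shell. The sketch below tabulates these energies directly (our own bookkeeping, assuming unit spins and neglecting anisotropy) and reproduces the conditions stated above.

```python
def classical_ground_state(J1, J2, J3):
    """Return the lowest-energy collinear configuration on the honeycomb
    lattice for the isotropic part of model (1), from the classical
    energies per site E = -(1/2) * sum_n J_n z_n sigma_n."""
    energies = {
        'FM':     -0.5 * ( 3 * J1 + 6 * J2 + 3 * J3),
        'Neel':   -0.5 * (-3 * J1 + 6 * J2 - 3 * J3),
        'Zigzag': -0.5 * (     J1 - 2 * J2 - 3 * J3),
        'Stripy': -0.5 * (    -J1 - 2 * J2 + 3 * J3),
    }
    return min(energies, key=energies.get)
```

With $J_{1}>0$ and a strongly anti-ferromagnetic $J_{3}$ this selects the Zigzag state, while all anti-ferromagnetic couplings select the Néel state, consistent with the results discussed below.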
Due to the Mermin-Wagner theorem, a magnetic easy axis is required for magnetic order at finite temperatures. In the present case only FePS${}_{3}$ is predicted to have an easy axis, whereas NiPS${}_{3}$ and MnPS${}_{3}$ both have easy planes coinciding with the atomic planes. Monolayers of FePS${}_{3}$ have indeed been found to exhibit anti-ferromagnetic order up to 118 K in experiments Lee et al. (2016), whereas the magnetic order in NiPS${}_{3}$ has been shown to be quenched in the monolayer limit down to 10 K Kim et al. (2019a). Bulk MnPS${}_{3}$ has been argued to be largely isotropic Joy and Vasudevan (1992), which is expected since the half-filled $d$-shell with $S=5/2$ forms an orbital singlet. We find an anisotropy energy of 0.053 meV per Mn atom (energy difference between spins oriented in-plane and out-of-plane). This is, however, slightly larger than the value of 0.037 meV found for the $S=1$ material NiPS${}_{3}$, which indicates that a priori prediction of spin-orbit effects is highly challenging. In addition, both values are an order of magnitude smaller than the value for FePS${}_{3}$, which is found to be 0.45 meV per Fe atom.
IV.2 Heisenberg parameters and critical temperatures
The Heisenberg parameters of Eq. (1) have been calculated for the three materials studied in this work. However, the parameters turn out to be rather sensitive to the value of U used in a DFT+U approach. In Fig. 3 we show the isotropic exchange constants as well as the single-ion anisotropy as a function of U for the three compounds. In all cases we observe a reduction of the exchange constants by a factor of 2–4 when increasing U from 1 eV to 5 eV. The effect is most dramatic in FePS${}_{3}$, where $J_{1}S^{2}$ decreases from 16 meV to 2.3 meV. To rationalize this trend one may argue that larger values of U tend to increase orbital localization and therefore decrease the overlap between wavefunctions. In the case of direct exchange interactions this will in general decrease exchange integrals and therefore decrease the magnitude of exchange interactions. For superexchange the exchange constants are roughly given by $-t^{2}/U$, where $t$ is a hopping matrix element; in that case one would also expect a decreased magnitude of the exchange coupling constants. In reality the effect of U may, however, be significantly more complicated than this simplistic picture; in CrI${}_{3}$, for example, it has been shown that increasing $U$ tends to increase the magnitude of the exchange coupling constants Torelli et al. (2019). Nevertheless, for FePS${}_{3}$, NiPS${}_{3}$ and MnPS${}_{3}$ we observe a sizeable decrease in the exchange coupling constants when increasing the Hubbard parameter. In addition, we observe a significant decrease in single-ion anisotropy with increasing Hubbard corrections. This can be rationalized from the fact that the spin-orbit coupling is completely dominated by spherical contributions to the crystal field in the vicinity of the nuclei. The magnetic anisotropy thus arises from hybridization effects, which are suppressed by the Hubbard corrections.
A similar picture was observed for CrI${}_{3}$, although in that case the anisotropic exchange increases with increasing Hubbard corrections, resulting in an overall increase in the magnetic anisotropy Torelli et al. (2019). The predicted values of the anisotropic exchange constants are negligible in all three cases.
In Tab. 1 we present the parameters calculated with PBE+U using values of U that provide the best agreement with experimentally determined values for the bulk materials (also displayed for reference). For all three materials we note that the magnitude of the second-nearest neighbor coupling $J_{2}$ is much smaller than $J_{1}$ and $J_{3}$. The magnitudes of $J_{1}$ and $J_{3}$ in FePS${}_{3}$ are similar, and the fact that $J_{1}>0$ and $J_{3}<0$ determines the Zigzag state as the magnetic ground state. In the case of NiPS${}_{3}$ the anti-ferromagnetic $J_{3}$ is completely dominant and the positive $J_{1}$ again determines the ground state to have Zigzag order. In contrast, all exchange coupling constants of MnPS${}_{3}$ are anti-ferromagnetic and the material exhibits Néel-type order. The results are in reasonable agreement with experimental values at the chosen values of the Hubbard parameter $U$, but will deviate significantly if other values are applied. We note that the results appear to be in disagreement with previous PBE+U calculations for MnPS${}_{3}$ Sivadas et al. (2015) that yielded $J_{1}=-1.58$ meV, $J_{2}=-0.08$ meV and $J_{3}=-0.46$ meV (note the different convention for $J_{i}$ in Ref. Sivadas et al. (2015)) using a Hubbard parameter of 5 eV. In that work, however, the experimental lattice parameter of $a=5.88\;\AA$ was used, whereas we have used the PBE relaxed structure with $a=6.15\;\AA$. If we base the calculations on a relaxed structure using the experimental lattice parameter and U = 5 eV we obtain $J_{1}=-1.6$ meV, $J_{2}=-0.076$ meV and $J_{3}=-0.36$ meV, which is in very good agreement with the experimental values as well as the previous theoretical predictions Sivadas et al. (2015). Redoing the calculations with the experimental lattice parameter and U = 3 eV, however, leads to parameters that are roughly twice the experimental values.
The exchange parameters are thus highly sensitive to the lattice parameter and the resulting interatomic distances, although a modified value of $U$ can be applied to correct for the error originating from an overestimated lattice parameter. In this respect, MnPS${}_{3}$ is a rather extreme case, where PBE overestimates the lattice parameter by 4.6 %. In any case, the results are seen to depend strongly on the choice of U, and the agreement with experimental values thus appears fortuitous, since it is vital to choose the correct value of U, which is not known a priori.
The values of the exchange and anisotropy constants have a crucial influence on any magnetic property calculated for the system. As an example, Fig. 4 shows the magnon dispersion relation of MnPS${}_{3}$ calculated from the Heisenberg model (1) using different values of U (see Appendix for details). The band width increases by a factor of 2.5 when the value of U is decreased from U = 5 eV to U = 1 eV. A band width of 12 meV has been determined from inelastic neutron scattering Wildes et al. (1998) and agrees well with the calculated dispersion relation using U = 3 eV. This is of course expected, since the exchange parameters agree with the experimental ones that were extracted from the measured magnon dispersion.
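For orientation, the scaling of the band width with the exchange constants can be read off from the simplest case: in linear spin-wave theory for a Néel honeycomb antiferromagnet with nearest-neighbor exchange only (a simplification of the full $J_{1}$–$J_{3}$ model used for Fig. 4), the dispersion reads

$$\omega(\mathbf{k})=3|J_{1}|S\sqrt{1-|\gamma_{\mathbf{k}}|^{2}}\,,\qquad\gamma_{\mathbf{k}}=\frac{1}{3}\sum_{\boldsymbol{\delta}}e^{i\mathbf{k}\cdot\boldsymbol{\delta}}\,,$$

where the sum runs over the three nearest-neighbor vectors $\boldsymbol{\delta}$. The magnon band width is thus proportional to $|J_{1}|S$, so a U-induced reduction of the exchange constants translates directly into a proportional narrowing of the spectrum.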
In contrast to the exchange parameters, the predicted single-ion anisotropy differs from experimental values by more than an order of magnitude for FePS${}_{3}$ and NiPS${}_{3}$. Experimentally the value is determined from the spin-wave gap of the bulk material, which is assumed to originate from single-ion anisotropy. The single-ion anisotropy parameters have thus been estimated to 2.7, 0.3 and 0.009 meV for FePS${}_{3}$, NiPS${}_{3}$, and MnPS${}_{3}$, respectively Lançon et al. (2018).
The theoretical predictions appear to be much too small irrespective of the value chosen for the Hubbard parameter. Moreover, only the case of FePS${}_{3}$ yields a theoretical prediction of a positive value of the single-ion anisotropy, corresponding to an easy axis orthogonal to the atomic plane. In contrast, experiments indicate that all three bulk materials have positive values. Due to the small magnitude of the interlayer exchange coupling constants Lançon et al. (2018), it does not seem likely that this discrepancy originates from the fact that the experimental values refer to the bulk materials. There may be other effects contributing to the spin-wave gap that are not accounted for in the fit to spin-wave spectra, but it is far from clear how such effects could give rise to spin-wave gaps an order of magnitude larger than the theoretical predictions. On the other hand, spin-orbit effects are usually well accounted for in DFT and are not highly sensitive to the choice of functional, so it is not obvious why DFT would make an order of magnitude error for these materials either. We have tested that the single-ion anisotropy does not change significantly when using experimental lattice parameters instead of the relaxed structure (we get A=0.116 meV for FePS${}_{3}$ using the experimental lattice constant and U=2 eV). For now, the origin of this discrepancy remains an open question.
The magnetic anisotropy plays a crucial role in the magnetic order of 2D materials. In particular, an easy axis is required for a 2D material to exhibit magnetic order at finite temperatures. However, the critical temperature depends only logarithmically on the magnetic anisotropy and is thus not very sensitive to the magnitude of the single-ion anisotropy constant. In Fig. 5 we show classical Monte Carlo simulations of the heat capacity of FePS${}_{3}$ using the parameters obtained from DFT with different Hubbard corrections. The heat capacity has a peak at the Néel temperature, which is seen to depend strongly on U. We also show a simulation using the experimental parameters. The main difference between this set of parameters and the calculated ones with U = 2 eV is the single-ion anisotropy, which is more than 20 times larger in the experimental determination. The predicted critical temperature, however, is only slightly larger than the theoretical result with U = 2 eV, but the heat capacity is more sharply peaked due to the stronger "Ising-like" nature of the material implied by the experimental parameters.
In Fig. 6 we show the critical temperatures extracted from the Monte Carlo simulations of the heat capacity, plotted as a function of U. First of all, we note that the experimental parameters yield a critical temperature of 89 K, which is somewhat lower than the experimentally determined value of 118 K. From DFT it appears that $U<1$ eV is required to reproduce the experimental critical temperature, but this could be due to a strong underestimation of the single-ion anisotropy. We stress again that the theoretical predictions depend strongly on the chosen value of the Hubbard correction.
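The classical Monte Carlo procedure used above can be sketched as follows. This is a minimal illustration on a small square lattice with a single exchange constant $J$ and single-ion anisotropy $A$ (the actual calculation uses the honeycomb lattice with three exchange constants); all parameter values here are placeholders, and the heat capacity is obtained from energy fluctuations, $C=(\langle E^{2}\rangle-\langle E\rangle^{2})/k_{B}T^{2}$.

```python
import numpy as np

def heat_capacity(L=8, J=-1.0, A=0.1, T=1.0, sweeps=300, seed=0):
    """Metropolis Monte Carlo for classical unit spins with Hamiltonian
    H = -J sum_<ij> S_i.S_j - A sum_i (S_i^z)^2 on an L x L square
    lattice with periodic boundaries (J < 0: antiferromagnetic).
    Returns the heat capacity per site, C = (<E^2> - <E>^2) / (N T^2),
    in units where k_B = 1. Toy model, not the honeycomb calculation."""
    rng = np.random.default_rng(seed)
    S = rng.normal(size=(L, L, 3))
    S /= np.linalg.norm(S, axis=-1, keepdims=True)

    def local_energy(i, j):
        nn = S[(i+1) % L, j] + S[(i-1) % L, j] + S[i, (j+1) % L] + S[i, (j-1) % L]
        return -J * S[i, j] @ nn - A * S[i, j, 2]**2

    samples = []
    for sweep in range(sweeps):
        for _ in range(L * L):            # one sweep = N attempted moves
            i, j = rng.integers(L, size=2)
            old = S[i, j].copy()
            e_old = local_energy(i, j)
            S[i, j] = rng.normal(size=3)
            S[i, j] /= np.linalg.norm(S[i, j])
            dE = local_energy(i, j) - e_old
            if dE > 0 and rng.random() >= np.exp(-dE / T):
                S[i, j] = old             # reject the move
        if sweep >= sweeps // 2:          # discard the first half (equilibration)
            E = -J * np.sum(S * (np.roll(S, 1, 0) + np.roll(S, 1, 1))) \
                - A * np.sum(S[..., 2]**2)
            samples.append(E)
    E = np.array(samples)
    return E.var() / (L * L * T**2)

print(f"C per site at T = 1.0: {heat_capacity():.3f}")
```

The Néel temperature is then located from the peak of $C(T)$ computed on a grid of temperatures.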
V Discussion
We have presented the Heisenberg parameters of FePS${}_{3}$, NiPS${}_{3}$, and MnPS${}_{3}$ as predicted by DFT using the PBE+U approach. It was demonstrated that the magnitude of the parameters depends crucially on the chosen value of U. This is important to bear in mind, since the value of U is often chosen to reproduce a particular experimental signature of the material. For example, in Ref. Qiu et al. (2021) a value of U = 0.5 eV was chosen in order to reproduce STS spectra, whereas values of 2-4 eV are more common for accurate extraction of structural properties Sivadas et al. (2018). It is, however, reassuring that the experimental exchange constants are well reproduced by a particular value of U, which is in the range of commonly applied values.
The observable consequences of different Hubbard corrections were exemplified by the spin-wave band width of MnPS${}_{3}$ and the Néel temperature of FePS${}_{3}$. The experimental band width of MnPS${}_{3}$ was trivially reproduced by U = 3 eV calculations, since the experimental parameters were extracted from the dispersion and U = 3 eV yields parameters in good agreement with the experimental ones. The calculated Néel temperature of FePS${}_{3}$ exhibits a strong dependence on U, which is naturally inherited from the exchange coupling constants. However, in order to reproduce the experimental Néel temperature a value of U$\sim$0.8 eV is required, which yields exchange constants that are not in agreement with the experimental values. In fact, the experimental parameters themselves yield a Néel temperature that is 15% smaller than the experimental one. The simplest explanation could be that the bulk material simply has weaker intralayer exchange constants than the 2D material. Such effects should be straightforward to unravel with DFT, but this is beyond the scope of the present work. Another possibility is the existence of additional Heisenberg terms, such as biquadratic exchange and dipolar interactions, that are simply not accounted for in the present model Hoffmann and Blügel (2020). Finally, it is possible that classical Monte Carlo simulations are insufficient to describe the Néel temperature accurately.
It is highly disturbing that the calculated anisotropy constants deviate from experimental values by more than an order of magnitude. We note, however, that the experimental parameters were derived from the spin-wave gap, which for antiferromagnets depends on the exchange constants as well as the anisotropy parameters Torelli and Olsen (2020). Thus, if additional Heisenberg terms are present, they are expected to modify the spin-wave gap, which would yield different predictions for the experimental single-ion anisotropy parameters. In addition, many-body effects have recently been shown to play a crucial role for the gap opening between the acoustic and optical branches in CrI${}_{3}$ Ke and Katsnelson (2021), even without spin-orbit coupling. Although the spin-wave gap has to vanish in the absence of spin-orbit coupling, it is not unlikely that similar effects could play an important role in determining the size of the gap (and thus the predicted anisotropy constants) once spin-orbit coupling is introduced. We leave these open questions to future work.
Appendix
Here we present the non-interacting magnon spectrum of the Néel state on the honeycomb lattice with three exchange coupling constants. The Heisenberg Hamiltonian is written as
$$\displaystyle H=$$
$$\displaystyle-J_{1}\sum_{\langle ij\rangle}\mathbf{S}_{ai}\cdot\mathbf{S}_{bj}-\frac{J_{2}}{2}\sum_{\langle\langle ij\rangle\rangle}\mathbf{S}_{ai}\cdot\mathbf{S}_{aj}$$
$$\displaystyle-\frac{J_{2}}{2}\sum_{\langle\langle ij\rangle\rangle}\mathbf{S}_{bi}\cdot\mathbf{S}_{bj}-J_{3}\sum_{\langle\langle\langle ij\rangle\rangle\rangle}\mathbf{S}_{ai}\cdot\mathbf{S}_{bj},$$
(9)
where $a$ and $b$ denote the two inequivalent sites in the unit cell. Performing the usual Holstein-Primakoff transformation to second order in raising and lowering operators yields
$$\displaystyle H=$$
$$\displaystyle E_{0}+H_{0}$$
(10)
$$\displaystyle E_{0}=$$
$$\displaystyle NS^{2}(N_{1}J_{1}-N_{2}J_{2}+N_{3}J_{3})$$
(11)
$$\displaystyle H_{0}=$$
$$\displaystyle-SJ_{1}\sum_{\langle ij\rangle}\Big{(}a_{i}b_{j}+a_{i}^{\dagger}b_{j}^{\dagger}+a_{i}^{\dagger}a_{i}+b_{j}^{\dagger}b_{j}\Big{)}$$
$$\displaystyle-SJ_{2}\sum_{\langle\langle ij\rangle\rangle}\Big{(}a_{i}^{\dagger}a_{j}+b_{i}^{\dagger}b_{j}-a_{i}^{\dagger}a_{i}-b_{i}^{\dagger}b_{i}\Big{)}$$
$$\displaystyle-SJ_{3}\sum_{\langle\langle\langle ij\rangle\rangle\rangle}\Big{(}a_{i}b_{j}+a_{i}^{\dagger}b_{j}^{\dagger}+a_{i}^{\dagger}a_{i}+b_{j}^{\dagger}b_{j}\Big{)},$$
(12)
where $N$ is the number of unit cells and $N_{n}$ is the number of $n$’th nearest neighbors.
We emphasize that the Néel state is not an eigenstate of the Heisenberg Hamiltonian but its expectation value is given by $E_{0}$, which coincides with the classical minimum energy.
We then introduce the Fourier transforms
$$\displaystyle a_{i}$$
$$\displaystyle=\sqrt{\frac{1}{N}}\sum_{\mathbf{q}}e^{i\mathbf{q}\cdot\mathbf{R}_{i}}a_{\mathbf{q}}$$
(13)
$$\displaystyle b_{j}$$
$$\displaystyle=\sqrt{\frac{1}{N}}\sum_{\mathbf{q}}e^{-i\mathbf{q}\cdot\mathbf{R}_{j}}b_{\mathbf{q}},$$
(14)
where $\mathbf{R}_{i}$ are the positions of sublattice a sites and $\mathbf{R}_{j}$ are the positions of sublattice b sites.
Inserting into $H_{0}$ yields
$$\displaystyle H_{0}=$$
$$\displaystyle-SJ_{1}\sum_{\mathbf{q}\Delta_{1}}\Big{(}e^{-i\mathbf{q}\cdot\mathbf{R}_{\Delta_{1}}}a_{\mathbf{q}}b_{\mathbf{q}}+e^{i\mathbf{q}\cdot\mathbf{R}_{\Delta_{1}}}a_{\mathbf{q}}^{\dagger}b_{\mathbf{q}}^{\dagger}\Big{)}$$
$$\displaystyle-SN_{1}J_{1}\sum_{\mathbf{q}}\Big{(}a_{\mathbf{q}}^{\dagger}a_{\mathbf{q}}+b_{\mathbf{q}}^{\dagger}b_{\mathbf{q}}\Big{)}$$
$$\displaystyle-SJ_{2}\sum_{\mathbf{q}\Delta_{2}}\Big{(}e^{i\mathbf{q}\cdot\mathbf{R}_{\Delta_{2}}}a^{\dagger}_{\mathbf{q}}a_{\mathbf{q}}+e^{-i\mathbf{q}\cdot\mathbf{R}_{\Delta_{2}}}b_{\mathbf{q}}^{\dagger}b_{\mathbf{q}}\Big{)}$$
$$\displaystyle+SN_{2}J_{2}\sum_{\mathbf{q}}\Big{(}a_{\mathbf{q}}^{\dagger}a_{\mathbf{q}}+b_{\mathbf{q}}^{\dagger}b_{\mathbf{q}}\Big{)}$$
$$\displaystyle-SJ_{3}\sum_{\mathbf{q}\Delta_{3}}\Big{(}e^{-i\mathbf{q}\cdot\mathbf{R}_{\Delta_{3}}}a_{\mathbf{q}}b_{\mathbf{q}}+e^{i\mathbf{q}\cdot\mathbf{R}_{\Delta_{3}}}a_{\mathbf{q}}^{\dagger}b_{\mathbf{q}}^{\dagger}\Big{)}$$
$$\displaystyle-SN_{3}J_{3}\sum_{\mathbf{q}}\Big{(}a_{\mathbf{q}}^{\dagger}a_{\mathbf{q}}+b_{\mathbf{q}}^{\dagger}b_{\mathbf{q}}\Big{)}$$
(15)
$$\displaystyle=$$
$$\displaystyle-S\sum_{\mathbf{q}}\Big{(}N_{1}J_{1}+N_{2}J_{2}[\gamma_{2}(\mathbf{q})-1]+N_{3}J_{3}\Big{)}$$
$$\displaystyle\qquad\times\bigg{[}a_{\mathbf{q}}^{\dagger}a_{\mathbf{q}}+b_{\mathbf{q}}^{\dagger}b_{\mathbf{q}}+\tilde{\gamma}(\mathbf{q})a_{\mathbf{q}}b_{\mathbf{q}}+\tilde{\gamma}^{*}(\mathbf{q})a_{\mathbf{q}}^{\dagger}b_{\mathbf{q}}^{\dagger}\bigg{]}$$
(16)
where
$$\displaystyle\tilde{\gamma}(\mathbf{q})=\frac{N_{1}J_{1}\gamma_{1}(\mathbf{q})+N_{3}J_{3}\gamma_{3}(\mathbf{q})}{N_{1}J_{1}+N_{2}J_{2}[\gamma_{2}(\mathbf{q})-1]+N_{3}J_{3}},$$
(17)
$$\displaystyle\gamma_{n}(\mathbf{q})=\frac{1}{N_{n}}\sum_{\Delta_{n}}e^{-i\mathbf{q}\cdot\mathbf{R}_{\Delta_{n}}},$$
(18)
and $\mathbf{R}_{\Delta_{n}}$ are the vectors connecting the $n$’th nearest neighbor atoms.
The Hamiltonian can now be diagonalized by the Bogoliubov transformation
$$\displaystyle a_{\mathbf{q}}$$
$$\displaystyle=\cosh\theta_{\mathbf{q}}\alpha_{\mathbf{q}}-\sinh\theta_{\mathbf{q}}\beta_{\mathbf{q}}^{\dagger}$$
(19)
$$\displaystyle b_{\mathbf{q}}$$
$$\displaystyle=-\sinh\theta_{\mathbf{q}}\alpha_{\mathbf{q}}^{\dagger}+\cosh\theta_{\mathbf{q}}\beta_{\mathbf{q}}$$
(20)
where $\alpha_{\mathbf{q}}$ and $\beta_{\mathbf{q}}$ satisfy the usual bosonic commutation relations and $\tanh 2\theta_{\mathbf{q}}=|\tilde{\gamma}(\mathbf{q})|$. The non-interacting part of the Hamiltonian then becomes
$$\displaystyle H_{0}=S$$
$$\displaystyle\sum_{\mathbf{q}}\Big{(}N_{1}J_{1}+N_{2}J_{2}[\gamma_{2}(\mathbf{q})-1]+N_{3}J_{3}\Big{)}$$
(21)
$$\displaystyle\times\bigg{\{}1-\sqrt{1-|\tilde{\gamma}(\mathbf{q})|^{2}}\Big{(}\alpha_{\mathbf{q}}^{\dagger}\alpha_{\mathbf{q}}+\frac{1}{2}+\beta_{\mathbf{q}}^{\dagger}\beta_{\mathbf{q}}+\frac{1}{2}\Big{)}\bigg{\}}.$$
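The diagonalization in Eqs. (19)-(21) can be checked numerically: for a Hamiltonian of the form $A(a^{\dagger}a+b^{\dagger}b)+B(ab+a^{\dagger}b^{\dagger})$, choosing $\tanh 2\theta=B/A$ removes the anomalous terms and gives the magnon energy $\sqrt{A^{2}-B^{2}}$. In the notation of this appendix $A=-SC(\mathbf{q})$ and $B=A\,\tilde{\gamma}(\mathbf{q})$ (taking $\tilde{\gamma}$ real for the check); the specific numbers below are arbitrary test values, not parameters of the paper.

```python
import numpy as np

# Check that the Bogoliubov angle tanh(2θ) = B/A diagonalizes
# H = A (a†a + b†b) + B (ab + a†b†) for |B| < |A|.
A, gamma = 2.0, 0.6          # arbitrary test values with |γ̃| < 1
B = A * gamma
theta = 0.5 * np.arctanh(B / A)

diag = A * np.cosh(2 * theta) - B * np.sinh(2 * theta)          # coeff. of α†α + β†β
offdiag = B * np.cosh(2 * theta) - A * np.sinh(2 * theta)       # coeff. of αβ + α†β†
const = A * (np.cosh(2 * theta) - 1) - B * np.sinh(2 * theta)   # zero-point shift

print(np.isclose(offdiag, 0.0))                # anomalous terms cancel
print(np.isclose(diag, np.sqrt(A**2 - B**2)))  # ε = A·sqrt(1 - γ̃²)
print(np.isclose(const, diag - A))             # constant shift = ε - A
```

With $A=-SC(\mathbf{q})$ this reproduces both the dispersion of Eq. (23) and the zero-point correction entering Eq. (22).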
The new operators $\alpha_{\mathbf{q}}$ and $\beta_{\mathbf{q}}$ define a new "non-interacting magnon" (NIM) ground state through $\alpha_{\mathbf{q}}|0\rangle_{\mathrm{NIM}}=\beta_{\mathbf{q}}|0\rangle_{\mathrm{NIM}}=0$. This state has a lower energy than the Néel state, which is given by
$$\displaystyle E_{0}^{\mathrm{NIM}}=E_{0}+SN\bigg{\langle}\Big{(}N_{1}J_{1}+N_{2}J_{2}[\gamma_{2}(\mathbf{q})-1]+N_{3}J_{3}\Big{)}$$
$$\displaystyle\times\Big{(}1-\sqrt{1-|\tilde{\gamma}(\mathbf{q})|^{2}}\Big{)}\bigg{\rangle}_{BZ}.$$
(22)
We have written the sum as a BZ average denoted by $\langle\ldots\rangle_{BZ}$ and multiplied by $N$ since the $\mathbf{q}$-sum contains $N$ terms.
Finally, the single magnon excited states have an energy relative to the NIM state given by
$$\displaystyle\varepsilon_{\mathbf{q}}=-S\Big{(}N_{1}J_{1}+N_{2}J_{2}[\gamma_{2}(\mathbf{q})-1]+N_{3}J_{3}\Big{)}\sqrt{1-|\tilde{\gamma}(\mathbf{q})|^{2}}.$$
(23)
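Eqs. (17), (18), and (23) can be evaluated directly. The sketch below constructs the first-, second-, and third-neighbor vectors of the honeycomb lattice and computes $\varepsilon_{\mathbf{q}}$; the spin and exchange values are illustrative placeholders, not the DFT parameters of this work.

```python
import numpy as np

# Magnon dispersion of Eq. (23) on the honeycomb lattice.
d = 1.0  # nearest-neighbor distance (arbitrary units)
delta1 = d * np.array([[1.0, 0.0],
                       [-0.5,  np.sqrt(3) / 2],
                       [-0.5, -np.sqrt(3) / 2]])       # 1st NN (a -> b), N1 = 3
delta2 = np.array([v1 - v2 for v1 in delta1 for v2 in delta1
                   if not np.allclose(v1, v2)])         # 2nd NN (a -> a), N2 = 6
delta3 = -2.0 * delta1                                  # 3rd NN (a -> b), N3 = 3

def gamma(q, vecs):
    """Structure factor gamma_n(q) of Eq. (18)."""
    return np.mean(np.exp(-1j * vecs @ q))

def magnon_energy(q, S=2.5, J1=-0.77, J2=-0.07, J3=-0.18):
    """Eq. (23); J_n < 0 is antiferromagnetic in the sign convention of Eq. (9).
    The parameter values are illustrative placeholders."""
    N1, N2, N3 = 3, 6, 3
    C = N1 * J1 + N2 * J2 * (gamma(q, delta2).real - 1) + N3 * J3
    gt = (N1 * J1 * gamma(q, delta1) + N3 * J3 * gamma(q, delta3)) / C
    return -S * C * np.sqrt(1 - abs(gt)**2)

print(magnon_energy(np.array([0.0, 0.0])))   # Goldstone mode: ε = 0 at q = 0
print(magnon_energy(np.array([np.pi, 0.0]))) # finite energy away from q = 0
```

At $\mathbf{q}=0$ all $\gamma_{n}=1$, so $\tilde{\gamma}=1$ and the Goldstone mode $\varepsilon_{\mathbf{0}}=0$ is reproduced; including a single-ion anisotropy term would open a gap here.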
References
Huang et al. (2017)
B. Huang et al., Nature 546, 270 (2017).
Burch et al. (2018)
K. S. Burch, D. Mandrus, and J.-G. Park, Nature 563, 47 (2018).
Soriano et al. (2020)
D. Soriano, M. I. Katsnelson, and J. Fernández-Rossier, Nano Letters 20, 6225 (2020).
Sethulakshmi et al. (2019)
N. Sethulakshmi et al., Mater. Today 27, 107
(2019).
Gibertini et al. (2019)
M. Gibertini, M. Koperski,
A. F. Morpurgo, and K. S. Novoselov, Nat. Nanotechnol. 14, 408 (2019).
McGuire (2017)
M. McGuire, Crystals 7, 121 (2017).
Bonilla et al. (2018)
M. Bonilla et al., Nat. Nanotechnol. 13, 289 (2018).
Fei et al. (2018)
Z. Fei et al., Nat. Mater. 17, 778 (2018).
Pedersen et al. (2018)
K. S. Pedersen et al., Nat. Chem. 10, 1056 (2018).
Mermin and Wagner (1966)
N. D. Mermin and H. Wagner, Phys. Rev. Lett. 17, 1133 (1966).
Jernberg et al. (1984)
P. Jernberg, S. Bjarman, and R. Wäppling, Journal of Magnetism and Magnetic Materials 46, 178 (1984).
Joy and Vasudevan (1992)
P. A. Joy and S. Vasudevan, Physical Review B 46, 5425 (1992).
Wildes et al. (1998)
A. R. Wildes, B. Roessli,
B. Lebech, and K. W. Godfrey, Journal of Physics: Condensed Matter 10, 6417 (1998).
Wildes et al. (2015)
A. R. Wildes, V. Simonet,
E. Ressouche, G. J. McIntyre, M. Avdeev, E. Suard, S. A. J. Kimber, D. Lançon, G. Pepe,
B. Moubaraki, and T. J. Hicks, Phys. Rev. B 92, 224408 (2015).
Lançon et al. (2016)
D. Lançon, H. C. Walker, E. Ressouche,
B. Ouladdiaf, K. C. Rule, G. J. McIntyre, T. J. Hicks, H. M. Rønnow, and A. R. Wildes, Physical Review B 94, 1
(2016).
Lançon et al. (2018)
D. Lançon, R. A. Ewings, T. Guidi,
F. Formisano, and A. R. Wildes, Physical Review B 98, 134414 (2018).
Xing et al. (2019)
W. Xing, L. Qiu, X. Wang, Y. Yao, Y. Ma, R. Cai, S. Jia, X. C. Xie, and W. Han, Physical Review X 9, 011026 (2019).
Kang et al. (2020)
S. Kang, K. Kim, B. H. Kim, J. Kim, K. I. Sim, J.-U. Lee, S. Lee, K. Park, S. Yun, T. Kim, A. Nag, A. Walters, M. Garcia-Fernandez, J. Li, L. Chapon, K.-J. Zhou, Y.-W. Son, J. H. Kim, H. Cheong, and J.-G. Park, Nature 583, 785 (2020).
Wang et al. (2016)
X. Wang, K. Du, Y. Y. Fredrik Liu, P. Hu, J. Zhang, Q. Zhang, M. H. S. Owen, X. Lu, C. K. Gan,
P. Sengupta, C. Kloc, and Q. Xiong, 2D
Materials 3, 031009
(2016).
Lee et al. (2016)
J.-U. Lee et al., Nano Lett. 16, 7433 (2016).
Kim et al. (2019a)
K. Kim et al., Nat. Commun. 10, 345 (2019a).
Kim et al. (2019b)
K. Kim et al., 2D Mater. 6, 041001 (2019b).
Mounet et al. (2018)
N. Mounet et al., Nat. Nanotechnol. 13, 246 (2018).
Miyazato et al. (2018)
I. Miyazato, Y. Tanaka, and K. Takahashi, Journal of Physics: Condensed Matter 30, 06LT01 (2018).
Haastrup et al. (2018)
S. Haastrup et al., 2D Mater. 5, 042002 (2018).
Torelli et al. (2019)
D. Torelli, K. S. Thygesen, and T. Olsen, 2D Mater. 6, 045018 (2019).
Torelli et al. (2020)
D. Torelli, H. Moustafa,
K. W. Jacobsen, and T. Olsen, npj Computational Materials 6, 158 (2020).
Botana and Norman (2019)
A. S. Botana and M. R. Norman, Physical Review Materials 3, 044001 (2019).
Kabiraj et al. (2020)
A. Kabiraj, M. Kumar, and S. Mahapatra, Npj Comput. Mater. 6, 35 (2020).
Olsen (2019)
T. Olsen, MRS Commun. 9, 1142 (2019).
Yosida (1996)
K. Yosida, Theory of magnetism (Springer Berlin, Heidelberg, 1996).
Schmitt et al. (2014)
M. Schmitt, O. Janson,
S. Golbs, M. Schmidt, W. Schnelle, J. Richter, and H. Rosner, Physical Review B 89, 174403 (2014).
Xiang et al. (2013)
H. Xiang, C. Lee, H.-J. Koo, X. Gong, and M.-H. Whangbo, Dalton
Trans. 42, 823 (2013).
Olsen (2017)
T. Olsen, Phys. Rev. B 96, 125143 (2017).
Torelli and Olsen (2018)
D. Torelli and T. Olsen, 2D Mater. 6, 015028 (2018).
Yasuda et al. (2005)
C. Yasuda, S. Todo,
K. Hukushima, F. Alet, M. Keller, M. Troyer, and H. Takayama, Phys. Rev. Lett. 94, 217201 (2005).
Gong et al. (2017)
C. Gong et al., Nature 546, 265 (2017).
Lado and Fernández-Rossier (2017)
J. L. Lado and J. Fernández-Rossier, 2D Mater. 4, 035002 (2017).
Sarikurt et al. (2018)
S. Sarikurt et al., Phys. Chem. Chem. Phys. 20, 997 (2018).
Lu et al. (2019)
X. Lu, R. Fei, and L. Yang, Phys. Rev. B 100, 205409 (2019).
Torelli and Olsen (2020)
D. Torelli and T. Olsen, J.
Phys. Condens. Matter 33, 335802 (2020).
Enkovaara et al. (2010)
J. Enkovaara et al., J. Phys. Condens. Matter 22, 253202 (2010).
Hjorth Larsen et al. (2017)
A. Hjorth Larsen et al., J.
Phys. Condens. Matter 29, 273002 (2017).
Olsen (2016)
T. Olsen, Phys. Rev. B 94, 235106 (2016).
Sivadas et al. (2015)
N. Sivadas, M. W. Daniels, R. H. Swendsen, S. Okamoto, and D. Xiao, Phys. Rev. B 91, 1 (2015).
Qiu et al. (2021)
Z. Qiu, M. Holwill,
T. Olsen, P. Lyu, J. Li, H. Fang, H. Yang, M. Kashchenko,
K. S. Novoselov, and J. Lu, Nature Communications 12, 70 (2021).
Sivadas et al. (2018)
N. Sivadas, S. Okamoto,
X. Xu, C. J. Fennie, and D. Xiao, Nano Lett. 18, 7658
(2018).
Hoffmann and Blügel (2020)
M. Hoffmann and S. Blügel, Physical Review B 101, 024418 (2020).
Ke and Katsnelson (2021)
L. Ke and M. I. Katsnelson, npj Computational Materials 7, 4 (2021). |
Doping evolution of the superconducting gap structure in the underdoped iron arsenide Ba${}_{1-x}$K${}_{x}$Fe${}_{2}$As${}_{2}$ revealed by thermal conductivity
J.-Ph. Reid
Département de physique & RQMP, Université de Sherbrooke, Sherbrooke, Québec J1K 2R1, Canada
M. A. Tanatar
Ames Laboratory, Ames, Iowa 50011, USA
X. G. Luo
Département de physique & RQMP, Université de Sherbrooke, Sherbrooke, Québec J1K 2R1, Canada
H. Shakeripour
Département de physique & RQMP, Université de Sherbrooke, Sherbrooke, Québec J1K 2R1, Canada
Department of Physics, Isfahan University of Technology, Isfahan 84156-83111, Iran
S. René de Cotret
Département de physique & RQMP, Université de Sherbrooke, Sherbrooke, Québec J1K 2R1, Canada
A. Juneau-Fecteau
Département de physique & RQMP, Université de Sherbrooke, Sherbrooke, Québec J1K 2R1, Canada
J. Chang
Département de physique & RQMP, Université de Sherbrooke, Sherbrooke, Québec J1K 2R1, Canada
B. Shen
Center for Superconducting Physics and Materials, National Laboratory of Solid State Microstructures
and Department of Physics, Nanjing University, Nanjing 210093, China
H.-H. Wen
Center for Superconducting Physics and Materials, National Laboratory of Solid State Microstructures
and Department of Physics, Nanjing University, Nanjing 210093, China
Canadian Institute for Advanced Research, Toronto, Ontario M5G 1Z8, Canada
H. Kim
Ames Laboratory, Ames, Iowa 50011, USA
Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, USA
R. Prozorov
Ames Laboratory, Ames, Iowa 50011, USA
Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, USA
N. Doiron-Leyraud
Département de physique & RQMP, Université de Sherbrooke, Sherbrooke, Québec J1K 2R1, Canada
Louis Taillefer
E-mail: [email protected]
Département de physique & RQMP, Université de Sherbrooke, Sherbrooke, Québec J1K 2R1, Canada
Canadian Institute for Advanced Research, Toronto, Ontario M5G 1Z8, Canada
Abstract
The thermal conductivity $\kappa$ of the iron-arsenide superconductor Ba${}_{1-x}$K${}_{x}$Fe${}_{2}$As${}_{2}$ was measured for heat currents parallel and perpendicular
to the tetragonal $c$ axis
at temperatures down to 50 mK
and
in magnetic fields up to 15 T.
Measurements were performed on samples with compositions
ranging from optimal doping
($x=0.34$; $T_{\rm c}$ $=39$ K)
down to dopings deep into the region where antiferromagnetic order coexists with superconductivity
($x=0.16$; $T_{\rm c}$ $=7$ K).
In zero field, there is no residual linear term in $\kappa(T)$ as $T\to 0$
at any doping, whether for in-plane or inter-plane transport.
This shows that there are no nodes in the superconducting gap.
However,
as $x$ decreases into the range of coexistence with antiferromagnetism,
the residual linear term grows more and more rapidly with applied magnetic field.
This shows that the superconducting energy gap develops minima at certain locations
on the Fermi surface and these minima deepen with decreasing $x$.
We propose that the minima in the gap structure arise when the Fermi surface
of Ba${}_{1-x}$K${}_{x}$Fe${}_{2}$As${}_{2}$ is reconstructed by the antiferromagnetic order.
pacs: 74.25.Fc, 74.20.Rp, 74.70.Xa
I Introduction
Soon after the discovery of superconductivity in iron-based materials, Hosono
it was recognized that a conventional phonon-mediated pairing cannot account for
the high critical temperature $T_{\rm c}$.phonon
The observation of superconductivity in proximity to a magnetic quantum critical point Louisreview
points instead to magnetically-mediated pairing,magneticpairing
a scenario also discussed for cuprate and heavy-fermion materials.Normanscience
Because such pairing is based on a repulsive interaction, it implies that the superconducting order parameter
must change sign around the Fermi surface. Scalapino
This is the case for the $d$-wave state realized in cuprate superconductors,
where the gap has symmetry-imposed nodes where the Fermi surface crosses the diagonals
at $k_{x}=k_{y}$.
In the $s_{\pm}$ state proposed for iron-based superconductors,Mazinspm
there are no symmetry-imposed nodes,
but the order parameter has a different sign on the hole and electron pockets.
To identify the pairing symmetry associated with a particular pairing mechanism, it is therefore important to determine the anisotropy of the gap structure.
In the iron-based superconductors,
the superconducting gap structure has been studied most extensively in the
oxygen-free materials with BaFe${}_{2}$As${}_{2}$ (Ba122) as a parent compound.Rotter
High-quality single crystals
can be grown with various types of dopants to induce superconductivity in the parent antiferromagnet, including:
hole doping with potassium in Ba${}_{1-x}$K${}_{x}$Fe${}_{2}$As${}_{2}$ (K-Ba122),Wencrystals
electron doping with cobalt in Ba(Fe${}_{1-x}$Co${}_{x}$)${}_{2}$As${}_{2}$ (Co-Ba122),Athena ; CB
and iso-electron substitution of arsenic with phosphorus in BaFe${}_{2}$(As${}_{1-x}$P${}_{x}$)${}_{2}$ (P-Ba122).Kasahara
Early on, an ARPES study of optimally-doped K-Ba122 found a full superconducting gap on all sheets of the Fermi surface.Ding
This was explained within the $s_{\pm}$ scenario.MazinNature
However, subsequent studies of the superconducting gap structure in Ba122 revealed considerable diversity.
In P-Ba122, the gap is nodal for all dopings.HashimotoScience ; ShiyanRu
In Co-Ba122, the gap is isotropic at optimal doping but it develops nodes in both under- and over-doped compositions.GordonPRB ; Martin3D ; TanatarPRL ; Reid3D ; Goffryk
In K-Ba122, the gap is also isotropic at optimal doping,Ding ; ReidSUST
but it develops some $k$-dependence with increasing $x$,XGLuo ; MartinK and
there are nodes in the gap at $x=1.0$ (KFe${}_{2}$As${}_{2}$),Fukazawa ; Hashimoto ; ShiyanK ; ReidPRL ; Watanabe ; Okazaki
where the pairing symmetry may in fact be $d$-wave.ReidPRL ; ReidSUST
This diversity in the gap structure has been attributed in part to a
competition between intra-band and inter-band pairing interactions.Chubukovreview ; Hirschfeld-ROPP
Another factor that can affect the gap structure is the presence of a coexisting antiferromagnetic order.Chubukovreconstruction
In this Article,
we report a study of the superconducting gap structure in K-Ba122 using heat transport measurements,
for concentrations that cross into the region of the phase diagram where superconductivity and antiferromagnetism coexist.
We observe that the gap, which is isotropic just above the coexistence region, gradually develops $k$ dependence
as the magnetic order grows, with minima that deepen with decreasing $x$.
We attribute these minima to the reconstruction of the Fermi surface caused by the antiferromagnetic order.
II Methods
Single crystals of Ba${}_{1-x}$K${}_{x}$Fe${}_{2}$As${}_{2}$ were grown using a self-flux technique.Wencrystals
Nine samples were cut for $a$-axis transport and seven for $c$-axis transport.
The samples are labelled by their $T_{\rm c}$ value.
Details of the sample preparation, screening, compositional analysis and resistivity measurements can be found in ref. caxis.
The technique for making contacts is described in refs. SUST and patent.
The superconducting $T_{\rm c}$ of underdoped samples changes monotonically with $x$.
We find that the relation between $T_{\rm c}$ and $x$ is well described by the formula $T_{\rm c}$ $=38.5-54~{}(0.345-x)-690~{}(0.345-x)^{2}$.
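For reference, the quadratic fit above can also be inverted to estimate the K content $x$ of a sample from its measured $T_{\rm c}$ on the underdoped side ($x\le 0.345$); the inversion helper below is a convenience added here, not part of the paper's analysis.

```python
import math

# tc_of_x: the empirical fit quoted in the text.
# x_of_tc: its inverse on the underdoped branch (u = 0.345 - x >= 0).
def tc_of_x(x):
    return 38.5 - 54.0 * (0.345 - x) - 690.0 * (0.345 - x)**2

def x_of_tc(tc):
    # positive root of 690 u^2 + 54 u + (tc - 38.5) = 0
    u = (-54.0 + math.sqrt(54.0**2 - 4.0 * 690.0 * (tc - 38.5))) / (2.0 * 690.0)
    return 0.345 - u

print(round(x_of_tc(38.5), 3))  # 0.345 (optimal doping)
print(round(x_of_tc(7.0), 3))   # ≈ 0.167, close to the x = 0.16 sample
```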
The thermal conductivity was measured in a standard one-heater two-thermometer technique described elsewhere,Reid3D
for two directions of the heat flow:
parallel ($Q\parallel c$; $\kappa_{c}$) and perpendicular ($Q\parallel a$; $\kappa_{a}$) to the [001] tetragonal $c$ axis.
The magnetic field $H$ was applied along the $c$ axis.
Measurements were done on warming after cooling from above $T_{\rm c}$ in a constant field, to ensure a homogeneous field distribution in the sample.
At least two samples were measured for all compositions to ensure reproducibility.
Resistivity measurements to determine the upper critical field were performed in Quantum Design PPMS down to 1.8 K.
III Results
III.1 Electrical resistivity
In the right panels of Fig. 1, the resistivity of four K-Ba122 samples, normalized to its value at $T=300$ K,
is plotted as a function of temperature, for both $J\parallel a$ and $J\parallel c$.
The values at 300 K do not change much with doping: $\rho$(300 K) $\simeq 300$ $\mu\Omega$ cm for $J\parallel a$ and 1000-2000 $\mu\Omega$ cm for $J\parallel c$.caxis ; YLiudoping
We use the resistivity curves of each sample to determine $T_{\rm c}$ and the residual resistivity $\rho_{0}$,
obtained from a smooth extrapolation of $\rho(T)$ to $T=0$ (see Fig. 1).
We use $\rho_{0}$ to estimate the residual value of the thermal conductivity in the normal state,
$\kappa_{\rm N}/T$, via the Wiedemann-Franz law,
$\kappa_{\rm N}/T=L_{0}/\rho_{0}$,
where $L_{0}\equiv(\pi^{2}/3)(k_{\rm B}/e)^{2}$.
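As a sketch, the Wiedemann-Franz estimate amounts to a one-line calculation; the residual resistivity used below is a hypothetical placeholder, not a number from this work.

```python
import math

# Wiedemann-Franz estimate of the normal-state residual thermal conductivity:
# kappa_N / T = L0 / rho0, with the Sommerfeld value L0 = (pi^2/3)(k_B/e)^2.
k_B = 1.380649e-23   # Boltzmann constant (J/K)
e = 1.602176634e-19  # elementary charge (C)
L0 = (math.pi**2 / 3.0) * (k_B / e)**2  # ≈ 2.44e-8 W Ohm / K^2

rho0 = 50e-8  # residual resistivity in Ohm m (= 50 uOhm cm, hypothetical sample)
kappa_N_over_T = L0 / rho0  # W / (K^2 m)
print(f"L0 = {L0:.3e} W Ohm/K^2, kappa_N/T = {kappa_N_over_T:.3e} W/(K^2 m)")
```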
III.2 Thermal conductivity
The thermal conductivity of the same four samples,
measured using the same contacts,
is also displayed in Fig. 1.
The data in the top row are for a heat current along the $a$ axis,
giving $\kappa_{a}$, while the data in the bottom panels are for the inter-plane heat current, giving $\kappa_{c}$.
The fits show that the data below 0.3 K are well described by the power-law function $\kappa/T=a+bT^{\alpha}$.
The first term, $a\equiv\kappa_{0}/T$, is the residual linear term, entirely due to electronic excitations.Shakeripour2009a
The second term is due to phonons, which at low temperature are scattered by the sample boundaries, with $1<\alpha<2$.Sutherland2003 ; Li2008
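The fit $\kappa/T=a+bT^{\alpha}$ can be performed with a simple grid over $\alpha$ combined with linear least squares for $(a,b)$; the data below are synthetic (generated with $a=0$, $b=2$, $\alpha=1.5$ plus noise), purely to illustrate extracting the residual linear term.

```python
import numpy as np

# Synthetic low-temperature data mimicking kappa/T = a + b T^alpha with a = 0
# (nodeless gap). Values are illustrative, not measured data from this work.
rng = np.random.default_rng(1)
T = np.linspace(0.05, 0.3, 40)                               # temperature (K)
data = 2.0 * T**1.5 + rng.normal(scale=0.005, size=T.size)

def fit_power_law(T, y, alphas=np.linspace(1.0, 2.0, 101)):
    """For each trial alpha (phonon boundary scattering gives 1 < alpha < 2),
    solve the linear least-squares problem for (a, b); keep the best alpha."""
    best = None
    for alpha in alphas:
        X = np.column_stack([np.ones_like(T), T**alpha])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((X @ coef - y)**2)
        if best is None or sse < best[0]:
            best = (sse, alpha, coef)
    _, alpha, (a, b) = best
    return a, b, alpha

a, b, alpha = fit_power_law(T, data)
print(f"kappa0/T = {a:.4f}, alpha = {alpha:.2f}")  # a consistent with zero
```

A residual linear term $a$ consistent with zero within error bars is the signature of a nodeless gap discussed in the text.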
We see that for $H=0$, $\kappa_{0}/T=0$ for all samples, within error bars.
At the highest doping ($T_{\rm c}$ $=37-38$ K), $\kappa_{0}/T$ remains negligible even when a magnetic field of 15 T is applied.
At lower K concentration, however, $\kappa_{0}/T$ increases significantly
upon application of a magnetic field.
Our current data are consistent with our previous measurement of $\kappa_{\rm a}$ in a K-Ba122 sample with $T_{\rm c}$ $=26$ K.XGLuo
Fig. 2 shows how the residual linear term $\kappa_{0}/T$ evolves as a function of magnetic field $H$,
for both in-plane (top panel) and inter-plane (bottom panel) heat current directions.
In this Figure, $\kappa_{0}/T$ is normalized by the sample’s normal-state conductivity, $\kappa_{\rm N}/T$,
and the magnetic field is normalized by the sample’s upper critical field $H_{\rm c2}$ (determined as described in the next section).
In Fig. 4, the ratio $\left(\kappa_{0}/T\right)/\left(\kappa_{\rm N}/T\right)$, labelled
$\kappa_{0}/\kappa_{\rm N}$ for convenience, is plotted as a function of K concentration $x$, for two values of the magnetic field:
$H=0$ and $H=0.15$ $H_{\rm c2}$.
III.3 Upper critical field
In the left panel of Fig. 3, we plot the upper critical field $H_{\rm c2}$ as a function of temperature,
for four values of $x$, as determined from resistivity measurements for $H\parallel c$.
For the sample at $x=0.16$ ($T_{\rm c}$ $=7$ K), a field of 15 T is sufficient to reach the normal state.
Using the fact that $\kappa_{0}/T$ saturates above $H\simeq 9$ T in that sample, we estimate that
$H_{\rm c2}$ $=9$ T at $T\to 0$.
We see that this value agrees with a linear extrapolation of the resistively-determined $H_{\rm c2}$$(T)$.
For the other dopings, we obtain $H_{\rm c2}$(0), the value of $H_{\rm c2}$$(T)$ at $T\to 0$, by linear extrapolation.
Note that the slope of the $H_{\rm c2}$$(T)$ curves increases with increasing $T_{\rm c}$,
as expected for superconductors in the clean limit, YLiudoping
which holds for K-Ba122 at all dopings.
In the right panel of Fig. 3, we plot $H_{\rm c2}$(0) vs $T_{\rm c}$,
including published data from a sample with a slightly higher concentration.YLiudoping
IV Discussion
From Fig. 2,
three main characteristics of the gap structure of K-Ba122 can be deduced.
First, the fact that in zero field $\kappa_{0}/T$ $=0$ at all dopings, for both current directions,
immediately implies that there are no zero-energy quasiparticles at $H=0$.
From this we can infer that there are no nodes
in the superconducting gap anywhere on the Fermi surface.
Secondly, we see that the rate at which a magnetic field excites heat-carrying quasiparticles in K-Ba122
varies enormously with doping.
In the absence of nodes, quasiparticle conduction proceeds by tunnelling between states localized in the cores of adjacent vortices, a process whose rate grows exponentially as the inter-vortex separation decreases with increasing field,Boaknin2003
as observed in a superconductor with an isotropic gap such as Nb (see Fig. 2).
The exponential rate is controlled by the coherence length, which is inversely proportional to the gap magnitude.
If the gap is large everywhere on the Fermi surface, the coherence length will
be small everywhere,
and the growth of $\kappa_{0}$/$\kappa_{\rm N}$ vs $H$ will be very slow at low $H/$$H_{\rm c2}$.
This is what we observe in K-Ba122 near optimal doping ($T_{\rm c}$ $\simeq 37$ K).
If the gap is small on some part of the Fermi surface, compared to the maximum value that dictates $H_{\rm c2}$,
this will make it easier to excite quasiparticles, and so lead to an enhanced thermal conductivity at a given value of
$H/$$H_{\rm c2}$.
This is what happens in K-Ba122 with decreasing $x$, whereby $\kappa_{0}$/$\kappa_{\rm N}$ becomes larger and larger with underdoping.
A good way to visualize this evolution is to plot $\kappa_{0}$/$\kappa_{\rm N}$ vs $x$ at $H/$$H_{\rm c2}$ $=0.15$, as done in Fig. 4.
We infer from our in-field data that the gap structure of K-Ba122 develops a minimum somewhere on its Fermi surface,
which gets deeper and deeper with decreasing $x$.
There are two ways in which the gap can be small on part of the Fermi surface.
It can develop a strong anisotropy on one sheet of the Fermi surface, as is believed to happen in
borocarbide superconductors.borocarbide
It can also be small on one surface and large on another.
This multi-band scenario is what happens in MgB${}_{2}$ (ref. Sologubenko2002, ) and
NbSe${}_{2}$ (ref. Boaknin2003, ).
In both cases, $\kappa_{0}$/$\kappa_{\rm N}$ grows fast, according to a field scale $H^{\star}$ much smaller
than $H_{\rm c2}$, as it is controlled by the minimum gap.
See Fig. 2 for the data on NbSe${}_{2}$.
The third property of the $\kappa_{0}$/$\kappa_{\rm N}$ vs $H$ data in Fig. 2 is its isotropy with respect to
current direction. The same behaviour is observed for in-plane and inter-plane heat currents.
This implies that the minima which develop in the gap structure have no strong $k_{z}$ dependence,
i.e. they run vertically along the $c$ axis.
In Fig. 5, we provide a sketch
of how the superconducting gap structure evolves with doping in K-Ba122,
in terms of a simple one-band Fermi surface.
Since the gap modulation has no significant $k_{z}$ dependence, we limit our discussion to a 2D picture.
At high $x$, the gap is isotropic (panel c), meaning that there is no indication of any modulation of the gap with angle.
Upon lowering $x$, the gap acquires a modulation,
with a minimum gap $\Delta_{\min}$ along some direction (panel b),
and the gap minimum deepens with decreasing $x$ (panel a).
This explains
why the initial rise in $\kappa_{0}$/$\kappa_{\rm N}$ vs $H$ gets steeper with decreasing $x$ (Fig. 2).
The question is: why does the gap develop a modulation?
In a number of calculations applied to pnictides, the so-called $s_{\pm}$ state is the most stable.
This is a state with $s$-wave symmetry but with a gap that changes sign in going from the hole-like Fermi surface centred at $\Gamma$ ($\Delta_{h}>0$) to the electron-like Fermi surfaces centred at $X$ and $M$ ($\Delta_{e}<0$).Wang2009 ; Graser2009 ; Chubukovanisotropy
Although fundamentally nodeless, the associated gap function can have strong modulations, depending on details of the Fermi surface and the interactions, possibly leading to accidental nodes.Hirschfeld-ROPP
The gap modulation comes from a strongly anisotropic pairing interaction, which is also band-dependent, involving the interplay of intra-band and inter-band interactions.
It is typically the gap on the electron Fermi surface centred at the $M$ point of the Brillouin zone that shows a strong angular dependence within the basal plane. Graser2009 ; Wang2009
Therefore, the evolution of the gap structure detected here in K-Ba122, going from isotropic to modulated with decreasing $x$,
is compatible with the general findings of such calculations.
In Co-Ba122, the development of gap modulations with overdoping was attributed to such a change in interactions.Reid3D
We propose that another mechanism is at play on the underdoped side of the phase diagram,
having to do with the onset of antiferromagnetic order.
This is based on the fact that the modulation of the gap and the magnetic order appear at the same concentration, as seen in Fig. 4.
Neutron scattering studies show that antiferromagnetic order in K-Ba122 coexists with superconductivity over a broad range of doping,
up to $x\simeq 0.25$ ($T_{\rm c}$ $\simeq 26$ K), and both magnetism and superconductivity are bulk and occupy at least 95% of the sample volume.Avci
(The fact that $\kappa_{0}/T=0$ for $H=0$ in all our samples
rules out a scenario of phase separation, whereby significant
portions of the sample are not superconducting.)
This bulk coexistence is deemed
to be a strong argument in favour of the $s_{\pm}$ model,
and one against the usual $s$-wave scenario.Parker ; Fernandesspm
Antiferromagnetism in K-Ba122 causes a reconstruction of the Fermi surface
whereby the $\Gamma$-centered hole pocket becomes superimposed on the edge-centered electron pocket,
as sketched in Fig. 6.
Energy gaps open at the points where the original two Fermi pockets cross,
resulting in the formation of four small crescent-shaped pieces (Fig. 6).
Maiti et al. Chubukovreconstruction
showed theoretically that such a reconstruction triggers a strong modulation
of the superconducting gap, which develops strong minima, and possibly even
(accidental) nodes, at the crossing points.
It therefore seems natural to attribute the appearance of gap minima
in underdoped K-Ba122 to the onset of magnetic order.
In underdoped Co-Ba122, a similar thermal conductivity study
revealed that a strong modulation of the gap also appears with the onset of magnetic order.Reid3D
So the two materials tell a consistent story.
In Fig. 7, we compare the evolution of the gap as found in thermal conductivity measurements on the two
sides of the phase diagram of BaFe${}_{2}$As${}_{2}$: the electron-doped side (Co-Ba122) and the hole-doped side (K-Ba122).
In both cases, the gap is isotropic close to optimal doping, and it develops a strong modulation with underdoping,
concomitant with the onset of antiferromagnetism.
However, there is a difference between K-Ba122 and Co-Ba122.
The former does not develop nodes, while the latter does.
These nodes are in regions of the Fermi surface with strong $k_{z}$ dispersion,
as they give rise to a large zero-field value of $\kappa_{0}/T$ for $J||c$, but not for $J||a$ (Fig. 7, bottom panel).Reid3D
V Summary
In summary, the thermal conductivity of K-Ba122 in the $T=0$ limit reveals three main facts.
First, the superconducting gap at optimal doping, where $T_{\rm c}$ is maximal, is isotropic, with no sign of significant modulation anywhere on the Fermi surface.
This reinforces the statement made earlier on the basis of thermal conductivity data in Co-Ba122, Reid3D
that superconductivity in pnictides is strongest when
isotropic, pointing fundamentally to a state with $s$-wave symmetry, at least in the high-$T_{\rm c}$ members of the pnictide family.
Second, with underdoping, the superconducting gap becomes small in some parts of the Fermi surface,
and the minimum gap gets smaller and smaller with decreasing $x$.
Because this modulation of the gap appears with the onset of antiferromagnetic order, we attribute it
to the associated reconstruction of the Fermi surface.
Third,
although it acquires minima, the superconducting gap structure of underdoped K-Ba122 never develops nodes,
where the gap goes to zero.
This is in contrast with underdoped Co-Ba122, whose gap does have nodes.
VI Acknowledgements
We thank A. V. Chubukov, R. M. Fernandes, P. J. Hirschfeld, D.-H. Lee and I. I. Mazin
for fruitful discussions and J. Corbin for his assistance with the experiments.
The work at Sherbrooke was supported by a Canada Research Chair,
the Canadian Institute for Advanced Research,
the National Science and Engineering Research Council of Canada,
the Fonds de recherche du Québec - Nature et Technologies,
and the Canada Foundation for Innovation.
The work at the Ames Laboratory was supported by the DOE-Basic Energy Sciences under Contract No. DE-AC02-07CH11358.
The work in China was supported by NSFC and the MOST of China (#2011CBA00100).
H.S. acknowledges the support of the Iran National Science Foundation.
References
(1)
Y. Kamihara, T. Watanabe, M. Hirano, and H. Hosono, J. Am. Chem. Soc. 130, 3296 (2008).
(2)
L. Boeri, O. V. Dolgov, and A. A. Golubov, Phys. Rev. Lett. 101, 026403 (2008).
(3)
L. Taillefer, Ann. Rev. Cond. Matter Physics 1, 51 (2010).
(4)
P. Monthoux, D. Pines, and G. G. Lonzarich, Nature 450, 1177 (2007).
(5)
M. R. Norman, Science 332, 196 (2011).
(6)
D. J. Scalapino, Rev. Mod. Phys. 84, 1383 (2012).
(7)
I. I. Mazin, D. J. Singh, M. D. Johannes, and M. H. Du, Phys. Rev. Lett. 101, 057003 (2008).
(8)
M. Rotter, M. Tegel, and D. Johrendt, Phys. Rev. Lett. 101, 107006 (2008).
(9)
H. Q. Luo, Z. S. Wang, H. Yang, P. Cheng, X. Zhu, and H.-H. Wen, Supercond. Sci. Technol. 21, 125014 (2008).
(10)
A. S. Sefat, R. Jin, M. A. McGuire, B. C. Sales, D. J. Singh, and D. Mandrus, Phys. Rev. Lett. 101, 117004 (2008).
(11)
P. C. Canfield and S. L. Bud’ko,
Ann. Rev. Cond. Mat. Phys. 1, 27 (2010).
(12)
S. Kasahara, T. Shibauchi, K. Hashimoto, K. Ikada, S. Tonegawa, R. Okazaki, H. Shishido, H. Ikeda, H. Takeya, K. Hirata, T. Terashima, and Y. Matsuda, Phys. Rev. B 81, 184519 (2010).
(13)
H. Ding, P. Richard, K. Nakayama, K. Sugawara, T. Arakane, Y. Sekiba, A. Takayama, S. Souma, T. Sato, T. Takahashi, Z. Wang, X. Dai, Z. Fang, G. F. Chen, J. L. Luo, and N. L. Wang, Europhys. Lett. 83, 47001 (2008).
(14)
I. I. Mazin, Nature 464, 183 (2010).
(15)
K. Hashimoto, K. Cho, T. Shibauchi, S. Kasahara, Y. Mizukami, R. Katsumata, Y. Tsuruhara, T. Terashima, H. Ikeda, M. A. Tanatar, H. Kitano, N. Salovich, R. W. Giannetta, P. Walmsley, A. Carrington, R. Prozorov, Y. Matsuda, Science 336, 1554 (2012).
(16)
X. Qiu, S. Y. Zhou, H. Zhang, B. Y. Pan, X. C. Hong, Y. F. Dai, Man Jin Eom, Jun Sung Kim, Z. R. Ye, Y. Zhang, D. L. Feng, and S. Y. Li,
Phys. Rev. X 2, 011010 (2012).
(17)
R. T. Gordon, C. Martin, H. Kim, N. Ni, M. A. Tanatar, J. Schmalian, I. I. Mazin, S. L. Bud’ko, P. C. Canfield, and R. Prozorov,
Phys. Rev. B 79, 100506(R) (2009).
(18)
C. Martin, H. Kim, R. T. Gordon, N. Ni, V. G. Kogan, S. L. Bud’ko, P. C. Canfield, M. A. Tanatar, and R. Prozorov,
Phys. Rev. B 81, 060505 (2010).
(19)
M.A. Tanatar, J.-Ph. Reid, H. Shakeripour, X. G. Luo, N. Doiron-Leyraud, N. Ni, S. L. Bud’ko, P. C. Canfield, R. Prozorov, and L. Taillefer, Phys. Rev. Lett. 104, 067002 (2010).
(20)
J.-Ph. Reid, M. A. Tanatar, H. Shakeripour, X. G. Luo, N. Doiron-Leyraud, N. Ni, S. L. Bud’ko, P. C. Canfield, R. Prozorov, and L. Taillefer,
Phys. Rev. B 82, 064501 (2010).
(21)
K. Gofryk, A. S. Sefat, M. A. McGuire, B. C. Sales, D. Mandrus, J. D. Thompson, E. D. Bauer, and F. Ronning,
Phys. Rev. B 81, 184518 (2010).
(22)
J.-Ph. Reid, A. Juneau-Fecteau, R. T. Gordon, S. Rene de Cotret, N. Doiron-Leyraud, X. G. Luo, H. Shakeripour, J. Chang, M. A. Tanatar, H. Kim, R. Prozorov, T. Saito, H. Fukazawa, Y. Kohori, K. Kihou, C. H. Lee, A. Iyo, H. Eisaki, B. Shen, H.-H. Wen, Louis Taillefer, Supercond. Sci. Technol. 25, 084013 (2012).
(23)
X. G. Luo, M. A. Tanatar, J.-Ph. Reid, H. Shakeripour, N. Doiron-Leyraud, N. Ni, S. L. Bud’ko, P. C. Canfield, H. Luo, Z. Wang, H.-H. Wen, R. Prozorov, and L. Taillefer,
Phys. Rev. B 80, 140503 (R) (2009).
(24)
C. Martin, R. T. Gordon, M. A. Tanatar, H. Kim, N. Ni, S. L. Bud’ko, P. C. Canfield, H. Luo, H.-H. Wen, Z. Wang, A. B. Vorontsov, V. G. Kogan, and R. Prozorov, Phys. Rev. B 80, 020501(R) (2009).
(25)
H. Fukazawa, Y. Yamada, K. Kondo, T. Saito, Y. Kohori, K. Kuga, Y. Matsumoto, S. Nakatsuji, H. Kito, P. M. Shirage, K. Kihou, N. Takeshita, C.-H. Lee, A. Iyo, and H. Eisaki,
J. Phys. Soc. Jpn. 78, 033704 (2009).
(26)
K. Hashimoto, A. Serafin, S. Tonegawa, R. Katsumata, R. Okazaki, T. Saito, H. Fukazawa, Y. Kohori, K. Kihou, C. H. Lee, A. Iyo, H. Eisaki, H. Ikeda, Y. Matsuda, A. Carrington, and T. Shibauchi, Phys. Rev. B 82, 014526 (2010).
(27)
J. K. Dong, S. Y. Zhou, T. Y. Guan, H. Zhang, Y. F. Dai, X. Qiu, X. F. Wang, Y. He, X. H. Chen, and S. Y. Li, Phys. Rev. Lett. 104, 087005 (2010).
(28)
J.-Ph. Reid, M. A. Tanatar, A. Juneau-Fecteau, R. T. Gordon, S. R. de Cotret, N. Doiron-Leyraud, T. Saito, H. Fukazawa, Y. Kohori, K. Kihou, C. H. Lee, A. Iyo, H. Eisaki, R. Prozorov, and L. Taillefer, Phys. Rev. Lett. 109, 087001 (2012).
(29)
D. Watanabe, T. Yamashita, Y. Kawamoto, S. Kurata, Y. Mizukami, T. Ohta, S. Kasahara, M. Yamashita, T. Saito, H. Fukazawa, Y. Kohori, S. Ishida, K. Kihou, C. H. Lee, A. Iyo, H. Eisaki, A. B. Vorontsov, T. Shibauchi, and Y. Matsuda, Phys. Rev. B 89, 115112 (2014).
(30)
K. Okazaki, Y. Ota, Y. Kotani, W. Malaeb, Y. Ishida, T. Shimojima, T. Kiss, S. Watanabe, C.-T. Chen, K. Kihou, C. H. Lee, A. Iyo, H. Eisaki, T. Saito, H. Fukazawa, Y. Kohori, K. Hashimoto, T. Shibauchi, Y. Matsuda, H. Ikeda, H. Miyahara, R. Arita, A. Chainani, and S. Shin, Science 337, 1314 (2012).
(31)
A. V. Chubukov, Annu. Rev. Cond. Mat. Phys. 3, 57 (2012).
(32)
P. J. Hirschfeld, M. M. Korshunov, and
I. I. Mazin, Rep. Prog. Phys. 74, 124508 (2011).
(33)
S. Maiti, R. M. Fernandes, and A. V. Chubukov,
Phys. Rev. B 85, 144527 (2012).
(34)
M. A. Tanatar, W. E. Straszheim, Hyunsoo Kim, J. Murphy, N. Spyrison, E. C. Blomberg, K. Cho, J.-Ph. Reid, Bing Shen, Louis Taillefer, Hai-Hu Wen, R. Prozorov, Phys. Rev. B 89, 144514 (2014).
(35)
M. A. Tanatar, N. Ni, S. L. Bud’ko, P. C. Canfield, and R. Prozorov, Supercond. Sci. Technol. 23, 054002 (2010).
(36)
M. A.Tanatar, R. Prozorov, N. Ni, S. L. Bud’ko, and P. C. Canfield, U.S. Patent 8,450,246.
(37)
Y. Liu, M. A. Tanatar, W. E. Straszheim, B. Jensen, K. W. Dennis, R. W. McCallum, V. G. Kogan, R. Prozorov, T. A. Lograsso, Phys. Rev. B 89, 134504 (2014).
(38)
H. Shakeripour, C. Petrovic, and L. Taillefer, New J. Phys. 11, 055065 (2009).
(39)
E. Boaknin, M. A. Tanatar, J. Paglione, D. G. Hawthorn, F. Ronning, R. W. Hill, M. Sutherland, L. Taillefer, J. Sonier, S. M. Hayden, and J. W. Brill,
Phys. Rev. Lett. 90, 117003 (2003).
(40)
C. Proust, E. Boaknin, R.W. Hill, L. Taillefer, and A. P. Mackenzie,
Phys. Rev. Lett. 89, 147003 (2002).
(41)
M. Sutherland, D. G. Hawthorn, R. W. Hill, F. Ronning, S. Wakimoto, H. Zhang, C. Proust, E. Boaknin, C. Lupien, L. Taillefer, R. Liang, D. A. Bonn, W. N. Hardy, R. Gagnon, N. E. Hussey, T. Kimura, M. Nohara, and H. Takagi,
Phys. Rev. B 67, 174520 (2003).
(42)
S. Y. Li, J.-B. Bonnemaison, A. Payeur, P. Fournier, C. H. Wang, X. H. Chen, and L. Taillefer,
Phys. Rev. B 77, 134501 (2008).
(43)
E. Boaknin, R. W. Hill, C. Proust, C. Lupien, L. Taillefer, P. C. Canfield, Phys. Rev. Lett. 87, 237001 (2001).
(44)
A. V. Sologubenko et al.,
Phys. Rev. B 66, 014504 (2002).
(45)
A. V. Chubukov, M. G. Vavilov, and A. B.
Vorontsov, Phys. Rev. B 80, 140515 (2009).
(46)
F. Wang and D.-H. Lee,
Phys. Rev. Lett. 102, 047005 (2009).
(47)
S. Graser, T. A. Maier, P. J. Hirschfeld, and D. J. Scalapino,
New J. Phys. 11, 025016 (2009).
(48)
S. Avci, O. Chmaissem, E. A. Goremychkin, S. Rosenkranz, J.-P. Castellan, D. Y. Chung, I. S. Todorov, J. A. Schlueter, H. Claus, M. G. Kanatzidis, A. Daoud-Aladine, D. Khalyavin, and R. Osborn, Phys. Rev. B 83, 172503 (2011).
(49)
D. Parker, M. G. Vavilov, A. V. Chubukov, and I. I. Mazin,
Phys. Rev. B 80, 100508 (2009).
(50)
R. M. Fernandes and A. J. Millis, Phys.
Rev. Lett. 111, 127001 (2013). |
Model and Objective Separation with Conditional Lower Bounds: Disjunction is Harder than Conjunction
Krishnendu Chatterjee
IST Austria
Wolfgang Dvořák
University of Vienna, Faculty of Computer Science
Monika Henzinger
University of Vienna, Faculty of Computer Science
Veronika Loitzenbauer
University of Vienna, Faculty of Computer Science
Abstract
Given a model of a system and an objective, the model-checking question asks
whether the model satisfies the objective. We study polynomial-time problems in
two classical models, graphs and Markov Decision Processes (MDPs), with respect to
several fundamental $\omega$-regular objectives, e.g., Rabin and Streett objectives.
For many of these problems the best-known upper bounds are quadratic or cubic,
yet no super-linear lower bounds are known.
In this work our contributions are two-fold: First, we present several improved
algorithms, and second, we present the first conditional super-linear lower bounds
based on widely believed assumptions about the complexity of CNF-SAT and combinatorial Boolean
matrix multiplication.
A separation result for two models with respect to an objective means a conditional
lower bound for one model that is strictly higher than the existing upper bound
for the other model, and similarly for two objectives with respect to a model.
Our results establish the following separation results:
(1) A separation of models (graphs and MDPs) for disjunctive queries of
reachability and Büchi objectives.
(2) Two kinds of separations of objectives, both for graphs and MDPs, namely,
(2a) the separation of dual objectives such as reachability/safety (for
disjunctive questions) and Streett/Rabin objectives, and (2b) the separation of
conjunction and disjunction of multiple objectives of the same type such as safety, Büchi, and coBüchi.
In summary, our results establish the first model and objective separation
results for graphs and MDPs for various classical $\omega$-regular objectives.
Quite strikingly, we establish conditional lower bounds for the disjunction of
objectives that are strictly higher than the existing upper bounds for the
conjunction of the same objectives.
1 Introduction
The fundamental problem in formal verification is the model-checking
question that given a model of a system and a property asks whether the model
satisfies the property.
The model can be, for example, a standard graph, or a probabilistic extension of
graphs, and the property describes the desired behaviors (or infinite paths)
of the model.
For several basic model-checking questions, though polynomial-time
algorithms are known, the best-known existing upper bounds are quadratic or
cubic, yet no super-linear lower bounds are known.
In graph algorithmic problems unconditional super-linear lower bounds are very
rare when polynomial-time solutions exist.
However, recently there have been many interesting results that establish
conditional lower bounds [3, 6, 1].
These are lower bounds based on the assumption that
for some well-studied problem, such as 3-SUM [24] or All-Pairs
Shortest Paths [40, 36], no polynomially faster algorithm
exists compared to the best known one (in particular,
improvements by polylogarithmic factors are not excluded).
The lower bounds in this work assume
(A1) there is no combinatorial algorithm (combinatorial here means avoiding
fast matrix multiplication [33]; see also the discussion in [27]) with running time
$O(n^{3-\varepsilon})$ for any $\varepsilon>0$
to multiply two $n\times n$ Boolean matrices;
or (A2) for all $\varepsilon>0$ there exists a $k$ such that there is no algorithm
for the $k$-CNF-SAT problem that runs in $2^{(1-\varepsilon)\cdot n}\cdot\operatorname{poly}(m)$ time, where $n$ is the number of variables and $m$ the number of clauses.
These two assumptions have been used to establish lower bounds for
several well-studied problems, such as dynamic graph algorithms [3, 6],
measuring the similarity of strings [5, 10, 11, 7, 2], context-free grammar
parsing [34, 1], and verifying first-order graph
properties [35, 43].
No relation between conjectures (A1) and (A2) is known.
In this work we present conditional lower bounds that are super-linear
for fundamental model-checking problems.
Models.
The two most classical models in formal verification are
standard graphs and Markov decision processes (MDPs).
MDPs are probabilistic extensions of graphs,
and an MDP consists of a finite
directed graph $(V,E)$ with a partition of the vertex set $V$ into
player 1 vertices $V_{1}$ and random vertices $V_{R}$
and a probabilistic transition function that specifies for vertices in $V_{R}$
a probability distribution over their successor vertices.
Let $n=|V|$ and $m=|E|$.
An infinite path in an MDP is obtained by the following process.
A token is placed on an initial vertex and the token is moved indefinitely
as follows: At a vertex $v\in V_{1}$ a choice is made to move the token along
one of the outedges of $v$, and at a vertex $v\in V_{R}$ the token is moved
according to the probabilistic transition function.
Note that if $V_{R}=\emptyset$, then we have a standard graph, and
if $V_{1}=\emptyset$, then we have a Markov chain.
Thus MDPs generalize standard graphs and Markov chains.
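The description above can be captured in a few lines of code. The following is a minimal, illustrative Python sketch (the class and field names are our own, not from the paper) of an MDP record, showing how standard graphs and Markov chains arise as the special cases $V_{R}=\emptyset$ and $V_{1}=\emptyset$:

```python
from dataclasses import dataclass, field

@dataclass
class MDP:
    """Illustrative MDP record: a finite directed graph (V, E), a partition of
    V into player-1 vertices V1 and random vertices VR = V - V1, and a
    probability distribution over successors for each random vertex."""
    vertices: set
    edges: dict                                # v -> list of successor vertices
    player1: set                               # V1
    prob: dict = field(default_factory=dict)   # v in VR -> {successor: probability}

    @property
    def random(self):                          # VR = V - V1
        return self.vertices - self.player1

    def is_graph(self):
        # If V_R is empty, the MDP is a standard graph.
        return not self.random

    def is_markov_chain(self):
        # If V_1 is empty, the MDP is a Markov chain.
        return not self.player1
```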
Objectives.
Objectives (or properties) are subsets of infinite paths that specify the
desired set of paths.
The most basic objective is reachability where, given a set
$T\subseteq V$ of target vertices, an infinite path satisfies the
objective if the path visits a vertex of $T$ at least once.
The dual objective to reachability is safety where, given a set
$T\subseteq V$ of target vertices, an infinite path satisfies the
objective if the path does not visit any vertex of $T$.
The next extension of a reachability objective is the
Büchi objective that requires the set of target vertices
to be reached infinitely often. Its dual, the coBüchi objective,
requires the set of target vertices to be reached only finitely often.
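As a small illustrative sketch (not from the paper), the existence question for a reachability objective on a graph reduces to plain breadth-first search, assuming every vertex has an outgoing edge so that any finite path extends to an infinite one:

```python
from collections import deque

def reachability_holds(edges, init, targets):
    """Sketch: some path from `init` visits a vertex of `targets` iff a
    target vertex is reachable by BFS; linear in the graph size.  Assumes
    every vertex has an outedge, so any finite path extends to an
    infinite one."""
    seen, queue = {init}, deque([init])
    while queue:
        v = queue.popleft()
        if v in targets:
            return True
        for w in edges.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False
```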
A natural extension of single objectives are conjunctive and disjunctive
objectives [23, 44, 18]. For two objectives
$\psi_{1}$ and $\psi_{2}$ their conjunctive objective is equal to $\psi_{1}\cap\psi_{2}$
and their disjunctive objective is equal to $\psi_{1}\cup\psi_{2}$.
The conjunction of reachability (resp. Büchi) objectives is known as
generalized reachability (resp. Büchi) [23, 44].
A very central and canonical class of objectives in formal verification are
Streett (strong fairness) objectives and their dual Rabin objectives [39].
A one-pair Streett objective for two sets of vertices $L$ and $U$ specifies
that if the Büchi objective for target set $L$ is satisfied, then also the Büchi
objective for target set $U$ has to be satisfied; in other words, a one-pair
Streett objective is the disjunction of a coBüchi objective (with target set
$L$) and a Büchi objective (with target set $U$).
The dual one-pair Rabin objective for two vertex sets $L$ and $U$
is the conjunction of a Büchi objective with target set $L$ and a
coBüchi objective with target set $U$.
A Streett objective is the conjunction of $k$ one-pair Streett objectives
and its dual Rabin objective is the disjunction of $k$ one-pair Rabin objectives.
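Note that the Streett and Rabin conditions depend on a path only through the set of vertices it visits infinitely often, so they can be checked directly on that set. A hedged Python sketch (the function names are ours) of both conditions, including their duality:

```python
def streett_holds(inf, pairs):
    # Streett: for every pair (L, U), if inf meets L then inf also meets U,
    # where inf is the set of vertices visited infinitely often.
    return all(not (inf & L) or bool(inf & U) for L, U in pairs)

def rabin_holds(inf, pairs):
    # Rabin (the dual): some pair (L, U) has inf meeting L (Buechi on L)
    # while avoiding U (coBuechi on U).
    return any(bool(inf & L) and not (inf & U) for L, U in pairs)
```

By duality, a path satisfies the Rabin objective for a set of pairs exactly when it violates the Streett objective for the same pairs.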
Algorithmic questions.
The algorithmic question given a model and an objective is as follows:
(a) for standard graphs, the model-checking question asks whether there is a
path that satisfies the objective; and
(b) for MDPs, the basic model-checking question asks
whether there is a
policy (or a strategy that resolves the
non-deterministic choices of outgoing
edges) for player 1 to ensure that the objective is satisfied with
probability 1.
Observe that if we consider the model-checking question for MDPs with
$V_{R}=\emptyset$, then it exactly corresponds to the model-checking question
for standard graphs.
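For a single Büchi objective on a graph, the model-checking question has the classical linear-time solution: there is an infinite path from the initial vertex that visits the target set infinitely often iff some target vertex lies on a cycle reachable from the initial vertex, i.e., in a non-trivial SCC or on a self-loop. A Python sketch (our own illustration, not an algorithm from the paper) using an iterative Tarjan SCC decomposition:

```python
def buechi_nonempty(edges, init, targets):
    """Sketch: does some infinite path from `init` visit `targets`
    infinitely often?  Holds iff a target vertex is in a non-trivial SCC
    reachable from `init`, or has a self-loop.  Linear time."""
    # 1. Restrict attention to the part of the graph reachable from init.
    reach, stack = {init}, [init]
    while stack:
        v = stack.pop()
        for w in edges.get(v, []):
            if w not in reach:
                reach.add(w)
                stack.append(w)
    # 2. Iterative Tarjan SCC decomposition of the reachable subgraph.
    index, low = {}, {}
    on_stack, S, sccs, counter = set(), [], [], 0
    for root in reach:
        if root in index:
            continue
        index[root] = low[root] = counter; counter += 1
        S.append(root); on_stack.add(root)
        work = [(root, iter(edges.get(root, [])))]
        while work:
            v, it = work[-1]
            advanced = False
            for w in it:
                if w not in reach:
                    continue
                if w not in index:          # tree edge: descend
                    index[w] = low[w] = counter; counter += 1
                    S.append(w); on_stack.add(w)
                    work.append((w, iter(edges.get(w, []))))
                    advanced = True
                    break
                if w in on_stack:           # back edge: update lowlink
                    low[v] = min(low[v], index[w])
            if advanced:
                continue
            work.pop()
            if work:                        # propagate lowlink to the parent
                u = work[-1][0]
                low[u] = min(low[u], low[v])
            if low[v] == index[v]:          # v is the root of an SCC
                scc = []
                while True:
                    w = S.pop(); on_stack.discard(w); scc.append(w)
                    if w == v:
                        break
                sccs.append(scc)
    # 3. A target works iff it sits on a reachable cycle: an SCC of
    #    size > 1, or a self-loop.
    for scc in sccs:
        cyclic = len(scc) > 1
        for v in scc:
            if v in targets and (cyclic or v in edges.get(v, [])):
                return True
    return False
```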
Given $k$ objectives, the conjunctive query question asks whether
there is a policy for player 1 to ensure that all
the objectives
are satisfied with probability 1, and the disjunctive query
question asks whether there is a policy for player 1 to ensure that one
of the objectives is satisfied with probability 1.
Conjunctive queries coincide with conjunctive objectives on graphs and MDPs,
while disjunctive queries coincide with disjunctive objectives on graphs but not MDPs (see
Observations 2.1 and 2.2).
Significance of model and objectives.
Standard graphs are the model for non-deterministic systems, and provide the
framework to model hardware and software systems [29, 20], as well
as many basic logic-related questions such as automata emptiness.
MDPs model systems with both non-deterministic and probabilistic behavior;
and provide the framework for a wide range of applications from randomized
communication and security protocols, to stochastic distributed systems,
to biological systems [32, 8].
In verification, reachability objectives are the most basic objectives for
safety-critical systems.
In general all properties that arise in verification (such as liveness,
fairness) are $\omega$-regular languages ($\omega$-regular languages extend
regular languages to infinite words), and every $\omega$-regular language can
be expressed as a Streett objective (or a Rabin objective). Important special
cases of Streett (resp. Rabin) objectives are Büchi and coBüchi objectives [16].
Thus the algorithmic questions we consider are the most basic questions in
formal verification.
Model separation and objective separation questions.
In this work our results (upper and conditional lower bounds) aim to establish
the following two fundamental separations:
•
Model separation.
Consider an objective where the algorithmic question for both graphs and MDPs
can be solved in polynomial time, and establish a conditional lower bound for
MDPs that is strictly higher than the best-known upper bound for graphs.
In other words, the conditional lower bound would separate the model of
graphs and MDPs for problems (i.e., w.r.t. the objective) that can be solved in
polynomial time.
•
Objective separation.
Consider a model (either graphs or MDPs) with two different objectives and
show that, though the algorithmic question for both objectives can be solved in
polynomial time, there is a conditional lower bound for one objective that
is strictly higher than the best-known upper bound for the other objective.
In other words, the conditional lower bound would separate the two objectives
w.r.t. the model though they both can be solved in polynomial time.
To the best of our knowledge, no previous work in the literature establishes any
model separation or objective separation result.
Our results.
In this work we present improved algorithms as well as
the first conditional lower bounds that are super-linear for algorithmic
problems in model checking that can be solved in polynomial time, and together
they establish both model separation and objective separation results.
An overview of the results for the different objectives is given in Table 1,
where our results are highlighted in boldface.
We use
MEC to refer to the time to compute the maximal end-component
decomposition of an MDP. An end-component is a (non-trivial)
strongly connected sub-MDP
that has no outgoing edges for random vertices. We have $\textsc{MEC}=O(\min(n^{2},m^{1.5}))$ [16] and assume $\textsc{MEC}=\Omega(m)$
and $m\geq n$.
Moreover, we use
$k$ to denote the number of combined objectives in the case of conjunction
or disjunction of multiple objectives and
$b$ to denote the total number of elements in all the target sets
that specify the objectives.
We first describe Table 1 and our main results and then
discuss the significance of our results for model and objective separation.
1.
Conjunctive and Disjunctive Reachability (and Büchi) Problems.
First, we consider conjunctive and disjunctive reachability objectives and
queries. Recall that conjunctive objectives and queries coincide in general,
and that disjunctive objectives and queries coincide on graphs. For reachability,
the disjunctive objective can furthermore be reduced to a single objective (see
Observation 2.3).
The following results are known: the algorithmic question for conjunctive
reachability objectives is \NP-complete for
graphs [13], and \PSPACE-complete for MDPs [23];
and the disjunctive objective
can be solved in linear time for graphs
and in $O(\min(n^{2},m^{1.5})+b)$ time in MDPs [19, 16].
We present three results for disjunctive reachability queries in MDPs:
(i) We present an $O(km+\textsc{MEC})$-time algorithm (this implies an $O(\textsc{MEC}+b)$-time algorithm for the disjunctive objective but does not
improve the running time for that case).
(ii) We show that under assumption (A1) there does not exist a combinatorial
$O(k\cdot n^{2-\varepsilon})$ algorithm for any $\varepsilon>0$.
(iii) We show that for $k=\Omega(m)$
there does not exist an $O(m^{2-\varepsilon})$
time algorithm for any $\varepsilon>0$ under assumption (A2).
Hence we establish an upper bound and matching conditional lower bounds
based on (A1) and (A2).
Disjunctive Büchi objectives (on graphs and MDPs)
can be reduced in linear time to disjunctive
reachability objectives and vice versa, therefore
the same results apply to disjunctive Büchi problems (see Observation 2.6).
The basic algorithm for conjunctive Büchi objectives runs in time $O(m+b)$
on graphs and in time $O(\textsc{MEC}+b)$ on MDPs.
2.
Conjunctive and Disjunctive Safety Problems.
Second, we consider conjunctive and disjunctive safety objectives and queries.
The following results are known: the conjunctive problem can be reduced
to a single objective and can be solved in linear
time, both in graphs and MDPs (see e.g. [14]); disjunctive queries for MDPs can be solved in $O(k\cdot m)$ time; and disjunctive objectives for MDPs
are \PSPACE-complete [23].
We present two results:
(i) We show that for the disjunctive problem in graphs
under assumption (A1) there does not exist a combinatorial
$O(k\cdot n^{2-\varepsilon})$ algorithm for any $\varepsilon>0$.
This implies the same conditional lower bound for disjunctive queries
and objectives in MDPs and matches the upper bound for graphs
and disjunctive queries in MDPs.
(ii) We present, for $k=\Omega(m)$,
an $\Omega(m^{2-o(1)})$ lower bound for disjunctive
objectives and queries in MDPs under assumption (A2).
Again this lower bound matches the upper bound of $O(k\cdot m)$
for disjunctive queries.
3.
Conjunctive and Disjunctive coBüchi Problems.
For coBüchi, a conjunctive objective can be reduced to a single objective.
For single objectives the basic algorithm runs in time $O(\textsc{MEC}+b)$ on MDPs
and in time $O(m+b)$ on graphs.
Since the conditional lower bounds for disjunctive safety objectives and
queries actually already apply for the non-emptiness of the winning set,
the reductions also hold for coBüchi (see
Observation 2.5).
Here the running times and the conditional lower bounds are matching for both
disjunctive queries and disjunctive objectives.
For the conditional lower bound based on assumption (A2)
only singleton coBüchi objectives, i.e., coBüchi objectives with target
sets of cardinality one, are needed,
therefore the bound already holds for this case.
We additionally present two results:
(i) We present $O(km+\textsc{MEC})$-time algorithms for
disjunctive queries and objectives in MDPs.
(ii) We present a linear time algorithm for
disjunctive singleton coBüchi objectives in graphs.
4.
Rabin and Streett objectives.
Finally, we consider Rabin and Streett objectives.
The basic algorithm for Rabin objectives runs in time $O(k\cdot m)$
on graphs and in time $O(k\cdot\textsc{MEC})$ on MDPs.
As disjunctive coBüchi objectives are a special case of Rabin objectives,
the conditional lower bounds for coBüchi objectives
of $\Omega(k\cdot n^{2-o(1)})$ on graphs and
additionally $\Omega(m^{2-o(1)})$ on MDPs extend to Rabin objectives.
The conditional lower bound for graphs is matching (for combinatorial algorithms).
Furthermore, we extend the results
of [28, 17] from graphs
to MDPs to show that MDPs with Streett objectives can be solved in
$O(\min(m\sqrt{m\log n},n^{2})+b\log n)$ time.
Significance of our results.
We now describe the model and objective separation results that are obtained
from the results we established.
1.
Model Separation.
Table 2 shows our results that
separate graphs and MDPs regarding their complexity for certain
objectives and queries under assumptions (A1) and (A2).
First,
for reachability and Büchi objectives disjunction in graphs is in linear time
while in MDPs we have $\Omega(kn^{2-o(1)})$ and $\Omega(m^{2-o(1)})$ conditional
lower bounds for disjunctive queries.
Second, for coBüchi we have a separation when restricted to the
class where each target set is a singleton.
For these objectives disjunction in graphs
is in linear time while
we establish an $\Omega(m^{2-o(1)})$ conditional lower bound for MDPs for both
disjunctive objectives and queries.
2.
Objective Separation.
Further we identify complexity separations between different objectives.
Here we consider two aspects, separations between dual objectives like Büchi and
coBüchi (Tables 3 and 4), and separations between
conjunction and disjunction of objectives (Table 5).
We compare dual objectives in two ways: (i) we show that single objectives
that are dual to each other behave differently when we consider
disjunction for each of them and (ii) we compare conjunctive objectives
and their dual disjunctive objectives. For (ii) we have that
conjunctive Büchi objectives are dual to disjunctive coBüchi objectives,
and Streett objectives, the conjunction of 1-pair Streett objectives, are
dual to Rabin objectives, the disjunction of 1-pair Rabin objectives.
(a)
Separating Dual Objectives in Graphs.
First, we consider reachability and safety objectives. In graphs we have
that for reachability objectives disjunction
is in linear time while for disjunctive safety objectives we establish an
$\Omega(kn^{2-o(1)})$
lower bound under assumption (A1). Analogously, disjunctive Büchi objectives
are in linear time on graphs,
while we establish an $\Omega(kn^{2-o(1)})$ conditional lower bound for the disjunction of coBüchi objectives.
Further, conjunctive Büchi objectives are in linear time and thus can
be separated from their dual objective, the disjunctive coBüchi objectives.
Finally, for Streett objectives in graphs with $b=O(n^{2}/\log n)$
we have an $O(n^{2})$ algorithm while we establish an
$\Omega(n^{3-o(1)})$ lower bound for Rabin objectives when $k=\Theta(n)$.
(b)
Separating Dual Objectives in MDPs. First, consider Büchi and coBüchi objectives in MDPs.
On MDPs disjunctive Büchi objectives are in time $O(\textsc{MEC}+b)$, which is in
$O(\min(n^{2},m^{1.5})+nk)$, while for coBüchi objectives
we show $\Omega(kn^{2-o(1)})$ and $\Omega(m^{2-o(1)})$ conditional lower bounds for
both disjunctive queries and disjunctive objectives. This separates the
two objectives for both sparse and dense graphs.
Further conjunctive Büchi objectives can be solved in $O(\textsc{MEC}+b)$
time and thus there is also a separation between disjunctive coBüchi
objectives and their dual.
Finally, for Streett objectives in MDPs with $b=O(\min(n^{2},m^{1.5})/\log n)$
we show both an $O(n^{2})$-time and an $O(m^{1.5})$-time algorithm while
we establish $\Omega(n^{3-o(1)})$ and
$\Omega(m^{2-o(1)})$ conditional lower bounds for Rabin objectives
when $k=\Theta(n)$.
(c)
Separating Conjunction and Disjunction in Graphs and MDPs.
Except for reachability, i.e., in particular for all considered
polynomial-time problems, we observe that the disjunction of objectives is
computationally harder than the conjunction of these objectives (under assumptions (A1), (A2)).
First, for safety objectives conjunction is in linear time even for MDPs
while for disjunctive queries (disjunctive objectives are $\PSPACE$-complete)
we present $\Omega(kn^{2-o(1)})$ and $\Omega(m^{2-o(1)})$ conditional lower bounds, where the
first bound also holds for graphs.
Second, for Büchi and coBüchi objectives conjunction is in $O(\textsc{MEC}+b)$ on MDPs
(and $O(m+b)$ on graphs) while
we show $\Omega(kn^{2-o(1)})$ and $\Omega(m^{2-o(1)})$ conditional lower bounds for
disjunctive coBüchi objectives and disjunctive Büchi / coBüchi queries on MDPs.
The $\Omega(m^{2-o(1)})$ bound even holds for the disjunction of singleton coBüchi
objectives. Further, for coBüchi objectives our $\Omega(kn^{2-o(1)})$ bound also holds on graphs,
which separates conjunction and disjunction also in this setting.
Third, we can also see the results for Streett and Rabin objectives
as a separation between conjunction and disjunction. Recall that Streett
objectives are the conjunction of one-pair Streett objectives and
Rabin objectives are the disjunction of one-pair Rabin objectives.
Further, both Büchi and coBüchi objectives are special cases of each of
one-pair Streett and one-pair Rabin objectives. In particular the following
separations are easy observations or corollaries of our results:
For the disjunction of one-pair Streett objectives the same conditional
lower bounds (and the same upper bound, see Observation 6.10) as
for the disjunction of coBüchi objectives apply.
Thus the disjunction of one-pair Streett objectives is harder than the
conjunction of one-pair Streett objectives (under assumptions (A1)/(A2)).
The conjunction of one-pair Rabin objectives can be solved in the same time
as conjunctive Büchi objectives. Thus also the disjunction of one-pair
Rabin objectives is harder than their conjunction.
Remark about Streett and Rabin objective separation.
One remarkable aspect of our objective separation result is that we achieve it
for Rabin and Streett objectives (both in graphs and MDPs), which are dual.
In more general models such as games on graphs, Rabin objectives are
\NP-complete and Streett objectives are \coNP-complete [22].
In graphs and MDPs, both Rabin and Streett objectives can be solved in
polynomial time.
Since Rabin and Streett objectives are dual, and they belong to the
complementary complexity classes (either both in P, or one is \NP-complete and
the other \coNP-complete), they were considered to be equivalent for algorithmic
purposes for graphs and MDPs.
Quite surprisingly we show that under some widely believed assumptions, both
for MDPs and graphs, Rabin objectives are algorithmically harder than Streett objectives.
Technical contributions.
Algorithms.
(1) We show that given the MEC-decomposition of an MDP, the almost-sure reachability problem
can be solved in linear time on the MDP where each MEC is contracted to
a player 1 vertex. This yields the improved algorithms
for disjunctive queries of reachability and Büchi objectives on MDPs.
(2) For MDPs with disjunctive coBüchi objectives and disjunctive queries of
coBüchi objectives we use the MEC-decomposition in a different way;
namely, we show that it is sufficient to do a linear-time computation in
each MEC per coBüchi objective to solve both disjunctive questions.
(3) Further, we show that for graphs with a
disjunctive coBüchi objective in which each of the
single coBüchi objectives has a singleton target set,
the problem can be solved in linear time with a breadth-first-search-like algorithm.
(4) Finally, we provide faster algorithms for MDPs with Streett objectives.
The straightforward algorithm repeatedly
computes MEC-decompositions in a black-box manner; we show that one can
open this black-box and combine the current
best algorithms for MEC-decomposition [16] and graphs with
Streett objectives [28, 17]
to achieve almost the same running time for MDPs with Streett objectives
as for graphs.
Conditional Lower Bounds.
(a) Conjecture (A1)
is equivalent to the conjecture that there is no combinatorial $O(n^{3-\varepsilon})$
time algorithm to detect whether an $n$-vertex graph contains a triangle [40].
We show that triangle-detection in graphs can be linear-time reduced
to disjunctive queries of almost-sure reachability in MDPs and thus
that the latter is hard assuming (A1).
(b) For the hardness under (A2) we consider the intermediate problem
Orthogonal Vectors, which is known to be hard under (A2) [41],
and linear-time reduce it to disjunctive queries of almost-sure
reachability in MDPs.
(c) For disjunctive safety problems we give a linear-time reduction from
triangle-detection that only requires player 1 vertices and thus
hardness also holds in graphs when assuming (A1).
(d) However, the reduction we give from Orthogonal Vectors
to disjunctive safety problems requires random vertices and thus hardness
under (A2) only holds on MDPs.
(e) Based on the hardness results for almost-sure reachability and
safety, we then exploit reductions between the different types of
objectives to obtain the hardness results for Büchi, coBüchi,
and Rabin.
Outline.
In Section 2 we provide formal definitions, describe the connections
between different objectives, and state the
conjectures on which the conditional lower bounds are based.
Section 3 is about disjunctive reachability queries on MDPs; we
first present the improved algorithm and then the conditional lower bounds.
In Section 4 we describe the conditional lower bounds for disjunctive
safety problems on graphs and MDPs. In Section 5 we provide
the improved algorithms for MDPs with Streett objectives. In Section 6
we show how the conditional lower bounds extend from reachability and safety
to Büchi, coBüchi, and Rabin and present algorithms for MDPs with Rabin objectives and
for MDPs with disjunctive objectives and queries of Büchi and coBüchi objectives.
In Section 7 we describe the linear time algorithm for disjunctive
coBüchi objectives on graphs for the special case when all target sets are singletons.
We conclude in Section 8.
2 Preliminaries
Markov Decision Processes (MDPs) and Graphs.
An MDP $P=((V,E),\allowbreak(V_{1},V_{R}),\allowbreak\delta)$ consists of a finite directed
graph with vertices $V$ and edges $E$ with a partition of the vertices into
player 1 vertices $V_{1}$ and random vertices $V_{R}$ and a
probabilistic transition function $\delta$. We call an edge $(u,v)$ with $u\in V_{1}$
a player 1 edge and an edge $(v,w)$ with $v\in V_{R}$ a random edge.
The probabilistic transition function is a function from $V_{R}$ to $\mathcal{D}(V)$,
where $\mathcal{D}(V)$ is the set of probability distributions over $V$, and
for $v\in V_{R}$ we have $(v,w)\in E$ if and only if $\delta(v)[w]>0$. For the purpose
of this paper we assume for simplicity that, for each random vertex $v$,
$\delta(v)$ is the uniform distribution over all $w\in V$ with $(v,w)\in E$; this is w.l.o.g. as we only ask whether a probability is zero or one
(qualitative analysis) or zero or larger than zero. Graphs are a special case
of MDPs with $V_{R}=\emptyset$.
Sub-MDPs and Maximal End-Components.
A sub-MDP of an MDP $P$ induced by a vertex set $X\subseteq V$
is defined as $P[X]=((X,E\cap(X\times X)),(V_{1}\cap X,V_{R}\cap X),\delta^{\prime})$, where $\delta^{\prime}:X\rightarrow\mathcal{D}(X)$
assigns to each $v\in V_{R}\cap X$ the uniform distribution over all $w\in X$
with $(v,w)\in E$.
An end-component (EC) of an MDP $P$ is a set of
vertices $X\subseteq V$ such that (a) the induced sub-MDP
$P[X]$
is strongly connected, (b) all outgoing edges in $E$ of vertices in
$X\cap V_{R}$ are contained in $P[X]$, and (c) $P[X]$ contains at least one edge.
An end-component is a maximal end-component (MEC) if it is maximal
under set inclusion.
An end-component is trivial if it consists of a single vertex (with
a self-loop), otherwise it is non-trivial.
The MEC-decomposition of an MDP consists of all MECs of the MDP and the
set of vertices that do not belong to any MEC.
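The three defining conditions of an end-component translate directly into a membership test. A minimal sketch (our own helper, not from the paper), with the MDP given as a successor-list dict and the set of random vertices passed separately:

```python
def is_end_component(edges, random_vertices, X):
    """Check conditions (a)-(c): P[X] strongly connected, random vertices
    in X keep all their edges inside X, and P[X] has at least one edge."""
    X = set(X)
    # (b) random vertices in X must not have edges leaving X
    for v in X & set(random_vertices):
        if any(w not in X for w in edges.get(v, [])):
            return False
    # successor lists of the induced sub-MDP P[X]
    sub = {v: [w for w in edges.get(v, []) if w in X] for v in X}
    # (c) at least one edge
    if not any(sub.values()):
        return False
    # (a) strong connectivity: all of X reachable from one root in P[X]
    # and in its edge-reversal
    def covers(adj, root):
        seen, stack = {root}, [root]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen == X
    rev = {v: [] for v in X}
    for v in X:
        for w in sub[v]:
            rev[w].append(v)
    root = next(iter(X))
    return covers(sub, root) and covers(rev, root)
```

Note that a singleton with a self-loop passes the test, matching the definition of a trivial end-component.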
Plays and Strategies.
A play or infinite path in $P$ is an infinite sequence $\omega=\langle v_{0},v_{1},v_{2},\ldots\rangle$ such that $(v_{i},v_{i+1})\in E$ for all $i\in\mathbb{N}$;
we denote by $\Omega$ the set of all plays.
A player 1 strategy $\sigma:V^{*}\cdot V_{1}\rightarrow V$ is a function that
assigns to every finite prefix $\omega\in V^{*}\cdot V_{1}$ of a play that ends in a
player 1 vertex $v$ a successor vertex $\sigma(\omega)\in V$ such that
there exists an edge $(v,\sigma(\omega))\in E$; we denote by $\Sigma$ the
set of all player 1 strategies. A strategy is memoryless if we have
$\sigma(\omega)=\sigma(\omega^{\prime})$ for any $\omega,\omega^{\prime}\in V^{*}\cdot V_{1}$ that
end in the same vertex $v\in V_{1}$.
Objectives and Almost-Sure Winning Sets.
An objective $\psi$
is a subset of $\Omega$ said to be winning for player 1. We say that a
play $\omega\in\Omega$ satisfies the objective if $\omega\in\psi$.
For any measurable set of plays $A\subseteq\Omega$
we denote by $\mathrm{Pr}^{\sigma}_{v}\left(A\right)$ the probability that a play starting at $v\in V$
belongs to $A$ when player 1 plays strategy $\sigma$.
A strategy $\sigma$ is almost-sure (a.s.) winning from a vertex $v\in V$
for an objective $\psi$ if $\mathrm{Pr}^{\sigma}_{v}\left(\psi\right)=1$. In graphs the existence
of an almost-sure winning strategy corresponds to the existence of a play in the objective. The almost-sure
winning set $\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\psi\right)$
of player 1 is the set of vertices for which player 1 has an
almost-sure winning strategy. Computing the almost-sure winning set for some
objective is also called qualitative analysis of MDPs. Below we define
the objectives used in this work. Let $\mathrm{Inf}(\omega)$ for $\omega\in\Omega$ denote
the set of vertices that occur infinitely often in $\omega$.
Reachability
For a vertex
set $T\subseteq V$ the reachability
objective is the set of infinite paths that contain a vertex of $T$, i.e.,
$\textrm{Reach}\left(T\right)=\{\langle v_{0},v_{1},v_{2},\ldots\rangle\in\Omega\mid\exists j\geq 0:v_{j}\in T\}$.
Safety
For a vertex set $T\subseteq V$ the safety
objective is the set of infinite paths that do not contain any vertex
of $T$, i.e.,
$\textrm{Safety}\left(T\right)=\{\langle v_{0},v_{1},v_{2},\ldots\rangle\in\Omega\mid\forall j\geq 0:v_{j}\notin T\}$.
Büchi
For a vertex set $T\subseteq V$ the Büchi
objective is the set of infinite paths in which a vertex of $T$ occurs
infinitely often, i.e.,
$\textrm{B{\"{u}}chi}\left(T\right)=\{\omega\in\Omega\mid\mathrm{Inf}(\omega)\cap T\neq\emptyset\}$.
coBüchi
For a
vertex set $T\subseteq V$ the coBüchi
objective is the set of infinite paths for which no vertex of $T$
occurs infinitely often, i.e.,
$\textrm{coB{\"{u}}chi}\left(T\right)=\{\omega\in\Omega\mid\mathrm{Inf}(\omega)\cap T=\emptyset\}$.
Streett
Given a set $\mathrm{SP}$ of
$k$ pairs $(L_{i},U_{i})$ of vertex sets $L_{i},U_{i}\subseteq V$ with $1\leq i\leq k$, the Streett objective is the set of infinite paths
for which it holds for each $1\leq i\leq k$ that whenever a vertex of $L_{i}$
occurs infinitely often, then a vertex of $U_{i}$ occurs infinitely often, i.e.,
$\textrm{Streett}\left(\mathrm{SP}\right)=\{\omega\in\Omega\mid L_{i}\cap\mathrm{Inf}(\omega)=\emptyset\text{ or }U_{i}\cap\mathrm{Inf}(\omega)\neq\emptyset\text{ for all }1\leq i\leq k\}$.
Rabin
Given a set $\mathrm{RP}$ of $k$ pairs $(L_{i},U_{i})$ of vertex sets $L_{i},U_{i}\subseteq V$ with $1\leq i\leq k$, the Rabin objective is the set of infinite paths
for which there exists an $i$, $1\leq i\leq k$, such that a vertex of $L_{i}$
occurs infinitely often but no vertex of $U_{i}$ occurs infinitely often, i.e.,
$\textrm{Rabin}\left(\mathrm{RP}\right)=\{\omega\in\Omega\mid L_{i}\cap\mathrm{Inf}(\omega)\neq\emptyset\text{ and }U_{i}\cap\mathrm{Inf}(\omega)=\emptyset\text{ for some }1\leq i\leq k\}$.
Given $c$ objectives $\psi_{1},\ldots,\psi_{c}$, the conjunctive objective
$\psi=\psi_{1}\cap\ldots\cap\psi_{c}$ is given by the intersection of the $c$
objectives, and the disjunctive objective $\psi=\psi_{1}\cup\ldots\cup\psi_{c}=\bigvee_{i=1}^{c}\psi_{i}$
is given by the union of the $c$
objectives. For the conjunctive query of $c$ objectives
$\psi_{1},\ldots,\psi_{c}$ we define the (almost-sure) winning set to be
the set of vertices that have one strategy that is (almost-sure) winning for each of
the objectives $\psi_{1},\ldots,\psi_{c}$.
Analogously, a vertex is in
the (almost-sure) winning set $\bigvee_{i=1}^{c}\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\psi_{i}\right)$ for the disjunctive query of the $c$
objectives if it is in the (almost-sure) winning set for at least
one of the $c$ objectives (i.e., we take the union of the winning sets).
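For eventually periodic plays, $\mathrm{Inf}(\omega)$ is just the set of vertices on the repeated cycle, so each of the objectives above reduces to a set test on $\mathrm{Inf}(\omega)$. A minimal sketch (sets as Python sets; function names are ours), which also exhibits the Streett/Rabin duality:

```python
def buchi(inf, T):
    return bool(inf & T)          # some target vertex recurs

def cobuchi(inf, T):
    return not (inf & T)          # no target vertex recurs

def streett(inf, pairs):
    # for every pair (L, U): L recurring implies U recurring
    return all(not (inf & L) or bool(inf & U) for L, U in pairs)

def rabin(inf, pairs):
    # for some pair (L, U): L recurring and U not recurring
    return any(bool(inf & L) and not (inf & U) for L, U in pairs)
```

Observe that `rabin(inf, pairs)` is exactly `not streett(inf, pairs)` on the same pairs, reflecting that the two objectives are dual.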
Below we present several observations that interlink different types of objectives.
Observation 2.1.
The almost-sure winning set for a conjunctive objective is the same as for
the corresponding conjunctive query.
Proof.
We have for any $v\in V$ and $\sigma\in\Sigma$ and any two objectives $\psi_{1}$, $\psi_{2}$ that $\mathrm{Pr}^{\sigma}_{v}\left(\psi_{1}\land\psi_{2}\right)=1$
iff $\mathrm{Pr}^{\sigma}_{v}\left(\psi_{1}\right)=1$ and $\mathrm{Pr}^{\sigma}_{v}\left(\psi_{2}\right)=1$.
∎
Observation 2.2.
On graphs (i.e. $V_{R}=\emptyset$)
the winning set for a disjunctive objective is the same as for the
corresponding disjunctive query.
Proof.
For any two objectives $\psi_{1}$, $\psi_{2}$ we have for each $\omega\in\Omega$
that $\omega\in(\psi_{1}\cup\psi_{2})$ iff $\omega\in\psi_{1}$ or $\omega\in\psi_{2}$.
∎
Observation 2.3.
The disjunctive objective of Büchi (resp. reachability)
objectives is the same as the Büchi (resp. reachability) objective of the
union of the target sets.
Proof.
We show the claim for Büchi; the proof for reachability is analogous. For two
target sets $T_{1},T_{2}\subseteq V$ we have
$\{\omega\in\Omega\mid\mathrm{Inf}(\omega)\cap T_{1}\neq\emptyset\}\cup\{\omega\in\Omega\mid\mathrm{Inf}(\omega)\cap T_{2}\neq\emptyset\}=\{\omega\in\Omega\mid\mathrm{Inf}(\omega)\cap(T_{1}\cup T_{2})\neq\emptyset\}$. ∎
Observation 2.4.
The conjunctive objective of coBüchi (resp. safety)
objectives is the same as the coBüchi (resp. safety) objective of the
union of the target sets.
Proof.
We show the claim for coBüchi; the proof for safety is analogous. For two
target sets $T_{1},T_{2}\subseteq V$ we have
$\{\omega\in\Omega\mid\mathrm{Inf}(\omega)\cap T_{1}=\emptyset\}\cap\{\omega\in\Omega\mid\mathrm{Inf}(\omega)\cap T_{2}=\emptyset\}=\{\omega\in\Omega\mid\mathrm{Inf}(\omega)\cap(T_{1}\cup T_{2})=\emptyset\}$. ∎
By definition, each path winning for a safety objective
is also winning
for the corresponding coBüchi objective, while the converse is not always true.
However, when it comes to the non-emptiness of winning sets these two objectives become equivalent.
Observation 2.5.
For a fixed MDP $P$ the winning set for $\textrm{Safety}\left(T\right)$ is non-empty
iff the winning set for $\textrm{coB{\"{u}}chi}\left(T\right)$ is non-empty.
This equivalence extends also to conjunctions and disjunctions of safety and coBüchi objectives.
Proof.
By [21, p. 891] (see also Section 5.1)
the winning set for $\textrm{Safety}\left(T\right)$ resp. $\textrm{coB{\"{u}}chi}\left(T\right)$
is non-empty if and only if there exists an end-component $X$ with $X\cap T=\emptyset$.
∎
Observation 2.6.
Disjunctive (Obj./Qu.) reachability in MDPs can be reduced in linear time to
disjunctive (Obj./Qu.) Büchi objectives in MDPs and vice versa.
Proof.
Reachability $\Rightarrow$ Büchi: For each target set $T$
replace each $t\in T$ with two vertices: $t_{\text{in}}\in V_{1}$ and
$t_{\text{out}}$, where $t_{\text{out}}$ belongs to the same player as $t$.
Assign all incoming edges of $t$ to $t_{\text{in}}$
and all outgoing edges of $t$ to $t_{\text{out}}$, and add the edge $(t_{\text{in}},t_{\text{out}})$ and the self-loop $(t_{\text{in}},t_{\text{in}})$. Let the
corresponding target set for Büchi be the union of $t_{\text{in}}$ for all
$t\in T$. A vertex $t_{\text{in}}$ in the modified MDP can be visited
infinitely often almost surely iff in the original MDP the vertex $t$ can be
reached almost surely.
Büchi $\Rightarrow$ Reachability: For each target set $T$
replace each $t\in T$ with three vertices: $t_{\text{in}}\in V_{R}$,
$t_{r}\in V_{1}$, and $t_{\text{out}}$, where $t_{\text{out}}$ belongs to the
same player as $t$. Assign all incoming edges of $t$ to $t_{\text{in}}$
and all outgoing edges of $t$ to $t_{\text{out}}$, and add the edges
$(t_{\text{in}},t_{\text{out}})$, $(t_{\text{in}},t_{r})$, and $(t_{r},t_{\text{out}})$.
Let the corresponding target set for Reachability be the union of $t_{r}$ for all
$t\in T$. A vertex $t_{r}$ in the modified MDP can be reached almost
surely iff in the original MDP the vertex $t$ can almost surely
be visited infinitely often.
∎
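The Reachability $\Rightarrow$ Büchi gadget can be sketched as follows (our own encoding: split vertices are represented as tuples, the MDP as a successor-list dict with player 1 vertex set `V1`, all other vertices random):

```python
def reach_to_buchi(edges, V1, T):
    """Replace each t in T by t_in (player 1, with gadget edge and
    self-loop) and t_out (same player as t), as described above."""
    rename = {t: (t, 'in') for t in T}      # incoming edges go to t_in
    new_edges, new_V1 = {}, set()
    for v, succs in edges.items():
        src = (v, 'out') if v in T else v   # outgoing edges leave from t_out
        new_edges[src] = [rename.get(w, w) for w in succs]
        if v in V1:
            new_V1.add(src)
    for t in T:
        t_in, t_out = (t, 'in'), (t, 'out')
        new_edges[t_in] = [t_out, t_in]     # edge (t_in, t_out) and self-loop
        new_V1.add(t_in)                    # t_in belongs to player 1
    return new_edges, new_V1, {(t, 'in') for t in T}
```

The Büchi $\Rightarrow$ Reachability direction is analogous, using the three-vertex gadget with a random $t_{\text{in}}$.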
Observation 2.7.
Conjunctive Büchi (resp. coBüchi) objectives are special instances of Streett objectives.
Proof.
For Büchi let $L_{i}=V$ and $U_{i}=T_{i}$, for coBüchi let $L_{i}=T_{i}$
and $U_{i}=\emptyset$.
∎
Observation 2.8.
Disjunctive Büchi (resp. coBüchi) objectives are special instances of Rabin objectives.
Proof.
For Büchi let $L_{i}=T_{i}$ and $U_{i}=\emptyset$, for coBüchi let
$L_{i}=V$ and $U_{i}=T_{i}$.
∎
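As a quick sanity check of these encodings, the following sketch evaluates the Rabin semantics directly on a set $\mathrm{Inf}(\omega)$ (as defined earlier in this section) and compares it with the disjunctive Büchi and coBüchi semantics; all names are ours:

```python
def rabin_holds(inf, pairs):
    # Rabin: some pair (L, U) with L recurring and U not recurring
    return any(bool(inf & L) and not (inf & U) for L, U in pairs)

V = {1, 2, 3, 4}
targets = [{1}, {4}]
inf = {2, 3}
# Observation 2.8, Büchi case: L_i = T_i, U_i = empty set
disj_buchi = any(bool(inf & T) for T in targets)
assert rabin_holds(inf, [(T, set()) for T in targets]) == disj_buchi
# Observation 2.8, coBüchi case: L_i = V, U_i = T_i
disj_cobuchi = any(not (inf & T) for T in targets)
assert rabin_holds(inf, [(V, T) for T in targets]) == disj_cobuchi
```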
2.1 Conjectured Lower Bounds
While classical complexity results are based on standard complexity-theoretical assumptions,
e.g., $\P\neq\NP$, polynomial lower bounds are often based on widely believed,
conjectured lower bounds
about well-studied algorithmic problems.
Our lower bounds will be conditioned on the popular conjectures discussed below.
First, we consider conjectures on Boolean matrix multiplication [40, 3] and
triangle detection [3] in graphs, which form the basis for our lower bounds on dense graphs. A triangle in a graph is a triple $x,y,z$ of vertices
such that $(x,y),(y,z),(z,x)\in E$.
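For reference, the combinatorial baseline that BMM/STC conjecture to be essentially optimal is the naive cubic scan; a sketch (adjacency as a dict of successor sets):

```python
def has_triangle(adj):
    """Naive O(n^3) triangle detection: try every ordered triple x, y, z
    and test for the edges (x, y), (y, z), (z, x). STC conjectures that no
    combinatorial algorithm improves on this by a polynomial factor."""
    vs = list(adj)
    return any(y in adj[x] and z in adj[y] and x in adj[z]
               for x in vs for y in vs for z in vs)
```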
Conjecture 2.9 (Combinatorial Boolean Matrix Multiplication Conjecture (BMM)).
There is no $O(n^{3-\varepsilon})$ time combinatorial algorithm for computing the Boolean product of
two $n\times n$ matrices for any $\varepsilon>0$.
Conjecture 2.10 (Strong Triangle Conjecture (STC)).
There is no $O(\min\{n^{\omega-\varepsilon},$ $m^{2\omega/(\omega+1)-\varepsilon}\})$ expected time algorithm and no
$O(n^{3-\varepsilon})$ time combinatorial algorithm that can detect whether a graph contains a triangle for any $\varepsilon>0$,
where $\omega<2.373$ is the matrix multiplication exponent.
By a result of Vassilevska Williams and Williams [40],
BMM is equivalent to the combinatorial part of STC.
Moreover, if we do not restrict ourselves to combinatorial algorithms, STC still gives a super-linear lower bound.
Second, we consider the Strong Exponential Time Hypothesis [31, 12]
and the Orthogonal Vectors Conjecture [4], the former dealing with satisfiability in propositional logic and the latter with
the Orthogonal Vectors Problem.
The Orthogonal Vectors Problem (OV). Given two sets $S_{1},S_{2}$ of $d$-bit
vectors with $|S_{i}|\leq N$, $d\in\Theta(\log N)$, are there $u\in S_{1}$
and $v\in S_{2}$ such that $\sum_{i=1}^{d}u_{i}\cdot v_{i}=0$?
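The quadratic brute force for OV, which OVC conjectures cannot be improved by a polynomial factor, is a short scan (vectors as 0/1 tuples):

```python
def has_orthogonal_pair(S1, S2):
    """Naive O(N^2 * d) scan: test every pair u, v for a zero dot product."""
    return any(all(ui * vi == 0 for ui, vi in zip(u, v))
               for u in S1 for v in S2)
```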
Conjecture 2.11 (Strong Exponential Time Hypothesis (SETH)).
For each $\varepsilon>0$ there is a $k$ such that $k$-CNF-SAT on $n$ variables and $m$ clauses
cannot be solved in $O(2^{(1-\varepsilon)n}\operatorname{poly}(m))$ time.
Conjecture 2.12 (Orthogonal Vectors Conjecture (OVC)).
There is no $O(N^{2-\varepsilon})$ time algorithm for the Orthogonal Vectors Problem for any $\varepsilon>0$.
By a result of Williams [41]
we know that SETH implies OVC, i.e.,
whenever a problem is hard assuming OVC, it is also hard when assuming SETH.
Hence, it is preferable to use OVC for proving lower bounds.
Finally, to the best of our knowledge, no relations between the former two conjectures and the latter two conjectures
are known.
Remark 2.13.
The conjectures that no polynomial improvements over the best known
running times are possible do not exclude improvements by sub-polynomial
factors such as poly-logarithmic factors or factors of, e.g., $2^{\sqrt{\log n}}$
as in [42].
3 Reachability in MDPs
First let us briefly discuss reachability on graphs.
The winning set for disjunctive reachability can simply be computed by taking the union of
all target sets and then starting a breadth-first search, which takes $O(m)$ time.
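A sketch of this step, assuming the graph is given as a successor-list dict: one backward BFS from the union of the target sets finds all vertices that can reach it.

```python
from collections import deque

def disjunctive_reach_graph(edges, targets):
    """Vertices that can reach the union of all target sets, in O(m)."""
    T = set().union(*targets)
    rev = {}                      # reverse adjacency lists
    for v, succs in edges.items():
        for w in succs:
            rev.setdefault(w, []).append(v)
    win, queue = set(T), deque(T)
    while queue:                  # backward BFS from the joint target set
        for u in rev.get(queue.popleft(), []):
            if u not in win:
                win.add(u)
                queue.append(u)
    return win
```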
On the other hand, the problem becomes $\NP$-complete when considering conjunctive reachability [23],
as with conjunction one can require a path to contain several vertices
and in particular one can embed the well-known $\NP$-hard Hamiltonian path problem.
Turning to MDPs, notice that in MDPs based on acyclic graphs almost-sure reachability is equivalent to
computing the winning set for a player with reachability objectives in a 2-player graph-game where
all the random vertices are owned by the opponent
(as the random player follows the opponent's optimal strategy with non-zero probability).
As computing the winning set for conjunctive reachability in the 2-player graph-game is $\PSPACE$-hard [23]
even for acyclic graphs, we have that conjunctive almost-sure reachability in MDPs is $\PSPACE$-hard as well.
Moreover, as we will show later, disjunctive reachability also becomes harder compared to graphs,
i.e., we will provide polynomial lower bounds based on popular conjectures.
In the first part of this section we present an improved algorithm for disjunctive reachability queries in MDPs.
As disjunctive reachability objectives can be easily reduced to a single reachability objective
by taking the union of all target sets, the algorithm mentioned above is also
an algorithm for disjunctive reachability objectives (by setting $k=1$).
In the second part we present two lower bounds for disjunctive reachability queries,
an $\Omega(n^{3-o(1)})$ lower bound based on STC and an $\Omega(m^{2-o(1)})$ lower bound based on OVC (resp. SETH).
3.1 Algorithm for Disjunctive Reachability Queries in MDPs
In this section we present an algorithm to compute the almost-sure winning
set for disjunctive reachability queries in MDPs.
In particular we show the following theorem:
Theorem 3.1.
For an MDP $P$ and target sets $T_{i}\subseteq V$
for $1\leq i\leq k$ the almost-sure winning set for disjunctive reachability
queries can be computed in $O(km+\textsc{MEC})$ time, where MEC is the time
needed to compute a MEC-decomposition.
A vertex $v$ is in the almost-sure winning
set if player 1 has a strategy to reach one of the $k$ target sets $T_{i}$
with probability 1 starting from $v$.
Note that the sets $T_{i}$ are not absorbing in contrast to what is often
assumed for the reachability objective in MDPs. The trivial algorithm would be
to invoke an algorithm for almost-sure reachability in MDPs $k$ times (for one
target set $T_{i}$ at a time, temporarily making the set $T_{i}$ absorbing
if necessary). The
crucial observation to improve upon this is that given an MDP without
non-trivial end-components, almost-sure reachability in MDPs can be solved in
linear time.
We further observe that, for each target set,
either all vertices of an end-component are winning (almost-surely) or none.
Thus if we know the MEC-decomposition of an MDP,
we can contract the MECs
to single vertices with self-loops and solve almost-sure reachability on the
derived MDP. This derived MDP does not have non-trivial end-components,
therefore given the MEC decomposition, the problem can be solved in linear time
per target set. Our algorithm implies that
almost-sure reachability (i.e. $k=1$) can be solved in the same asymptotic
time needed to determine the MEC-decomposition of an MDP.
Definition 3.2 (Contraction of MECs).
Contracting a MEC $X$ in an MDP $P$
creates a modified MDP $P^{\prime}$ from $P$ where the vertices of $X$ are
replaced by a single vertex $u$ that belongs to player 1 and the edges to or
from a vertex in $X$ are replaced with edges to or from, respectively, the
vertex $u$; parallel edges are omitted from $P^{\prime}$, while for parallel random edges
the probabilities are added up.
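A minimal sketch of the contraction (our own encoding: each MEC becomes a single `frozenset` vertex owned by player 1, successor sets collapse parallel edges, and the uniform probabilities of the derived MDP can be read off the edge sets):

```python
def contract_mecs(edges, V1, mecs):
    """Contract each MEC of the MDP to one player 1 vertex (Def. 3.2).
    edges: dict vertex -> list of successors; mecs: list of vertex sets."""
    rep = {v: frozenset(X) for X in mecs for v in X}   # vertex -> its MEC
    new_edges = {}
    for v, succs in edges.items():
        src = rep.get(v, v)
        new_edges.setdefault(src, set()).update(rep.get(w, w) for w in succs)
    # contracted vertices belong to player 1; others keep their owner
    new_V1 = {rep.get(v, v) for v in V1} | set(rep.values())
    return new_edges, new_V1
```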
Observation 3.3 ([15]).
The MDP $P^{\prime}$ that is constructed from the MDP $P$ by contracting all
MECs of $P$ does not contain any non-trivial end-components.
Proof.
Assume by contradiction that the MDP $P^{\prime}$ contains an end-component $X^{\prime}$
with at least two vertices. Let $X$ be the set of vertices corresponding
to the vertices of $X^{\prime}$ in the original MDP $P$. Then $X$ is an
end-component in $P$, a contradiction to the definition of $P^{\prime}$.
∎
In the derived MDP we basically apply, for each target set,
one iteration of the classical almost-sure reachability algorithm but with a slightly
modified random attractor computation defined below. The classical algorithm
repeatedly executes the following two steps:
1) Compute the vertices $S$ from which player 1 can reach the target set $T$.
2a) If $S=V$, output $S$ as the (almost-sure) winning set of player 1.
2b) If $S\subsetneq V$, remove the random attractor of $V\setminus S$
from the graph (and from $V$) and repeat.
Intuitively, a random attractor of a set of vertices $W$ contains the vertices
from which there is a positive probability to reach $W$ for every strategy of
player 1. The extended random attractor, formally defined below
and used implicitly in [16], additionally
includes player 1 vertices for which the only player 1 strategy to avoid a
positive probability to reach $W$ is using a self-loop of a vertex not in
the target set.
Additionally, we explicitly avoid adding vertices in the considered target set
to the attractor. In the classical algorithm this was achieved by making the
target set absorbing, which would not work for the extended random attractor.
Definition 3.4 (Extended Random Attractor).
Let $E(v)$ denote the set of vertices $u\in V$
for which $(v,u)\in E$.
In an MDP $P=((V,E),(V_{1},V_{R}),\delta)$
the extended random attractor $\mathit{Attr}^{+}(P,W,T)$ for sets of vertices
$W,T\subseteq V$ is defined as $\mathit{Attr}^{+}(P,W,T)=\bigcup_{j\geq 0}Z_{j}$
where $Z_{0}=W\setminus T$ and, for $j\geq 0$,
$Z_{j+1}$ is defined recursively as $Z_{j+1}=Z_{j}\cup\{v\in V_{R}\mid E(v)\cap Z_{j}\neq\emptyset\}\cup\{v\in V_{1}\mid E(v)\subseteq Z_{j}\cup\{v\}\}\setminus T$. In contrast to a random attractor, (a) a set of
vertices $T$ can be specified that is never included in $\mathit{Attr}^{+}(P,W,T)$
and (b) a player 1 vertex is also included in $Z_{j+1}$ if all its outgoing
edges apart from its self-loop are contained in $Z_{j}$.
The extended random attractor $A=\mathit{Attr}^{+}(P,W,T)$
can be computed in $O(\sum_{v\in A}\mathit{Indeg}(v)+\lvert V_{1}\setminus T\rvert)$ time [9, 30].
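A sketch of this computation within the stated time bound, using the usual counting technique (MDP as a successor-list dict; vertices outside `V1` are random; `W`, `T` as in Definition 3.4). Each player 1 vertex tracks how many of its outgoing non-self-loop edges still avoid the attractor; each attractor vertex is dequeued exactly once, so every edge is inspected a constant number of times:

```python
from collections import deque

def extended_random_attractor(edges, V1, W, T):
    """Attr+(P, W, T) from Definition 3.4 (a sketch)."""
    attr = set(W) - set(T)
    queue = deque(attr)
    # remaining non-self-loop out-edges per player 1 vertex outside T
    count = {v: sum(1 for w in edges[v] if w != v) for v in V1 if v not in T}
    for v, c in count.items():   # only self-loops left: already in Z_1
        if c == 0 and v not in attr:
            attr.add(v)
            queue.append(v)
    rev = {}
    for v, succs in edges.items():
        for w in succs:
            rev.setdefault(w, []).append(v)
    while queue:
        z = queue.popleft()      # each attractor vertex is dequeued once
        for v in rev.get(z, []):
            if v in attr or v in T:
                continue
            if v in V1:
                count[v] -= 1
                if count[v] > 0:
                    continue     # still an escape edge besides the self-loop
            attr.add(v)          # random vertex with an edge into attr, or
            queue.append(v)      # player 1 vertex with only its self-loop left
    return attr
```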
Putting the pieces together, our algorithm looks as follows: First, the
MEC-decomposition of the input MDP $P$ is computed. Then all MECs of $P$
are contracted to construct the derived MDP $P^{\prime}$, which does not contain
any non-trivial MECs. For each target set we execute one iteration of the classical
algorithm, replacing the usual random attractor with the extended random attractor.
The union of the winning sets determined for each target set then gives the
winning set of player 1 for disjunctive reachability.
Proposition 3.5 (Runtime).
Algorithm 1 runs in time $O(km+\textsc{MEC})$.
Proof.
Contracting all MECs can be done in time $O(m)$ as we have to consider each
edge (and vertex) at most twice. The for-loop is executed $k$ times.
Within the for-loop both the vertices $S$ that can reach $T_{i}$ and the
extended random attractor $A=\mathit{Attr}^{+}(P^{\prime},V\setminus S,T_{i})$ can be found
in linear time, that is, in $O(km)$ time over all iterations of the for-loop.
Undoing the contraction takes again at most $O(m)$ time.
∎
Proposition 3.6 (Correctness).
For an MDP $P$ and target sets $T_{i}\subseteq V$
for $1\leq i\leq k$ Algorithm 1 returns the set
$\bigvee_{1\leq i\leq k}\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Reach}\left(T_{i}\right)\right)$.
Proof.
We assume that in the MDP $P$ each vertex
has at least one outgoing edge and each random vertex has at least one outgoing
edge that is not a self-loop. This is w.l.o.g. because $\bigvee_{1\leq i\leq k}\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Reach}\left(T_{i}\right)\right)$ does not change if we replace each vertex without
outgoing edges by a vertex with a self-loop and treat a random vertex whose
only outgoing edge is a self-loop as a player 1 vertex.
First note that by definition a vertex is in $\bigvee_{1\leq i\leq k}\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Reach}\left(T_{i}\right)\right)$
if and only if it is in $\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Reach}\left(T_{i}\right)\right)$ for some $1\leq i\leq k$.
Hence we can consider the $k$ target
sets separately by showing that in the $i$-th iteration of the for-loop of
Algorithm 1 the set $\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Reach}\left(T_{i}\right)\right)$ is identified.
Let $P^{\prime}$ be the MDP derived from the MDP $P$ by contracting all
MECs of $P$ and let $T_{i}^{\prime}$ be the set of contracted vertices that
represent some vertex of $T_{i}$ as in Algorithm 1.
We use the superscript ${}^{\prime}$ to denote sets related to the MDP $P^{\prime}$ and
omit the superscript for sets related to the original MDP $P$.
Note that since only strongly connected subgraphs are
contracted in $P^{\prime}$, it clearly holds that a vertex $v\in V$ can reach
another vertex $u\in V$ if and only if the vertex $v^{\prime}\in V^{\prime}$ corresponding to $v$ can reach the vertex $u^{\prime}\in V^{\prime}$ corresponding to $u$.
Fix some iteration $i$ and let
$S^{\prime}=\textnormal{{GraphReach}}(P^{\prime},T^{\prime}_{i})$, let $A^{\prime}=\mathit{Attr}^{+}(P^{\prime},V^{\prime}\setminus S^{\prime},T^{\prime}_{i})$, and let $W^{\prime}_{i}=V^{\prime}\setminus A^{\prime}$,
that is, $W^{\prime}_{i}$ is the set added to $W^{\prime}$ in the $i$-th iteration of the
for-loop of Algorithm 1. Let the same letters without superscript
denote the corresponding sets of vertices after reverting the contraction
of the MECs of $P$.
We prove the proposition by first showing $\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Reach}\left(T_{i}\right)\right)\subseteq W_{i}$ and then $W_{i}\subseteq\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Reach}\left(T_{i}\right)\right)$.
We prove $\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Reach}\left(T_{i}\right)\right)\subseteq W_{i}$ by showing $A\subseteq V\setminus\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Reach}\left(T_{i}\right)\right)$ by induction on the
recursive definition of $A^{\prime}=\mathit{Attr}^{+}(P^{\prime},V^{\prime}\setminus S^{\prime},T^{\prime}_{i})=\cup_{j\geq 0}Z^{\prime}_{j}$, where the sets $Z^{\prime}_{j}$ are defined as in
Definition 3.4 and the sets $Z_{j}$ are the corresponding
sets after reverting the contraction of the MECs of $P$. Since the attractor
computation is done on $P^{\prime}$, each
set $Z_{j}$ either contains all vertices of a MEC of $P$ or none.
Clearly $A\cap T_{i}=\emptyset$ as vertices
in $T_{i}^{\prime}$ are explicitly excluded from $A^{\prime}$.
Player 1 cannot reach $T_{i}$ almost surely from the vertices in
$Z_{0}=V\setminus S$ because these vertices cannot reach any vertex in
$T_{i}$. Assume the claim holds for $Z_{j}$, i.e., for all vertices $z\in Z_{j}$
and any strategy $\sigma$ of player 1 we have $\mathrm{Pr}^{\sigma}_{z}\left(P,\textrm{Reach}\left(T_{i}\right)\right)<1$.
By the definition of $Z^{\prime}_{j+1}$, for every random vertex $v^{\prime}$ in $Z^{\prime}_{j+1}\setminus Z^{\prime}_{j}$
there is a positive probability to reach a vertex in $Z^{\prime}_{j}$; thus,
$\mathrm{Pr}^{\sigma^{\prime}}_{v^{\prime}}\left(P^{\prime},\textrm{Reach}\left(T_{i}^{\prime}\right)\right)<1$ for any strategy $\sigma^{\prime}$
of player 1. Random vertices in $P^{\prime}$ were not contracted, thus the same
argument holds for $Z_{j+1}$ and $P$.
A player 1 vertex ${x}^{\prime}$ in $Z^{\prime}_{j+1}\setminus Z^{\prime}_{j}$ corresponds to either a
player 1 vertex ${x}$ or a MEC $X$ in $Z_{j+1}\setminus Z_{j}$. In both cases
all the edges from ${x}$ resp. $X$ lead to vertices in $Z_{j}$ or to ${x}$
resp. $X$ itself. Hence since ${x}\notin T_{i}$ resp. $X\cap T_{i}=\emptyset$, we also have $\mathrm{Pr}^{\sigma}_{{x}}\left(\textrm{Reach}\left(T_{i}\right)\right)<1$ for any strategy $\sigma$ of player 1 and ${x}$ resp. all ${x}\in X$.
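For concreteness, the attractor fixpoint used in this induction can be sketched as follows. This is our own minimal rendering of the $Z_{j}$ iteration; since Definition 3.4 is not reproduced here, the exact handling of the excluded target set and of self-loops is an assumption based on the induction step above.

```python
def random_attractor(v1, vr, edges, start, excluded):
    """Fixpoint sketch of the attractor iteration Z_0, Z_1, ...

    v1, vr: sets of player-1 and random vertices; edges: dict vertex -> set
    of successors. start: the initial set Z_0; excluded: vertices (here the
    target set T_i) that are never added to the attractor.
    """
    attr = set(start) - set(excluded)
    changed = True
    while changed:
        changed = False
        for v in (v1 | vr) - attr - set(excluded):
            if v in vr and edges[v] & attr:
                # a random vertex with positive probability to enter the attractor
                attr.add(v)
                changed = True
            elif v in v1 and edges[v] and edges[v] <= attr | {v}:
                # a player-1 vertex all of whose edges lead to the attractor
                # (or are self-loops), as in the induction step above
                attr.add(v)
                changed = True
    return attr
```

A worklist implementation brings this to linear time; the plain fixpoint is kept for readability.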
We next show $W_{i}\subseteq\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Reach}\left(T_{i}\right)\right)$.
Let $G[W_{i}]=(W_{i},E\cap(W_{i}\times W_{i}))$ be the subgraph induced by
the vertices in $W_{i}$.
We establish two properties:
(1) all outgoing edges of random vertices $V_{R}\cap W_{i}$ lead to vertices in $W_{i}$, and
(2) all vertices in $W_{i}\setminus T_{i}$ can reach $T_{i}$ in $G[W_{i}]$.
The claim follows from these two properties using the same proof as for the
classical algorithm for almost-sure reachability in MDPs (see below).
(1)
For vertices in $V_{R}$ we distinguish whether they are contained in a MEC of $P$
or not. In the first case property (1) follows from the fact that a MEC has no
outgoing random edges and every MEC is either completely contained in $W_{i}$
or completely contained in $V\setminus W_{i}$.
In the second case property (1) follows from the definition of an extended
random attractor because a vertex in $V_{R}\cap W_{i}$ with an edge to a vertex in
$A$ would have been included in $A$.
(2)
To show property (2) we will use that by Observation 3.3 the
MDP $P^{\prime}$ does not contain any non-trivial MEC.
Assume by contradiction that some vertices in
$W_{i}\setminus T_{i}$ cannot reach $T_{i}$ in $G[W_{i}]$.
Then there exists a bottom SCC $C$ (i.e. an SCC without outgoing edges,
possibly a single vertex)
in $G[W_{i}]$ with $C\cap T_{i}=\emptyset$. Note that
every MEC in $G[W_{i}]$ is completely contained in one of the SCCs of $G[W_{i}]$.
By property (1) $C$ has no outgoing random edges in $P$; by this and the fact
that $C$ is strongly connected,
the corresponding set $C^{\prime}$ of vertices in $P^{\prime}$
would be a non-trivial MEC in $P^{\prime}$ if it contained
more than one vertex. Thus $C^{\prime}$ can contain only one vertex ${c}^{\prime}$ and this
vertex has either no outgoing
edge or only a self-loop in $G^{\prime}_{W^{\prime}_{i}}$. If ${c}^{\prime}$ was
a player 1 vertex, then all its outgoing edges would go to vertices in $A^{\prime}$
or be a self-loop, hence ${c}^{\prime}$ would have been included in the attractor $A^{\prime}$.
If ${c}^{\prime}$ was a random vertex, then by the assumption that in $P$, and thus in
$P^{\prime}$, every random vertex
has an outgoing edge that is not a self-loop we would get a contradiction to
property (1). Thus no such bottom SCC $C$ can exist, that is, every bottom SCC
of $G[W_{i}]$ contains a vertex of $T_{i}$ and thus property (2) holds.
To see that the two established properties imply
$W_{i}\subseteq\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Reach}\left(T_{i}\right)\right)$, let $d(u)$ denote, for a vertex $u\in W_{i}$,
the shortest-path distance to a vertex in $T_{i}$ within $G[W_{i}]$. Consider the
following strategy $\sigma$ of player 1: For a player 1 vertex $u$,
choose an edge to a vertex $v$ such that $d(v)<d(u)$; such an edge exists by property (2). For a random vertex $u$,
there is always an edge to a vertex $v$ such that $d(v)<d(u)$, namely the first edge of a shortest path.
Let $\ell=\lvert W_{i}\rvert$ and let $\alpha$ be the minimum positive
transition probability in the MDP $P$. For all vertices $v\in W_{i}$
the probability that $T_{i}$ is reached within $\ell$ steps
is at least $\alpha^{\ell}$, that is, the probability that $T_{i}$ is not
reached within $b\cdot\ell$ steps is at most $(1-\alpha^{\ell})^{b}$,
which goes to $0$ as $b$ goes to $\infty$. Thus for all $v\in W_{i}$ strategy $\sigma$
ensures that $T_{i}$ is reached with probability 1.
∎
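The loop analyzed above mirrors the classical iterative algorithm for almost-sure reachability in MDPs: alternately restrict to the vertices that can reach the target and remove the random attractor of the rest. A minimal sketch follows (naming is ours; the actual Algorithm 1 additionally works on the MEC-contracted MDP $P^{\prime}$, which we omit here, and uses attractor and reachability subroutines with better asymptotics than these fixpoints).

```python
def almost_sure_reach(v1, vr, edges, target):
    """Sketch of the classical a.s. reachability algorithm.

    Repeat: compute the vertices that can reach `target` inside the current
    vertex set w; remove the random attractor of the remaining vertices;
    stop when nothing is removed.
    """
    w = set(v1) | set(vr)
    while True:
        # backward graph reachability to the target within w
        can_reach = set(target) & w
        changed = True
        while changed:
            changed = False
            for u in w - can_reach:
                if edges[u] & can_reach:
                    can_reach.add(u)
                    changed = True
        bad = w - can_reach
        if not bad:
            return w
        # random attractor of `bad` inside w
        changed = True
        while changed:
            changed = False
            for u in w - bad:
                succ = edges[u] & w
                if u in vr and succ & bad:
                    bad.add(u)
                    changed = True
                elif u in v1 and succ and succ <= bad:
                    bad.add(u)
                    changed = True
        w -= bad
```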
3.2 Conditional Lower Bounds for Disjunctive Reachability in MDPs
Here we complement the above algorithm by conditional lower bounds for disjunctive reachability queries in MDPs.
These lower bounds are based on the conjectures STC, SETH, and OVC
introduced in Section 2.1.
We first present our lower bound for dense MDPs based on STC.
Theorem 3.7.
There is no combinatorial $O(n^{3-\epsilon})$ or $O((k\cdot n^{2})^{1-\epsilon})$ algorithm (for any $\epsilon>0$) for disjunctive reachability queries in MDPs under Conjecture 2.10 (i.e., unless STC and BMM fail).
In particular, there is no such algorithm deciding whether the winning set is non-empty
or deciding whether a specific vertex is in the winning set.
The bounds hold for dense MDPs with $m=\Theta(n^{2})$.
The above theorem follows from the reduction below from the triangle detection problem.
Reduction 3.8.
Given an instance of triangle detection, i.e., a graph $G=(V,E)$,
we build the following MDP $P$.
•
The vertices $V^{\prime}$ of $P$ are given by four copies
$V^{1},V^{2},V^{3},V^{4}$ of $V$, a start vertex $s$, and absorbing vertices
$F=\{g_{v}\mid v\in V\}$. The edges $E^{\prime}$ of $P$ are defined as follows:
There is an edge from $s$ to the first copy $v^{1}\in V^{1}$ of every $v\in V$
and the last copy $v^{4}\in V^{4}$ of every $v\in V$ is connected to its first copy $v^{1}$
and its corresponding absorbing vertex $g_{v}\in F$; further for $1\leq i\leq 3$
there is an edge from $v^{i}$ to $u^{i+1}$ iff $(v,u)\in E$.
•
The set of vertices $V^{\prime}$ is partitioned into player 1 vertices $V^{\prime}_{1}=\{s\}\cup V^{1}\cup V^{2}\cup V^{3}\cup F$
and random vertices $V^{\prime}_{R}=V^{4}$.
Moreover, the probabilistic transition function for each vertex $v\in V^{\prime}_{R}$
chooses among $v$’s successors with equal probability $1/2$ each.
The reduction is illustrated in Figure 1.
As all random choices are uniformly at random we omit the exact probabilities in the figures.
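A direct construction of Reduction 3.8 can be sketched as follows. The data-structure choices and vertex encodings are ours; we treat $E$ as a set of ordered pairs, so for an undirected instance both orientations of each edge should be included.

```python
def build_reduction_mdp(vertices, edge_list):
    """Construct the MDP of Reduction 3.8 from a triangle-detection instance.

    Encodings: 's' is the start vertex, ('v', v, i) is the i-th copy of v,
    and ('g', v) is the absorbing goal vertex g_v.
    """
    edges = {'s': {('v', v, 1) for v in vertices}}
    for v in vertices:
        for i in (1, 2, 3):
            edges[('v', v, i)] = set()
        # the last copy leads back to the first copy and to its goal vertex
        edges[('v', v, 4)] = {('v', v, 1), ('g', v)}
        edges[('g', v)] = {('g', v)}  # absorbing
    for (v, u) in edge_list:
        for i in (1, 2, 3):
            edges[('v', v, i)].add(('v', u, i + 1))
    player1 = ({'s'} | {('v', v, i) for v in vertices for i in (1, 2, 3)}
               | {('g', v) for v in vertices})
    random_vertices = {('v', v, 4) for v in vertices}
    return player1, random_vertices, edges
```

Both the vertex and edge counts are linear in the size of $G$, as used in the running-time argument below.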
Next we prove that Reduction 3.8 is indeed a valid reduction
from triangle detection to disjunctive reachability queries in MDPs.
Lemma 3.9.
A graph $G$ has a triangle iff $s$ is contained in $\bigvee_{v\in V}\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Reach}\left(T_{v}\right)\right)$, where
$P$ is the MDP given by Reduction 3.8 and $T_{v}=\{g_{v}\}$ for $v\in V$.
Proof.
For the only if part assume that $G$ has a triangle with vertices $a,b,c$ and
let $a^{i}$,$b^{i}$,$c^{i}$ be the copies of $a,b,c$ in $V^{i}$.
Now a strategy for player 1 in the MDP $P$ to reach $g_{a}$ with probability 1 is as follows:
When in $s$, go to $a^{1}$; when in $a^{1}$, go to $b^{2}$; when in $b^{2}$, go to $c^{3}$;
when in $c^{3}$, go to $a^{4}$.
As $a,b,c$ form a triangle, all the edges required by the above strategy exist.
When player 1 starts in $s$ and follows the above strategy the only random vertex he
encounters is $a^{4}$.
The random choice sends him to the target vertex $g_{a}$ and to vertex $a^{1}$
with probability $1/2$ each.
In the former case he is done, in the latter case he continues playing his strategy and will reach $a^{4}$ again after three steps.
The probability that player 1 has reached $g_{a}$ after $3q+1$ steps is $1-(1/2)^{q}$
which converges to $1$ with $q$ going to infinity.
Thus we have found a strategy to reach $g_{a}$ with probability $1$.
For the if part assume that $s\in\bigvee_{v\in V}\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Reach}\left(T_{v}\right)\right)$.
That is, there is an $a\in V$ such that $s\in\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Reach}\left(T_{a}\right)\right)$.
Let us consider a corresponding strategy for reaching $T_{a}=\{g_{a}\}$.
First, assume that the strategy would visit a vertex $v^{4}$ for $v\in V\setminus\{a\}$.
Then with probability $1/2$ player 1 would end up in the vertex $g_{v}$ which has no path to $g_{a}$,
a contradiction to $s\in\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Reach}\left(T_{a}\right)\right)$.
Thus the strategy has to avoid visiting vertices $v^{4}$ for $v\in V\setminus\{a\}$.
Second, as the only way to reach $g_{a}$ is $a^{4}$, the strategy has to choose $a^{4}$.
But then with probability $1/2$ it will be sent to $a^{1}$,
so there must be a path from $a^{1}$ to $g_{a}$ that does not cross $V^{4}\setminus\{a^{4}\}$.
By the latter this path must be of the form $a^{1},b^{2},c^{3},a^{4},g_{a}$ for some $b,c\in V$.
Now by the construction of the MDP $P$ from $G$, the vertices $a,b,c$ form a triangle in the original graph $G$.
∎
The size and the construction time of the MDP $P$, constructed by Reduction 3.8, is linear in the size of the
original graph $G$ and we have $k=\Theta(n)$ target sets.
Thus if there were a combinatorial $O(n^{3-\epsilon})$ or
$O((k\cdot n^{2})^{1-\epsilon})$ algorithm for
disjunctive reachability queries in MDPs for any $\epsilon>0$, we
would immediately get a combinatorial $O(n^{3-\epsilon})$ algorithm for
triangle detection, which contradicts STC and BMM.
Next we present a lower bound for sparse MDPs based on OVC and SETH.
Theorem 3.10.
There is no $O(m^{2-\epsilon})$ or $O((k\cdot m)^{1-\epsilon})$ algorithm
(for any $\epsilon>0$) for
disjunctive reachability queries in MDPs under Conjecture 2.12 (i.e., unless OVC and SETH fail).
In particular, there is no such algorithm deciding whether the winning set is non-empty
or deciding whether a specific vertex is in the winning set.
To prove the above we give a reduction from OVC to disjunctive reachability queries
in MDPs.
Reduction 3.11.
Given two sets $S_{1},S_{2}$ of $d$-dimensional vectors, we build the following MDP $P$.
•
The vertices $V$ of the MDP $P$
are given by a start vertex $s$, vertices $S_{1}$ and $S_{2}$ representing the
sets of vectors, vertices $\mathcal{C}=\{c_{i}\mid 1\leq i\leq d\}$ representing the
coordinates, and absorbing vertices $F=\{g_{v}\mid v\in S_{2}\}$.
The edges $E$ of $P$ are defined as follows: the start vertex $s$
has an edge to every vertex of $S_{1}$ and every vertex $v\in S_{2}$ has an edge to $s$
and to its corresponding absorbing vertex $g_{v}\in F$; further for each $x\in S_{1}$
there is an edge from $x$ to $c_{i}\in\mathcal{C}$ iff $x_{i}=1$ and for each $y\in S_{2}$
there is an edge from $c_{i}\in\mathcal{C}$ to $y$ iff $y_{i}=0$.
•
The set of vertices $V$ is partitioned into player 1 vertices $V_{1}=\{s\}\cup\mathcal{C}\cup F$
and random vertices $V_{R}=S_{1}\cup S_{2}$.
The probabilistic transition function for each vertex $v\in V_{R}$ chooses among $v$’s successors
uniformly at random.
The reduction is illustrated on an example in Figure 2.
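Reduction 3.11 can likewise be sketched in code; the vertex encodings `('x', i)`, `('c', j)`, `('y', i)`, `('g', i)` are ours, not from the paper.

```python
def build_ov_mdp(s1, s2, d):
    """Construct the MDP of Reduction 3.11 from an OV instance.

    s1, s2: lists of 0/1 vectors of dimension d; 's' is the start vertex.
    """
    edges = {'s': {('x', i) for i in range(len(s1))}}
    for i, x in enumerate(s1):
        # x has an edge to coordinate vertex c_j iff x_j = 1
        edges[('x', i)] = {('c', j) for j in range(d) if x[j] == 1}
    for j in range(d):
        # c_j has an edge to y iff y_j = 0
        edges[('c', j)] = {('y', i) for i, y in enumerate(s2) if y[j] == 0}
    for i in range(len(s2)):
        edges[('y', i)] = {'s', ('g', i)}
        edges[('g', i)] = {('g', i)}  # absorbing
    player1 = ({'s'} | {('c', j) for j in range(d)}
               | {('g', i) for i in range(len(s2))})
    random_vertices = ({('x', i) for i in range(len(s1))}
                       | {('y', i) for i in range(len(s2))})
    return player1, random_vertices, edges
```

Note that with this wiring a play $s \to x \to c_j \to y$ is possible exactly when $x_j=1$ and $y_j=0$, which is the coordinate condition exploited in Lemma 3.12.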
Lemma 3.12.
There exist orthogonal vectors $x\in S_{1}$, $y\in S_{2}$ iff $s\in\bigvee_{v\in S_{2}}\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Reach}\left(T_{v}\right)\right)$ where
$P$ is the MDP given by Reduction 3.11 and $T_{v}=\{g_{v}\}$ for $v\in S_{2}$.
Proof.
For the only if part assume that there are orthogonal vectors $x\in S_{1}$, $y\in S_{2}$.
Now a strategy for player 1 in the MDP $P$ to reach $g_{y}$ with probability 1 is as follows:
When in $s$, go to $x$; when in some $c\in\mathcal{C}$, go to $y$.
As $x$ and $y$ are orthogonal, each $c_{i}\in\mathcal{C}$ reachable from $x$ has an edge to $y$, i.e.,
for $x_{i}=1$ it must be that $y_{i}=0$.
When player 1 starts in $s$ and follows the above strategy,
he reaches $y$ after three steps. There the random choice sends
him to the target vertex $g_{y}$ and back to vertex $y$ with probability $1/2$ each.
In the former case he is done, in the latter case
he continues playing his strategy and will reach $y$ again after three steps.
The probability that player 1 has reached $g_{y}$ after $3q$ steps is $1-(1/2)^{q}$,
which converges to $1$ with $q$ going to infinity.
Thus we have found a strategy to reach $g_{y}$ with probability $1$.
For the if part assume that $s\in\bigvee_{v\in S_{2}}\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Reach}\left(T_{v}\right)\right)$.
That is, there is a $y\in S_{2}$ such that $s\in\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Reach}\left(T_{y}\right)\right)$.
Let us consider a corresponding strategy for reaching $T_{y}=\{g_{y}\}$.
First, assume that the strategy would visit a vertex $y^{\prime}\in S_{2}$ for $y^{\prime}\not=y$.
Then with probability $1/2$ the player would end up in the vertex $g_{y^{\prime}}$ which has no path to $g_{y}$,
a contradiction to $s\in\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Reach}\left(T_{y}\right)\right)$.
Thus the strategy has to avoid visiting vertices $S_{2}\setminus\{y\}$.
Second, as the only way to reach $g_{y}$ is $y$, the strategy has to choose $y$.
But then with probability $1/2$ it will be sent to $s$
and thus there must be a strategy to reach $g_{y}$ from $s$ with probability $1$ that does not cross $S_{2}\setminus\{y\}$.
As $y$ is the only predecessor of $g_{y}$, there must also be such a strategy to reach $y$.
In other words, there must be an $x\in S_{1}$ such that for each successor $c_{i}\in\mathcal{C}$ there
is an edge to $y$.
By the construction of the MDP $P$ this is equivalent to the existence of
an $x\in S_{1}$ such that whenever $x_{i}=1$ then $y_{i}=0$,
and thus $x$ and $y$ are orthogonal vectors.
∎
The number of vertices in $P$, constructed by Reduction 3.11,
is $O(N)$ and the construction can be performed in
$O(N\log N)$ time (recall that $d\in O(\log N)$).
The number of edges $m$ is $O(N\log N)$ (thus we consider $P$
to be a sparse MDP) and the number of target sets $k\in\Theta(N)=\Theta(m/\log N)$.
Finally, if there were an $O(m^{2-\epsilon})$ or $O((k\cdot m)^{1-\epsilon})$ algorithm for disjunctive reachability queries in MDPs for any $\epsilon>0$, we
would immediately get an $O(N^{2-\epsilon})$ algorithm for OV, which contradicts OVC (and thus SETH).
4 Safety Objectives
It is well-known that computing the a.s. winning set for a single safety objective
in an MDP is equivalent to computing
the winning set of player 1 for safety objectives in the 2-player graph-game where all the random vertices are owned by the opponent, called player 2
(see e.g. [14]).
A 2-player graph-game is defined as a graph with a partition
of the vertices into player 1 vertices $V_{1}$ and player 2 vertices $V_{2}$.
A player 2 strategy is defined analogously to a player 1 strategy (replacing the
vertices $V_{1}$ with the vertices $V_{2}$ in the definition).
The objective of player 2 is the dual of the objective of player 1.
Safety objectives in 2-player graph-games can be computed in $O(m)$ time by computing
a player 2 attractor (the definition of a player 1 or player 2 attractor is
analogous to the definition of a random attractor in Definition 5.12).
Thus in MDPs the a.s. winning set for a single safety objective can be computed in $O(m)$ time by computing a random attractor,
and the a.s. winning set for a disjunctive query can be determined
in $O(k\cdot m)$ time by computing $k$ random attractors and taking the union of the winning sets.
Conjunctive safety can be reduced to a single safety objective in $O(b)$ time
by taking the union of all the sets $T_{i}$.
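The single-objective safety computation described above can be sketched as the complement of a random attractor of the unsafe set. The naming is ours, and a worklist implementation achieves the stated $O(m)$ bound; the plain fixpoint below favors brevity.

```python
def almost_sure_safety(v1, vr, edges, unsafe):
    """A.s. winning set for a single safety objective (sketch):
    complement of the random attractor of the unsafe set.

    v1, vr: player-1 and random vertices; edges: dict vertex -> set of
    successors; unsafe: the set of vertices to be avoided.
    """
    attr = set(unsafe)
    changed = True
    while changed:
        changed = False
        for u in (v1 | vr) - attr:
            if u in vr and edges[u] & attr:
                # some random edge leads into the attractor
                attr.add(u)
                changed = True
            elif u in v1 and edges[u] and edges[u] <= attr:
                # player 1 cannot avoid entering the attractor
                attr.add(u)
                changed = True
    return (v1 | vr) - attr
```

For a disjunctive query, one runs this once per target set $T_{i}$ and takes the union of the $k$ resulting winning sets, matching the $O(k\cdot m)$ bound above.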
Turning to disjunctive safety objectives, we have the same equivalence to 2-player graph-games as for single objectives (Observation 4.1).
In this 2-player game the disjunctive safety objective is the
complementary objective to the
conjunctive reachability objective with the same sets and, as the game is determined [23] (a graph-game is determined if the winning
set of player 1 is the complement of the winning set of player 2),
the PSPACE-hardness shown in [23] also applies to disjunctive safety objectives.
Observation 4.1.
Computing the a.s. winning set for a disjunctive safety objective in an MDP
with player 1 vertices $V_{1}$ and random vertices $V_{R}$ is equivalent to
computing the same disjunctive safety objective in the 2-player graph-game
with the same edges and the same player 1 vertices and
player 2 vertices $V_{2}=V_{R}$.
Proof.
We show that a vertex $s$ is almost sure winning in the MDP if and only if it is winning for player 1 in the game graph.
$\Leftarrow:$ Assume $s$ is not winning for player 1 in the graph-game.
Then $s$ is winning for player 2 and thus player 2 has a strategy to visit all target sets from $s$.
As there are only finitely many target sets, all these target sets are visited after a finite number of steps,
say after $l$ steps.
Now consider the corresponding MDP; with some constant probability the random choices in the MDP will follow exactly the strategy of player 2 in the
graph-game for the first $l$
steps and in that case player 1 cannot win almost surely from $s$.
Hence, $s$ is not in the a.s. winning set.
$\Rightarrow:$ Assume player 1 has a winning strategy for the graph-game
starting in $s$.
By definition this strategy is also winning for the MDP (if it is winning for each possible choice of player 2, then it is also
winning for random choices).
∎
4.1 Conditional Lower Bounds for Safety Objectives
We first present a lower bound for disjunctive safety based on STC that even holds on graphs.
Theorem 4.2.
There is no combinatorial $O(n^{3-\epsilon})$ or $O((k\cdot n^{2})^{1-\epsilon})$ algorithm (for any $\epsilon>0$) for disjunctive safety (objectives or queries) in graphs under Conjecture 2.10 (i.e., unless STC and BMM fail).
In particular, there is no such algorithm deciding whether the winning set is non-empty
or deciding whether a specific vertex is in the winning set.
The above follows from the linear-time reduction from triangle detection to disjunctive safety in graphs
provided below.
Reduction 4.3.
Given a graph $G=(V,E)$ (for triangle detection), we build a graph $G^{\prime}=(V^{\prime},E^{\prime})$ (for disjunctive safety) as follows.
As vertices $V^{\prime}$ we have four copies $V^{1},V^{2},V^{3},V^{4}$ of $V$ and a vertex $s$.
A vertex $v^{i}\in V^{i}$ has an edge to a vertex $u^{i+1}\in V^{i+1}$ iff $(v,u)\in E$. Finally, $s$ has an edge to all vertices in $V^{1}$ and all vertices in $V^{4}$ have an edge to $s$.
Reduction 4.3 is illustrated in Figure 3.
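The graph $G^{\prime}$ of Reduction 4.3 can be sketched directly; the encodings are ours, and we again treat $E$ as ordered pairs (include both orientations for an undirected instance).

```python
def build_safety_graph(vertices, edge_list):
    """Construct the graph G' of Reduction 4.3: four copies of V plus a hub s.

    Encodings: 's' is the hub, (v, i) the i-th copy of v.
    """
    edges = {'s': {(v, 1) for v in vertices}}
    for v in vertices:
        for i in (1, 2, 3):
            edges[(v, i)] = set()
        edges[(v, 4)] = {'s'}  # the last copies lead back to the hub
    for (v, u) in edge_list:
        for i in (1, 2, 3):
            edges[(v, i)].add((u, i + 1))
    return edges
```

The safety set $T_{v}=(V^{1}\setminus\{v^{1}\})\cup(V^{4}\setminus\{v^{4}\})$ from Lemma 4.4 is then `{(u, 1) for u in vertices if u != v} | {(u, 4) for u in vertices if u != v}`.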
Lemma 4.4.
Let $G^{\prime}$ be the graph given by Reduction 4.3 for a graph $G$
and
let $T_{v}=(V^{1}\setminus\{v^{1}\})\cup(V^{4}\setminus\{v^{4}\})$. Then
the following statements are equivalent.
1.
$G$ has a triangle.
2.
$s$ is in the winning set of $(G^{\prime},\bigvee_{v\in V}\textrm{Safety}\left(T_{v}\right))$.
3.
The winning set of $(G^{\prime},\bigvee_{v\in V}\textrm{Safety}\left(T_{v}\right))$ is non-empty.
Proof.
(1)$\Rightarrow$(2): Assume that
$G$ has a triangle with vertices $a,b,c$ and
let $a^{i}$,$b^{i}$,$c^{i}$ be the copies of $a,b,c$ in $V^{i}$.
Now a strategy for player 1 in $G^{\prime}$ to satisfy $\textrm{Safety}\left(T_{a}\right)$ is as follows:
When in $s$, go to $a^{1}$; when in $a^{1}$, go to $b^{2}$; when in $b^{2}$, go to $c^{3}$;
when in $c^{3}$, go to $a^{4}$; and when in $a^{4}$, go to $s$.
As $a,b,c$ form a triangle, all the edges required by the above strategy exist.
When player 1 starts in $s$ and follows the above strategy,
then he plays an infinite path
that only uses vertices $s,a^{1},b^{2},c^{3},a^{4}$ and thus satisfies $\textrm{Safety}\left(T_{a}\right)$.
(2)$\Rightarrow$(1): Assume that there is a
winning play starting in $s$ and satisfying $\textrm{Safety}\left(T_{a}\right)$.
Starting from $s$, this play has to first go to $a^{1}$, as all other successors of $s$ would violate
the safety constraint. Then the play continues on some vertex $b^{2}\in V^{2}$ and $c^{3}\in V^{3}$
and then, again by the safety constraint, has to enter $a^{4}$.
Now by construction of $G^{\prime}$ we know that there must be edges $(a,b),(b,c),(c,a)$ in the original graph $G$,
i.e. there is a triangle in $G$.
(2)$\Leftrightarrow$(3): Notice that when removing $s$ from $G^{\prime}$ we get an acyclic graph and thus each infinite
path has to contain $s$ infinitely often. Thus, if the winning set is non-empty,
there is a cycle winning for some vertex and then
this cycle is also winning for $s$. For the converse direction we have that if $s$ is in the winning set, then the winning set is non-empty.
∎
The size and the construction time of the graph $G^{\prime}$, constructed
by Reduction 4.3, is linear in the size of the
original graph $G$ and we have $k=\Theta(n)$ target sets.
Thus if there were a combinatorial $O(n^{3-\epsilon})$ or $O((k\cdot n^{2})^{1-\epsilon})$ algorithm for disjunctive
safety objectives or queries in graphs, we
would immediately get a combinatorial $O(n^{3-\epsilon})$ algorithm for triangle detection, which contradicts STC (and thus BMM).
The above reduction uses a linear number of safety constraints which are all of linear size.
Thus, a natural question is whether smaller safety sets would make the problem any easier.
Next we argue that our result even holds for safety sets that are of logarithmic size.
To this end we modify Reduction 4.3 as follows. We remove all edges incident to $s$ and
replace them by two complete binary trees. The first tree with $s$ as root and the vertices $V^{1}$ as leaves is directed towards the leaves,
the second tree with root $s$ and leaves $V^{4}$ is directed towards $s$.
Now for each pair $v^{1},v^{4}$ one can select one vertex of each level of the trees (except for the root levels)
for the set $T_{v}$ such that every safe path starting in $s$ has to use
$v^{1}$ and every safe path back to $s$ must pass through $v^{4}$.
As the depth of the trees is logarithmic in the number of leaf vertices, we get
sets of logarithmic size.
The construction with the binary trees
is illustrated in Figure 4.
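One way to make the tree gadget concrete is implicit heap indexing: store each complete binary tree as an array with root $1$ and children $2i$ and $2i+1$. The vertices that $T_{v}$ must contain on a tree are then exactly the siblings of the root-to-leaf path, one per level. A sketch under this (our) indexing assumption:

```python
def off_path_siblings(leaf):
    """For a complete binary tree in heap indexing (root 1, children of i
    are 2i and 2i+1), return the siblings of the vertices on the
    root-to-leaf path. Blocking exactly these vertices forces every safe
    path from the root to end in `leaf`, and there are only
    logarithmically many of them (one per level below the root)."""
    blocked = []
    node = leaf
    while node > 1:
        blocked.append(node ^ 1)  # the sibling flips the last bit
        node //= 2
    return blocked
```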
Next we present an $\Omega(m^{2-o(1)})$ lower bound for disjunctive safety objectives/queries in sparse MDPs.
Theorem 4.5.
There is no $O(m^{2-\epsilon})$ or $O((k\cdot m)^{1-\epsilon})$ algorithm (for any $\epsilon>0$) for disjunctive safety objectives/queries in MDPs under Conjecture 2.12 (i.e., unless
OVC and SETH fail).
In particular, there is no such algorithm for deciding whether the winning set is non-empty
or deciding whether a specific vertex is in the winning set.
To prove the above, we give a linear time reduction from OV to disjunctive safety objectives/queries.
Reduction 4.6.
Given two sets $S_{1},S_{2}$ of $d$-dimensional vectors, we build the following MDP $P$.
•
The vertices $V$ of the MDP $P$
are given by a start vertex $s$, vertices $S_{1}$ and $S_{2}$ representing the
sets of vectors, and
vertices $\mathcal{C}=\{c_{i}\mid 1\leq i\leq d\}$ representing the
coordinates. The edges $E$ of $P$ are defined as follows: the start vertex $s$
has an edge to every vertex of $S_{1}$ and every vertex $v\in S_{2}$ has an edge to $s$;
further for each $x\in S_{1}$
there is an edge from $x$ to $c_{i}\in\mathcal{C}$ iff $x_{i}=1$ and for each $y\in S_{2}$
there is an edge from $c_{i}\in\mathcal{C}$ to $y$ iff $y_{i}=1$.
•
The set of vertices $V$ is partitioned into player 1 vertices $V_{1}=\{s\}\cup S_{2}$
and random vertices $V_{R}=S_{1}\cup\mathcal{C}$.
Moreover, the probabilistic transition function for each vertex $v\in V_{R}$ chooses among $v$’s successors
uniformly at random.
The reduction is illustrated on an example in Figure 5.
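Reduction 4.6 differs from Reduction 3.11 in the edge condition at the coordinate vertices ($y_{i}=1$ instead of $y_{i}=0$) and in the ownership of the vertices. A sketch with our encodings:

```python
def build_ov_safety_mdp(s1, s2, d):
    """Construct the MDP of Reduction 4.6: player 1 owns s and S2;
    S1 and the coordinate vertices are random.

    s1, s2: lists of 0/1 vectors of dimension d.
    """
    edges = {'s': {('x', i) for i in range(len(s1))}}
    for i, x in enumerate(s1):
        # x has an edge to coordinate vertex c_j iff x_j = 1
        edges[('x', i)] = {('c', j) for j in range(d) if x[j] == 1}
    for j in range(d):
        # c_j has an edge to y iff y_j = 1 (note: 1, unlike Reduction 3.11)
        edges[('c', j)] = {('y', i) for i, y in enumerate(s2) if y[j] == 1}
    for i in range(len(s2)):
        edges[('y', i)] = {'s'}  # S2 vertices lead back to the start
    player1 = {'s'} | {('y', i) for i in range(len(s2))}
    random_vertices = ({('x', i) for i in range(len(s1))}
                       | {('c', j) for j in range(d)})
    return player1, random_vertices, edges
```

With this wiring a vertex $y$ is reachable from $x$ in one cycle exactly when $x$ and $y$ share a $1$-coordinate, i.e., when they are not orthogonal, which is the property used in Lemma 4.7.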
Lemma 4.7.
Let $S_{1},S_{2}$ be two sets of $d$-dimensional vectors,
let $P$ be the MDP given by Reduction 4.6, and let
$T_{v}=\{v\}$ for $v\in S_{2}$. Then
the following statements are equivalent.
1.
There exist orthogonal vectors $x\in S_{1}$, $y\in S_{2}$.
2.
$s\in\bigvee_{v\in S_{2}}\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Safety}\left(T_{v}\right)\right)$.
3.
$s\in\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\bigvee_{v\in S_{2}}\textrm{Safety}\left(T_{v}\right)\right)$.
4.
The winning set $\bigvee_{v\in S_{2}}\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Safety}\left(T_{v}\right)\right)$ is non-empty.
5.
The winning set $\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\bigvee_{v\in S_{2}}\textrm{Safety}\left(T_{v}\right)\right)$ is non-empty.
Proof.
W.l.o.g. we assume that the $1$-vector, i.e., the vector with all coordinates being $1$, is contained in $S_{2}$
(adding the $1$-vector does not change the result of the OV instance).
Then a play in the MDP $P$ proceeds as follows.
Starting from $s$, player 1 chooses a vertex $x\in S_{1}$; then a vertex
$c\in\mathcal{C}$
and then a vertex $y\in S_{2}$ are picked randomly; then the play goes
back to $s$, starting another cycle of the play.
(1)$\Rightarrow$(2): Assume there are orthogonal vectors $x\in S_{1}$, $y\in S_{2}$.
Now player 1 can satisfy $\textrm{Safety}\left(T_{y}\right)$ in the MDP $P$
by simply going to $x$ whenever the play is in $s$.
The random player will then send it to some adjacent $c\in\mathcal{C}$
and then to some adjacent vertex in $S_{2}$,
but as $x$ and $y$ are orthogonal, this $c$ is not connected to $y$.
Thus the play will never visit $y$.
(2)$\Rightarrow$(3): Assume $s\in\bigvee_{v\in S_{2}}\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Safety}\left(T_{v}\right)\right)$. Then
there is a vertex $y\in S_{2}$ such that $s\in\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Safety}\left(T_{y}\right)\right)$.
Now we can enlarge the objective
to $\bigvee_{v\in S_{2}}\textrm{Safety}\left(T_{v}\right)$ and obtain
$s\in\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\bigvee_{v\in S_{2}}\textrm{Safety}\left(T_{v}\right)\right)$.
(3)$\Rightarrow$(1): Assume $s\in\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\bigvee_{v\in S_{2}}\textrm{Safety}\left(T_{v}\right)\right)$ and consider
a corresponding strategy $\sigma$.
W.l.o.g. we can assume that this strategy is memoryless [38].
Thus whenever the play is in $s$, it picks a fixed $x\in S_{1}$ as the next vertex.
Assume towards contradiction that there is no orthogonal vector $y\in S_{2}$ for $x$.
Then for each $y\in S_{2}$ we have that there is a $c\in\mathcal{C}$ connecting $x$ to $y$.
In each cycle of the play one goes from $s$ to $x$ and then by random choice to some vertex in $S_{2}$.
By the above, each vertex in $S_{2}$ is reached in a given cycle with probability
bounded below by some constant $p>0$.
Hence within any $|S_{2}|$ consecutive cycles all vertices in $S_{2}$ are visited
with probability at least $p^{|S_{2}|}$, and since this bound holds independently
for each such block of cycles, almost surely all vertices in $S_{2}$ are
eventually visited. Then none of the safety objectives is
satisfied, a contradiction to the assumption that with probability $1$ at least one
safety objective is satisfied.
Thus there must exist a vector $y\in S_{2}$ orthogonal to $x$.
(2)$\Leftrightarrow$(4) & (3)$\Leftrightarrow$(5):
Notice that when removing $s$ from $P$ we get an acyclic MDP and
thus each infinite
path has to contain $s$ infinitely often.
Certainly if $s$ is in the a.s. winning set, this set is non-empty.
Thus let us assume there is a vertex $v$ different from $s$ with a winning strategy $\sigma$.
All (winning) paths starting in $v$ cross $s$ after at most $3$ steps and thus
$\sigma$ must be also winning when starting in $s$.
∎
The number of vertices in the MDP $P$, constructed by Reduction 4.6, is $O(N)$, the number of edges $m$ is $O(N\log N)$
(recall that $d\in O(\log N)$), we have $k\in\Theta(N)$ target sets, and the construction can be performed in
$O(N\log N)$ time.
Thus, if there were an $O(m^{2-\epsilon})$ or $O((k\cdot m)^{1-\epsilon})$
algorithm for disjunctive
safety objectives or queries for any $\epsilon>0$, we
would immediately get an $O(N^{2-\epsilon})$ algorithm for OV, which contradicts OVC (and thus SETH).
5 Algorithms for MDPs with Streett Objectives
In this section we extend algorithms for graphs with Streett objectives to MDPs.
In particular we prove the following theorem.
Theorem 5.1.
For an MDP $P$ with Streett objectives defined by Streett pairs
$\mathrm{SP}=\{(L_{i},U_{i})\mid 1\leq i\leq k\}$ with
$b=\sum_{i=1}^{k}(\lvert L_{i}\rvert+\lvert U_{i}\rvert)$
the almost-sure winning set can be computed in $O(\min(n^{2},m\sqrt{m\log n})+b\log n)$ time. (It can also be computed in
$O((\textsc{MEC}+b)\cdot k)$ time, which is faster for some combinations
of parameters with $k=O(\log n)$.)
We first describe the basic algorithm for MDPs with Streett objectives, which
uses an algorithm for MEC-decomposition as a black box. We then develop a new
algorithm that opens up this black box and after an initial computation of the
MEC-decomposition only uses strongly connected components
and random attractor computations (Section 5.3).
This algorithm reveals strong similarities to the known algorithms for graphs
with Streett objectives. We then extend the two approaches that lead to the best
asymptotic running times on graphs, one for dense graphs (Section 5.4)
and one for sparse graphs (Section 5.4), to MDPs. The algorithms for graphs are based on finding
“good” strongly connected subgraphs and then determining which vertices can
reach these “good components”. For MDPs we find good end-components
and then compute almost-sure reachability with the union of all good
end-components as target set
to determine the almost-sure winning set.
We first show that this approach is correct
(Section 5.1, see also [8, Chap. 10.6.3])
and then provide algorithms that identify all good end-components.
5.1 Good End-Components
Good end-components are also useful for other objectives such as Rabin objectives.
The results of this subsection are valid for all objectives for which whether
an infinite path $\omega$ belongs to the objective depends only on the vertices
$\mathrm{Inf}(\omega)$ that occur infinitely often in $\omega$.
For such objectives we show that determining the winning set is equivalent to
computing almost-sure reachability of the union of all good end-components.
We define a good end-component as an end-component for which the objective
is satisfied if exactly the vertices of the end-component are visited infinitely
often.
Definition 5.2 (Good End-Component).
Given an MDP $P$ and an objective $\psi$,
an end-component $X$ of $P$ such that
each path $\omega\in\Omega$ with $\mathrm{Inf}(\omega)=X$ is in $\psi$
is called a good $\psi$ end-component.
For a Streett objective the following is an equivalent definition.
Definition 5.3 (Good Streett End-Component).
Given an MDP $P$ and a set $\mathrm{SP}=\{(L_{i},U_{i})\mid 1\leq i\leq k\}$ of
Streett pairs,
a good Streett end-component is an end-component $X$ of $P$ such that
for each $1\leq i\leq k$ either $L_{i}\cap X=\emptyset$ or
$U_{i}\cap X\neq\emptyset$.
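For illustration, Definition 5.3 translates directly into a membership test. The sketch below is a hypothetical helper (the representation of Streett pairs as a list of Python set pairs is our assumption, not from the paper):

```python
def is_good_streett_ec(X, streett_pairs):
    """Definition 5.3: given that X is an end-component, X is a good
    Streett end-component iff for every pair (L_i, U_i) either
    L_i ∩ X is empty or U_i ∩ X is non-empty."""
    return all(not (L & X) or bool(U & X) for (L, U) in streett_pairs)
```

Note that this only checks the Streett condition; that $X$ is indeed an end-component must be ensured separately.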
The importance of end-components lies in the fact that player 1 can keep the play
in an end-component forever and can visit each vertex in the end-component
almost surely and also almost surely infinitely often
(Lemma 5.4). This implies that in a good end-component
player 1 has an almost-sure winning strategy (Lemma 5.5) and thus
player 1 has an almost-sure winning strategy from every vertex that can
almost-surely reach a good end-component (Lemma 5.6 and
Corollary 5.7). This shows the soundness of the approach
of determining the almost-sure winning set for an objective determined by
$\mathrm{Inf}(\omega)$ by computing almost-sure reachability of the union of all good
end-components.
Lemma 5.4.
Given an MDP $P$ and an end-component $X$, player 1 has a strategy
from each vertex of $X$ such that all vertices of $X$ are
almost-surely reached infinitely often
and only vertices of $X$ are visited.
Proof.
We define a strategy $\sigma$ as follows:
Choose some arbitrary numbering of the vertices in $X$. The (not memoryless)
strategy of player 1 is to first follow a shortest path within the end-component
(with, say, lexicographic tie breaking) to the first vertex from the current
position of the play until this vertex is reached, then a shortest path
within the end-component to the
second vertex and so on, until he starts with the first vertex again. This is
possible because an end-component is a strongly connected subgraph.
Since an end-component has no outgoing random edges, the
play does not leave the end-component when player 1 plays this strategy.
Let $\ell=\lvert X\rvert$ and let $\alpha$ be the smallest
positive transition probability in the MDP. Then the probability that the first
chosen shortest path is followed with the above strategy
is at least $\alpha^{\ell}$ and the
probability that a sequence of $\ell$ shortest paths within $X$ are followed
and thus all vertices of $X$ are visited is at least $\alpha^{\ell^{2}}$. Thus
the probability that not all
vertices in $X$ were visited after $q\cdot\ell^{2}$ steps is at most $(1-\alpha^{\ell^{2}})^{q}$, which goes to 0 when $q$ goes to infinity. Hence player 1
has a strategy such that all vertices in $X$ are visited with probability 1.
By the same argument all vertices in $X$ are visited infinitely often with
probability 1 because the probability that some vertex is not visited after
some finite prefix of length $t\cdot\ell^{2}$ can be bounded by $(1-\alpha^{\ell^{2}})^{(q-t)}$.
∎
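The round-robin strategy from the proof can be sketched as follows; the adjacency-map representation `E` and the restriction to the end-component `X` are our assumptions for illustration:

```python
from collections import deque

def shortest_path(E, X, src, dst):
    """BFS shortest path from src to dst using only vertices of X,
    with lexicographic tie breaking as in the proof of Lemma 5.4."""
    parent = {src: None}
    q = deque([src])
    while q:
        v = q.popleft()
        if v == dst:
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        for w in sorted(E[v]):          # lexicographic tie breaking
            if w in X and w not in parent:
                parent[w] = v
                q.append(w)
    return None  # unreachable within X (cannot happen in an end-component)

def round_robin_targets(X):
    """Arbitrary fixed numbering of X; the strategy cycles through it,
    following a shortest path to the current target vertex."""
    order = sorted(X)
    i = 0
    while True:
        yield order[i]
        i = (i + 1) % len(order)
```

The strategy is not memoryless: it remembers the current target vertex and the position on the current shortest path.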
Lemma 5.5.
Player 1 has a strategy $\sigma$ from each vertex in a good $\psi$ end-component $X$
to satisfy $\psi$ almost-surely.
Proof.
By Lemma 5.4 player 1 has a strategy that almost-surely visits
all nodes in $X$ infinitely often. By the definition of good $\psi$ end-component,
all paths visiting all nodes in $X$ infinitely often are in $\psi$.
Hence, the strategy given by Lemma 5.4 is also almost-sure winning for $\psi$.
∎
Lemma 5.6.
Given an MDP $P$, an objective $\psi$ that is determined by $\mathrm{Inf}(\omega)$,
and a set $S$ of almost-sure winning nodes we have that
if $v\in\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Reach}\left(S\right)\right)$, then also $v\in\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\psi\right)$.
Proof.
Assume $v\in\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Reach}\left(S\right)\right)$ and consider the following strategy.
Start with the strategy for reaching $S$ and as soon as one vertex $s$ of $S$ is reached
switch to the almost-sure winning strategy of $s$.
As $S$ is (almost-surely) reached within a finite number of steps, the vertices
visited by the strategy for reaching $S$ do not affect the objective $\psi$.
∎
Corollary 5.7 (Soundness of Good End-Components).
For a set of good end-components $\mathcal{X}$ and an objective $\psi$ that is determined by $\mathrm{Inf}(\omega)$
we have that $\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Reach}\left(\bigcup_{X\in\mathcal{X}}X\right)\right)$ is contained in $\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\psi\right)$.
Another conclusion we can draw from the above lemmata is that if a MEC contains
a good end-component, then player 1 has an almost-sure winning strategy
for the whole MEC because he can reach the good end-component almost-surely from
every vertex of the MEC. We exploit this observation in the improved algorithm
for coBüchi objectives in Section 6.4.
Corollary 5.8 (of Lemmata 5.4
and 5.6).
Given an MDP $P$ and an objective $\psi$ that is determined by $\mathrm{Inf}(\omega)$,
if a MEC $X$ contains an almost-sure
winning vertex (e.g., if $X$ contains a good end-component),
then all vertices in $X$ are almost-sure winning for player 1.
To show the completeness of the approach of computing good end-components,
we have to argue that every vertex from which player 1 can satisfy the objective
almost-surely has also a strategy to reach a good end-component almost-surely.
For this we need two rather technical lemmata.
The intuition behind Lemma 5.9 is that if a random vertex
occurs infinitely often on a path,
then almost-surely also each of its successors appears infinitely often on that
path. Thus we can argue that vertex sets that are reached infinitely often
with positive probability are closed under random edges and hence SCCs within such
sets of vertices are end-components (Lemma 5.10).
To show completeness (Proposition 5.11)
we then use a set of paths in the objective that are reached with positive
probability to show that the vertices that these paths use infinitely often
form good end-components. A similar proof is given for Büchi objectives in
[21].
Lemma 5.9.
Given an MDP $P$, a strategy $\sigma$ of player 1,
the set $\Omega_{\sigma}$ of infinite paths starting at a vertex $v$ that are compatible with the strategy $\sigma$, and
a vertex $a\in V_{R}$ with $\mathrm{Pr}_{\sigma}\left(\{\omega\in\Omega_{\sigma}\mid a\in\mathrm{Inf}(\omega)\}\right)=p$,
for each successor $b$ of $a$ we have
$\mathrm{Pr}_{\sigma}(\{\omega\in\Omega_{\sigma}\mid a\in\mathrm{Inf}(\omega),$ $b\in\mathrm{Inf}(\omega)\})=p$ and
$\mathrm{Pr}_{\sigma}\left(\{\omega\in\Omega_{\sigma}\mid a\in\mathrm{Inf}(\omega),b\notin\mathrm{Inf}(\omega)\}\right)=0$.
Proof.
Whenever the play visits node $a$, with some constant probability $q>0$ it continues in $b$.
Thus the probability that $b$ was visited less than $\ell$ times after $a$ was visited $n$ times
is upper bounded by $(1-q^{\ell})^{n/\ell}$ which goes to $0$ with increasing $n$.
Thus, we have $\mathrm{Pr}_{\sigma}\left(\{\omega\in\Omega_{\sigma}\mid a\in\mathrm{Inf}(\omega),b\notin\mathrm{Inf}(\omega)\}\right)=0$
and hence for the complement set $\mathrm{Pr}_{\sigma}(\{\omega\in\Omega_{\sigma}\mid a\in\mathrm{Inf}(\omega),b\in\mathrm{Inf}(\omega)\})=p$.
∎
Lemma 5.10.
Given an MDP $P$, a strategy $\sigma$ of player 1,
the set $\Omega_{\sigma}$ of infinite paths starting at a vertex $v$ that are compatible with the strategy $\sigma$,
a set $\Omega^{\prime}\subseteq\Omega_{\sigma}$, and the set of vertices
$S=\{a\mid\mathrm{Pr}_{\sigma}\left(\{\omega\mid a\in\mathrm{Inf}(\omega),\omega\in\Omega^{\prime}\}\right)>0\}$, then
for each SCC $C$ of $S$ and each vertex $a\in C\cap V_{R}$, all successors of $a$ are contained in $C$, i.e., $C$ is an end-component of $P$.
Proof.
Consider an SCC $C$, a vertex $a\in C\cap V_{R}$, and a successor $b$.
Then by definition $\mathrm{Pr}_{\sigma}\left(\{\omega\mid a\in\mathrm{Inf}(\omega),\omega\in\Omega^{\prime}\}\right)=p$
for a $p>0$ and by Lemma 5.9
we get $\mathrm{Pr}_{\sigma}(\{\omega\mid a\in\mathrm{Inf}(\omega),$ $b\notin\mathrm{Inf}(\omega),\omega\in\Omega_{\sigma}\})=0$
and thus $\mathrm{Pr}_{\sigma}(\{\omega\mid a\in\mathrm{Inf}(\omega),b\in\mathrm{Inf}(\omega),$ $\omega\in\Omega^{\prime}\})=p$,
i.e., $b\in S$.
For each of the paths $\omega$ in the latter set there is a path from $b$ to $a$ consisting solely of nodes in $\mathrm{Inf}(\omega)$.
As in $P$ there are only finitely many simple paths from $b$ to $a$, at least one of them must occur with non-zero probability,
and thus all its vertices are contained in $S$. Hence, $b$ belongs to the SCC $C$.
∎
Proposition 5.11 (Completeness of Good End-Components).
Given an MDP $P$ with an objective $\psi$ determined by $\mathrm{Inf}(\omega)$
and let $\mathcal{X}$ be the set of all good $\psi$ end-components,
then $\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\psi\right)$ is contained in $\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Reach}\left(\cup_{X\in\mathcal{X}}X\right)\right)$.
Proof.
For a vertex $v\in\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\psi\right)$,
fix a strategy $\sigma$ of player 1 such that the objective is satisfied almost-surely.
Let $P_{\sigma}$ be the sub-MDP of $P$ that consists of the vertices that
are visited infinitely often with non-zero probability when player 1 follows strategy $\sigma$.
Note that by Lemma 5.10 each SCC of $P_{\sigma}$ is an
end-component of $P$.
Moreover, $\sigma$ is a strategy for almost-surely reaching $P_{\sigma}$
(each infinite path has to visit at least one vertex infinitely often).
It remains to show that each vertex of $P_{\sigma}$ can almost-surely reach
a good end component.
We will actually show that each vertex of $P_{\sigma}$ is already contained in a good end component.
To this end let $\Omega_{\sigma}$ be the set of infinite paths starting at $v$
that are compatible with the strategy $\sigma$ and satisfy the objective.
For an arbitrary node $u$ of $P_{\sigma}$ we consider all paths
$\omega\in\Omega_{\sigma}$ with $u\in\mathrm{Inf}(\omega)$ and group them by $\mathrm{Inf}(\omega)$.
At least one of these groups has non-zero probability,
as there are only finitely many possible sets $\mathrm{Inf}(\omega)$ and $u\in\mathrm{Inf}(\omega)$ has non-zero probability.
Let us consider one of the groups of paths $\Omega^{S}_{\sigma}$ with non-zero probability
and the corresponding set $S=\mathrm{Inf}(\omega)$ for $\omega\in\Omega^{S}_{\sigma}$.
By Lemma 5.10 the set $S$ is closed under random edges.
Moreover, as in each path $\omega\in\Omega_{\sigma}$ the vertices $\mathrm{Inf}(\omega)$ are strongly connected, the
set $S$ is also strongly connected and thus an end-component.
Finally, as the paths $\omega\in\Omega^{S}_{\sigma}$ satisfy the objective
and the objective $\psi$ is determined by $\mathrm{Inf}(\omega)=S$, the set $S$
forms a good end component.
Hence, we have shown that each vertex of $P_{\sigma}$ is contained in a good $\psi$
end-component, which completes the proof.
∎
5.2 Algorithm Preliminaries
We introduce some additional notation for the algorithms for MDPs with Streett
and Rabin objectives.
For a set $\mathrm{RP}=\{(L_{i},U_{i})\mid 1\leq i\leq k\}$
of Rabin pairs or a set $\mathrm{SP}=\{(L_{i},U_{i})\mid 1\leq i\leq k\}$,
let $b=\sum_{i=1}^{k}(\lvert L_{i}\rvert+\lvert U_{i}\rvert)$.
A strongly connected component (SCC) is a maximal strongly
connected subgraph. A single vertex is considered strongly connected. An SCC without
outgoing edges is a bottom SCC, one without incoming edges a top SCC.
The reverse graph $\mathit{RevG}$ is constructed by reversing the direction of
all edges of the graph $G$. In a graph $G=(V,E)$ the set of vertices $E(v)$
for some vertex $v$ denotes the set of vertices $w\in V$ for which $(v,w)\in E$.
The out-degree of $v\in V$ in $G$ is denoted by $\mathit{Outdeg}_{G}(v)$, its in-degree
by $\mathit{Indeg}_{G}(v)$. Let MEC denote the runtime to compute the maximal
end-component decomposition of an MDP; we assume $\textsc{MEC}=\Omega(m)$.
Further we assume that each vertex in the input MDP
has at least one outgoing edge, and thus we have $m=\Omega(n)$.
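For instance, the reverse graph can be built in time linear in the number of edges; the adjacency-set representation is our choice for illustration:

```python
def reverse_graph(E):
    """RevG: same vertex set, every edge (v, w) of G becomes (w, v)."""
    rev = {v: set() for v in E}
    for v, ws in E.items():
        for w in ws:
            rev.setdefault(w, set()).add(v)
    return rev
```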
Definition 5.12 (Random Attractor).
In an MDP $P=((V,E),(V_{1},V_{R}),\delta)$
the random attractor $\mathit{Attr}(P,W)$ of a set of vertices
$W\subseteq V$ is defined as $\mathit{Attr}(P,W)=\bigcup_{j\geq 0}Z_{j}$ where $Z_{0}=W$ and
$Z_{j+1}$ for $j\geq 0$ is defined recursively as $Z_{j+1}=Z_{j}\cup\{v\in V_{R}\mid E(v)\cap Z_{j}\neq\emptyset\}\cup\{v\in V_{1}\mid E(v)\subseteq Z_{j}\}$.
The random attractor $\mathit{Attr}(P,W)$ can be computed in $O(\sum_{v\in\mathit{Attr}(P,W)}\mathit{Indeg}(v))$ time [9, 30].
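The stated running time is achieved by the standard counter technique. A minimal sketch, assuming an MDP given by vertex sets `V1`, `VR` and an adjacency map `E` (our representation, not the paper's):

```python
def random_attractor(V1, VR, E, W):
    """Attr(P, W) per Definition 5.12: a random vertex is attracted once
    one of its edges leads into the attractor; a player-1 vertex once all
    of its edges do. Each edge into an attracted vertex is inspected O(1)
    times, matching the stated running time."""
    verts = set(E) | {w for ws in E.values() for w in ws}
    pred = {v: set() for v in verts}
    for v, ws in E.items():
        for w in ws:
            pred[w].add(v)
    escape = {v: len(E.get(v, ())) for v in V1}  # player-1 edges not yet inside
    attr = set(W)
    stack = list(W)
    while stack:
        z = stack.pop()
        for v in pred[z]:
            if v in attr:
                continue
            if v in VR:                 # one random edge into attr suffices
                attr.add(v)
                stack.append(v)
            else:                       # player 1: all edges must lead in
                escape[v] -= 1
                if escape[v] == 0:
                    attr.add(v)
                    stack.append(v)
    return attr
```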
All the algorithms for Streett objectives maintain vertex sets that are
candidates for good end-components. For such a vertex set $S$ we (a)
refine the maintained sets according to the SCC decomposition of $P[S]$
and (b) for a set of vertices $W$ for which we know that it cannot be contained
in a good end-component, we remove its random attractor from $S$. The following lemma
shows the correctness of these operations.
Lemma 5.13.
Given an MDP $P=((V,E),(V_{1},V_{R}),\delta)$, let $X$ be an end-component with $X\subseteq S$ for
some $S\subseteq V$.
We have
(a)
$X\subseteq C$ for one SCC $C$ of $P[S]$ and
(b)
$X\cap\mathit{Attr}(P^{\prime},W)=\emptyset$ for each $W\subseteq V\setminus X$
and each sub-MDP $P^{\prime}$ containing $X$.
Proof.
Property (a) holds since every end-component induces a strongly connected
sub-MDP. We prove Property (b) by showing that $\mathit{Attr}(P^{\prime},W)$
does not contain a vertex of $X$ by induction over the recursive
definition of a random attractor. Let the sets $Z_{j}$ be as in
Definition 5.12 and let $E^{\prime}(v)$ be the vertices to which $v$ has an
edge in $P^{\prime}$.
We have $Z_{0}=W$ and thus $Z_{0}\cap X=\emptyset$.
Assume we have $Z_{j}\cap X=\emptyset$ for some $j\geq 0$. No vertex of
$V_{R}\cap X$ has an outgoing edge to $V\setminus X$ and thus the set
$X\cap\{v\in V_{R}\mid E^{\prime}(v)\cap Z_{j}\neq\emptyset\}$ is empty.
Further every vertex in $V_{1}\cap X$ has an outgoing edge to a vertex in $X$.
Hence also $X\cap\{v\in V_{1}\mid E^{\prime}(v)\subseteq Z_{j}\}$ is empty
and we have that $Z_{j+1}\cap X=\emptyset$.
∎
Let $X$ be a good Streett end-component. Then $X\cap U_{i}=\emptyset$ implies
$X\cap L_{i}=\emptyset$. Thus if $S\cap U_{i}=\emptyset$ for some vertex
set $S$ and some index $i$, then we have $U_{i}\subseteq V\setminus X$
for each end-component $X\subseteq S$. Hence we obtain the
following corollary.
Corollary 5.14.
Given an MDP $P$, let $X$ be a good Streett end-component with
$X\subseteq S$ for some $S\subseteq V$.
For each $i$ with $S\cap U_{i}=\emptyset$ it holds that
$X\subseteq S\setminus\mathit{Attr}(P[S],L_{i}\cap S)$.
5.3 Improving Upon the Basic Algorithm
In Algorithm 2, the basic algorithm for MDPs with Streett
objectives, we maintain a set of already identified
(maximal) good end-components goodEC, which is initially empty, and a set of
candidate end-components $\mathcal{X}$, which is initialized with the
MECs of the input MDP $P$. In each iteration of the while-loop we remove
an end-component $X$ from $\mathcal{X}$ and check whether it is a
good end-component. For this check we find sets $U_{i}$ for $1\leq i\leq k$
that do not intersect with $X$ and identify vertices in $X\cap L_{i}$ for
such an $i$ as “bad vertices” $B$. If there are no bad vertices, then
$X$ is a good end-component and added to goodEC. Otherwise the bad
vertices and their random attractor within $X$ are removed from $X$.
On the sub-MDP induced by the remaining vertices of $X$ we compute the
MEC-decomposition, which identifies all remaining candidate end-components among
the vertices of $X$. The new candidates are then added to $\mathcal{X}$.
If the algorithm finds good end-components, it returns the almost-sure winning set
for the reachability of the union of them.
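The loop structure just described can be sketched as follows. All helpers (`mec_decomposition`, `random_attractor`, `almost_sure_reach`) are passed in as black boxes with assumed interfaces; this is a sketch of the control flow, not the paper's exact pseudocode:

```python
def basic_streett(vertices, streett_pairs,
                  mec_decomposition, random_attractor, almost_sure_reach):
    """Sketch of the basic algorithm (Algorithm 2)."""
    good_ec = []                                    # identified good end-components
    candidates = list(mec_decomposition(vertices))  # initialize with the MECs
    while candidates:
        X = candidates.pop()
        # bad vertices: members of some L_i whose U_i misses X entirely
        B = {v for (L, U) in streett_pairs if not (U & X) for v in L & X}
        if not B:
            good_ec.append(X)                       # X is a good end-component
        else:
            rest = X - random_attractor(X, B)       # remove B and its attractor
            candidates.extend(mec_decomposition(rest))
    target = set().union(*good_ec) if good_ec else set()
    return almost_sure_reach(target)
```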
Proposition 5.15 (Runtime of
Algorithm 2).
Algorithm 2 can be implemented
to run in $O((\textsc{MEC}+b)\min(n,k))$ time.
Proof.
The initialization of $\mathcal{X}$ with all MECs of the input
MDP $P$ can clearly be done in $O(\textsc{MEC})$ time. Further by
Theorem 3.1 the almost-sure reachability computation
after the while-loop can be done in $O(\textsc{MEC})$ time.
Let $X_{v}$ denote the end-component of $\mathcal{X}$ currently containing
an arbitrary, fixed vertex $v\in V$ during Algorithm 2.
In each iteration
of the while-loop in which $X_{v}$ is considered either (a) $B=\emptyset$
and $X_{v}$ will not be considered further or (b) the number of vertices
in $X_{v}$ is reduced by at least one and we have for some $1\leq i\leq k$ that
$X_{v}\cap L_{i}\neq\emptyset$ before the iteration of the while-loop and
$X_{v}\cap L_{i}=\emptyset$ after the while-loop. Thus each vertex and
each edge of the MDP $P$ is considered in at most $O(\min(n,k))$ iterations
of the while-loop.
Consider the $j$th iteration of the while-loop; let $X_{j}$ denote the set
removed from $\mathcal{X}$ in this iteration and let $\mathit{bits}(X_{j})=\sum_{i=1}^{k}(\lvert L_{i}\cap X_{j}\rvert+\lvert U_{i}\cap X_{j}\rvert)$.
Assume that each vertex has a list of the sets $L_{i}$ and $U_{i}$ for
$1\leq i\leq k$ it belongs to.
(We can generate these lists from the lists of the Streett pairs in $O(b)$
time at the beginning of the algorithm.)
Then we can determine $B$ by going through all lists of the vertices
in $X_{j}$ in $O(\lvert X_{j}\rvert+\mathit{bits}(X_{j}))$ time, which amounts to
$O((n+b)\min(n,k))$ total time over all iterations of the while-loop.
The random attractor computed in Line 2 is removed and
not considered further, thus its computation takes $O(m)$ time over the whole
algorithm (see Definition 5.12). The computation
of all MECs in $P[X_{j}]$ takes total time $O(\textsc{MEC}\cdot\min(n,k))$
over all iterations of the while loop. Thus the whole algorithm can be
implemented in $O((\textsc{MEC}+b)\min(n,k))$ total time.
∎
Proposition 5.16 (Soundness of Algorithm 2).
Let $W$ be the set returned by Algorithm 2.
We have $W\subseteq\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Streett}\left(\mathrm{SP}\right)\right)$.
Proof.
By Corollary 5.7 it is sufficient to show that every set
$X\in\textnormal{{goodEC}}$ is a good end-component. The algorithm explicitly
checks immediately before $X$ is added to goodEC that we have for each
$1\leq i\leq k$ either $L_{i}\cap X=\emptyset$ or $U_{i}\cap X\neq\emptyset$. Thus it only remains
to show that $X$ is an end-component when it is added to goodEC. Before
a set is added to goodEC, the same set is contained in the set $\mathcal{X}$.
We show that all sets in $\mathcal{X}$ are end-components at any point in
the algorithm by induction over the iterations of the while-loop in the algorithm.
Before the first iteration of the while-loop the sets $X\in\mathcal{X}$ are
the maximal end-components of $P$. Now consider an iteration in which
a set $X$ is removed from $\mathcal{X}$ and new sets are added to
$\mathcal{X}$. First, some vertices and their random attractor in the
sub-MDP $P[X]$ induced by $X$ are removed from $X$. Let $X^{\prime}$ be the
remaining set of vertices. By the definition of a random attractor there are no
random edges from $X^{\prime}$ to the removed random attractor.
Further, by the induction hypothesis there are no random edges from $X$ to $V\setminus X$. Thus there are no random edges from $X^{\prime}$ to $V\setminus X^{\prime}$.
Then the algorithm adds the MECs of the sub-MDP $P[X^{\prime}]$ to $\mathcal{X}$.
Let $\hat{X}$ be one such MEC. Since $\hat{X}$ is a MEC
in $P[X^{\prime}]$, it is a MEC in $P$ if and only if it has no random edges
from $\hat{X}$ to $V\setminus X^{\prime}$. This holds by $\hat{X}\subseteq X^{\prime}$
and the properties of $X^{\prime}$ established above.
∎
Proposition 5.17 (Completeness of Algorithm 2).
Let $W$ be the set returned by Algorithm 2.
We have $\mathopen{\hbox{\set@color${\langle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\langle}$}}1\mathclose{\hbox{\set@color${\rangle}$}\mkern 2.0mu\kern-3.49998pt\leavevmode\hbox{\set@color${\rangle}$}}_{\textit{as}}\left(P,\textrm{Streett}\left(\mathrm{SP}\right)\right)\subseteq W$.
Proof.
By Proposition 5.11 it is sufficient to show that at the end of
Algorithm 2 the union of the sets in goodEC contains
all good end-components of the MDP $P$. We show by induction that
every good end-component is a subset of either
goodEC or $\mathcal{X}$ before and after each iteration of the while-loop
in Algorithm 2; as $\mathcal{X}$ is empty at the
end of the algorithm, this implies the claim.
Before the first iteration of the while-loop, the
set $\mathcal{X}$ is initialized with the MECs of $P$, thus the induction
base holds. Let $X$ be the set of vertices removed from $\mathcal{X}$ in
an iteration of the while-loop and let ${X}^{*}$ be the union of the
good end-components contained in $X$. Either $X$ is added to goodEC or
we have that for some indices $i$ the set $X$ contains vertices of
$L_{i}$ but not of $U_{i}$; then for these indices the sets $L_{i}$ and their
random attractor are removed from $X$.
Let $\hat{X}$ be the updated set, i.e., $\hat{X}=X\setminus\mathit{Attr}(P[X],B)$.
By Corollary 5.14
we still have ${X}^{*}\subseteq\hat{X}$ after this step. Then all
MECs of $P[\hat{X}]$
are added to $\mathcal{X}$.
Every good end-component contained in $\hat{X}$ is completely contained
in one MEC of $P[\hat{X}]$, thus the claim continues to hold after the iteration
of the while-loop.
∎
The essential observation towards faster algorithms for MDPs with Streett objectives
is the following.
Consider a set $X$ in an iteration of the basic algorithm after
some vertices in $\mathit{Attr}(P[X],B)$ were removed.
We have that there are no random edges from $X$ to the remaining vertices
in the graph and further we have for each $1\leq i\leq k$ either $L_{i}\cap X=\emptyset$ or $U_{i}\cap X\neq\emptyset$. Thus if $P[X]$ is still
strongly connected, then $X$ is a good
end-component and is added to goodEC in one of the subsequent iterations
of the algorithm. If, however, the sub-MDP $P[X]$ consists of multiple SCCs,
then we have that the bottom SCCs of $P[X]$ are end-components in $P$
but the remaining SCCs of $P[X]$ might have outgoing random edges within
$P[X]$. Note, however, that we have for any good end-component $\hat{X}$
in $P[X]$ and any SCC $C$ of $P[X]$ that either $\hat{X}\subseteq C$
or $\hat{X}\cap C=\emptyset$, simply by the fact that every good end-component
is strongly connected (Lemma 5.13 (a)). Let $\hat{X}\subseteq C$ and let
$R$ be the random vertices of $C$ with edges to
vertices not in $C$. Then the vertices in $R$ cannot intersect with $\hat{X}$
because an end-component has no outgoing random edges. Further, also the
random attractor of $R$ cannot intersect with $\hat{X}$ (Lemma 5.13 (b)). Thus we can
remove $\mathit{Attr}(P[X],R)$ from $P[X]$ and all good end-components that were contained
in $P[X]$ are still contained in the remaining sub-MDP. However, now the
set of vertices in $C\setminus\mathit{Attr}(P[X],R)$ has no outgoing random edges.
Thus if it is still strongly connected, then it is an end-component.
With this observation we can avoid computing a MEC decomposition in the
while-loop of the basic algorithm and instead only compute strongly connected
components and random attractors, which both can be done in linear time.
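The refinement step replacing the MEC recomputation can be sketched as follows (`sccs` and `random_attractor` are assumed black boxes; for simplicity the attractor is taken inside each SCC rather than in all of $P[X]$, a slight deviation from the text):

```python
def refine(S, VR, E, sccs, random_attractor):
    """Split S into SCCs and peel off, per SCC C, the random attractor of
    the random vertices of C that have edges leaving C; by Lemma 5.13 (b)
    no good end-component inside C is lost."""
    refined = []
    for C in sccs(S):
        R = {v for v in C & VR if E[v] - C}   # random edges leaving the SCC
        refined.append(C - random_attractor(C, R))
    return [C for C in refined if C]          # drop emptied components
```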
Note that in the improved algorithm we do not have the property that every
maintained set of vertices is an end-component (as in the basic algorithm) but
still none of the maintained sets has outgoing random edges.
In this formulation the algorithm for MDPs with Streett objectives has a very
similar structure to the algorithm for graphs with Streett objectives:
We repeatedly remove “bad vertices” and recompute strongly connected components.
The main difference is that we additionally compute random attractors.
Based on this, we can indeed show that for Streett objectives
the same techniques as for graphs also apply to MDPs and by this improve the
runtime to the runtime for graphs plus the time to compute one MEC decomposition.
This can be seen as opening up the “black-box”
use of a MEC-decomposition algorithm and combining the fastest algorithms for
MEC-decomposition [15, 16] and graphs with Streett
objectives [28, 17].
In contrast to graphs with Streett objectives, no $O((m+b)k)$
algorithm can be achieved for small values of $k$. Intuitively, this is because
it could be that bad vertices are removed in only a few iterations, while
the majority of the iterations are actually used to recompute MECs.
We present the new algorithmic ideas for MDPs with Streett objectives in
Algorithm 3 (which is only faster for large enough $k$)
and then apply the known techniques for sparse and dense graphs in
Algorithms 5 and 4,
respectively, to beat the basic algorithm for all parameters except very small
values of $k$; the basic algorithm is faster for e.g. $k=O(1)$ or $k=O(\sqrt{\log n})$ and $m=O(n^{4/3})$.
In our improved algorithms we use the data structure $\mathit{D}(X)$
from [28] to quickly identify and remove vertices in
$X\cap L_{i}$ for which $X\cap U_{i}=\emptyset$ from a set of vertices $X$.
Lemma 5.18 ([28]).
After a one-time preprocessing time of $O(k)$, there is a data structure
$\mathit{D}(X)$ for a given set $X$ that can be initialized with the operation
$\textnormal{{Construct}}(X)$ in time $O(\mathit{bits}(X)+\lvert X\rvert)$, where
$\mathit{bits}(X)=\sum_{i=1}^{k}\left(\lvert X\cap L_{i}\rvert+\lvert X\cap U_{i}\rvert\right)$.
Further it supports the operation $\textnormal{{Remove}}(X,\mathit{D}(X),B)$ that removes
a set $B\subseteq V$ from $X$ and updates $\mathit{D}(X)$ accordingly in
time $O(\mathit{bits}(B)+\lvert B\rvert)$ and the operation $\textnormal{{Bad}}(\mathit{D}(X))$
that returns a pointer to
the set $\{{x}\in X\mid\exists i\text{ s.t.\ }{x}\in L_{i}\text{ and }X\cap U_{i}=\emptyset\}$ in constant time.
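Our reading of the data structure of [28] can be sketched as follows; the class layout and method names are our assumptions, and the sketch aims at the stated interface rather than matching the exact constant factors:

```python
class StreettData:
    """Sketch of D(X) from Lemma 5.18: per-pair counters |X ∩ U_i|,
    per-pair sets X ∩ L_i, and an incrementally maintained bad set."""
    def __init__(self, X, streett_pairs):
        self.X = set(X)
        self.pairs_L = {}                      # v -> indices i with v in L_i
        self.pairs_U = {}                      # v -> indices i with v in U_i
        self.L_in_X = [set() for _ in streett_pairs]   # X ∩ L_i
        self.cnt_U = [0] * len(streett_pairs)          # |X ∩ U_i|
        for i, (L, U) in enumerate(streett_pairs):
            for v in L & self.X:
                self.pairs_L.setdefault(v, []).append(i)
                self.L_in_X[i].add(v)
            for v in U & self.X:
                self.pairs_U.setdefault(v, []).append(i)
                self.cnt_U[i] += 1
        self._bad = set()
        for i, s in enumerate(self.L_in_X):
            if self.cnt_U[i] == 0:
                self._bad |= s

    def bad(self):
        return self._bad                       # pointer returned in O(1)

    def remove(self, B):
        """Remove B from X and update the bad set."""
        for v in set(B) & self.X:
            self.X.remove(v)
            self._bad.discard(v)
            for i in self.pairs_L.pop(v, []):
                self.L_in_X[i].discard(v)
            for i in self.pairs_U.pop(v, []):
                self.cnt_U[i] -= 1
                if self.cnt_U[i] == 0:         # pair i lost its last U-vertex
                    self._bad |= self.L_in_X[i]
```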
In Algorithm 3 we maintain a list $Q$ of data structures
of disjoint vertex sets that are candidates for good end-components.
For every set $S$ with $\mathit{D}(S)$ in $Q$ we maintain that there are no random edges
from $S$ to $V\setminus S$. The list $Q$
is initialized with the data structures of all MECs of the input MDP $P$.
In each iteration
of the outer while-loop the data structure of one vertex set $S$ is pulled
from $Q$. In the inner while-loop the set of “bad vertices”
$\{{x}\in S\mid\exists i\text{ s.t.\ }{x}\in L_{i}\text{ and }S\cap U_{i}=\emptyset\}$ is identified and its random attractor
is removed from $S$ and $\mathit{D}(S)$. Through removing the random attractor we
maintain the property that there are no random edges from $S$ to $V\setminus S$
at this step. Thus we have that if $P[S]$ is (still) strongly connected, then $P[S]$ is a good end-component, which
we identify in Line 3.
If $P[S]$ does not contain an edge, we do not have to consider it further.
If it contains an edge but is not strongly connected, the SCCs of $P[S]$
are identified. For each SCC $C$ we identify its random vertices that have edges
to vertices of $S\setminus C$ and remove their random attractor from $C$.
After this step the data structure of the remaining vertices of $C$ is added to $Q$.
At this point we distinguish between the largest SCC and the other SCCs of $P[S]$.
We construct a new data structure for all but the largest SCC and reuse the
data structure of $S$ for the largest SCC. This improves the runtime because
we only spend time proportional to the smaller SCCs and a vertex can be in a
smaller SCC at most $O(\log n)$ times. Note that at this point of the algorithm
the sub-MDP $P[C]$ is not necessarily strongly connected since vertices were
removed after the SCC computation
but we maintain the property that there are no random edges from a vertex
set for which the data structure is in $Q$ to other vertices.
When the list $Q$ becomes empty, the algorithm terminates. If good end-components
were identified, the almost-sure winning set for the reachability objective
of the union of the good end-components is output.
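The loop structure described above can be sketched in Python. This is a hypothetical, simplified rendering, not the paper's implementation: the constant-time set data structure of Lemma 5.18 is replaced by plain Python sets, the MEC decomposition `mecs` is assumed to be given, Streett pairs are passed as a list of `(L, U)` sets, and the SCC and attractor helpers are naive.

```python
def sccs(nodes, adj):
    # Tarjan's algorithm restricted to `nodes` (recursive; fine for small examples).
    index, low, onstk, stack, out, cnt = {}, {}, set(), [], [], [0]
    def dfs(v):
        index[v] = low[v] = cnt[0]; cnt[0] += 1
        stack.append(v); onstk.add(v)
        for w in adj[v]:
            if w not in nodes:
                continue
            if w not in index:
                dfs(w); low[v] = min(low[v], low[w])
            elif w in onstk:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            comp = set()
            while True:
                w = stack.pop(); onstk.discard(w); comp.add(w)
                if w == v:
                    break
            out.append(comp)
    for v in nodes:
        if v not in index:
            dfs(v)
    return out

def random_attractor(S, A, adj, rand):
    # Random attractor of A inside P[S]: repeatedly add random vertices with an
    # edge into the attractor and player-1 vertices whose every edge (within S)
    # leads into it.
    attr, changed = set(A), True
    while changed:
        changed = False
        for v in S - attr:
            succ = [w for w in adj[v] if w in S]
            if (v in rand and any(w in attr for w in succ)) or \
               (v not in rand and succ and all(w in attr for w in succ)):
                attr.add(v); changed = True
    return attr

def good_end_components(adj, rand, pairs, mecs):
    # Outer loop over candidate vertex sets; inner loop peels off "bad" vertices
    # (in some L_i while S misses U_i) together with their random attractor.
    Q, good = [set(m) for m in mecs], []
    while Q:
        S = Q.pop()
        while True:
            bad = {x for x in S for (L, U) in pairs if x in L and not (S & U)}
            if not bad:
                break
            S -= random_attractor(S, bad, adj, rand)
        comps = sccs(S, adj)
        has_edge = any(w in S for v in S for w in adj[v])
        if len(comps) == 1 and has_edge:
            good.append(S)          # strongly connected, no bad vertices left
        elif len(comps) > 1:
            for C in comps:         # remove random boundary attractor, re-queue
                R = {v for v in C if v in rand and any(u in S - C for u in adj[v])}
                Q.append(C - random_attractor(C, R, adj, rand))
    return good
```

For instance, with `adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}`, no random vertices, and the single Streett pair `({'c'}, set())`, vertex `c` is bad and is peeled off, leaving the good end-component `{'a', 'b'}`.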
Proposition 5.19 (Runtime Algorithm 3).
Algorithm 3 terminates in $O(mn+b\log n)$ time.
Proof.
Using the data structure of Lemma 5.18 ([28]),
the initialization phase of Algorithm 3 takes
$O(k+\textsc{MEC}+b+n)$ time, which is in $O(mn+b)$. Further by
Theorem 3.1 the almost-sure reachability computation
after the outer while-loop can be done in $O(\textsc{MEC})$ time.
Whenever bad vertices and their random attractor are identified in
lines 3–3, they are removed in
Line 3 and not considered further. Thus finding bad
vertices takes total time $O(n)$,
identifying the random attractor of bad vertices takes total time $O(m)$ (see
Definition 5.12), and removing the bad vertices and their attractor
takes total time $O(m+b)$ by Lemma 5.18.
After the initialization of $Q$ with the MECs of $P$, all vertex sets
for which a data structure is stored in $Q$ induce a strongly connected sub-MDP.
Consider the set $S$ when Line 3 is reached and its smallest superset
$S^{\prime}\supseteq S$ that was identified as strongly connected in the algorithm
(i.e. $S^{\prime}$ is either a MEC of $P$ or an SCC computed in Line 3
in a previous iteration of the algorithm). We have that $S$ is a proper
subset of $S^{\prime}$, since either bad vertices were
removed from $S^{\prime}$ in Line 3 or a non-empty set of random
vertices was identified in Line 3. Hence any part of $P$
is considered in at most $n$ iterations of the outer while-loop. This implies
that we can bound the total time spent in
lines 3–3 with $O(mn)$.
By the same argument as for the removal of bad vertices and their attractor,
the calls to Remove in Line 3 take total time $O(n+b)$.
It remains to bound the time for the calls to Remove and Construct
in lines 3–3. Note that we avoid making
these calls for the largest of the SCCs of the sub-MDP induced by $S$,
which are computed in Line 3.
Thus whenever we call Remove and Construct for an SCC $C$, we have
$\lvert C\rvert\leq\lvert S\rvert/2$. Hence
we can charge the time for Remove and Construct to the vertices of $C$
and to $\mathit{bits}(C)$ such that every vertex $v$ and every $\mathit{bits}(\{v\})$ is
charged $O(\log n)$ times. Thus we can bound the time for
lines 3–3 with $O((n+b)\log n)$.
This proves the claimed runtime.
∎
Proposition 5.20 (Soundness of Algorithm 3).
Let $W$ be the set returned by Algorithm 3.
We have $W\subseteq\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Streett}\left(\mathrm{SP}\right)\right)$.
Proof.
By Corollary 5.7 it is sufficient to show that every set
$X\in\textnormal{{goodEC}}$ is a good end-component. The algorithm explicitly
checks immediately before $X$ is added to goodEC in Line 3
that $X$ contains at least one edge and is strongly connected. Further
we have by the termination condition of the inner while-loop that for each
$1\leq i\leq k$ either $L_{i}\cap X=\emptyset$ or $U_{i}\cap X\neq\emptyset$. Thus it remains to show that there are no random edges from $X$
to $V\setminus X$.
Let $X^{\prime}$ be the set of vertices for which the data
structure $\mathit{D}(X^{\prime})$ was removed from $Q$ in the iteration of the outer while-loop
in which $X$ was added to goodEC. By the following invariant there are no random
edges from $X^{\prime}$ to $V\setminus X^{\prime}$.
Invariant 5.21.
For every set $S$ for which the data structure $\mathit{D}(S)$ is in $Q$ there
are no random edges from $S$ to $V\setminus S$.
Assume the invariant holds. If $X^{\prime}$ is not equal to $X$, then
some vertices and their random attractor within $P[X^{\prime}]$ were removed in the
inner while-loop. By the definition of a random attractor there are no random
edges from $X$ to $X^{\prime}\setminus X$ and thus to $V\setminus X$.
It
remains to prove the invariant by induction over the iterations of the outer
while-loop.
Before the first iteration of the while-loop $Q$ is initialized with
the maximal end-components of $P$ and thus the invariant holds.
Assume the invariant holds before the beginning of an iteration of the outer while-loop
and let $S$ be the set of vertices for which the data structure
is removed from $Q$ in this iteration. In the inner while-loop some vertices
and their random attractor in $P[S]$ might be removed from $S$. Let $S^{\prime}$ be
the remaining vertices. By the definition of a random attractor there are no
random edges from $S^{\prime}$ to $S\setminus S^{\prime}$ and thus by the induction hypothesis
there are no random edges from $S^{\prime}$ to $V\setminus S^{\prime}$.
If $P[S^{\prime}]$ is strongly connected, then no set is added to $Q$ in this iteration
of the while-loop.
Otherwise the SCCs $\mathcal{C}$ of $P[S^{\prime}]$ are considered as candidates to be
added to $Q$. For each set $C\in\mathcal{C}$ the random vertices $R$ in $C$
with edges to vertices in $S^{\prime}\setminus C$ and their random attractor $A$
in $P[C]$ are removed from $C$. Let $C^{\prime}$ be the remaining vertices.
We have that there are no random edges from $C^{\prime}$ to $S^{\prime}\setminus C$ by the
definition of $R$ and that there are no random edges from $C^{\prime}$ to $C\setminus C^{\prime}$ by the definition of $A$. Thus there are no random
edges from $C^{\prime}$ to $V\setminus C^{\prime}$ for any set $C^{\prime}$ for which the
data structure is added to $Q$, which shows the invariant.
∎
Proposition 5.22 (Completeness of Algorithm 3).
Let $W$ be the set returned by Algorithm 3.
We have $\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Streett}\left(\mathrm{SP}\right)\right)\subseteq W$.
Proof.
By Proposition 5.11 it is sufficient to show that at the end of
the algorithm the union of the sets in goodEC contains
all good end-components of the MDP $P$. We show the following invariant
by induction over the iterations of the outer while-loop;
as $Q$ is empty at the end of the algorithm, this implies the claim.
Invariant 5.23.
For each good end-component $X$ of $P$ there exists a set $Y\supseteq X$ such that
either $Y\in\textnormal{{goodEC}}$ or $\mathit{D}(Y)\in Q$ holds before and after each iteration
of the outer while-loop.
Before the first iteration of the outer while-loop, the
set $Q$ is initialized with the MECs of $P$, thus the induction base holds.
Let $S$ be the set of vertices for which the data structure is
removed from $Q$ in an iteration of the outer while-loop and
let $\mathcal{X}_{S}$ be the set of good end-components
contained in $S$. Let $S^{\prime}$ be the subset of $S$ that remains after the
inner while-loop. We have $X\subseteq S^{\prime}$ for every $X\in\mathcal{X}_{S}$ by Corollary 5.14.
Since every end-component contains an edge, $P[S^{\prime}]$ contains at least one
edge if $\mathcal{X}_{S}$ is not empty. Then either $S^{\prime}$ and thus all $X\in\mathcal{X}_{S}$ are added to goodEC or the SCCs $\mathcal{C}$ of $P[S^{\prime}]$
are computed. For each $X\in\mathcal{X}_{S}$ there exists $C\in\mathcal{C}$ such that $X\subseteq C$ by Lemma 5.13 (a);
let $X$ and $C$ be such that $X\subseteq C$. Since $X$ has no
outgoing random edges, we have $R\cap X=\emptyset$ (Line 3)
and thus also $X\subseteq C\setminus\mathit{Attr}(P[C],R)$ by Lemma 5.13 (b). The data structure of $C\setminus\mathit{Attr}(P[C],R)$
is added to $Q$ in lines 3 or 3,
hence the claim holds after the outer while-loop.
∎
5.4 Algorithm for Dense MDPs with Streett Objectives
Algorithm 4 combines Algorithm 3
with the ideas of the MEC-algorithm for dense MDPs of [16] and
the algorithm for graphs with Streett objectives of [17].
The difference to Algorithm 3 lies in the search for
strongly connected components. To detect a good end-component, it is essential
to detect when a sub-MDP $P[S]$ remains strongly connected after some
vertices and their random attractor were removed from the vertex
set $S$ for which the data structure $\mathit{D}(S)$ is
maintained in $Q$. For this it is sufficient to identify one
strongly connected component $C$ of the sub-MDP $P[S]$:
The sub-MDP is strongly connected if and only if the SCC spans the whole
sub-MDP, i.e., $C=S$.
As for Algorithm 3, the correctness of the algorithm
is based on maintaining the Invariants 5.21
and 5.23. For maintaining these invariants it makes no difference
whether we compute all SCCs of $P[S]$ or just one. Whenever $P[S]$ is not
strongly connected, there exists a top or bottom SCC that contains at most
half of the vertices of $S$. In
Algorithm 4 we search for such a “small” top or bottom
SCC of $P[S]$.
The search for a top SCC is done by searching for a bottom SCC in the reverse
graph. To search for a bottom SCC,
a sparsification technique called Hierarchical Graph Decomposition
is used. This technique was introduced by [25] for undirected
graphs and extended to directed graphs and game graphs by [16].
In the level-$j$ graph $H_{j}$ of a graph $H$ only the first $2^{j}$ outgoing edges
of each vertex are considered, thus $H_{j}$ has $O(n\cdot 2^{j})$ edges. The main
observation (Lemma 5.25) is that we can identify each bottom SCC
with at most $2^{j}$ vertices by searching for bottom SCCs of $H_{j}$ that
only contain vertices for which all their outgoing edges in $H$ are also in $H_{j}$.
The search is started at level $j=1$ and then $j$ is increased by one (doubling the number of edges per vertex) until such a bottom
SCC is found in $H_{j}$. Note that $H_{j}=H$ for $j\geq\log n$. When a bottom
SCC is identified at level $j^{*}$ but not at $j^{*}-1$, then this bottom SCC has
$\Omega(2^{j^{*}})$ vertices by the above observation. Further, the number of
edges in the graphs from level $1$ to $j^{*}$ form a geometric series. Thus
the work spent in all the levels up to $j^{*}$ can be bounded in terms of the number of
edges in $H_{j^{*}}$, that is, the bottom SCC of size $\Omega(2^{j^{*}})$ is
identified in $O(n\cdot 2^{j^{*}})$ time. By searching “in parallel” for
top and bottom SCCs and charging the needed time to the identified SCC,
the total runtime can be bounded by $O(n^{2})$. To identify only bottom SCCs of $H_{j}$
for which all the outgoing edges are present in $H_{j}$ we determine the
set of “blue” vertices $\mathit{Bl}_{j}$ that have an out-degree higher than $2^{j}$
and remove vertices that can reach blue vertices before computing SCCs.
In the following we provide formal definitions and proofs for Algorithm 4.
Definition 5.24 (Hierarchical Graph Decomposition).
Let $H=(V,E)$ be a simple directed graph. We consider for $j\in\mathbb{N}$
the subgraphs $H_{j}=(V,E_{j})$ of $H$ where $E_{j}$ contains for
each vertex of $V$ its first $2^{j}$ outgoing edges in $E$ (for some arbitrary but
fixed ordering of the outgoing edges of each vertex). Note that when
$j\geq\log(\max_{v\in V}{\mathit{Outdeg}_{H}(v)})$, then $H_{j}=H$.
Let the set $\mathit{Bl}_{j}$ denote all vertices with out-degree more than $2^{j}$ in $H$.
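Definition 5.24 can be sketched with a small helper (an illustrative rendering; adjacency lists stand in for the fixed ordering of outgoing edges):

```python
def level_graph(adj, j):
    # Level-j graph H_j: keep only the first 2**j outgoing edges of every
    # vertex (in the fixed stored order), and return the set Bl_j of "blue"
    # vertices whose out-degree in H exceeds 2**j.
    Ej = {v: out[:2 ** j] for v, out in adj.items()}
    blue = {v for v, out in adj.items() if len(out) > 2 ** j}
    return Ej, blue
```

For `adj = {'a': ['b', 'c', 'd'], 'b': ['a'], 'c': [], 'd': []}`, level 1 keeps only `['b', 'c']` for `a` and marks `a` blue; at level 2 we already have `H_2 = H` and no blue vertices.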
Lemma 5.25 (See e.g. [26]).
We use Definition 5.24.
1.
A set $C\subseteq V\setminus\mathit{Bl}_{j}$ is a bottom SCC in $H_{j}$
if and only if it is a bottom SCC in $H$.
2.
If a set $C\subseteq V$ with $\lvert C\rvert\leq 2^{j}$
is a bottom SCC in $H$, then $C\subseteq V\setminus\mathit{Bl}_{j}$.
Proof.
1.
By $C\subseteq V\setminus\mathit{Bl}_{j}$ the outgoing edges of the vertices in $C$
are the same in $H_{j}$ and in $H$. Thus we have $H_{j}[C]=H[C]$
and $C$ has no outgoing edges in $H_{j}$ if and only if it has no outgoing
edges in $H$.
2.
In $H$ all outgoing edges of each vertex of $C$ have to go to other vertices
of $C$. Thus each vertex of $C$ has an
out-degree of at most $\lvert C\rvert\leq 2^{j}$ in $H$.∎
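The doubling search built on this lemma can be sketched as follows. This is an illustrative rendering, not the paper's exact procedure; the SCC helper is a plain recursive Tarjan, and reachability to blue vertices is computed by naive fixpoint iteration.

```python
def sccs(nodes, adj):
    # Tarjan's algorithm restricted to `nodes` and to edges inside `nodes`.
    index, low, onstk, stack, out, cnt = {}, {}, set(), [], [], [0]
    def dfs(v):
        index[v] = low[v] = cnt[0]; cnt[0] += 1
        stack.append(v); onstk.add(v)
        for w in adj[v]:
            if w not in nodes:
                continue
            if w not in index:
                dfs(w); low[v] = min(low[v], low[w])
            elif w in onstk:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            comp = set()
            while True:
                w = stack.pop(); onstk.discard(w); comp.add(w)
                if w == v:
                    break
            out.append(comp)
    for v in nodes:
        if v not in index:
            dfs(v)
    return out

def small_bottom_scc(adj):
    # Doubling search for a bottom SCC of H: at level j, build H_j, discard
    # every vertex that can reach a blue vertex in H_j, and look for a bottom
    # SCC among the remaining vertices (by Lemma 5.25 it is a bottom SCC of H).
    # Since H_j = H once 2**j bounds the maximum out-degree, this terminates.
    V, j = set(adj), 1
    while True:
        Ej = {v: adj[v][:2 ** j] for v in V}
        blue = {v for v in V if len(adj[v]) > 2 ** j}
        reach_blue, changed = set(blue), True
        while changed:
            changed = False
            for v in V - reach_blue:
                if any(w in reach_blue for w in Ej[v]):
                    reach_blue.add(v); changed = True
        cand = V - reach_blue            # closed under H_j-successors
        for C in sccs(cand, Ej):
            if all(w in C for v in C for w in Ej[v]):   # no outgoing edge
                return C
        j += 1                           # next level: twice as many edges
```

On `adj = {'a': ['b'], 'b': ['a', 'c', 'd'], 'c': ['d'], 'd': ['c']}` the level-1 graph already exposes the bottom SCC `{'c', 'd'}`: vertex `b` is blue, `a` and `b` are discarded, and the remaining two vertices form a bottom SCC.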
Proposition 5.26 (Runtime of Algorithm 4).
Algorithm 4 terminates in $O(n^{2}+b\log n)$ time.
Proof.
Using the data structure of Lemma 5.18 ([28]),
the initialization phase of Algorithm 4 takes
$O(\textsc{MEC}+b+n)$ time, which is in $O(n^{2}+b)$ [16].
Further by Theorem 3.1 the almost-sure reachability computation
after the outer while-loop can be done in $O(\textsc{MEC})$ time.
Removing bad vertices takes total time $O(n+b)$ by Lemma 5.18.
Whenever a random attractor is computed, its edges are not considered further;
thus all attractor computations take $O(m)$ total time by Definition 5.12.
Whenever Remove or Construct are called (after the
initialization of $Q$), the vertices that are removed or added, respectively, are either
(1) vertices for which the size of the SCC containing them was at least halved
or (2) vertices that are not considered further. For each vertex
case (1) can happen at most $O(\log n)$ times and case (2) at
most once, thus all calls to Remove or Construct take
total time $O((n+b)\log n)$ by Lemma 5.18.
To efficiently construct the graphs $H_{j}$ and compute $\mathit{Bl}_{j}$
for $1\leq j<\lceil\log(n)\rceil$ and $H\in\{G,\mathit{RevG}\}$, we maintain
for all vertices a list of their incoming and outgoing edges, which we
update whenever we encounter obsolete entries while constructing $H_{j}$.
Each entry can be removed at most once, thus this can be done in $O(m)$ total time.
Let $S$ be the set of vertices considered in an iteration of the outer while-loop
and let $\lvert S\rvert=n^{\prime}$.
The $j$th iteration of the for-loop takes $O(n^{\prime}\cdot 2^{j})$ time
because $H_{j}$ contains $O(n^{\prime}\cdot 2^{j})$ edges and constructing $H_{j}$ and $\mathit{Bl}_{j}$
and computing reachability, SCCs, and $R$ can all be done in time linear in the
number of edges. The search in $G$ and $\mathit{RevG}$ only increases the runtime
by a factor of two. Further all iterations up to the $j$th iteration
can be executed in time $O(n^{\prime}\cdot 2^{j})$ as their runtimes
form a geometric series.
Note that whenever a graph is not strongly connected, it contains a top SCC and
a bottom SCC and one of them has at most half of the vertices. Thus in some
iteration $j^{*}$ a top or bottom SCC with either $C=S$ or
$\lvert C\rvert\leq n^{\prime}/2$ is found by Lemma 5.25.
Since $C$ was not found
in iteration $j^{*}-1$, we have $\lvert C\rvert=\Omega(2^{j^{*}})$ by
Lemma 5.25.
In the case $C=S$ the vertices in $S$ are not considered further by
the algorithm. Thus we can bound the time for this iteration with
$O(n^{\prime}\cdot 2^{\log(n^{\prime})})=O(n^{\prime 2})$
and hence the total time for this case with $O(n^{2})$.
It remains to bound the time for the case $\lvert C\rvert\leq n^{\prime}/2$.
Let $\lvert C\rvert=n_{1}$ and let $c$ be some constant such that
the time spent for the search of $C$ is bounded by $c\cdot n_{1}\cdot n^{\prime}$.
We denote this time for the set $S$ over the whole algorithm with $f(n^{\prime})$ and
show $f(n^{\prime})\leq 2cn^{\prime 2}$ by induction as follows:
$$\displaystyle f(n^{\prime})$$
$$\displaystyle\leq f(n_{1})+f(n^{\prime}-n_{1})+cn^{\prime}n_{1}\,,$$
$$\displaystyle\leq 2cn_{1}^{2}+2c(n^{\prime}-n_{1})^{2}+cn^{\prime}n_{1}\,,$$
$$\displaystyle=2cn_{1}^{2}+2cn^{\prime 2}-4cn^{\prime}n_{1}+2cn_{1}^{2}+cn^{\prime}n_{1}\,,$$
$$\displaystyle=2cn^{\prime 2}+4cn_{1}^{2}-3cn^{\prime}n_{1}\,,$$
$$\displaystyle\leq 2cn^{\prime 2}\,,$$
where the last inequality follows from $n_{1}\leq n^{\prime}/2$.
∎
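As a numeric sanity check of this induction (an illustration only, taking $c=1$ and the base case $f(1)=0$), one can compute the worst case of the recurrence by brute force and compare it against the claimed bound $2cn^{\prime 2}$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def worst(n):
    # Worst case of the recurrence f(n) <= f(n1) + f(n - n1) + c*n*n1
    # over all splits with n1 <= n/2, taking c = 1 and f(1) = 0.
    if n <= 1:
        return 0
    return max(worst(n1) + worst(n - n1) + n * n1 for n1 in range(1, n // 2 + 1))
```

Every value stays below $2n^2$, matching the induction above.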
Proposition 5.27 (Soundness of Algorithm 4).
Let $W$ be the set returned by Algorithm 4.
We have $W\subseteq\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Streett}\left(\mathrm{SP}\right)\right)$.
Proof.
We follow the proof of Proposition 5.20.
Let $C$ be a set of vertices added to goodEC in Line 4.
Since $P[C]$ is strongly connected by
Lemma 5.25, we have that
immediately before $C$ is added to goodEC it was checked that
$P[C]$ contains at least one edge, is strongly connected, and $\textnormal{{Bad}}(\mathit{D}(C))$
is empty. Thus it is sufficient to show that Invariant 5.21 holds
in Algorithm 4.
Before the first iteration of the while-loop $Q$ is initialized with
the maximal end-components of $P$ and thus the invariant holds.
Assume the invariant holds before the beginning of an iteration of the outer while-loop
and let $S$ be the set of vertices for which the data structure
is removed from $Q$ in this iteration. In the inner while-loop some vertices
and their random attractor in $P[S]$ might be removed from $S$. Let $S^{\prime}$ be
the remaining vertices. By the definition of a random attractor there are no
random edges from $S^{\prime}$ to $S\setminus S^{\prime}$ and thus by the induction hypothesis
there are no random edges from $S^{\prime}$ to $V\setminus S^{\prime}$.
Then either $P[S^{\prime}]$ is strongly connected and no set is added to $Q$ in
this iteration of the while-loop, or a top or a bottom SCC $C$ of $P[S^{\prime}]$
is identified by Lemma 5.25.
If $C$ is a top SCC, then there are no edges from $S^{\prime}\setminus C$
to $C$ and thus $S^{\prime}\setminus C$ has no outgoing random edges.
Hence the invariant is maintained when $\mathit{D}(S^{\prime}\setminus C)$ is added to $Q$.
Then the random vertices of $C$ with edges to vertices in $S^{\prime}\setminus C$ and
their random attractor are removed from $C$. Thus the remaining vertices
of $C$ have no random edges to $V\setminus C$ and the invariant is maintained
when the data structure of this vertex set is added to $Q$.
If $C$ is a bottom SCC, then there are no edges from $C$ to $S^{\prime}\setminus C$;
thus the invariant is maintained when $\mathit{D}(C)$ is added to $Q$. The random
attractor of $C$ is removed from $S^{\prime}\setminus C$ before the data structure of
the remaining vertices is added to $Q$, hence the invariant is maintained in all
cases.
∎
Proposition 5.28 (Completeness of Algorithm 4).
Let $W$ be the set returned by Algorithm 4.
We have $\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Streett}\left(\mathrm{SP}\right)\right)\subseteq W$.
Proof.
Following the proof of Proposition 5.22, it is sufficient
to show by induction over the iterations of the outer-while loop that Invariant 5.23 holds in Algorithm 4.
Before the first iteration of the outer while-loop, the
set $Q$ is initialized with the MECs of $P$, thus the induction base holds.
Let $S$ be the set of vertices for which the data structure is
removed from $Q$ in an iteration of the outer while-loop and
let $\mathcal{X}_{S}$ be the set of good end-components
contained in $S$.
Let $S^{\prime}$ be the subset of $S$ that is not removed in the inner while-loop.
We have $X\subseteq S^{\prime}$ for every $X\in\mathcal{X}_{S}$ by
Corollary 5.14.
Since every end-component contains an edge, $P[S^{\prime}]$ contains at least one
edge if $\mathcal{X}_{S}$ is not empty.
Then either $S^{\prime}$ and thus all $X\in\mathcal{X}_{S}$ are added to goodEC
(Line 4) or an SCC $C\subsetneq S^{\prime}$ of $P[S^{\prime}]$ is
identified in Line 4 by Lemma 5.25. By
Lemma 5.13 (a) each $X\in\mathcal{X}_{S}$ is either
a subset of $C$ or of $S^{\prime}\setminus C$.
For $X\subseteq C$ we have $R\cap X=\emptyset$ (Line 4)
since $X$ has no outgoing random edges and thus $X\subseteq C\setminus\mathit{Attr}(P[C],R)$ by Lemma 5.13 (b).
For $X\subseteq S^{\prime}\setminus C$ we have $X\cap C=\emptyset$
and thus $X\subseteq S^{\prime}\setminus\mathit{Attr}(P[S^{\prime}],C)$
by Lemma 5.13 (b). The data structures of $C\setminus\mathit{Attr}(P[C],R)$
and of $S^{\prime}\setminus\mathit{Attr}(P[S^{\prime}],C)$
are added to $Q$ in lines 4 and either 4
or 4, hence the invariant holds after the outer while-loop.
∎
5.5 Algorithm for Sparse MDPs with Streett Objectives
Algorithm 5 combines Algorithm 3
with the ideas of the MEC-algorithm for sparse MDPs of [16] and
the algorithm for graphs with Streett objectives of [28].
As for dense graphs, the difference to
Algorithm 3 lies in the search for strongly connected
components in the sub-MDP $P[S]$ induced by a vertex set $S$ for which
the data structure was maintained in $Q$ and then some vertices (and their
random attractor) might have been removed from it. The algorithm is based
on the following observation: Whenever a strongly connected component $C$
is not strongly connected after some vertices $A$ were removed from it,
then (a) there is a top and a bottom SCC in $P[C\setminus A]$ and
(b) some vertex of the top SCC had an incoming edge from a vertex of $A$
and some vertex of the bottom SCC had an outgoing edge to a vertex of $A$.
We label vertices that lost an incoming edge since the last SCC computation
with $h$ (for head) and vertices that lost an outgoing edge with $t$ (for tail).
If more than $\sqrt{m/\log n}$ vertices are labeled, we remove all labels and
compute SCCs as in Algorithm 3; this can happen at most
$\sqrt{m\log n}$ times. Otherwise we search for the smallest top or bottom SCC
of $P[S]$ by searching in lock-step from all labeled vertices.
Lock-step means that one step in each of the searches is executed before
the next step of a search is started and all searches are stopped as soon as
one search finishes. The search for top SCCs is done by
searching for bottom SCCs in the reverse graph. Tarjan’s depth-first search
based SCC algorithm detects a bottom SCC in time proportional to the number
of edges in the bottom SCC when the search is started from a vertex inside
the bottom SCC. As there are at most $\sqrt{m/\log n}$ parallel searches,
the time for all the lock-step searches is $O(\sqrt{m/\log n})$ times the
number of edges in the smallest top or bottom SCC of $P[S]$. Since
each edge can be in the smallest SCC at most $O(\log n)$ times, this leads to
a total runtime of $O(m\sqrt{m\log n})$.
Whenever an SCC is identified, the labels of its vertices are removed. The
Invariants 5.21 and 5.23 are maintained
as in Algorithm 3.
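The lock-step search can be sketched as follows (a simplified rendering, not the paper's implementation: each candidate start gets its own DFS, all live searches are advanced one vertex per round, and the first exhausted search whose visited set is strongly connected has found a bottom SCC; top SCCs would be found by running the same routine on the reverse graph):

```python
def is_strongly_connected(C, adj):
    # C is strongly connected iff from one vertex all of C is reachable both
    # forwards and backwards (edges restricted to C); naive O(|C|^2) check.
    if not C:
        return False
    fwd = {v: [w for w in adj[v] if w in C] for v in C}
    rev = {v: [u for u in C if v in adj[u]] for v in C}
    def reach(nbrs):
        v0 = next(iter(C))
        seen, stack = {v0}, [v0]
        while stack:
            for w in nbrs[stack.pop()]:
                if w not in seen:
                    seen.add(w); stack.append(w)
        return seen == C
    return reach(fwd) and reach(rev)

def lock_step_bottom_scc(S, adj, starts):
    # One DFS per labeled start vertex; each round advances every live search
    # by one vertex. A search that exhausts has a successor-closed visited set,
    # which is a bottom SCC iff it is strongly connected. By the label
    # invariant some start lies in a bottom SCC, and the search started inside
    # the smallest bottom SCC finishes first.
    searches = {s: ([s], {s}) for s in starts}   # per start: DFS stack, visited
    while searches:
        for s, (stack, seen) in list(searches.items()):
            if stack:
                v = stack.pop()
                for w in adj[v]:
                    if w in S and w not in seen:
                        seen.add(w); stack.append(w)
            elif is_strongly_connected(seen, adj):
                return seen
            else:
                del searches[s]          # start was not inside a bottom SCC
    return None
```

For `adj = {'a': ['b'], 'b': ['a', 'c', 'd'], 'c': ['d'], 'd': ['c']}` with starts `{'a', 'c'}`, the search from `c` exhausts after visiting only `{'c', 'd'}` and wins the lock-step race, while the search from `a` would have to visit the whole graph.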
Lemma 5.29 (Label Invariant).
In Algorithm 5 the following invariant is maintained
for every set $S$ for which the data structure $\mathit{D}(S)$ is in $Q$:
Either (1) no vertex of $S$ is labeled and $P[S]$ is strongly connected or
(2) in each top SCC of $P[S]$ at least one vertex is labeled with $h$
and in each bottom SCC of $P[S]$ at least one vertex is labeled with $t$.
Proof.
The proof is by induction over the iterations of the outer while-loop.
After the initialization of $Q$ with the MECs of $P$ no vertex is labeled
and every set $S$ with $\mathit{D}(S)\in Q$ is strongly connected.
Let now $S$ denote the set for which $\mathit{D}(S)$ is removed from $Q$
at the beginning of an iteration of the outer while-loop and assume the
invariant holds for $S$.
Observation.
We have for non-empty vertex sets $W$ and $Z=W\setminus Y$
with $Y\subsetneq W$ that if $C$ is a top (bottom) SCC in $P[Z]$
but had incoming (outgoing) edges in $P[W]$, then these incoming
(outgoing) edges were from (to) vertices in $Y$. Thus when the
invariant holds for $W$ and
we label each vertex of $Z$ with an incoming edge from $Y$ with $h$ and
each vertex of $Z$ with an outgoing edge to $Y$ with $t$, then the invariant
holds for $Z$.
By this observation the invariant continues to hold for $S$ after the inner
while-loop.
In the case $\lvert H\rvert+\lvert T\rvert\geq\sqrt{m/\log n}$
all labels are removed from $S$ and then each SCC $C$ of $P[S]$ is
considered separately. Note that for each $C$ the invariant holds
and thus the invariant continues to hold for the set $C$ added to $Q$ after
the vertices in $A$ were removed and the corresponding labels were added in
Line 5.
In the case $\lvert H\rvert+\lvert T\rvert<\sqrt{m/\log n}$
a bottom or top SCC $C$ of $P[S]$ is identified and all labels
of $C$ are removed. The invariant holds for $C$ and thus continues
to hold for the set $C$ added to $Q$ after
vertices were removed from $C$ in Line 5
and the corresponding labels were added in Line 5.
By the above observation with $W=S$ and $Y=\mathit{Attr}(P[S],C)$
the invariant also holds for the set $S\setminus\mathit{Attr}(P[S],C)$
for which the data structure is added to $Q$ after the corresponding
labels are added in Line 5.
∎
Proposition 5.30 (Runtime of Algorithm 5).
Algorithm 5 takes $O(m\sqrt{m\log n}+b\log n)$ time.
Proof.
Using the data structure of Lemma 5.18 ([28]),
the initialization phase of Algorithm 5 takes
$O(\textsc{MEC}+b+n)$ time, which is in $O(m\sqrt{m}+b)$ [16].
Further by Theorem 3.1 the almost-sure reachability computation
after the outer while-loop can be done in $O(\textsc{MEC})$ time.
Removing bad vertices takes total time $O(n+b)$ by Lemma 5.18.
Since a label is added only when an edge is not considered further by the
algorithm, the total time for adding and removing labels is $O(m)$.
Whenever a random attractor is computed, its edges are not considered further;
thus all attractor computations take $O(m)$ total time by Definition 5.12.
Note that whenever a graph is not strongly connected, it contains a top SCC and
a bottom SCC and one of them has at most half of the vertices. Thus whenever
a top or bottom SCC $C$ with $C\subsetneq S$ is identified in
Line 5, then $\lvert C\rvert\leq\lvert S\rvert/2$.
This implies by Lemma 5.29
that whenever Remove or Construct are called (after the
initialization of $Q$), the vertices that are removed or added, respectively, are either
(1) vertices for which the size of the SCC containing them was at least halved
or (2) vertices that are not considered further. For each vertex, case (1) can happen
at most $O(\log n)$ times and case (2) at most once, thus all calls to Remove or Construct take
total time $O((n+b)\log n)$ by Lemma 5.18.
It remains to bound the time for identifying SCCs and determining the
random boundary vertices $R=\{v\in V_{R}\cap C\mid\exists u\in S\setminus C\text{ s.t.\ }(v,u)\in E\}$ in
Case 1, $\lvert H\rvert+\lvert T\rvert\geq\sqrt{m/\log n}$,
and Case 2, $\lvert H\rvert+\lvert T\rvert<\sqrt{m/\log n}$.
Since labels are added only when edges are not considered further and all
labels of the considered vertices are deleted when Case 1 occurs, Case 1 can
happen at most $\sqrt{m\log n}$ times. Thus the total time for Case 1 can be
bounded by $O(m\sqrt{m\log n})$. In Case 2 we charge the time for the
$O(\sqrt{m/\log n})$ lock-step searches to the edges in the identified
SCC $C$. With Tarjan’s SCC algorithm [37] a bottom SCC is identified
in time proportional to the number of edges in the bottom SCC when the search
is started at a vertex in the bottom SCC, which is in
Algorithm 5 guaranteed by Lemma 5.29
for both top and bottom SCCs. Since always the smallest top or bottom SCC in
$P[S]$ is identified, each edge is charged at most $O(\log n)$ times.
Thus the total time for identifying SCCs in Case 2 is $O(m\sqrt{m\log n})$.
Determining the random boundary vertices $R$ in Case 2 can be charged to
the edges in $C$
and to the edges from $C$ to $S\setminus C$, which are then not considered
further by the algorithm. Thus the total runtime of the algorithm is
$O(m\sqrt{m\log n})$.
∎
Proposition 5.31 (Correctness of Algorithm 5).
Let $W$ be the set returned by Algorithm 5.
We have $W=\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Streett}\left(\mathrm{SP}\right)\right)$.
Proof.
Lemma 5.29 implies that whenever a vertex set is added
to goodEC in Line 5, it induces a strongly connected sub-MDP.
Thus we have that immediately before a set of vertices $C$ is added to
goodEC in Line 5 or Line 5, it is checked that
$P[C]$ contains at least one edge, is strongly connected, and $\textnormal{{Bad}}(\mathit{D}(C))$
is empty. For the soundness and completeness of Algorithm 5
it remains to show the Invariants 5.21 and 5.23.
We have for each iteration of the outer while-loop:
The inner while-loop is the same as in Algorithms 3
and 4. In the case $\lvert H\rvert+\lvert T\rvert=0$,
the currently considered set of vertices is added to goodEC and
no set is added to $Q$. If $\lvert H\rvert+\lvert T\rvert\geq\sqrt{m/\log n}$,
the same operations as in Algorithm 3 are performed.
If $\lvert H\rvert+\lvert T\rvert<\sqrt{m/\log n}$,
like in Algorithm 4,
either a top or a bottom SCC is identified and then the same operations as in
Algorithm 4 are applied to the identified SCC and the
remaining vertices. As the operations in
Algorithms 3 and 4 preserve
the invariants, this is also true for Algorithm 5. ∎
6 MDPs with Rabin and Disjunctive Büchi and coBüchi Objectives
In the first part of this section we prove the following conditional lower bounds for Rabin,
and disjunctive Büchi and coBüchi objectives.
Theorem 6.1.
Assuming STC, there is no combinatorial $O(n^{3-\epsilon})$ or $O((kn^{2})^{1-\epsilon})$ algorithm for
each of the following problems:
1.
computing the a.s. winning set in an MDP with a disjunctive Büchi query;
2.
computing the winning set in a graph with a disjunctive coBüchi objective and
thus also computing the a.s. winning set in an MDP with a disjunctive coBüchi objective or
a disjunctive coBüchi query;
3.
computing the a.s. winning set in an MDP with a Rabin objective.
Theorem 6.2.
Assuming SETH or OVC, there is no $O(m^{2-\epsilon})$ or $O((k\cdot m)^{1-\epsilon})$ algorithm for
each of the following problems:
1.
computing the a.s. winning set in an MDP with a disjunctive Büchi query;
2.
computing the a.s. winning set in an MDP with a disjunctive coBüchi objective or
a disjunctive coBüchi query;
3.
computing the a.s. winning set in an MDP with a disjunctive Singleton coBüchi objective or
a disjunctive Singleton coBüchi query;
4.
computing the a.s. winning set in an MDP with a Rabin objective.
On the algorithmic side we prove the following theorem in the second part of this section.
Note that a Rabin objective corresponds to a disjunctive objective over
1-pair Rabin objectives.
Theorem 6.3.
Given an MDP $P=((V,E),(V_{1},V_{R}),\delta)$
and a Rabin objective with Rabin pairs $\mathrm{RP}=\{(L_{i},U_{i})\mid 1\leq i\leq k\}$,
let $b=\sum_{i=1}^{k}(\lvert L_{i}\rvert+\lvert U_{i}\rvert)$.
Let MEC denote the time to compute a MEC-decomposition.
1.
The almost-sure winning set $\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Rabin}\left(\mathrm{RP}\right)\right)$ can be computed in
$O(k\cdot\textsc{MEC})$ time.
2.
If $U_{i}=\emptyset$ for all $1\leq i\leq k$ (i.e. the Rabin
pairs are Büchi objectives),
then the almost-sure winning set for the disjunctive objective over the Rabin pairs
can be computed in $O(\textsc{MEC}+b)$ time and the disjunctive query in
$O(k\cdot m+\textsc{MEC})$ time.
3.
If $L_{i}=V$ for all $1\leq i\leq k$ (i.e. the Rabin
pairs are coBüchi objectives), then the
almost-sure winning set for the disjunctive objective and the disjunctive
query over the Rabin pairs
can be computed in $O(k\cdot m+\textsc{MEC})$ time.
6.1 Conditional Lower Bounds for Rabin, Büchi and coBüchi
The conditional lower bounds for Rabin, and disjunctive Büchi and coBüchi objectives are based
on our results for reachability (see Section 3.2) and safety objectives
(see Section 4.1) and
the Observations 2.5, 2.6 & 2.8
that interlink these objectives.
Proposition 6.4.
Assuming STC, there is no combinatorial $O(n^{3-\epsilon})$ or $O((k\cdot n^{2})^{1-\epsilon})$ algorithm for
1.
computing the winning set in an MDP with a disjunctive Büchi query,
2.
computing the winning set in a graph with a disjunctive coBüchi objective, and
3.
computing the winning set in an MDP with a Rabin objective.
Moreover, there is no such algorithm deciding whether the winning set is non-empty
or deciding whether a specific vertex is in the winning set.
Proof.
1) By Observation 2.6, reachability in MDPs can be reduced in
linear time to Büchi. Thus the result follows from
the corresponding hardness result for reachability (cf. Theorem 3.7).
2) By Observation 2.5 the winning set of disjunctive safety is non-empty iff
the winning set of disjunctive coBüchi with the same target sets is non-empty. Thus
the result follows from the corresponding hardness result for safety (cf. Theorem 4.2).
For the problem of deciding whether a specific vertex is in the winning set, recall
that the graph $G^{\prime}$ constructed in Reduction 4.3 is such that vertex $s$
appears in each infinite path and thus if there is a winning strategy starting in some vertex,
then there is also one starting in $s$.
That is, deciding on $G^{\prime}$ whether $s$ is winning is equivalent to deciding whether the winning
set is non-empty. Hence, the lower bound for the former follows.
3) The result follows from (2) and Observation 2.8, by which disjunctive coBüchi objectives are special instances of Rabin objectives.
∎
Proposition 6.5.
Assuming SETH or OVC, there is no $O(m^{2-\epsilon})$ or $O((k\cdot m)^{1-\epsilon})$ algorithm for
1.
computing the winning set in an MDP with a disjunctive Büchi query,
2.
computing the winning set in an MDP with a disjunctive coBüchi objective or
a disjunctive coBüchi query,
3.
computing the winning set in an MDP with a disjunctive Singleton coBüchi objective or
a disjunctive Singleton coBüchi query, and
4.
computing the winning set in an MDP with a Rabin objective.
Moreover, there is no such algorithm for deciding whether the winning set is non-empty
or deciding whether a specific vertex is in the winning set.
Proof.
1) By Observation 2.6, reachability in MDPs can be reduced in
linear time to Büchi. Thus the result follows from
the corresponding hardness result for reachability (cf. Theorem 3.10).
2) By Observation 2.5 the winning set of disjunctive safety is non-empty iff
the winning set of disjunctive coBüchi with the same target sets is non-empty. Thus
the result follows from the corresponding hardness result for safety (cf. Theorem 4.5).
For the problem of deciding whether a specific vertex is in the winning set, recall
that the MDP $P$ constructed in Reduction 4.6 is such that vertex $s$
appears in each infinite path and thus if there is a winning strategy starting in some vertex,
then there is also one starting in $s$.
That is, deciding on $P$ whether $s$ is winning is equivalent to deciding
whether the winning
set is non-empty. Hence, the lower bound for the former follows.
3) This holds by (2) and the fact that all sets $T_{i}$ in Lemma 4.7 are singletons.
4) The result follows from (2) and Observation 2.8, by which disjunctive coBüchi objectives are special instances of Rabin objectives.
∎
6.2 Algorithm for MDPs with Rabin Objectives
In this section we describe an algorithm for MDPs with Rabin objectives that
considers each MEC of the input MDP separately. This formulation has the
advantage that we can obtain a faster runtime than previously known
for the special case of disjunctive coBüchi objectives, which we describe in
Section 6.4. The special case of Büchi objectives is
described in Section 6.3.
For Rabin objectives a good end-component could, equivalently to
Definition 5.2, be defined as follows.
Definition 6.6 (Good Rabin End-Component).
Given an MDP $P$ and a set $\mathrm{RP}=\{(L_{i},U_{i})\mid 1\leq i\leq k\}$ of
Rabin pairs,
a good Rabin end-component is an end-component $X$ of $P$ such that
$L_{i}\cap X\neq\emptyset$ and $U_{i}\cap X=\emptyset$ for some
$1\leq i\leq k$.
As for Streett objectives, we determine the almost-sure winning set for Rabin
objectives by computing almost-sure reachability of the union of all good Rabin
end-components. The correctness of this approach
follows from Corollary 5.7 and Proposition 5.11.
We use the notation defined in Section 5.2.
Our strategy to find all good Rabin end-components is as follows. First the
MEC-decomposition of the input MDP $P$ is determined. For each MEC $X$
and separately for each $1\leq i\leq k$ we first remove the set $U_{i}$ and its
random attractor and then compute the MEC-decomposition in the sub-MDP induced
by the remaining vertices. Every newly computed MEC that contains a vertex of $L_{i}$
is a good Rabin end-component. If the MEC $X$ of $P$ contains one such
good end-component, then by Corollary 5.8 all vertices of $X$
are in the almost-sure winning set for the Rabin objective. Thus we can immediately
add $X$ to the set of winning MECs in Line 6.
(Footnote: We could alternatively add only the vertices in the good end-component, because
the winning MEC would be detected as winning in the final almost-sure reachability
computation; the presented formulation shows the similarities
to the coBüchi algorithm in Section 6.4. Additionally, this
allows reusing the initial MEC-decomposition for the almost-sure reachability
computation.)
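The search for good Rabin end-components described above can be sketched as follows. For readability, the sketch treats the graph special case (no random vertices), where the random attractor of a set $U$ is $U$ itself and the maximal end-components are exactly the non-trivial SCCs; the function names and the toy example are illustrative and not taken from the paper's pseudocode.

```python
from collections import defaultdict

def sccs(vertices, edges):
    """Kosaraju's algorithm; returns the SCCs of (vertices, edges) as sets."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    order, seen = [], set()
    for s in vertices:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(adj[v])))
                    break
            else:
                order.append(u)
                stack.pop()
    comps, seen = [], set()
    for s in reversed(order):  # second pass on the reverse graph
        if s in seen:
            continue
        comp, todo = set(), [s]
        seen.add(s)
        while todo:
            u = todo.pop()
            comp.add(u)
            for v in radj[u]:
                if v not in seen:
                    seen.add(v)
                    todo.append(v)
        comps.append(comp)
    return comps

def good_rabin_sccs(vertices, edges, rabin_pairs):
    """For each non-trivial SCC X and each pair (L_i, U_i): remove U_i from X,
    recompute SCCs, and look for one that meets L_i (a good Rabin EC)."""
    def nontrivial(comp, es):
        return any(u in comp and v in comp for u, v in es)
    win = []
    for X in sccs(vertices, edges):
        EX = [(u, v) for u, v in edges if u in X and v in X]
        if not nontrivial(X, EX):
            continue
        for L, U in rabin_pairs:
            if not (L & X):          # pair i cannot be satisfied inside X
                continue
            rest = X - U             # graph case: the attractor of U is U
            E2 = [(u, v) for u, v in EX if u in rest and v in rest]
            if any(L & Y and nontrivial(Y, E2) for Y in sccs(rest, E2)):
                win.append(X)        # the whole component X is winning
                break
    return win

# Toy example: the cycle a -> b -> a satisfies the pair (L={a}, U={c}).
demo = good_rabin_sccs({"a", "b", "c"},
                       [("a", "b"), ("b", "a"), ("b", "c")],
                       [({"a"}, {"c"})])
# demo == [{"a", "b"}]
```

In the full MDP setting, the two structural primitives (`sccs` as MEC-decomposition and the trivial attractor) would be replaced by the MEC-decomposition and random-attractor computations referenced in the text.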
Proposition 6.7 (Runtime of Algorithm 6).
Algorithm 6 can be implemented in $O(k\cdot\textsc{MEC})$ time.
Proof.
The initialization of $\mathcal{X}$ with all MECs of the input
MDP $P$ can clearly be done in $O(\textsc{MEC})$ time. Further by
Theorem 3.1 the final almost-sure reachability computation
can be done in $O(\textsc{MEC})$ time.
(Footnote: Actually the almost-sure reachability computation can be done in $O(m)$ time,
reusing the already computed MEC-decomposition.)
Assume that each vertex has a list of the sets $L_{i}$ and $U_{i}$ for $1\leq i\leq k$ it belongs to.
(We can generate these lists from the lists of the Rabin pairs in $O(b)=O(nk)$ time at the beginning of the algorithm.)
Consider an iteration of the outer for-each loop,
let $X$ denote the considered MEC, and fix one iteration $i$ of the $k$
iterations of the for loop. Line 4 requires $O(|X|)$ time.
Let $m_{X}$ be the number of edges in $P[X]$ and let
$\textsc{MEC}_{X}$ denote the time needed to compute a MEC-decomposition
on $P[X]$. Line 5 requires $O(m_{X}+\textsc{MEC}_{X})=O(\textsc{MEC}_{X})$ time. The inner for-each loop takes $O(|X|)$ time
as in each iteration we need $O(|Y|)$ in Line 7
and constant time in Line 8. Thus in total we have
$O(b+\textsc{MEC}+\sum_{X\in\mathcal{X}}k\cdot(|X|+\textsc{MEC}_{X}))=O(k\cdot\textsc{MEC})$.
∎
Proposition 6.8 (Correctness of Algorithm 6).
Algorithm 6 computes $\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Rabin}\left(\mathrm{RP}\right)\right)$.
Proof.
By Corollaries 5.7 & 5.8 and Proposition 5.11, we know that it
suffices to correctly classify each MEC as either winning or not winning; we say a MEC is winning iff it contains a good Rabin EC,
that is, it contains
an EC $X$ such that $L_{i}\cap X\neq\emptyset$ and $U_{i}\cap X=\emptyset$ for some $1\leq i\leq k$.
The loops in Lines 6 & 6 iterate over all MECs $X$
and all Rabin pairs $(L_{i},U_{i})$.
What remains to show is that
Lines 6–6 correctly classify
whether a MEC contains a good EC satisfying the Rabin pair $(L_{i},U_{i})$.
•
Assume $X$ contains a good EC $X^{\prime}$ that satisfies $(L_{i},U_{i})$, i.e., $L_{i}\cap X^{\prime}\neq\emptyset$ and $U_{i}\cap X^{\prime}=\emptyset$.
Then the if condition in Line 6 is true and the algorithm subtracts the random attractor
of $U_{i}$.
As $X^{\prime}$ is strongly connected, has no outgoing random edges, and $U_{i}\cap X^{\prime}=\emptyset$,
it does not intersect with $\mathit{Attr}(P[{X}],U_{i})$ (see also Lemma 5.13).
Thus there is a MEC $Y\in\mathcal{Y}$ that contains $X^{\prime}$ and thus $L_{i}\cap Y\neq\emptyset$.
Hence, the algorithm correctly classifies the set $X$ as winning MEC.
•
Assume the algorithm classifies a MEC $X$ as winning.
Then for some $i$ in Line 6 there is an end-component
$Y\in\mathcal{Y}$ of $P[X\setminus\mathit{Attr}(P[{X}],U_{i})]$
with $L_{i}\cap Y\neq\emptyset$ and $U_{i}\cap Y=\emptyset$,
i.e., $Y$ is a good end-component in $P[X\setminus\mathit{Attr}(P[{X}],U_{i})]$.
Moreover, there cannot be a random edge from $u\in Y$ to $\mathit{Attr}(P[{X}],U_{i})$, as such a $u$
would be included in the random attractor $\mathit{Attr}(P[{X}],U_{i})$.
Thus $Y$ is also a good end-component of the full MDP $P$,
i.e., it was classified correctly.
By the above we have that whenever the outer for-each loop terminates, the set winMEC consists of all
winning MECs and then by Corollary 5.7 and Proposition 5.11 we can compute
$\langle\!\langle 1\rangle\!\rangle_{\textit{as}}\left(P,\textrm{Rabin}\left(\mathrm{RP}\right)\right)$ by computing almost-sure reachability
of the union of all winning MECs.
∎
6.3 Algorithms for MDPs with Büchi Objectives
As Büchi objectives can be encoded as Rabin pairs,
Algorithm 6 can also be used
to compute the a.s. winning set for disjunctive Büchi objectives.
However, Büchi objectives allow for some immediate simplifications that result in Algorithm 7.
These simplifications are based on the observation that for Büchi objectives all sets $U_{i}$ are empty and
therefore also the random attractors computed in Line 6 of Algorithm 6 are empty.
Hence, there is also no need to recompute the MECs and
deciding whether a MEC is winning reduces to testing whether it intersects with one of the target sets.
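The resulting winning-MEC test is a one-liner; the following sketch is illustrative (the names are not from the paper's pseudocode) and covers only the classification step, not the surrounding MEC-decomposition and reachability computations.

```python
def winning_mecs_buechi(mecs, target_sets):
    """Winning MECs for the disjunctive Büchi objective over sets T_i:
    since all U_i are empty, a MEC is winning iff it meets some T_i."""
    in_some_target = set().union(*target_sets)   # O(b) preprocessing (the flags)
    return [X for X in mecs if X & in_some_target]

demo = winning_mecs_buechi([{1, 2}, {3}, {4, 5}], [{2}, {9}])
# demo == [{1, 2}]
```

The precomputed union plays the role of the per-vertex flags in the proof below, replacing the $k$-fold inner loop of the Rabin algorithm.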
Proposition 6.9 (Runtime of Algorithm 7).
Algorithm 7 can be implemented in $O(\textsc{MEC}+b)$ time.
Proof.
The initialization of $\mathcal{X}$ with all MECs of the input
MDP $P$ can clearly be done in $O(\textsc{MEC})$ time. Further by
Theorem 3.1 the final almost-sure reachability computation
can be done in $O(\textsc{MEC})$ time.
Assume that each vertex has a flag indicating whether it is in one of the sets $T_{i}$ or in none of them.
(We can generate these flags from the lists of the sets $T_{i}$ in $O(b)$
time at the beginning of the algorithm.)
Consider an iteration of the for-each loop,
let $X$ denote the considered MEC, and fix some iteration $i$ of the for loop.
One iteration costs $O(|X|)$ time, as we need $O(|X|)$ time in Line 7
and constant time in Line 7.
Thus in total the algorithm takes
$O(\textsc{MEC}+n+b)=O(\textsc{MEC}+b)$ time.
∎
When it comes to disjunctive Büchi queries with $k$ sets $T_{i}$,
one basically solves $k$
Büchi problems and then computes disjunctive almost-sure reachability
queries of the winning sets of the Büchi problems.
However, as the MEC-decomposition is independent of the sets $T_{i}$,
it suffices to compute the MEC-decomposition once.
This results in an $O(k\cdot m+\textsc{MEC}+b)=O(k\cdot m+\textsc{MEC})$
time algorithm (see Algorithm 8).
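The sharing of the MEC-decomposition across the $k$ queries can be sketched as follows (an illustrative fragment, not the paper's pseudocode; the per-set reachability computations are omitted).

```python
def buechi_query_winning_mecs(mecs, target_sets):
    """For each T_i, the MECs winning for Büchi(T_i); the MEC list,
    computed once, is reused for every set."""
    return [[X for X in mecs if X & T] for T in target_sets]

demo = buechi_query_winning_mecs([{1, 2}, {3}], [{1}, {3}, {5}])
# demo == [[{1, 2}], [{3}], []]
```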
6.4 Algorithms for MDPs with coBüchi Objectives
Again, as coBüchi objectives can be encoded as Rabin pairs, one can use
Algorithm 6 to compute the a.s. winning set for disjunctive
coBüchi objectives.
However, coBüchi objectives allow for some simplifications that result in
the simpler and more efficient Algorithm 9.
These simplifications are based on the observation that for coBüchi objectives all sets $L_{i}$ coincide with the set of all vertices and
therefore the if conditions in Lines 6 & 6 of Algorithm 6 are always true.
That is, whenever there is a vertex in a MEC $X$ of $P$ that is not contained
in $\mathit{Attr}(P[X],T_{i})$, then there is a MEC in $P[X\setminus\mathit{Attr}(P[X],T_{i})]$, which is a good end-component of $P$.
Testing whether a MEC contains a good EC for a coBüchi objective
$\textrm{coB{\"{u}}chi}\left(T_{i}\right)$ thus reduces to testing
whether the random attractor of $T_{i}$ covers the whole MEC.
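This test can be sketched as follows, with `random_attractor` treated as an assumed primitive (in the graph case, with no random vertices, it is simply the set itself, which the demo uses); names are illustrative.

```python
def winning_mecs_cobuechi(P, mecs, target_sets, random_attractor):
    """A MEC X is winning iff for some T_i the random attractor of T_i
    inside P[X] does not cover all of X."""
    return [X for X in mecs
            if any(X - random_attractor(P, X, T & X) for T in target_sets)]

# Graph-case demo (no random vertices): the attractor of T within X is T.
demo = winning_mecs_cobuechi(None, [{1, 2, 3}, {4}], [{4}],
                             lambda P, X, T: T)
# demo == [{1, 2, 3}]  ({1,2,3} avoids {4}; the MEC {4} is fully covered)
```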
Observation 6.10.
The same ideas can be used for the disjunction of one-pair Streett objectives
(Table 5). For each MEC $X$ and each $i$ we check whether
$X\cap L_{i}\neq\emptyset$ and $X\cap U_{i}=\emptyset$. If this is the case,
then we determine whether the random attractor of $L_{i}$ covers the whole MEC.
If not, then the MEC contains a good end-component for the one-pair Streett
objective.
Proposition 6.11 (Runtime).
Algorithm 9 can be implemented in $O(k\cdot m+\textsc{MEC})$ time.
Proof.
The initialization of $\mathcal{X}$ with all MECs of the input
MDP $P$ can clearly be done in $O(\textsc{MEC})$ time. Further by
Theorem 3.1 the final almost-sure reachability computation
can be done in $O(\textsc{MEC})$ time.
Consider an iteration of the for-each loop,
let $X$ denote the considered MEC, and fix some iteration $i$ of the for loop.
Let $m_{X}$ be the number of edges in $P[X]$.
In the $i$th iteration we need $O(m_{X})$ time
to compute the random attractor in Line 9
and constant time in Line 9.
Thus the total time is $O(k\cdot m+\textsc{MEC})$.
∎
When it comes to disjunctive coBüchi queries with $k$ sets $T_{i}$,
we have to remember which of the sets $T_{i}$ are satisfied by a MEC and
then compute disjunctive almost-sure reachability queries,
one query per set $T_{i}$.
This increases the running time for the almost-sure reachability computation
to $O(k\cdot m)$ (given the MEC-decomposition),
which, however, is subsumed by the total running time of $O(k\cdot m+\textsc{MEC})$.
The resulting algorithm is stated as Algorithm 10.
7 Algorithm for Graphs with Singleton coBüchi Objectives
In this section we show how to compute in linear time the winning set
for graphs with a special type of coBüchi objectives, namely when all sets $T_{i}$
for $1\leq i\leq k$ have cardinality one.
Theorem 7.1.
Given a graph $G=(V,E)$ and coBüchi objectives $T_{i}$ with
$\lvert T_{i}\rvert=1$ for $1\leq i\leq k$, the winning set for the
disjunction over the coBüchi objectives can be computed in $O(m)$ time.
To compute the winning set it is sufficient to detect whether a strongly connected
graph contains a cycle that does not contain all the vertices
in the set $T=\bigcup_{1\leq i\leq k}T_{i}$.
To see this, first note that each non-trivial SCC of the graph (i.e., each
SCC that contains at least one edge) that does not contain all vertices of $T$
is winning. If there is no SCC $S$ with $T\subseteq S$, then
we can determine the winning set in linear time by computing
the vertices that can reach any non-trivial SCC. Thus it remains to consider
an SCC $S$ with $T\subseteq S$. For the relevant case of $\lvert T\rvert>1$ we have that $S$ is a non-trivial SCC.
Since $S$ is strongly connected,
the vertices of $S$ can reach each other and hence it is sufficient to compute whether
$S$ contains a cycle that does not contain all the vertices of $T$ (i.e. solving the non-emptiness problem).
If such a cycle exists, then also $S$ is winning, otherwise $S$ is not winning.
In any case, the winning set can then be determined by computing the vertices
that can reach some winning SCC.
We now describe the algorithm to determine whether a strongly connected
graph $G=(V,E)$ contains a simple cycle $C$ such that we have
$T_{i}\cap C=\emptyset$ for some $1\leq i\leq k$, given $\lvert T_{i}\rvert=1$ for all $i$.
First we check whether $G[V\setminus T_{1}]$ contains a non-trivial SCC.
If this is true, then
$G$ contains a cycle that does not contain $T_{1}$ and we are done. Otherwise
every cycle of $G$ contains $T_{1}$. We assign the edges of $G$ edge lengths
as follows: All edges $(v,w)\in E$ for which $w\in T$ have length 1,
all other edges have length 0.
Let $s$ denote the vertex in $T_{1}$.
Let $\delta$ be the length of the shortest path (w.r.t. the edge lengths
defined above) from $s$ to $s$ that uses at least one edge, i.e., the minimum
length of a cycle containing $s$. We have $\delta<k$ if and only if
a cycle through $s$ of length $\delta$ does not contain all vertices of $T$.
Thus if $\delta<k$, then $G$ is winning for the disjunctive coBüchi objective, otherwise not.
Note that this algorithm would also work for a Rabin objective where we have
for each $1\leq i\leq k$ that (a) $L_{i}=\{s\}$ for some $s\in V$ and
(b) $\lvert U_{i}\rvert=1$.
Since all edge lengths are zero or one, we can compute $\delta$ in linear time.
In Algorithm 11 we additionally use that all
incoming edges of a vertex have the same length. After checking whether
$G[V\setminus T_{1}]$ contains a non-trivial SCC, the algorithm works as
follows.
We modify the graph by replacing the vertex $s$ by two vertices, $s_{\text{in}}$
and $s_{\text{out}}$, and replacing $s$ in all edges $(v,s)\in E$ with
$s_{\text{in}}$ and in all edges $(s,v)\in E$ with $s_{\text{out}}$.
Then $\delta$ is equal to the shortest path from $s_{\text{out}}$ to
$s_{\text{in}}$. For the algorithm we consider both $s_{\text{in}}$ and
$s_{\text{out}}$ to be contained in $T$.
In the $j$th iteration of the for-loop we consider two “queues”,
$Q_{j}$ and $Q_{j+1}$ (can be implemented as sets). Each vertex is added to a
queue at most once during the
algorithm, which is ensured by marking vertices when they are added to a queue
and only adding unmarked vertices.
The following lemma shows that, until the vertex $s_{\text{in}}$ is removed from
$Q_{j}$ and the algorithm terminates,
precisely the vertices with distance $j$ from $s_{\text{out}}$ are
added to $Q_{j}$ for each $j$. Thus $s_{\text{in}}$ is added to $Q_{j}$ for some $j<k$ if and
only if $\delta<k$, which shows the correctness of the algorithm.
The runtime of the algorithm is $O(m)$ because each vertex is added to and
removed from a queue at most once and thus the outgoing edges of a vertex are
only considered once, namely when it is removed from a queue.
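The layered-queue search can be sketched as follows; the function name and graph representation are illustrative, and the sketch assumes, as in the text, that the SCC test on $G[V\setminus T_{1}]$ has already failed (so every cycle contains $s$) and that all $T_{i}$ are singletons, with $T=\bigcup_{i}T_{i}$ and $T_{1}=\{s\}$.

```python
def delta_less_than_k(vertices, edges, T, s, k):
    """Decide whether delta < k: is there a cycle through s whose 0/1 length
    (edges (v, w) with w in T count 1, all others 0) is less than k = |T|?"""
    succ = {v: [] for v in vertices if v != s}
    succ["s_out"] = []
    for v, w in edges:
        v2 = "s_out" if v == s else v           # split s into s_out / s_in
        w2 = "s_in" if w == s else w
        succ.setdefault(v2, []).append(w2)
    marked = {"s_out"}
    Q = {0: {"s_out"}}                          # Q[j]: vertices at distance j
    for j in range(k):
        while Q.get(j):
            v = Q[j].pop()
            if v == "s_in":
                return True                     # delta = j < k
            for w in succ.get(v, []):
                if w not in marked:             # each vertex enters a queue once
                    marked.add(w)
                    # all incoming edges of w have the same length:
                    length = 1 if (w in T or w == "s_in") else 0
                    Q.setdefault(j + length, set()).add(w)
    return False                                # s_in not reached below distance k

# The cycle s -> b -> s has length 1 < 2, so the SCC is winning:
demo1 = delta_less_than_k({"s", "a", "b"},
                          [("s", "a"), ("a", "s"), ("s", "b"), ("b", "s")],
                          {"s", "a"}, "s", 2)
# Here the only cycle contains all of T = {s, a}:
demo2 = delta_less_than_k({"s", "a"},
                          [("s", "a"), ("a", "s")],
                          {"s", "a"}, "s", 2)
# demo1 is True, demo2 is False
```

Draining $Q_{j}$ completely (including vertices added via length-0 edges during the drain) before moving to $Q_{j+1}$ is what makes this a bucketed Dijkstra for 0/1 weights, as Lemma 7.2 formalizes.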
Lemma 7.2.
Before each iteration $j$ of the for-loop in
Algorithm 11, $Q_{j}$ contains
the vertices of $T$ with distance $j$ from $s_{\text{out}}$.
During iteration $j$, the vertices of $V\setminus T$
with distance $j$ from $s_{\text{out}}$ are added to $Q_{j}$.
No other vertices are added to $Q_{j}$.
Proof.
The proof is by induction over the iterations of the for-loop.
Before the first iteration ($j=0$), $Q_{0}$ is initialized with $s_{\text{out}}$
and all queues $Q_{j}$ for $j>0$ are empty,
thus the induction base holds. Assume the claim holds before the $j$th
iteration.
At the end of the while-loop, $Q_{j}$ is empty; every vertex $v$ that was added
to $Q_{j}$ before or in the $j$th iteration of the for-loop is removed from $Q_{j}$
in some iteration of the while-loop. Then all the unmarked vertices $w$ with
$(v,w)\in E$ are marked and added to $Q_{j}$ if the edge $(v,w)$ has length
zero or added to $Q_{j+1}$ if the edge $(v,w)$ has length one.
A vertex $u\in V\setminus T$ with distance
at least $j$ from $s_{\text{out}}$ has distance exactly $j$ if and only if
it can be reached from some vertex $v\in T$ that has distance $j$
by a sequence of zero length edges. The while-loop precisely adds these vertices
to $Q_{j}$. Further, a vertex $u\in T$ has distance $j+1$ if and
only if it has an incoming edge from some vertex $v$ that has distance $j$.
The while-loop adds exactly these vertices to $Q_{j+1}$.
∎
8 Conclusion
In this work we present improved algorithms and the first conditional
super-linear lower bounds for several fundamental model-checking
problems in graphs and MDPs w.r.t. $\omega$-regular objectives.
Our results establish the first model-separation results between graphs
and MDPs w.r.t. classical $\omega$-regular objectives, as well as the first
objective-separation results, both in graphs and in MDPs, for dual objectives
and for conjunctions and disjunctions of the same objectives.
An interesting direction for future work is to consider similar results
for other models, such as games on graphs.
Acknowledgments.
K. C. and M. H. are supported by the Austrian Science Fund (FWF): P23499-N23.
K. C. is supported by S11407-N23 (RiSE/SHiNE), an ERC Start Grant (279307: Graph Games), and a Microsoft Faculty Fellows Award.
For W. D., M. H., and V. L. the research leading to these results has received funding from the
European Research Council under the European Union’s Seventh Framework Programme
(FP/2007-2013) / ERC Grant Agreement no. 340506.
Tagged particle diffusion in one-dimensional systems with Hamiltonian dynamics – II
Anjan Roy
Raman Research Institute, Bangalore 560080, India; [email protected]
Abhishek Dhar
International Centre for Theoretical Sciences, TIFR, Bangalore 560012, India; [email protected]
Onuttom Narayan
Department of Physics, University of California, Santa Cruz, California 95064, USA; [email protected]
Sanjib Sabhapandit
Raman Research Institute, Bangalore 560080, India; [email protected]
(December 8, 2020)
Abstract
We study various temporal correlation functions of a tagged particle
in one-dimensional systems of interacting point particles evolving with
Hamiltonian dynamics. Initial conditions of the particles are chosen from the canonical thermal distribution.
The correlation functions are studied in finite systems, and their forms examined at short and long times.
Various one-dimensional systems are studied. Results of numerical simulations
for the Fermi-Pasta-Ulam chain are qualitatively similar to results for the harmonic chain,
and agree unexpectedly well with a simple description in terms of linearized hydrodynamic equations for sound waves.
Simulation results for the alternate mass hard particle gas reveal that —
in contradiction to our earlier results roy13 with smaller system sizes — the diffusion
constant slowly converges to a constant value, in a manner consistent with mode coupling theories.
Our simulations also show that the behaviour of Lennard-Jones gas depends on its density.
At low densities, it behaves like a hard-particle gas, and at high densities like an anharmonic chain. In all the systems studied, the
tagged particle was found to show normal diffusion asymptotically, with convergence times depending on the system under study.
Finite size effects show up at time scales larger than sound traversal times, their nature being system-specific.
Keywords: Hamiltonian dynamics; 1-d gas; tagged particle
diffusion; velocity auto-correlation function (VAF); mean-squared displacement (MSD)
1 Introduction
Observing tagged particle dynamics constitutes a simple way of probing
the complex dynamics of an interacting many-body system and has
been studied both
theoretically roy13 ; jepsen65 ; lebowitz67 ; lebowitz72 ; percus10 ; evans79 ; kasper85 ; marro85 ; mazur60 ; bishop81 ; bagchi02 ; harris65 ; beijeren83 ; pincus78 ; beijeren91 ; kollman03 ; lizana08 ; gupta07 ; barkai09 ; barkai10
and experimentally hahn96 ; wei00 ; lutz04 . Much of the theoretical
studies on tagged particle diffusion have focused on one-dimensional
systems and discussed two situations where the microscopic particle
dynamics is (i) Hamiltonian roy13 ; jepsen65 ; lebowitz67 ; lebowitz72 ; percus10 ; evans79 ; kasper85 ; marro85 ; mazur60 ; bishop81 ; bagchi02
or (ii) stochastic harris65 ; beijeren83 ; pincus78 ; beijeren91 ; kollman03 ; lizana08 ; gupta07 ; barkai09 ; barkai10 .
A hydrodynamic description of tagged particle diffusion has been
considered in pincus78 ; beijeren91 . Even for Hamiltonian
systems much of the work has been done on hard particle gases but
very little has been done on soft chains mazur60 ; bishop81 ; bagchi02 .
Although there has been considerable work on transport properties of
one dimensional gases narayan02 , this involves the
propagation of conserved quantities as a function of position and
time without reference to the identity of each particle. This changes
things considerably: for instance, conserved quantities propagate
ballistically for an equal mass hard particle gas, resulting in a
thermal conductivity proportional to $N,$ while tagged particle
dynamics in the same system is diffusive. Thus here we approach
the dynamics from a perspective that is different from the heat
conduction literature.
In this paper we present results for tagged particle correlations
for various one dimensional systems.
In particular,
we compute the mean squared deviation (MSD) $\langle[\Delta x(t)]^{2}\rangle$, the velocity autocorrelation function (VAF) $\langle v(t)v(0)\rangle$, and
$\langle\Delta x(t)v(0)\rangle$ of the central
particle, where $\Delta x(t)=x_{M}(t)-x_{M}(0)$ and $v(t)=v_{M}(t)$.
The average $\langle\cdots\rangle$ is taken over initial configurations
chosen from the equilibrium distribution. (Details are given later
in the sections dealing with each system). Note that the three correlation
functions are related to each other as
$$\frac{1}{2}\frac{d}{dt}\langle[\Delta x(t)]^{2}\rangle=\langle\Delta x(t)v(t)\rangle=\langle\Delta x(t)v(0)\rangle=D(t),$$
$$\frac{d}{dt}\langle\Delta x(t)v(0)\rangle=\langle v(t)v(0)\rangle.$$
(1)
These results follow from $\Delta x(t)=\int_{0}^{t}v(t^{\prime})dt^{\prime}$ and
$\langle v(t)v(t^{\prime})\rangle=\langle v(t-t^{\prime})v(0)\rangle;$ the last equation
on the first line defines $D(t)$.
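These identities can be verified numerically on the simplest stationary example, a single harmonic oscillator with Gaussian equilibrium initial conditions. The sketch below is our own illustration (not from the paper), with arbitrary units $m=\omega=k_{B}T=1$; it checks that the central difference of $\tfrac12\langle[\Delta x]^2\rangle$ agrees with the directly sampled $\langle\Delta x(t)v(0)\rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)
omega, kBT, m = 1.0, 1.0, 1.0
ns = 400_000

# Equilibrium initial conditions: <x^2> = kBT/(m*omega^2), <v^2> = kBT/m.
x0 = rng.normal(0.0, np.sqrt(kBT / (m * omega**2)), ns)
v0 = rng.normal(0.0, np.sqrt(kBT / m), ns)

def evolve(t):
    """Exact solution of the harmonic oscillator for each ensemble member."""
    x = x0 * np.cos(omega * t) + v0 * np.sin(omega * t) / omega
    v = -x0 * omega * np.sin(omega * t) + v0 * np.cos(omega * t)
    return x, v

t, dt = 1.0, 1e-3
xm, _ = evolve(t - dt / 2)
xp, _ = evolve(t + dt / 2)
D_from_msd = 0.5 * (np.mean((xp - x0) ** 2) - np.mean((xm - x0) ** 2)) / dt

x, v = evolve(t)
D_direct = np.mean((x - x0) * v0)   # <Δx(t) v(0)>

print(D_from_msd, D_direct)  # both ≈ sin(1) ≈ 0.8415
```

For this ensemble the exact answer is $D(t)=\langle v^2\rangle\sin(\omega t)/\omega$, so both estimates should agree to within sampling error.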
When the tagged particle shows normal diffusive behaviour, $D(t\rightarrow\infty)$
is a constant, which is the diffusion constant. On
the other hand, $D(t\rightarrow\infty)$ is zero for sub-diffusive and
divergent for super-diffusive behaviour. Here we examine the form of the correlations at both “short times”, corresponding to times where the size of the system does not matter and at “long times”, after system size effects show up. As we will see, system size effects typically show up at times $t\sim L/c$ where $L$ is the system size and $c$ the sound speed in the system. The short time regime typically has an initial ballistic regime, with $\langle\Delta x^{2}(t)\rangle\sim t^{2}$, and we will see that this is always followed by a diffusive regime, with $\langle\Delta x^{2}(t)\rangle\sim t$.
The rest of this paper is organized as follows.
In Section 2, we present analytic results for the
harmonic chain, as an indicator of what one might expect for anharmonic
chains, where an exact solution is not possible. In Section 3, we present simulation
results for the Fermi-Pasta-Ulam (FPU) chain of anharmonic oscillators, together with a simple model of
damped sound waves that compares well with the numerical results.
In Section 4, we present the
simulation results for the Lennard-Jones (LJ) gas, and show that its correlation functions resemble
those of a hard-particle gas (obtained in Ref. [roy13]) at low densities, and an anharmonic chain at high densities.
In Section 5, we present simulation results for the alternate mass hard
particle gas with large system sizes, and show that $D(t)$ saturates to a
constant in the large-$t$ limit, and that the approach to this asymptotic
limit is in agreement with the predictions of mode-coupling theory [beijeren];
this contradicts the logarithmic decay of $D(t)$ that we had claimed in a previous paper [roy13],
based on simulations of smaller system sizes.
Finally, in Section 6, we provide a discussion and summary of our results.
2 Harmonic chain
We consider a harmonic chain of $N$ particles labeled $l=1,\ldots,N$.
The particles of masses $m$ are connected by springs with stiffness
constant $k$. Let $\{q_{1},\ldots,q_{N}\}$ denote the displacements
of the particles about their equilibrium positions. The equilibrium
positions are assumed to be separated by a lattice spacing $a$ so
that the mass density is $\rho=m/a$. We assume that the particles $l=0$ and $l=N+1$ are fixed so that $q_{0}=q_{N+1}=0$.
The Hamiltonian of the system is
$$H=\sum_{l=1}^{N}\frac{m}{2}\dot{q}_{l}^{2}+\sum_{l=1}^{N+1}\frac{k}{2}(q_{l}-q_{l-1})^{2}.$$
(2)
Transforming to normal mode coordinates $q_{l}(t)=\sum_{p}a_{p}(t)\phi_{p}(l)$ where
$$\phi_{p}(l)=\left[\frac{2}{m(N+1)}\right]^{1/2}\sin(lpa)\quad{\rm with}\quad p=\frac{n\pi}{(N+1)a}\,,\qquad n=1,\ldots,N$$
(3)
brings the Hamiltonian to the form
$H=\sum_{p}\dot{a}_{p}^{2}/2+\omega_{p}^{2}a_{p}^{2}/2$ with
$$\omega_{p}^{2}=\frac{2k}{m}(1-\cos pa).$$
(4)
The normal mode equations of motion $\ddot{a}_{p}=-\omega_{p}^{2}a_{p}$
are easily solved and lead to the following expression:
$$q_{l}(t)=\sum_{p}\phi_{p}(l)\left[a_{p}(0)\cos\omega_{p}t+\frac{\sin\omega_{p}t}{\omega_{p}}\dot{a}_{p}(0)\right].$$
(5)
We consider a chain in thermal equilibrium at temperature $T$, i.e.
$\langle\dot{a}_{p}^{2}(0)\rangle=\omega_{p}^{2}\langle a_{p}^{2}(0)\rangle=k_{B}T$ and $\langle\dot{a}_{p}(0)a_{p}(0)\rangle=0$. Consider the middle
particle, $l=M=(N+1)/2$ (assuming odd $N$). Defining $\Delta q(t)=q_{M}(t)-q_{M}(0)$, we obtain
$$\langle[\Delta q(t)]^{2}\rangle=\frac{8k_{B}T}{m(N+1)}\sum_{n=1,3,\ldots}\frac{\sin^{2}(\omega_{p}t/2)}{\omega_{p}^{2}}.$$
(6)
The correlations $\langle\Delta q(t)v(0)\rangle$ and $\langle v(t)v(0)\rangle$ can be found by
differentiating this expression, as in Eqs.(1).
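Equation (6) is cheap to evaluate directly. The sketch below is our own check (with $k=m=a=k_{B}T=1$, so $c=1$ and $\rho=1$); it sums Eq. (6) for a large chain and confirms the ballistic regime $\langle\Delta q^{2}\rangle\approx k_{B}Tt^{2}/m$ and the diffusive regime $\langle\Delta q^{2}\rangle\approx k_{B}Tt/(\rho c)$ analyzed in regimes (i) and (ii) below:

```python
import numpy as np

def msd(t, N=2001, k=1.0, m=1.0, a=1.0, kBT=1.0):
    """Evaluate Eq. (6): MSD of the middle particle of a harmonic chain."""
    n = np.arange(1, N + 1, 2)                      # odd modes only
    p = n * np.pi / ((N + 1) * a)
    omega = 2.0 * np.sqrt(k / m) * np.sin(p * a / 2.0)
    return 8.0 * kBT / (m * (N + 1)) * np.sum(np.sin(omega * t / 2.0) ** 2 / omega**2)

# With these units c = a*sqrt(k/m) = 1 and rho = m/a = 1.
t_short = 5e-3   # omega_max * t << 1: ballistic regime
t_mid = 50.0     # 1/omega_max << t << N a / c: diffusive regime
print(msd(t_short) / t_short**2)  # ≈ kBT/m = 1
print(msd(t_mid) / t_mid)         # ≈ kBT/(rho*c) = 1
```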
In Fig. (1) and Fig. (2) we
plot the simulation results for the various correlation functions for different system sizes
and find them to match the exact analytic results (Eq.(6) and its derivatives).
At short times the effect of the boundary is not seen, and hence the correlation functions are independent of system size.
Within this regime it does not matter whether the boundary is periodic or reflecting.
As seen in Fig. (1), an initial $\sim t^{2}$ growth in $\langle\Delta q^{2}(t)\rangle$ crosses over to a linear growth, indicating a diffusive regime that
is also seen in $\langle\Delta q(t)v(0)\rangle.$
After that, boundary effects set in and $\langle(\Delta q)^{2}\rangle/N$ is an almost periodic
function of $t/N$ (Fig. (2)). This is
somewhat surprising since we are averaging over an initial equilibrium
ensemble with all normal modes, and $\omega_{p}\approx cp$ (where $c=a\sqrt{k/m}$ is the wave speed)
only for small $p.$
The behaviour of $\langle[\Delta q(t)]^{2}\rangle$ for the harmonic chain can be understood
in detail by analyzing the different time regimes of Eq. (6) .
There are three regimes of $t$ to consider:
(i) When $\omega_{N}t<<1,$ $\sin^{2}(\omega_{n}t/2)\approx\omega_{n}^{2}t^{2}/4.$ (We use $\omega_{n}$ to denote the normal mode frequencies in Eq.(4), but with $p=n\pi/[(N+1)a].$)
The right hand side of Eq. (6) is then equal
to $k_{B}Tt^{2}/m$. This approximation is valid as long as $\omega_{N}t\approx 2ct/a$ is small.
(ii) In the second regime, $\omega_{N}t>>1>>\omega_{1}t,$ and the
sum can be replaced by an integral:
$$\frac{4k_{B}T}{m(N+1)}\int_{1}^{N}dn\,\frac{\sin^{2}(\omega_{n}t/2)}{\omega_{n}^{2}}\approx\frac{2k_{B}Tat}{\pi mc}\int_{0}^{\infty}dy\,\frac{\sin^{2}(y)}{y^{2}\sqrt{1-(ay/ct)^{2}}}$$
(7)
where we have changed variables from $n$ to $y=\omega_{n}t/2$ and used $\omega_{N}t>>1>>\omega_{1}t$
to change the limits of the integral. Expanding $[1-(ay/ct)^{2}]^{-1/2}$ in a binomial series and keeping
only the first term of the expansion, we get the leading
order behaviour in $t$ as $\langle[\Delta q(t)]^{2}\rangle\approx k_{B}Tt/(\rho c)$. The linear $t$-dependence implies diffusive behaviour
with a diffusion constant $D=k_{B}T/(2\rho c)$.
The velocity auto-correlation
function in this regime can be obtained by differentiating the first expression in Eq.(7),
as in Eqs.(1):
$$\langle v(t)v(0)\rangle=\frac{k_{B}T}{m(N+1)}\int_{1}^{N}dn\,\cos(\omega_{n}t).$$
(8)
Substituting $z=\omega_{n}/(2c/a)=\sin(n\pi/[2(N+1)])$, this is equivalent to
$$\langle v(t)v(0)\rangle=\frac{2k_{B}T}{\pi m}\int_{0}^{1}dz\,\frac{\cos(2ctz/a)}{\sqrt{1-z^{2}}}=\frac{k_{B}T}{m}J_{0}(2ct/a)\sim\frac{\cos(2ct/a-\pi/4)}{\sqrt{t}}\quad{\rm for}~t\rightarrow\infty,$$
(9)
using the asymptotic properties of Bessel functions [mazur60].
This is shown in the lower panels of Fig. 1.
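The agreement between the finite-$N$ mode sum and the Bessel-function form in Eq. (9) is easy to check numerically. The sketch below is our own (again with $k=m=a=k_{B}T=1$); $J_{0}$ is evaluated via its integral representation $J_{0}(x)=\tfrac{1}{\pi}\int_{0}^{\pi}\cos(x\sin\theta)\,d\theta$ to keep the code dependency-free:

```python
import numpy as np

def vaf_sum(t, N=2001, k=1.0, m=1.0, a=1.0, kBT=1.0):
    """VAF of the middle particle from the normal-mode sum
    (obtained by applying Eqs. (1) to Eq. (6))."""
    n = np.arange(1, N + 1, 2)
    omega = 2.0 * np.sqrt(k / m) * np.sin(n * np.pi / (2 * (N + 1)))
    return 2.0 * kBT / (m * (N + 1)) * np.sum(np.cos(omega * t))

def j0(x):
    """Bessel J0 via its integral representation (trapezoid rule)."""
    theta = np.linspace(0.0, np.pi, 20001)
    f = np.cos(x * np.sin(theta))
    return (f.sum() - 0.5 * (f[0] + f[-1])) * (theta[1] - theta[0]) / np.pi

t = 5.0  # c = a*sqrt(k/m) = 1, so the Bessel argument 2ct/a = 10
print(vaf_sum(t), j0(2 * t))  # both ≈ J0(10) ≈ -0.246
```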
(iii) In the third regime, $\omega_{1}t$ is no longer small. As a first
approximation, we set $\omega_{n}$ to be equal to $cn\pi/[(N+1)a]\approx cn\pi/Na.$ Then
Eq. (6) becomes
$$\langle[\Delta q(t)]^{2}\rangle=\frac{8k_{B}TNa}{\rho c^{2}\pi^{2}}\sum_{n=1,3,5\ldots}\frac{1}{n^{2}}\sin^{2}\left(\frac{cn\pi t}{2Na}\right).$$
(10)
This is a periodic function in $t$ with a period of $2Na/c.$
Since the sum is dominated by $n<<N,$ we see that our first approximation is a reasonable one.
More accurately, we expand $\omega_{n}$ to one order higher:
$$\omega_{n}=\frac{cn\pi}{Na}\left[1-\frac{n^{2}\pi^{2}}{24N^{2}}+\cdots\right]$$
(11)
and evaluate the sum at $t=2jNa/c.$ We have
$$\langle[\Delta q(t)]^{2}\rangle=\frac{8k_{B}TNa}{\rho c^{2}\pi^{2}}\sum_{n=1,3,\ldots}\frac{1}{n^{2}[1-O(n/N)^{2}]}\sin^{2}\bigl(jn^{3}\pi^{3}/(24N^{2})\bigr).$$
(12)
Approximating the sum by an integral and changing
variables to $y=nj^{1/3}/N^{2/3}$, we get
$$\langle[\Delta q(t)]^{2}\rangle=\frac{8k_{B}T(Nj)^{1/3}a}{\rho c^{2}\pi^{2}}\int_{O(N^{-2/3})}^{O(N^{1/3})}\frac{\sin^{2}(y^{3}\pi^{3}/24)}{y^{2}[1-O(y^{2}/(jN)^{2/3})]}\,dy.$$
(13)
As $N\rightarrow\infty,$ the integral converges to an $N$-independent
value of $0.8046\ldots,$ so that the function is $O((Nj)^{1/3}).$
We note that this is small compared to the
$O(N)$ value of the function at its maxima, but that it increases
steadily with $j,$ as expected in a dispersive system. In a more careful analysis, the locations of
the minima are taken to be $2jNa/c+\delta_{j}$ and the $\delta_{j}$'s evaluated to
leading order, but this does not change the fact that the minima are $O((jN)^{1/3}).$ A
similar analysis shows that the function at its
maxima is equal to $k_{B}TNa/(\rho c^{2})-O((jN)^{1/3}).$
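The numerical constant $0.8046\ldots$ quoted above can be reproduced directly (our own check). After the substitution $u=\pi^{3}y^{3}/24$ the integral becomes $\tfrac{1}{3}(\pi^{3}/24)^{1/3}\int_{0}^{\infty}\sin^{2}(u)\,u^{-4/3}\,du$, which is easier to evaluate numerically because the envelope of the integrand decays monotonically:

```python
import numpy as np

# I = ∫_0^∞ sin²(π³y³/24)/y² dy, rewritten via u = π³y³/24 as
# I = (A^(1/3)/3) ∫_0^∞ sin²(u) u^(-4/3) du  with  A = π³/24.
A = np.pi**3 / 24.0

U = 4000.0                       # finite cutoff for the numerical part
du = 1e-3
u = np.arange(du, U, du)
f = np.sin(u) ** 2 / u ** (4.0 / 3.0)
J = (f.sum() - 0.5 * (f[0] + f[-1])) * du   # trapezoid rule on [du, U]
# Tail: sin² averages to 1/2, so ∫_U^∞ ≈ (1/2)∫_U^∞ u^(-4/3) du = (3/2) U^(-1/3)
J += 1.5 * U ** (-1.0 / 3.0)

I = A ** (1.0 / 3.0) / 3.0 * J
print(I)  # ≈ 0.8046
```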
3 Fermi-Pasta-Ulam chain
We now turn to a numerical study of tagged particle motion in a
chain of nonlinear oscillators. The Hamiltonian of the Fermi-Pasta-Ulam
(FPU) chain we study is taken to be:
$$H=\sum_{l=1}^{N}\frac{m}{2}\dot{q}_{l}^{2}+\sum_{l=1}^{N+1}\left[\frac{k}{2}(q_{l}-q_{l-1})^{2}+\frac{\nu}{4}(q_{l}-q_{l-1})^{4}\right],$$
(14)
where the $q_{l}$’s are the displacements from equilibrium positions. We fix
the particles at the boundaries by setting $q_{0}=0$ and $q_{N+1}=0$.
The corresponding equations of motion are:
$$m\ddot{q}_{l}=-k(2q_{l}-q_{l+1}-q_{l-1})-\nu(q_{l}-q_{l-1})^{3}-\nu(q_{l}-q_{l+1})^{3}.$$
(15)
In this case there is no analytic solution of the equations of
motion and we evaluate $\langle[\Delta q(t)]^{2}\rangle$ and other
correlation functions for the central tagged particle through direct
MD simulations. We prepare the initial thermal equilibrium state
by connecting all $N$ particles to Langevin-type heat baths and
evolving the system for some time. The heat baths are then removed
and, starting from the thermal initial condition, the system is
evolved with the dynamics of Eq. (15). The average $\langle...\rangle$ is obtained by creating a large number of independent thermal
initial conditions.
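A minimal version of such a microcanonical run is sketched below. This is our own illustration, not the authors' production code; the parameters $k=\nu=m=1$, the small system size, and the initial amplitudes are arbitrary. It integrates Eq. (15) with the velocity-Verlet scheme and checks energy conservation, which is the basic correctness test for this kind of simulation:

```python
import numpy as np

def forces(q, k=1.0, nu=1.0):
    """Forces from Eq. (15) with fixed ends q_0 = q_{N+1} = 0."""
    qe = np.concatenate(([0.0], q, [0.0]))   # embed fixed boundaries
    dl = qe[1:-1] - qe[:-2]                  # q_l - q_{l-1}
    dr = qe[1:-1] - qe[2:]                   # q_l - q_{l+1}
    return -k * (dl + dr) - nu * (dl**3 + dr**3)

def energy(q, v, k=1.0, nu=1.0, m=1.0):
    """Total energy, Eq. (14)."""
    d = np.diff(np.concatenate(([0.0], q, [0.0])))
    return 0.5 * m * v @ v + np.sum(0.5 * k * d**2 + 0.25 * nu * d**4)

rng = np.random.default_rng(1)
N = 33
q = 0.1 * rng.standard_normal(N)
v = 0.1 * rng.standard_normal(N)

dt, steps = 0.01, 5000
E0 = energy(q, v)
f = forces(q)
for _ in range(steps):                       # velocity Verlet (m = 1)
    v += 0.5 * dt * f
    q += dt * v
    f = forces(q)
    v += 0.5 * dt * f

print(abs(energy(q, v) - E0) / abs(E0))  # relative energy drift; should be small for dt = 0.01
```

In the actual procedure described above, the thermal initial condition would come from a Langevin run rather than the ad hoc random amplitudes used here.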
The simulation results are plotted in Fig. (3) and Fig. (4).
Comparing to the analogous figures for the harmonic chain, we see that the plots are qualitatively similar.
At short times, a regime independent of boundary and hence system size (Fig. (3)),
there is a crossover from ballistic ($\langle\Delta q^{2}(t)\rangle\sim t^{2}$) to diffusive
($\langle\Delta q^{2}(t)\rangle\sim t$) behaviour. The velocity correlation
function $\langle v(0)v(t)\rangle$ shows damped oscillatory behavior,
although the damping is much faster than for the harmonic chain.
At long times (Fig. (4)), we can see $\langle\Delta q^{2}(t)\rangle$ converging to the expected
equilibrium values for small system sizes, unlike for the harmonic chain. The scaling with $N$ is also not as good.
As a simplified model of this system, we assume that the nonlinear
terms in Eq. (15), which couple the normal modes of the
linearized system, can be replaced by momentum-conserving damping and noise terms: for
any given mode, all the other modes act as a heat bath. The resultant equation
of motion is
$$m\ddot{q}_{l}=-k(2q_{l}-q_{l+1}-q_{l-1})-\gamma(2\dot{q}_{l}-\dot{q}_{l+1}-\dot{q}_{l-1})+(2\xi_{l}-\xi_{l+1}-\xi_{l-1})$$
(16)
with fixed boundary conditions $q_{0}=0$ and $q_{N+1}=0$. As for
the harmonic oscillator, we transform to normal mode coordinates defined
in Eq.(3), with $\xi_{l}(t)=\sum_{p}\tilde{\xi}_{p}(t)\phi_{p}(l).$
The normal mode coordinates $a_{p}(t)$ now satisfy the equation of motion
$$\ddot{a}_{p}(t)+\omega_{p}^{2}a_{p}(t)=-\frac{\gamma}{k}\omega_{p}^{2}\dot{a}_{p}(t)+\frac{\omega_{p}^{2}}{k}\tilde{\xi}_{p}(t)\,,$$
where $\omega_{p}^{2}=\frac{2k}{m}(1-\cos pa)$. To ensure equilibration of the modes
we choose Gaussian noise with zero mean and two point correlations
given by
$$\langle\tilde{\xi}_{p}(t)\,\tilde{\xi}_{p^{\prime}}(t^{\prime})\rangle=\frac{2k\gamma k_{B}T}{m\omega_{p}^{2}}\,\delta(t-t^{\prime})\,\delta_{p,p^{\prime}}.$$
In steady state, $a_{p}(t)=\omega_{p}^{2}\int_{-\infty}^{t}G(t-t^{\prime})\tilde{\xi}_{p}(t^{\prime})\,dt^{\prime},$ where $G(t-t^{\prime})$ is
the Green’s function for the equation of motion.
After some straightforward computations we finally get
$$\langle q_{l}(t)q_{l}(0)\rangle=k_{B}T\sum_{p}\frac{\phi_{p}^{2}(l)}{c^{2}\lambda_{p}}e^{-\alpha_{p}t}\left[\cos(\beta_{p}t)+\frac{\alpha_{p}}{\beta_{p}}\sin(\beta_{p}t)\right],$$
$$\langle q_{l}(t)v_{l}(0)\rangle=k_{B}T\sum_{p}\frac{\phi_{p}^{2}(l)}{\beta_{p}}e^{-\alpha_{p}t}\sin(\beta_{p}t)\,,$$
$$\langle v_{l}(t)v_{l}(0)\rangle=k_{B}T\sum_{p}\phi_{p}^{2}(l)e^{-\alpha_{p}t}\left[\cos(\beta_{p}t)-\frac{\alpha_{p}}{\beta_{p}}\sin(\beta_{p}t)\right],$$
$${\rm where}\quad\alpha_{p}=\frac{\gamma\omega_{p}^{2}}{2k}\,,\qquad\beta_{p}=(\omega^{2}_{p}-\alpha_{p}^{2})^{1/2}\,.$$
Taking $N$ to be odd, we get for the middle particle $M=(N+1)/2$
$$\langle q(t)q(0)\rangle=\frac{2k_{B}T}{m(N+1)}\sum_{n=1,3,\ldots}e^{-\alpha_{p}t}\left[\cos(\beta_{p}t)+\frac{\alpha_{p}}{\beta_{p}}\sin(\beta_{p}t)\right],$$
$$\langle q(t)v(0)\rangle=\frac{2k_{B}T}{m(N+1)}\sum_{n=1,3,\ldots}\frac{1}{\beta_{p}}e^{-\alpha_{p}t}\sin(\beta_{p}t)\,,$$
$$\langle v(t)v(0)\rangle=\frac{2k_{B}T}{m(N+1)}\sum_{n=1,3,\ldots}e^{-\alpha_{p}t}\left[\cos(\beta_{p}t)-\frac{\alpha_{p}}{\beta_{p}}\sin(\beta_{p}t)\right].$$
In the limit $N\to\infty$ we get
$$\langle q(t)q(0)\rangle=\frac{k_{B}Ta}{m\pi}\int_{0}^{\pi/a}dp\,e^{-\alpha_{p}t}\left[\cos(\beta_{p}t)+\frac{\alpha_{p}}{\beta_{p}}\sin(\beta_{p}t)\right],$$
$$\langle q(t)v(0)\rangle=\frac{k_{B}Ta}{m\pi}\int_{0}^{\pi/a}dp\,\frac{1}{\beta_{p}}e^{-\alpha_{p}t}\sin(\beta_{p}t)\,,$$
$$\langle v(t)v(0)\rangle=\frac{k_{B}Ta}{m\pi}\int_{0}^{\pi/a}dp\,e^{-\alpha_{p}t}\left[\cos(\beta_{p}t)-\frac{\alpha_{p}}{\beta_{p}}\sin(\beta_{p}t)\right].$$
The diffusion constant is obtained as
$$\lim_{t\to\infty}\langle q(t)v(0)\rangle=\frac{k_{B}Ta}{m\pi c}\int_{0}^{\infty}dx\,\frac{\sin(x)}{x}=\frac{k_{B}T}{2\rho c}$$
(17)
where, as before, $\rho$ is the mass per unit length and $c$ is the
speed of sound [spohn13].
Our linearized hydrodynamic theory for sound waves can be thought of as a special case of the hydrodynamic equations discussed in [spohn13], where we neglect the coupling with the heat mode. We can then use the expression for the sound speed given in that paper for the FPU chain.
For the symmetric (even-potential) FPU chain this gives $c=[k_{B}T/(m\langle q^{2}\rangle)]^{1/2}$.
With our simulation parameters, we get from Eq. (17), $D_{FPU}=0.342\ldots$, in excellent agreement with the simulation results [Fig. (3)].
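Within the damped-mode model, the long-time limit in Eq. (17) can be checked by evaluating the $N\to\infty$ integral for $\langle q(t)v(0)\rangle$ at a large but finite time. The sketch below is our own numerical check, with $k=m=a=k_{B}T=1$ (so $c=\rho=1$ and the predicted $D=0.5$) and an arbitrary damping $\gamma=0.5$:

```python
import numpy as np

k = m = a = kBT = 1.0
gamma = 0.5
c, rho = a * np.sqrt(k / m), m / a

# Fine p-grid: at t = 200 the integrand oscillates with period ~2π/(ct) in p.
p = np.linspace(1e-7, np.pi / a, 300_001)
omega2 = 2.0 * k / m * (1.0 - np.cos(p * a))
alpha = gamma * omega2 / (2.0 * k)
beta = np.sqrt(omega2 - alpha**2)

def qv(t):
    """<q(t) v(0)> from the N→∞ integral of the damped-mode model."""
    f = np.exp(-alpha * t) * np.sin(beta * t) / beta
    dp = p[1] - p[0]
    return kBT * a / (m * np.pi) * (f.sum() - 0.5 * (f[0] + f[-1])) * dp

D_pred = kBT / (2.0 * rho * c)
print(qv(200.0), D_pred)  # both ≈ 0.5
```

Only the weakly damped long-wavelength modes survive at late times, and their contribution reproduces $k_{B}T/(2\rho c)$, as in the undamped calculation.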
In Fig. (5) and Fig. (6) we compare the predictions
of the hydrodynamic model with the simulation results of the FPU chain of Fig. (3) of size $N=65$.
The constant $k$ is obtained from the $c$ above and $\gamma$ is the only fitting
parameter used. This model seems to provide a good description of tagged particle diffusion in this system. We have also studied the
asymmetric FPU chain, where we find less agreement with the hydrodynamic model.
4 Lennard-Jones gas
The mean square displacement $\langle\Delta q^{2}(t)\rangle$ for the
FPU (and harmonic) chain seems to have a similar dependence on $t$
as for a hard particle gas [roy13], with an initial $\sim t^{2}$
increase crossing over to a $\sim t$ dependence. However, derivatives
of this correlation function, $\langle\Delta q(t)v(0)\rangle$ and
$\langle v(0)v(t)\rangle$ show differences. For the FPU chain, $\langle\Delta q(t)v(0)\rangle$ approaches a constant rapidly. Although this is less
rapid for a harmonic chain, it is nevertheless clear that the large
$t$ limit is a constant. On the other hand, for the alternate mass hard
particle gas, $\langle\Delta x(t)v(0)\rangle$ decreases slowly as $t$ increases,
levelling off only at very long times [roy13]; see also Sec. (5).
Turning to the velocity auto-correlation function, for the FPU chain this
has a damped oscillatory behaviour, while there are no — or overdamped —
oscillations in the hard particle velocity auto-correlation function.
The hard particle gas may be considered as an extreme case of a nonlinear
oscillator chain, but it is a singular limit of this family. To see if
the differences between the correlation functions for the two cases
are significant, we study the Lennard-Jones gas.
The Hamiltonian of the Lennard-Jones gas is taken to be
$$H=\sum_{l=1}^{N}\frac{m}{2}\dot{x}_{l}^{2}+\sum_{l=1}^{N+1}\left[\frac{1}{(x_{l}-x_{l-1})^{12}}-\frac{1}{(x_{l}-x_{l-1})^{6}}\right]$$
(18)
where the $x_{l}$'s are the positions of the particles. At low densities,
one would expect the particles to behave approximately like free
particles, with a repulsive force between neighbouring particles
when they come close to each other. Since the repulsion occurs over
a distance that is small compared to the mean inter-particle separation,
the system is similar to a hard particle gas. On the other hand,
at high densities, the particles should remain close to their
equilibrium positions with small deviations, resulting in behaviour
more like a FPU chain.
As for the FPU
chain we evaluate the correlation functions of the central particle
from molecular dynamics simulations. The particles are inside a
box of length $L$ and we fix particles at the boundaries by setting
$x_{0}=0$ and $x_{N+1}=L$. The mean inter-particle spacing is thus
$a=L/(N+1)$. The simulation results are given in Fig. (7) and
Fig. (8) for short times and Fig. (9) and Fig. (10)
for long times. In these simulations we have taken $k_{B}T=1$.
As expected, we observe in Fig. (7) and
Fig. (9) that at high density the behaviour
is similar to that of the FPU chain. At low densities, Fig. (8) and Fig. (10),
the behaviour resembles that of the hard particle gas. The figures correspond
to $a=1.0$ and $a=3.0$ respectively.
Hard particle like behaviour was also seen at $a=5.0$.
Though the hydrodynamic model used for the FPU chain works decently for the
correlation functions of the high-density Lennard-Jones gas (Fig. (11)),
it does not work well in the low-density regime, where the gas behaves like a hard particle gas (figure not shown).
This is because the model still has inherent oscillations in the velocity auto-correlation function which are
absent for a hard particle gas.
5 Hard particle gas
Finally, we present simulation results for the alternate mass hard
particle gas that extend our results in Ref. [roy13]. In that
paper, results for various tagged particle correlation functions in the hard particle gas (equal and alternate mass cases) were presented.
Here, we limit ourselves to a more extensive study of $D(t)$ for the
alternate mass hard particle gas. In
Ref. [roy13] we studied the hard particle gas for system sizes up to 801;
the slow decay of $\langle\Delta x(t)v(0)\rangle$ as a function of time
led us to conclude that the system
is subdiffusive, with $D(t)\sim a/(b+\ln t).$ Here we present
simulation results for much larger system sizes in Fig. (12).
We find a deviation at long times from our earlier conclusion, and
see that the numerical results are consistent with the prediction
from mode-coupling theory [beijeren], $\langle\Delta x(t)v(0)\rangle=0.2887+0.39\,t^{-2/5}$. In particular, this indicates
that the system is diffusive, contrary to our earlier conclusion.
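The asymptotic form used above can be extracted from $D(t)$ data by a simple linear least-squares fit in the basis $\{1,\,t^{-2/5}\}$. The sketch below is our own illustration on synthetic data (the noise level and time window are arbitrary); the fitted intercept is the estimate of the diffusion constant $D(t\to\infty)$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic D(t) obeying the mode-coupling form D_inf + b*t^(-2/5), plus noise.
t = np.logspace(1, 4, 60)
D_inf_true, b_true = 0.2887, 0.39
D = D_inf_true + b_true * t ** (-0.4) + 0.001 * rng.standard_normal(t.size)

# Linear least squares in the basis {1, t^(-2/5)}.
A = np.column_stack([np.ones_like(t), t ** (-0.4)])
(D_inf, b), *_ = np.linalg.lstsq(A, D, rcond=None)
print(D_inf, b)  # ≈ 0.2887 and 0.39: the intercept recovers D(t→∞)
```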
6 Discussion
We see that both harmonic and anharmonic chains (the FPU chain and the high-density
L-J gas) have an eventual diffusive regime, after which the
effect of the boundary sets in (at times $t\approx L/c$) and size-dependent oscillations appear.
These oscillations may be attributed to sound waves travelling in
the system. While they persist for a long time for harmonic chains,
they die off quickly for anharmonic chains. At lower density the
L-J gas behaves like the hard particle gas. We confirm this by
studying both equal and alternate mass L-J gas and see similar
behaviour as in roy13 . Note that the transition of L-J gas
from anharmonic chain like behaviour to hard particle like behaviour
should be continuous. We have confirmed this by studying the equal and
alternate mass cases for the L-J gas at different densities along
with the equal and alternate mass cases for the hard particle gas.
While for the harmonic chain the long-time diffusion
constant is the same in the equal and alternate mass cases, a
difference slowly builds up for the FPU chain and the high-density
L-J gas. The difference gradually widens with decreasing
density, and at low densities we see behaviour like that of Fig. (8).
The eventual diffusive behaviour of alternate mass hard particle
gas is an important result. This along with the diffusive behaviour
observed in various other one-dimensional systems studied by us
demonstrates a point made by Alexander and Pincus [pincus78]
that in one-dimensional systems where particle crossings cannot
take place, and individual particle dynamics is ballistic,
the long time dynamics of the tagged particle will be normal-diffusive.
In summary, we have studied tagged particle diffusion in various
one-dimensional Hamiltonian systems, with the following results:
1.
There is a
“short time” regime during which the tagged particle at the centre
does not feel the effect of the boundary and during this time, the
correlation functions have the same behaviour as for an infinite
system. The tagged particle motion is always diffusive with diffusion constant given by $D=k_{B}T/(2\rho c)$. In this regime
(a)
For the harmonic chain, the VAF $\sim\cos(\omega t)/\sqrt{t}$.
(b)
For the FPU chain, the VAF has a faster decay $\sim\exp(-at)\sin(\omega t)$.
(c)
The equal mass Lennard-Jones gas, at high density, behaves like the FPU chain. At
low density it behaves like the equal mass hard-particle gas with
a negative VAF $\sim-1/t^{3}$.
(d)
The alternate mass Lennard-Jones gas at low density is also similar
to the alternate mass hard-particle gas with the VAF apparently changing
from $\sim 1/t^{3}$ to $\sim 1/t^{7/5}$.
(e)
The alternate mass hard particle gas also
shows normal diffusive behaviour, but to see this asymptotic behaviour requires one to study very large system sizes. The VAF $\sim 1/t^{7/5}$.
2.
At long times the effect of the boundary sets in, and size-dependent
oscillations, damped for anharmonic chains, appear. The time at
which system size effects start showing up is $\approx L/c$, where $c$ is the velocity of sound in the medium.
References
(1)
A. Roy, O. Narayan, A. Dhar and S. Sabhapandit, J. Stat. Phys. 150, 851 (2013).
(2)
D. W. Jepsen, J. Math. Phys. 6, 405 (1965).
(3)
J. L. Lebowitz and J. K. Percus, Phys. Rev. 155, 122 (1967).
(4)
J. L. Lebowitz and J. Sykes, J. Stat. Phys. 6, 157 (1972).
(5)
J. K. Percus, J. Stat. Phys. 138, 40 (2010).
(6)
J. W. Evans, Physica 95A, 225 (1979).
(7)
P. Kasperkovitz and J. Reisenberger, Phys. Rev. A 31, 2639 (1985).
(8)
J. Marro and J. Masoliver, Phys. Rev. Lett. 54, 731 (1985).
(9)
P. Mazur and E. Montroll, J. Math. Phys. 1, 70 (1960).
(10)
M. Bishop, M. Derosa, and J. Lalli, J. Stat. Phys. 25, 229 (1981).
(11)
G. Srinivas and B. Bagchi, J. Chem. Phys. 112, 7557 (2000); S. Pal, G. Srinivas, S. Bhattacharyya and B. Bagchi, J. Chem. Phys. 116, 5941 (2002).
(12)
T. E. Harris, J. Appl. Probab. 2, 323 (1965).
(13)
H. van Beijeren, K. W. Kehr, and R. Kutner, Phys. Rev. B 28, 5711 (1983).
(14)
S. Alexander and P. Pincus, Phys. Rev. B 18, 2011 (1978).
(15)
H. van Beijeren, J. Stat. Phys. 63, 47 (1991).
(16)
M. Kollmann, Phys. Rev. Lett. 90, 180602 (2003).
(17)
L. Lizana and T. Ambjörnsson, Phys. Rev. Lett. 100, 200601 (2008); Phys. Rev. E 80, 051103 (2009).
(18)
S. Gupta, S. N. Majumdar, C. Godrèche and M. Barma, Phys. Rev. E 76, 021112 (2007).
(19)
E. Barkai and R. Silbey, Phys. Rev. Lett. 102, 050602 (2009).
(20)
E. Barkai and R. Silbey, Phys. Rev. E 81, 041129 (2010).
(21)
K. Hahn, J. Kärger, and V. Kukla, Phys. Rev. Lett. 76, 2762 (1996).
(22)
H. Wei, C. Bechinger, and P. Leiderer, Science 287,
625 (2000).
(23)
C. Lutz, M. Kollmann and C. Bechinger, Phys. Rev. Lett.
93, 026001 (2004).
(24)
O. Narayan and S. Ramaswamy, Phys. Rev. Lett. 89,
200601 (2002); H. van Beijeren, Phys. Rev. Lett. 108, 180601 (2012);
A. Dhar, Adv. Phys. 57, 457 (2008).
P. I. Hurtado, Phys. Rev. Lett. 96, 010601 (2006).
(25)
H. Spohn, J. Stat. Phys. 154, 1191 (2014).
(26)
H. van Beijeren, unpublished work.
Linear free divisors and the
global logarithmic comparison theorem
\firstnameMichel \lastnameGranger
Departement de Mathématiques
Université d’Angers
2 Bd Lavoisier
49045 Angers
France
[email protected]
,
\firstnameDavid \lastnameMond
Mathematics Institute
University of Warwick
Coventry CV4 7AL
England
[email protected]
,
\firstnameAlicia \lastnameNieto-Reyes
Departamento de Matematicas,
Estadistica y Computacion
Universidad de Cantabria
Spain
[email protected]
and
\firstnameMathias \lastnameSchulze
Department of Mathematics
Oklahoma State University
Stillwater, OK 74078
United States
[email protected]
(Date: January 16, 2008)
Abstract.
A complex hypersurface $D$ in $\mathds{C}^{n}$ is a linear free divisor (LFD) if its module of logarithmic vector fields has a global basis of linear vector fields. We classify all LFDs for $n$ at most $4$.
By analogy with Grothendieck’s comparison theorem, we say that the global logarithmic comparison theorem (GLCT) holds for $D$ if the complex of global logarithmic differential forms computes the complex cohomology of $\mathds{C}^{n}\setminus D$.
We develop a general criterion for the GLCT for LFDs and prove that it is fulfilled whenever the Lie algebra of linear logarithmic vector fields is reductive.
For $n$ at most $4$, we show that the GLCT holds for all LFDs.
We show that LFDs arising naturally as discriminants in quiver representation spaces (of real Schur roots) fulfill the GLCT. As a by-product we obtain a topological proof of a theorem of V. Kac on the number of irreducible components of such discriminants.
Key words and phrases:free divisor, prehomogeneous vector space, de Rham cohomology, logarithmic comparison theorem, Lie algebra cohomology, quiver representation
1991 Mathematics Subject Classification: 32S20, 14F40, 20G10, 17B66
DM is grateful to Ignacio de Gregorio for helpful conversations on the topics treated here.
MS gratefully acknowledges financial support from EGIDE and the Humboldt Foundation.
We are grateful to the referee for a very careful reading and many valuable suggestions.
\alttitleDiviseurs linéairement libres
et le théorème de
comparaison logarithmique global
\altkeywordsdiviseur linéairement libre, espace vectoriel préhomogène, cohomologie de de Rham, théorème de comparaison logarithmique, cohomologie des algèbres de Lie, représentations de carquois
{altabstract}
Une hypersurface complexe de $\mathds{C}^{n}$ est appelée un diviseur linéairement libre (ou DLL) si son module de champs de vecteur logarithmiques a une base globale formée de champs de vecteurs linéaires.
Nous classifions tous les DLL pour $n$ au plus égal à $4$.
Par analogie avec le théorème de comparaison de Grothendieck, on dit que le théorème de comparaison logarithmique global (ou TCLG) est vrai pour $D$ si le complexe des formes différentielles logarithmiques globales permet de calculer la
cohomologie de $\mathds{C}^{n}\setminus D$ à coefficients complexes.
Nous mettons en évidence un critère général pour qu’un DLL ait la propriété TCLG, et nous démontrons que ce critère s’applique lorsque l’algèbre de Lie des champs de vecteurs logarithmiques linéaires est réductive.
Pour $n$ inférieur ou égal à $4$, nous montrons que le TCLG est vrai pour tous les DLL.
Nous montrons que les DLL qui apparaissent naturellement comme discriminants dans les espaces de représentations de carquois pour des racines de Schur réelles satisfont au TCLG.
Comme corollaire nous obtenons une démonstration topologique d’un résultat de V. Kac sur le nombre de composantes irréductibles de tels discriminants.
1. Introduction
We denote by $\mathscr{O}=\mathscr{O}_{\mathds{C}^{n}}$ the sheaf of holomorphic functions on $\mathds{C}^{n}$, by $\mathfrak{m}_{p}\subseteq\mathscr{O}_{p}$ the maximal ideal at $p\in\mathds{C}^{n}$, by $\operatorname{Der}=\operatorname{Der}_{\mathds{C}}(\mathscr{O})$ the sheaf of $\mathds{C}$-linear derivations of $\mathscr{O}$ (or holomorphic vector fields) on $\mathds{C}^{n}$, and by $\Omega^{\bullet}=\Omega^{\bullet}_{\mathds{C}^{n}}$ the complex of sheaves of holomorphic differential forms.
We shall frequently use a local or global coordinate system $x=x_{1},\dots,x_{n}$ on $\mathds{C}^{n}$ and then denote by $\partial=\partial_{1},\dots,\partial_{n}$ the corresponding operators of partial derivatives $\partial_{i}=\frac{\partial}{\partial x_{i}}$, $i=1,\dots,n$.
Note that $\operatorname{Der}=\bigoplus\mathscr{O}\cdot\partial_{i}$ is a free $\mathscr{O}$-module of rank $n$.
Let $D\subseteq\mathds{C}^{n}$ be a reduced divisor.
K. Saito [Sai80] associated to $D$ the (coherent) sheaf of logarithmic vector fields $\operatorname{Der}(-\log D)\subseteq\operatorname{Der}$ and the complex of (coherent) sheaves $\Omega^{\bullet}(\log D)\subseteq\Omega^{\bullet}(*D)$ of logarithmic differential forms along $D$.
For a (local or global) defining equation $\Delta\in\mathscr{O}$ of the germ $D$, $\delta\in\operatorname{Der}$ is in $\operatorname{Der}(-\log D)$ if $\delta(\Delta)\in\mathscr{O}\cdot\Delta$, and $\omega\in\Omega^{\bullet}[\Delta^{-1}]$ is in $\Omega^{\bullet}(\log D)$ if $\Delta\cdot\omega,\Delta\cdot d\omega\in\Omega^{\bullet}$.
Note that $\operatorname{Der}(-\log D)$ contains the annihilator $\operatorname{Der}(-\log\Delta)$ of $\Delta$ defined by the condition $\delta(\Delta)=0$.
Saito showed that $\operatorname{Der}(-\log D)$ and $\Omega^{1}(\log D)$ are reflexive and mutually dual and introduced the following important class of divisors.
{defi}
A divisor $D$ is called free if $\operatorname{Der}(-\log D)$, or equivalently $\Omega^{1}(\log D)$, is a locally free $\mathscr{O}$-module, necessarily of rank $n$.
We will be concerned in this article with the following subclass of divisors.
{defi}
A free divisor $D$ is called linear if $\Gamma(\mathds{C}^{n},\operatorname{Der}(-\log D))$ admits a basis $\delta_{1},\dots,\delta_{n}$ such that each $\delta_{i}$ has linear coefficients with respect to the $\mathscr{O}$-basis $\partial_{1},\dots,\partial_{n}$ of $\operatorname{Der}$ or equivalently each $\delta_{i}$ is homogeneous of degree zero with respect to the standard degree defined by $\deg x_{i}=1=-\deg\partial_{i}$ on the variables and generators of $\operatorname{Der}$.
Saito’s criterion [Sai80, Thm. 1.8.(ii)] implies the following fundamental observation.
{lemm}
If $\delta_{1},\dots,\delta_{n}$ is a basis of $\Gamma(\mathds{C}^{n},\operatorname{Der}(-\log D))$ for a linear free divisor $D$, then the homogeneous polynomial $\Delta=\det((\delta_{i}(x_{j}))_{i,j})\in\mathds{C}[x]$ of degree $n$ is a global defining equation for $D$.
Note that because $\operatorname{Der}(-\log D)$ can have no members of negative degree, $D$ cannot be isomorphic to the product of $\mathds{C}$ with a lower dimensional divisor.
It turns out that linear free divisors are relatively abundant; the authors believe that in the current paper and in [BM06], recipes are given which allow the straightforward construction of more free divisors than have been described in the sum of all previous papers.
{exems}
(1)
The normal crossing divisor $D=\{x_{1}\cdots x_{n}=0\}\subseteq\mathds{C}^{n}$ is a linear free divisor where
$$x_{1}\partial_{1},\dots,x_{n}\partial_{n}$$
is a basis of $\operatorname{Der}(-\log D)$.
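Saito's criterion (the lemma above) can be checked directly in this case: since $\delta_{i}=x_{i}\partial_{i}$, the coefficient matrix $(\delta_{i}(x_{j}))_{i,j}$ is diagonal, and

```latex
\Delta \;=\; \det\bigl(\delta_{i}(x_{j})\bigr)_{i,j}
       \;=\; \det\operatorname{diag}(x_{1},\dots,x_{n})
       \;=\; x_{1}\cdots x_{n}
```

is indeed a reduced defining equation for $D$, homogeneous of degree $n$.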
Up to isomorphism it is the only example among hyperplane arrangements, cf. [OT92, Ch. 4].
(2)
In the space $B_{2,3}$ of binary cubics, the discriminant $D$, which consists of binary cubics having a repeated root, is a linear free divisor.
For $f(u,v)=xu^{3}+yu^{2}v+zuv^{2}+wv^{3}$ has a repeated root if and only if its Jacobian ideal does not contain any power of the maximal ideal $(u,v)$, and this in turn holds if and only if the four cubics
$$u\partial_{u}f,v\partial_{u}f,u\partial_{v}f,v\partial_{v}f$$
are linearly dependent.
Writing the coefficients of these four cubics as the columns of the $4\times 4$ matrix
$$A:=\begin{pmatrix}3x&0&y&0\\
2y&3x&2z&y\\
z&2y&3w&2z\\
0&z&0&3w\end{pmatrix}$$
we conclude that $D$ has equation $\det A=0$.
After division by $3$ this determinant is
$$-y^{2}z^{2}+4wy^{3}+4xz^{3}-18wxyz+27w^{2}x^{2}.$$
In fact each of the columns of this matrix determines a vector field in $\operatorname{Der}(-\log D)$;
for the group $\operatorname{Gl}_{2}(\mathds{C})$ acts linearly on $B_{2,3}$ by composition on the right, and, up to a sign, the four columns here are the infinitesimal generators of this action corresponding to a basis of $\mathfrak{gl}_{2}(\mathds{C})$.
Each is tangent to $D$, since the action preserves $D$.
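To make one of these generators explicit (a direct computation, up to the sign noted above): the generator $u\partial_{u}\in\mathfrak{gl}_{2}(\mathds{C})$ applied to $f$ gives the first column of $A$,

```latex
u\,\partial_{u}f \;=\; 3xu^{3}+2yu^{2}v+zuv^{2}
\;\longleftrightarrow\;
\delta \;=\; 3x\,\partial_{x}+2y\,\partial_{y}+z\,\partial_{z}
\;\in\;\operatorname{Der}(-\log D),
```

so the column $(3x,2y,z,0)^{t}$ is the coefficient vector of a degree zero logarithmic vector field; the other three columns arise in the same way from $v\partial_{u}$, $u\partial_{v}$ and $v\partial_{v}$.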
Further examples of irreducible linear free divisors can be found (though not under this name) in the paper [SK77] of Sato and Kimura.
Besides our example, two more, of ambient dimensions $12$ and $40$, are described in [SK77, §5, Prop. 11, 15], and by repeated application of castling transformations, cf. [SK77, §2], it is possible to generate infinitely many more, of higher dimensions.
In Section 5 of this paper we describe a number of further examples of
linear free divisors, and in Section LABEL:48 we prove some results about linear bases for the module $\Gamma(\mathds{C}^{n},\operatorname{Der}(-\log D))$, and go on to classify all linear free divisors in dimension $n\leq 4$.
Linear free divisors provide a new insight into a conjecture of H. Terao [Ter78, Conj. 3.1] relating the cohomology of the complement of certain divisors $D$
to the cohomology of the complex $\Omega^{\bullet}(\log D)$ of forms with logarithmic poles along $D$.
For linear free divisors, the link between the complex $\Gamma(\mathds{C}^{n},\Omega^{\bullet}(\log D))$ and $H^{*}(\mathds{C}^{n}\smallsetminus D)$ can be understood as follows.
{defi}
For a linear free divisor $D$ defined by $\Delta\in\mathds{C}[x]$, we consider the subgroup
$$G_{D}:=\{A\in\operatorname{Gl}_{n}(\mathds{C})\mid A(D)=D\}=\{A\in%
\operatorname{Gl}_{n}(\mathds{C})\mid\Delta\circ A\in\mathds{C}\cdot\Delta\}$$
with identity component $G_{D}^{\circ}$ and Lie algebra $\mathfrak{g}_{D}$.
We call $D$ reductive if $G_{D}^{\circ}$, or equivalently $\mathfrak{g}_{D}$, is reductive.
It turns out that $\mathds{C}^{n}\smallsetminus D$ is a single orbit of $G^{\circ}_{D}$ with finite isotropy group, so $H^{*}(\mathds{C}^{n}\smallsetminus D;\mathds{C})$ is isomorphic to the cohomology of $G^{\circ}_{D}$; this is explained in Section 2.
Moreover, $H^{*}(\Gamma(\mathds{C}^{n},\Omega^{\bullet}(\log D)))$ coincides with the Lie algebra cohomology of $\mathfrak{g}_{D}$ with complex coefficients.
For compact connected Lie groups $G$, a well-known argument shows that the Lie algebra cohomology coincides with the topological cohomology of the group.
For linear free divisors the group $G^{\circ}_{D}$ is never compact, but the isomorphism also holds good for the larger class of reductive groups, and for a significant class of linear free divisors, $G^{\circ}_{D}$ is indeed reductive.
In Section 3 we prove our main result:
{theo}
If $D$ is a reductive linear free divisor then
(1)
$$H^{*}(\Gamma(\mathds{C}^{n},\Omega^{\bullet}(\log D)))\simeq H^{*}(\mathds{C}^%
{n}\smallsetminus D;\mathds{C}).$$
Among linear free divisors to which it applies are those arising as discriminants in representation spaces of quivers, as discussed in detail in [BM06] and briefly in Section 4 below.
Terao’s conjecture remains open, though it has been answered in the affirmative for a very large class of arrangements in [WY97], using a technique developed in [CMN96].
For general free divisors, a local result, from which the global isomorphism (1) follows, holds under the following additional hypothesis.
{defi}
A divisor $D$ is called quasihomogeneous at $p\in D$ if the germ $(D,p)$ admits a local defining equation $\Delta\in\mathscr{O}_{p}$ that is weighted homogeneous with respect to weights $w_{1},\dots,w_{n}\in\mathds{Q}_{+}$ in some local coordinate system $x_{1},\dots,x_{n}$ centred at $p$.
After dividing $w_{1},\dots,w_{n}$ by the weighted degree of $\Delta$, the preceding condition means that $\chi(\Delta)=\Delta$, where $\chi=\sum_{i=1}^{n}w_{i}x_{i}\partial_{i}\in\operatorname{Der}(-\log D)_{p}$.
$D$ is called locally quasihomogeneous if it is quasihomogeneous at $p$ for all $p\in D$.
We say homogeneous instead of quasihomogeneous if $w=(1,\dots,1)$.
{theo}
[[CMN96]]
Let $D\subseteq\mathds{C}^{n}$ be a locally quasihomogeneous free divisor, let $U=\mathds{C}^{n}\smallsetminus D$, and let $j:U\to\mathds{C}^{n}$ be inclusion.
Then the de Rham morphism
(2)
$$\Omega_{X}^{\bullet}(\log D)\to\mathbf{R}j_{*}\mathds{C}_{U}$$
is a quasi-isomorphism.
Grothendieck’s Comparison Theorem [Gro66] asserts that a similar quasi-isomorphism holds for any divisor $D$, if instead of logarithmic poles we allow meromorphic poles of arbitrary order along $D$.
Because of this similarity, we refer to the quasi-isomorphism of (2) as the Logarithmic Comparison Theorem (LCT) and to the global isomorphism (1) as the Global Logarithmic Comparison Theorem (GLCT).
Several authors have further investigated the range of validity of LCT, and established interesting links with the theory of $\mathscr{D}$-modules, in particular in
[CN02], [CU05], [GS06], [Tor04], and [Wal05].
Local quasihomogeneity was introduced in [CMN96] as a technical device to make possible an inductive proof of the quasi-isomorphism (2).
Subsequently it turned out to have a deeper connection with the theorem.
In particular by [CCMN02], for plane curves the logarithmic comparison theorem holds if and only if all singularities are quasihomogeneous.
The situation in higher dimensions remains unclear.
There is as yet no counterexample to the conjecture that LCT is equivalent to the following weaker condition.
{defi}
A divisor $D$ is called Euler homogeneous at $p\in D$ if there is a germ of vector field $\chi\in\mathfrak{m}_{p}\cdot\operatorname{Der}_{p}$ such that $\chi(\Delta)=\Delta$ for some local defining equation $\Delta\in\mathscr{O}_{p}$ of the germ $(D,p)$.
In this case, $\chi$ is called an Euler vector field for $D$ at $p$.
$D$ is called strongly Euler homogeneous if it is Euler homogeneous at $p$ for all $p\in D$.
{rema}
The Euler homogeneity of $D$ is independent of the choice of an equation.
If $\chi$ is an Euler vector field at $p$ for $D$ defined by $\Delta\in\mathscr{O}_{p}$, and $u\in\mathscr{O}_{p}^{*}$ is a unit, then the defining equation $u\Delta$ of $D$ at $p$ satisfies an equation
$$(\chi(u)+u)^{-1}u\chi(u\Delta)=(\chi(u)+u)^{-1}(\chi(u)+u)u\Delta=u\Delta$$
with Euler vector field $(\chi(u)+u)^{-1}u\chi$.
In Section LABEL:5 we examine the examples described in Sections 5 and LABEL:48 with respect to local quasihomogeneity and strong Euler homogeneity.
It turns out that all linear free divisors in dimension $n\leq 4$ are locally quasihomogeneous and there is no linear free divisor which we know not to be strongly Euler homogeneous.
The optimistic reader could therefore conjecture that all linear free divisors are strongly Euler homogeneous, and also fulfil LCT and so also GLCT.
We do not know any counter-example to these statements.
In Subsection LABEL:62 we give examples of quivers $Q$ and dimension vectors $\mathbf{d}$ for which the discriminant in $\operatorname{Rep}(Q,\mathbf{d})$ is a linear free divisor but is not locally quasihomogeneous.
In such cases the theorem of [CMN96] therefore does not apply, but Theorem 1 does.
In Subsection LABEL:65, we show that a linear free divisor does not need to be reductive for LCT to hold.
However we do not know whether reductiveness of the group implies LCT for linear free divisors.
The property of being a linear free divisor is not local, and our proof of GLCT here is quite different from the proof of LCT in [CMN96].
The fact that linear free divisors in $\mathds{C}^{n}$ arise as the complement of the open orbit of an $n$-dimensional connected algebraic subgroup of $\operatorname{Gl}_{n}(\mathds{C})$ means that there is some overlap between the topic of this paper and that of the paper [SK77], where Sato and Kimura classify irreducible prehomogeneous vector spaces, that is, triples $(G,\rho,V)$, where $\rho$ is an irreducible representation of the algebraic group $G$ on $V$, in which there is an open orbit.
However, the hypothesis of irreducibility means that the overlap is slight.
Any linear free divisor arising as the complement of the open orbit in an irreducible prehomogeneous vector space is necessarily irreducible by [SK77, §4, Prop. 12], whereas among our examples and in our low-dimensional classification (in Section LABEL:48) all the linear free divisors except one (Example 1(2)) are reducible.
Even where $G$ is reductive, the passage from irreducible to reducible representations in this context is by no means trivial, including as it does substantial parts of the theory of representations of quivers.
2. Linear free divisors and subgroups of $\operatorname{Gl}_{n}(\mathds{C})$
A degree zero vector field $\delta\in\operatorname{Der}$ can be identified with an $n\times n$ matrix $A=(a_{i,j})_{i,j}\in\mathds{C}^{n\times n}$ by $\delta=\sum_{i,j}x_{i}a_{i,j}\partial_{j}=xA\partial^{t}$.
Under this identification, the commutator of square matrices corresponds to the Lie bracket of vector fields.
Let $D\subseteq\mathds{C}^{n}$ be a reduced divisor defined by a homogeneous polynomial $\Delta\in\mathds{C}[x]$ of degree $d$.
{defi}
We denote by
$$\mathrm{L}_{D}:=\{xA\partial^{t}\mid xA\partial^{t}(\Delta)\in\mathds{C}\cdot%
\Delta\}\subseteq\Gamma(\mathds{C}^{n},\operatorname{Der}(-\log D))$$
the Lie algebra of degree zero global logarithmic vector fields.
Recall from Definition 1 that $D$ is linear free if $\mathrm{L}_{D}$ contains a basis of $\operatorname{Der}(-\log D)$, and recall $G_{D}^{\circ}$ from Definition 1.
{lemm}
$G_{D}^{\circ}$ is an algebraic subgroup of $\operatorname{Gl}_{n}(\mathds{C})$ and $\mathfrak{g}_{D}=\{A\mid xA^{t}\partial^{t}\in\mathrm{L}_{D}\}$.
Proof.
Clearly $G_{D}$ is a subgroup of $\operatorname{Gl}_{n}(\mathds{C})$ and defined by a system of polynomial (determinantal) equations.
Thus $G_{D}$ and hence also $G_{D}^{\circ}$ is an algebraic subgroup of $\operatorname{Gl}_{n}(\mathds{C})$.
The Lie algebra of $G_{D}^{\circ}$ consists of all $n\times n$-matrices $A$ such that
$$\Delta\circ(I+A\varepsilon)=a(\varepsilon)\cdot\Delta\in\mathds{C}[\varepsilon%
]\cdot\Delta$$
where $\mathds{C}[\varepsilon]=\mathds{C}[t]/{\langle t^{2}\rangle}\ni[t]=:\varepsilon$.
Taylor expansion of this equation with respect to $\varepsilon$ yields
$$\Delta+\partial(\Delta)\cdot A\cdot x^{t}\cdot\varepsilon=(a(0)+a^{\prime}(0)%
\cdot\varepsilon)\cdot\Delta$$
and hence $a(0)=1$ and, by transposing the $\varepsilon$-coefficient, $xA^{t}\partial^{t}\in\mathrm{L}_{D}$.
The argument can be reversed to prove the converse by setting
$$a(\varepsilon):=1+(xA^{t}\partial^{t}(\Delta)/\Delta)\cdot\varepsilon.$$
∎
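For the normal crossing divisor of Example (1), the lemma can be made completely explicit: the connected group of linear transformations preserving $\Delta=x_{1}\cdots x_{n}$ up to scale is the diagonal torus,

```latex
G_{D}^{\circ} \;=\; \{\operatorname{diag}(t_{1},\dots,t_{n}) \mid t_{i}\in\mathds{C}^{*}\}
\;\cong\;(\mathds{C}^{*})^{n},
\qquad
\mathfrak{g}_{D} \;=\; \{\text{diagonal matrices}\},
```

in agreement with the basis $x_{1}\partial_{1},\dots,x_{n}\partial_{n}$ of $\mathrm{L}_{D}$, since $x\operatorname{diag}(a_{1},\dots,a_{n})\partial^{t}=\sum_{i}a_{i}x_{i}\partial_{i}$.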
{lemm}
The complement $\mathds{C}^{n}\smallsetminus D$ of a linear free divisor is an orbit of $G_{D}^{\circ}$ with finite isotropy groups.
Proof.
For $p\in\mathds{C}^{n}$, the orbit $G_{D}^{\circ}\cdot p$ is a smooth locally closed subset of $\mathds{C}^{n}$ whose boundary is a union of strictly lower dimensional orbits, cf. [Hum75, Prop. 8.3].
The orbit map $G_{D}^{\circ}\to G_{D}^{\circ}\cdot p$ sends $I_{n}+A\varepsilon$ to $p+pA^{t}\varepsilon$ and induces a tangent map
(3)
$$\mathfrak{g}_{D}\twoheadrightarrow T_{p}(G_{D}^{\circ}\cdot p),\quad A\mapsto pA^{t}.$$
For $p\not\in D$, $\operatorname{Der}(-\log D)(p)$ and hence also $\mathrm{L}_{D}(p)$ is $n$-dimensional.
Then, by Lemma 2 and (3), $T_{p}(G_{D}^{\circ}\cdot p)$ and hence $G_{D}^{\circ}\cdot p$ are $n$-dimensional, which implies the finiteness of the isotropy group of $p$ in $G_{D}^{\circ}$.
As this holds for all $p\not\in D$, the boundary of $G_{D}^{\circ}\cdot p$ must be $D$ and then $G_{D}^{\circ}\cdot p=\mathds{C}^{n}\smallsetminus D$.
∎
Reversing our point of view we might try to find algebraic subgroups $G\subseteq\operatorname{Gl}_{n}(\mathds{C})$ that define linear free divisors.
This requires by definition that $G$ is $n$-dimensional and connected and by Lemma 2 that there is an open orbit.
The complement $D$ is then a candidate for a free divisor.
Indeed $D$ is a divisor: comparing with (3), $D$ is defined by the discriminant determinant
$$\Delta=\det\begin{pmatrix}A_{1}x^{t}&\cdots&A_{n}x^{t}\end{pmatrix}$$
where $A_{1},\dots,A_{n}$ is a basis of the Lie algebra $\mathfrak{g}$ of $G$ and we denote by $f=\Delta_{\text{red}}$ the reduced equation of $D$.
As the entries of the matrix are linear, $\Delta$ is a homogeneous polynomial of degree $n$.
Thus, if $\Delta$ is not reduced, $D$ cannot be linear free.
We shall see examples where this happens in the next section.
On the other hand, Saito’s criterion [Sai80, Lem. 1.9] shows the following.
{lemm}
Let the $n$-dimensional algebraic group $G$ act linearly on $\mathds{C}^{n}$ with an open orbit.
If $\Delta$ is reduced then $D$ is a linear free divisor.∎
As a first step towards our main result, we now describe the cohomology of $\mathds{C}^{n}\smallsetminus D$ in terms of $G_{D}^{\circ}$.
{prop}
Suppose that $D\subseteq\mathds{C}^{n}$ is a linear free divisor and let $G^{\circ}_{D,p}$ be the (finite) isotropy group of $p\in\mathds{C}^{n}\smallsetminus D$ in $G_{D}^{\circ}$.
Then
$$H^{*}(\mathds{C}^{n}\smallsetminus D;\mathds{C})=H^{*}(G^{\circ}_{D};\mathds{C%
})^{G^{\circ}_{D,p}}=H^{*}(G^{\circ}_{D};\mathds{C}).$$
Proof.
By Lemma 2, $\mathds{C}^{n}\smallsetminus D\cong G^{\circ}_{D}/G^{\circ}_{D,p}$ with finite $G^{\circ}_{D,p}$ and the first equality follows.
The second equality holds because $G^{\circ}_{D}$ is path connected, which means that left translation by $g\in G^{\circ}_{D,p}$ is homotopic to the identity and thus induces the identity map on cohomology.
∎
{rema}
The argument for the second equality also shows that if $G^{\circ}_{D}$ is a finite quotient of the connected Lie group $G$ then $H^{*}(\mathds{C}^{n}\smallsetminus D;\mathds{C})\simeq H^{*}(G;\mathds{C}).$
We will use this below in calculating the cohomology of $\mathds{C}^{n}\smallsetminus D$.
3. Cohomology of the complement and Lie algebra cohomology
Let $\mathfrak{g}$ be a Lie algebra.
The complex of Lie algebra cochains with coefficients in the complex representation $V$ of $\mathfrak{g}$ has $k$th term $\operatorname{Hom}_{\mathds{C}}(\bigwedge^{k}_{\mathds{C}}\mathfrak{g},V)$, which for $V=\mathds{C}$ is $\bigwedge^{k}_{\mathds{C}}\operatorname{Hom}_{\mathds{C}}(\mathfrak{g},\mathds{C})$, and differential $d_{L}:\operatorname{Hom}_{\mathds{C}}(\bigwedge^{k}_{\mathds{C}}\mathfrak{g},V)\to\operatorname{Hom}_{\mathds{C}}(\bigwedge^{k+1}_{\mathds{C}}\mathfrak{g},V)$ defined by
(4)
$$\displaystyle(d_{L}\omega)(v_{1}\wedge\dots\wedge v_{k+1})=$$
$$\displaystyle\sum_{i<j}(-1)^{i+j}\omega([v_{i},v_{j}]\wedge v_{1}\dots\wedge%
\widehat{v_{i}}\wedge\dots\wedge\widehat{v_{j}}\wedge\dots\wedge v_{k+1})+$$
$$\displaystyle\sum_{i}(-1)^{i+1}v_{i}\cdot\omega(v_{1}\wedge\dots\wedge\widehat%
{v_{i}}\wedge\dots\wedge v_{k+1}).$$
The cohomology of this complex is the Lie algebra cohomology of $\mathfrak{g}$ with coefficients in $V$ and will be denoted $H^{*}_{A}(\mathfrak{g};V)$.
The exterior derivative of a differential $k$-form satisfies an
identical formula:
(5)
$$\displaystyle d\omega(\chi_{1}\wedge\dots\wedge\chi_{k+1})=$$
$$\displaystyle\sum_{i<j}(-1)^{i+j}\omega([\chi_{i},\chi_{j}]\wedge\chi_{1}%
\wedge\dots\wedge\widehat{\chi_{i}}\wedge\dots\wedge\widehat{\chi_{j}}\wedge%
\dots\wedge\chi_{k+1})+$$
$$\displaystyle\sum_{i}(-1)^{i+1}\chi_{i}\cdot\omega(\chi_{1}\wedge\dots\wedge%
\widehat{\chi_{i}}\wedge\dots\wedge\chi_{k+1}).$$
Here the $\chi_{i}$ are vector fields.
When $D$ is a free divisor and $V=\mathscr{O}_{p}$ for some $p\in D$, it is tempting to conclude from the comparison of (4) and (5) that the
complex $\Omega^{\bullet}(\log D)$ coincides with the complex of Lie algebra cohomology, with coefficients in $\mathscr{O}_{p}$, of the Lie algebra $\operatorname{Der}(-\log D)_{p}$.
For $\Omega^{1}(\log D)_{p}$ is the dual of $\operatorname{Der}(-\log D)_{p}$, and $\Omega^{k}(\log D)=\bigwedge^{k}\Omega^{1}(\log D)$.
However, this identification is incorrect, since, in the complex $\Omega^{\bullet}(\log D)$, both exterior powers and $\operatorname{Hom}$ are taken over the ring of coefficients $\mathscr{O}$, rather than over $\mathds{C}$, as in the complex of Lie algebra cochains.
The cohomology of $\Omega^{\bullet}(\log D)_{p}$ is instead the Lie algebroid cohomology of $\operatorname{Der}(-\log D)_{p}$ with coefficients in $\mathscr{O}_{p}$.
Nevertheless, when $D$ is a linear free divisor, there is the following important
link between these two complexes.
Recall $\mathrm{L}_{D}$ from Definition 2.
{lemm}
Let $D$ be a linear free divisor.
The complex $\Gamma(\mathds{C}^{n},\Omega^{\bullet}(\log D))_{0}$ of global homogeneous differential forms of degree zero coincides with the complex $\bigwedge^{\bullet}_{\mathds{C}}\operatorname{Hom}(\mathrm{L}_{D},\mathds{C})$ of Lie algebra cochains with coefficients in $\mathds{C}$.
Proof.
First we establish a natural isomorphism between the corresponding
terms of the two complexes.
We have
$$\displaystyle\Omega^{1}(\log D)$$
$$\displaystyle=\operatorname{Hom}_{\mathscr{O}}(\operatorname{Der}(-\log D),%
\mathscr{O})$$
$$\displaystyle=\operatorname{Hom}_{\mathscr{O}}(\mathrm{L}_{D}\otimes_{\mathds{%
C}}\mathscr{O},\mathscr{O})$$
$$\displaystyle=\operatorname{Hom}_{\mathds{C}}(\mathrm{L}_{D},\mathds{C})%
\otimes_{\mathds{C}}\mathscr{O}.$$
Since $\operatorname{Hom}_{\mathds{C}}(\mathrm{L}_{D},\mathds{C})$ is purely of degree zero, and the degree zero
part of $\mathscr{O}$ consists just of $\mathds{C}$, the degree zero part of $\Gamma(\mathds{C}^{n},\Omega^{1}(\log D))$ is
$$\Gamma(\mathds{C}^{n},\Omega^{1}(\log D))_{0}=\operatorname{Hom}_{\mathds{C}}(%
\mathrm{L}_{D},\mathds{C}).$$
Since moreover $\Gamma(\mathds{C}^{n},\Omega^{1}(\log D))$ has no part of negative degree, it follows that
$$\Gamma(\mathds{C}^{n},\Omega^{k}(\log D))_{0}=\Gamma\bigl{(}\mathds{C}^{n},%
\bigwedge^{k}_{\mathscr{O}}\Omega^{1}(\log D)\bigr{)}_{0}=\bigwedge^{k}_{%
\mathds{C}}\operatorname{Hom}_{\mathds{C}}(\mathrm{L}_{D},\mathds{C}).$$
Next, we show that the coboundary operators are the same.
Because we are working with constant coefficients, the second sum on the right in (4) vanishes.
Let $\chi_{1},\dots,\chi_{k+1}\in\mathrm{L}_{D}$.
Then for $\omega\in\Gamma(\mathds{C}^{n},\Omega^{k}(\log D))_{0}$ and $i\in\{1,\dots,k+1\}$, $\omega(\chi_{1}\wedge\dots\wedge\widehat{\chi_{i}}\wedge\dots\wedge\chi_{k+1})$ is a constant.
It follows that the second sum on the right in (5) vanishes.
Thus, the coboundary operator $d_{L}$ and the exterior derivative $d$ coincide.
∎
More generally let us consider weights $w=w_{1},\dots,w_{n}\in\mathds{Q}_{+}$ and assign the weight $w_{i}$ (resp. $-w_{i}$) to $x_{i}$ and $dx_{i}$ (resp. to $\partial_{i}$).
Then the set of homogeneous vector fields or differential forms of a given degree is well defined.
{lemm}
Suppose that the divisor $D\subseteq\mathds{C}^{n}$ is quasihomogeneous with respect to weights $w=w_{1},\dots,w_{n}\in\mathds{Q}_{+}$.
Then the following holds for any open set $U\subseteq\mathds{C}^{n}$:
(1)
If $\omega\in\Gamma(U,\Omega^{k}(\log D))$ is $w$-homogeneous, then $L_{\chi}(\omega)=\deg_{w}(\omega)\,\omega$, where $L_{\chi}$ is the Lie derivative with respect to the Euler vector field $\chi=\sum_{i=1}^{n}w_{i}x_{i}\partial_{i}$.
(2)
For any closed $\omega\in\Gamma(U,\Omega^{k}(\log D))$ with decomposition $\omega=\sum_{j\geq j_{0}}\omega_{j}$ into $w$-homogeneous parts, $\omega-\omega_{0}$ is exact.
(3)
If $\Gamma(U,\Omega^{k}(\log D))_{r}\subseteq\Gamma(U,\Omega^{k}(\log D))$ denotes the subspace of $w$-homogeneous forms of $w$-degree $r$, then
$$\Gamma(U,\Omega^{\bullet}(\log D))_{0}\hookrightarrow\Gamma(U,\Omega^{\bullet}%
(\log D))$$
is a quasi-isomorphism.
Proof.
(1)
is a straightforward calculation, using Cartan’s formula
$L_{\chi}(\omega)=d\iota_{\chi}\omega+\iota_{\chi}d\omega$, where $\iota_{\chi}$ is
contraction by $\chi$.
(2)
follows, for, if $\omega$ is closed, so is $\omega_{j}$ for
every $j$, and thus
$$\omega-\omega_{0}=\sum_{0\neq j\geq j_{0}}\omega_{j}=L_{\chi}\bigl{(}\sum_{0%
\neq j\geq j_{0}}\frac{\omega_{j}}{j}\bigr{)}=d(\iota_{\chi}\bigl{(}\sum_{0%
\neq j\geq j_{0}}\frac{\omega_{j}}{j}\bigr{)}).$$
(3)
is now an immediate consequence.∎
From the two preceding lemmas, the second applied with part (3) to $U=\mathds{C}^{n}$, we deduce the following
{prop}
Let $D\subseteq\mathds{C}^{n}$ be a linear free divisor.
Then
$$H^{*}(\Gamma(\mathds{C}^{n},\Omega^{\bullet}(\log D)))\cong H^{*}_{A}(\mathrm{%
L}_{D};\mathds{C}).\qed$$
Recall $G_{D}^{\circ}$ and $\mathfrak{g}_{D}$ from Definition 1.
From Propositions 2 and 3 we deduce
{coro}
The global logarithmic comparison theorem holds for a linear free divisor $D$ if and only if
(6)
$$H^{*}(G_{D}^{\circ};\mathds{C})\cong H^{*}_{A}(\mathfrak{g}_{D};\mathds{C}).\qed$$
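The criterion (6) can be checked by hand for the normal crossing divisor: there $G_{D}^{\circ}\cong(\mathds{C}^{*})^{n}$ and $\mathfrak{g}_{D}$ is the abelian Lie algebra of diagonal matrices, so the first sum in (4) vanishes and both sides are exterior algebras on $n$ generators of degree one,

```latex
H^{*}((\mathds{C}^{*})^{n};\mathds{C})
\;\cong\;\bigwedge(a_{1},\dots,a_{n}),\quad \deg a_{i}=1,
\qquad
H^{*}_{A}(\mathfrak{g}_{D};\mathds{C})
\;\cong\;\bigwedge\mathfrak{g}_{D}^{*}.
```

Thus GLCT holds in this case, as of course also follows from Theorem 1, since the torus is reductive.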
There is such an isomorphism if $G$ is a connected compact real Lie group with Lie algebra $\mathfrak{g}$ (which is not our situation here).
Left translation around the group gives rise to an isomorphism of complexes
$$T:\bigwedge^{\bullet}\mathfrak{g}^{*}\to\bigl{(}\Omega^{\bullet}(G)^{G},d\bigr%
{)}$$
where $\mathfrak{g}^{*}=\operatorname{Hom}_{\mathds{R}}(\mathfrak{g},\mathds{R})$ and $\Omega^{\bullet}(G)^{G}$ is the complex of left-invariant real-valued differential forms on $G$.
Composing this with the inclusion
(7)
$$\bigl{(}\Omega^{\bullet}(G)^{G},d\bigr{)}\to\bigl{(}\Omega^{\bullet}(G),d\bigr%
{)}$$
and taking cohomology gives a morphism
(8)
$$\tau_{G}\colon H^{*}_{A}(\mathfrak{g};\mathds{R})\to H^{*}(G;\mathds{R}).$$
If $G$ is compact, (8) is an isomorphism.
For from each closed $k$-form $\omega$ we obtain a left-invariant closed $k$-form $\omega_{A}$ by averaging:
$$\omega_{A}:=\frac{1}{|G|}\int_{G}\ell_{g}^{*}(\omega)d\mu_{L},$$
where $\mu_{L}$ is a left-invariant measure and $|G|$ is the volume of $G$ with respect to this measure.
As $G$ is path-connected, for each $g\in G$, $\ell_{g}$ is homotopic to the identity, so $\omega$ and $\ell_{g}^{*}(\omega)$ are equal in cohomology.
It follows from this that $\omega$ and $\omega_{A}$ are also equal in cohomology.
Of course, this does not apply directly in any of the cases discussed here, since $G_{D}$ is not compact.
Nevertheless if $G_{D}$ is a reductive group, the complexified morphism (8) is an isomorphism.
We now briefly outline the necessary definitions.
Let $G_{0}$ be a compact Lie group.
Then ([OV90, §5.4, Thm. 10]) $G_{0}$ has a faithful real representation.
It follows ([OV90, §3.4, Thm. 5]) that $G_{0}$ has an affine real algebraic group structure.
This allows its complexification.
{defi}
(1)
A complex Lie algebra representation is reductive if it is the direct sum of a semisimple ideal and a diagonalizable ideal.
(2)
A complex linear algebraic group $G$ is reductive if its Lie algebra (representation) is reductive.
The term “reductive” is due to the fact that these groups are characterised, among complex algebraic groups, by the complete reducibility of every finite-dimensional complex representation.
Chapter 5 of [OV90] establishes a bijection between compact Lie groups and reductive complex linear algebraic groups:
{theo}
[[OV90, §5.2, Thm. 5]]
On any compact Lie group $K$ there exists a unique real algebraic group structure, whose complexification $K(\mathds{C})$ is reductive.
Any reductive complex algebraic group possesses an algebraic compact real form (of which it is therefore the complexification).∎
The significance of this notion for us derives from the following fact:
{theo}
[[OV90, §5.2 Thm. 2]]
Let $G$ be a complex reductive algebraic group with an $n$-dimensional compact real form $K$.
Then $G$ is diffeomorphic to $K\times\mathds{R}^{n}$.∎
{coro}
If $G$ is a connected reductive complex algebraic group with complex Lie algebra $\mathfrak{g}$ then
$$H^{*}_{A}(\mathfrak{g};\mathds{C})\simeq H^{*}(G;\mathds{C}).$$
Proof.
Let $K$ be a compact real form of $G$.
By Theorem 3, inclusion of $K$ into $G=K(\mathds{C})$ induces an isomorphism on cohomology.
The Lie algebra $\mathfrak{g}$ of $G$ is the complexification of the Lie algebra $\mathfrak{k}$ of $K$, so we have
$$H^{*}_{A}(\mathfrak{g};\mathds{C})\simeq H^{*}_{A}(\mathfrak{k};\mathds{R})\otimes\mathds{C}\simeq H^{*}(K;\mathds{R})\otimes\mathds{C}\simeq H^{*}(K;\mathds{C})\simeq H^{*}(G;\mathds{C}),$$
where the second isomorphism comes from the isomorphism (8).
∎
Combining the two corollaries above with the definition of reductivity, we now conclude Theorem 1 as announced in the introduction: the Global Logarithmic Comparison Theorem holds for all reductive linear free divisors.
Using the reductiveness of the group $\operatorname{Gl}_{n}(\mathds{C})$, we will show in the next section that the group $G_{D}$ is reductive for divisors obtained as discriminants in the representation spaces of quivers.
The subgroup $B_{n}\subseteq\operatorname{Gl}_{n}(\mathds{C})$ of upper triangular matrices is not reductive, and appears as the group $G_{D}$ in Example 5.1, which shows that reductivity is not necessary for the GLCT to hold.
4. Linear free divisors in quiver representation spaces
The following discussion summarises part of [BM06].
A quiver $Q$ is a finite connected oriented graph; it consists of a set $Q_{0}$ of nodes and a set $Q_{1}$ of arrows joining some of them.
For each arrow $\varphi\in Q_{1}$ we denote by $t\varphi$ (for “tail”) and $h\varphi$ (for “head”) the nodes where it starts and finishes.
A (complex) representation $V$ of $Q$ is a choice of complex vector space $V_{\alpha}$ for each node $\alpha\in Q_{0}$ and linear map $V(\varphi):V_{t\varphi}\to V_{h\varphi}$ for each arrow $\varphi\in Q_{1}$.
For a fixed dimension vector
$$\mathbf{d}=(d_{\alpha})_{\alpha\in Q_{0}}:=(\dim V_{\alpha})_{\alpha\in Q_{0}}$$
and a choice of bases for the $V_{\alpha}$, $\alpha\in Q_{0}$, the representation space of the quiver $Q$ of dimension $\mathbf{d}$ is
$$\operatorname{Rep}(Q,\mathbf{d}):=\prod_{\varphi\in Q_{1}}\operatorname{Hom}(%
\mathds{C}^{d_{t\varphi}},\mathds{C}^{d_{h\varphi}})\cong\prod_{\varphi\in Q_{%
1}}\operatorname{Hom}(V_{t\varphi},V_{h\varphi}).$$
On this space the quiver group
$$\operatorname{Gl}(Q,\mathbf{d}):=\prod_{\alpha\in Q_{0}}\operatorname{Gl}_{d_{%
\alpha}}(\mathds{C})\cong\prod_{\alpha\in Q_{0}}\operatorname{Gl}(V_{\alpha})$$
acts, by
(9)
$$\bigl{(}(g_{\alpha})_{\alpha\in Q_{0}}\cdot V\bigr{)}_{\varphi}:=g_{h\varphi}V%
(\varphi)g_{t\varphi}^{-1}.$$
This action factors through the group
(10)
$$Z:=\mathds{C}^{*}\cdot(I_{d_{\alpha}})_{\alpha\in Q_{0}}\subseteq\operatorname%
{Z}(\operatorname{Gl}(Q,\mathbf{d}))$$
in the center of $\operatorname{Gl}(Q,\mathbf{d})$ where $I_{d_{\alpha}}\in\operatorname{Gl}_{d_{\alpha}}(\mathds{C})$ is the unit matrix.
The group $\operatorname{Gl}(Q,\mathbf{d})/Z$ is reductive as, choosing a vertex $x_{0}\in Q_{0}$, we can consider it as a central quotient
(11)
$$\operatorname{Gl}(Q,\mathbf{d})/Z\cong\Bigl{(}\operatorname{Sl}_{d_{x_{0}}}(%
\mathds{C})\times\prod_{x\in Q_{0}\smallsetminus\{x_{0}\}}\operatorname{Gl}_{d%
_{x}}(\mathds{C})\Bigr{)}\Big{/}\bigl{(}\mu_{d_{x_{0}}}\cdot\prod I_{d_{x}}%
\bigr{)}$$
where $\mu_{k}\subseteq\mathds{C}^{*}$ denotes the cyclic subgroup of order $k$.
It acts faithfully on $\operatorname{Rep}(Q,\mathbf{d})$.
For $\operatorname{Rep}(Q,\mathbf{d})$ and $\operatorname{Gl}(Q,\mathbf{d})/Z$ to play the role of $\mathds{C}^{n}$ and $G_{D}$ as in Section 2, we must require
(12)
$$\displaystyle\sum_{\alpha\in Q_{0}}d_{\alpha}^{2}-\sum_{\varphi\in Q_{1}}d_{t\varphi}d_{h\varphi}$$
$$\displaystyle=\dim_{\mathds{C}}\operatorname{Gl}(Q,\mathbf{d})-\dim_{\mathds{C%
}}\operatorname{Rep}(Q,\mathbf{d})$$
$$\displaystyle=\dim Z=1.$$
But this equality is not yet sufficient:
it is also necessary that $\operatorname{Gl}(Q,\mathbf{d})/Z$ has an open orbit.
This occurs if the general representation in $\operatorname{Rep}(Q,\mathbf{d})$ is indecomposable.
If both this last condition and (12) hold, $\mathbf{d}$ is called a real Schur root of $Q$.
In this case, there is a single open orbit, and the discriminant determinant $\Delta$ defines its complement $D$, a divisor called the discriminant.
This is a consequence of a result due to Kraft and Riedtmann [KR86, §2.6], which asserts that if the general representation is indecomposable then it has only scalar endomorphisms.
Then
(13)
$$\operatorname{Gl}(Q,\mathbf{d})/Z\cong G_{D}=G_{D}^{\circ}.$$
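As a minimal illustration (a toy example of our own, not taken from [BM06]), consider the $A_{2}$ quiver $\bullet\to\bullet$ with dimension vector $\mathbf{d}=(1,1)$, which is a real Schur root:

```latex
\sum_{\alpha\in Q_{0}} d_{\alpha}^{2}-\sum_{\varphi\in Q_{1}} d_{t\varphi}d_{h\varphi}
  \;=\; 1+1-1 \;=\; 1 \;=\; \dim Z,
\qquad
\operatorname{Rep}(Q,\mathbf{d})\cong\mathds{C},
\qquad
\operatorname{Gl}(Q,\mathbf{d})/Z\cong\mathds{C}^{*}.
```

Here $(g_{1},g_{2})$ acts on $v\in\mathds{C}$ by $g_{2}vg_{1}^{-1}$, every nonzero representation is indecomposable, the open orbit is $\mathds{C}^{*}$, and the discriminant is $D=\{0\}\subseteq\mathds{C}$, the one-dimensional normal crossing divisor.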
The above discussion combined with Theorem 1 proves the following
{theo}
If $\mathbf{d}$ is a real Schur root of a quiver $Q$ and the discriminant $D$ in $\operatorname{Rep}(Q,\mathbf{d})$ is reduced then $D$ is a linear free divisor that satisfies the GLCT.
In [BM06] it is shown that if, moreover, $Q$ is a Dynkin quiver, i.e. its underlying unoriented graph is a Dynkin diagram of type $A_{n}$, $D_{n}$, $E_{6}$, $E_{7}$ or $E_{8}$, then $\Delta$ is always reduced, and thus defines a linear free divisor.
The significance of the Dynkin quivers is that, by a theorem of Gabriel [Gab72], they are the quivers of finite type, i.e. the number of $\operatorname{Gl}(Q,\mathbf{d})$-orbits in $\operatorname{Rep}(Q,\mathbf{d})$ is finite.
It is this that guarantees that $\Delta$ is always reduced, cf. [BM06, Prop. 5.4].
It also implies that every root of a Dynkin quiver is a real Schur root.
{coro}
If $\mathbf{d}$ is a (real Schur) root of a Dynkin quiver $Q$ then the discriminant $D$ in $\operatorname{Rep}(Q,\mathbf{d})$ is a linear free divisor that satisfies GLCT.
{rema}
The argument showing that GLCT holds for the free divisors arising as discriminants in quiver representation spaces yields a simple topological proof of a theorem of V. Kac [Kac82, p. 153] (see also [Sch91]):
When $\mathbf{d}$ is a sincere (i.e. $\mathbf{d}_{x}>0$ for all $x\in Q_{0}$) real Schur root of a quiver $Q$ with no oriented cycles, the discriminant in $\operatorname{Rep}(Q,\mathbf{d})$ has $|Q_{0}|-1$ irreducible components.
The proof is this:
the number of irreducible components of a divisor in a complex vector space is equal to the rank of $H^{1}$ of the complement.
From Theorem 1 we know that $H^{1}(\operatorname{Rep}(Q,\mathbf{d})\smallsetminus D;\mathds{C})\simeq H^{1}%
_{A}(\mathfrak{g}_{D};\mathds{C})$; as by (11)
$$\mathfrak{g}_{D}\simeq\mathfrak{sl}_{d_{x_{0}}}(\mathds{C})\oplus\bigoplus_{x%
\in Q_{0}\smallsetminus\{x_{0}\}}\mathfrak{gl}_{d_{x}}(\mathds{C}),$$
it follows that
$$H^{1}(\operatorname{Rep}(Q,\mathbf{d})\smallsetminus D;\mathds{C})\simeq 0%
\oplus\bigoplus_{x\in Q_{0}\smallsetminus\{x_{0}\}}H^{1}(\mathfrak{gl}_{d_{x}}%
(\mathds{C});\mathds{C})$$
and so has rank $|Q_{0}|-1$.
Another simple algebraic proof of Kac’s theorem was pointed out to us by the referee.
It consists in determining the dimension of the vector space of rational functions on $\mathds{C}^{n}$ with zeroes and poles along $D$ only, and lifting them to the group $G_{D}$.
5. Examples of linear free divisors
The conclusion of Section 2 guides our search for linear free divisors.
Our first example shows that the implication in Theorem 1 is not an equivalence.
We denote by
(14)
$$E_{ij}=(\delta_{i,k}\cdot\delta_{j,l})_{k,l}\in\mathfrak{gl}_{n}(\mathds{C})$$
the elementary matrix with $1$ in the $i$th row and $j$th column and $0$ elsewhere.
5.1. A non-reductive example satisfying GLCT
{exem}
For $n\geq 2$, the group $B_{n}$ of $n\times n$ invertible upper triangular matrices is not reductive.
It acts on the space $\operatorname{Sym}_{n}(\mathds{C})$ of symmetric matrices by transpose conjugation:
$$B\cdot S=B^{t}SB.$$
Under the corresponding infinitesimal action, the matrix $b$ in the Lie algebra $\mathfrak{b}_{n}$ gives rise to the vector field $\chi_{b}$ defined by
$$\chi_{b}(S)=b^{t}S+Sb.$$
The dimensions of $B_{n}$ and $\operatorname{Sym}_{n}(\mathds{C})$ are equal.
The discriminant determinant $\Delta$ is reduced and defines a linear free divisor $D=V(\Delta)$.
To see this, consider an elementary matrix $E_{ij}\in\mathfrak{b}_{n}$ and let $\chi_{ij}$ be the corresponding vector field on $\operatorname{Sym}_{n}(\mathds{C})$.
If $I$ is the $n\times n$ identity matrix, then $\chi_{ij}(I)=E_{ji}+E_{ij}$.
The vectors $\chi_{ij}(I)$ for $1\leq i\leq j\leq n$ are therefore linearly independent, and $\Delta(I)\neq 0$.
For an $n\times n$ matrix $A$, let $A_{j}$ be the $j\times j$ matrix obtained by deleting the last $n-j$ rows and columns of $A$, and let $\det_{j}(A)=\det(A_{j})$.
If $B\in B_{n}$ and $S\in\operatorname{Sym}_{n}(\mathds{C})$, then because $B$ is upper triangular, $(B^{t}SB)_{j}=B^{t}_{j}S_{j}B_{j}$, and so $\det_{j}(B^{t}SB)=\det(B_{j})^{2}\det_{j}(S)$.
It follows that each hypersurface $D_{j}:=\{\det_{j}=0\}$ is invariant under the action, and the infinitesimal action of $B_{n}$ on $\operatorname{Sym}_{n}(\mathds{C})$ is tangent to each of them.
Thus $\Delta$ vanishes on each of them.
The sum of the degrees of the $D_{j}$ as $j$ ranges from $1$ to $n$ is equal to $\dim\operatorname{Sym}_{n}(\mathds{C})$, and so coincides with the degree of $\Delta$.
Hence $\Delta$ is reduced, and we conclude, by Lemma 2, that
$D=D_{1}\cup\cdots\cup D_{n}$ is a linear
free divisor.
In particular, when $n=2$, $D\subseteq\operatorname{Sym}_{2}(\mathds{C})=\mathds{C}^{3}$ is the union of a quadric cone and one of its tangent planes.
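This factorization can be verified directly for $n=2$ by computer algebra. The following sketch (our illustration, not part of the original text; it uses Python with SymPy) computes the discriminant determinant of the infinitesimal $B_{2}$-action in the coordinates $(a,b,c)$ of $S=\left(\begin{smallmatrix}a&b\\ b&c\end{smallmatrix}\right)$:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
S = sp.Matrix([[a, b], [b, c]])

def E(i, j):
    """Elementary 2x2 matrix E_ij."""
    M = sp.zeros(2, 2)
    M[i, j] = 1
    return M

# Basis of the Borel subalgebra b_2 of upper triangular matrices.
basis = [E(0, 0), E(0, 1), E(1, 1)]

# Infinitesimal action chi_b(S) = b^t S + S b, read off in coordinates (a, b, c).
rows = []
for B in basis:
    V = B.T * S + S * B          # symmetric matrix; coordinates are (V00, V01, V11)
    rows.append([V[0, 0], V[0, 1], V[1, 1]])

Delta = sp.factor(sp.Matrix(rows).det())
print(Delta)  # equals 4*a*(a*c - b**2) up to SymPy's formatting
```

The factor $a$ is $\det_{1}$ (the tangent plane) and $ac-b^{2}$ is $\det_{2}$ (the quadric cone), in accordance with $D=D_{1}\cup D_{2}$.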
We now give a proof that GLCT holds for $D$, in the spirit of the proofs of the preceding section, even though $D$ is not reductive.
In fact LCT already follows, by Theorem 1, from local quasihomogeneity, which we prove in Subsection LABEL:65 below.
{prop}
GLCT holds for the discriminant $D$ of the action of $B_{n}$ on $\operatorname{Sym}_{n}(\mathds{C})$ in Example 5.1.
Proof.
The group $G_{D}^{\circ}$ is a finite quotient of the group $B_{n}$ of upper-triangular matrices in $\operatorname{Gl}_{n}(\mathds{C})$.
There is a deformation retraction of $B_{n}$ to the maximal torus $T$ consisting of its diagonal matrices, and, with respect to the standard coordinates $a_{ij}$ on matrix space, it follows that $H^{*}(B_{n})$ is isomorphic to the free exterior algebra on the forms $da_{ii}/a_{ii}$.
Each of these is left-invariant, and it follows that the map $\tau_{B_{n}}\colon H^{*}_{A}(\mathfrak{b}_{n};\mathds{C})\to H^{*}(B_{n};\mathds{C})$ from (8) is an epimorphism.
Similarly, the Lie algebra complex $\bigwedge^{\bullet}\mathfrak{b}_{n}^{*}$ has a contracting homotopy to its semisimple part.
We may consider it as the complex of left-invariant forms on the group $B_{n}$.
Assign weights $w_{1},\ldots,w_{n}$ to the columns and weights $-w_{1},\ldots,-w_{n}$ to the rows.
This gives the elementary matrix $E_{ij}\in\mathfrak{b}_{n}$ the weight $w_{i}-w_{j}$.
If $\varepsilon_{i,j}\in\mathfrak{b}_{n}^{*}$ denotes the dual basis and we assign the weight $0$ to $\mathds{C}$ then $\operatorname{wt}(\varepsilon_{i,j})=-\operatorname{wt}(E_{i,j})$.
With respect to the resulting gradings of $\mathfrak{b}_{n}$ and $\mathfrak{b}_{n}^{*}$, both the Lie bracket and the differential $d_{L}$ of the complex $\bigwedge^{\bullet}\mathfrak{b}_{n}^{*}$ are homogeneous of degree 0, cf. (4).
Let $E=\sum_{i}w_{i}E_{ii}$, and let $\iota_{E}\colon\bigwedge^{\bullet}\mathfrak{b}_{n}^{*}\to\bigwedge^{\bullet}\mathfrak{b}_{n}^{*}$ be the operation of contraction by $E$ defined by
$$(\iota_{E}\omega)(v_{1}\wedge\cdots\wedge v_{k}):=\omega(E\wedge v_{1}\wedge\cdots\wedge v_{k}).$$
Observe that for each generator $E_{ij}\in\mathfrak{b}_{n}$ we have
(15)
$$[E,E_{ij}]=(w_{i}-w_{j})\cdot E_{ij}=\operatorname{wt}(E_{ij})\cdot E_{ij}.$$
We claim that the operation
$$L_{E}:=\iota_{E}d_{L}+d_{L}\iota_{E},$$
of taking the Lie derivative along $E$ has the effect of multiplying each homogeneous element of $\bigwedge^{\bullet}\mathfrak{b}_{n}^{*}$ by its $w$-degree.
Indeed the operation $L_{E}$ is a derivation of degree zero on $\bigwedge^{\bullet}\mathfrak{b}_{n}^{*}$,
and the result on 1-forms,
$$L_{E}(\varepsilon_{i,j})=(w_{j}-w_{i})\varepsilon_{i,j},$$
is therefore sufficient and can be easily checked by direct calculation.
Thus $L_{E}$ defines a contracting homotopy from $\bigwedge^{\bullet}\mathfrak{b}_{n}^{*}$ to its $w$-degree $0$ part $\bigl{(}\bigwedge^{\bullet}\mathfrak{b}_{n}^{*}\bigr{)}_{0}$, by exactly the same calculation as in Lemma 3, but with $\Gamma(U,\Omega^{k}(\log D))$ and $L_{\chi}$ replaced respectively by $\bigwedge^{\bullet}\mathfrak{b}_{n}^{*}$ and $L_{E}$.
If we choose $w_{1}<\cdots<w_{n}$ then all off-diagonal members of the basis $\{\varepsilon_{i,j}\}_{1\leq i\leq j\leq n}$ of $\mathfrak{b}_{n}^{*}$ have strictly positive $w$-degree.
It follows that
$$\bigwedge^{\bullet}\mathfrak{b}_{n}^{*}\simeq\Bigl{(}\bigwedge^{\bullet}\mathfrak{b}_{n}^{*}\Bigr{)}_{0}=\bigwedge^{\bullet}{\langle\varepsilon_{1,1},\dots,\varepsilon_{n,n}\rangle}=\bigwedge^{\bullet}\mathfrak{t}^{*}$$
where $\mathfrak{t}$ is the Lie algebra of the torus $T$ above.
The differential $d_{L}$ is zero on this subcomplex, showing that $\tau_{B_{n}}$ is an isomorphism.
∎
5.2. Discriminants of quiver representations
The following example, due to Ragnar-Olaf Buchweitz, is of the type discussed in Section 4.
{exem}
In the space $M_{n,n+1}(\mathds{C})$ of $n\times(n+1)$ matrices, let $D$ be the divisor defined by the vanishing of the product of
the maximal minors.
That is, for each matrix $A\in M_{n,n+1}(\mathds{C})$, let $A_{j}$ be $A$ with its $j$th column removed, and let $\Delta_{j}(A)=\det(A_{j})$.
Then
$$D=\{A\in M_{n,n+1}(\mathds{C}):\delta=\prod_{j=1}^{n+1}{\Delta}_{j}(A)=0\}.$$
It is a linear free divisor.
Here, as the group $G$ in Remark 2 we may take the product $\operatorname{Gl}_{n}(\mathds{C})\times\{\operatorname{diag}(1,\lambda_{1},\dots,\lambda_{n}):\lambda_{1},\dots,\lambda_{n}\in\mathds{C}^{*}\}$, acting by
$$\bigl{(}A,\operatorname{diag}(1,\lambda_{1},\dots,\lambda_{n})\bigr{)}\cdot M=A\cdot M\cdot\operatorname{diag}(1,\lambda_{1},\dots,\lambda_{n})^{-1}.$$
The placing of the $1$ in the first entry of the diagonal matrices is
rather arbitrary; it could be placed instead in any other fixed position
on the diagonal.
That $D$ is a linear free divisor follows from the fact that
(1)
the complement of $D$ is a single orbit, so the discriminant determinant
is not identically zero, and
(2)
the degree of $D$ is equal to the dimension of $G_{D}$, so the discriminant
determinant is reduced.
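For $n=2$, this can again be checked by computer algebra. The following sketch (our illustration, not part of the original text; it uses SymPy) assembles the six vector fields of the infinitesimal $G$-action on $M_{2,3}(\mathds{C})$ and checks that the discriminant determinant is a nonzero constant multiple of the product of maximal minors:

```python
import sympy as sp

M = sp.Matrix(2, 3, lambda i, j: sp.Symbol(f'm{i}{j}'))

def minor(j):
    """Determinant of M with column j removed (a maximal minor)."""
    return M[:, [c for c in range(3) if c != j]].det()

fields = []
# gl_2 acting on the left: M -> a*M (four vector fields).
for i in range(2):
    for j in range(2):
        a = sp.zeros(2, 2)
        a[i, j] = 1
        fields.append(list(a * M))
# Torus diag(1, l1, l2) acting on the right: M -> -M*diag(0,...,1,...) (two fields).
for k in (1, 2):
    d = sp.zeros(3, 3)
    d[k, k] = 1
    fields.append(list(-M * d))

Delta = sp.Matrix(fields).det()                         # 6x6 discriminant determinant
ratio = sp.cancel(Delta / (minor(0) * minor(1) * minor(2)))
print(ratio)  # a nonzero constant, so Delta = c * (product of maximal minors)
```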
In our Example 5.2, $M_{n,n+1}(\mathds{C})$ is the representation space of the star quiver consisting of one sink and $n+1$ sources, with dimension vector assigning dimension $n$ to the sink and $1$ to each of the sources.
The case $n=5$ is shown in Figure 5.2.
Large Deviation Approach to Random Recurrent Neuronal Networks:
Rate Function, Parameter Inference, and Activity Prediction
Alexander van Meegen
Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced
Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships
(INM-10), Jülich Research Centre, Jülich, Germany
Tobias Kühn
Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced
Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships
(INM-10), Jülich Research Centre, Jülich, Germany
Department of Physics, Faculty 1, RWTH Aachen University, Aachen,
Germany
Laboratoire de Physique de l’ENS, CNRS, Paris, France
Moritz Helias
Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced
Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships
(INM-10), Jülich Research Centre, Jülich, Germany
Department of Physics, Faculty 1, RWTH Aachen University, Aachen,
Germany
(November 18, 2020)
Abstract
Statistical field theory captures collective non-equilibrium dynamics
of neuronal networks, but it does not address the inverse problem
of finding the connectivity that implements a desired dynamics. We
here show for an analytically solvable network model that the effective
action in statistical field theory is identical to the rate function
in large deviation theory; using field theoretical methods we derive
this rate function. It takes the form of a Kullback-Leibler divergence
and enables data-driven inference of model parameters and Bayesian
prediction of time series.
Introduction.–
Biological neuronal networks are systems with many degrees of freedom
and intriguing properties: their units are coupled in a directed,
non-symmetric manner, so that they typically operate outside thermodynamic
equilibrium (Rabinovich et al., 2006; Sompolinsky, 1988). Even more
challenging is the question of how this non-equilibrium dynamics is used
to perform information processing. A rigorous understanding is still
lacking but urgently needed to shed light on their remarkable
computational abilities.
The primary method to study neuronal networks has been mean-field
theory (Amari, 1972; Sompolinsky et al., 1988; Stern et al., 2014; Kadmon and Sompolinsky, 2015; Aljadeff et al., 2015; van Meegen and Lindner, 2018).
Its field-theoretical basis has been exposed only recently (Schuecker et al., 2018; Crisanti and Sompolinsky, 2018),
which promises highly efficient schemes to study effects beyond the
mean–field approximation. However, to understand the parallel and
distributed information processing performed by neuronal networks,
the study of the forward problem – from the microscopic parameters
of the model to its dynamics – is not sufficient. One additionally
faces the inverse problem of determining the parameters of the model
given a desired dynamics and thus function. Formally, one needs to
link statistical physics with concepts from information theory and
statistical inference.
We here expose a tight relation between statistical field theory of
neuronal networks, large-deviation theory, information theory, and
inference. To this end, we generalize the probabilistic view of large
deviation theory, which yields rigorous results for the leading order
behavior in the network size $N$ (Arous and Guionnet, 1995; Guionnet, 1997),
to arbitrary single unit dynamics and transfer functions. We then
show that the central quantity of large deviation theory, the rate
function, is identical to the effective action in statistical field
theory. With this comprehensive picture in place, a third relation
emerges: Bayesian inference and prediction are naturally formulated
within this framework, spanning the arc to information processing.
Concretely, we discover a method of parameter inference from transient
data and perform Bayes-optimal prediction of the time-dependent network
activity.
Model.–
We consider random networks of $N$ nonlinearly interacting units
$x_{i}(t)$ driven by an external input $\xi_{i}(t)$. The dynamics
of the units are governed by the stochastic differential equation
$$\displaystyle\dot{x}_{i}(t)=-\nabla U(x_{i}(t))+\sum_{j=1}^{N}J_{ij}\phi(x_{j}(t))+\xi_{i}(t).$$
(1)
In the absence of recurrent and external inputs, the units undergo
an overdamped motion in a potential $U(x)$. The $J_{ij}$ are independent
and identically Gaussian-distributed random coupling weights with
zero mean and variance $\langle J_{ij}^{2}\rangle=g^{2}/N$ where the coupling
strength $g$ controls the heterogeneity of the weights. The time-varying
external inputs $\xi_{i}(t)$ are independent Gaussian white-noise
processes with zero mean and correlation functions $\left\langle\xi_{i}(t_{1})\xi_{j}(t_{2})\right\rangle=2D\delta_{ij}\delta(t_{1}-t_{2})$.
The model corresponds to the one studied in (Sompolinsky et al., 1988)
if the external input vanishes, $D=0$, the potential is quadratic,
$U(x)=\frac{1}{2}x^{2}$, and the transfer function is sigmoidal,
$\phi(x)=\tanh(x)$; for $D=\frac{1}{2}$, $U(x)=-\log(A^{2}-x^{2})$,
and $\phi(x)=x$ it corresponds to the one in (Arous and Guionnet, 1995),
which is inspired by the dynamical spin glass model of (Sompolinsky and Zippelius, 1981).
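As a concrete reference point, the model of Eq. (1) is straightforward to integrate numerically. The following sketch (our illustration; the parameter values are arbitrary and not taken from the Letter) uses an Euler–Maruyama scheme with $U(x)=\frac{1}{2}x^{2}$ and $\phi=\tanh$:

```python
import numpy as np

def simulate(N=200, g=1.5, D=0.1, T=50.0, dt=1e-2, seed=0):
    """Euler-Maruyama integration of Eq. (1) with U(x) = x^2/2 and phi = tanh."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))    # <J_ij^2> = g^2/N
    x = rng.normal(0.0, 1.0, size=N)                    # random initial condition
    steps = round(T / dt)
    traj = np.empty((steps, N))
    for t in range(steps):
        drift = -x + J @ np.tanh(x)                     # -grad U + sum_j J_ij phi(x_j)
        x = x + dt * drift + rng.normal(0.0, np.sqrt(2 * D * dt), size=N)
        traj[t] = x
    return traj

traj = simulate()
```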
Field theory.–
The field-theoretical treatment of Eq. (1) employs
the Martin–Siggia–Rose–de Dominicis–Janssen path integral
formalism (Martin et al., 1973; Janssen, 1976; Chow and Buice, 2015; Hertz et al., 2017).
We denote the expectation over paths across different realizations
of the noise $\xi$ as
$$\displaystyle\left\langle\cdot\right\rangle_{\bm{\mathbf{x}}|\bm{J}}\equiv\left\langle\left\langle\cdot\right\rangle_{\bm{\mathbf{x}}|\bm{J},\bm{\mathbf{\xi}}}\right\rangle_{\bm{\mathbf{\xi}}}=\int\mathcal{D}\bm{\mathbf{x}}\,\int\mathcal{D}\bm{\mathbf{\tilde{x}}}\,\cdot\,e^{S_{0}(\bm{\mathbf{x}},\bm{\mathbf{\tilde{x}}})-\bm{\mathbf{\tilde{x}}}^{\mathrm{T}}\bm{J}\phi(\bm{\mathbf{x}})},$$
where $\left\langle\cdot\right\rangle_{\bm{\mathbf{x}}|\bm{J},\bm{\mathbf{\xi}}}$ integrates over the unique solution of Eq. (1) given one realization $\bm{\mathbf{\xi}}$ of the noise (see Appendix A). Here, $S_{0}(\bm{\mathbf{x}},\bm{\mathbf{\tilde{x}}})=\bm{\mathbf{\tilde{x}}}^{\mathrm{T}}(\dot{\bm{\mathbf{x}}}+\nabla U(\bm{\mathbf{x}}))+D\bm{\mathbf{\tilde{x}}}^{\mathrm{T}}\bm{\mathbf{\tilde{x}}}$ is the action of the uncoupled neurons. We use the shorthand notation $\bm{\mathbf{a}}^{\mathrm{T}}\bm{\mathbf{b}}=\sum_{i=1}^{N}\int_{0}^{T}\mathrm{d}t\,a_{i}(t)b_{i}(t)$.
For large $N$, the system becomes self-averaging, a property known
from many disordered systems with large numbers of degrees of freedom:
the collective behavior is stereotypical, independent of the realization
$J_{ij}$. This holds for observables of the form $\sum_{i=1}^{N}\ell(x_{i})$,
where $\ell$ is an arbitrary functional of a single unit’s trajectory.
It is therefore convenient to introduce the scaled cumulant–generating
functional
$$\displaystyle W_{N}(\ell):=\frac{1}{N}\ln\left\langle\left\langle e^{\sum_{i=1}^{N}\ell(x_{i})}\right\rangle_{\bm{\mathbf{x}}|\bm{J}}\right\rangle_{\bm{J}},$$
(2)
where the prefactor $1/N$ makes sure that $W_{N}$ is an intensive
quantity, reminiscent of the bulk free energy (Goldenfeld, 1992).
In fact, we will show that the $N$-dependence vanishes in the limit
$N\to\infty$ because the system decouples.
Performing the average over $\bm{J}$ and introducing the auxiliary
field $C(t_{1},t_{2}):=g^{2}N^{-1}\,\sum_{i=1}^{N}\phi(x_{i}(t_{1}))\,\phi(x_{i}(t_{2}))$
as well as the conjugate field $\tilde{C}$, we can write $W_{N}$
as (Schuecker et al., 2016, 2018)
$$\displaystyle W_{N}(\ell)=\frac{1}{N}\ln\,\int\mathcal{D}C\,\int\mathcal{D}\tilde{C}\,e^{-\frac{N}{g^{2}}\,C^{\mathrm{T}}\tilde{C}+N\,\Omega_{\ell}(C,\tilde{C})},$$
(3)
$$\displaystyle\Omega_{\ell}(C,\tilde{C}):=\ln\,\int\mathcal{D}x\,\int\mathcal{D}\tilde{x}\,e^{S_{0}(x,\tilde{x})+\frac{1}{2}\tilde{x}^{\mathrm{T}}C\tilde{x}+\phi^{\mathrm{T}}\tilde{C}\phi+\ell(x)}.$$
The effective action is defined as the Legendre transform of $W_{N}(\ell)$,
$$\displaystyle\Gamma_{N}(\mu):=\int\mathcal{D}x\,\mu(x)\,\ell_{\mu}(x)-W_{N}(\ell_{\mu}),$$
(4)
where $\ell_{\mu}$ is determined implicitly by the condition $\mu=W_{N}^{\prime}(\ell_{\mu})$
and the derivative $W_{N}^{\prime}(\ell)$ has to be understood as
a generalized derivative, the coefficient of the linearization akin
to a Fréchet derivative (Berger, 1977).
Note that $W_{N}$ and $\Gamma_{N}$ are, respectively, generalizations
of a cumulant–generating functional and of the effective action
(Zinn-Justin, 1996) because both map a functional ($\ell$ or $\mu$)
to the reals. For the choice $\ell(x)=j^{\mathrm{T}}x$, we recover the usual
cumulant–generating functional of the single unit’s trajectory (see
Appendix D) and the corresponding effective action.
Rate function.–
Self–averaging corresponds to a sharply peaked distribution of an
observable over realizations of $\bm{J}$—because the distribution
is very narrow, the observable always attains the same value. However,
this can only hold for observables averaged over all units, reminiscent
of the central limit theorem. Therefore, it is natural to restrict
the observables to network–averaged ones; formally, this can be
achieved without loss of generality using the empirical measure
$$\displaystyle\mu(y):=\frac{1}{N}\sum_{i=1}^{N}\delta(x_{i}-y),$$
(5)
since $\frac{1}{N}\sum_{i=1}^{N}\ell(x_{i})=\int\mathcal{D}y\,\mu(y)\ell(y)$.
Of particular interest is the leading–order exponential behavior
of the distribution of empirical measures $P(\mu)=\langle\langle P(\mu\,|\,\bm{\mathbf{x}})\rangle_{\bm{\mathbf{x}}|\bm{J}}\rangle_{\bm{J}}$ across realizations of $\bm{J}$ and $\bm{\mathbf{\xi}}$, described by the rate function (see e.g. Mezard and Montanari, 2009)
$$\displaystyle H(\mu):=-\lim_{N\to\infty}\frac{1}{N}\ln P(\mu).$$
(6)
For large $N$, the probability of an empirical measure that does
not correspond to the minimum $H^{\prime}(\bar{\mu})=0$ is exponentially
suppressed. Put differently, the system is self–averaging and the
statistics of any network–averaged observable can be obtained using
$\bar{\mu}$.
Similar to field theory, it is convenient to introduce the scaled
cumulant–generating functional of the empirical measure. Because
$\frac{1}{N}\sum_{i=1}^{N}\ell(x_{i})=\int\mathcal{D}y\,\mu(y)\ell(y)$ holds
for an arbitrary functional $\ell(x_{i})$ of the single unit’s trajectory
$x_{i}$, Eq. (2) has the form of the scaled cumulant–generating
functional for $\mu$ at finite $N$.
Using a saddle-point approximation for the integrals over $C$ and
$\tilde{C}$ in Eq. (3) (see Appendix A), we get
$$\displaystyle W_{\infty}(\ell)=-\frac{1}{g^{2}}C_{\ell}^{\mathrm{T}}\tilde{C}_{\ell}+\Omega_{\ell}(C_{\ell},\tilde{C}_{\ell}).$$
(7)
Both $C_{\ell}$ and $\tilde{C}_{\ell}$ are determined self-consistently by
the saddle-point equations $C_{\ell}=g^{2}\left.\partial_{\tilde{C}}\Omega_{\ell}(C,\tilde{C})\right|_{C_{\ell},\tilde{C}_{\ell}}$ and $\tilde{C}_{\ell}=g^{2}\left.\partial_{C}\Omega_{\ell}(C,\tilde{C})\right|_{C_{\ell},\tilde{C}_{\ell}}$, where $\partial_{C}$ denotes a partial functional derivative.
From the scaled cumulant–generating functional, Eq. (7), we obtain the rate function via a Legendre transformation (Touchette, 2009): $H(\mu)=\int\mathcal{D}x\,\mu(x)\ell_{\mu}(x)-W_{\infty}(\ell_{\mu})$ with $\ell_{\mu}$ implicitly defined by $\mu=W_{\infty}^{\prime}(\ell_{\mu})$. Comparing with Eq. (4), we observe that the rate function is equivalent to the effective action: $H(\mu)=\lim_{N\to\infty}\Gamma_{N}(\mu)$.
The equation $\mu=W_{\infty}^{\prime}(\ell_{\mu})$ can be solved for
$\ell_{\mu}$ to obtain a closed expression for the rate function viz. effective
action (see Appendix B)
$$\displaystyle H(\mu)=\int\mathcal{D}x\,\mu(x)\ln\frac{\mu(x)}{\left\langle\delta(\dot{x}+\nabla U(x)-\eta)\right\rangle_{\eta}},$$
(8)
where $\eta$ is a zero–mean Gaussian process with a correlation
function that is determined by $\mu(x)$,
$$\displaystyle C_{\eta}(t_{1},t_{2})=2D\,\delta(t_{1}-t_{2})+g^{2}\int\mathcal{D}x\,\mu(x)\,\phi(x(t_{1}))\phi(x(t_{2})).$$
(9)
For $D=\frac{1}{2}$, $U(x)=-\log(A^{2}-x^{2})$, and $\phi(x)=x$, Eq. (8) can be shown to be equivalent to the mathematically rigorous result obtained in the seminal work by Ben Arous and Guionnet (see Appendix C).
The rate function, Eq. (8), takes the form
of a Kullback-Leibler divergence. Thus, it possesses a unique minimum
at
$$\displaystyle\bar{\mu}(x)=\left\langle\delta(\dot{x}+\nabla U(x)-\eta)\right\rangle_{\eta},$$
(10)
which corresponds to the well-known self-consistent stochastic dynamics
that is obtained in field theory (Sompolinsky et al., 1988; Crisanti and Sompolinsky, 2018; Schuecker et al., 2016, 2018).
Note that the correlation function of the effective stochastic input
$\eta$ at the minimum depends self-consistently on $\bar{\mu}(x)$
through Eq. (9).
Parameter Inference.–
Thus far, we considered the network–averaged statistics of the activity
for given statistics of the connectivity and the external input. The
rate function opens the way to address the inverse problem: given
the network–averaged activity statistics, encoded in the corresponding
empirical measure $\mu$, what are the statistics of the connectivity
and the external input, i.e. $g$ and $D$?
We determine the parameters using maximum likelihood estimation. Using Eqs. (6) and (8), the likelihood of the parameters is given by
$$\displaystyle\ln P(\mu\,|\,g,D)\simeq-NH(\mu\,|\,g,D),$$
where $\simeq$ denotes equality in the limit $N\to\infty$ and we made the dependence on $g$ and $D$ explicit. The maximum likelihood estimate is given by the minimum, with respect to the parameters $g$ and $D$, of the Kullback–Leibler divergence on the right-hand side. Only the cross entropy $-\int\mathcal{D}x\,\mu(x)\ln\left\langle\delta(\dot{x}+\nabla U(x)-\eta)\right\rangle_{\eta}$ depends on the parameters, by Eq. (9); thus maximizing the likelihood is equivalent to minimizing the cross entropy. Differentiating the cross entropy with respect to the parameters yields
$$\displaystyle\partial_{a}\ln P(\mu\,|\,g,D)\simeq-\frac{N}{2}\mathrm{tr}\left((C_{0}-C_{\eta})\frac{\partial C_{\eta}^{-1}}{\partial a}\right),$$
where we used $\left\langle\delta(\dot{x}+\nabla U(x)-\eta)\right\rangle_{\eta}=Z^{-1}\exp(-S(x))$ with $S(x)=\frac{1}{2}(\dot{x}+\nabla U(x))^{\mathrm{T}}C_{\eta}^{-1}(\dot{x}+\nabla U(x))$ and normalization $Z=\sqrt{\det(2\pi C_{\eta})}$ (see Appendix F), defined $C_{0}(t_{1},t_{2})\equiv\int\mathcal{D}x\,\mu(x)\,\big{(}\dot{x}(t_{1})+\nabla U(x(t_{1}))\big{)}\,\big{(}\dot{x}(t_{2})+\nabla U(x(t_{2}))\big{)}$, and abbreviated $a\in\{g,D\}$. The derivative vanishes for $C_{0}=C_{\eta}$.
Assuming stationarity, in Fourier domain this condition reads
$$\displaystyle\mathcal{S}_{\dot{x}+\nabla U(x)}(f)=2D+g^{2}\mathcal{S}_{\phi(x)}(f),$$
(11)
where $\mathcal{S}_{X}(f)$ denotes the network–averaged power spectrum
of the observable $X$. Using non–negative least squares (Lawson and Hanson, 1995),
Eq. (11) allows a straightforward
inference of $g$ and $D$ (see Fig. 1).
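A minimal numerical sketch of this fitting step might look as follows (our illustration, not from the original text; it assumes $U(x)=\frac{1}{2}x^{2}$ and $\phi=\tanh$, estimates the spectra by a network-averaged periodogram, and uses SciPy's non-negative least squares):

```python
import numpy as np
from scipy.optimize import nnls

def infer_g_D(x, dt):
    """Fit Eq. (11), S_{xdot + U'(x)}(f) = 2*D + g^2 * S_{phi(x)}(f),
    by non-negative least squares over frequencies; x has shape (time, units)."""
    n_t = x.shape[0]
    xdot = np.gradient(x, dt, axis=0)
    lhs = xdot + x                     # xdot + U'(x) for U(x) = x^2/2
    rhs = np.tanh(x)                   # phi(x)
    # Network-averaged periodogram estimates of the power spectra.
    S_lhs = np.mean(np.abs(np.fft.rfft(lhs, axis=0)) ** 2, axis=1) * dt / n_t
    S_rhs = np.mean(np.abs(np.fft.rfft(rhs, axis=0)) ** 2, axis=1) * dt / n_t
    A = np.column_stack([2.0 * np.ones_like(S_lhs), S_rhs])
    (D_hat, g2_hat), _ = nnls(A, S_lhs)
    return np.sqrt(g2_hat), D_hat
```

The periodogram is a crude spectral estimator; in practice a windowed (Welch-type) estimate would be more robust, but the least-squares structure of the fit is unchanged.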
Model Comparison.–
Parameter estimation allows us to determine the statistical properties
of the recurrent connectivity $g$ and the external input $D$. However,
this leaves the potential $U$ and the transfer function $\phi$ unspecified.
We determine $U$ and $\phi$ using model comparison techniques (MacKay, 2003).
We consider two options to obtain $U$ and $\phi$: comparing the
mean squared error in Eq. (11) for
the inferred parameters and comparing the likelihood of the inferred
parameters. For the latter option, we can use the rate function from
Eqs. (6) and (8).
Given two choices $U_{i}$, $\phi_{i}$, $i\in\{1,2\}$, with corresponding
inferred parameters $\hat{g}_{i}$, $\hat{D}_{i}$, we have
$$\displaystyle\ln\frac{P(\mu\,|\,U_{1},\phi_{1},\hat{g}_{1},\hat{D}_{1})}{P(\mu\,|\,U_{2},\phi_{2},\hat{g}_{2},\hat{D}_{2})}\simeq-N(H_{1}-H_{2})$$
(12)
with $H_{i}\equiv H(\mu\,|\,U_{i},\phi_{i},\hat{g}_{i},\hat{D}_{i})$.
The difference $H_{1}-H_{2}$ equals the difference of the minimal
cross entropies for the respective choices $U_{i}$, $\phi_{i}$.
Assuming an infinite observation time, this difference can be expressed
as an integral that is straightforward to evaluate numerically (see
Appendix G).
To illustrate the procedure, we consider the potential
$$\displaystyle U(x)=\frac{1}{2}x^{2}-s\ln\cosh x,$$
which is bistable for $s>1$ (Stern et al., 2014) and determine
$s$ using the mean squared error and the cross entropy difference
(see Fig. 2). Parameter estimation yields
estimates $\hat{g}$ and $\hat{D}$ that depend on $s$ (Fig. 2A,B).
The mean squared error displays a clear minimum at the true value
$s=1.5$ (Fig. 2C), whereas the
minimal cross entropy occurs at a value larger than $s=1.5$ (Fig. 2D).
The latter effect arises because the cross entropy is dominated by
the parameter estimates, thus the mean squared error provides a more
reliable criterion in this case.
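The bistability threshold is easy to make concrete: critical points of $U$ solve $U'(x)=x-s\tanh x=0$, which has a nonzero solution exactly when $s>1$. A small numerical check (our illustration, not from the Letter):

```python
import numpy as np
from scipy.optimize import brentq

def nonzero_minimum(s):
    """Positive root of U'(x) = x - s*tanh(x); returns 0.0 in the monostable case."""
    if s <= 1.0:
        return 0.0              # x = 0 is the only critical point
    f = lambda x: x - s * np.tanh(x)
    # f < 0 just above 0 (slope 1 - s < 0) and f > 0 at x = s + 1, so a root is bracketed.
    return brentq(f, 1e-8, s + 1.0)
```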
Activity Prediction.–
Parameter inference and model comparison allow us to fully determine
the model from data. Using this information, we can proceed to a functional
aspect: predicting the future activity of a unit of the recurrent
neuronal network from the knowledge of its recent past.
If the potential of the model is quadratic, $U(x)\propto\frac{1}{2}x^{2}$,
the measure $\bar{\mu}$ that minimizes the rate function corresponds
to a Gaussian process. For Gaussian processes, it is possible to perform
Bayes–optimal prediction only based on its correlation function
(Matheron, 1963; MacKay, 2003). Denoting the correlation function
of the process as $C_{x}$, the prediction is given by
$$\displaystyle\hat{x}=\mathbf{k}^{\mathrm{T}}\mathbf{K}^{-1}\mathbf{x}$$
(13)
with $K_{ij}=C_{x}(t_{i},t_{j})$, $k_{i}=C_{x}(t_{i},\hat{t})$,
and $x_{i}=x(t_{i})$. Here $\hat{t}$ denotes the timepoint of the
prediction and $\{t_{i}\}$ a set of timepoints where the activity
is known. The predicted value $\hat{x}$ itself is Gaussian distributed
with variance
$$\displaystyle\sigma_{\hat{x}}^{2}=\kappa-\mathbf{k}^{\mathrm{T}}\mathbf{K}^{-1}\mathbf{k}$$
(14)
where $\kappa=C_{x}(\hat{t},\hat{t})$. The variance $\sigma_{\hat{x}}^{2}$
quantifies the uncertainty associated with the prediction $\hat{x}$.
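Eqs. (13) and (14) are the standard Gaussian-process regression formulas and can be sketched in a few lines (our illustration; the exponential correlation function below is an assumption standing in for the self-consistent $C_{x}$):

```python
import numpy as np

def gp_predict(t_obs, x_obs, t_star, C):
    """Bayes-optimal prediction, Eqs. (13)-(14), for a zero-mean Gaussian process
    with correlation function C; t_obs are times with known activity x_obs."""
    K = C(t_obs[:, None], t_obs[None, :])                  # K_ij = C(t_i, t_j)
    k = C(t_obs, t_star)                                   # k_i = C(t_i, t_hat)
    x_hat = k @ np.linalg.solve(K, x_obs)                  # Eq. (13)
    var = C(t_star, t_star) - k @ np.linalg.solve(K, k)    # Eq. (14)
    return x_hat, var

# Example with an exponential correlation function C(t, t') = exp(-|t - t'|/tau).
tau = 1.0
C_exp = lambda s, t: np.exp(-np.abs(s - t) / tau)
```

Predicting at an already observed time returns the observation itself with vanishing variance, while extrapolating far beyond the observed window lets the variance grow back toward $C(\hat t,\hat t)$.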
Given the inferred parameters, we determine the self-consistent autocorrelation
function $C_{x}$ using Eqs. (9) and (10). We use this self-consistent autocorrelation
function to predict the future activity of two arbitrary units using
Eqs. (13) and (14) (Fig. 3A,B).
To quantify the error, we calculate the network–averaged mean squared
error relative to the variance obtained from the self–consistent
autocorrelation function $C_{x}(0)$ and compare it to the variability
predicted by Eq. (14), yielding a good correspondence
(Fig. 3C). The timescale of the error
is determined by half of the timescale of the autocorrelation function
(see Appendix G). We plot $1-\sigma_{\hat{x}}^{2}/\sigma_{x}^{2}$
against an exponential decay $\mathcal{C}\exp(-2\tau/\tau_{c})$,
where $C_{x}(\tau)/C_{x}(0)\sim\exp(-\tau/\tau_{c})$, and find a
very good agreement (Fig. 3D). Since
$\tau_{c}$ diverges for $g\to 1+$ (cf. (Sompolinsky et al., 1988)),
the timescale of the error diverges as well.
Discussion.–
In this Letter, we found a tight link between the field theoretical
approach to neuronal networks and its counterpart based on large deviation
theory. We obtained the rate function of the empirical measure for
the widely used and analytically solvable model of a recurrent neuronal
network (Sompolinsky et al., 1988) by field theoretical methods. This
rate function generalizes the seminal result by Ben Arous and Guionnet
(Arous and Guionnet, 1995; Guionnet, 1997) to arbitrary potentials and
transfer functions. Intriguingly, our derivation elucidates that the
rate function is identical to the effective action and takes the form
of a Kullback–Leibler divergence, akin to Sanov’s theorem for sums
of independent and identically distributed random variables (Touchette, 2009; Mezard and Montanari, 2009).
The rate function can thus be interpreted as a distance between an
empirical measure, for example given by data, and the activity statistics
of the network model. This result allows us to address the inverse
problem: 1) Inferring the parameters of the connectivity and external
input from a set of trajectories. 2) Determining the potential and
the transfer function. 3) Predicting future activity in a Bayes–optimal
way.
The exposed link between the effective action defined within statistical
field theory and the rate function, central to large deviation theory,
opens the door to applying established field-theoretical techniques,
such as the loopwise expansion (Zinn-Justin, 1996), to obtain systematic
corrections beyond the mean-field limit. Such sub–exponential corrections
to the rate function are important for small or sparse networks with
non–vanishing mean connectivity, to explain correlated neuronal
activity, and to study information processing in finite-size networks
with realistically limited resources. More generally, the link allows
the systematic derivation of results using field theory and to subsequently
prove them in a mathematically rigorous manner within the large deviation
framework.
Acknowledgments.– We are grateful to Olivier Faugeras and Etienne
Tanré for helpful discussions on LDT of neuronal networks, to Anno
Kurth for pointing us to the Fréchet derivative and to Alexandre René,
David Dahmen, Kirsten Fischer, and Christian Keup for feedback on
an earlier version of the manuscript. This work was partly supported
by the Helmholtz young investigator’s group VH-NG-1028, European Union
Horizon 2020 grant 785907 (Human Brain Project SGA2).
Appendix
.1 Scaled Cumulant Generating Functional
Here, we derive the scaled cumulant generating functional and the
saddle-point equations. The first steps of the derivations are akin
to the manipulations presented in (Schuecker et al., 2016, 2018),
thus we keep the presentation concise. We interpret the stochastic
differential equations governing the network dynamics in the Itô convention.
Using the Martin–Siggia–Rose–de Dominicis–Janssen path integral
formalism, the expectation $\left\langle\cdot\right\rangle_{\bm{\mathbf{x}}|\bm{J}}$ of some
arbitrary functional $G(\bm{\mathbf{x}})$ can be written as
$$\displaystyle\langle\left\langle G(\bm{\mathbf{x}})\right\rangle_{\bm{\mathbf{x}}|\bm{J},\bm{\mathbf{\xi}}}\rangle_{\bm{\mathbf{\xi}}}=\int\mathcal{D}\bm{\mathbf{x}}\,\left\langle\delta(\dot{\bm{\mathbf{x}}}+\nabla U(\bm{\mathbf{x}})-\bm{J}\phi(\bm{\mathbf{x}})-\bm{\mathbf{\xi}})\right\rangle_{\bm{\mathbf{\xi}}}G(\bm{\mathbf{x}})=\int\mathcal{D}\bm{\mathbf{x}}\,\int\mathcal{D}\bm{\mathbf{\tilde{x}}}\,\,e^{S_{0}(\bm{\mathbf{x}},\bm{\mathbf{\tilde{x}}})-\bm{\mathbf{\tilde{x}}}^{\mathrm{T}}\bm{J}\phi(\bm{\mathbf{x}})}G(\bm{\mathbf{x}})$$
where we used the Fourier representation $\delta(x)=\frac{1}{2\pi i}\int_{-i\infty}^{i\infty}e^{\tilde{x}x}d\tilde{x}$
in every timestep in the second step and defined the action
$$\displaystyle S_{0}(\bm{\mathbf{x}},\bm{\mathbf{\tilde{x}}})=\bm{\mathbf{\tilde{x}}}^{\mathrm{T}}(\dot{\bm{\mathbf{x}}}+\nabla U(\bm{\mathbf{x}}))+D\bm{\mathbf{\tilde{x}}}^{\mathrm{T}}\bm{\mathbf{\tilde{x}}}.$$
An additional average over realizations of the connectivity $\bm{J}\stackrel{{\scriptstyle\text{i.i.d.}}}{{\sim}}\mathcal{N}(0,N^{-1}g^{2})$
only affects the term $-\bm{\mathbf{\tilde{x}}}^{\mathrm{T}}\bm{J}\phi(\bm{\mathbf{x}})$
in the action and results in
$$\displaystyle\langle e^{-\bm{\mathbf{\tilde{x}}}^{\mathrm{T}}\bm{J}\phi(\bm{\mathbf{x}})}\rangle_{\bm{J}}$$
$$\displaystyle=\int\mathcal{D}C\,\int\mathcal{D}\tilde{C}\,e^{-\frac{N}{g^{2}}\,C^{\mathrm{T}}\tilde{C}+\frac{1}{2}\bm{\mathbf{\tilde{x}}}^{\mathrm{T}}C\bm{\mathbf{\tilde{x}}}+\phi(\bm{\mathbf{x}})^{\mathrm{T}}\tilde{C}\phi(\bm{\mathbf{x}})},$$
where we introduced the network–averaged auxiliary field
$$C(u,v)=\frac{g^{2}}{N}\sum_{i=1}^{N}\phi(x_{i}(u))\phi(x_{i}(v))$$
via a Hubbard–Stratonovich transformation. The average over the
connectivity and the subsequent Hubbard–Stratonovich transformation
decouple the dynamics across units; afterwards the units are only
coupled through the global fields $C$ and $\tilde{C}$.
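As an illustration of the dynamics behind these manipulations, the following sketch simulates the network implied by the delta functional, $\dot{x}_{i}=-U'(x_{i})-\sum_{j}J_{ij}\phi(x_{j})+\xi_{i}$ with $J_{ij}\sim\mathcal{N}(0,g^{2}/N)$, and estimates the equal-time value of the auxiliary field $C$. The choices $U(x)=x^{2}/2$, $\phi=\tanh$, and all parameter values are illustrative assumptions, not fixed by the derivation.

```python
import numpy as np

# Toy Euler-Maruyama simulation of the network dynamics implied by the
# delta functional (illustrative U(x) = x^2/2 and phi = tanh), followed
# by an estimate of the network-averaged auxiliary field
#   C(u, v) = g^2/N sum_i phi(x_i(u)) phi(x_i(v))   at equal times u = v.

rng = np.random.default_rng(0)
N, g, D = 200, 0.5, 0.1
dt, steps = 0.01, 2000

J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # J_ij ~ N(0, g^2/N)
x = rng.normal(0.0, 1.0, size=N)

traj = np.empty((steps, N))
for t in range(steps):
    noise = rng.normal(0.0, np.sqrt(2 * D * dt), size=N)  # <xi xi> = 2D delta
    x = x + dt * (-x - J @ np.tanh(x)) + noise
    traj[t] = x

# equal-time value C(u, u), averaged over the second half of the run
phi = np.tanh(traj[steps // 2:])
C_equal_time = g**2 * np.mean(phi**2)
print(C_equal_time)
```

Since $\tanh^{2}<1$, the estimate is necessarily bounded by $g^{2}$; its precise value depends on the illustrative parameters.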
Now, we consider the scaled cumulant generating functional of the
empirical density
$$\displaystyle W_{N}(\ell)$$
$$\displaystyle=\frac{1}{N}\ln\left\langle\left\langle e^{\sum_{i=1}^{N}\ell(x_{i})}\right\rangle_{\bm{\mathbf{x}}|\bm{J}}\right\rangle_{\bm{J}}.$$
Using the above results and the abbreviation $\phi(x)\equiv\phi$,
it can be written as
$$\displaystyle W_{N}(\ell)$$
$$\displaystyle=\frac{1}{N}\ln\,\int\mathcal{D}C\,\int\mathcal{D}\tilde{C}\,e^{-\frac{N}{g^{2}}\,C^{\mathrm{T}}\tilde{C}+N\,\Omega_{\ell}(C,\tilde{C})},$$
$$\displaystyle\Omega_{\ell}(C,\tilde{C})$$
$$\displaystyle=\ln\,\int\mathcal{D}x\,\int\mathcal{D}\tilde{x}\,e^{S_{0}(x,\tilde{x})+\frac{1}{2}\tilde{x}^{\mathrm{T}}C\tilde{x}+\phi^{\mathrm{T}}\tilde{C}\phi+\ell(x)},$$
where the $N$ in front of the single–particle cumulant generating
functional $\Omega$ results from the factorization of the $N$ integrals
over $x_{i}$ and $\tilde{x}_{i}$ each; thus it is a hallmark of
the decoupled dynamics. Next, we approximate the $C$ and $\tilde{C}$
integrals in a saddle–point approximation which yields
$$\displaystyle W_{N}(\ell)$$
$$\displaystyle=-\frac{1}{g^{2}}\,C_{\ell}^{\mathrm{T}}\tilde{C}_{\ell}+\Omega_{\ell}(C_{\ell},\tilde{C}_{\ell})+O(\ln(N)/N),$$
where $C_{\ell}$ and $\tilde{C}_{\ell}$ are determined by the saddle–point
equations
$$\displaystyle C_{\ell}$$
$$\displaystyle=g^{2}\left.\partial_{\tilde{C}}\Omega_{\ell}(C,\tilde{C})\right|_{C_{\ell},\tilde{C}_{\ell}},$$
$$\displaystyle\tilde{C}_{\ell}$$
$$\displaystyle=g^{2}\left.\partial_{C}\Omega_{\ell}(C,\tilde{C})\right|_{C_{\ell},\tilde{C}_{\ell}}.$$
Here, $\partial_{C}$ denotes a partial functional derivative. In
the limit $N\to\infty$, the remainder $O(\ln(N)/N)$ vanishes and
the saddle–point approximation becomes exact.
.2 Rate Function
Here, we derive the rate function from the scaled cumulant generating
functional. According to the Gärtner-Ellis theorem (Touchette, 2009),
we obtain the rate function via the Legendre transformation
$$\displaystyle H(\mu)$$
$$\displaystyle=\int\mathcal{D}x\,\mu(x)\ell_{\mu}(x)-W_{\infty}(\ell_{\mu})$$
(15)
with $\ell_{\mu}$ implicitly defined by
$$\displaystyle\mu$$
$$\displaystyle=W_{\infty}^{\prime}(\ell_{\mu}).$$
(16)
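A minimal scalar analogue may clarify the Legendre structure of (15)-(16): for a convex toy cumulant-generating function $W(\ell)=\sigma^{2}\ell^{2}/2$ (a Gaussian assumption, standing in for the functional $W_{\infty}$), the transform can be evaluated on a grid and compared against the exact rate function $\mu^{2}/(2\sigma^{2})$.

```python
import numpy as np

# Scalar sketch of the Legendre transform in (15)-(16):
#   H(mu) = sup_ell [ mu * ell - W(ell) ],
# attained at the ell_mu solving mu = W'(ell_mu). The Gaussian toy
# W(ell) = sigma^2 ell^2 / 2 has the exact rate function mu^2/(2 sigma^2).

sigma2 = 2.0
ell = np.linspace(-10.0, 10.0, 200001)
W = 0.5 * sigma2 * ell**2

def rate(mu):
    return np.max(mu * ell - W)  # grid version of the Legendre transform

for mu in (0.0, 0.5, 1.0, 2.0):
    exact = mu**2 / (2 * sigma2)
    assert abs(rate(mu) - exact) < 1e-4
print(rate(1.0))  # close to 1/(2*sigma2) = 0.25
```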
Due to the saddle–point equations, the derivative of the cumulant
generating functional in eq. (16) simplifies
to $W_{\infty}^{\prime}(\ell_{\mu})=\left.(\partial_{\ell}\Omega_{\ell})(C_{\ell},\tilde{C}_{\ell})\right|_{\ell_{\mu}}$,
where the derivative only acts on the $\ell$ that is explicit in
$\Omega_{\ell}(C_{\ell},\tilde{C}_{\ell})$ and not on the implicit
dependencies through $C_{\ell}$, $\tilde{C}_{\ell}$. Thus, eq. (16)
yields
$$\displaystyle\mu(x)$$
$$\displaystyle=\frac{\int\mathcal{D}\tilde{x}\,e^{S_{0}(x,\tilde{x})+\frac{1}{2}\tilde{x}^{\mathrm{T}}C_{\ell_{\mu}}\tilde{x}+\phi^{\mathrm{T}}\tilde{C}_{\ell_{\mu}}\phi+\ell_{\mu}(x)}}{\int\mathcal{D}x\,\int\mathcal{D}\tilde{x}\,e^{S_{0}(x,\tilde{x})+\frac{1}{2}\tilde{x}^{\mathrm{T}}C_{\ell_{\mu}}\tilde{x}+\phi^{\mathrm{T}}\tilde{C}_{\ell_{\mu}}\phi+\ell_{\mu}(x)}}.$$
Taking the logarithm and using $W_{\infty}(\ell_{\mu})+\frac{1}{g^{2}}\,C_{\ell_{\mu}}^{\mathrm{T}}\tilde{C}_{\ell_{\mu}}=\Omega_{\ell_{\mu}}(C_{\ell_{\mu}},\tilde{C}_{\ell_{\mu}})$
leads to
$$\displaystyle\ell_{\mu}(x)=$$
$$\displaystyle\ln\frac{\mu(x)}{\int\mathcal{D}\tilde{x}\,e^{S_{0}(x,\tilde{x})+\frac{1}{2}\tilde{x}^{\mathrm{T}}C_{\ell_{\mu}}\tilde{x}}}+W_{\infty}(\ell_{\mu})$$
$$\displaystyle+\frac{1}{g^{2}}\,C_{\ell_{\mu}}^{\mathrm{T}}\tilde{C}_{\ell_{\mu}}-\phi^{\mathrm{T}}\tilde{C}_{\ell_{\mu}}\phi.$$
Inserting $\ell_{\mu}(x)$ into the Legendre transformation (15)
yields
$$\displaystyle H(\mu)=$$
$$\displaystyle\int\mathcal{D}x\,\mu(x)\ln\frac{\mu(x)}{\int\mathcal{D}\tilde{x}\,e^{S_{0}(x,\tilde{x})+\frac{1}{2}\tilde{x}^{\mathrm{T}}C_{\ell_{\mu}}\tilde{x}}}$$
$$\displaystyle+\frac{1}{g^{2}}\,C_{\ell_{\mu}}^{\mathrm{T}}\tilde{C}_{\ell_{\mu}}-C_{\mu}^{\mathrm{T}}\tilde{C}_{\ell_{\mu}}$$
with
$$\displaystyle C_{\mu}(u,v)$$
$$\displaystyle=\int\mathcal{D}x\,\mu(x)\phi(x(u))\phi(x(v)).$$
Identifying $\mu(x)$ in the saddle–point equation
$$\displaystyle C_{\ell_{\mu}}$$
$$\displaystyle=g^{2}\left.\partial_{\tilde{C}}\Omega_{\ell}(C,\tilde{C})\right|_{C_{\ell_{\mu}},\tilde{C}_{\ell_{\mu}}}$$
$$\displaystyle=g^{2}\frac{\int\mathcal{D}x\,\int\mathcal{D}\tilde{x}\,\phi\phi e^{S_{0}(x,\tilde{x})+\frac{1}{2}\tilde{x}^{\mathrm{T}}C_{\ell_{\mu}}\tilde{x}+\phi^{\mathrm{T}}\tilde{C}_{\ell_{\mu}}\phi+\ell_{\mu}(x)}}{\int\mathcal{D}x\,\int\mathcal{D}\tilde{x}\,e^{S_{0}(x,\tilde{x})+\frac{1}{2}\tilde{x}^{\mathrm{T}}C_{\ell_{\mu}}\tilde{x}+\phi^{\mathrm{T}}\tilde{C}_{\ell_{\mu}}\phi+\ell_{\mu}(x)}}$$
yields
$$\displaystyle C_{\ell_{\mu}}(u,v)$$
$$\displaystyle=g^{2}\int\mathcal{D}x\,\mu(x)\phi(x(u))\phi(x(v))$$
and thus $C_{\ell_{\mu}}=g^{2}C_{\mu}$. Accordingly, the last two
terms in the Legendre transformation cancel and we arrive at
$$H(\mu)=\int\mathcal{D}x\,\mu(x)\ln\frac{\mu(x)}{\int\mathcal{D}\tilde{x}\,e^{S_{0}(x,\tilde{x})+\frac{g^{2}}{2}\tilde{x}^{\mathrm{T}}C_{\mu}\tilde{x}}}$$
(17)
where still $C_{\mu}(u,v)=\int\mathcal{D}x\,\mu(x)\phi(x(u))\phi(x(v))$.
In the main text, we use the notation
$$\displaystyle\int\mathcal{D}\tilde{x}\,e^{S_{0}(x,\tilde{x})+\frac{g^{2}}{2}\tilde{x}^{\mathrm{T}}C_{\mu}\tilde{x}}$$
$$\displaystyle=\left\langle\delta(\dot{x}+\nabla U(x)-\eta)\right\rangle_{\eta}$$
with $C_{\eta}=2D\delta+g^{2}C_{\mu}$ appearing in the rate function.
Indeed, using the Martin–Siggia–Rose–de Dominicis–Janssen
formalism, we have
$$\displaystyle\left\langle\delta(\dot{x}+\nabla U(x)-\eta)\right\rangle_{\eta}$$
$$\displaystyle=\int\mathcal{D}\tilde{x}\,e^{\tilde{x}^{\mathrm{T}}(\dot{x}+\nabla U(x))}\langle e^{\tilde{x}^{\mathrm{T}}\eta}\rangle_{\eta}$$
$$\displaystyle=\int\mathcal{D}\tilde{x}\,e^{\tilde{x}^{\mathrm{T}}(\dot{x}+\nabla U(x))+\frac{1}{2}\tilde{x}^{\mathrm{T}}C_{\eta}\tilde{x}}$$
which shows that the two notations are equivalent since $\tilde{x}^{\mathrm{T}}(\dot{x}+\nabla U(x))+\frac{1}{2}\tilde{x}^{\mathrm{T}}C_{\eta}\tilde{x}=S_{0}(x,\tilde{x})+\frac{g^{2}}{2}\tilde{x}^{\mathrm{T}}C_{\mu}\tilde{x}$
for $C_{\eta}=2D\delta+g^{2}C_{\mu}$.
.3 Equivalence to Ben Arous and Guionnet (1995)
Here, we show explicitly that the rate function we obtained generalizes
the rate function obtained by Ben Arous and Guionnet. We start with
Theorem 4.1 in (Arous and Guionnet, 1995) adapted to our notation: Define
$$\displaystyle Q(x)$$
$$\displaystyle:=\int\mathcal{D}\tilde{x}\,e^{\tilde{x}^{\mathrm{T}}(\dot{x}+\nabla U(x))+\frac{1}{2}\tilde{x}^{\mathrm{T}}\tilde{x}}$$
and
$$\displaystyle G(\mu)$$
$$\displaystyle:=\int\mathcal{D}x\,\mu(x)\,\log\left(\langle e^{gy^{\mathrm{T}}(\dot{x}+\nabla U(x))-\frac{g^{2}}{2}y^{\mathrm{T}}y}\rangle_{y}\right),$$
where $\langle\cdot\rangle_{y}$ is the expectation value over a zero–mean
Gaussian process $y$ with covariance $C_{\mu}(u,v)=\int\mathcal{D}x\,\mu(x)x(u)x(v)$,
written as $\langle\cdot\rangle_{y}=\int\mathcal{D}y\,\int\mathcal{D}\tilde{y}\,\left(\cdot\right)\,e^{\tilde{y}^{\mathrm{T}}y+\frac{1}{2}\tilde{y}^{\mathrm{T}}C_{\mu}\tilde{y}}$.
With the Kullback–Leibler divergence $D_{\text{KL}}(\mu\,|\,Q)$,
Theorem 4.1 states that the function
$$\displaystyle\tilde{H}(\mu)$$
$$\displaystyle=\begin{cases}D_{\text{KL}}(\mu\,|\,Q)-G(\mu)&\text{if }D_{\text{KL}}(\mu\,|\,Q)<\infty\\ +\infty&\text{otherwise}\end{cases}$$
is a good rate function.
Now we relate $\tilde{H}$ to the rate function that is derived
above, eq. (17). Using the Onsager–Machlup
action, we can write
$$\displaystyle D_{\text{KL}}(\mu\,|\,Q)$$
$$\displaystyle=\int\mathcal{D}x\,\mu(x)\log\frac{\mu(x)}{e^{-S_{\mathrm{OM}}(x)}}+\mathcal{C}$$
with $S_{\mathrm{OM}}(x)=\frac{1}{2}(\dot{x}+\nabla U(x))^{\mathrm{T}}(\dot{x}+\nabla U(x))$.
Next, we transform $gy\to y$, $\tilde{y}/g\to\tilde{y}$ and solve
the integral over $y$ in $G(\mu)$:
$$\displaystyle\int\mathcal{D}y\,e^{-\frac{1}{2}y^{\mathrm{T}}y+y^{\mathrm{T}}(\dot{x}+\nabla U(x)+\tilde{y})}$$
$$\displaystyle\propto e^{S_{\mathrm{OM}}(x)+\tilde{y}^{\mathrm{T}}(\dot{x}+\nabla U(x))+\frac{1}{2}\tilde{y}^{\mathrm{T}}\tilde{y}}.$$
The Onsager–Machlup actions in the logarithm of $D_{\text{KL}}(\mu\,|\,Q)$
and in $G(\mu)$ cancel, and we arrive at
$$\displaystyle\tilde{H}(\mu)$$
$$\displaystyle=\int\mathcal{D}x\,\mu(x)\log\frac{\mu(x)}{\int\mathcal{D}\tilde{y}\,e^{\tilde{y}^{\mathrm{T}}(\dot{x}+\nabla U(x))+\frac{1}{2}\tilde{y}^{\mathrm{T}}(g^{2}C_{\mu}+\delta)\tilde{y}}}$$
up to an additive constant that we set to zero. Since $C_{\mu}(u,v)=\int\mathcal{D}x\,\mu(x)x(u)x(v)$,
the rate function by Ben Arous and Guionnet is thus equivalent to
the rate function (17) with $\phi(x)=x$ and $D=\frac{1}{2}$.
.4 Relation to Sompolinsky, Crisanti, Sommers (1988)
Here, we relate the approach that we laid out in the main text to
the approach pioneered by Sompolinsky, Crisanti, and Sommers (Sompolinsky et al., 1988)
(reviewed in (Schuecker et al., 2018; Crisanti and Sompolinsky, 2018)) using
our notation for consistency. Therein, the starting point is the scaled
cumulant–generating functional
$$\displaystyle\hat{W}_{N}(j)$$
$$\displaystyle=\frac{1}{N}\ln\left\langle\left\langle e^{j^{\mathrm{T}}\bm{\mathbf{x}}}\right\rangle_{\bm{\mathbf{x}}|\bm{J}}\right\rangle_{\bm{J}},$$
which gives rise to the cumulants of the trajectories. For the linear
functional
$$\displaystyle\ell(x)$$
$$\displaystyle=j^{\mathrm{T}}x,$$
we have $\sum_{i=1}^{N}\ell(x_{i})=j^{\mathrm{T}}\bm{\mathbf{x}}$ and thus $W_{N}(j^{\mathrm{T}}x)=\hat{W}_{N}(j)$.
Put differently, the scaled cumulant–generating functional of the
trajectories $\hat{W}_{N}(j)$ is a special case of the more general
scaled cumulant–generating functional $W_{N}(\ell)$ we consider
in this manuscript. Of course one can start from the scaled cumulant–generating
functional of the observable of interest and derive the corresponding
rate function. Conversely, we show below how to obtain the rate function
of a specific observable from the rate function of the empirical measure.
Contraction Principle
Here, we relate the rather general rate function of the empirical
measure $H(\mu)$ to the rate function of a particular observable
$I(C)$. As an example, we choose the correlation function
$$\displaystyle C(u,v)$$
$$\displaystyle=\frac{g^{2}}{N}\sum_{i=1}^{N}\phi(x_{i}(u))\phi(x_{i}(v))$$
because it is a quantity that arises naturally during the Hubbard–Stratonovich
transformation. The generic approach to this problem is given by the
contraction principle (Touchette, 2009):
$$\displaystyle I(C)$$
$$\displaystyle=\inf_{\mu\>\text{s.t.}\>C=g^{2}\int\mathcal{D}x\,\mu(x)\phi\phi}H(\mu).$$
Here, the infimum is constrained to the empirical measures that give
rise to the correlation function $C$, i.e. those that fulfill $C(u,v)=g^{2}\int\mathcal{D}x\,\mu(x)\phi(x(u))\phi(x(v))$.
Writing $H(\mu)$ as the Legendre transform of the scaled cumulant–generating
functional, $H(\mu)=\inf_{\ell}[\int\mathcal{D}x\,\mu(x)\ell(x)-W_{\infty}(\ell)]$,
the empirical measure only appears linearly. Using a Lagrange multiplier
$k(u,v)$, the infimum over $\mu$ leads to the constraint $\ell(x)=g^{2}\phi^{\mathrm{T}}k\phi$
and we arrive at
$$\displaystyle I(C)$$
$$\displaystyle=\inf_{k}[k^{\mathrm{T}}C-W_{\infty}(g^{2}\phi^{\mathrm{T}}k\phi)].$$
Once again, we see how to relate $W_{N}(\ell)$ to a specific observable—this
time for the choice $\ell(x)=g^{2}\phi^{\mathrm{T}}k\phi$.
Up to this point, the discussion applies to any observable. For the
current example, we can proceed a bit further. With the redefinition
$\tilde{C}+g^{2}k\to\tilde{C}$, we get
$$\displaystyle W_{\infty}(g^{2}\phi^{\mathrm{T}}k\phi)$$
$$\displaystyle=\mathrm{extr}_{C,\tilde{C}}\left[-\frac{1}{g^{2}}C^{\mathrm{T}}\tilde{C}+C^{\mathrm{T}}k+\Omega_{0}(C,\tilde{C})\right],$$
$$\displaystyle\Omega_{0}(C,\tilde{C})$$
$$\displaystyle=\ln\,\int\mathcal{D}x\,\int\mathcal{D}\tilde{x}\,e^{S_{0}(x,\tilde{x})+\frac{1}{2}\tilde{x}^{\mathrm{T}}C\tilde{x}+\phi^{\mathrm{T}}\tilde{C}\phi},$$
which made $\Omega_{0}$ independent of $k$. Now we can take the
infimum over $k$, leading to
$$\displaystyle I(C)$$
$$\displaystyle=\mathrm{extr}_{\tilde{C}}\left[\frac{1}{g^{2}}C^{\mathrm{T}}\tilde{C}-\Omega_{0}(C,\tilde{C})\right].$$
(18)
The remaining extremum gives rise to the condition
$$\displaystyle C$$
$$\displaystyle=g^{2}\frac{\int\mathcal{D}x\,\int\mathcal{D}\tilde{x}\,\phi\phi e^{S_{0}(x,\tilde{x})+\frac{1}{2}\tilde{x}^{\mathrm{T}}C\tilde{x}+\phi^{\mathrm{T}}\tilde{C}\phi}}{\int\mathcal{D}x\,\int\mathcal{D}\tilde{x}\,e^{S_{0}(x,\tilde{x})+\frac{1}{2}\tilde{x}^{\mathrm{T}}C\tilde{x}+\phi^{\mathrm{T}}\tilde{C}\phi}},$$
i.e. a self–consistency condition for the correlation function.
As a side remark, we mention that the expression in the brackets of
eq. (18) is the joint effective
action for $C$ and $\tilde{C}$, because for $N\rightarrow\infty$,
the action equals the effective action. This result is therefore analogous
to the finding that the effective action in the Onsager–Machlup
formalism is given as the extremum of its counterpart in the Martin–Siggia–Rose–de
Dominicis–Janssen formalism (Stapmanns et al., 2020, eq. (24)).
The only difference is that here, we are dealing with second order
statistics and not just mean values. The origin of this finding is
the same in both cases: we are only interested in the statistics of
the physical quantity (the one without tilde, $x$ or $C$, respectively).
Therefore we only introduce a source field ($k$ in the present case)
for this one, but not for the auxiliary field, which amounts to setting
the source field of the latter to zero. This is translated into the
extremum in eq. (18) over
the auxiliary variable (Stapmanns et al., 2020, appendix 5).
.5 Log–Likelihood Derivative
Here, we calculate the derivatives of the log–likelihood with respect
to the parameters $g$ and $D$. In terms of the rate function, we
have
$$\displaystyle\partial_{a}\ln P(\mu\,|\,g,D)$$
$$\displaystyle\simeq-N\partial_{a}H(\mu\,|\,g,D)$$
where $a$ denotes either $g$ or $D$. The parameters appear only
in the cross entropy
$$\displaystyle\partial_{a}H(\mu)$$
$$\displaystyle=-\int\mathcal{D}x\,\mu(x)\partial_{a}\ln\left\langle\delta(\dot{x}+\nabla U(x)-\eta)\right\rangle_{\eta}$$
through the correlation function $C_{\eta}(u,v)=2D\delta(u-v)+g^{2}\int\mathcal{D}x\,\mu(x)\phi(x(u))\phi(x(v))$.
Above, we showed that
$$\displaystyle\left\langle\delta(\dot{x}+\nabla U(x)-\eta)\right\rangle_{\eta}$$
$$\displaystyle=\int\mathcal{D}\tilde{x}\,e^{\tilde{x}^{\mathrm{T}}(\dot{x}+\nabla U(x))+\frac{1}{2}\tilde{x}^{\mathrm{T}}C_{\eta}\tilde{x}}.$$
Because the exponent is at most quadratic in $\tilde{x}$, the integral
is Gaussian and we get
$$\displaystyle\left\langle\delta(\dot{x}+\nabla U(x)-\eta)\right\rangle_{\eta}$$
$$\displaystyle=\frac{e^{-\frac{1}{2}(\dot{x}+\nabla U(x))^{\mathrm{T}}C_{\eta}^{-1}(\dot{x}+\nabla U(x))}}{\sqrt{\det(2\pi C_{\eta})}}.$$
Note that the normalization $1/\sqrt{\det(2\pi C_{\eta})}$ does not
depend on the potential $U$. Now we can take the derivatives of $\ln\left\langle\delta(\dot{x}+\nabla U(x)-\eta)\right\rangle_{\eta}$
and get
$$\displaystyle\partial_{a}\ln\left\langle\delta(\dot{x}+\nabla U(x)-\eta)\right\rangle_{\eta}=$$
$$\displaystyle\qquad-\frac{1}{2}(\dot{x}+\nabla U(x))^{\mathrm{T}}\frac{\partial C_{\eta}^{-1}}{\partial a}(\dot{x}+\nabla U(x))-\frac{1}{2}\partial_{a}\mathrm{tr}\ln C_{\eta}$$
where we used $\ln\det C=\mathrm{tr}\ln C$. With this, we arrive at
$$\displaystyle\partial_{a}H(\mu)$$
$$\displaystyle=\frac{1}{2}\mathrm{tr}\left(C_{0}\frac{\partial C_{\eta}^{-1}}{\partial a}\right)+\frac{1}{2}\mathrm{tr}\left(\frac{\partial C_{\eta}}{\partial a}C_{\eta}^{-1}\right)$$
where the integral over the empirical measure gave rise to $C_{0}=\int\mathcal{D}x\,\mu(x)(\dot{x}+\nabla U(x))(\dot{x}+\nabla U(x))$
and we used $\partial_{a}\ln C=\frac{\partial C}{\partial a}C^{-1}$.
Finally, using $\frac{\partial C}{\partial a}C^{-1}=CC^{-1}\frac{\partial C}{\partial a}C^{-1}=-C\frac{\partial C^{-1}}{\partial a}$,
we get
$$\displaystyle\partial_{a}\ln P(\mu\,|\,g,D)$$
$$\displaystyle\simeq-\frac{N}{2}\mathrm{tr}\left((C_{0}-C_{\eta})\frac{\partial C_{\eta}^{-1}}{\partial a}\right)$$
as stated in the main text.
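The derivative formula can be checked numerically on a discretized time grid, where the path integral reduces to a multivariate Gaussian. The toy parametrized covariance $C_{\eta}(a)=a\,\mathbb{1}+K$ below is an illustrative stand-in for $2D\delta+g^{2}C_{\mu}$; the analytic expression $-\frac{1}{2}\mathrm{tr}\left((C_{0}-C_{\eta})\partial_{a}C_{\eta}^{-1}\right)$ (per unit $N$) is compared against a central finite difference of the Gaussian log-likelihood.

```python
import numpy as np

# Finite-difference check of the identity
#   d/da [ mean log-likelihood ] = -1/2 tr( (C0 - C_eta) dC_eta^{-1}/da ),
# on a small time grid. C_eta(a) = a*I + K is a toy covariance (assumption).

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n))
K = A @ A.T / n                          # fixed PSD part of the covariance
C0 = np.diag(np.linspace(0.5, 2.0, n))   # empirical second moment (toy)

def C(a):
    return a * np.eye(n) + K

def mean_loglik(a):
    Ca = C(a)
    sign, logdet = np.linalg.slogdet(Ca)
    return -0.5 * np.trace(C0 @ np.linalg.inv(Ca)) - 0.5 * logdet

a = 1.3
Cinv = np.linalg.inv(C(a))
dCinv_da = -Cinv @ Cinv                  # dC/da = I, so dC^{-1}/da = -C^{-1} C^{-1}
analytic = -0.5 * np.trace((C0 - C(a)) @ dCinv_da)

eps = 1e-6
numeric = (mean_loglik(a + eps) - mean_loglik(a - eps)) / (2 * eps)
print(analytic, numeric)
```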
.6 Cross Entropy Difference
Here, we express the cross entropy difference
$$\displaystyle H_{1}-H_{2}$$
$$\displaystyle:=H(\mu\,|\,U_{1},\phi_{1},\hat{g}_{1},\hat{D}_{1})-H(\mu\,|\,U_{2},\phi_{2},\hat{g}_{2},\hat{D}_{2})$$
in a form that can be evaluated numerically. Using the rate function,
we get
$$H_{1}-H_{2}=\int\mathcal{D}x\,\mu(x)\ln\frac{\left\langle\delta(\dot{x}+\nabla U_{2}(x)-\eta_{2})\right\rangle_{\eta_{2}}}{\left\langle\delta(\dot{x}+\nabla U_{1}(x)-\eta_{1})\right\rangle_{\eta_{1}}}$$
with $C_{\eta_{i}}=2\hat{D}_{i}\delta+\hat{g}_{i}^{2}\int\mathcal{D}x\,\mu(x)\phi_{i}\phi_{i}$.
Again, we use
$$\displaystyle\left\langle\delta(\dot{x}+\nabla U(x)-\eta)\right\rangle_{\eta}$$
$$\displaystyle=\frac{e^{-\frac{1}{2}(\dot{x}+\nabla U(x))^{\mathrm{T}}C_{\eta}^{-1}(\dot{x}+\nabla U(x))}}{\sqrt{\det(2\pi C_{\eta})}}$$
to arrive at
$$\displaystyle H_{1}-H_{2}=$$
$$\displaystyle\frac{1}{2}\mathrm{tr}\left(C_{1}C_{\eta_{1}}^{-1}\right)+\frac{1}{2}\mathrm{tr}\ln C_{\eta_{1}}$$
$$\displaystyle-\frac{1}{2}\mathrm{tr}\left(C_{2}C_{\eta_{2}}^{-1}\right)-\frac{1}{2}\mathrm{tr}\ln C_{\eta_{2}}$$
with $C_{i}=\int\mathcal{D}x\,\mu(x)(\dot{x}+\nabla U_{i}(x))(\dot{x}+\nabla U_{i}(x))$;
note that the signs follow from $\ln\left(\left\langle\delta\right\rangle_{\eta_{2}}/\left\langle\delta\right\rangle_{\eta_{1}}\right)$,
so model 1 enters with positive and model 2 with negative sign.
For stationary correlation functions over infinite time intervals,
we can evaluate the traces as integrals over the power spectra:
$$\displaystyle\mathrm{tr}(AB^{-1})$$
$$\displaystyle\propto\int_{-\infty}^{\infty}\frac{\tilde{A}(f)}{\tilde{B}(f)}df,$$
$$\displaystyle\mathrm{tr}\ln A$$
$$\displaystyle\propto\int_{-\infty}^{\infty}\ln(\tilde{A}(f))df.$$
With this, we get
$$\displaystyle H_{1}-H_{2}\propto$$
$$\displaystyle\frac{1}{2}\int_{-\infty}^{\infty}\frac{\mathcal{S}_{\dot{x}+\nabla U_{1}(x)}(f)}{2\hat{D}_{1}+\hat{g}_{1}^{2}\mathcal{S}_{\phi_{1}(x)}(f)}df$$
$$\displaystyle+\frac{1}{2}\int_{-\infty}^{\infty}\ln(2\hat{D}_{1}+\hat{g}_{1}^{2}\mathcal{S}_{\phi_{1}(x)}(f))df$$
$$\displaystyle-\frac{1}{2}\int_{-\infty}^{\infty}\frac{\mathcal{S}_{\dot{x}+\nabla U_{2}(x)}(f)}{2\hat{D}_{2}+\hat{g}_{2}^{2}\mathcal{S}_{\phi_{2}(x)}(f)}df$$
$$\displaystyle-\frac{1}{2}\int_{-\infty}^{\infty}\ln(2\hat{D}_{2}+\hat{g}_{2}^{2}\mathcal{S}_{\phi_{2}(x)}(f))df.$$
Accordingly, the cross entropy difference can be evaluated with integrals
over the respective power spectra, which can be obtained using the Fast
Fourier Transform.
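A discrete sanity check of the trace-to-spectrum reduction: for stationary kernels on a periodic time grid, the corresponding circulant matrices are diagonalized by the Fourier basis, so $\mathrm{tr}(AB^{-1})=\sum_{f}\tilde{A}(f)/\tilde{B}(f)$ and $\mathrm{tr}\ln A=\sum_{f}\ln\tilde{A}(f)$ exactly. The kernels below are illustrative choices, not taken from the model.

```python
import numpy as np

# Circulant matrices built from stationary kernels on a periodic grid have
# eigenvalues given by the DFT of the kernel, so traces reduce to sums over
# power spectra - the discrete analogue of the integrals in the text.

n = 64
t = np.arange(n)
d = np.minimum(t, n - t)                   # periodic time lag
a_kernel = np.exp(-d / 5.0)                # stationary kernel A(u - v)
b_kernel = 0.5 * np.exp(-d / 3.0) + np.where(d == 0, 1.0, 0.0)

def circulant(kernel):
    return np.array([np.roll(kernel, i) for i in range(n)])

A, B = circulant(a_kernel), circulant(b_kernel)

# direct evaluation in the time domain
direct_tr = np.trace(A @ np.linalg.inv(B))
sign, direct_logdet = np.linalg.slogdet(A)

# spectral evaluation: eigenvalues of a symmetric circulant = DFT of kernel
Af = np.fft.fft(a_kernel).real
Bf = np.fft.fft(b_kernel).real
spectral_tr = np.sum(Af / Bf)
spectral_logdet = np.sum(np.log(Af))

print(direct_tr, spectral_tr)
```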
.7 Timescale of Prediction Error
Here, we relate the timescale of the prediction error to the timescale
of the autocorrelation function $C_{x}(\tau)/C_{x}(0)\sim\exp(-\tau/\tau_{c})$.
The predicted variance in the continuous time limit is determined
by the corresponding limit of the predicted-variance equation in the main text,
$$\displaystyle\sigma_{\hat{x}}^{2}$$
$$\displaystyle=C_{x}(\hat{t},\hat{t})-\int_{0}^{T}\int_{0}^{T}C_{x}(\hat{t},u)C_{x}^{-1}(u,v)C_{x}(v,\hat{t})\,du\,dv,$$
where $T$ denotes the training interval. Writing $\hat{t}=T+\tau$
and approximating $C_{x}(T+\tau,u)\approx C_{x}(T,u)e^{-\tau/\tau_{c}}$,
we get
$$\displaystyle\sigma_{\hat{x}}^{2}$$
$$\displaystyle\approx C_{x}(\hat{t},\hat{t})-e^{-2\tau/\tau_{c}}C_{x}(T,T),$$
where we used $\int_{0}^{T}C_{x}^{-1}(u,v)C_{x}(v,T)\,dv=\delta(u-T)$.
Using stationarity $C_{x}(u,v)=C_{x}(v-u)$, we arrive at
$$\displaystyle\sigma_{\hat{x}}^{2}/\sigma_{x}^{2}$$
$$\displaystyle\approx 1-e^{-2\tau/\tau_{c}}$$
where $C_{x}(0)=\sigma_{x}^{2}$. Thus, for large $\tau$, the timescale
of the prediction error is given by $\tau_{c}/2$.
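The $\tau_{c}/2$ timescale can be verified numerically for an illustrative stationary process with exactly exponential autocorrelation (an Ornstein-Uhlenbeck assumption, for which the Gaussian-process predicted variance obeys $\sigma_{\hat{x}}^{2}/\sigma_{x}^{2}=1-e^{-2\tau/\tau_{c}}$ without the approximation used above).

```python
import numpy as np

# Gaussian-process predicted variance for C_x(tau) = sigma^2 exp(-|tau|/tau_c):
#   sigma_hat^2 = C_x(t,t) - k^T K^{-1} k,  k_u = C_x(T + tau, u),
# which for the exponential (Markov) kernel equals sigma^2 (1 - e^{-2 tau/tau_c}).

sigma2, tau_c, T = 1.0, 1.0, 10.0
train = np.linspace(0.0, T, 400)

def C(u, v):
    return sigma2 * np.exp(-np.abs(u - v) / tau_c)

K = C(train[:, None], train[None, :]) + 1e-10 * np.eye(len(train))

for tau in (0.1, 0.5, 1.0, 2.0):
    k = C(T + tau, train)                      # C_x(t_hat, u) on the grid
    var = sigma2 - k @ np.linalg.solve(K, k)   # predicted variance
    theory = sigma2 * (1.0 - np.exp(-2.0 * tau / tau_c))
    assert abs(var - theory) < 1e-3
print("prediction error saturates on timescale tau_c / 2")
```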
References
Rabinovich et al. (2006)
M. I. Rabinovich, P. Varona, A. I. Selverston, and H. D. Abarbanel, Rev. Mod. Phys. 78, 1213 (2006).
Sompolinsky (1988)
H. Sompolinsky, Physics Today 41, 70 (1988).
Amari (1972)
S.-I. Amari, IEEE Transactions on Systems, Man and Cybernetics SMC-2, 643 (1972), ISSN 2168-2909.
Sompolinsky et al. (1988)
H. Sompolinsky, A. Crisanti, and H. J. Sommers, Phys. Rev. Lett. 61, 259 (1988).
Stern et al. (2014)
M. Stern, H. Sompolinsky, and L. F. Abbott, Phys. Rev. E 90, 062710 (2014).
Kadmon and Sompolinsky (2015)
J. Kadmon and H. Sompolinsky, Phys. Rev. X 5, 041030 (2015).
Aljadeff et al. (2015)
J. Aljadeff, M. Stern, and T. Sharpee, Phys. Rev. Lett. 114, 088101 (2015).
van Meegen and Lindner (2018)
A. van Meegen and B. Lindner, Phys. Rev. Lett. 121, 258302 (2018).
Schuecker et al. (2018)
J. Schuecker, S. Goedeke, and M. Helias, Phys. Rev. X 8, 041029 (2018).
Crisanti and Sompolinsky (2018)
A. Crisanti and H. Sompolinsky, Phys. Rev. E 98, 062120 (2018).
Arous and Guionnet (1995)
G. B. Arous and A. Guionnet, Probability Theory and Related Fields 102, 455 (1995), ISSN 1432-2064.
Guionnet (1997)
A. Guionnet, Probability Theory and Related Fields 109, 183 (1997).
Sompolinsky and Zippelius (1981)
H. Sompolinsky and A. Zippelius, Phys. Rev. Lett. 47, 359 (1981).
Martin et al. (1973)
P. Martin, E. Siggia, and H. Rose, Phys. Rev. A 8, 423 (1973).
Janssen (1976)
H.-K. Janssen, Zeitschrift für Physik B Condensed Matter 23, 377 (1976).
Chow and Buice (2015)
C. Chow and M. Buice, J. Math. Neurosci. 5, 1 (2015).
Hertz et al. (2017)
J. A. Hertz, Y. Roudi, and P. Sollich, Journal of Physics A: Mathematical and Theoretical 50, 033001 (2017).
Goldenfeld (1992)
N. Goldenfeld, Lectures on Phase Transitions and the Renormalization Group (Perseus Books, Reading, Massachusetts, 1992).
Schuecker et al. (2016)
J. Schuecker, S. Goedeke, D. Dahmen, and M. Helias, arXiv:1605.06758 [cond-mat.dis-nn] (2016).
Berger (1977)
M. S. Berger, Nonlinearity and Functional Analysis (Elsevier, 1977), 1st ed., ISBN 9780120903504.
Zinn-Justin (1996)
J. Zinn-Justin, Quantum Field Theory and Critical Phenomena (Clarendon Press, Oxford, 1996).
Mezard and Montanari (2009)
M. Mezard and A. Montanari, Information, Physics and Computation (Oxford University Press, 2009).
Touchette (2009)
H. Touchette, Physics Reports 478, 1 (2009).
Lawson and Hanson (1995)
C. L. Lawson and R. J. Hanson, Solving Least Squares Problems (SIAM, 1995).
MacKay (2003)
D. J. MacKay, Information Theory, Inference and Learning Algorithms (Cambridge University Press, 2003).
Matheron (1963)
G. Matheron, Economic Geology 58, 1246 (1963).
Stapmanns et al. (2020)
J. Stapmanns, T. Kühn, D. Dahmen, T. Luu, C. Honerkamp, and M. Helias, Phys. Rev. E 101, 042124 (2020).
Comparing directed networks via denoising graphlet distributions
Miguel E. P. Silva${}^{*}$, Department of Computer Science, University of Manchester, UK
Robert E. Gaunt, Department of Mathematics, University of Manchester, UK
Luis Ospina-Forero, The Alliance Manchester Business School, University of Manchester, UK
Caroline Jay, Department of Computer Science, University of Manchester, UK
Thomas House, Department of Mathematics, University of Manchester, UK
${}^{*}$Corresponding author: [email protected]
Abstract
Network comparison is a widely-used tool for analyzing complex systems, with
applications in varied domains including comparison of protein interactions or
highlighting changes in structure of trade networks. In recent years, a number
of network comparison methodologies based on the distribution of graphlets
(small connected network subgraphs) have been introduced. In particular, NetEmd
has recently achieved state of the art performance in undirected networks. In
this work, we propose an extension of NetEmd to directed networks and deal with
the significant increase in complexity of graphlet structure in the directed
case by denoising through linear projections. Simulation results show that our
framework is able to improve on the performance of a simple translation of the undirected NetEmd algorithm to the directed case, especially when networks
differ in size and density.
directed networks; network comparison; network topology; principal component
analysis; independent component analysis
1 Introduction
Complex networks represent relationships between actors of systems with non-trivial structural properties and are ubiquitous in a myriad of domains. Studying the networks underlying these complex systems is then a step towards gaining an understanding about the systems themselves. Comparing different objects is a fundamental part of human cognition, therefore it is only natural that analysis of networks encompasses a comparison element. It is sometimes straightforward to show two networks are different, by finding a mismatch between the sets of nodes and edges that compose the networks and claiming a difference if these sets do not match. On the other hand, knowing if two networks are exactly the same is known as the graph isomorphism problem, which has been shown to be in the NP complexity class [6]. Between these two extremes, network comparison has been established as an area of study within network analysis that combines network statistics and computable properties of a network to tell how similar or dissimilar two networks are. Network comparison has been the subject of an increasing amount of research, see for example [46, 41, 2, 1, 43], and its applications are widespread, most notably in biological areas like protein-protein interaction networks [1] and metabolic networks [2], but also in other domains, like tracking dynamics of world trade networks [49].
Network comparison methods can be broadly split in two categories: node alignment methods [20, 26, 9], that attempt to create a mapping between the nodes of the networks being compared, and alignment-free methods [28, 32, 30, 19], that use global features of the networks to determine similarity or dissimilarity. Among the latter, methods that use distributions of small connected subgraphs, known as graphlets, have emerged as the state of the art in the area [32, 49, 1, 2, 41]. In particular, Wegner et al. [46] recently introduced the NetEmd measure that achieves state of the art performance in undirected networks, by comparing the shape of graphlet distributions using the Earth Mover’s Distance (EMD).
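In one dimension the Earth Mover's Distance reduces to the $L^1$ distance between cumulative distribution functions, which is the computational core of NetEmd-style comparisons (NetEmd additionally rescales the distributions and minimizes over translations, which we omit here). A minimal sketch on toy histograms:

```python
import numpy as np

# 1D Earth Mover's Distance between two normalized histograms on a common
# grid: the L1 distance between their CDFs, scaled by the bin width.
# The histograms below are illustrative, not taken from real networks.

def emd_1d(p, q, bin_width=1.0):
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return np.sum(np.abs(np.cumsum(p) - np.cumsum(q))) * bin_width

# two toy orbit-count histograms over degrees 0..4
p = [10, 40, 30, 15, 5]
q = [5, 15, 30, 40, 10]
print(emd_1d(p, q))
assert emd_1d(p, p) == 0.0  # the distance vanishes for identical distributions
```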
Undirected networks represent relationships between entities under the assumption that the relationship is symmetrical. For example, a Facebook friendship is mutual because both users mutually agree to become friends and thus there is no difference between source and target in the friendship. This abstraction is often insufficient to capture more nuanced relationships. For instance, in Twitter there is an asymmetry in the follower-followed relationship because the action of following another user is not necessarily reciprocated. Directed networks allow us to model such relationships, but this increased wealth of information leads to greater complexity when analyzing them. Such is the case when comparing directed networks, where network comparison metrics designed for undirected networks have been shown to be unsuitable for the task [41, 2].
The first contribution of our work is an extension of NetEmd to directed networks, using directed graphlets of size up to 4. Xu and Reinert [47] previously proposed a similar extension, named TriadEMD, that is limited to graphlets of size 3. The main obstacle when going from size 3 to size 4 graphlets in directed networks is the combinatorial explosion in the number of orbits due to edge direction: there are 33 orbits in graphlets of size up to 3, but 730 when including size 4 as well. In the undirected case, scaling up from size 3 to size 4 is a comparatively much smaller jump, from 4 to 15. We propose two methodologies for our extension to size 4: the first is a simple extension, where all orbits are used in the NetEmd calculation; in the second, we adapt the idea of Aparicio et al. [2] and use only orbits that have non-zero frequency in both networks for the comparison. We systematically test the performance difference between using size 3 and size 4 graphlets to understand under which circumstances the added computational load of size 4 graphlets leads to gains in comparison performance. It is also worth noting that the previous methods that use graphlets for comparing directed networks [41, 2] also use size 4 graphlets as input to their comparison measures.
Another difference between our work and TriadEMD is the inclusion of orbits from size 2 graphlets in the comparison, which is tantamount to including a comparison of the degree distribution in the network comparison measure. Degree distributions are the most commonly studied property to measure the structure of networks and are able to distinguish between networks created with different models [35, 29, 33], but often they are insufficient alone as a heuristic for network comparison as it is possible to craft networks with the same degree distribution but widely different structures [33, 24]. The inclusion of orbits from size 2 graphlets is also supported by the undirected version of NetEmd, where these orbits are also included [46].
Complex networks are heterogeneous in the amount of data they represent, with famous examples ranging from Zachary’s karate club [51] with only 34 individuals to gigantic networks like the Friendster social network [48], which contains more than one billion relationships between its users. Such disparity makes the task of comparing networks even harder, as networks that differ in size by multiple orders of magnitude may still be organized according to similar topography, for instance when they are generated by the same process. Identifying such “common organizational principles” [46] is the basis for the NetEmd comparison measure. However, networks representing real world phenomena are often inaccurate representations of that phenomena due to unseen data, measurement errors or simply because the systems they represent are too complex to fully describe. These inaccuracies lead to noise in the statistics used to describe the network and similar topographies become harder to recognize, especially when networks differ greatly in size.
The distributions of graphlets are not immune to these errors, so our second contribution in this work is using denoising methods to reduce the impact of these errors so that NetEmd is able to more accurately distinguish networks according to those core structures that compose each network. To this end, we propose a framework that employs Principal Component Analysis (PCA) [13], the most commonly used denoising method, before comparing the graphlet distributions with EMD. PCA finds uncorrelated directions ordered by contribution to variance, which is sufficient for independence under the assumption of normality [42]. Graphlet distributions for complex networks, however, break this assumption as, although hard to characterize exactly, they are thought to be similar to degree distributions, which, whether or not they are approximately power-law as often claimed [3], are at least often far from normal due to being heavy-tailed [4]. Therefore, we hypothesize that PCA struggles to create an appropriate model for these distributions, which can make the denoising process ineffective, so we also propose an alternative intermediate step that uses Independent Component Analysis (ICA) [5], a method that is explicitly designed to work with heavy-tailed distributions [25].
We test our contributions in clustering tasks involving synthetic and real world networks, following the same experimental procedure as the original NetEmd paper [46], which adheres to the framework of graph comparison introduced by Yaveroğlu et al. [50]. We find that our proposed method of denoising using component analysis techniques is able to improve NetEmd’s performance on clustering tasks, both in undirected and directed networks. This improvement is particularly significant when comparing networks of different sizes and densities, where NetEmd is already the state of the art.
2 Background
2.1 Graph theoretical concepts
A graph $G$ is composed of a set of vertices (or nodes) $V(G)$ and a set of edges $E(G)\subseteq V(G)\times V(G)$, represented by pairs $(a,b)\in E(G)$ for $a,b\in V(G)$. A graph is directed when the order of the vertices in each pair expresses direction, meaning that $(a,b)\in E(G)$ does not imply that $(b,a)\in E(G)$, and undirected otherwise (i.e. when $(a,b)\in E(G)$ if and only if $(b,a)\in E(G)$). The size of a graph is its number of vertices, written as $|V(G)|$, and the density is the proportion of edges present in the graph over the total possible ones ($|E(G)|/\binom{|V(G)|}{2}$ in undirected networks). For a directed network, the reciprocity $\rho$ is the probability that, given an edge $(a,b)$ selected uniformly at random from $E(G)$, $(b,a)$ is also a member of $E(G)$. A graph is called simple if it contains neither multiple edges (two or more edges connecting the same pair of vertices) nor self-loops (an edge of the form $(a,a)$ connecting a vertex to itself). In this work, we consider only simple graphs.
The neighbourhood of a vertex $u\in V(G)$ is defined as $N(u)=\{v:(v,u)\in E(G)\lor(u,v)\in E(G)\}$. All nodes are assigned consecutive integer numbers from $0$ to $|V(G)|-1$. The degree of a vertex is the number of edges it participates in; in undirected networks, the degree of a node $u$ is simply $|N(u)|$. In directed networks, the degree of a vertex can be split into the in-degree and out-degree, counting the number of incoming and outgoing edges, respectively, so the degree of a vertex is the sum of its in-degree and out-degree. The collection of the degrees of all nodes in a network is called the degree sequence.
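As an illustration, the definitions above can be sketched in a few lines of Python; the helper names (`neighbourhood`, `degrees`) are ours, not from any referenced implementation, and a graph is assumed to be stored as an edge list of pairs.

```python
# Sketch (illustrative names): neighbourhoods and degrees for a simple
# graph stored as an edge list of (a, b) pairs.

def neighbourhood(edges, u):
    """N(u) = {v : (v, u) or (u, v) is an edge}."""
    return {b for a, b in edges if a == u} | {a for a, b in edges if b == u}

def degrees(edges, directed=False):
    """Degree per vertex; for directed graphs, (in-degree, out-degree)."""
    verts = {v for e in edges for v in e}
    if not directed:
        return {u: len(neighbourhood(edges, u)) for u in verts}
    indeg = {u: sum(1 for a, b in edges if b == u) for u in verts}
    outdeg = {u: sum(1 for a, b in edges if a == u) for u in verts}
    return {u: (indeg[u], outdeg[u]) for u in verts}
```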
A subgraph of size $k$, $G_{k}$, of a graph $G$ is a graph with $k$ vertices, such that $V(G_{k})\subseteq V(G)$ and $E(G_{k})\subseteq E(G)\cap(V(G_{k})\times V(G_{k}))$. A subgraph is induced if $\forall u,v\in V(G_{k}),\;(u,v)\in E(G_{k})\leftrightarrow(u,v)\in E(G)$, and is said to be connected when every pair of its vertices is joined by a sequence of edges. Two graphs $G$ and $H$ are isomorphic, written as $G\sim H$, if there is a bijection between $V(G)$ and $V(H)$ such that two vertices are adjacent in $G$ if and only if their corresponding vertices in $H$ are adjacent. A match of a graph $H$ in a larger graph $G$ is a set of nodes that induces the respective subgraph $H$; in other words, it is a subgraph $G_{k}$ of $G$ that is isomorphic to $H$. The frequency of a subgraph $G_{k}$ is then the number of different matches of $G_{k}$ in $G$.
Orbits are unique positions of a graph, calculated by partitioning the set of vertices into equivalence classes where two vertices belong to the same class if there is an automorphism that maps one into the other [37].
Small, connected, non-isomorphic, induced subgraphs are commonly called graphlets [32]. The smallest graphlet considered is a single edge, which can be seen as a size-2 subgraph. An undirected edge has a single orbit (three in the directed case) and the frequency of a node in this orbit is equivalent to the degree of the node. The graphlet degree vector of a node is an extension of the definition of degree to size-$k$ graphs, representing how many times the node occurs in each orbit. The graphlet degree matrix of a graph is the collection of graphlet degree vectors of each node in the graph.
Figure 1 shows the graphlets of size 2, 3 and 4 in undirected networks and size 2 and 3 in directed networks, alongside the respective orbits.
2.2 NetEmd
NetEmd [46] is a network comparison measure that relies on structural features of the network, mainly the distributions of orbit frequencies. The core idea formalizes the intuition that the shape of the degree distribution is indicative of the network’s generation mechanism: for instance, a network with a power-law degree distribution was likely generated by a process distinct from that of a network with a uniform degree distribution. As the graphlet degree vector is a generalization of the degree distribution to graphlets of size $k\geq 3$, the shapes of the distributions of each orbit also carry information about the topology of the network. Note that because the shape of a distribution is invariant under linear transformations such as translations, using the shape as the focus of the comparison is well suited to comparing networks of different sizes and densities.
Wegner et al. [46] postulate that “any metric that aims to capture the similarity of shapes should be invariant under linear transformations of its inputs.” Thus, they define a measure of similarity between distributions $p$ and $q$, with non-zero and finite variances, using the Earth Mover’s Distance (EMD) [40]:
$$EMD^{*}(p,q)=\text{inf}_{c\in\mathbb{R}}(EMD(\tilde{p}(\cdot+c),\tilde{q}(\cdot))),$$
where $\tilde{p}$ and $\tilde{q}$ are the distributions resulting from scaling $p$ and $q$ to variance 1. Any distance metric $d$ can be used to generate $d^{*}$; EMD was used by the authors as it has been shown to be an appropriate metric for comparing shapes of distributions in domains such as information retrieval, and it produced better results than other distance metrics, like the Kolmogorov or $L^{1}$ distances.
Given two networks $G$ and $H$ and a set of $m$ orbits $\mathcal{O}=\{o_{1},o_{2},\ldots,o_{m}\}$, the NetEmd measure is defined as:
$$NetEmd_{\mathcal{O}}(G,H)=\frac{1}{m}\sum\limits_{i=1}^{m}EMD^{*}(p_{o_{i}}(G),p_{o_{i}}(H)),$$
(1)
where $p_{o_{i}}(G)$ and $p_{o_{i}}(H)$ are the distributions of orbit $i$ in graphs $G$ and $H$, respectively. Note that $\mathcal{O}$ may be replaced by any set of network features; in this work we focus on orbits only.
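A minimal sketch of $EMD^{*}$ and the NetEmd average in Equation 1, assuming one empirical 1-D sample of orbit counts per orbit per graph; `emd_star` and `netemd` are illustrative names, and we use SciPy's `wasserstein_distance` for the 1-D EMD and a scalar minimization over the translation $c$. This is not the reference implementation.

```python
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.optimize import minimize_scalar

def emd_star(p, q):
    """inf_c EMD(p~(.+c), q~) after scaling both samples to variance 1."""
    p = np.asarray(p, float) / np.std(p)
    q = np.asarray(q, float) / np.std(q)
    # EMD between the shifted sample p + c and q, minimized over c.
    return minimize_scalar(lambda c: wasserstein_distance(p + c, q)).fun

def netemd(orbit_samples_G, orbit_samples_H):
    """Average EMD* over the m orbits (Equation 1)."""
    return float(np.mean([emd_star(p, q)
                          for p, q in zip(orbit_samples_G, orbit_samples_H)]))
```

Since the comparison is translation- and scale-invariant by construction, two samples differing only by a linear transformation yield a distance of (numerically) zero.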
3 NetEmd with Dimension Reduction
3.1 Motivation
In this section, we describe how we couple dimensionality reduction techniques with NetEmd, using them as noise reduction techniques. The rationale is that processes that generate networks are inherently noisy due to the complexity of the systems they represent. This is particularly evident when we consider random networks generated by the same model; we expect them to have similar structure and that similarity to be reflected in the distribution of subgraph and orbit frequencies, but the stochastic nature of the generation process introduces noise in these distributions that makes them harder to identify.
Considering the graphlets and orbits shown in Figure 1, it is clear that (for example) directed graphlet $G_{2}$ cannot exist if $G_{0}$ does not, and similarly directed graphlet $G_{12}$ cannot exist if $G_{1}$ does not. More generally, it is clear that the different orbits do not represent independent degrees of freedom, and are expected to be constrained by the requirement of combinatorial consistency of the underlying full graph $G$ as well as correlations induced by the generation process.
In practice, both combinatorial enumeration of graphlets and a priori determination of statistical relationships between them (beyond comparison to simple random graphs as in [28]) are computationally infeasible. While this does not matter much in the undirected case, for the directed graphs we consider here, the relatively large number and complexity of orbits makes these dependencies much more important. We therefore seek an approach that can adjust for the non-independence of orbit counts.
Proceeding somewhat formally, let $\mathbf{F}_{G}$ denote the graphlet degree matrix of graph $G$ for a set of orbits $\mathcal{O}$, an $n\times m$ matrix where $n=|V(G)|$, $m=|\mathcal{O}|$, such that $[\mathbf{F}_{G}]_{i,j}$ represents the frequency of orbit $j$ for node $i$. Further, let $\mathbf{f}_{i}$ be the vector of frequencies of node $i$, i.e. $(\mathbf{f}_{i})_{j}=[\mathbf{F}_{G}]_{i,j}$, meaning $\mathbf{f}_{i}$ is the $i$-th row of $\mathbf{F}_{G}$. Now note that we are only interested in networks up to isomorphism; in fact, our distance measures depend on empirical histograms of orbit counts, deriving the distribution $p_{o_{i}}$ in (1) from histogram heights like
$$h_{o_{j},y}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}_{\{(\mathbf{f}_{i})_{j}=y\}},$$
(2)
which is the proportion of nodes with $y$ counts of orbit $o_{j}$, where $\mathbf{1}$ is an indicator function taking the value 1 if its argument is true and the value 0 otherwise. The right-hand side of (2) is a sum of random variables, one for each node, and from (1) we will further sum over orbits for a function of two such histogram heights.
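Equation (2) can be sketched directly from the graphlet degree matrix; `histogram_heights` is an illustrative helper returning all heights $h_{o_{j},y}$ for one orbit column at once.

```python
import numpy as np

def histogram_heights(F, j):
    """Return {y: h_{o_j, y}}, the proportion of nodes whose count of
    orbit o_j equals y, from the n x m graphlet degree matrix F."""
    counts = F[:, j]
    ys, freq = np.unique(counts, return_counts=True)
    return dict(zip(ys.tolist(), (freq / len(counts)).tolist()))
```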
In general, we will expect a random graph model to assign a probability to each graph in some finite set, and according to this measure we expect a probability distribution to be induced on $\mathbf{h}_{G}=[h_{o_{j},y}]$. As discussed above, we expect both combinatorial constraints and correlations between orbit counts, meaning that a sum over all elements of $\mathbf{h}_{G}$ as implied by (2) and (1) will inflate, due to the additivity of variances of random variables, the amount of noise in EMD realisations compared to the amount that is absolutely necessary under the random graph model.
Therefore, a natural approach is to denoise using techniques that do not require explicit solution of or simulation from random graph models. In particular, we will use the linear techniques of principal component analysis (PCA) and independent component analysis (ICA), noting their computational efficiency compared to potentially more general nonlinear methods [11]. The main idea of our methodology is therefore to project the orbit frequencies to a lower dimension, training the dimension reduction model to lose minimal information while removing noise.
Since it is not guaranteed that each graph will dimensionally reduce to the same size, the approaches we use allow for expansion of the dimensionally reduced features back to the original (high) dimension. The NetEmd comparison from Equation 1 is consequently applied to the reconstructed frequencies. By using the first $L$ components of the linear methods, which contain the most signal from the dataset, to reconstruct the original counts, this can be intuitively understood as decreasing noise within the orbit frequencies when we do not have a good model for that noise.
3.2 Principal Component Analysis
Principal component analysis (PCA) is a technique for dimensionality reduction that preserves as much variability of the data as possible, by computing principal components of the data. Principal components are sequences of orthogonal unit vectors that form an orthonormal basis, in which the original dimensions of the data are linearly uncorrelated. When projecting the data onto this new basis, these vectors can be seen as the directions that maximize the variance of the projected data, with the first principal component representing the maximum variance.
Let $\mathbf{F}_{G}$ denote the graphlet degree matrix of graph $G$ for a set of orbits $\mathcal{O}$, an $n\times m$ matrix where $n=|V(G)|$, $m=|\mathcal{O}|$ and $[\mathbf{F}_{G}]_{i,j}$ represents the frequency of orbit $j$ for node $i$. We assume that the frequencies have been normalized, a preprocessing step advised before applying PCA (to prevent notation overload, we use the same $\mathbf{F}_{G}$ to denote the normalized version of the graphlet degree matrix). The principal components of this matrix are defined by $\mathbf{V}=\mathbf{F}_{G}\mathbf{W}$, where $\mathbf{W}$ is a $m\times m$ matrix whose columns are the eigenvectors of $\mathbf{F}_{G}^{\intercal}\mathbf{F}_{G}$. To retain only the first $L$ components, the matrix $\mathbf{W}$ can be truncated to $\mathbf{W}_{L}$, with dimensions $m\times L$, leading to a transformation $\mathbf{V}_{L}=\mathbf{F}_{G}\mathbf{W}_{L}$. This truncation is done such that the $L$ eigenvectors kept are those corresponding to the largest eigenvalues of $\mathbf{F}_{G}^{\intercal}\mathbf{F}_{G}$. The original graphlet degree matrix can be reconstructed as $\hat{\mathbf{F}}_{G}=\mathbf{V}_{L}\mathbf{W}_{L}^{\intercal}=\mathbf{F}_{G}\mathbf{W}_{L}\mathbf{W}_{L}^{\intercal}$. The goal of PCA is to learn $\mathbf{W}_{L}$ such that the variance of the original data preserved is maximized, while also minimizing the total squared reconstruction error $||\mathbf{F}_{G}-\hat{\mathbf{F}}_{G}||_{2}^{2}=||\mathbf{V}\mathbf{W}^{\intercal}-\mathbf{V}_{L}\mathbf{W}_{L}^{\intercal}||_{2}^{2}$.
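The reconstruction $\hat{\mathbf{F}}_{G}=\mathbf{F}_{G}\mathbf{W}_{L}\mathbf{W}_{L}^{\intercal}$ can be sketched via an eigendecomposition of $\mathbf{F}_{G}^{\intercal}\mathbf{F}_{G}$ ($\mathbf{F}_{G}$ assumed already normalized); `pca_reconstruct` is an illustrative helper, not the reference code.

```python
import numpy as np

def pca_reconstruct(F, L):
    """Reconstruct F from its first L principal components."""
    # Eigenvectors of F^T F, sorted by decreasing eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(F.T @ F)
    order = np.argsort(eigvals)[::-1]
    W_L = eigvecs[:, order[:L]]   # m x L truncated basis W_L
    return F @ W_L @ W_L.T        # n x m reconstruction F_hat
```

With $L=m$ the basis is complete and the reconstruction is exact, consistent with the remark below about no noise being removed.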
We use the reconstructed graphlet degree matrix to compare networks with NetEmd, keeping Equation 1 virtually unchanged from the original formulation:
$$PCA\_NetEmd_{\mathcal{O}}(G,H)=\frac{1}{m}\sum\limits_{i=1}^{m}EMD^{*}(\hat{p}_{o_{i}}(G),\hat{p}_{o_{i}}(H)),$$
(3)
where $\hat{p}_{o_{i}}(G)$ and $\hat{p}_{o_{i}}(H)$ are obtained from $\hat{\mathbf{F}}_{G}$ and $\hat{\mathbf{F}}_{H}$ respectively, instead of $\mathbf{F}_{G}$ and $\mathbf{F}_{H}$.
3.2.1 Choosing the number of components.
Choosing an appropriate number of components to project the data down to affects the reconstruction error and therefore the amount of noise removed. If we take $L=m$, then clearly we are able to reconstruct the original frequencies perfectly and no noise has been removed from the data. On the other hand, picking a value of $L$ that is too low forgoes descriptiveness of the data. A common strategy for picking the number of components is through the amount of variance explained by each component, a strategy that allows us to adapt the number of components to the networks we are comparing.
The sample covariance matrix of the orbit frequencies is proportional to $\mathbf{F}_{G}^{\intercal}\mathbf{F}_{G}$ (the derivation of this result can be found in [18], pp. 30-31) and contains the variance of the frequency of each orbit on its diagonal. By the spectral theorem, this covariance matrix can be diagonalized by its eigenvectors, and the entries of the resulting diagonal matrix are the corresponding eigenvalues. These eigenvalues represent the variability along each axis of the projected space. Therefore, by taking the sum of the $L$ highest eigenvalues, the ones corresponding to the first $L$ components, we get the variance explained by the first $L$ principal components. The ratio $\sum_{1}^{L}\lambda_{i}/\sum_{1}^{m}\lambda_{i}$, where $\lambda_{i}$ is the eigenvalue corresponding to the $i$th eigenvector, measures the proportion of variance explained by the first $L$ principal components. Finally, to condition the number of components on the proportion of variance explained, the smallest $L$ is calculated such that $\sum_{1}^{L}\lambda_{i}/\sum_{1}^{m}\lambda_{i}\geq r$, where $r\times 100\%$ is the percentage of variance explained.
3.3 Independent Component Analysis
Independent component analysis (ICA) is a statistical technique in which an observed vector of random variables is thought to be the linear combination of unknown independent components. These components are assumed to be mutually statistically independent and with zero mean. The classic example of an ICA application is a party where a microphone is picking up multiple conversations and the goal is to separate the source signal from each conversation from the mixed one picked up by the microphone.
Transposing ICA to the domain of network comparison, we can draw an analogy with the above application by considering that the voices in the conversations are the nodes in the network and the orbit frequencies are the data points captured by the microphone. Our goal is then to search for the source signals, i.e., network characteristics stemming from its generation mechanism that produce such frequency distributions. We conjecture that by using these source signals to reduce noise in the orbit frequency distributions, our network comparison measure becomes able to more accurately distinguish networks with different generation mechanisms.
As before, let $\mathbf{F}_{G}$ denote the graphlet degree matrix of graph $G$ for a set of orbits $\mathcal{O}$ and let $\mathbf{f}_{i}$ be the vector of frequencies of node $i$, i.e. $(\mathbf{f}_{i})_{j}=[\mathbf{F}_{G}]_{i,j}$. We can write $\mathbf{f}_{i}$ as $\mathbf{f}_{i}=\mathbf{A}\mathbf{s}_{i}$, where $\mathbf{A}$ is called the mixing matrix and $\mathbf{s}_{i}$ are the $L$ independent components. The goal of ICA is to estimate $\mathbf{A}$ and $\mathbf{s}_{i}$ from $\mathbf{f}_{i}$ only. In practice, algorithms to calculate independent components compute a weight matrix $\mathbf{W}$, a pseudo-inverse of $\mathbf{A}$, and obtain the independent components through $\mathbf{s}_{i}=\mathbf{W}\mathbf{f}_{i}$. Using the FastICA algorithm [16, 15], $\mathbf{W}$ is constructed iteratively by finding unit vectors $\mathbf{w}$ such that the projection $\mathbf{w}^{\intercal}\mathbf{f}_{i}$ maximizes non-gaussianity, measured by the approximation of negentropy $J(\mathbf{w}^{\intercal}\mathbf{f}_{i})\propto\left[E(G(\mathbf{w}^{\intercal}\mathbf{f}_{i}))-E(G(\mathbf{\nu}))\right]^{2}$, where $G(x)=\log\cosh x$ is the contrast function (not to be confused with the graph $G$) and $\mathbf{\nu}$ is a standard Gaussian random variable with mean 0 and variance 1. These unit vectors $\mathbf{w}$ are combined into the weight matrix $\mathbf{W}$ and decorrelated to prevent convergence to the same maxima by $\mathbf{W}=\mathbf{W}/\sqrt{\|\mathbf{W}\mathbf{W}^{\intercal}\|}$ (the matrix norm is calculated using the $l^{2}$ norm); the update $\mathbf{W}=\frac{3}{2}\mathbf{W}-\frac{1}{2}\mathbf{W}\mathbf{W}^{\intercal}\mathbf{W}$ is then repeated until convergence.
Upon obtaining the weight matrix, we calculate its pseudo-inverse to obtain the mixing matrix $\mathbf{A}$ and calculate $\hat{\mathbf{F}}_{G}$ similarly to what was done with PCA: we project $\mathbf{F}_{G}$ to a lower dimensional space using $\mathbf{W}$ and apply the mixing matrix $\mathbf{A}$ to retrieve the original data. The final NetEmd formula then takes a similar form as for $PCA\_NetEmd$:
$$ICA\_NetEmd_{\mathcal{O}}(G,H)=\frac{1}{m}\sum\limits_{i=1}^{m}EMD^{*}(\hat{p}_{o_{i}}(G),\hat{p}_{o_{i}}(H)),$$
(4)
with $\hat{p}_{o_{i}}(G)$ and $\hat{p}_{o_{i}}(H)$ obtained from $\hat{\mathbf{F}}_{G}$ and $\hat{\mathbf{F}}_{H}$ respectively, as before.
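The project-and-reconstruct step above can be sketched with scikit-learn's FastICA (which implements the fixed-point algorithm with the $\log\cosh$ contrast); `ica_reconstruct` is an illustrative helper, not the reference implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_reconstruct(F, L, seed=0):
    """Project F onto L independent components and map back to m dims."""
    ica = FastICA(n_components=L, fun="logcosh", random_state=seed,
                  max_iter=500)
    S = ica.fit_transform(F)         # n x L estimated sources s_i
    return ica.inverse_transform(S)  # n x m reconstruction F_hat
```

As with PCA, choosing $L$ equal to the number of orbits makes the reconstruction exact (the mixing matrix is then square and invertible), so the denoising effect comes entirely from the truncation to $L < m$ components.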
4 Directed NetEmd
Directed networks pose challenges unlike the ones observed in undirected networks, due to the combinatorial explosion in the number of orbits introduced by distinguishing $(u,v)$ and $(v,u)$ as different edges between $u$ and $v$. There are 730 orbits when considering directed graphlets of size up to 4, compared to 15 in the undirected case, and scaling up to size 5 in directed networks becomes unfeasible as the number of orbits rises to $45,637$. This sharp increase makes the task of counting orbit frequencies for each node even harder, since no analytical approaches are known for the directed case analogous to ORCA [12], which crafts sets of equations exploiting combinatorial relationships between smaller graphlets to compute orbit counts. Instead, enumeration-based approaches are required, which are known to be at least an order of magnitude slower than analytical approaches in undirected networks. In order to adapt NetEmd to directed networks, we use the G-Trie [38] data structure and the counting algorithm proposed by Aparicio et al. [2], publicly available at [36], modified to return the graphlet degree matrix instead of the graphlet degree distribution.
How to handle this increased number of orbits in the comparison is also a challenging problem, especially when comparing networks that contain only a small subset of orbits, since differences within those orbits can get diluted when taking the average over the whole set. Aparicio et al. [2] argue that the comparison should only be done over the orbits present in at least one of the networks, using networks $G$ and $H$ in Figure 2 as an example. In this case, the authors show that the original formulation of the Graphlet Degree Distribution Agreement (GDA) (described in Section 5.4.2) inflates the similarity score between the two networks far beyond what would be expected from two such distinct-looking networks, with an agreement score of 0.92 against 0.32 when using their modified version that compares only orbits present in at least one of the networks. This modification to GDA is well founded because GDA is a measure of agreement, bounded between 0 and 1 and meant to be interpretable; a score close to 1 means that the networks are similar and a score close to 0 the opposite, so the magnitude of the value that the measure outputs is of interest. However, NetEmd is a measure of distance between networks, and the relative difference between distances is more informative than the absolute value of a distance, so including the information that an orbit is missing in both networks improves our ability to tell whether they are less distant from each other than another pair of networks.
With the above in mind, we propose two versions of directed NetEmd. Given two networks $G$ and $H$ and a set of $m$ orbits $\mathcal{O}=\{o_{1},o_{2},\ldots,o_{m}\}$, the first uses all $m$ orbits in this set, which in practice are all the orbits in graphlets of size up to 3 or 4, and the formulation is the same as the original NetEmd in Equation 1. The second, which we refer to as weighted NetEmd, uses the same idea as Aparicio et al. [2]: we restrict the set of orbits to those that occur in at least one of the networks $G$ or $H$. This leads to a new set of $m^{\prime}$ orbits $\mathcal{O^{\prime}}=\{o_{1}^{\prime},o_{2}^{\prime},\ldots,o_{m^{\prime}}^{\prime}\}$, with $m^{\prime}\leq m$. The average in Equation 1 is taken over this set $\mathcal{O^{\prime}}$ instead of the original $\mathcal{O}$, but the rest of the formula is the same:
$$Weighted\_NetEmd_{\mathcal{O^{\prime}}}(G,H)=\frac{1}{m^{\prime}}\sum\limits_{i=1}^{m^{\prime}}EMD^{*}(p_{o_{i}^{\prime}}(G),p_{o_{i}^{\prime}}(H)).$$
(5)
Graphs $G^{\prime}$ and $H^{\prime}$ in Figure 2 show an analogous example to the one shown by Aparicio et al. [2], demonstrating the difference between weighted and original NetEmd applied to directed networks. Although it is usually impractical to scale up to size 5 in directed networks, in a small example like this one it becomes feasible to do so. In this context, we calculate a distance of $5\times 10^{-5}$ using all orbits and $0.12$ when using the weighted version.
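The orbit restriction behind weighted NetEmd can be sketched as follows; `weighted_netemd` is an illustrative helper that averages a supplied per-orbit distance only over orbit columns that are non-zero in at least one of the two graphlet degree matrices.

```python
import numpy as np

def weighted_netemd(FG, FH, emd_star):
    """Average emd_star over orbits present in at least one network."""
    present = (FG.sum(axis=0) > 0) | (FH.sum(axis=0) > 0)
    idx = np.flatnonzero(present)  # indices of the m' retained orbits
    return float(np.mean([emd_star(FG[:, j], FH[:, j]) for j in idx]))
```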
5 Experimental Setup
Experiments were performed on an AMD Opteron Processor 6380 with 1.4 GHz and 2 MB of cache memory, using Python version 3.5.4. The implementation of our methods is available online at https://github.com/migueleps/denoise-dir-netemd.
5.1 Measure of Cluster Performance
Given a set of networks $\mathcal{G}=\{G_{1},G_{2},\ldots,G_{N}\}$, divided into disjoint classes $C=\{c_{1},c_{2},\ldots,c_{m}\}$, we use the performance measure proposed by Wegner et al. [46], which captures the idea that networks of the same class should be nearer to each other than to networks of other classes. It is defined as the empirical probability $P(G)=\Pr\left(d(G,G_{1})<d(G,G_{2})\right)$, where $G_{1}$ is a network selected randomly from the same class as $G$, $G_{2}$ is randomly selected from a different class, and $d$ is the network comparison statistic. The performance over the whole dataset is the average $P(G)$ over all the networks in $\mathcal{G}$, written as $\overline{P}=P(\mathcal{G})=\frac{1}{|\mathcal{G}|}\sum_{G\in\mathcal{G}}P(G)$. In Appendix D, we present results using area under the precision-recall curve (AUPR) and adjusted Rand index (ARI).
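The measure $\overline{P}$ can be computed exactly over all (same-class, different-class) pairs rather than by sampling; `p_bar` is an illustrative helper taking a precomputed pairwise distance matrix `D` and class `labels`.

```python
import numpy as np

def p_bar(D, labels):
    """Average over networks of the empirical probability that a
    same-class network is closer than a different-class one."""
    labels = np.asarray(labels)
    n = len(labels)
    scores = []
    for g in range(n):
        same = [D[g, h] for h in range(n) if h != g and labels[h] == labels[g]]
        diff = [D[g, h] for h in range(n) if labels[h] != labels[g]]
        scores.append(np.mean([s < t for s in same for t in diff]))
    return float(np.mean(scores))
```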
5.2 Synthetic datasets
We reproduce the experimental setup of Wegner et al. [46], testing our proposed modifications to NetEmd on the same eight random network models. These models are Erdős-Rényi [7], Barabási-Albert preferential attachment [3], the configuration model, geometric random graphs [8], the geometric gene duplication model [10], the duplication divergence model of Vazquez et al. [44], the duplication divergence model of Ispolatov et al. [17] and Watts-Strogatz [45]. Details of the parameters for each model are described in Appendix E.
We generate 10 networks per model per combination of number of nodes and average degree. The set of number of nodes used is $N\in\{1250,2500,5000,10000\}$ and the set of average degrees is $k\in\{10,20,40,80\}$. This leads to a total of 160 networks per model and 1280 networks in total.
For the directed datasets, we add varying levels of reciprocity $\rho\in\{0,0.25,0.5,0.75,1\}$. To generate a directed network with reciprocity $\rho$, we take the undirected version, we duplicate and invert all edges (if $(u,v)$ is in the undirected network, we add $(v,u)$ to the directed version) and we then take a proportion $1-\rho$ of these pairs and choose randomly a direction to remove (either $(u,v)$ or $(v,u)$). This is done incrementally such that if $(u,v)$ is removed from the dataset with 75% reciprocity, then it is also removed from the datasets with 50%, 25% and 0% reciprocity.
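The reciprocity procedure above can be sketched as follows (without the incremental nesting across reciprocity levels); `directed_with_reciprocity` is an illustrative helper, not the exact generation script.

```python
import random

def directed_with_reciprocity(und_edges, rho, seed=0):
    """Duplicate every undirected edge in both directions, then remove a
    random direction from a (1 - rho) fraction of the pairs."""
    rng = random.Random(seed)
    pairs = [tuple(sorted(e)) for e in und_edges]
    edges = {(u, v) for u, v in pairs} | {(v, u) for u, v in pairs}
    to_break = rng.sample(pairs, round((1 - rho) * len(pairs)))
    for u, v in to_break:
        edges.discard(rng.choice([(u, v), (v, u)]))
    return edges
```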
These datasets of synthetic networks give rise to two tasks aimed at gauging how well a network comparison measure separates clusters according to their generation mechanism, with the ground truth given by the random network model used to generate the networks in each cluster. The first, simpler task is to separate networks with the same number of nodes and the same density. To this end, we create 16 groups based on the combinations of $N$ and $k$, each group containing 10 realizations of each random network model, for a total of 80 networks. This task is equivalent to $RG_{1}$ in the original NetEmd paper [46]. The second task, equivalent to $RG_{3}$ in the original NetEmd paper, is to compare all 1280 networks simultaneously, finding the 8 clusters of 160 networks. This task measures how sensitive the comparison measure is to differences of an order of magnitude in the number of nodes and edges, making the separation by model type more difficult. In directed networks, we repeat Tasks 1 and 2 for each level of reciprocity, determining how sensitive the network comparison is to this third parameter, which impacts the set of orbits available for comparison (the set of orbits at 0% reciprocity is disjoint from the set of orbits at 100% reciprocity).
5.3 Real world datasets
We use the dataset of Onnela et al. [30] to validate our methodology on a mix of real world and synthetic networks. The multiple sources that make up this dataset lead to a heterogeneous set of networks; however, the ability to separate these networks according to their domain is desirable in a network comparison method. We were unable to find sources for the original 746 networks, so we use the reduced set proposed by Ali et al. [1] with 151 unweighted and undirected networks. There is no ground truth for the true clusters in this dataset, and the dendrograms constructed through the methods proposed by Onnela et al. [30] and Ali et al. [1] disagree on the composition of each cluster. Therefore, we aim to reconstruct clusters according to the type of data, which can be visualized in the supplementary material of Ali et al. [1].
For directed networks, we download four datasets from the SNAP library [23]: Gnutella peer-to-peer file sharing network from August 2002 [39, 22] (9 networks with an average reciprocity of 0%), CAIDA autonomous systems relationships datasets from January 2004 to November 2007 [21] (122 networks with an average reciprocity of 100%), ego networks of circles from Google+ [27] (132 networks with an average reciprocity of 28%) and ego networks of lists from Twitter [27] (973 networks with an average reciprocity of 54%).
5.4 Other network comparison methods
5.4.1 GCD.
Yaveroğlu et al. [49] note that orbit counts have dependencies between them, making some of them redundant, since they can be expressed as linear combinations of other orbit counts. The authors identify 11 out of 15 orbits in graphlets of size up to 4 and 56 out of 73 orbits in graphlets of size up to 5 as non-redundant. The authors construct the graphlet degree vector of each node in the network using this reduced number of orbits (although the full set of orbits may also be used), making up a matrix where each row is the graphlet degree vector of a node, and compute Spearman’s correlation coefficient between each pair of orbits, i.e., each pair of columns in this matrix. The pairwise Spearman’s correlation coefficients are aggregated in a square $m\times m$ matrix called the Graphlet Correlation Matrix (GCM), where $m$ is the number of orbits used (e.g., 11 if using non-redundant orbits of graphlets up to size 4).
To compare two networks, the authors propose the Graphlet Correlation Degree (GCD), which is the Euclidean distance between the upper triangle of the GCM of each network.
Sarajlić et al. [41] extend GCD to directed networks, finding 13 non-redundant orbits in graphlets of size up to 3 and 129 in graphlets of size up to 4.
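The GCM/GCD construction can be sketched as follows for $m \geq 3$ orbit columns; `gcm` and `gcd` are illustrative helpers, not the authors' implementation.

```python
import numpy as np
from scipy.stats import spearmanr

def gcm(F):
    """Graphlet Correlation Matrix: pairwise Spearman correlations
    between the orbit columns of the graphlet degree matrix F."""
    corr, _ = spearmanr(F)  # m x m for m >= 3 columns
    return np.atleast_2d(corr)

def gcd(FG, FH):
    """Euclidean distance between the upper triangles of the two GCMs."""
    iu = np.triu_indices(FG.shape[1], k=1)
    return float(np.linalg.norm(gcm(FG)[iu] - gcm(FH)[iu]))
```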
5.4.2 GDA.
Przulj et al. [32] introduced the Graphlet Degree Distribution (GDD), calculated as follows. Let $d^{o}_{G}(k)$ be the number of nodes of graph $G$ that participate $k$ times in orbit $o$, i.e., $d^{o}_{G}(k)$ is the graphlet degree distribution of orbit $o$. The authors scale the distribution to decrease the contributions of larger orbits by calculating $S^{o}_{G}(k)=d^{o}_{G}(k)/k$ and then normalize the distribution as $N_{G}^{o}(k)=S^{o}_{G}(k)/\sum_{k=1}^{\infty}S_{G}^{o}(k)$. To compare the GDD distributions of the same orbit in different networks $G$ and $H$, the authors propose the GDD-agreement (GDA) metric defined as:
$$A^{o}(G,H)=1-\left(\sum\limits_{k=1}^{\infty}\left[N_{G}^{o}(k)-N_{H}^{o}(k)\right]^{2}\right)^{1/2}.$$
As a measure of agreement, the output of GDA is 1 if the distributions are identical, as opposed to NetEmd, which is a measure of distance and will therefore output 0 for identical distributions. To aggregate the agreements of the multiple orbits, the authors propose either an arithmetic or a geometric mean. We compare against the implementation of Aparicio et al. [2], which uses the arithmetic mean and is available at [36]. This implementation follows the idea that if a graphlet has a frequency of 0 in both networks, then the orbits from that graphlet are excluded from the arithmetic mean, in both the directed and undirected versions.
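The scaling, normalization, and per-orbit agreement $A^{o}(G,H)$ above can be sketched directly; `normalized_gdd` and `gda` are illustrative helpers operating on sparse maps from $k$ to $d^{o}_{G}(k)$.

```python
import math

def normalized_gdd(d):
    """N^o_G(k) from d = {k: d^o_G(k)}, k >= 1: scale by 1/k, normalize."""
    S = {k: v / k for k, v in d.items()}
    total = sum(S.values())
    return {k: v / total for k, v in S.items()}

def gda(dG, dH):
    """Per-orbit GDD agreement A^o(G, H)."""
    NG, NH = normalized_gdd(dG), normalized_gdd(dH)
    ks = set(NG) | set(NH)
    return 1 - math.sqrt(sum((NG.get(k, 0.0) - NH.get(k, 0.0)) ** 2
                             for k in ks))
```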
6 Results
6.1 Undirected Results
We present the results for Tasks 1 and 2 in undirected networks and the results for the Onnela et al. [30] dataset in Table 1. The results for Task 1 show the average and standard error for the 16 values of $\overline{P}$, one for each combination of $N$ and $k$. The results for Task 2 and Onnela et al. dataset show the single value of $\overline{P}$ after comparing the 1280 and 151 networks, respectively.
We find that our proposed modifications to NetEmd achieve an increase in performance, with $ICA\_NetEmd$ outperforming the original NetEmd on the synthetic datasets and $PCA\_NetEmd$ on the Onnela et al. dataset. The performance gain in Task 1 is not significant, as the results are within a standard error of each other, but on Task 2, the more difficult task, the difference is more pronounced. The performance improvement on the synthetic datasets is not reflected in the Onnela et al. dataset, where 15 components are necessary to match the performance of the original NetEmd. The best performance on the Onnela et al. dataset is achieved by $PCA\_NetEmd$ with 80% explained variance, which contrasts with the results on the synthetic datasets, where 80% explained variance achieves worse performance than the original NetEmd. Using higher values of explained variance in $PCA\_NetEmd$ also improves performance over the original NetEmd on Task 2 and the Onnela et al. dataset, but not on Task 1.
6.2 Directed Results
We show the results for Task 1 in directed networks in Table LABEL:tab:scores_dir_task1 and the results for Task 2 and the dataset of real world directed networks in Table 2. The results for Task 1 show the average and standard error over the 16 values of $\overline{P}$, one for each combination of $N$ and $k$, for each level of reciprocity. The results for Task 2 and the real world directed networks dataset show the single value of $\overline{P}$ after comparing the 1280 networks (per level of reciprocity) and the 1232 networks, respectively.
Similarly to the undirected case, we find that the results for each algorithm in Task 1 are similar across different levels of reciprocity, meaning that the various versions of NetEmd and DGCD are able to distinguish the generation mechanism regardless of how many edges are reciprocated in a directed network. DGCD with 129 orbits achieves the best performance for 0%, 25%, 50% and 75% reciprocity, and $PCA\_NetEmd$ with 90% explained variance for 100% reciprocity. We observe no significant difference between using NetEmd with all orbits, with weighted orbits, or coupled with dimensionality reduction techniques. TriadEMD results are better than our NetEmd version with orbits from size 3 graphlets, but not significantly, indicating that including graphlets of size 2 in the comparison does not help differentiate between models in this task. This is likely due to the configuration model, which shares the same degree distribution as the duplication divergence model of Vazquez et al. [44].
In Task 2, our results show a noticeable gain in performance when using $ICA\_NetEmd$ over the other NetEmd versions and DGCD, in particular when using only 2 components and at reciprocity levels below 100%. We find that using only the orbits from graphlets of size 2 and 3 achieves better results for this task than using orbits from graphlets of size up to 4, with the exception of the two extremes of reciprocity (0% and 100%), where having more orbits available for comparison helps performance. In this task, we also find that $Weighted\_NetEmd$ degrades performance compared to using all orbits. Results also show that our versions of NetEmd perform very similarly to TriadEMD, with a slight advantage to TriadEMD at 0%, 75% and 100% reciprocity, which is perhaps expected, again because the configuration model adds a confounding factor when graphlets $G_{0}$ and $G_{1}$ from Figure 1 are used in the comparison.
On the other hand, $Weighted\_NetEmd$ reaches the best performance on the real directed networks dataset, where, similarly to Task 2 in synthetic networks, using smaller graphlets yields better results. Table 2 shows that $ICA\_NetEmd$ with 10 and 15 components performs better than NetEmd without dimensionality reduction and with size 4 graphlets. This comparison is unfair to $ICA\_NetEmd$, as results suggest that when there is a large discrepancy between network sizes, using smaller graphlet sizes improves the performance of NetEmd. Therefore, although $Weighted\_NetEmd$ achieves the best performance among the parameters we include in Table 2, the top performance on the real world directed networks dataset is achieved by $ICA\_NetEmd$ with size 3 graphlets and 10 components. All four versions of NetEmd that we propose achieve better results than DGCD and GDA on this dataset. This is also true when comparing our versions with size 3 graphlets against TriadEMD, indicating that in real world networks the degree distribution contributes positively to discriminating networks from different sources.
6.3 Performance by number of ICA components
We examine the impact on performance of varying the number of components used for $ICA\_NetEmd$, with the caveat that increasing the number of components leads to greater computational cost, as the FastICA algorithm takes more iterations to converge. We also explore the performance of $ICA\_NetEmd$ with smaller graphlet sizes, namely sizes 3 and 4 for undirected networks and size 3 for directed networks. There are two ways of using smaller graphlets with $ICA\_NetEmd$: the more natural way, in which we compute the independent components using the set of orbits of the desired graphlet size, and an alternative way, in which the components are computed using a larger graphlet size and the reconstructed graphlet degree matrix is subsequently truncated to include only the orbits of the desired graphlet size. For example, when calculating network distances using undirected graphlets of size up to 4, there are 15 orbits. In the first method, the graphlet degree matrix $\mathbf{F}_{G}$ is a $|V(G)|\times 15$ matrix and we simply calculate $\hat{\mathbf{F}}_{G}$ as described in Section 3, constraining the number of components to a maximum of 14. In the second method, $\mathbf{F}_{G}$ is a $|V(G)|\times 73$ matrix, allowing a maximum of 72 components to compute $\hat{\mathbf{F}}_{G}$, which is then truncated to 15 columns.
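The two ways of applying ICA described above can be sketched as follows: a minimal illustration using scikit-learn's FastICA on a synthetic graphlet degree matrix. The matrix shapes mirror the undirected example (15 size-4 orbits inside 73 size-5 orbits), but the data itself is hypothetical, and the assumption that the first 15 columns correspond to the size-4 orbits depends on the orbit ordering of the counting tool.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical graphlet degree matrix: 200 nodes x 73 undirected orbits
# (graphlets up to size 5); in practice this would come from a tool
# such as ORCA, not from random sampling as here.
rng = np.random.default_rng(0)
F_G = rng.poisson(5.0, size=(200, 73)).astype(float)

def denoise_gdm(F, n_components, keep_orbits=None):
    """Reconstruct the graphlet degree matrix from a reduced number of
    independent components; optionally truncate the reconstruction to
    the orbits of a smaller graphlet size (the second method)."""
    ica = FastICA(n_components=n_components, max_iter=1000, random_state=0)
    S = ica.fit_transform(F)          # per-node scores on each component
    F_hat = ica.inverse_transform(S)  # reconstruction \hat{F}_G
    if keep_orbits is not None:
        # Assumes the first columns are the smaller-graphlet orbits.
        F_hat = F_hat[:, :keep_orbits]
    return F_hat

# First method: components computed directly on the 15 size-4 orbits.
F_hat_direct = denoise_gdm(F_G[:, :15], n_components=10)
# Second method: components computed on all 73 orbits, then truncated.
F_hat_trunc = denoise_gdm(F_G, n_components=10, keep_orbits=15)
```

Both paths yield a $|V(G)|\times 15$ matrix whose columns feed the per-orbit EMD comparisons; they differ only in which orbit set the components were estimated from.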
Figures 5 to 8 show the performance with varying numbers of components for, in order: Task 1 in undirected networks, Task 2 in undirected networks, Task 1 in directed networks, Task 2 in directed networks, the Onnela et al. dataset, and the real world directed networks.
We find that adding more components to ICA does not always translate into better performance; instead, there is a performance maximum attained with a low number of components that depends on the task. Adding more components beyond this maximum leads to a decline in performance, either an immediately visible decline (Figures 5, 8 and 8) or one after plateauing at that maximum value (Figures 5, 5 and 8). This behaviour indicates that our ICA algorithm is working as expected: there is a suitable number of components that maximizes the noise removed from the orbit frequency distributions, and adding further components shrinks the difference between $\mathbf{F}_{G}$ and $\hat{\mathbf{F}}_{G}$, negating the benefit of that noise reduction.
Figures 5 and 8 show a sharp decline in performance when using graphlets of size up to 3 with reciprocity 0% and 100%, after 13 and 4 components respectively. This is caused by these extreme levels of reciprocity limiting the orbits that can occur in the networks we compare. With 0% reciprocity, only graphlets with no reciprocal edges can occur, so all orbits in graphlets with reciprocal edges have a frequency of 0. Since only 13 orbits belong to graphlets without reciprocal edges, performance degrades once more than 13 independent components are computed. With 100% reciprocity, the situation is similar, but there are only 4 orbits in graphlets of size 2 and 3 for which all edges are reciprocal (orbits 2, 27, 28 and 32 in Figure 1).
7 Conclusion
We presented two extensions of NetEmd to directed networks: a direct extension that uses all orbits of graphlets up to size 4, and another that compares networks based only on the orbits that occur in at least one of the networks. We showed that our methodology achieves state-of-the-art performance on large datasets of synthetic networks with heterogeneous network sizes and on datasets of real world networks.
We also proposed adding dimensionality reduction techniques, namely PCA and ICA, as a preprocessing step to the network comparison. The goal of this step is to use dimensionality reduction to attenuate noise in orbit frequencies, which can be introduced by random number generation (in synthetic networks) or by data collection and representation (in real world networks). Results show that this preprocessing improves the performance of NetEmd not only when comparing networks with the same number of nodes and edges, but especially on large datasets containing networks of different sizes.
From an end-user perspective, our extensive testing allows us to recommend guidelines on which version of NetEmd to use in each situation. If the set of networks is homogeneous in number of nodes and average degree, using the largest graphlet size available (size 5 in undirected, size 4 in directed networks) coupled with ICA (10 to 15 components) leads to the best results. In cases where the number of nodes or the average degree varies widely across the networks in the dataset, using smaller graphlets is generally advised. The exception is in directed networks: if the connections within the networks have no reciprocity, then using graphlets of size 4 (with ICA) is more likely to yield an accurate comparison, due to the larger set of orbits with non-zero frequency. On the other hand, if the average reciprocity allows using the whole range of graphlets of sizes 3 and 4, then using graphlets of size 3 coupled with ICA is more likely to lead to an accurate distinction between networks. In this case, 2 components might be enough if the network sizes are all within one order of magnitude of each other, but 5 to 10 components are preferable if the differences are wider.
Acknowledgments
MEPS is funded by Engineering and Physical Sciences Research Council Manchester Centre for Doctoral Training in Computer Science (grant number EP/I028099/1). TH is supported by the UKRI through the JUNIPER modelling consortium (grant no. MR/V038613/1) and the Engineering and Physical Sciences COVID-19 scheme (grant number EP/V027468/1), the Royal Society (grant number INF/R2/180067), and the Alan Turing Institute for Data Science and Artificial Intelligence.
Appendix A Discussion on the time complexity of NetEmd
Calculating the NetEmd measure is a process that can be separated into two phases: acquiring the distributions of the network statistics of interest, and comparing those statistics using the EMD. In the case of the algorithm we propose in this work, obtaining the distributions of network statistics involves calculating the graphlet degree matrix and performing PCA or ICA on this matrix.
Obtaining the graphlet degree matrix is the most computationally expensive step of this process, with a time complexity of $O(Nd^{m-1})$, where $N$ is the number of nodes in the network, $d$ the maximum degree of any node and $m$ the size of the graphlets being enumerated. In undirected networks, we use the combinatorial algorithm ORCA [12], which relies on an analytical approach to set up a system of linear equations relating different orbit frequencies. In directed networks, no such approach is known [37], so we rely on G-Tries [38, 2], a data structure that supports graphlet enumeration by representing subgraphs in a prefix tree and is, to our knowledge, the state of the art for enumerating directed graphlets [37, 2].
The complexity of PCA splits into two parts: computing the covariance matrix is $O(Np^{2})$ and performing the eigenvalue decomposition is $O(p^{3})$, where $N$ is the number of nodes in the network and $p$ the number of orbits being considered. This leads to an overall complexity of $O(Np^{2}+p^{3})$.
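In practice, the number of principal components retained by $PCA\_NetEmd$ is set by an explained-variance threshold. A minimal sketch of that selection with scikit-learn, whose `PCA` accepts a fractional `n_components` for exactly this purpose (the data here is synthetic, not an actual graphlet degree matrix):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical graphlet degree matrix: 500 nodes x 15 undirected orbits.
rng = np.random.default_rng(1)
F = rng.poisson(3.0, size=(500, 15)).astype(float)

# A fractional n_components keeps the smallest number of components
# whose cumulative explained variance reaches the threshold (here 90%).
pca = PCA(n_components=0.90, svd_solver="full")
scores = pca.fit_transform(F)          # N x c scores, with c <= 15
F_hat = pca.inverse_transform(scores)  # denoised reconstruction
```

The reconstruction `F_hat` plays the same role as $\hat{\mathbf{F}}_{G}$ in the ICA pipeline: its columns replace the raw orbit degree distributions before the EMD comparison.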
The complexity of ICA is harder to characterize. Firstly, the FastICA algorithm assumes that the data has been centered and whitened [16]. The implementation we use, from scikit-learn [31], uses PCA for the whitening preprocessing, so its complexity is at least as high as that of PCA. The iterative algorithm to find the weight matrix $\mathbf{W}$ is repeated until the matrix converges, but there is no guarantee in advance that the algorithm converges, nor of how many iterations it takes to do so. The scikit-learn implementation defines a maximum number of iterations after which execution stops; we set this value to 1000. Each iteration involves calculating $\log\cosh(\mathbf{w}^{\intercal}\mathbf{f}_{i})$ for each component and each node, which the scikit-learn implementation does in $O(Nc^{2})$, where $c$ is the number of components to be calculated, plus a decorrelation step done in $O(c^{2})$. The overall complexity of ICA then becomes $O(Np^{2}+p^{3}+INc^{2})$, where $I$ is the number of iterations.
Wegner et al. [46] calculate the complexity of comparing two graphlet distributions using $EMD^{*}$ to be $O(k(N+N^{\prime})\log(N+N^{\prime}))$, where $N$ and $N^{\prime}$ are the number of nodes of each network and $k$ is the maximum number of function calls to the optimization algorithm used to align the distributions.
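A minimal sketch of the per-orbit $EMD^{*}$ comparison, assuming SciPy's 1-d Wasserstein distance as the underlying EMD: following the NetEmd definition of Wegner et al., each empirical distribution is rescaled to unit variance and the EMD is minimized over a translation offset.

```python
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.optimize import minimize_scalar

def emd_star(x, y):
    """EMD* between two empirical orbit-degree samples: rescale each
    to unit variance, then minimize the 1-d EMD over a translation."""
    x = np.asarray(x, float) / np.std(x)
    y = np.asarray(y, float) / np.std(y)
    # The EMD is convex in the offset, so a scalar minimizer suffices.
    return minimize_scalar(lambda t: wasserstein_distance(x + t, y)).fun
```

By construction the measure is invariant to shifting and positive rescaling of either sample, so `emd_star(x, x)` and `emd_star(x, 2 * x + 5)` are both (numerically) zero.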
Appendix B Results for Task 1 in directed networks
Appendix C Discussion on orbits from smaller graphlet sizes
Wegner et al. [46] have shown that reducing the graphlet size given as input to NetEmd can improve the quality of the clusters under certain conditions. A more common reason to use smaller graphlets for comparison is computational: as we show in Appendix A, increasing the graphlet size leads to more computation time in each step of the NetEmd framework. In the first step, computing orbit frequencies of graphlets of size 4 in directed networks or size 5 in undirected networks can be prohibitive for very large graphs. Another consideration is the time taken to compute the EMD between orbit distributions, with directed graphlets of size 4 requiring 22 times more calls to the $EMD^{*}$ function than size 3. Finally, computing the principal or independent components also has a cost that depends on the number of orbits used as features, which further adds to the computational load as the graphlet size grows. Therefore, it is important to understand in which situations decreasing the graphlet size given to NetEmd leads to similar or better performance, to avoid paying the high computational cost.
We previously presented the performance of $ICA\_NetEmd$ with smaller graphlet sizes, for multiple numbers of components, in Figures 5 to 8. The results for Task 1, both in directed and undirected networks, demonstrate that, when comparing networks of similar sizes and densities, using larger graphlets yields a more accurate comparison. The same holds for Task 2 in the undirected case and for the extreme cases of reciprocity (0% and 100%) in Task 2 of the directed experiments, where ICA achieves the top performance with size 5 undirected and size 4 directed graphlets, respectively. On the other hand, in the datasets of real world networks and in directed Task 2 with reciprocity of 25%, 50% and 75%, size 5 graphlets in undirected networks and size 4 in directed networks perform worse than size 4 and size 3, respectively.
For the datasets of real networks in particular, this difference in performance can be explained by noting that these datasets contain very small networks in which some graphlets have no matches. This issue is amplified as the graphlet size grows, and it adds a confounding factor to the network comparison, similar to the example of Figure 2. Incidentally, this is the only dataset where $Weighted\_NetEmd$ achieves better performance than the original formulation.
Figures 5 to 8 also show the performance of using the full set of size $k$ orbits to denoise the distributions of size $k-1$ or $k-2$ orbits. In this second way of using ICA, the only time saved compared to using the full set of orbits is the time to compute the EMD between orbit distributions, and we find that in no case does it lead to the best clustering performance. However, Figure 5 shows a breakpoint at 30 components, where the performance using size 3 and 4 orbits after denoising with size 5 becomes better than that of size 5. The reason is that, as more components are added, the reconstruction error $||\mathbf{F}_{G}-\hat{\mathbf{F}}_{G}||_{2}$ shrinks, so the performance with those parameters tends to approach the performance of the original NetEmd with those graphlet sizes.
The performance of $PCA\_NetEmd$ with smaller graphlet sizes follows a similar set of rules to $ICA\_NetEmd$: better performance in Task 1 when using larger graphlets, and in Task 2 and the real world networks when using smaller graphlet sizes. Tables 3 to 5 detail the results for multiple combinations of graphlet size and percentage of explained variance. Unlike for $ICA\_NetEmd$, our experimental setup does not allow us to claim that performance degrades or stabilizes beyond a certain value of explained variance. Instead, results suggest that the optimal threshold depends on both the dataset and the chosen graphlet size. As with $ICA\_NetEmd$, choosing a low percentage of explained variance for Task 2 in directed networks yields the best performance at 0%, 25%, 50% and 75% reciprocity, but otherwise choosing 90% or 95% explained variance seems to be a good rule of thumb for $PCA\_NetEmd$.
Appendix D Different metrics
D.1 Area Under Precision-Recall (AUPR)
Sarajlic et al. [41] evaluate the performance of their graph comparison tool using the Area Under the Precision-Recall curve (AUPR), following the framework proposed by Yaveroğlu et al. [50]. The metric is calculated as follows: for each value of a parameter $\epsilon\geq 0$, if the distance between two networks is less than $\epsilon$, then the networks belong to the same cluster. The parameter $\epsilon$ ranges from 0 to 1, in increments of $5\times 10^{-3}$. For each value of $\epsilon$, we compute the number of true positives, false positives, true negatives and false negatives. In this context, we define these concepts as:
•
True positive (TP): the distance between the networks is smaller than $\epsilon$ and they were generated by the same random network model.
•
False positive (FP): the distance between the networks is smaller than $\epsilon$ but they were generated by different random network models.
•
True negative (TN): the distance between the networks is greater than $\epsilon$ and they were generated by different random network models.
•
False negative (FN): the distance between the networks is greater than $\epsilon$ but they were generated by the same random network model.
From these quantities, we calculate precision as $\frac{TP}{TP+FP}$ and recall as $\frac{TP}{TP+FN}$, obtaining 200 tuples of (precision, recall), from which we are able to plot the precision-recall curve. The area under this curve is a metric relevant to our problem as it puts an emphasis on the positive predictive value of the model, disregarding the true negatives that compose the majority of datasets with imbalanced labels such as this one (with 8 random network models, positive labels comprise only $12.5\%$ of the dataset).
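The procedure above can be sketched directly from a pairwise distance matrix. This is a minimal implementation, not the original authors' code: `D` is assumed symmetric and normalized to $[0,1]$, and `labels` gives each network's generating model.

```python
import numpy as np

def aupr_from_distances(D, labels, step=5e-3):
    """Sweep eps over [0, 1], classify each network pair as 'same model'
    when its distance is below eps, and integrate the resulting
    precision-recall curve with the trapezoidal rule."""
    labels = np.asarray(labels)
    iu = np.triu_indices(len(labels), k=1)   # each unordered pair once
    d = D[iu]
    same = (labels[:, None] == labels[None, :])[iu]
    pts = []
    for eps in np.arange(0.0, 1.0 + step, step):
        pred = d < eps
        tp = np.sum(pred & same)
        fp = np.sum(pred & ~same)
        fn = np.sum(~pred & same)
        prec = tp / (tp + fp) if tp + fp else 1.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        pts.append((rec, prec))
    pts.sort(key=lambda t: (t[0], -t[1]))    # recall ascending
    return sum((pts[i][0] - pts[i - 1][0]) * (pts[i][1] + pts[i - 1][1]) / 2
               for i in range(1, len(pts)))

# Toy example: networks 0-1 and 2-3 come from the same models and are
# mutually close, so every threshold separates the pairs perfectly.
D = np.array([[0.0, 0.1, 0.9, 0.9],
              [0.1, 0.0, 0.9, 0.9],
              [0.9, 0.9, 0.0, 0.1],
              [0.9, 0.9, 0.1, 0.0]])
score = aupr_from_distances(D, ["ER", "ER", "BA", "BA"])  # 1.0 here
```

As the text notes, true negatives do not enter either precision or recall, which is why AUPR remains informative on label-imbalanced datasets.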
D.1.1 Undirected Results
Table 6 shows the AUPR for Task 1 and Task 2 in the undirected case and the Onnela et al. dataset.
In Task 1, according to AUPR, using PCA or ICA does not lead to a gain in performance, unlike what we observed when using $\overline{P}$. This is not surprising, as the improvement we measured with $\overline{P}$ was within a standard error and both methodologies are able to separate networks from these random models with a very high degree of accuracy. Task 1 serves as a proof of concept that our proposed measure is able to perform the simplest task.
The results for Task 2 are aligned with what we observe using $\overline{P}$: coupling ICA with NetEmd leads to the best separation between models when different network sizes and densities are present. In the Onnela et al. dataset, the results with AUPR also agree with $\overline{P}$: both performance metrics consider the original NetEmd with size 4 orbits to be among the top performers on this dataset. The difference between the two metrics is that, when measuring with $\overline{P}$, we observe that using PCA with 90% explained variance leads to similar performance (Table 3), whereas the top performer on this dataset according to AUPR is PCA with 99% explained variance, achieving an AUPR score of 0.791.
D.1.2 Directed Results
The results using AUPR for Task 1 in directed networks are shown in Table LABEL:tab:aupr_dir_task1 and for Task 2 and the dataset of real world directed networks in Table 7.
In Task 1, the main difference we find is that GDA with size 3 orbits shows an improvement over the other algorithms, becoming the top performer at 25% (tied with DGCD with 129 orbits) and 75% reciprocity. DGCD with 129 orbits still shows the best performance at 0% reciprocity, but PCA with 90, 95 and 99% explained variance is within one standard error, similarly to the results measured with $\overline{P}$. At 100% reciprocity, NetEmd with size 4 orbits achieves the best performance, but PCA with 90, 95 and 99% explained variance and ICA with 2 components obtain similar AUPR scores. As before, we highlight that the high scores achieved by the 6 algorithms in Table LABEL:tab:aupr_dir_task1 serve as a proof of concept that they are able to distinguish the random network models at different levels of reciprocity, even when the AUPR score is not within a standard error of the best score.
In Task 2, similarly to the undirected version, we find agreement between the AUPR score and $\overline{P}$, with ICA with 2 components highlighted as the highest performing algorithm for the 0, 25, 50 and 75% reciprocity datasets and NetEmd with size 4 orbits for 100% reciprocity. The same agreement between AUPR and $\overline{P}$ is found on the dataset of real world directed networks, where the top performer is ICA with 6 components, with an AUPR of 0.891, although it is not highlighted in Table 7.
The conclusions regarding the difference between our proposed NetEmd methods and TriadEMD also hold when using AUPR instead of $\overline{P}$; in the synthetic datasets TriadEMD achieves better AUPR scores than both NetEmd and $Weighted\_NetEmd$ with orbits from size up to 3 graphlets, but worse results than NetEmd with size 4 graphlets. In the real world directed networks dataset, both NetEmd and $Weighted\_NetEmd$ with size 3 orbits obtain better AUPR scores than TriadEMD.
D.2 Adjusted Rand Index
The adjusted Rand index [34, 14] is a metric for evaluating the similarity of data clusterings: it is a corrected-for-chance version of the Rand index [34], proposed by Hubert and Arabie [14]. Let $\mathcal{G}=\{G_{1},G_{2},\ldots,G_{N}\}$ be the set of networks we wish to partition into clusters. For the synthetic datasets, the ground truth partition of this set is known: we have 8 models of random graphs, and we evaluate a network comparison method by its ability to reconstruct these groups. For the Onnela et al. [30] dataset, the ground truth partition used is the one shown in the supplementary material of [1]. Let $\mathcal{P}^{*}=\{P_{1}^{*},P_{2}^{*},\ldots,P_{c}^{*}\}$ be the ground truth partition and $\mathcal{P}=\{P_{1},P_{2},\ldots,P_{c}\}$ be the clusters produced by the network comparison algorithms. These clusters are generated by applying a hierarchical clustering algorithm to the matrix of pairwise distances calculated by the network comparison; in our case we used the scikit-learn [31] implementation, the AgglomerativeClustering class with complete linkage.
Given the two sets of clusters, we define the following four quantities:
•
$a$: the number of pairs of elements of $\mathcal{G}$ that are in the same cluster in $\mathcal{P}^{*}$ and in the same cluster in $\mathcal{P}$.
•
$b$: the number of pairs of elements of $\mathcal{G}$ that are in different clusters in $\mathcal{P}^{*}$ and also in different clusters in $\mathcal{P}$.
•
$c$: the number of pairs of elements of $\mathcal{G}$ that are in the same cluster in $\mathcal{P}^{*}$ but in different clusters in $\mathcal{P}$.
•
$d$: the number of pairs of elements of $\mathcal{G}$ that are in different clusters in $\mathcal{P}^{*}$ but in the same cluster in $\mathcal{P}$.
These quantities can be interpreted as the number of true positives, true negatives, false negatives and false positives, respectively, if we consider the attribution of a pair of networks to a cluster as a decision problem. The Rand index is then defined as:
$$RI=\frac{a+b}{a+b+c+d}=\frac{a+b}{\binom{N}{2}}.$$
The general form for correcting an index for chance is:
$$\frac{RI-\mathbb{E}(RI)}{\text{max}(RI)-\mathbb{E}(RI)}.$$
Hubert and Arabie [14] show how to calculate $\mathbb{E}(RI)$, which we omit for brevity, leading to the formula for the adjusted Rand index (assuming a maximum value of 1 for the Rand index when $c$ and $d$ are 0):
$$ARI=\frac{\sum_{ij}\binom{n_{ij}}{2}-\frac{\sum_{i}\binom{a_{i}}{2}\sum_{j}\binom{b_{j}}{2}}{\binom{N}{2}}}{\frac{1}{2}\left[\sum_{i}\binom{a_{i}}{2}+\sum_{j}\binom{b_{j}}{2}\right]-\frac{\sum_{i}\binom{a_{i}}{2}\sum_{j}\binom{b_{j}}{2}}{\binom{N}{2}}},$$
where $n_{ij}$ is the number of networks in common between $P_{i}^{*}$ and $P_{j}$, i.e., $n_{ij}=|P_{i}^{*}\cap P_{j}|$, $a_{i}=\sum_{j=1}^{c}n_{ij}$ and $b_{j}=\sum_{i=1}^{c}n_{ij}$.
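The formula above can be implemented directly from the contingency counts $n_{ij}$ using only the standard library (a self-contained sketch, not the implementation used in the experiments):

```python
from math import comb
from collections import Counter

def adjusted_rand_index(truth, pred):
    """ARI from the contingency table n_ij = |P_i* ∩ P_j|,
    following the formula above."""
    n = len(truth)
    n_ij = Counter(zip(truth, pred))             # contingency counts
    sum_ij = sum(comb(v, 2) for v in n_ij.values())
    sum_a = sum(comb(v, 2) for v in Counter(truth).values())  # row sums a_i
    sum_b = sum(comb(v, 2) for v in Counter(pred).values())   # col sums b_j
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

Identical partitions (up to relabeling) give 1, while partitions that agree no better than chance give values near 0; negative values are possible.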
D.2.1 Undirected Results
Table 8 shows the ARI for Task 1, Task 2 and real world networks in the undirected case.
As with AUPR, we find that using ARI as a performance metric does not change the conclusions drawn from the results with $\overline{P}$. Recalling that the ARI metric measures the quality of the hierarchical clusters generated from a distance matrix, for Task 1 the original NetEmd with size 5 orbits achieves the best score under this metric. Further evidence of the quality of the clusters produced by the original NetEmd with size 5 orbits is that increasing the number of components in ICA or the percentage of explained variance in PCA (and therefore approximating the results of the original NetEmd) leads to a higher ARI score.
For Task 2, the gain measured by $\overline{P}$ and AUPR when using $ICA\_NetEmd$ with 2 components is also reflected in the ARI score. For the Onnela et al. dataset, the best performance we measure not included in Table 8 is $ICA\_NetEmd$ with size 4 graphlets and 8 components, with the components calculated with size 5 graphlets, with a score of 0.713.
D.2.2 Directed Results
The results using ARI for Task 1 in directed networks are shown in Table LABEL:tab:ari_dir_task1 and for Task 2 and the dataset of real world directed networks in Table 9.
In Task 1, we find that, according to ARI, the performance of all NetEmd variants is significantly below that of DGCD with 129 orbits for 0, 25 and 50% reciprocity and that of GDA with size 3 graphlets for 75% reciprocity. At 100% reciprocity, NetEmd, $PCA\_NetEmd$ with 80% variance and GDA, all with size 4 graphlets, share the top performance, but with a smaller margin over the other algorithms and parameters. These results are in stark contrast to those observed using AUPR and $\overline{P}$, where scores for this task did not differ significantly between the algorithms.
In Task 2, unlike in the other metrics, $ICA\_NetEmd$ with 2 components is no longer considered the best performer for 0% reciprocity, with TriadEMD achieving the best result instead. On the other hand, according to ARI, $ICA\_NetEmd$ with 2 components is the top performer for 100% reciprocity. For 25, 50 and 75% reciprocity, $ICA\_NetEmd$ with 2 components performs significantly worse when comparing to the results using AUPR and $\overline{P}$. ARI scores indicate that the best performers in these levels of reciprocity are $PCA\_NetEmd$ with 99% variance, NetEmd with size 3 graphlets and $PCA\_NetEmd$ with 95% variance. As we mention previously, using a smaller graphlet size usually leads to better performance when different network sizes and densities are involved. We find partial agreement to this claim when using ARI as the performance metric. At 0% reciprocity, we find that $ICA\_NetEmd$ with 2 components and size 3 orbits achieves an ARI of 0.509 (higher than the score of $ICA\_NetEmd$ with 2 components and size 4 orbits in Table 9, but lower than the score of TriadEMD); at 75% reciprocity, $PCA\_NetEmd$ with 90% variance explained and size 3 orbits achieves an ARI of 0.495 (higher than the score of $PCA\_NetEmd$ with 95% variance explained and size 4 orbits in Table 9).
For the dataset of real world directed networks, ARI disagrees with AUPR and $\overline{P}$, assigning the best performance to GDA with size 3 orbits. This task also favours using smaller graphlets for the comparison, so the $PCA\_NetEmd$ and $ICA\_NetEmd$ results are at a disadvantage compared to the other methods as we only report results with size 4 in Table 9. For these two algorithms with size 3 graphlets, the best ARI scores on this task are 0.524, when using $PCA\_NetEmd$ with 90% explained variance, and 0.489, when using $ICA\_NetEmd$ with 19 components.
Overall, in spite of some agreement between ARI and AUPR or $\overline{P}$, we find ARI to be an unreliable metric for gauging the performance of network distance measures, as it relies on an intermediate clustering step that is unrelated to the measures themselves. Using a different clustering method, or even a different linkage, leads to disparate results compared to the ones we present in this section, which adds a confounding factor when measuring the performance of the network comparison itself. An example of this unreliability is the ARI score of 0 given to GDA with size 3 orbits and to TriadEMD at 100% reciprocity, when the other metrics showed a performance comparable to DGCD.
Appendix E Details of datasets
E.1 Synthetic datasets
We generate 16 datasets using the combinations of number of nodes $N\in\left\{1250,2500,5000,10000\right\}$ and average degree $d\in\left\{10,20,40,80\right\}$. Each dataset contains 10 realizations of each model for a total of 80 networks.
•
Erdős-Rényi (ER) model [7]. A random graph with $n$ nodes is generated by picking $m$ unique edges at random from the $\binom{n}{2}$ possible edges. $m$ was chosen to generate networks with the appropriate density.
•
Barabási-Albert (BA) preferential attachment model [3]. An initial graph is created with $m$ nodes and new nodes are added iteratively to the network; each new node is connected to $m$ existing nodes, picked at random with probability proportional to their degree. $m$ was chosen to generate networks with the appropriate density.
•
Geometric random graphs [8], using a 3-dimensional space ($D=3$). Nodes are randomly embedded in a $D$-dimensional space and are connected if the Euclidean distance between them is smaller than a threshold $r$. This threshold was determined by grid search to generate networks with the desired number of edges.
•
Geometric gene duplication model [10]. Starting with an initial network of 5 nodes embedded in a 3-dimensional space, on each iteration a node is chosen at random to be duplicated. The duplicate is placed randomly in the 3-dimensional space at a Euclidean distance of at most 2 from the original node. This process is repeated until the desired number of nodes is reached, and nodes at a distance of $r$ or less are connected. The distance $r$ is chosen to generate a network with the appropriate number of edges.
•
Duplication divergence model of Vazquez et al. [44]. The network grows in two stages. In the first stage, a node $v$ is chosen randomly to be duplicated into a node $v^{\prime}$, which keeps all the edges of $v$. The nodes $v$ and $v^{\prime}$ are connected with probability 0.05. In the second stage, one of the duplicated edges ($(v,u)$ or $(v^{\prime},u)$) is chosen randomly and deleted with probability $q$. This process is repeated until the network grows to the desired number of nodes, and $q$ is chosen to generate networks with the appropriate density.
•
Duplication divergence model of Ispolatov et al. [17]. Starting from a network with a single edge, a node is chosen randomly and duplicated, with the new node keeping each of the original’s neighbours with probability $p$. This parameter is chosen to generate networks with the desired number of edges.
•
Configuration model, using the Duplication divergence model of Vazquez et al. [44] as the generator for the graphical degree sequence.
•
Watts-Strogatz (WS) model [45]. Nodes are placed in a ring and connected to their $k$ nearest neighbours on both sides of the ring. Each edge is then rewired with probability 0.05 to a new endpoint selected at random. Since rewiring preserves the number of edges, the parameter $k$ is chosen to generate networks with the appropriate average degree.
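Several of the models above have stock generators in networkx, which gives a compact way to reproduce the benchmark ensembles. The sketch below is illustrative only: the parameter values ($n$, $m$, $r$, $p$, $k$) are assumptions, not the ones fitted in this appendix, and networkx's duplication divergence generator follows the Ispolatov et al. variant.

```python
# Sketch of the benchmark random-graph models using networkx.
# All parameter values here are illustrative assumptions.
import networkx as nx

n = 1000  # assumed target number of nodes

ba = nx.barabasi_albert_graph(n, m=3, seed=42)                   # BA preferential attachment
geo = nx.random_geometric_graph(n, radius=0.1, dim=3, seed=42)   # 3-D geometric random graph
dd = nx.duplication_divergence_graph(n, p=0.4, seed=42)          # Ispolatov et al. model
ws = nx.watts_strogatz_graph(n, k=6, p=0.05, seed=42)            # WS small world

# Configuration model: preserves a given degree sequence, here taken
# from the duplication divergence graph as described in the text.
cm = nx.configuration_model([d for _, d in dd.degree()], seed=42)

print(ba.number_of_nodes(), ws.number_of_edges())
```

The Vázquez et al. variant and the geometric gene duplication model have no stock generator and would need short custom loops implementing the steps described above.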
E.2 Real world networks
The summary statistics of the Onnela et al. [30] networks and of the real-world directed networks are given in Table 10.
References
[1]
Ali, W., Rito, T., Reinert, G., Sun, F. & Deane, C. M. (2014)
Alignment-free protein interaction network comparison. Bioinformatics,
30(17), i430–i437.
[2]
Aparicio, D., Ribeiro, P. & Silva, F. (2016) Extending the applicability of
graphlets to directed networks. IEEE/ACM Transactions on Computational
Biology and Bioinformatics, 14(6), 1302–1315.
[3]
Barabási, A.-L. & Albert, R. (1999) Emergence of scaling in random
networks. Science, 286(5439), 509–512.
[4]
Broido, A. D. & Clauset, A. (2019) Scale-free networks are rare. Nature Communications, 10(1), 1017.
[5]
Comon, P. (1994) Independent component analysis, a new concept? Signal
Processing, 36(3), 287–314.
[6]
Cook, S. A. (1971) The complexity of theorem-proving procedures. Proceedings of the third annual ACM Symposium on Theory of Computing,
151–158.
[7]
Erdős, P. & Rényi, A. (1960) On the evolution of random graphs.
Publications of the Mathematical Institute of the Hungarian Academy of
Sciences, 5(1), 17–60.
[8]
Gilbert, E. N. (1961) Random plane networks. Journal of the Society for
Industrial and Applied Mathematics, 9(4), 533–543.
[9]
Gu, S., Johnson, J., Faisal, F. E. & Milenković, T. (2018) From
homogeneous to heterogeneous network alignment via colored graphlets. Scientific Reports, 8(1), 1–16.
[10]
Higham, D. J., Rašajski, M. & Pržulj, N. (2008) Fitting a
geometric graph to a protein–protein interaction network. Bioinformatics, 24(8), 1093–1099.
[11]
Hinton, G. E. & Salakhutdinov, R. R. (2006) Reducing the Dimensionality of
Data with Neural Networks. Science, 313(5786), 504–507.
[12]
Hočevar, T. & Demšar, J. (2014) A combinatorial approach to
graphlet counting. Bioinformatics, 30(4), 559–565.
[13]
Hotelling, H. (1933) Analysis of a complex of statistical variables into
principal components. Journal of Educational Psychology,
24(6), 417.
[14]
Hubert, L. & Arabie, P. (1985) Comparing partitions. Journal of
Classification, 2(1), 193–218.
[15]
Hyvärinen, A. (1999) The fixed-point algorithm and maximum likelihood
estimation for independent component analysis. Neural Processing
Letters, 10(1), 1–5.
[16]
Hyvärinen, A. & Oja, E. (2000) Independent component analysis:
algorithms and applications. Neural Networks, 13(4-5),
411–430.
[17]
Ispolatov, I., Krapivsky, P. L. & Yuryev, A. (2005) Duplication-divergence
model of protein interaction network. Physical Review E,
71(6), 061911.
[18]
Jolliffe, I. T. (2002) Principal component analysis.
Springer series in statistics. Springer-Verlag.
[19]
Koutra, D., Vogelstein, J. T. & Faloutsos, C. (2013) Deltacon: A principled
massive-graph similarity function. Proceedings of the 2013 SIAM
International Conference on Data Mining, 162–170.
[20]
Kuchaiev, O., Milenković, T., Memišević, V., Hayes, W. &
Pržulj, N. (2010) Topological network alignment uncovers biological
function and phylogeny. Journal of the Royal Society Interface,
7(50), 1341–1354.
[21]
Leskovec, J., Kleinberg, J. & Faloutsos, C. (2005) Graphs over time:
densification laws, shrinking diameters and possible explanations. Proceedings of the eleventh ACM SIGKDD International Conference on Knowledge
Discovery in Data Mining, 177–187.
[22]
Leskovec, J., Kleinberg, J. & Faloutsos, C. (2007) Graph evolution:
densification and shrinking diameters. ACM Transactions on Knowledge
Discovery from Data (TKDD), 1(1).
[23]
Leskovec, J. & Krevl, A. (2014) SNAP Datasets: Stanford Large Network
Dataset Collection. http://snap.stanford.edu/data.
[24]
Li, L., Alderson, D., Doyle, J. C. & Willinger, W. (2005) Towards a theory
of scale-free graphs: definition, properties, and implications. Internet
Mathematics, 2(4), 431–523.
[25]
MacKay, D. J. C. (2003) Information Theory, Inference and Learning
Algorithms.
Cambridge University Press, Cambridge, UK.
[26]
Mamano, N. & Hayes, W. B. (2017) SANA: simulated annealing far outperforms
many other search algorithms for biological network alignment. Bioinformatics, 33(14), 2156–2164.
[27]
McAuley, J. J. & Leskovec, J. (2012) Learning to discover social circles in
ego networks. Advances in Neural Information Processing Systems,
2012, 548–556.
[28]
Milo, R., Itzkovitz, S., Kashtan, N., Levitt, R., Shen-Orr, S., Ayzenshtat, I.,
Sheffer, M. & Alon, U. (2004) Superfamilies of evolved and designed
networks. Science, 303(5663), 1538–1542.
[29]
Newman, M. E., Barabási, A.-L. E. & Watts, D. J. (2006) The
structure and dynamics of networks.
Princeton University Press.
[30]
Onnela, J.-P., Fenn, D. J., Reid, S., Porter, M. A., Mucha, P. J., Fricker,
M. D. & Jones, N. S. (2012) Taxonomies of networks from community
structure. Physical Review E, 86(3), 036104.
[31]
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel,
O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J.,
Passos, A., Cournapeau, D., Brucher, M., Perrot, M. & Duchesnay, E. (2011)
Scikit-learn: Machine Learning in Python. Journal of Machine Learning
Research, 12, 2825–2830.
[32]
Pržulj, N. (2007) Biological network comparison using graphlet degree
distribution. Bioinformatics, 23(2), e177–e183.
[33]
Pržulj, N., Corneil, D. G. & Jurisica, I. (2004) Modeling
interactome: scale-free or geometric? Bioinformatics, 20(18),
3508–3515.
[34]
Rand, W. M. (1971) Objective criteria for the evaluation of clustering
methods. Journal of the American Statistical Association,
66(336), 846–850.
[35]
Ravasz, E. & Barabási, A.-L. (2003) Hierarchical organization in
complex networks. Physical Review E, 67(2), 026112.
[36]
Ribeiro, P., Aparício, D., Paredes, P. & Silva, F. (2017) GTScanner -
Quick Discovery of Network Motifs.
http://www.dcc.fc.up.pt/~daparicio/software.
Accessed: 2019-08-25.
[37]
Ribeiro, P., Paredes, P., Silva, M. E. P., Aparicio, D. & Silva, F. (2021)
A survey on subgraph counting: concepts, algorithms, and applications to
network motifs and graphlets. ACM Computing Surveys, 54(2).
[38]
Ribeiro, P. & Silva, F. (2014) G-tries: a data structure for storing and
finding subgraphs. Data Mining and Knowledge Discovery, 28(2),
337–377.
[39]
Ripeanu, M., Foster, I. & Iamnitchi, A. (2002) Mapping the gnutella
network: properties of large-scale peer-to-peer systems and implications for
system design. arXiv preprint cs/0209028.
[40]
Rubner, Y., Tomasi, C. & Guibas, L. J. (1998) A metric for distributions
with applications to image databases. Sixth International Conference on
Computer Vision, 59–66.
[41]
Sarajlić, A., Malod-Dognin, N., Yaveroğlu, Ö. N. &
Pržulj, N. (2016) Graphlet-based characterization of directed
networks. Scientific Reports, 6(1), 1–14.
[42]
Shlens, J. (2014) A tutorial on principal component analysis. arXiv
preprint arXiv:1404.1100.
[43]
Tantardini, M., Ieva, F., Tajoli, L. & Piccardi, C. (2019) Comparing
methods for comparing networks. Scientific Reports, 9(1),
1–19.
[44]
Vázquez, A., Flammini, A., Maritan, A. & Vespignani, A. (2003) Modeling
of protein interaction networks. Complexus, 1(1), 38–44.
[45]
Watts, D. J. & Strogatz, S. H. (1998) Collective dynamics of
‘small-world’ networks. Nature, 393(6684), 440–442.
[46]
Wegner, A. E., Ospina-Forero, L., Gaunt, R. E., Deane, C. M. & Reinert, G.
(2018) Identifying networks with common organizational principles. Journal of Complex Networks, 6(6), 887–913.
[47]
Xu, X. & Reinert, G. (2018) Triad-based comparison and signatures of
directed networks. International Conference on Complex Networks and
their Applications, 590–602.
[48]
Yang, J. & Leskovec, J. (2015) Defining and evaluating network communities
based on ground-truth. Knowledge and Information Systems,
42(1), 181–213.
[49]
Yaveroğlu, Ö. N., Malod-Dognin, N., Davis, D., Levnajic, Z., Janjic,
V., Karapandza, R., Stojmirovic, A. & Pržulj, N. (2014) Revealing
the hidden language of complex networks. Scientific Reports,
4(1), 1–9.
[50]
Yaveroğlu, Ö. N., Milenković, T. & Pržulj, N. (2015)
Proper evaluation of alignment-free network comparison methods. Bioinformatics, 31(16), 2697–2704.
[51]
Zachary, W. W. (1977) An information flow model for conflict and fission in
small groups. Journal of Anthropological Research, 33(4),
452–473. |
The Cosmological Impact of Luminous TeV Blazars III:
Implications for Galaxy Clusters and the Formation of Dwarf Galaxies
Christoph Pfrommer^{1,2}, Philip Chang^{2,3}, and Avery E. Broderick^{2,4,5}
^{1} Heidelberg Institute for Theoretical Studies, Schloss-Wolfsbrunnenweg 35, D-69118 Heidelberg, Germany; [email protected]
^{2} Canadian Institute for Theoretical Astrophysics, 60 St. George Street, Toronto, ON M5S 3H8, Canada; [email protected], [email protected]
^{3} Department of Physics, University of Wisconsin-Milwaukee, 1900 E. Kenwood Boulevard, Milwaukee, WI 53211, USA
^{4} Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada
^{5} Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
Abstract
A subset of blazars are powerful TeV emitters, dominating the extragalactic
component of the very high energy gamma-ray universe ($E\gtrsim 100\,{\rm G}{\rm eV}$).
These TeV gamma rays generate ultra-relativistic electron-positron pairs via
pair production with the extragalactic background light. While it has
generally been assumed that the kinetic energy of these pairs cascade to GeV
gamma rays via inverse Compton scattering, we have argued in Broderick et al. (2012, Paper I
in this series) that plasma beam instabilities are capable of dissipating
the pairs’ energy locally on timescales short in comparison to the
inverse-Compton cooling time, heating the intergalactic medium (IGM) with a
rate that is independent of density. This dramatically increases the entropy
of the IGM after redshift $z\sim 2$, with a number of important implications
for structure formation: (1) this suggests a scenario for the origin of the
cool core (CC)/non-cool core (NCC) bimodality in galaxy clusters and groups.
Early-forming galaxy groups are unaffected because they can efficiently
radiate the additional entropy, developing a CC. However, late-forming groups
do not have sufficient time to cool before the entropy is gravitationally
reprocessed through successive mergers—counteracting cooling and potentially
raising the core entropy further. This may result in a population of X-ray dim
groups/clusters, consistent with X-ray stacking analyses of optically selected
samples. Hence blazar heating works differently from feedback by active
galactic nuclei, which we show can balance radiative cooling but is unable to
transform CC into NCC clusters on the buoyancy timescale, due to the weak
coupling of the mechanical energy to the cluster gas. (2) We predict a
suppression of the Sunyaev-Zel’dovich (SZ) power spectrum template on angular
scales smaller than $5\arcmin$ due to the globally reduced central pressure of
groups and clusters forming after $z\sim 1$. This allows for a larger rms
amplitude of the density power spectrum, $\sigma_{8}$, and may reconcile
SZ-inferred values with those by other cosmological probes even after allowing
for a contribution due to patchy reionization. (3) Our redshift dependent
entropy floor increases the characteristic halo mass below which dwarf
galaxies cannot form by a factor of approximately 10 (50) at mean density (in
voids) over that found in models that include photoionization alone. This
prevents the formation of late-forming dwarf galaxies ($z\lesssim 2$) with
masses ranging from $10^{10}$ to $10^{11}\,\mathrm{M}_{\sun}$ for redshifts
$z\sim 2$ to 0, respectively. This may help resolve the “missing satellites
problem” in the Milky Way, i.e., the low observed abundance of dwarf satellites
compared to cold dark matter simulations, and may bring the observed early star
formation histories into agreement with galaxy formation models. At the same
time, it explains the “void phenomenon” by suppressing the formation of
galaxies within existing dwarf halos of masses $<3\times 10^{10}\,\mathrm{M}_{\sun}$ with a maximum circular velocity
$<60~{}\mathrm{km~{}s}^{-1}$ for $z\lesssim 2$; hence reconciling the number of dwarfs
in low-density regions in simulations and the paucity of those in
observations.
keywords: BL Lacertae objects: general – galaxies: clusters: general –
galaxies: formation – galaxies: dwarf – gamma rays: general – intergalactic
medium
1 Introduction
Extragalactic relativistic jets are powered by accreting super-massive black
holes (or in general the engines of active galactic nuclei, AGNs) and are able
to carry an enormous amount of power out to cosmological distances. Blazars are
a subclass of AGNs where the jet opening angle of typically $\sim 10^{\circ}$
contains our line-of-sight, allowing us to detect the Doppler-boosted radiation.
Blazars are the dominant extragalactic source class in the TeV sky with
currently 36 known objects out of 46 extragalactic sources (of the
remaining 10, 4 are radio galaxies, 2 are starburst galaxies, and 4 are not yet
identified; for a review, see Hinton & Hofmann, 2009).111For an
up-to-date list/visualization of the extragalactic TeV sky, see
http://www.mppmu.mpg.de/$\sim$rwagner/sources/ or
http://tevcat.uchicago.edu/. Recent observations by the Fermi Space
Telescope and ground based imaging atmospheric Cherenkov telescopes (H.E.S.S.,
MAGIC, and VERITAS)222High Energy Stereoscopic System, Major Atmospheric
Gamma Imaging Cerenkov Telescope, and Very Energetic Radiation Imaging Telescope
Array System. demonstrated that most of the electromagnetic power is emitted
in the gamma-ray band. The most extreme blazars achieve energies of up to 10
TeV, giving rise to the class of high-energy peaked BL Lac objects (HBL) while
the somewhat less efficient accelerators in intermediate-energy peaked BL Lac
objects (IBL) are, in some cases, also able to reach energies beyond 100
GeV. The emission mechanism is thought to be inverse Compton scattering of
ultra-relativistic electrons in the jet giving rise to power-law energy spectra
that increase as a function of energy and peak at the maximum energy that the
accelerating process of the radiating relativistic electrons is able to deliver.
The universe is not transparent to very-high energy gamma-ray radiation (VHEGR;
$E\gtrsim 100~{}{\rm G}{\rm eV}$), i.e., a beam of these energetic photons will necessarily
produce electron-positron pairs off the extragalactic background light
(EBL), with typical mean free paths of VHEGRs ranging from 30 Mpc to 1 Gpc
depending upon gamma-ray energy and source redshift. The pairs produced by
VHEGR radiation have typical Lorentz factors of $10^{5}-10^{7}$.
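The quoted Lorentz factors follow from simple pair-production kinematics: each lepton carries roughly half the photon energy, so $\gamma\approx E_{\gamma}/2m_{e}c^{2}$. A back-of-the-envelope check, using only round numbers:

```python
# Pair Lorentz factor: each lepton takes ~half the VHEGR photon energy,
# so gamma ~ E_gamma / (2 m_e c^2).
ME_C2_EV = 0.511e6  # electron rest energy in eV

def pair_lorentz_factor(e_gamma_ev):
    """Approximate Lorentz factor of a pair-produced lepton."""
    return e_gamma_ev / (2.0 * ME_C2_EV)

for e_ev, label in [(100e9, "100 GeV"), (1e12, "1 TeV"), (10e12, "10 TeV")]:
    print(f"{label:>8}: gamma ~ {pair_lorentz_factor(e_ev):.0e}")
# 100 GeV photons give gamma ~ 1e5 and 10 TeV photons give gamma ~ 1e7,
# reproducing the quoted range of 1e5 - 1e7.
```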
There are only two possible ultimate destinations for the kinetic energy of the
pairs: the energy can either be channeled into lower energy (GeV) gamma-ray
radiation (to which the universe is transparent), or heat the ambient medium
with a partitioning factor that depends on the relative rates of the
processes. The first process has generally been assumed to dominate the
manner in which these pairs lose energy, almost exclusively through inverse
Compton scattering the cosmic microwave background (CMB) and EBL on a typical
mean free path of $(10-100)~{}{\rm k}{\rm pc}$ today. When the up-scattered gamma-ray is
itself a VHEGR the process repeats, creating a second generation of pairs and
up-scattering additional photons. The result is an inverse Compton cascade
depositing the energy of the original VHEGR in gamma rays with energies
$\lesssim 100~{}{\rm G}{\rm eV}$.
There are, however, problems with this picture. First, the expected inverse
Compton bump has not been seen in the spectra of luminous blazars around
$10\,{\rm G}{\rm eV}$. This could imply the existence of intergalactic magnetic fields that
deflect the pairs out of our line-of-sight, hence reducing the inverse Compton
emission
(Neronov & Vovk, 2010; Tavecchio et al., 2010, 2011; Dermer et al., 2011; Taylor et al., 2011; Dolag et al., 2011; Takahashi et al., 2012; Vovk et al., 2012).
Typical lower limits for magnetic fields would then range from $10^{-19}\,{\rm G}$ to
$10^{-15}\,{\rm G}$, depending on the assumed duty cycle of blazars, and are set by
void regions, which dominate a typical line-of-sight. The values at the upper
end are of astrophysical interest in the context of the formation of galactic
fields333After adiabatic contraction and a handful of windings, nG field strengths
can be produced from an intergalactic magnetic field of
$\sim 10^{-15}\,{\rm G}$. Second, the spectral shape of the unresolved
extragalactic gamma-ray background (EGRB), which has been measured by Fermi,
exhibits a steep power-law at energies below $100\,{\rm G}{\rm eV}$ (Abdo et al., 2010).
If blazars contribute substantially to the EGRB, we would expect to see a
flattening toward high energies, due to the inverse Compton cascades, in
conflict with the Fermi EGRB. Traditionally, the EGRB is then used to
constrain the evolution of the luminosity density of VHEGR sources (see,
e.g., Narumoto & Totani, 2006; Kneiske & Mannheim, 2008; Inoue & Totani, 2009; Venters, 2010). Generally, it has
been found that these cannot have exhibited the dramatic rise in numbers by
$z\sim 1$–$2$ seen in the quasar distribution. That is, the co-moving number of
blazars must have remained essentially fixed, at odds with both the large-scale
mass assembly history of the universe, e.g., star formation history, and with
the luminosity history of similarly accreting systems, e.g., the quasar
luminosity density.
In our first companion paper of this series (Broderick et al., 2012, hereafter Paper I), we
argued that instead of initiating an inverse Compton cascade, the pairs
dissipate their kinetic energy locally, heating the intergalactic medium (IGM).
We identified a process that operates on a timescale fast in comparison to
inverse Compton cooling, dominating the latter for luminous TeV blazars,
i.e., for HBL and IBL blazars with an equivalent isotropic luminosity of
$L\gtrsim 10^{42}\,{\rm erg}\,{\rm s}^{-1}$ above $100\,{\rm G}{\rm eV}$. Despite its dilute nature,
the VHEGR-generated beam of ultra-relativistic pairs, propagating through the
IGM, is susceptible to plasma beam instabilities. While the commonly discussed
Weibel and two-stream instabilities are strongly suppressed by finite beam
temperatures, these are special cases of a general filamentary “oblique”
instability which is far more virulent, and strongly insensitive to finite
temperature effects
(Bret et al., 2004, 2005; Bret, 2009; Bret et al., 2010; Lemoine & Pelletier, 2010).
If these instabilities saturate at a rate that is comparable to their linear
growth rate, the beam kinetic energy is directly transferred to electrostatic
modes which rapidly dissipate locally, heating the IGM. If this scenario or an
analogous, similarly efficient mechanism operates in practice, it necessarily
suppresses the inverse Compton cascades and naturally explains the absence of an
inverse Compton bump in the TeV blazar spectra without invoking an intergalactic
magnetic field. At the same time, it allows for a redshift evolution of TeV blazars
that is identical to that of quasars without overproducing the EGRB. In fact,
for plausible parameters of TeV blazar spectra, it is possible to explain the
high-energy part of the EGRB (Paper I) without the need to appeal to exotic phenomena
(e.g., dark matter annihilation, Cavadini et al., 2011).
By dissipating the pairs’ energy into the IGM, plasma instabilities (or similar
processes) provide a novel mechanism for heating the universe. Integrating over
the energy flux per mean free path of all known TeV blazars yields a luminosity
density, or equivalently a local heating rate, that dominates that of
photoheating by more than an order of magnitude at the present epoch, after
accounting for incompleteness corrections (Chang et al., 2012, hereafter Paper II). We
have demonstrated that the local TeV blazar luminosity function is consistent
with a scaled version of the quasar luminosity function (Hopkins et al., 2007), thus
the conservative assumption is that they evolve similarly, presumably due to the
same underlying accretion physics. With this assumption, we showed in Paper II
that for the redshifts at which blazar heating is likely to be important,
$z\lesssim 3.5$, the heating rate will be relatively uniform throughout space.
Between $z\sim 3.5$ and 6 it may experience order 50% fluctuations, and by
$z\gtrsim 6$, it will exhibit significant stochasticity with order unity
deviations.
This heating differs from other feedback prescriptions in an important way:
since the number density of EBL photons and that of TeV blazars are nearly
homogeneously distributed on cosmological scales, so is the resulting pair
density. Hence, the implied heating rate is also homogeneous, i.e., the
volumetric blazar heating rate is uniform and independent of IGM
density444The term “blazar heating” exclusively denotes the dissipation
of gamma-ray luminosity of HBL and IBL blazars between $100\,{\rm G}{\rm eV}$ and
$10\,{\rm T}{\rm eV}$ with an equivalent isotropic luminosity of $L\gtrsim 10^{42}\,{\rm erg}\,{\rm s}^{-1}$ (see Paper II for details).. Due to the large mean
free path of TeV-photons, which is much larger than the turn-around radius of
any virialized structures, the heating process is not expected to be dominated
by contributions in highly biased regions at late times for $z\lesssim 3.5$. (At early times, when blazar heating exhibits considerable fluctuations,
clustering bias could substantially modify the phenomenology of the heating
mechanism.) The effect of a uniform heating rate is that the energy deposited
per baryon is substantially larger in more tenuous regions of the universe. As
a result, underdense regions experience a larger temperature increase, producing
an inverted temperature-density relation in voids, asymptotically approaching
$T\propto\rho_{g}^{-1}$. Generally, we found in Paper II that without any fine
tuning it is possible to reproduce the inverted temperature-density relation at
$z=2-3$ inferred by high-redshift Ly$\alpha$ studies (Bolton et al., 2008; Viel et al., 2009), while
simultaneously satisfying the temperature constraints at $z=2$ (e.g.,
those by Lidz et al., 2010) and leaving the local Ly$\alpha$ forest unaffected.
In a follow-up paper by Puchwein et al. (2011), we used hydrodynamic simulations
of cosmological structure formation to explicitly demonstrate that blazar
heating provides not only an excellent description of the one- and two-point
statistics of high-redshift Ly$\alpha$ forest spectra but also the line width and
column density distribution. This detailed agreement includes reproducing the
observed mean transmission, and is achieved using the most recent estimate of
the evolution of the photoionizing background without any tuning. These
successes are due specifically to the salient properties of blazar heating, in
particular its excess energy injection into the low density IGM and its
continuous nature.
In this work, we are interested in the impact of such an important heating
mechanism on cosmological structure formation. While we propose a
well-motivated, physical heating mechanism that uses a certain type of plasma
instability, our conclusions concerning the thermal history of the IGM, the Ly$\alpha$ forest, as well as structure formation remain robust in that they only rely on a
mechanism that dissipates the energy of TeV blazars independently of the density
and employs a redshift evolution similar to that of the quasar luminosity
function. To study the impact on structure formation, we turn to the evolution
of the entropy because in the absence of radiative cooling, entropy is conserved
upon adiabatic compression and can only be increased through dissipation of
gravitational energy in structure formation shocks or heating due to
photo-ionization. Hence, entropy is a unique thermodynamic variable with which
to learn about the impact of non-gravitational feedback processes. We identify
two different classes of objects where the time variable minimum entropy, or
entropy floor, of the IGM induced by blazar heating might dramatically change
our present picture of structure formation: the structure of galaxy groups and
clusters, and the formation of dwarf galaxies. Before addressing the
consequences of blazar heating for each, we introduce each separately,
highlighting the relevant outstanding problems.
1.1 The entropy problem in galaxy groups
In the absence of non-gravitational energy injection, the X-ray luminosity of
clusters ($L_{x}$) is expected to exhibit a self-similar scaling with intracluster
medium (ICM) temperature ($T$) as $L_{x}\propto T^{2}$ from purely gravitational
and shock dynamics (Kaiser, 1986; Evrard & Henry, 1991; Evrard et al., 1996). However, this cannot
extend from clusters down to groups without vastly overproducing the
extragalactic soft X-ray background. Indeed, groups are observed to have a
smaller X-ray luminosity compared to the self-similar expectation yielding a
steeper scaling with temperature, $L_{x}\propto T^{3}$ (Markevitch, 1998). This
can be explained if the gas was initially preheated to an entropy floor of
$\sim 100~{}{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$ which reconciles the simulated background with
observations (see Voit, 2005, for a review).
In principle, there are three physical processes that could produce such an
entropy floor for groups, all of which may occur in practice, raising the
question of which, if any, is dominant: (1) catastrophic cooling and collapse of
the low-entropy gas at the centers of halos, allowing accretion-shock heated gas
to adiabatically flow inward and replace the condensed gas with its elevated
entropy level (Voit & Bryan, 2001a, b; Voit et al., 2003), (2) an early epoch of
global entropy injection prior to the formation of groups and clusters,
typically referred to as “preheating”
(e.g., Kaiser, 1991; Evrard & Henry, 1991; Ponman et al., 1999; Balogh et al., 1999; Pen, 1999; Borgani et al., 2001; Bryan & Voit, 2001; Croft et al., 2001; Bialek et al., 2001; Babul et al., 2002; Voit et al., 2003, 2005; Borgani & Viel, 2009; Stanek et al., 2010),
and (3) self-regulated AGN feedback at the cores of groups and clusters
(e.g., Churazov et al., 2001; Sijacki & Springel, 2006; Sijacki et al., 2007, 2008; McNamara & Nulsen, 2007; Puchwein et al., 2008; Booth & Schaye, 2009; McCarthy et al., 2010; Dubois et al., 2010; Teyssier et al., 2011).
Catastrophic cooling has been found to be unstable in large-scale numerical
hydrodynamic simulations, typically resulting in an untenably large fraction of
baryons in stars (see Borgani & Kravtsov, 2009, for a review). More importantly,
runaway cooling generally precludes the existence of the observed hot groups,
making it clear that it cannot be the sole explanation of the low X-ray
luminosity of groups. However, radiative cooling is a physical process that will
occur if it is not perfectly balanced by some heating process and the
observations of multiphase gas in cool cores (CCs) suggest that cooling is
happening (Donahue et al., 2011), presumably through thermal instability
(McCourt et al., 2012). Hence, while some of the elevated entropies in
groups have to be attributed to cooling, the absence of a cooling catastrophe
implies a (non-gravitational) energy feedback process that will inevitably also
raise the core entropy.
In contrast, preheating has met with broad success in explaining the observed
$L_{x}-T$ correlation, ostensibly by suppressing the formation of dense cluster
cores at low masses and therefore reducing $L_{x}$ for group-sized
objects. Typically, the redshift range adopted for the injection of the entropy,
$z\gtrsim 3$, ensures that preheating is complete well before any galaxy group or
cluster has turned around from the Hubble expansion and started to collapse.
Hence the entropy, $K_{e}=kTn_{e}^{-2/3}$ (where $n_{e}$ is the electron density
and $k$ is the Boltzmann constant), is generally injected at the lowest possible
gas density (and thus temperature) at that redshift. This minimizes the heating
necessary to produce the observed impact upon the entropy, typically $1\,{\rm k}{\rm eV}$
per particle. Numerical simulations incorporating a minimum entropy of the gas,
or entropy floor, of $K_{e}\simeq(100$–$200)\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$ at redshifts
around $z\simeq 3$–$4$, are able to steepen the $L_{x}-T$ relation from the purely
gravitational collapse estimate ($L_{x}\propto T^{2}$), and find broad agreement
with observations
(Kaiser, 1991; Balogh et al., 1999; Bialek et al., 2001; Voit et al., 2002, 2003; Stanek et al., 2010).
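The numbers quoted above are easy to verify from the definition $K_{e}=kTn_{e}^{-2/3}$. The sketch below assumes entropy injection at a density of $n_{e}\sim 10^{-3}\,{\rm cm}^{-3}$ (a round number for pre-collapse gas, chosen for illustration rather than taken from the cited works):

```python
# Entropy K_e = kT * n_e^(-2/3), with kT in keV and n_e in cm^-3,
# so K_e comes out in keV cm^2.
def entropy_kev_cm2(kT_kev, n_e_cm3):
    return kT_kev * n_e_cm3 ** (-2.0 / 3.0)

def heating_per_particle_kev(K_e_kev_cm2, n_e_cm3):
    """kT needed to raise gas at density n_e onto the entropy floor K_e."""
    return K_e_kev_cm2 * n_e_cm3 ** (2.0 / 3.0)

# A 100 keV cm^2 floor injected at n_e = 1e-3 cm^-3 costs ~1 keV per
# particle, as quoted in the text; injecting at lower density (hence
# lower temperature) reduces the energy cost, which is why preheating
# scenarios inject the entropy as early as possible.
print(heating_per_particle_kev(100.0, 1e-3))
```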
Nevertheless, there are serious observational difficulties confronting the
standard preheating scenarios, apart from the theoretical problem of lacking a
physical model for the heating process. First, after preheating, star formation
in $L_{*}$ and lower mass galaxies is significantly suppressed
(e.g., Oh & Benson, 2003; Benson & Madau, 2003), which is inconsistent with observational
data that find, e.g., the peak of the star formation rate at redshifts $z\sim 2$. Second, the presence of a high entropy floor everywhere in the universe
makes it impossible for groups to radiate away the excess entropy and impossible
to explain the existence of a subset of X-ray luminous, CC groups with steep
entropy profiles, and low values of the central entropy
(McCarthy et al., 2008; Fang & Haiman, 2008; Cavagnolo et al., 2009).
Heating via feedback processes, e.g., from star formation (stellar winds and
supernova) and AGNs (for a review, see McNamara & Nulsen 2007), has already been
shown to have a large impact on the formation and history of galaxy clusters and
groups. Self-regulated, inhomogeneous energy feedback mechanisms are very
successful in globally stabilizing the group and cluster atmospheres, and in
particular, preventing the cooling catastrophe. The resulting gas mass
profiles, gas fractions, and the $L_{x}-T$ correlation compare impressively to
those observed (Sijacki et al., 2008; Puchwein et al., 2008). While there seems to be a
globally convergent scenario emerging with convincing energetics and duty
cycles, the actual heating process has yet to be identified and may involve
interesting astrophysics, e.g., turbulence (Enßlin & Vogt, 2006), cosmic rays
(Guo & Oh, 2008; Enßlin et al., 2011), or plasma instabilities (Kunz et al., 2011).
As we will show in this paper, the entropy injection by blazars is in some sense
an amalgam of both the preheating and feedback mechanisms. Unlike typical AGN
feedback, the effect of blazar heating is not localized, typically operating at
much larger distances and thus over considerably longer timescales. Unlike
instantaneous preheating, the blazars provide a time-dependent entropy
injection rate, peaking near $z\sim 1$. As a consequence, we will show that the
formation of $L_{*}$ galaxies is not suppressed and early-forming groups are
little affected by blazar heating, having had time to cool and develop an X-ray
luminous CC, thereby avoiding the primary difficulties with the
standard preheating scenario. On the other hand, the blazar-heated ICM ending up
in late-forming groups will not have had sufficient time to radiate the
additional entropy away before it can be gravitationally reprocessed in merging
shocks which are ubiquitous in a hierarchically growing universe. We will argue
that this can lead to elevated entropy core values resembling those of non-cool core (NCC) clusters.
1.2 The dwarf problem in our Galaxy and nearby voids
The $\Lambda$ cold dark matter ($\Lambda$CDM) concordance cosmology predicts
that Milky Way-sized halos should contain many more dwarf-sized halos than the
observed number of dwarf galaxies, the so-called “substructure problem”, or
within the local context the “missing satellites problem” (for a recent
review, see Kravtsov, 2010). Closely related to this problem is the “void
phenomenon” which is the apparent discrepancy between the number of dwarfs in
low-density regions in simulations and the paucity of those in observations
(Peebles, 2001). In principle, both problems can be solved in three general
ways: by suppressing the formation of dwarf halos, suppressing the formation of
galaxies within existing dwarf halos, and/or suppressing star formation within
dwarf galaxies. In the case of the “void phenomenon”, there is a fourth class
of models that solves the problem. Considering interacting dark matter and dark
energy as mediated, e.g., by a Yukawa coupling, implies a fifth force that
reduces late accretion onto halos and pushes matter out of voids, hence
resolving the discrepancy of dwarf abundances in voids (Farrar & Peebles, 2004; Nusser et al., 2005). In the following, we review the three classes of models that
are able to solve both problems simultaneously.
Suppressing the formation of dwarf halos is difficult to accomplish within the
context of $\Lambda$CDM, requiring modifications to the standard cosmological
paradigm, such as interacting dark matter (Spergel & Steinhardt, 2000), modifications to
the seed perturbation spectrum (Zentner & Bullock, 2003), or warm dark matter
(WDM; Dalcanton & Hogan, 2001; Macciò & Fontanot, 2010). While we will discuss the last of
these in Section 3.3, given the current success the $\Lambda$CDM model
has had in predicting the observed halo structures (see,
e.g., Dalal & Kochanek, 2002; Mao et al., 2004), we will not comment upon these possibilities
further.
Suppressing the formation of dwarf galaxies, i.e., preventing the accretion of
baryons by existing dwarf halos, may be accomplished in principle by a variety
of mechanisms, including photoionization heating
(Efstathiou, 1992; Kauffmann et al., 1993; Quinn et al., 1996; Thoul & Weinberg, 1996; Kitayama & Ikeuchi, 2000; Bullock et al., 2000, 2001; Chiu et al., 2001; Somerville, 2002; Dijkstra et al., 2004)
or accretion shock heating
(Scannapieco et al., 2001; Kravtsov et al., 2004; Sigward et al., 2005). If baryons can
collect, they may not be able to cool efficiently due to a lack of H i as a result
of photoionization (Haiman et al., 1997, 2000) or intrinsically
low metallicities
(Kravtsov et al., 2004; Kaufmann et al., 2007; Tassis et al., 2008; Robertson & Kravtsov, 2008; Gnedin et al., 2009).
Nevertheless, recent numerical simulations which self-consistently include the
photoionizing background due to star formation have found that while it does
have a pronounced effect, it cannot suppress dwarf galaxy formation at the level
implied by observations (Hoeft et al., 2006; Okamoto et al., 2008; Nickerson et al., 2011). More
importantly, the metallicity distribution of some dwarfs is inconsistent with
dwarfs generally being pre-reionization relics
(Dolphin et al., 2005; Fenner et al., 2006; Holtzman et al., 2006; Orban et al., 2008), and therefore the
suppression of dwarf formation must have occurred more recently than the epoch
of reionization.
Once formed, the gas may be removed from dwarf galaxies via photo-evaporation
(Barkana & Loeb, 1999; Shapiro et al., 2004) and feedback from the first supernovae
(Mac Low & Ferrara, 1999; Dekel & Woo, 2003; Mashchenko et al., 2008; Jubelgas et al., 2008; Wadepuhl & Springel, 2011; Nickerson et al., 2011; Uhlig et al., 2012).
Tidal interactions of satellite dwarf halos with the Milky Way may result in a
dramatic decrease in their mass, and to a lesser extent in circular velocity,
after $z\sim 2$ (Kravtsov et al., 2004; Nickerson et al., 2011). However, these processes
also strip material from larger halos, with the result that the smallest dwarf
spheroidal galaxies presently within the Local Group may have had a mass at
formation that was much larger than currently observed, and thus were capable
of building up a sizeable stellar component in their seemingly shallow potential
wells.
As we will show in this work, the heating due to blazars provides an additional
mechanism to suppress the formation of dwarfs. Unlike photoionization models,
which typically invoke the heating at reionization, blazar heating provides a
well defined, time-dependent suppression mechanism, with the suppression rising
dramatically after $z\sim 2$. In addition, due to its insensitivity to density,
the heating from blazars suppresses structure formation most efficiently in the
low-density regions that are responsible for late-forming dwarf halos. As a
result, the impact from blazars is not degenerate with variations in the
parameters of reionization and/or tidal interactions.
1.3 Structure of this Paper
This is the third in a series of three papers that discuss the potential
cosmological impact of TeV emission from blazars. Paper I provides a
plausible mechanism for the local dissipation of the TeV luminosity, effectively
producing an additional heating process within the IGM, and its implications
upon high-energy gamma-ray observations. Paper II estimates the magnitude of
the new heating term, describes the associated modifications to the thermal
history of the IGM, and shows how this can explain some recent observations of
the Ly$\alpha$ forest. Paper III, this paper, considers the impact the new heating
term will have upon the structure and statistics of galaxy clusters and groups,
and upon the ages and properties of dwarf galaxies throughout the universe,
generally finding that blazar heating can help explain outstanding questions in
both cases.
In Section 2, we show that blazar heating necessarily implies
the injection of a tremendous amount of entropy, and we explore its implications for
the evolution of the warm and hot phases of the IGM. In particular, we show the
broad implications of this heating on the X-ray population of clusters
(Section 2.1), the entropy structure of galaxy groups/clusters
(Section 2.2), the CC/NCC cluster bimodality
(Section 2.3), and its impact on the Sunyaev-Zel’dovich (SZ)
power-spectrum (Section 2.4). We discuss the impact of this heating on
the formation of structure in the universe in Section 3, showing from the viewpoint of linear theory that the formation of
dwarf galaxies that end up in Milky Way-sized halos (Section 3.1) as
well as those in voids (Section 3.2) are suppressed at late redshift
($z\lesssim 2$). Finally, we conclude in Section 4.
The calculations presented below assume the WMAP7 cosmology (Komatsu et al., 2011)
with $h_{0}=0.704$, $\Omega_{\mathrm{DM}}=0.227$, $\Omega_{B}=0.0456$,
$\Omega_{\Lambda}=0.728$, $\sigma_{8}=0.81$, $n_{s}=0.967$, and a matter transfer
function that accounts for the baryonic features (Eisenstein & Hu, 1998).
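These parameters fix the dimensionless Hubble rate $E(z)=H(z)/H_{0}$, which enters the scaling relations used later in Section 2. A minimal sketch in code (variable names are our own):

```python
import math

# WMAP7 parameters quoted above (Komatsu et al. 2011); Omega_m = Omega_DM + Omega_B.
H0 = 70.4                      # km/s/Mpc (h0 = 0.704)
OMEGA_M = 0.227 + 0.0456
OMEGA_L = 0.728

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for a flat LCDM cosmology."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

# At z = 0 the Hubble rate reduces to H0 by construction.
print(round(E(0.0), 3))  # -> 1.0
```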
2 The Evolution of Entropy and the Hot Phase of the Intergalactic Medium
The injection of large amounts of energy into the IGM by blazars is accompanied
by a substantial increase in the IGM entropy. Here, we discuss the impact of
this additional entropy upon the evolution of the gravitationally heated warm
and hot components of the IGM. In doing so we neglect radiative cooling, an
approximation that is well justified in the low-density regions of the IGM
generally and one that we will justify for late-forming groups and clusters
below. In this case the entropy of the IGM necessarily increases during
gravitational collapse (e.g., due to structure-formation shocks, feedback,
and/or photoheating). As a consequence, the entropy injected by TeV blazars
places an elevated floor upon the entropy of late-forming structures within the
IGM. If radiative cooling is permitted (e.g., at the centers of early-forming
clusters), the gas entropy may decrease,555According to the second law of
thermodynamics, the entropy of an isolated system which is not in equilibrium
has to increase over time and can only remain constant during a reversible
process. In the case of radiative cooling, the gas shares its entropy with the
cosmic radiation field—a process that effectively removes entropy from the
gaseous phase. However, the total entropy of the system including gas and
radiation increases in this process. reducing the impact of the blazar
heating. In this work, we will encounter two different definitions of
entropy666These definitions of entropy are related to the standard
thermodynamic definition of entropy per particle by $s=k\ln K^{3/2}+s_{0}$,
where $s_{0}$ is a constant that depends upon fundamental constants and a
mixture of particle masses. which are proportional to each other, namely,
$$K_{e}\equiv\frac{kT}{n_{e}^{2/3}}\quad\mbox{and}\quad K\equiv\frac{P}{\rho_{g}^{5/3}}=\frac{kT}{\mu m_{p}\rho_{g}^{2/3}}.$$
(1)
$K_{e}$ is a quantity conveniently used in the X-ray literature as it can be
directly constructed from the observables temperature, $T$, and electron
density, $n_{e}$. The quantity $K$ is the constant of proportionality in the
equation of state $P(\rho_{g})$ for an adiabatic monatomic gas with mass
density $\rho_{g}$ and pressure $P$.
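For concreteness, $K_{e}$ is straightforward to evaluate directly from the two X-ray observables (a minimal sketch; the function name is our own):

```python
def K_e(kT_keV, n_e_cm3):
    """X-ray entropy measure K_e = kT / n_e^(2/3), in keV cm^2 (Equation 1).

    The adiabatic constant K = P / rho_g^(5/3) differs from K_e only by a
    fixed factor built from mu, mu_e, and m_p, so the two are proportional.
    """
    return kT_keV / n_e_cm3 ** (2.0 / 3.0)

# A group-scale fluid element: kT = 1 keV at n_e = 1e-3 cm^-3.
print(round(K_e(1.0, 1.0e-3), 6))  # -> 100.0 (keV cm^2)
```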
The evolution of the temperature and entropy is shown in Figure
1 for a fluid element at overdensity $\delta=\rho/\bar{\rho}-1=0$ (where $\bar{\rho}$ and $\rho$ denote the average and
local matter density) for the cases of pure photoheating and two realizations of
blazar heating. Our standard model is normalized to the observed number of TeV
blazars, accounts for incomplete sky coverage at TeV energies and for the duty
cycle, employs conservative assumptions about spectral
corrections and contributing source classes for blazar heating, and has a
heating rate of $7\times 10^{-8}\,\mbox{eV cm}^{-3}\,\mbox{Gyr}^{-1}$ at $z=0$
(Paper II). The heating rate in the optimistic model is a factor two larger,
implying a rate of $1.4\times 10^{-7}\,\mbox{eV cm}^{-3}\,\mbox{Gyr}^{-1}$ at
$z=0$ and matches the inverted temperature-density relation found in high-redshift Ly$\alpha$ observations (Bolton et al., 2008; Viel et al., 2009; Puchwein et al., 2011). Since any mechanism that injects energy
will produce a corresponding increase in $K_{e}$, in the absence of cooling the
entropy accumulates monotonically regardless of the mechanism responsible for
heating the IGM. Nevertheless, some general statements can be made about the
distinction between photoheating and that due to blazars.
Due to the ionization balance maintained between recombination and
photoionization, photoheating produces a generic entropy injection profile
following the epoch of reionization; this is simply the “loss of memory”
effect (Hui & Gnedin, 1997; Hui & Haiman, 2003). In the absence of the blazar component,
photoheating of hydrogen is the main source of heating for the universe at mean
density at late times. The heating rate is given by the rate at which hydrogen
recombines and is then reionized. In photoionization equilibrium, the
temperature of the system will adjust such that the timescales of the net
photoheating and recombination cooling rates become equal, implying a steady
state. Since recombination does not depend on the UV photon density, the rate of
photoionization is also independent of the UV photon density. Hence the late
time photoheating of the IGM is independent of the number of ionizing photons,
and thus the number of ionizing sources (above the ionization threshold). As a
consequence, the photoheating contribution to the IGM entropy at $z=0$ is
limited to near $K_{e}\approx 8\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$ for $\delta=0$.
Such an argument does not work for blazar heating. Here the efficiency is close
to the maximum of 100% all the time777In general, the efficiency for
heating depends on the saturated, nonlinear damping rate of the maximally
growing mode of the dissipating plasma instability, $\Gamma_{\mathrm{M}}$, and the
inverse Compton cooling rate, $\Gamma_{\mathrm{IC}}$, and scales as
$\Gamma_{\mathrm{M}}/(\Gamma_{\mathrm{M}}+\Gamma_{\mathrm{IC}})$. We assume that the
nonlinear damping rate is equal (or of order) the linear growth rate (see
Section 3.5 of Paper I). For the linear growth rate of the oblique
instability, this efficiency is close to unity for luminous blazars (Paper
I). Numerical simulations for a mildly relativistic pair beam penetrating into
a hot, dense background plasma suggest that a significant fraction ($\sim 20\%$) of the beam energy heats the background plasma through the oblique
instability before the two-stream plasma instability takes over, potentially
dissipating another fraction of the beam kinetic energy
(Bret et al., 2010). provided there are sufficient numbers of EBL
photons and each point in space is reached by a number of blazar beams of TeV
photons (which is likely the case; see Section 3.2 of Paper II). Therefore, the
heating rate, and thus temperature of the IGM, is nearly linearly dependent upon
the VHEGR luminosity density of TeV blazars. Correspondingly, the entropy
injection from blazars depends sensitively upon the history of the TeV blazar
population, and thus the cumulative contribution of blazars to the present-day
$K_{e}$ is somewhat uncertain. Nevertheless, given our conservative estimate of
the blazar luminosity density in Paper I, fixing it to the quasar luminosity
density, we find that blazar heating raises the entropy substantially, starting
around the epoch of He ii reionization ($z\sim 3.5$) and by $z=0$, the inclusion
of blazar heating raises the entropy of the $\delta=0$ fluid element to
$K_{e}\approx(50-100)\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$, approximately an order of magnitude larger than the
case of photoheating alone.
As discussed in Paper II, the magnitude of the temperature enhancement due to
blazar heating is density dependent: while the photoheating rate depends
linearly on density, blazar heating is independent of density and dominates
photoheating for regions with $\delta\lesssim 10$. Thus, blazar heating is
expected to have the largest effects in voids. In Figures 2
and 3 we show the entropy-density relation ($K_{e}$
vs. $1+\delta$) for a variety of redshifts, ranging from $z=0.5$ to $z=4$. For
comparison, Figure 2 shows the case of photoheating only:
the entropy-density relation reflects the effect of photoionization which
deposits a uniform energy per baryon during H and He ii reionization implying
$K\propto(1+\delta)^{-2/3}$. This effect is quickly erased due to the “loss of
memory” effect as well as adiabatic cooling due to the Hubble expansion
approaching an asymptotically constant entropy-density relation at late times.
Figure 3 shows the effect of TeV blazar heating on the
entropy-density relation for our standard (top) and optimistic (bottom) model.
As expected, blazars have the most dramatic effects upon $K_{e}$ in low-density
regions ($1+\delta\simeq 0.1$) due to the spatial uniformity and density
independence of their volumetric heating rate. This steepens the
entropy-density relation, causing it to approach an asymptotic scaling $K\propto(1+\delta)^{-5/3}$ in voids. In fact, blazar-induced entropies can exceed those
due to photoionization by a factor of $125$–$250$ in these low-density regions
($1+\delta\simeq 0.1$, see right-hand panels of Figure 3),
reaching $K_{e}\simeq(1250$–$2500)\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$ by $z=0.5$. For high density
regions ($1+\delta\simeq 10$) the effect is still pronounced, increasing the IGM
entropy by up to an order of magnitude, i.e.,
$K_{e}\approx(10$–$20)\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$ at $z=0.5$.
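These two asymptotic slopes follow from a one-line estimate (a sketch, assuming the injected heat dominates the pre-existing entropy). Photoheating deposits a roughly fixed energy per baryon, so $k\Delta T$ is independent of density and
$$\Delta K_{e}=\frac{k\Delta T}{n_{e}^{2/3}}\propto(1+\delta)^{-2/3}\,,$$
whereas blazar heating deposits a fixed energy per unit volume, so $k\Delta T\propto n^{-1}$ and
$$\Delta K_{e}\propto n^{-1}\,n_{e}^{-2/3}\propto(1+\delta)^{-5/3}\,.$$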
Apart from increasing the mean entropy, blazar heating also increases the
scatter in entropy at fixed density which is especially noticeable for $z=0.5$.
Fractionally, this scatter appears to be roughly twice as large when blazar
heating is included, though in absolute terms the scatter is much larger for the
blazar cases. This is because even in linear theory, the entropy (and
temperature) of any patch of overdensity $\delta$ depends on its “collapse”
history. Namely, whether it collapses like a Zel’dovich pancake, i.e., in one
dimension first which implies a smaller entropy and temperature, or more
spherically which produces a greater entropy and temperature, i.e., more akin to
accretion onto a local overdensity.
The fractional scatter in $K_{\mathrm{blazar}}/K_{\mathrm{photo}}$ is much larger than
that found in either entropy individually. The reason is that the scatter
induced by blazar heating is anti-correlated with that induced by photoheating
due to their very different dependencies upon IGM density. Patches that begin
at low densities when blazar heating is ignored are biased toward lower
entropies as a consequence of an extended period of slow recombination. When
blazar heating is included, these same patches are more efficiently heated, and
therefore biased toward higher entropies.
The redshift evolution of blazar heating introduces additional stochasticity:
patches at a given density sample a distribution of turnaround times and hence
preheated entropy values. After turnaround, the gas is adiabatically
compressed and moves on an adiabat (a line of constant $K_{e}$) to higher density
with an entropy value that depends on the “collapse” time; the steeper
entropy-density relation thus generates a larger scatter at any $\delta\gtrsim 1$.
Hence by preheating the universe, the first shells of gas that are able to
collapse onto an object do not experience shock heating as they are only
adiabatically compressed which results in an entropy floor of the object after
formation. The later collapsing shells experience weaker shocks with a smaller
Mach number due to their already elevated entropy level.
Associated with the large increase in the entropy of the IGM are a number of
observational effects. In the following we discuss the impact blazar heating
has upon the correlation between cluster X-ray luminosity and temperature, the
entropy profiles and the CC/NCC bimodality of clusters and groups, and the SZ
power spectrum. In each we note the unique effects imposed by the relatively
recent nature of the entropy injection.
2.1 Implications for the X-ray emission of groups and clusters
Blazar heating raises the entropy floor for $z\lesssim 2$ with its peak
contribution around $z\sim 1$ and acts on scales much larger than the turnaround
region of clusters. The formation epoch of groups and clusters is roughly
matched to the epoch of heating. As a consequence, early-forming, i.e., old
groups may be little affected by blazar heating, having had time to cool and develop an
X-ray luminous core, potentially representing the class of CC
groups/clusters. On the other hand, the blazar-heated IGM collapsing into
late-forming, young groups will not have had sufficient time to cool. These
groups will “remember” the elevated entropy floor as an extended core that
substantially changes the initial conditions for their subsequent hierarchical
evolution. However, because the heating occurs at late times, and thus after
the first groups and clusters have begun to form, a larger energy input is
required in comparison to the traditionally employed early preheating models.
For the observability of such a non-gravitational entropy floor and its
implication on the thermal history of the ICM, it is important to consider its
cooling timescale for typical entropy values of the IGM around $z\simeq 0.5$ and
parameters typical of the intra-group medium:
$$t_{\mathrm{cool}}=\frac{3nkT}{2n_{e}n_{\mathrm{H}}\Lambda(T,Z)}=4.5\,{\rm Gyr}\times\left(\frac{K_{e}}{75\,{\rm keV}\,{\rm cm}^{2}}\right)^{3/2}\times\left(\frac{kT}{1\,{\rm keV}}\right)^{-1/2}\left(\frac{\Lambda(kT,Z)}{\Lambda(1\,{\rm keV},0.3Z_{\sun})}\right)^{-1}.$$
(2)
Here $n=2.2n_{\mathrm{H}}$ and $\Lambda(T,Z)$ is the cooling function at a given $T$
and metallicity $Z$ (which is a relatively flat function of $T$ near
$kT=1\,{\rm keV}$ and $0.3Z_{\sun}$; Sutherland & Dopita, 1993). A look-back time of
$4.5\,{\rm Gyr}$ corresponds to a redshift of $z=0.425$.
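Equation (2) is convenient to evaluate numerically when scanning group parameters (a minimal sketch; the function name and the default cooling-function ratio are our own choices):

```python
def t_cool_gyr(K_e_keVcm2, kT_keV, cooling_ratio=1.0):
    """Cooling time in Gyr from Equation (2), scaled to the fiducial values
    K_e = 75 keV cm^2, kT = 1 keV, and Lambda(1 keV, 0.3 Z_sun).

    cooling_ratio = Lambda(kT, Z) / Lambda(1 keV, 0.3 Z_sun).
    """
    return 4.5 * (K_e_keVcm2 / 75.0) ** 1.5 * kT_keV ** -0.5 / cooling_ratio

# Fiducial intra-group values recover the quoted 4.5 Gyr.
print(t_cool_gyr(75.0, 1.0))  # -> 4.5
```

The steep $K_{e}^{3/2}$ dependence is what makes the highest preheated entropies effectively non-radiative over a Hubble time.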
Instantaneous preheating by blazars (see Figure 1) is
incapable of directly producing the highest entropies (up to
$600\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$) that are presently observed in a few clusters
(Cavagnolo et al., 2009). Comparing the cooling time to the dynamical time of a
cluster/group ($\sim 1$ Gyr), or to the typical timescale of significant
turbulent pressure support after a cluster merger ($\sim 2$ Gyr), which
provides continuous heating throughout (Paul et al., 2011), we expect our
heating to indirectly impact the entropy distribution through
gravitational reprocessing of the blazar preheated core entropy, leading to a
gravitational amplification of entropy that we now explain.
A larger central entropy, or equivalently, a lower central density888For
a cluster with a given mass, a larger core entropy implies a lower gas density
as the (constant) core temperature $kT\propto Kn^{2/3}$ reflects the virial
value imparted by the cluster’s potential depth., of a merging cluster
facilitates shock heating which implies an increase of the core entropy of the
final object compared to that of a dense cooled core. CCs have a large
inertia causing them to be rather resilient against shock heating, typically
surviving the merger with only a marginally elevated entropy level which is then
subject to fast radiative cooling. This effect gives rise to the well-known
overcooling problem in cosmological simulations of galaxy clusters if cooling is
not counteracted by any feedback process (e.g., Borgani & Kravtsov, 2009).
In order to quantify the effect of gravitational reprocessing of a preheated
entropy core, let us define a net entropy amplification factor through shock
heating. This is most easily done by comparing two types of simulations of galaxy
cluster formation: one with gravitational physics only and one where
gravitational physics is supplemented by a preheating epoch of the IGM at mean
density with a uniform entropy floor of $K_{\mathrm{floor}}$ which precedes the
turnaround and formation of a galaxy cluster. In the first case, the core
entropy structure is set by gravitational formation shocks leading to a central
entropy of $K_{\mathrm{grav},0}$. In the preheating case, the core entropy is
raised to an elevated level after nonlinear evolution during virialization,
$K_{\mathrm{pre},0}$. Hence, the net entropy amplification factor is then defined
as the ratio of the core entropy in the preheating case, $K_{\mathrm{pre},0}$, to
that obtained by adding the IGM entropy floor value to the entropy in
gravitational heating case only, $K_{\mathrm{floor}}+K_{\mathrm{grav},0}$.
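The definition can be written compactly in code (a minimal sketch; the function name and the example numbers are purely illustrative and not taken from any simulation):

```python
def entropy_amplification(K_pre0, K_floor, K_grav0):
    """Net entropy amplification factor: core entropy of the preheated run,
    K_pre0, divided by the naive sum of the imposed entropy floor, K_floor,
    and the gravitational-only core entropy, K_grav0. Values above 1 mean
    the injected entropy was amplified by gravitational reprocessing.
    All entropies must share the same units, e.g., keV cm^2.
    """
    return K_pre0 / (K_floor + K_grav0)

# Hypothetical example: a 100 keV cm^2 floor plus a 150 keV cm^2
# gravitational core, boosted to 750 keV cm^2 after virialization.
print(entropy_amplification(750.0, 100.0, 150.0))  # -> 3.0
```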
Non-radiative cosmological simulations of galaxy clusters and groups demonstrate
that the net entropy amplification factor can reach values ranging from 3 to 5
for clusters and groups, respectively, in the high-entropy case with a
preheating value of $K_{\mathrm{floor}}=100\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$ imposed at $z=3$ (see
Figures 1 and 2 in Borgani et al., 2005). For the low-entropy case with
$K_{\mathrm{floor}}=25\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$, this amplification factor is only mildly
reduced to $2-5$ for clusters and groups, respectively. This entropy
amplification factor seems to be reduced for radiative cluster simulations that
include cooling and star formation, possibly due to the assumed early epoch of
instantaneous entropy injection at $z=3$ that facilitates cooling of the entropy
floor thereafter (see Figures 3 and 4 in Borgani et al., 2005). Groups,
however, still remain severely affected by an early epoch of preheating. Thus
in combination with gravitational amplification, blazar heating could have an
important effect on the subsequent thermodynamic evolution of late-forming
groups and possibly the clusters they evolve into. This, however, is subject to
nonlinear structure formation and needs to be carefully studied with
cosmological hydrodynamic simulations containing a large sample of
well-resolved galaxy clusters which sample the full distribution of formation
redshifts.
The impact of the entropy injection due to blazar heating upon clusters is
somewhat less clear. Clusters form in highly biased regions through the mergers
of smaller, virialized systems such as groups. In most cases, these need to
have already formed by $z=1$ which can be inferred from the mass accretion
history in Figure 4. This shows that on average, the most massive
group progenitor of a cluster with virial mass999In this Section, we
define the virial mass $M_{200\,c}$ as the mass of a sphere enclosing a mean
density that is 200 times the critical density of the universe.
$M_{200\,c}=10^{14}\,\mathrm{M}_{\sun}$ crossed a mass threshold of $3\times 10^{13}\,\mathrm{M}_{\sun}$ before $z=1$. At this time the entropy floor due to
blazar heating was still smaller with typical values
$K(\delta=0,z=1)\approx(25$–$50)\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$ implying a smaller entropy core
in groups at higher redshifts $z\gtrsim 1$. Hence, we naively may not expect
blazar heating to have a large impact upon the X-ray luminosity and entropy
profiles of clusters. However, this is not the case for two reasons.
First, the mass accretion rate, defined by the logarithmic slope $S=d\log M_{200\,c}/d\log a(z)$, is larger for more massive systems and at higher
redshifts (where we have introduced the cosmic scale factor, $a\equiv 1/(1+z)$;
see bottom panel of Figure 4). The mass accretion history of a
halo observed at $z=0$ grows on average as $M(z)=M_{0}\,\exp(-\xi z)$
(Wechsler et al., 2002). The single free parameter in the model, $\xi$, can be
related to a characteristic formation redshift $1+z_{c}=\xi/S$. Hence we
find ourselves in the fast accretion regime if the mass accretion rate is larger
than a characteristic value usually taken to be $S=2$. In particular, the
relative mass accretion rates increase from $z=0$ to 2 by a factor 3 for
clusters ($M_{200\,c}=10^{15}\mathrm{M}_{\sun}$) and 10 for groups
($M_{200\,c}=10^{13}\mathrm{M}_{\sun}$) (see also Gottlöber et al., 2001). A
larger relative mass accretion rate implies that the gas in the core is heated
by accretion processes at a faster rate. This means that a lower preheated
core entropy value early on can in principle be gravitationally processed at a
substantially higher rate and, as a result, move onto a higher adiabat
with a longer cooling time.
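The growth of $S$ with redshift follows directly from the exponential model just quoted: with $a=1/(1+z)$, differentiating $M(z)=M_{0}\exp(-\xi z)$ gives $S(z)=\xi(1+z)$. A minimal numerical check (the function name and the value of $\xi$ are ours):

```python
def S(z, xi):
    """Logarithmic mass accretion rate S = dlog(M)/dlog(a) for the
    exponential growth model M(z) = M0 * exp(-xi * z), with a = 1/(1+z).
    Differentiating gives S(z) = xi * (1 + z).
    """
    return xi * (1.0 + z)

# Independently of xi, the model predicts S(z=2) / S(z=0) = 3, consistent
# with the factor-3 rise quoted above for cluster-mass halos.
xi = 1.0  # illustrative value, not fitted to any halo
print(S(2.0, xi) / S(0.0, xi))  # -> 3.0
```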
Second, blazars like AGNs follow the clustering bias of the matter density field
and hence turn on first in highly biased regions, i.e., regions that evolve into
clusters and super-clusters. At late times ($z\lesssim 3.5$) blazar heating is
expected to be nearly spatially uniform, since more than a single blazar
contributes significantly to the local heating rate of any given patch of the
universe (Paper II). At much lower redshifts, the number of contributing
sources grows dramatically, approaching $10^{3}$ at $z=0$, and thus today blazar
heating is nearly homogeneous. Prior to $z\simeq 3.5$, however, blazar heating
may exhibit $\sim 50\%$ fluctuations locally, due to the paucity of blazars in
the early universe. By $z\simeq 6$ as much as 75% of the local heating can be
due to a single object, implying large Poisson fluctuations in the heating
rate. This results in a clustering bias at early times.
The rare first blazars are expected to appear in highly biased regions,
corresponding to those that later evolve into groups and clusters, and therefore
themselves be clustered. The likelihood of a patch of the universe being heated
by such a blazar is then enhanced in the biased regions, where both the heating
rate is larger (since the VHEGR density is larger) and the probability of being
covered by a blazar in the cluster is larger.101010In the limit of a few,
highly clustered blazars, the VHEGR intensity declines exponentially from the
clustering site for a spatially constant distribution of EBL photons, giving
rise to an exponentially decreasing heating rate (on the scale of the VHEGR
mean free path which is larger than the clustering length scale). However, if
the distribution of EBL photons is also clustered and correlates spatially
with the highly biased regions (which is expected in a hierarchically growing
universe), the heating rate of those biased regions will be additionally
enhanced in comparison to the void regions that have a low probability of
being covered by a blazar. We stress that the effect of spatially and
temporally inhomogeneous blazar heating due to the clustering bias is
expected to be absent at late times ($z\lesssim 3.5$) due to the frequent
occurrence of blazars. It is this early preheating in groups that may
facilitate their evolution into NCC clusters (via the gravitational reprocessing
of the high-entropy cores). Therefore, while we will focus upon
group/small-cluster mass scale in our study of the entropy structure immediately
after group/cluster formation, gravitational reprocessing combined with the bias
in the early blazars suggests that these effects may be important for more
massive clusters as well, propagating the effect of blazar heating up the mass
hierarchy.
The formation time of groups and clusters determines the blazar contribution to
their entropy, inducing an intrinsic scatter in the $L_{x}-T$ relation associated
with the distribution of collapse histories. As a result, we would expect to
find systematically higher entropies in younger groups and possibly clusters
provided gravitational reprocessing was effective. There is some evidence that
this is the case. Optically selected, and therefore young, group and cluster
samples have on average lower X-ray surface brightness and smaller gas mass
fractions compared to X-ray-selected samples
(Mahdavi et al., 2000; Hicks et al., 2008; Dai et al., 2010). Optically bright, small to moderately
massive clusters ($kT>4\,{\rm k}{\rm eV}$) at redshifts $z=0.6$–$1.1$ are under-luminous
in X-rays for a given $T$, which implies a reduced gas density and by extension
an enhanced core entropy by roughly a factor of two (Hicks et al., 2008).
Similarly, an X-ray stacking analysis of ROSAT data based on optically
selected groups at low redshift ($0.5\,{\rm k}{\rm eV}<kT<2\,{\rm k}{\rm eV}$) from the Two Micron All
Sky Survey catalog finds systematically lower gas mass fractions than expected,
$f_{\mathrm{gas}}$, (within an over-density of 500 times the critical density of the
universe) and flatter temperature profiles (Dai et al., 2010)111111We caution
the reader that such a stacking analysis could have potential biases: the
stacked X-ray spectrum in a given richness bin might be dominated by a few hot
systems and fitting an average spectrum could then bias the temperatures and
hydrostatic masses high and hence the gas fractions low. Eddington bias from
more numerous smaller systems can additionally lower the average gas fraction
at a given optical richness. Before drawing far reaching conclusions, careful
mock analyses are needed to confirm these results., again implying larger
entropies. Taken at face value, this is in conflict with careful X-ray studies
using Chandra data (Sun et al., 2009) if both samples are believed to
represent the same underlying distribution. Alternatively, this is perfectly
consistent if the entropy of clusters and groups varies with formation time,
with the peak entropy injection occurring near $z\sim 1$. Both are well matched
to the properties of AGN feedback generally, and blazar heating especially.
2.2 Thermodynamic structure of galaxy groups and clusters
The large injections of entropy necessarily influence the formation of cosmic
structure. While the incorporation of an instantaneous entropy floor in
numerical simulations does reproduce the $L_{x}-T$ scaling relations well, it is
ad hoc and fails to produce the observed entropy profiles in a subset of X-ray
luminous, cool core groups with steep entropy profiles and low values of the
central entropy (see Section 1.1). Thus, it is clear that the
effects of heating the ICM are more complicated than the introduction of a
constant, global entropy floor (McCarthy et al., 2008; Fang & Haiman, 2008). Here we explore
whether blazar heating may be capable of producing the observed entropy profiles
as well as some of the features of the distribution of large-scale structures
observed.
While large-scale numerical simulations are required to study the entropy
evolution of groups and clusters in detail, we can use the conservative property
of entropy to estimate the effect of blazar heating on the thermodynamics of
galaxy groups and (to a lesser extent) clusters. Smooth accretion, in which
cold gas enters the cluster through a spherically symmetric accretion shock,
results in self-similar entropy profiles (Voit, 2005). Characteristic
values for the physical parameters are given by their average values within the
virial radius, $R_{200}$ (that we define as the radius of a sphere enclosing a
mean density that is 200 times the critical density of the universe):
$$\displaystyle n_{e,\,200}$$
$$\displaystyle=200\,x_{e}X_{\mathrm{H}}f_{b}\frac{\rho_{\mathrm{cr}}}{m_{p}}=1.7\times 10^{-4}E^{2}(z)\,{\rm cm}^{-3}$$
(3)
$$\displaystyle kT_{200}$$
$$\displaystyle=\frac{GM_{200\,c}\,\mu m_{p}}{2R_{200}}=\frac{\mu m_{p}}{2}\left[10\,GH(z)M_{200\,c}\right]^{2/3}$$
$$\displaystyle=1\left(\frac{M_{200\,c}\,E(z)}{6\times 10^{13}M_{\sun}}\right)^{2/3}\,{\rm keV}$$
$$\displaystyle K_{e,\,200}$$
$$\displaystyle=\frac{kT_{200}}{n_{e,\,200}^{2/3}}=\frac{\mu m_{p}^{5/3}}{2}\left(\frac{4\pi}{15}\,\frac{G^{2}M_{200\,c}}{(1+X_{\mathrm{H}})\,f_{b}H(z)}\right)^{2/3}$$
$$\displaystyle=326\left(\frac{M_{200\,c}\,E^{-1}(z)}{6\times 10^{13}M_{\sun}}\right)^{2/3}\,{\rm keV}\,{\rm cm}^{2}\,,$$
where $f_{b}=\Omega_{b}/\Omega_{m}$ is the universal baryon fraction, the electron
fraction is defined as the ratio of electron and hydrogen number densities, $x_{\rm e}=n_{\rm e}/n_{\mathrm{H}}=(X_{\mathrm{H}}+1)/(2\,X_{\mathrm{H}})=1.158$, in which we assumed a
fully ionized fluid with a primordial hydrogen mass fraction $X_{\mathrm{H}}=0.76$,
and $\rho_{\mathrm{cr}}=\rho_{\mathrm{cr}}(z)=3H^{2}(z)/(8\pi G)$ is the
critical mass density, in which the Hubble function $H(z)$ is given by
$$\frac{H^{2}(z)}{H_{0}^{2}}=E^{2}(z)=(1+z)^{3}\Omega_{m}+(1+z)^{2}(1-\Omega_{m}-\Omega_{\Lambda})+\Omega_{\Lambda}\,.$$
(4)
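As a cross-check of the normalizations in Equations (3) and (4), the characteristic values can be evaluated numerically. The sketch below assumes a flat, WMAP-like cosmology ($h=0.7$, $\Omega_{m}=0.272$, $\Omega_{b}=0.0456$), which may differ slightly from the values adopted here, so the quoted coefficients are only reproduced approximately.

```python
import math

# Physical constants in cgs; the cosmology is an assumption of this sketch
G = 6.674e-8                      # cm^3 g^-1 s^-2
m_p = 1.6726e-24                  # g
keV = 1.602e-9                    # erg
Msun = 1.989e33                   # g
H0 = 70.0 * 1.0e5 / 3.086e24      # s^-1 (h = 0.7 assumed)
Omega_m, Omega_L, Omega_b = 0.272, 0.728, 0.0456

f_b = Omega_b / Omega_m           # universal baryon fraction
X_H = 0.76                        # primordial hydrogen mass fraction
x_e = (X_H + 1.0) / (2.0 * X_H)   # electron fraction, = 1.158
mu = 0.588                        # mean molecular weight

def E(z):
    """Dimensionless Hubble function, Equation (4), flat case."""
    return math.sqrt((1.0 + z)**3 * Omega_m + Omega_L)

def characteristic_values(M200c, z):
    """Mean quantities within R_200, Equation (3): (n_e, kT, K_e)."""
    H = H0 * E(z)
    rho_cr = 3.0 * H**2 / (8.0 * math.pi * G)
    n_e = 200.0 * x_e * X_H * f_b * rho_cr / m_p             # cm^-3
    kT = 0.5 * mu * m_p * (10.0 * G * H * M200c)**(2.0/3.0)  # erg
    K_e = (kT / keV) / n_e**(2.0/3.0)                        # keV cm^2
    return n_e, kT / keV, K_e

n_e, kT, K_e = characteristic_values(6e13 * Msun, 0.0)
print(n_e, kT, K_e)   # roughly 1.6e-4 cm^-3, ~1 keV, ~330 keV cm^2
```

For $M_{200\,c}=6\times 10^{13}\,M_{\sun}$ at $z=0$ this recovers the quoted normalizations of $1.7\times 10^{-4}\,{\rm cm^{-3}}$, $1\,{\rm keV}$, and $326\,{\rm keV\,cm^{2}}$ to within a few percent; the residual depends on the assumed $h$ and $f_{b}$.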
Of particular importance here is that the entropy scales as $K_{e}\propto r^{1.1}$
(Voit, 2005). The predictions of this simple model agree well with
numerical simulations (Tozzi & Norman, 2001) and the entropy profiles in the outer
regions of clusters inferred from recent X-ray observations by Chandra and
XMM-Newton (Cavagnolo et al., 2009; Pratt et al., 2010). Within the smooth accretion
model the stratified profile is built from the inside out, with later accreted
shells containing larger entropy due to gravitational heating at accretion
shocks. Thus, if radiative cooling can be neglected, the entropy distribution
of the gas in the cluster core reflects the entropy of the IGM immediately prior
to the initial collapse of the group/cluster. This assumption is reasonable as
long as the radiative cooling time (Equation (2)) is longer than
the time interval between successive mergers, or equivalently the mass accretion
timescale (see discussion in Sect. 2.1).
In practice, the smooth-accretion picture is over-simplified. Most of
the ultimately accreted gas is not smoothly distributed, but rather
contained within virialized substructures, and therefore has already
been shock heated, fundamentally altering the way in which the entropy
profile is generated. This results in a more complex morphology of
the dissipating structure-formation shocks, which exhibit a rich
network of shock fronts at which the gravitational energy of the gas
is dissipated (Miniati et al., 2000; Ryu et al., 2003; Pfrommer et al., 2006).
Nevertheless, despite this apparent chaos, non-radiative galaxy
cluster simulations that have sufficiently high resolution find
approximately self-similar entropy, density, and temperature
structures outside of the core region, independent of the numerical
method used (Frenk et al., 1999; Voit et al., 2005), yielding a universal entropy
profile of
$$K_{e}(r,z)=1.45\,K_{e,\,200}(z)\,(r/R_{200})^{1.2}.$$
(5)
The reason for this is that the low-entropy/high-density gas settles to the
bottom of the gravitational potential and eventually mixes with the surrounding
halo gas, as can be seen by rewriting the hydrostatic equation,
$$\frac{dP}{dr}=-\frac{GM(<r)}{r^{2}}\left(\frac{P}{K}\right)^{3/5}.$$
(6)
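The consequence of Equation (6) can be illustrated with a short numerical integration. The fixed enclosed mass, boundary pressure, and entropy normalizations below are toy assumptions chosen only to demonstrate the trend, not fitted values.

```python
G = 6.674e-8   # cm^3 g^-1 s^-2

def central_pressure(K_of_r, M, R, P_R, n=20000):
    """Integrate Equation (6), dP/dr = -G M(<r)/r^2 (P/K)^{3/5}, inward
    from r = R to r = 0.1 R, assuming a fixed enclosed mass M (toy model)."""
    dr = R / n
    P = P_R
    for i in range(n, n // 10, -1):
        r = i * dr
        rho = (P / K_of_r(r))**0.6    # from the definition K = P / rho^{5/3}
        P += G * M / r**2 * rho * dr  # pressure grows stepping inward
    return P

M, R, P_R = 1.0e47, 3.0e24, 1.0e-12   # toy values in cgs
K0 = 1.0e33                           # toy entropy normalization (cgs)
K_grav = lambda r: K0 * (r / R)**1.2                  # power-law profile
K_core = lambda r: max(K0 * (r / R)**1.2, 0.3 * K0)   # with preheated floor

P_grav = central_pressure(K_grav, M, R, P_R)
P_core = central_pressure(K_core, M, R, P_R)
print(P_core < P_grav)   # True: a preheated core lowers the central density
```

Because $\rho=(P/K)^{3/5}$, raising $K$ in the core lowers the local density and hence the pressure gradient, which is the mechanism by which a preheated entropy floor produces a flatter, lower-density core.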
Magnetic draping of cluster fields during gravitational settling provides a
thermal insulation of these low-entropy parcels
(Lyutikov, 2006; Dursi & Pfrommer, 2008; Pfrommer & Dursi, 2010), and hence this settling occurs
adiabatically, producing a stratified core in which the hot, high-entropy halo
gas remains at large radii.
If the universe is preheated, the central entropy profile of a newly formed
group or cluster is replaced by a flattened core. Initially, the entropy of the
core is set by that of the IGM at the time the second gas shell is accreted,
forming an accretion shock and adiabatically compressing the gas in the first.
Employing only heating by blazars and structure formation shocks, we show the
resulting entropy profiles for groups of a variety of virial masses and
formation redshifts in Figure 5, exploring the distribution around
the median object mass. These are based upon Equations (3) and
(5) as well as the floor values implied by Figure
1. Interestingly, the blazar-induced entropy floor at $z=0.5$ is
comparable to the gravitationally established entropies in groups of
$3\times 10^{13}\,M_{\sun}$ at radii of $0.2\,R_{200}$ and $0.4\,R_{200}$ for our standard and
optimistic blazar models, respectively. The implied core sizes are similar to
observationally accessible radii of $R_{2500}\sim 0.3\,R_{200}$ of optically
selected clusters (see Figure 1 of Pratt et al., 2010). In our picture, these
correspond to young clusters with recent formation times.
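The comparison underlying Figure 5 can be sketched by combining the $K_{e,\,200}$ normalization of Equation (3) with the universal profile of Equation (5). The $50\,{\rm keV\,cm^{2}}$ floor below is an illustrative stand-in for the blazar-heated entropy floor of Figure 1, not a value read off that figure.

```python
import math

Omega_m, Omega_L = 0.272, 0.728   # assumed flat cosmology

def E(z):
    """Dimensionless Hubble function, Equation (4), flat case."""
    return math.sqrt((1.0 + z)**3 * Omega_m + Omega_L)

def K_e200(M200c_Msun, z):
    """Characteristic entropy at R_200 in keV cm^2, Equation (3)."""
    return 326.0 * (M200c_Msun / (6e13 * E(z)))**(2.0/3.0)

def K_profile(x, M200c_Msun, z, K_floor=0.0):
    """Universal gravitational profile, Equation (5), with a flat core
    wherever an assumed preheated floor exceeds it (x = r/R_200)."""
    return max(K_floor, 1.45 * K_e200(M200c_Msun, z) * x**1.2)

def core_radius(M200c_Msun, z, K_floor):
    """r/R_200 inside which the floor dominates the gravitational profile."""
    return (K_floor / (1.45 * K_e200(M200c_Msun, z)))**(1.0/1.2)

# A 3e13 Msun group forming at z = 0.5 with an assumed 50 keV cm^2 floor:
x_core = core_radius(3e13, 0.5, 50.0)
print(x_core)   # ~0.26, inside the (0.2-0.4) R_200 range quoted in the text
```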
Thus far we have implicitly assumed the instantaneous formation approximation,
i.e., that we may identify a particular redshift at which to calculate
$K_{e,\,200}$ and the core entropy level due to blazar heating. This is
naturally identified with the redshift at which the group/cluster has assembled
half of its mass, after which strong structure-formation shocks develop. Figure
4 shows the accretion histories for groups/clusters in the mass range
of relevance here, noting explicitly the half-mass redshifts for a variety of
accretion histories. Typically, these occur after $z\simeq 1$, by which time
blazar heating has had an opportunity to inject significant amounts of entropy
into the IGM, implying that the TeV blazars can have a substantial impact upon
the structure of groups/clusters in practice. (Footnote 12: The half-mass
redshifts depend on the adopted definition of the halo mass (and of course on
cosmology, which we fix here for simplicity). While we choose to use $M_{200\,c}$
in this section, we note that the half-mass redshifts are somewhat smaller when
adopting the definition of the virial mass with an overdensity that varies with
redshift (Bryan & Norman, 1998). In this case, we obtain $z_{0.5}=\{0.85,0.74,0.62,0.48,0.36\}$ for our halos of mass $M_{\mathrm{vir}}=\{1.3,3.8,13,38,130\}\times 10^{13}\,M_{\sun}$.) However, we note that median mass accretion
histories are too simplified to assess the impact of blazar heating upon
group/cluster entropy profiles in detail since they do not address the radial
redistribution of accreted gas. Furthermore, we do not attempt to address the
generation of entropy by gravitational reprocessing during mergers of later
accreted material in Figure 5. After this explicit demonstration of
the impact of a redshift-dependent entropy floor on cluster entropy profiles, we
turn to the implications of these for the cluster population as a whole.
2.3 Implications for the bimodality of core entropy values
The thermodynamic properties of clusters in their centers show a clear
bimodality, which is traditionally separated into two classes: CC and NCC
clusters. The former are defined to have temperature profiles that decline
significantly toward the center, whereas the latter have flat central
temperature profiles and are often associated with merger events. As a
result, the distribution of core entropy also appears to be bimodal in clusters,
with the CC population peaking at $K_{e,0}\sim 15\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$ and NCCs at
$K_{e,0}\sim 150\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$ separated by a gap between $K_{e,0}\sim(30$–$50)\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$. Roughly half of the entire population of galaxy
clusters in the Chandra archival sample show cooling times that are longer
than 2 Gyr and have high-entropy cores typical of NCCs (Cavagnolo et al., 2009).
This CC/NCC bimodality appears to be real and not due to archival bias as a
complementary approach has shown with a statistical sample (Sanderson et al., 2009).
Before we discuss how blazars may affect the relative abundance of these
cluster populations, we briefly review how the currently favored hypothesis of
AGN feedback compares to the data. The possibility that AGN feedback can raise the core
entropy $K_{e,0}$ to values representative of NCCs on the buoyancy timescale
(e.g., Guo & Oh, 2009) is not supported by observations. This is explicitly
shown in Figure 6 which correlates $K_{e,0}$ with the volume work done
by the expanding bubbles, $E_{\mathrm{cav}}$, and with the cavity power,
$P_{\mathrm{cav}}$, in systems that show X-ray cavities inflated by AGN bubbles. The
total energy required to inflate a cavity is equal to its enthalpy, given by
$$E_{\mathrm{cav}}=\frac{\gamma}{\gamma-1}\,PV_{\mathrm{tot}}=4\,PV_{\mathrm{tot}}$$
(7)
assuming a relativistic equation of state $\gamma=4/3$ within the bubbles. The
cavity power is estimated using $P_{\mathrm{cav}}=E_{\mathrm{cav}}/t_{\mathrm{buoyancy}}$
(Bîrzan et al., 2004; Rafferty et al., 2006). (Footnote 13: The buoyancy
timescale could be an overestimate of the true age, since the cavity is
expected to move outward supersonically during the early, momentum-dominated
phase of the jet. It could also be an underestimate of the true age, since
magnetic draping provides an additional drag force slowing down the rise of the
bubble (Dursi & Pfrommer, 2008).) We compare the energy used to inflate the
cavities to the
gas binding energy of the core region within a spherical region of radius
$R_{2500}\simeq R_{200}/3$,
$$\displaystyle E_{b,2500}$$
$$\displaystyle=f_{\mathrm{gas,2500}}\,\frac{GM_{2500}^{2}}{2R_{2500}}=\frac{f_{\mathrm{gas,2500}}}{2}M_{2500}^{5/3}\left[10\,GH(z)\right]^{2/3}$$
(8)
$$\displaystyle\simeq 1\times 10^{60}\,{\rm erg}\left(\frac{kT_{X}}{1\,{\rm keV}}\right)^{3.23}$$
at $z=0$. Here we use the phenomenological scalings obtained by X-ray
observations of $h_{70}\,M_{2500}=M_{5}\,(kT_{\mathrm{X}}/5\,{\rm keV})^{1.64\pm 0.06}$, with
$M_{5}=(2.5\pm 0.1)\times 10^{14}\,h_{70}^{-1}\,\mathrm{M}_{\sun}$ (Vikhlinin et al., 2006)
and $f_{\mathrm{gas},2500}=(0.0347\pm 0.0016)\,(kT_{\mathrm{X}}/1\,{\rm keV})^{0.509\pm 0.034}$
(Sun et al., 2009).
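A quick numerical check of Equations (7) and (8), using the central values of the quoted scaling relations; the choice $h_{70}=1$ and the cgs constants are assumptions of this sketch.

```python
import math

G = 6.674e-8              # cm^3 g^-1 s^-2
H0 = 70.0e5 / 3.086e24    # s^-1 (h_70 = 1 assumed)
Msun = 1.989e33           # g

def E_cav(PV_tot, gamma=4.0/3.0):
    """Cavity enthalpy, Equation (7); gamma = 4/3 gives 4 PV_tot."""
    return gamma / (gamma - 1.0) * PV_tot

def E_b2500(kT_keV):
    """Core binding energy at z = 0, Equation (8), with the central
    values of the Vikhlinin et al. (2006) and Sun et al. (2009) scalings."""
    M2500 = 2.5e14 * (kT_keV / 5.0)**1.64 * Msun
    f_gas = 0.0347 * kT_keV**0.509
    return 0.5 * f_gas * M2500**(5.0/3.0) * (10.0 * G * H0)**(2.0/3.0)

print(E_cav(1.0))     # gamma/(gamma-1) = 4 for gamma = 4/3
print(E_b2500(1.0))   # ~1e60 erg, matching the quoted normalization
```

The effective slope of $E_{b,2500}$ with temperature is $(5/3)\times 1.64+0.509\simeq 3.24$, reproducing the quoted exponent of 3.23.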
$P_{\mathrm{cav}}$ and $E_{\mathrm{cav}}$ both measure the power and energy that are in
principle available for heating the ICM, if they can be efficiently tapped by
some process. (Footnote 14: We note that of this $4\,PV_{\mathrm{tot}}$, only $PV_{\mathrm{tot}}$
is available in the form of mechanical energy, while the internal energy $U=3\,PV_{\mathrm{tot}}$ is presumably still stored within the bubbles. What fraction of
this internal energy is eventually thermalized, and thus can potentially
contribute to unbinding the cluster gas and/or raising the core entropy, is
not a priori clear. If the energy is stored in cosmic rays or magnetic fields,
it may be transferred to the thermal pool via cosmic ray Alfvén-wave heating
(Kulsrud & Pearce, 1969) or magnetic reconnection, respectively. Nevertheless,
the uncertainty induced by this is small in comparison to the many orders of
magnitude increase in cavity energy required to explain the non-cool core
clusters.) However, as shown in Figure 6, even the most energetic
and powerful AGN outbursts, with $E_{\mathrm{cav}}\sim 10^{62}\,{\rm erg}$ and
$P_{\mathrm{cav}}\sim 10^{46}\,{\rm erg}\,{\rm s}^{-1}$, e.g., MS0735+7421 and Zwicky 2701,
which are energetically capable of unbinding the gas in the core regions, are
unable to disrupt the CC and transform the cluster into an NCC state on a
buoyancy timescale. This is apparent from the low core entropy values (with a
median $K_{e,0}=15\,{\rm keV\,cm^{2}}$) of typical CC clusters. This is
quantified by a linear Pearson correlation coefficient of 0.71 between
$\log K_{e,0}$ and $\log P_{\mathrm{cav}}$ or $\log E_{\mathrm{cav}}$, respectively.
This demonstrates that the $PdV$ work done by these expanding cavities is
transferred inefficiently to the surrounding medium, i.e., the ICM entropy
produced by AGN-inflated bubbles is far less than the virial value of
$K_{e}\sim 540\,{\rm keV}\,{\rm cm}^{2}\times(M_{200\,c}/\mathrm{M}_{14})^{2/3}$, but enough to arrest
overcooling.
On the other hand, there is a strong anticorrelation between the radio power of
the brightest cluster galaxy and $K_{e,0}$ for nearby clusters ($z<0.2$),
implying that bright radio emission is preferentially “on” for $K_{e,0}\lesssim 40\,{\rm k}{\rm eV}{\rm c}{\rm m}^{2}$ (see Figure 2 in Cavagnolo et al., 2008). While AGN
feedback seems to be unable to transform a CC to an NCC cluster (on a buoyancy
timescale), it appears to be critical in stabilizing the thermal atmospheres
from entering a cooling catastrophe and collapsing. In principle the impact of
AGN-induced turbulence on heat transport (conductively and advectively) could
result in a CC to NCC metamorphosis on a much longer ($>$ Gyr) timescale
(Parrish et al., 2010; Ruszkowski & Oh, 2010). This is because the temperature difference
between the maximum of the temperature profiles to the cold center is at most a
factor of three (Vikhlinin et al., 2006), implying that heat transport could
initially increase the central entropy by a similar factor as $K_{e,0}\propto kT$. In the absence of radiative cooling, the associated pressure enhancement
would adiabatically expand and thereby cool the gas, hence restoring the
temperature gradient. Sustained conduction and advection could further increase
the central entropy. However, the timescale for such a process is long in
comparison to radiative cooling timescales ($\lesssim 1$ Gyr) as well as
cluster assembly and merging timescales, which calls into question the ability
of this mechanism to boost the central entropies to values of $K_{e,0}\simeq 600\,{\rm keV}\,{\rm cm}^{2}$, which lie at the tail end of the distribution
(Cavagnolo et al., 2009).
Because blazar heating does not inject large amounts of entropy at $z\gtrsim 2$,
and blazars do not efficiently heat high density regions ($1+\delta>10$),
objects that have already collapsed by that time or shortly thereafter will not
be significantly affected by blazars (barring the effect of clustering bias at
early redshifts). In our scenario, these early-forming groups evolve into CC
systems at the present epoch that potentially need to be stabilized by a
self-regulated feedback process, e.g., provided by the radio mode of AGNs.
However, the subset of groups that forms after $z\simeq 1$ can be severely
affected. If such a group is viewed shortly after its formation, it should still
exhibit the elevated core entropies associated with the prior blazar heating.
If such a late-forming group has a cooling timescale long in comparison to the
interval between cluster/group mergers, merger shocks can gravitationally
reprocess the entropy cores and amplify them by a factor of up to five
(Borgani et al., 2005, see also Section 2.1). These late-forming
groups would then evolve into NCC systems. (Footnote 15: Using Chandra
observations, it has been claimed that the incidence rate of cool core clusters
at redshifts $z>0.5$ is much smaller than their fraction at low redshifts
(Vikhlinin et al., 2007). We note, first, that the development of the
characteristic cuspy X-ray brightness profiles of cool cores at $z=0$ requires
the cooling time to be considerably shorter than the formation time, a criterion
that is often not fulfilled at the (high) redshifts in question. Second, in
order to construct their sample, Vikhlinin et al. (2007) discarded regions with
AGN emission, which can be correlated with cool cores, potentially biasing their
absolute cool core rates low. Hence, this observation is not in contradiction
with our proposed scenario.) For sufficiently late-forming clusters that
experience a series of fast successive merger events and, hence, avoid
substantial cooling phases, we expect that gravitational reprocessing should
boost the central core entropy in the most extreme case from
$\sim 100\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$ to $\sim 600\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$, allowing for reprocessing of
the blazar-heated entropy floor due to gravitational heating. Typically,
however, blazar heated entropies at the time of turnaround for late-forming
groups/clusters are $\sim 50\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$. Allowing for modest cooling periods
in between mergers should yield smaller median core entropy values of
$\sim(100-150)\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$ after accounting for gravitational
reprocessing. These estimates compare favorably with observed values of NCC
clusters (Cavagnolo et al., 2009).
We point out that it is very natural for systems with a unimodal
distribution in core entropy values following group formation to
evolve into a bimodal distribution today as a consequence of both the
cooling instability and the gravitational reprocessing of (temporally
increasing) elevated entropy cores. The observed CC and NCC cluster
populations are centered upon the two attractor solutions that a
galaxy cluster can evolve into. In the
case of CC clusters, the core evolution is driven by the well-known overcooling
problem, encountered in cosmological simulations of galaxy clusters. Below a
critical core entropy, purely hydrodynamical mergers are incapable of disrupting
a compact CC system and transforming it into an NCC object (Poole et al., 2008).
For NCC clusters, the core evolution is driven by the rapid (in comparison to
$t_{\mathrm{cool}}$, which effectively sets the critical core entropy value within a
given epoch) succession of mergers, which has been demonstrated to further
elevate the core entropy values substantially (Borgani et al., 2005).
Interestingly, this solution is not a runaway solution; instead, the
gravitational bootstrapping of blazar preheated entropy adjusts to the system
size. Hence, in this picture, the core entropy should never be able to exceed
the entropy at the virial radius (according to virial arguments) and most likely
reach only values that are a fraction of that (at least for cluster systems) due
to radiative cooling and the modest blazar preheated entropy values in
comparison to early preheating models.
To conclude, we demonstrate explicitly that the core entropy values in CC
clusters have a very weak correlation with the mechanical energy and power of
X-ray cavities inflated by AGNs. Hence, $PdV$ work done by these expanding
cavities is transferred inefficiently to the surrounding medium. This strongly
suggests that while AGN feedback seems to be critical in stabilizing CC systems,
it cannot transform CC into NCC systems (at least on the buoyancy
timescale). With this evidence it seems even more pressing to pursue alternative
solutions such as the presented blazar heating scenario in combination with
gravitational reprocessing that provides a plausible scenario for the observed
CC/NCC bimodality. Future cosmological hydrodynamical simulations that include
the effect of clustering bias of blazar heating are needed to study these
considerations in greater detail. (Footnote 16: A detailed prediction of the
core entropy distribution of clusters at a given mass (or temperature) depends
on the merger history of all objects at that mass (to understand which number
fraction of clusters was channeled into the cooling branch due to comparatively
fast radiative cooling), on the distribution of the departure times of the gas
that ended up in the core from average densities (to address the exact magnitude
of blazar heating for the population of clusters rather than for individual
systems), and on quantifying the efficiency of gravitational reprocessing for
the entire population of groups and clusters; Borgani et al. (2005) only
simulated one group and one cluster for different variants of the physics.)
2.4 Implications for the Sunyaev-Zel’dovich power spectrum
The thermal SZ effect provides a direct probe of the gas properties of groups
for $z\gtrsim 0.5$. The SZ effect arises from CMB photons that inverse Compton
scatter off thermal electrons within the hot plasma in galaxy clusters and
groups, producing a localized perturbation to the CMB spectrum
(Sunyaev & Zel’dovich, 1972; Sunyaev & Zel’dovich, 1980). The thermal SZ effect directly measures the thermal
electron pressure in the gas and has the important property that its amplitude
is independent of redshift. The pressure fluctuation spectrum of unresolved
groups and clusters dominates the CMB power spectrum on angular scales smaller
than $3\arcmin$ (corresponding to a multipole moment $\ell\simeq 3000$) and half
of the SZ power spectrum signal at $\ell\simeq 3000$ comes from groups with
$M_{500}<2\times 10^{14}M_{\sun}$ and $z>0.5$ (Trac et al., 2011; Battaglia et al., 2011). At
these scales, the SZ power spectrum depends on the square of the Fourier
transform of the average pressure profile of clusters/groups; a more
concentrated pressure profile implies more power, a smoother one less.
A population of groups/clusters with high core entropies,
$K_{0}=(50$–$100)\,{\rm k}{\rm eV}\,{\rm c}{\rm m}^{2}$, implies a smoother pressure core distribution
than if the cores had cooled since formation and developed a more concentrated
entropy profile. This is very similar to the effect of AGN feedback which
injects entropy into the core of groups, smoothing out the resulting pressure
profile (Battaglia et al., 2010). Adopting empirically motivated “universal”
pressure profiles constrained by X-ray observations, the peak amplitude of the
SZ power spectrum is reduced for NCC pressure profiles as compared to those of
CC clusters (Efstathiou & Migliaccio, 2011). Hence, we expect the effect of blazar
heating to result in a suppression of the SZ power spectrum for scales
$\ell\gtrsim 2000$, which probe the pressure profile mostly inside $R_{500}$. An
abundant population of late-forming groups with high $K_{0}$-values (that has not
been taken into account in any numerical modeling of the SZ power spectrum so
far, e.g., by Battaglia et al. 2010 or Trac et al. 2011) would furthermore
reduce the thermal SZ power spectrum in comparison to these numerical
approaches. This has potentially important observational consequences since
angular scales around $3\arcmin$ are the sweet spot for current telescopes
measuring the high-$\ell$ CMB angular power spectrum, e.g., the South Pole
Telescope
(SPT; Lueker et al., 2010; Shirokoff et al., 2011; Keisler et al., 2011; Vanderlinde et al., 2010) and the
Atacama Cosmology Telescope (ACT; see,
e.g., Fowler et al., 2010; Dunkley et al., 2011; Marriage et al., 2011).
In addition to the astrophysical dependence, the amplitude of the SZ power
spectrum also depends very sensitively on cosmological parameters that are
responsible for the growth of structure, $C_{\ell}\propto\sigma_{8}^{7\ldots 9}(\Omega_{b}h)^{2}$, where the rms amplitude of the (linear) density power spectrum
on cluster-mass scales is denoted by $\sigma_{8}$. Hence, there is an interesting
degeneracy in the amplitude (and shape) of the SZ power spectrum between the
cosmological information (dominated by $\sigma_{8}$) and the astrophysical
information contained in the average pressure profile. Currently, numerical
models of the SZ power spectrum are consistent with the data at the 1 $\sigma$
level (Dunkley et al., 2011; Shirokoff et al., 2011). However, after allowing for a
substantial signal from patchy reionization (Iliev et al., 2007, 2008), which
boosts the kinetic SZ effect and hence the total SZ signal, the power predicted
by these models becomes uncomfortably high. Suppression of power due to blazar
heating represents a promising mechanism by which to reconcile the expected and
observed SZ power spectrum.
Unfortunately, the impact of blazar heating is degenerate with energy injection
from AGN feedback in the dense cores of groups at early times (where there are
also very few constraints). Here, we sketch a promising idea for how to
discriminate between the effects of AGN feedback and blazar heating on the SZ
power spectrum. In particular, AGN feedback and blazar heating vary as a
function of cluster mass and redshift. Their effects on the pressure profile of
groups and clusters and, hence, their imprint on the SZ signal also vary as a
function of cluster mass and redshift. By cross-correlating CMB maps (where
foreground components and primary anisotropies have been subtracted) with deep
optical redshift surveys that are separately binned in redshift and cluster
mass, i.e., optical richness estimators, we can perform SZ tomography. Such
tomographic SZ power spectra (similar to Figures 7 and 8 of Battaglia et al., 2011, for the case
of AGN feedback) will enable us to derive a redshift and
mass-dependent mean cluster pressure profile. Potentially, this could
disentangle the different non-gravitational energy injections of AGN feedback
and blazar heating in clusters. However, large cosmological hydrodynamical
simulations are required to obtain detailed predictions of either process which
shall be subject to future work.
3 Structure formation and dwarfs
The entropy injected into the IGM by blazars not only modifies the structures of
groups and potentially clusters, but also has an observable effect upon
cosmological structure formation. Heating the IGM produces higher IGM
pressures, which in turn suppress the gravitational instability on sufficiently
small scales. Thus, there is a characteristic length scale, and hence
characteristic mass ($M_{C}$), below which objects will not form. The particular
value of this critical mass depends upon how structures form in practice, and
generally requires a fully nonlinear study of structure formation (e.g., that
provided by large-scale numerical simulations). Nevertheless, we may estimate
the relevant characteristic length and mass scales via linear perturbation
theory.
A very rough idea of the impact of blazar heating may be derived from the
Jeans wavenumber, obtained by balancing the sound crossing and free-fall
timescales:
$$k_{J}(a)\equiv\frac{a}{c_{s}(a)}\,\sqrt{4\pi G\bar{\rho}(a)}\,,$$
(9)
where, for convenience, we have introduced the cosmic scale factor
$a\equiv 1/(1+z)$; here $\bar{\rho}(a)\equiv\Omega_{m}(a)\rho_{\mathrm{cr}}(a)$ is the
mean total mass density of the universe, and $c_{s}(a)\equiv\sqrt{5kT(a)/3\mu m_{p}}$ is the linear sound speed, in which $T$ denotes the temperature at mean
density and $\mu=0.588$ the mean molecular weight for a fully ionized
medium of primordial element abundance. In a static background universe,
perturbations on scales smaller than $2\pi a/k_{J}$ are stable, i.e., the ambient
gas pressure is sufficient to counteract gravitational collapse. As a result,
cooling, fragmentation, and star formation are suppressed within objects of mass
less than the Jeans mass, $M_{J}$,
$$M_{J}(a)\equiv\frac{4\pi}{3}\,\bar{\rho}(a)\,\left(\frac{2\pi a}{k_{J}(a)}\right)^{3}=\frac{4\pi^{5/2}}{3}\,\frac{c_{s}^{3}(a)}{G^{3/2}\bar{\rho}^{1/2}(a)}\,.$$
(10)
The Jeans mass associated with a universe which has been heated by TeV
blazars, $M_{J,{\rm blazar}}$, is necessarily larger than that of one which has
only undergone photoionization heating, $M_{J,{\rm photo}}$, by
$$\frac{M_{J,{\rm blazar}}}{M_{J,{\rm photo}}}=\left(\frac{c_{\mathrm{s,blazar}}}{c_{\mathrm{s,photo}}}\right)^{3}=\left(\frac{T_{\mathrm{blazar}}}{T_{\mathrm{photo}}}\right)^{3/2}\gtrsim\left\{\begin{array}[]{rl}18,&\mbox{stand. model},\\
50,&\mbox{opt. model},\end{array}\right.$$
(11)
where we used a temperature ratio of
$T_{\mathrm{blazar}}/T_{\mathrm{photo}}\gtrsim\{7,14\}$ at the present
epoch for the standard and optimistic model, respectively (see
Figure 1 and the discussion surrounding the
standard and optimistic blazar heating histories). That is, blazar
heating increases the mass of the smallest collapsed objects by more
than an order of magnitude with the exact value depending on the
adopted numbers of blazars contributing to the heating.
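The scaling in Equation (11) follows directly from Equation (10), since the density dependence cancels in the ratio. The sketch below verifies this; the mean density and photoheated IGM temperature are illustrative assumptions.

```python
import math

G = 6.674e-8        # cm^3 g^-1 s^-2
m_p = 1.6726e-24    # g
k_B = 1.381e-16     # erg K^-1
mu = 0.588

def c_s(T):
    """Linear sound speed for a fully ionized primordial gas, cm s^-1."""
    return math.sqrt(5.0 * k_B * T / (3.0 * mu * m_p))

def M_J(T, rho_bar):
    """Jeans mass, Equation (10), at mean total density rho_bar (g cm^-3)."""
    return (4.0 * math.pi**2.5 / 3.0) * c_s(T)**3 / (G**1.5 * math.sqrt(rho_bar))

rho0 = 2.5e-30       # assumed mean matter density today, g cm^-3
T_photo = 2.0e4      # assumed photoheated IGM temperature, K
for boost in (7.0, 14.0):                  # standard / optimistic T ratios
    print(M_J(boost * T_photo, rho0) / M_J(T_photo, rho0))
# prints 7**1.5 ~ 18.5 and 14**1.5 ~ 52.4, matching Equation (11)
```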
While the Jeans mass provides a way to estimate the importance of blazar
heating, $M_{J}$ typically exceeds $M_{C}$ by up to an order of magnitude
because it neglects the Hubble expansion. That is, it fails to account for the
time required for the pressure to influence the evolving gas distribution. A
more rigorous treatment of linearized density perturbations in a baryon-dark
matter fluid finds that the overdensity in dark matter, $\delta_{d}(t,k)$, and the
overdensity in the baryon distribution, $\delta_{b}(t,k)$, both of which are
functions of time and comoving wavenumber, are related by
$$\frac{\delta_{b}(t,k)}{\delta_{d}(t,k)}=1-\frac{k^{2}}{k_{F}^{2}}+\mathcal{O}(k^{4})\,,$$
(12)
for some $k_{F}$ (Gnedin & Hui, 1998). The “filtering scale”,
associated with $k_{F}$, defines a size below which baryonic
perturbations are smoothed despite the growth of background dark
matter perturbations. The filtering wavenumber can be related to
$k_{J}$ by
$$\displaystyle\frac{1}{k_{F}^{2}(t)}=\frac{1}{D_{+}(t)}\int_{0}^{t}dt^{\prime}\,a^{2}(t^{\prime})\,\frac{\ddot{D}_{+}(t^{\prime})+2H(t^{\prime})\dot{D}_{+}(t^{\prime})}{k_{J}^{2}(t^{\prime})}\int_{t^{\prime}}^{t}\frac{dt^{\prime\prime}}{a^{2}(t^{\prime\prime})}\,,$$
(13)
where $D_{+}(t)$ is the linear growth function and is dependent upon the cosmology
(Gnedin, 2000). While $k_{F}$ is related to $k_{J}$, at any given time the two
can be very different, since $k_{F}$ is an integral over the past evolution of the
Jeans scale, weighted by the appropriately scaled growth function.
The associated mass scale, defined in analogy with the Jeans mass, is
$$M_{F}(a)\equiv\frac{4\pi}{3}\,\bar{\rho}(a)\,\left(\frac{2\pi a}{k_{F}(a)}\right)^{3}\,,$$
(14)
(the details of how this is computed in practice are collected in Appendices
A and B). To understand the physics underlying the
filtering mass, it is instructive to connect the filtering mass to the entropy
of the IGM. To this end, we use the definition for the entropy $K$ of
Equation (1). As we show in Appendix A, the filtering scale $\lambda_{F}=2\pi a/k_{F}$ is an integral over the entropy evolution,
$$\displaystyle\frac{1}{k_{F}^{2}(a)}$$
$$\displaystyle=$$
$$\displaystyle\frac{A_{0}}{D_{+}(a)}\int_{0}^{a}da^{\prime}\,K(a^{\prime})\,\frac{D_{+}(a^{\prime})}{a^{\prime 3}E(a^{\prime})}\int_{a^{\prime}}^{a}\frac{da^{\prime\prime}}{a^{\prime\prime 3}E(a^{\prime\prime})},$$
(15)
$$\displaystyle A_{0}$$
$$\displaystyle=$$
$$\displaystyle\frac{5}{3}\,\left(\frac{3\,\Omega_{m}}{8\pi GH_{0}}\right)^{2/3}.$$
(16)
Here $E(a)=H(a)/H_{0}$ is the dimensionless Hubble function. Since $M_{F}\propto k_{F}^{-3}$, the linear filtering mass is also determined by the entropy evolution
at mean density and appropriately scaled with the linear growth function, Hubble
function, and cosmic scale factor. Since the integrand in
Equation (15) is positive, the linear filtering mass increases
monotonically with time.
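The nested integral of Equation (15) can be evaluated numerically. The sketch below assumes a flat $\Lambda$CDM background and a toy rising entropy history $K(a)\propto a^{2}$; the cosmological parameters, the entropy history, and the units of $A_{0}$ are all illustrative assumptions rather than values from the paper.

```python
import numpy as np

# Numerical sketch of Eq. (15) with a toy rising entropy history K(a) ~ a^2.
# A0 and K(a) are in arbitrary units; parameter values are illustrative.
OM, OL = 0.3, 0.7

def trapz(y, x):
    """Trapezoidal rule (avoids version-dependent numpy function names)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def E(a):
    """Dimensionless Hubble function E(a) = H(a)/H0 for flat LCDM."""
    return np.sqrt(OM / a**3 + OL)

def growth(a, n=1000):
    """Unnormalized linear growth function, D_+ ~ E(a) * int da'/(a' E(a'))^3."""
    ap = np.linspace(1e-4, a, n)
    return E(a) * trapz(1.0 / (ap * E(ap))**3, ap)

def inv_kF2(a, K=lambda x: x**2, A0=1.0, n=300):
    """Right-hand side of Eq. (15) for an entropy history K(a)."""
    a1 = np.linspace(1e-3, a, n)
    inner = np.empty(n)
    for i, x in enumerate(a1):            # inner integral over a''
        a2 = np.linspace(x, a, n)
        inner[i] = trapz(1.0 / (a2**3 * E(a2)), a2)
    D1 = np.array([growth(x) for x in a1])
    integrand = K(a1) * D1 / (a1**3 * E(a1)) * inner
    return A0 / growth(a) * trapz(integrand, a1)

# With a rising entropy history, 1/k_F^2 (hence M_F ~ k_F^-3) grows with a:
v = [inv_kF2(a) for a in (0.3, 0.6, 1.0)]
assert v[0] < v[1] < v[2]
```

In an Einstein-de Sitter limit with $K(a)\propto a^{2}$ this reduces analytically to $1/k_{F}^{2}\propto a$, which the numerical result tracks at early times.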
We expect the strong increase of the entropy due to blazar heating to drive a
comparably strong increase in $M_{F}$, though slightly delayed due to the
effective weighting function in $M_{F}$. Physically, this implies that the entropy
floor delivered by photo- or blazar heating at densities around the cosmic mean
is then conserved by adiabatic compression during structure formation and
directly translates to a linear filtering mass $M_{F}$. However, during this
nonlinear process of structure formation, the entropy can be additionally
augmented through dissipation of gravitational energy in cosmic
structure-formation shocks or decreased through radiative cooling. The
competition of these two processes determines which will dominate and in turn
sets the critical mass $M_{C}$ which is required for the condensation of gas into
a dark matter halo and eventually the formation of a galaxy.
As with $M_{J}$, $M_{F}$ typically exceeds the $M_{C}$ observed in simulations by a
substantial factor. Whether a halo can accrete gas is determined by the gas
temperature (or equivalently entropy) at the virial radius which can only be
obtained through hydrodynamic cosmological simulations. Estimated values for
this critical mass using cosmological simulations of nonlinear structure
formation with only photoheating yield $M_{C}(z=0)=6.5\times 10^{9}h^{-1}M_{\sun}$ (Hoeft et al., 2006; Okamoto et al., 2008)—one order of magnitude smaller than
the linear analog $M_{F}$ at $z=0$ (Okamoto et al., 2008, see also
Figure 7). This discrepancy diminishes with increasing redshift, leading to
reasonably good agreement at high redshift ($z>6$) between $M_{F}$ and $M_{C}$. To model the nonlinear behavior of $M_{F}$, we can
introduce a correction factor $C(z)=M_{C}(z)/M_{F}(z)=(1+z)^{1.1}/11.8$, where
$M_{F}(z)$ has been modeled from the temperature evolution at mean density in the
cosmological simulations and includes weakly nonlinear aspects of structure
formation (Okamoto et al., 2008; Macciò et al., 2010). However, validating this
approximation for $M_{C}$ will ultimately require numerical simulations that
include blazar heating.
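The correction factor quoted above is simple enough to sketch directly; the input filtering mass below is a hypothetical value chosen only to illustrate the order-of-magnitude gap at $z=0$.

```python
# Sketch of the nonlinear correction quoted in the text:
# M_C(z) = C(z) * M_F(z) with C(z) = (1+z)^1.1 / 11.8
# (Okamoto et al. 2008; Maccio et al. 2010).
def characteristic_mass(M_F, z):
    """Nonlinear characteristic mass M_C from the linear filtering mass."""
    return M_F * (1.0 + z)**1.1 / 11.8

M_F0 = 6.5e10  # hypothetical linear filtering mass at z = 0, in h^-1 Msun
M_C0 = characteristic_mass(M_F0, 0.0)
# At z = 0 the factor is 1/11.8, i.e., roughly the order-of-magnitude gap
# between M_F and M_C discussed in the text.
assert abs(M_C0 - M_F0 / 11.8) < 1.0
```

Since $C(z)$ grows with redshift, the correction shrinks toward high $z$, consistent with the improving agreement between $M_{F}$ and $M_{C}$ noted above.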
Figure 7 shows $M_{F}$ as a function of redshift for a void and the
cosmic mean when various heating mechanisms are considered (which agrees
for the standard cosmology with Figure 5 of Gnedin, 2000). Note that for all
of the thermal histories we considered, the resulting $M_{F}$ nicely follows the
expected analytical form over its full range of validity prior to recombination
(which necessarily implies an Einstein-de Sitter universe, see the derivation in
Appendix B). While blazar heating increases $M_{F}$ in voids as
early as $z\sim 3$, perturbations at the cosmic mean are affected slightly later
($z\sim 2.5$). The extra suppression of the formation of small galaxies due to
blazar heating amounts to a mass suppression factor of $\sim 25-70$ for voids and
$\sim 6.5-15$ for the cosmic mean today. The range indicates the uncertainty of the
blazar heating models and the larger value belongs to the model that matches the
inverted temperature-density relation in the Ly$\alpha$ forest found by Viel et al. (2009), i.e.,
the optimistic model. The resulting estimates for the nonlinear characteristic
mass $M_{C}$ are also shown in Figure 7 with red lines for our models
with and without blazar heating. Apparently the increase of $M_{C}$ due to blazar
heating is able to counteract the suppression of $M_{F}$ in nonlinear theory. We
note that our estimate for $M_{C}$ is slightly decreasing for $z<1$ in the model
that employs only photoheating. This decrease is an artifact of our linear
theory $M_{F}$ which by construction does not model nonlinear aspects of
structure formation such as shell crossing or formation shocks that raise the
temperature of some patches already at mean density.
We now turn to two outstanding problems in cosmological structure formation that
the recent blazar heating may help to address: the missing satellite (Section
3.1) and void dwarf (Section 3.2) problems. Finally in
Section 3.3 we discuss how blazar heating may naturally address some
of the difficulties with WDM cosmologies of galaxy formation.
3.1 Dwarf satellites in the Milky Way
The heating due to blazars provides an additional mechanism to suppress the
formation of dwarfs. Unlike photoionization models, which typically invoke the
heating at reionization, blazar heating provides a well defined, time-dependent
suppression mechanism, with the suppression rising dramatically after $z\sim 2$.
This can be seen explicitly by the steep increase in filtering mass for these
redshifts in Figure 7. In addition, owing to the homogeneous nature
of the heating, i.e., a constant volumetric rate that is independent of density,
blazars deposit more energy per baryon in low-density regions; the heating
therefore suppresses structure formation most efficiently in precisely those
regions that, due to their negative bias, host late-forming dwarf halos.
In fact, the star formation histories of dwarf galaxies provide a strong
constraint upon the magnitude of blazar heating: if dwarf galaxy formation is
suppressed by high-energy gamma-ray emission from blazars, all dwarf star
formation histories must begin prior to $z\simeq 2$, roughly the redshift at
which blazar heating becomes significant. Over the past decade most of the Local
Group dwarfs have been observed with the Hubble Space Telescope, shifting
the focus from counting dwarfs to resolving the individual stellar populations
within these objects. Thus, it has become possible to construct detailed
color-magnitude diagrams of every dwarf galaxy and therefore their detailed star
formation histories (Dolphin et al., 2005; Holtzman et al., 2006; Orban et al., 2008). While the
data shows a great variety of star formation histories—some continuous, some
bursty, some truncated—they all have in common that they extend beyond a
lookback time of $10\,{\rm Gyr}$ (corresponding to $z=2$). There is no known case of
a dwarf galaxy that formed its first set of stars after $z\simeq 2$. The fact that
there exists an old stellar population in every dwarf in the Local Group
provides a very important test that our model of blazar heating successfully
passes.
The degree to which blazar heating can suppress the number of dwarf galaxies
depends upon the redshifts at which they are typically formed. By combining
high-resolution $N$-body simulations of the evolution of a Galaxy-sized halo with
semianalytic models of galaxy formation, Macciò & Fontanot (2010) and
Macciò et al. (2010) have inferred the statistics, formation
time,$^{17}$Macciò et al. (2010) defined the formation time of dwarfs as the
redshift where the progenitors of dwarfs that end up within the Milky Way halo
today exceeded a virial temperature of $T_{200}=10^{4}\,{\rm K}$ such that H i
cooling of the gas becomes possible in these halos. That virial temperature
corresponds to a mass threshold of $M_{200}\simeq 10^{9}\,M_{\sun}$ at $z=1$ and
$10^{8}\,M_{\sun}$ at $z=10$. and accretion histories of the Milky Way
satellites. Their findings are in good agreement with recent work by other
groups (Koposov et al., 2009; Muñoz et al., 2009; Guo et al., 2010; Busha et al., 2010; Font et al., 2011) and
thus can be regarded as representative for this model class. They find that the
distribution of formation times is bimodal as a result of the suppression of hot
gas accretion in low-mass halos due to the photoionization background; while
$2/3$ of today’s satellites form at redshifts ranging from $3<z<12$, $1/3$ of
all satellites form late at $z<3$ with most of them at $z<1.5$. Because
their formation is marked by the first time H i cooling can effectively
form the first stars, their stellar populations are necessarily younger than
their formation time of $z<1.5$ in direct conflict with observed ages of the
stellar populations in all of the Local Group dwarfs$^{18}$This assumes star
formation in primordial gas. However, other processes such as metal pollution
from adjacent galaxies might cause stars to form earlier at lower virial
temperatures of $T_{200}<10^{4}\,{\rm K}$.
(Dolphin et al., 2005; Holtzman et al., 2006; Orban et al., 2008). The model of
Macciò & Fontanot (2010) that successfully reproduced the satellite luminosity
function of the Milky Way assumed the linear theory filtering mass (Gnedin, 2000, see
solid blue line in Figure 7) which was shown to
significantly overproduce the characteristic halo mass scale below which baryons
cannot condense and form stars (Okamoto et al., 2008, see solid red line in
Figure 7). Adopting their less efficient nonlinear
filtering mass formalism, the number of satellite galaxies with intermediate
magnitudes $M_{V}\sim 10$ increases and creates a bump in the luminosity function
which is clearly inconsistent with the data (Macciò et al., 2010).
Inspection of nonlinear estimates for the filtering mass $M_{F}$ in Figure
7 shows that heating from blazars could prevent the condensation of
baryons in halos of masses $10^{10}M_{\sun}$ at redshifts $z\sim 2$ and likely up
to $10^{11}M_{\sun}$ at $z=0$. At mean density, these nonlinear estimates for
$M_{F}$ correspond to the original linear values of Gnedin (2000) which have
been shown to yield the observed dwarf satellite abundances and luminosity
functions (Somerville, 2002; Macciò et al., 2010). At the same time, the strongly
rising $M_{F}$ in the blazar heating models after $z\sim 2$ is able to suppress
the population of late-forming dwarfs (Macciò & Fontanot, 2010) that are in
conflict with measured star formation histories of dwarf galaxies. Thus,
blazar heating can potentially play a significant role in explaining not only
the observed abundances of dwarf galaxies but also their old star formation
histories.
Finally, we argue that the redshift evolution of the blazar heating rate should
manifest itself as stochasticity in the satellite luminosity function at fixed
host halo mass. Host halos show a distribution of formation times
(e.g., Wechsler et al., 2002). This distribution is inherited by the host’s
satellite dwarfs which on average form earlier than the host. As demonstrated in
Figure 1, blazar heating implies an entropy floor that
dramatically increases after $z\sim 2$. Hence, the distribution of formation
times of satellite dwarfs at fixed host halo mass results in a distribution of
pre-collapsed entropy of these dwarf halos. As a result, this leaves
early-forming dwarfs relatively unchanged but suppresses the baryon fraction or
even the formation of late-forming dwarfs. This results in different cooling
histories for different dwarfs, which, in turn, might modify the stellar content
that condenses out in these systems. Hence, blazar heating provides an
apparently substantial, physically motivated scatter in the satellite luminosity
function, or distribution of mass-to-light ratio at fixed host halo mass. This
physically motivated stochasticity has important implications for abundance
matching techniques that need to be taken into account and complicates their
use. We note that some of this stochasticity is already seen in mildly nonlinear
theory (which captures some aspects of the formation history) and manifests
itself as a scatter in entropy at large 1+$\delta$ in Figure 3.
3.2 The Void Phenomenon and the Faint-End Slope of the Galaxy Luminosity
and H i-Mass Function
As outlined in Section 1.2, the “void phenomenon” is closely
related to the substructure problem. Both show a strong discrepancy in the
abundance of dark matter (sub-)halos and paucity of luminous dwarf galaxies that
are thought to be hosted by these halos. Before we discuss in detail how blazar
heating impacts dwarf galaxies in voids, we review the current status of
the problem itself, whose very existence has recently been disputed.
Using a halo occupation distribution model approach, Tinker & Conroy (2009) claim
to have explained the problem as they find agreement between luminosity
functions, nearest neighbor statistics, and void probability function of faint
galaxies. However, a very high resolution simulation of the local volume of 8
Mpc around the Milky Way predicts a factor of 10 more dwarf halos than observed
dwarf galaxies in mini-voids with sizes ranging from 1 to 4.5 Mpc, hence
reinforcing the “void phenomenon” (Tikhonov & Klypin, 2009). While the
agreement between theory and observations is good for dwarfs with masses
$M_{200m}\gtrsim 10^{10}\,M_{\sun}$ (corresponding to maximum circular velocities
of $\upsilon_{c}\gtrsim 40~{}\mathrm{km~{}s}^{-1}$),$^{19}$We quote the values
corresponding to the lower bounds in their models that assumed a normalization
of the matter power spectrum of $\sigma_{8}=0.9$. This seems to be a
conservative choice when comparing to the critical mass threshold of
$M_{200m}>6\times 10^{9}\,M_{\sun}$ or equivalently $\upsilon_{c}>35~{}\mathrm{km~{}s}^{-1}$
which assumed $\sigma_{8}=0.75$. For consistency with the literature,
in this section we define the virial mass,
$M_{200m}$, as the mass of a sphere enclosing a mean density that is 200 times
the mean density of the universe. it fails below, suggesting that
Tinker & Conroy (2009) did not sample the relevant mass scales. Moreover, their
analysis only demonstrated the self-consistency of the halo occupation
distribution model and did not make a comparison between observations and the
predictions of $\Lambda$CDM.
The discrepancy of the luminosity function on dwarf scales is confirmed by other
recent studies that compare the circular velocity function of H i observations
with those obtained through dissipationless simulations (Zavala et al., 2009; Zwaan et al., 2010; Trujillo-Gomez et al., 2011). At $\upsilon_{c}<80~{}\mathrm{km~{}s}^{-1}$
(corresponding to $M_{200m}<10^{11}\,M_{\sun}$), the latest study finds a slight
deviation between theory and observations which amounts to a significant
overprediction of more than 10 times the number of observed systems at
$\upsilon_{c}=40~{}\mathrm{km~{}s}^{-1}$. While completeness of the observed samples could be
an explanation of some of the differences, it is unlikely that it can account
for all of the observed effects since the blind H i sample of the HIPASS survey
is thought to be complete down to $M_{\mathrm{H}\,\textsc{i}}<5.5\times 10^{7}\,M_{\sun}$ out
to a distance of 5 Mpc (Zwaan et al., 2010). Since gas-rich galaxies dominate at
the low mass end of the luminosity function, their sample should give an
accurate measurement of the abundance of dwarfs if these galaxies contain enough
neutral gas to be detected.
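The correspondence between the quoted virial masses and circular velocities can be reproduced with a rough top-hat estimate. This is our own illustrative sketch, not the calibration used in the cited works: it neglects halo concentration, so it somewhat underestimates the *maximum* circular velocities quoted in the text, but it recovers the $\upsilon_{c}\propto M^{1/3}$ scaling connecting the quoted mass-velocity pairs.

```python
import numpy as np

# Rough top-hat estimate of the circular velocity at the virial radius for
# the mean-density mass definition M_200m used in this section. Illustrative
# only; ignores concentration, so it underestimates v_max.
G = 4.30e-9           # gravitational constant in Mpc (km/s)^2 / Msun
RHO_CRIT0 = 2.775e11  # critical density today in h^2 Msun / Mpc^3

def v_circ(M200m, Om=0.3):
    """v = sqrt(G M / R) with M = (4 pi / 3) * 200 * Om * rho_crit0 * R^3."""
    rho = 200.0 * Om * RHO_CRIT0
    R = (3.0 * M200m / (4.0 * np.pi * rho))**(1.0 / 3.0)
    return np.sqrt(G * M200m / R)

# v scales as M^(1/3), consistent with the pairs of values quoted in the text
assert abs(v_circ(1e11) / v_circ(1e10) - 10**(1.0 / 3.0)) < 1e-9
```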
While blazar heating significantly increases the entropy, and hence the
associated filtering mass, at mean density, it does so more dramatically within
the voids. To address how this substantially increased heating manifests itself
in the number of void dwarfs, it is instructive to compare the formation
timescales of dwarf halos as a function of environment. Void dwarf galaxies are
negatively biased and form later than their field or cluster analogs (halos of a
given mass tend to be older in clusters and younger in voids for masses smaller
than the typical mass scale that is presently entering the nonlinear regime of
perturbation growth). This is because for galaxies forming on a large-scale
underdense mode, more time has to elapse before these galaxies acquire enough
overdensity to decouple from the Hubble expansion and subsequently collapse.
Thus, the median redshift of formation for a dwarf galaxy with halo mass
$2.4\times 10^{10}\,\mathrm{M}_{\sun}$ is $z_{\mathrm{form}}=2.1$ within cluster and 1.6
in voids (Hahn et al., 2007a). Moreover, the distribution of formation times of
void galaxies is more sharply confined around the median value than those for
clusters, which exhibit a long tail of formation redshifts, extending to
$z\sim 6$ (Hahn et al., 2007b). Our estimate of $M_{C}$ in underdense regions with
$1+\delta=0.5$ demonstrates that blazar heating prevents the condensation of
baryons in halos of masses of $10^{10}M_{\sun}$ at redshifts $z<2.4$ and $2.8\times 10^{11}M_{\sun}$ at $z=0$ (see Figure 7). In particular, halos
of $2\times 10^{10}M_{\sun}$ can be suppressed after $z=1.8$ which is earlier
than the median formation redshift of these galaxies. Depending on the exact
definition of voids, this suggests that more than half of the galaxies with
masses $M<2\times 10^{10}M_{\sun}$, corresponding to a maximum circular velocity
of $\upsilon_{c}<45~{}\mathrm{km\,s}^{-1}$, can be suppressed or severely affected by blazar
heating.
Such a preheated entropy floor not only suppresses dwarf formation at late
times ($z\lesssim 2$), but may also modify galaxy formation at the low-mass end.
Assuming that low-mass halos are embedded in a preheated medium with an entropy
floor of $10\,\mathrm{keV\,cm}^{2}$ at $z\lesssim 2$ simultaneously matches data at
the faint-end slope of the galaxy luminosity function as well as of the H i-mass function (Mo et al., 2005). This heuristic assumption almost exactly
coincides with the predictions of our blazar heating models (see
Figure 1). As a result of such a preheating, only a
fraction of the gas in a proto-galaxy region would be able to cool and be
accreted into the final galaxy halo by the present time. If the accreted gas
resides in the diffuse phase, it does not lose angular momentum to the dark
matter, thereby possibly continuing to form large galaxy discs in low-density
environments (Mo & Mao, 2002).
In summary, the entropy floor and filtering mass due to blazar heating are
dramatically increased in voids as a result of the constant volumetric heating
rate, which leads to an inverted temperature-density relation in low-density regions
(Paper II). In combination with the later formation epoch of dwarfs at these
low densities, this implies a very efficient mechanism for suppressing void
dwarf formation in collapsed dark matter halos. Hence, our model provides an
elegant physical solution to the void phenomenon described by
Peebles (2001).
3.3 Suppression of Dwarfs in Warm Dark Matter Cosmologies
Recent dissipationless $\Lambda$CDM simulations produce not only far too many
dark matter satellites but also show that the most massive subhalos in
simulations of the Milky Way may be too dense to host any of its observed bright
satellites with luminosities $L_{V}>10^{5}\mathrm{L}_{\sun}$, i.e., these massive
satellites in simulations attain their maximum circular velocity at too small
radii in comparison to the observed dwarf satellites
(Boylan-Kolchin et al., 2011b, a). These dark subhalos have
circular velocities at infall of $30-70\,\mathrm{km~{}s}^{-1}$ and infall masses of
$(0.2-4)\times 10^{10}\mathrm{M}_{\sun}$. In principle, this puzzle can be solved
(or partially solved) in the following ways: by increasing the stochasticity of
galaxy formation on these scales, by reducing the central (dark matter)
densities by means of very efficient and violent baryonic feedback processes
acting on timescales much faster than the free-fall time, by assuming a total
mass of the Milky Way at the lower end of the allowed uncertainty interval
(i.e., $\sim 8\times 10^{11}\,\mathrm{M}_{\odot}$) in combination with a shallower
subhalo density profile of the Einasto form as measured in simulations
(Vera-Ciro et al., 2012), or by allowing these subhalos to initially form with
lower concentrations as would be the case, for example, if the dark matter were
made of warm, rather than cold particles (Lovell et al., 2012).
In the limit of heavy WDM particles, e.g., the (sterile) neutrino with $m_{\nu}c^{2}\gtrsim 10\,{\rm keV}$ that is created in the early universe through mixing with
an active neutrino, structure formation proceeds almost indistinguishably from
CDM for all current observational probes (Seljak et al., 2006). For smaller
masses, however, the free streaming of neutrinos erases all fluctuations on
scales smaller than the free streaming length, which is roughly proportional to
their temperature and inversely proportional to their mass. Using the Ly$\alpha$ forest power spectrum measured by the Sloan Digital Sky Survey and
high-resolution spectroscopy observations in combination with CMB and galaxy
clustering constraints still allows for a neutrino with mass $m_{\nu}c^{2}>2.5~{}{\rm keV}$ (95% c.l.) that decoupled early while in thermal equilibrium
(Seljak et al., 2006).$^{20}$Since blazar heating dramatically changes the
thermal history of the IGM (Paper II) and may be responsible for the inverted
temperature-density relation at $z=2-3$ inferred by high-redshift Ly$\alpha$ studies
(Bolton et al., 2008; Viel et al., 2009; Puchwein et al., 2011), it is not clear whether these limits
on sterile neutrino properties are weakened in the presence of blazar
heating.
Dissipationless high-resolution simulations of the evolution of a Galaxy-sized
halo have shown that (sub-)halos form and are accreted later onto the main halo
in WDM scenarios compared to the standard CDM paradigm, due to the lack of power
on small scales in WDM (Macciò & Fontanot, 2010). In WDM scenarios with relatively
low values of the particle masses, $m_{\nu}c^{2}=(2-5)$ keV, there are almost no
halos with $z_{\mathrm{form}}\geq 11$, suggesting that the fraction of late-forming
dwarfs $z_{\mathrm{form}}\lesssim 1.5$ is increased over the CDM scenario. This late
formation epoch of dwarfs reinforces the star formation history problem of Local
Group dwarfs: these late-forming dwarfs contain young stellar populations, in
direct conflict with the old ages of the stellar populations ($\tau>10$ Gyr) in
all of the Local Group dwarfs (Dolphin et al., 2005; Holtzman et al., 2006; Orban et al., 2008).
However, these are precisely the objects that blazar heating most strongly
affects, as the strongly rising filtering mass (or entropy floor) after $z\sim 2$
is able to suppress the population of these late-forming dwarfs. Hence, while
in this scenario the free streaming of WDM erases power on scales smaller than
dwarfs, blazar heating reconciles the theoretically expected and the observed
star formation histories and alleviates standard objections to galaxy formation
in WDM cosmologies.
3.4 Impact on the formation of $L_{*}$ galaxies
In the previous sections, we argued that a blazar-heated entropy floor is able
to suppress late dwarf formation in voids and the Milky Way as well as modify
the thermodynamical profiles of galaxy groups and clusters. Hence, it is natural
to ask whether there would be any effect of blazar heating on $L_{*}$ galaxy
formation which represents the mass scale in between these two extremes. Blazar
heating is not powerful enough to raise the mean temperature at $\delta=0$ of
even the most extreme patches above $10^{5}\,{\rm K}$ (see Figure 9 in Paper II). The
classical criteria of galaxy formation are a short cooling time compared to the
dynamical time and to the age of the universe, $t_{\mathrm{cool}}\lesssim H^{-1}$
and $t_{\mathrm{cool}}\lesssim t_{\mathrm{dyn}}$
(Rees & Ostriker, 1977; Silk, 1977; White & Rees, 1978), which are easily fulfilled
even in the presence of blazar heating (see, e.g., Figure 1
in Rees & Ostriker, 1977). This implies that these galaxies can radiate away the
additional entropy that the gas attained prior to collapse due to blazar heating
within a free-fall time. It is, however, interesting to speculate whether the
blazar-heated high-entropy gas at low redshift $z<1$ has any impact on the late
accretion of gas into the hot reservoir of baryons from which gas cools and
fuels the late time star formation. Higher entropy gas should shock further out
in the halo than pre-cooled gas of lower entropy which is denser and can provide
a larger ram pressure. Blazar-heated gas has high entropy and is more
dilute; hence, it is more easily torqued (by dissipative processes or magnetic
fields) and can therefore change its angular momentum distribution. Thus, it is
implausible that blazar heating dramatically changes the ordinary mode of galaxy
formation, but we might anticipate blazar heating to starve the late-time
accretion and potentially slow down subsequent star formation. These ideas are
subject to verification by numerical simulations of galaxy formation.
4 Conclusions
TeV blazar heating results in a dramatic increase in the entropy of the IGM
following He ii reionization around $z\sim 3.5$. Since the IGM entropy evolution
is critical for the formation and structure of collapsed objects, blazar
heating has a significant impact upon both. We have identified two mass ranges
(or classes of objects) for which the TeV blazar-induced entropy floor should
have a substantial effect. Galaxy groups and clusters, which are forming
near the peak entropy injection rate ($z\sim 1$) and exhibit core entropies that
are comparable to that implied by blazar heating; and dwarf galaxies,
which are susceptible to the rapidly rising entropy floor generated by blazars.
Below, we describe the consequences for each in more detail.
Galaxy groups and clusters. Immediately after formation, groups at fixed
mass should have a continuous distribution of core entropy values, depending on
the formation redshift and the temporally variable heating mechanism. The fate
of these groups is determined by the ratio of the cooling time, $t_{\mathrm{cool}}$,
to the timescale between cluster/group mergers, $t_{\mathrm{merger}}$. If this ratio
is smaller than unity, the group can radiate the elevated core entropy away and
evolve into a CC which survives the successive hierarchical
growth. Alternatively, if $t_{\mathrm{cool}}>t_{\mathrm{merger}}$, merger shocks can
gravitationally reprocess the entropy cores and amplify them. Those groups can
then evolve into NCC systems. Hence, it is not necessary to produce all of the
observed central entropy in the IGM before collapse, but it is also possible to
achieve this through gravitational heating, provided there is a certain minimum
entropy delivered by some putative heating process corresponding to a minimum
$t_{\mathrm{cool}}$. An increasing entropy floor also implies an increasing cooling
time, hence the cluster-averaged $t_{\mathrm{cool}}/t_{\mathrm{merger}}$ increases. It
follows that systems that evolve into CC systems today are on average
early-forming, i.e., old systems. In contrast, NCCs are on average young
systems.
We argue that systems with a unimodal distribution in core entropy values after
group formation should naturally evolve into a bimodal distribution. The reason
for this is the existence of two attractor solutions of the group/cluster system, driven by
the cooling instability and gravitational reprocessing of (temporally
increasing) elevated entropy cores, resulting in the observed populations of CC
and NCC systems, respectively. Such an elevated entropy core level in groups
might explain a population of X-ray dim groups with low gas fractions. We show
that the core entropy values in CC clusters have a very weak correlation with
the mechanical energy and power of X-ray cavities inflated by AGNs. Apparently,
$PdV$ work done by the expanding cavities is an inefficient heating process that
does not generate much entropy. This strongly suggests that while AGN feedback
seems to be critical in stabilizing CC systems, it cannot transform CC into NCC
systems (at least on the buoyancy timescale).
Our blazar-induced entropy history seems to be well matched for the formation
times of today’s groups, but at first sight less so for clusters which form in
highly biased regions through mergers of groups. Those had to form even earlier
when the entropy for the average IGM was still rising with typical values
at $z\simeq 1$ of $K_{0}\simeq(25-50)\,{\rm keV\,cm}^{2}$. However, two effects
positively interfere to counteract the apparently smaller effect of blazar
heating in clusters. First, the mass accretion rate is larger for larger
systems and at higher redshifts. This suggests that the earlier forming group
progenitors of clusters can tolerate a smaller blazar-heated entropy floor after
collapse which will then be gravitationally processed at a faster rate and able
to counteract the smaller cooling timescales. Second, blazars also turn on
first in highly biased regions, and thus the IGM in the vicinity of clusters
should experience blazar heating earlier than low-density regions. We speculate
that this effect in combination with the faster gravitational reprocessing of an
elevated entropy core in denser regions would help in propagating the effect of
blazar heating to the scale of massive clusters.
Changing the thermodynamic structure of groups and potentially clusters has also
an impact on the SZ power spectrum which is sensitive to cosmological parameters
such as $\sigma_{8}$ and the thermal pressure profile. An increased core entropy
level implies a smoother pressure profile and hence decreases the power in the
SZ power spectrum on scales smaller than $3\arcmin$ which are probed by current
experiments such as SPT and ACT. Lowering the astrophysical signal can allow for
larger values of $\sigma_{8}$, potentially reducing some (minor) tension with the
data especially after allowing for a contribution due to patchy reionization.
Dwarf galaxy formation. We demonstrate that the redshift-dependent entropy
floor increases the characteristic halo mass, $M_{C}$, below which dwarf galaxies
cannot form by a factor of 6.5–15 for the cosmic mean ($\delta=0$) and by a
factor of 25–70 for voids ($\delta=-0.5$). The range indicates the uncertainty
in the number of blazars that contribute to the heating and the upper envelope
matches the observations of an inverted temperature-density relation in the Ly$\alpha$ forest. The increase of $M_{C}$ prevents the formation of late-forming dwarf
galaxies ($z\lesssim 2$) with masses ranging from $10^{10}$ to
$10^{11}\,\mathrm{M}_{\sun}$ for redshifts $z\sim 2$ to 0, respectively. This may
resolve the “missing satellites problem” in the Milky Way, i.e., the low
observed abundance of dwarf satellites compared to CDM simulations. It also
brings the observed early star formation histories of Local Group dwarfs into
agreement with galaxy formation models that predicted a population of
late-forming objects, in conflict with the data. At the same time, it provides
a plausible explanation for the “void phenomenon” which is the apparent
discrepancy of the number of dwarfs in low-density regions in CDM simulations
and the paucity of those in observations. Blazar heating suppresses the
formation of galaxies within existing dwarf halos of masses $<3\times 10^{10}\,\mathrm{M}_{\sun}$ with a maximum circular velocity $<60~{}\mathrm{km~{}s}^{-1}$
for $z\lesssim 2$. Additionally, the phenomenology of such a preheating mechanism
matches heuristic assumptions that were adopted to match the faint-end slope of
the galaxy luminosity function as well as of the H i-mass function, in
particular for low-density environments.
We conclude that the presented scenario of blazar heating holds the promise for
solving some of the most outstanding problems in high-energy gamma-ray
astrophysics, the IGM as probed by the high-redshift Ly$\alpha$ forest, the formation
of galaxies, and clusters of galaxies. At the same time, it provides an
astrophysical solution to these problems such as the “missing satellite
problem” or the “void phenomenon” which have been claimed to require new
physics beyond the concordance cosmological $\Lambda$CDM model.
Acknowledgements. We thank Tom Abel, Marco Ajello, Marcelo Alvarez, Arif Babul,
Roger Blandford, James Bolton, Mike Boylan-Kolchin, Luigi Costamante, Andrei
Gruzinov, Peter Goldreich, Martin Haehnelt, Andrey Kravtsov, Hojun Mo, Ue-li
Pen, Ewald Puchwein, Volker Springel, Chris Thompson, Matteo Viel, Marc Voit,
and Risa Wechsler for useful discussions. We are indebted to Peng Oh for his
encouragement and useful suggestions. We thank Steve Furlanetto for kindly
providing technical expertise. We also thank the referee for a thorough reading
of the manuscript and for his constructive comments. These computations were
performed on the Sunnyvale cluster at CITA. A.E.B. and P.C. are supported by
CITA. A.E.B. gratefully acknowledges the support of the Beatrice D. Tremaine
Fellowship. C.P. gratefully acknowledges financial support of the Klaus Tschira
Foundation and would furthermore like to thank KITP for their hospitality during
the galaxy cluster workshop. This research was supported in part by the
National Science Foundation under Grant No. NSF PHY05-51164.
Appendix A Computing the Filtering Mass Generally
Here we collect the relevant details for computing the filtering mass,
$M_{F}$, associated with a particular cosmological and thermodynamic
evolution of the IGM. Recall that the filtering wavenumber is related
to the Jeans wavenumber via
$$\frac{1}{k_{F}^{2}(t)}=\frac{1}{D_{+}(t)}\int_{0}^{t}dt^{\prime}\,a^{2}(t^{\prime})\,\frac{\ddot{D}_{+}(t^{\prime})+2H(t^{\prime})\dot{D}_{+}(t^{\prime})}{k_{J}^{2}(t^{\prime})}\int_{t^{\prime}}^{t}\frac{dt^{\prime\prime}}{a^{2}(t^{\prime\prime})}\,.$$
We can simplify Equation (13) by noting that the
linear growth function obeys the following equation
$$\ddot{D}_{+}(t)+2H(t)\dot{D}_{+}(t)=4\pi G\bar{\rho}D_{+}(t)\,,$$
(16)
in which the Hubble function is given by
$$\frac{H^{2}(a)}{H_{0}^{2}}=E^{2}(a)=a^{-3}\Omega_{m}+a^{-2}(1-\Omega_{m}-\Omega_{\Lambda})+\Omega_{\Lambda}\,.$$
(17)
Changing the integration variable in Equation (13) to the
scale factor $a$, we obtain
$$\frac{1}{k_{F}^{2}(a)}=\frac{1}{D_{+}(a)}\int_{0}^{a}da^{\prime}\,\frac{c_{s}^{2}(a^{\prime})}{H_{0}^{2}}\,\frac{D_{+}(a^{\prime})}{a^{\prime}E(a^{\prime})}\int_{a^{\prime}}^{a}\frac{da^{\prime\prime}}{a^{\prime\prime 3}E(a^{\prime\prime})}\,.$$
(18)
Following Carroll et al. (1992), the linear growth function can be
computed by
$$D_{+}(a)=\frac{5}{2}\,\Omega_{m}\,E(a)\int_{0}^{a}\frac{da^{\prime}}{a^{\prime 3}E^{3}(a^{\prime})}\,.$$
(19)
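The growth-factor integral above is straightforward to evaluate numerically. The sketch below (illustrative, not the authors' code; the cosmological parameters are placeholders) computes $D_{+}(a)$ from Equations (17) and (19) by quadrature; a useful sanity check is that it reduces to $D_{+}(a)=a$ in the Einstein-de Sitter limit ($\Omega_{m}=1$, $\Omega_{\Lambda}=0$).

```python
# Numerical sketch of Eqs. (17) and (19); parameter values are illustrative.
import numpy as np
from scipy.integrate import quad

def E(a, Om=0.27, OL=0.73):
    """Dimensionless Hubble function E(a) of Eq. (17)."""
    return np.sqrt(Om * a**-3 + (1.0 - Om - OL) * a**-2 + OL)

def D_plus(a, Om=0.27, OL=0.73):
    """Linear growth function of Eq. (19) (Carroll et al. 1992)."""
    integral, _ = quad(lambda ap: 1.0 / (ap * E(ap, Om, OL))**3, 0.0, a)
    return 2.5 * Om * E(a, Om, OL) * integral

# Einstein-de Sitter check: D_plus(a, Om=1.0, OL=0.0) returns a.
```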
For any given temperature evolution of the IGM (which enters via the sound
speed), we can compute a redshift evolution of the filtering scale from
Equation (18). It is convenient to define a filtering mass by analogy
with the Jeans mass:
$$M_{F}(a)\equiv\frac{4\pi}{3}\,\bar{\rho}(a)\left(\frac{2\pi a}{k_{F}(a)}\right)^{3}\,.$$
(20)
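With $D_{+}(a)$ and $E(a)$ in hand, Equation (18) is a nested double quadrature and Equation (20) follows directly. The sketch below is a toy setup (units with $H_{0}=1$, a constant sound speed, and an Einstein-de Sitter background by default), not the thermal histories actually used in the paper; for this toy case Equation (18) integrates analytically to $1/k_{F}^{2}=c_{s}^{2}a/(5H_{0}^{2})$, which provides a check of the quadrature.

```python
# Toy-unit sketch of Eqs. (18) and (20); H_0 = 1 and c_s = const are
# assumptions for the check, not the paper's actual thermal history.
import numpy as np
from scipy.integrate import quad

def E(a, Om=1.0, OL=0.0):
    # Hubble function of Eq. (17); defaults give Einstein-de Sitter.
    return np.sqrt(Om * a**-3 + (1.0 - Om - OL) * a**-2 + OL)

def D_plus(a, Om=1.0, OL=0.0):
    # Linear growth function of Eq. (19).
    I, _ = quad(lambda x: 1.0 / (x * E(x, Om, OL))**3, 0.0, a)
    return 2.5 * Om * E(a, Om, OL) * I

def inv_kF2(a, cs2=lambda ap: 1.0, Om=1.0, OL=0.0):
    # 1/k_F^2(a) from Eq. (18); cs2(a') stands for c_s^2(a')/H_0^2.
    def outer(ap):
        inner, _ = quad(lambda x: 1.0 / (x**3 * E(x, Om, OL)), ap, a)
        return cs2(ap) * D_plus(ap, Om, OL) / (ap * E(ap, Om, OL)) * inner
    I, _ = quad(outer, 1e-8, a)
    return I / D_plus(a, Om, OL)

def M_F(a, rho0=3.0 / (4.0 * np.pi), **kw):
    # Filtering mass, Eq. (20), with mean density rho_bar(a) = rho0 * a**-3.
    return (4.0 * np.pi / 3.0) * rho0 * a**-3 \
        * (2.0 * np.pi * a * np.sqrt(inv_kF2(a, **kw)))**3
```

For the default toy parameters, `inv_kF2(a)` reproduces $c_{s}^{2}a/5$ to quadrature accuracy.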
Finally, using the definition of the entropy in Equation (1), we can
express the filtering scale $\lambda_{F}=2\pi a/k_{F}$ in terms of the entropy,
$$\frac{1}{k_{F}^{2}(a)}=\frac{A_{0}}{D_{+}(a)}\int_{0}^{a}da^{\prime}\,K(a^{\prime})\,\frac{D_{+}(a^{\prime})}{a^{\prime 3}E(a^{\prime})}\int_{a^{\prime}}^{a}\frac{da^{\prime\prime}}{a^{\prime\prime 3}E(a^{\prime\prime})}\,,$$
(21)
$$A_{0}=\frac{5}{3}\left(\frac{3\,\Omega_{m}}{8\pi GH_{0}}\right)^{2/3}\,.$$
(22)
Appendix B Filtering mass in an Einstein-de Sitter universe
For sufficiently early times, all matter-dominated Friedmann-Lemaître model
universes asymptotically approach an Einstein-de Sitter universe ($z\gtrsim 2$ for our
$\Lambda$CDM universe), for which the cosmological constant $\Lambda=0$ and the
curvature $\Omega_{k}=0$. For such a universe, the growth function $D_{+}(a)\propto a$
which allows us to considerably simplify Equation (18), yielding
$$\frac{1}{k_{F}^{2}(a)}=\frac{3}{a}\int_{a_{\mathrm{min}}}^{a}da^{\prime}\,\frac{1}{k_{J}^{2}(a^{\prime})}\left[1-\left(\frac{a^{\prime}}{a}\right)^{1/2}\right]\,.$$
(23)
Note that we replaced the unphysical lower integration limit 0 by the recombination
scale factor, $a_{\mathrm{min}}$, as baryon perturbations can only start to
grow after recombination. Employing the relation in Equation (20), we can
rewrite this as
$$M_{F}^{2/3}(a)=\frac{3}{a}\int_{a_{\mathrm{min}}}^{a}da^{\prime}\,M_{J}^{2/3}(a^{\prime})\left[1-\left(\frac{a^{\prime}}{a}\right)^{1/2}\right]\,.$$
(24)
After recombination, the residual electron density still couples the gas
temperature to that of the CMB via Compton interactions. Hence, we expect the gas
temperature to scale as $T\propto a^{-1}$. At $z\simeq 150$, the Compton
interaction rate drops below the Hubble expansion rate such that the gas
experiences adiabatic expansion with $T\propto a^{-2}$. Hence for the time after
matter-radiation equality, where $\bar{\rho}=\bar{\rho}_{0}\,a^{-3}$, we can write
in general
$$\frac{T(a)}{T_{0}}=\left(\frac{a}{a_{0}}\right)^{-\alpha},\quad\mbox{and}\quad\frac{k_{J}(a)}{k_{J,0}}=\left(\frac{a}{a_{0}}\right)^{(\alpha-1)/2}.$$
(25)
This definition for the Jeans scale implies a Jeans mass at a fiducial scale
factor, $a_{0}$, of
$$M_{J,0}\equiv\frac{4\pi}{3}\,\bar{\rho}(a_{0})\left(\frac{2\pi a_{0}}{k_{J,0}}\right)^{3}.$$
(26)
If we substitute Equation (25) into Equation (24), we can obtain an
analytical solution for the filtering mass before reionization,
$$M_{F}(a)=M_{J,0}\left\{\frac{3}{a}\int_{a_{\mathrm{min}}}^{a}da^{\prime}\,\left(\frac{a^{\prime}}{a_{0}}\right)^{1-\alpha}\left[1-\left(\frac{a^{\prime}}{a}\right)^{1/2}\right]\right\}^{3/2}.$$
(27)
Evaluating this integral in the two regimes before and after the freezeout of
Compton interactions of the gas with the CMB photons yields the following analytic solutions,
$$M_{F}(a)=M_{J,0}\left\{\begin{aligned}&\left[\frac{6a_{0}}{a}\left(\sqrt{\frac{a_{\mathrm{min}}}{a}}-1+\frac{1}{2}\,\ln\frac{a}{a_{\mathrm{min}}}\right)\right]^{3/2}&&{\rm for}~{}150<z<1100\,,\\&\left[1-3\,\frac{a_{\mathrm{min}}}{a}+2\,\left(\frac{a_{\mathrm{min}}}{a}\right)^{3/2}\right]^{3/2}&&{\rm for}~{}z_{\mathrm{reion}}\leq z\leq 150\,.\end{aligned}\right.$$
(28)
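As a quadrature check of the closed forms above (scale-factor values below are illustrative): when the Jeans mass is constant in time, the kernel integral in Equation (24) reduces to the polynomial $1-3(a_{\mathrm{min}}/a)+2(a_{\mathrm{min}}/a)^{3/2}$ appearing in Equation (28).

```python
# Quadrature check of the Eq. (24) kernel for a time-independent Jeans mass;
# scale-factor values are illustrative placeholders.
from scipy.integrate import quad

def kernel(a, a_min):
    """(3/a) * integral_{a_min}^{a} [1 - (a'/a)^(1/2)] da'."""
    I, _ = quad(lambda ap: 1.0 - (ap / a)**0.5, a_min, a)
    return 3.0 * I / a

a, a_min = 1.0, 0.02  # hypothetical values; a_min ~ recombination scale factor
x = a_min / a
closed_form = 1.0 - 3.0 * x + 2.0 * x**1.5
# kernel(a, a_min) agrees with closed_form to quadrature accuracy.
```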
References
Abdo et al. (2010)
Abdo, A. A., et al. 2010, \prl, 104, 101101
Babul et al. (2002)
Babul, A., Balogh, M. L., Lewis, G. F., & Poole, G. B. 2002, \mnras,
330, 329
Balogh et al. (1999)
Balogh, M. L., Babul, A., & Patton, D. R. 1999, \mnras, 307, 463
Barkana & Loeb (1999)
Barkana, R., & Loeb, A. 1999, \apj, 523, 54
Battaglia et al. (2011)
Battaglia, N., Bond, J. R., Pfrommer, C., & Sievers, J. L. 2011,
arXiv:1109.3711
Battaglia et al. (2010)
Battaglia, N., Bond, J. R., Pfrommer, C., Sievers, J. L., & Sijacki,
D. 2010, \apj, 725, 91
Benson & Madau (2003)
Benson, A. J., & Madau, P. 2003, \mnras, 344, 835
Bialek et al. (2001)
Bialek, J. J., Evrard, A. E., & Mohr, J. J. 2001, \apj, 555, 597
Bîrzan et al. (2004)
Bîrzan, L., Rafferty, D. A., McNamara, B. R., Wise, M. W., &
Nulsen, P. E. J. 2004, \apj, 607, 800
Bolton et al. (2008)
Bolton, J. S., Viel, M., Kim, T., Haehnelt, M. G., & Carswell, R. F.
2008, \mnras, 386, 1131
Booth & Schaye (2009)
Booth, C. M., & Schaye, J. 2009, \mnras, 398, 53
Borgani et al. (2005)
Borgani, S., Finoguenov, A., Kay, S. T., Ponman, T. J., Springel, V.,
Tozzi, P., & Voit, G. M. 2005, \mnras, 361, 233
Borgani et al. (2001)
Borgani, S., Governato, F., Wadsley, J., Menci, N., Tozzi, P.,
Lake, G., Quinn, T., & Stadel, J. 2001, \apjl, 559, L71
Borgani & Kravtsov (2009)
Borgani, S., & Kravtsov, A. 2009, arXiv:0906.4370
Borgani & Viel (2009)
Borgani, S., & Viel, M. 2009, \mnras, 392, L26
Boylan-Kolchin et al. (2011a)
Boylan-Kolchin, M., Bullock, J. S., & Kaplinghat, M. 2011a,
arXiv:1111.2048
Boylan-Kolchin et al. (2011b)
—. 2011b, \mnras, 415, L40
Bret (2009)
Bret, A. 2009, \apj, 699, 990
Bret et al. (2004)
Bret, A., Firpo, M., & Deutsch, C. 2004, \pre, 70, 046401
Bret et al. (2005)
—. 2005, Physical Review Letters, 94, 115002
Bret et al. (2010)
Bret, A., Gremillet, L., & Dieckmann, M. E. 2010, Physics of Plasmas,
17, 120501
Broderick et al. (2012)
Broderick, A. E., Chang, P., & Pfrommer, C. 2012, \apj in print,
arXiv:1106.5494
Bryan & Norman (1998)
Bryan, G. L., & Norman, M. L. 1998, \apj, 495, 80
Bryan & Voit (2001)
Bryan, G. L., & Voit, G. M. 2001, \apj, 556, 590
Bullock et al. (2000)
Bullock, J. S., Kravtsov, A. V., & Weinberg, D. H. 2000, \apj, 539, 517
Bullock et al. (2001)
—. 2001, \apj, 548, 33
Busha et al. (2010)
Busha, M. T., Alvarez, M. A., Wechsler, R. H., Abel, T., & Strigari,
L. E. 2010, \apj, 710, 408
Carroll et al. (1992)
Carroll, S. M., Press, W. H., & Turner, E. L. 1992, \araa, 30, 499
Cavadini et al. (2011)
Cavadini, M., Salvaterra, R., & Haardt, F. 2011, arXiv:1105.4613
Cavagnolo et al. (2008)
Cavagnolo, K. W., Donahue, M., Voit, G. M., & Sun, M. 2008, \apjl,
683, L107
Cavagnolo et al. (2009)
—. 2009, \apjs, 182, 12
Chang et al. (2012)
Chang, P., Broderick, A. E., & Pfrommer, C. 2012, \apj in print,
arXiv:1106.5504
Chiu et al. (2001)
Chiu, W. A., Gnedin, N. Y., & Ostriker, J. P. 2001, \apj, 563, 21
Churazov et al. (2001)
Churazov, E., Brüggen, M., Kaiser, C. R., Böhringer, H., &
Forman, W. 2001, \apj, 554, 261
Croft et al. (2001)
Croft, R. A. C., Di Matteo, T., Davé, R., Hernquist, L., Katz,
N., Fardal, M. A., & Weinberg, D. H. 2001, \apj, 557, 67
Dai et al. (2010)
Dai, X., Bregman, J. N., Kochanek, C. S., & Rasia, E. 2010, \apj, 719,
119
Dalal & Kochanek (2002)
Dalal, N., & Kochanek, C. S. 2002, \apj, 572, 25
Dalcanton & Hogan (2001)
Dalcanton, J. J., & Hogan, C. J. 2001, \apj, 561, 35
Dekel & Woo (2003)
Dekel, A., & Woo, J. 2003, \mnras, 344, 1131
Dermer et al. (2011)
Dermer, C. D., Cavadini, M., Razzaque, S., Finke, J. D., Chiang, J.,
& Lott, B. 2011, \apjl, 733, L21
Dijkstra et al. (2004)
Dijkstra, M., Haiman, Z., Rees, M. J., & Weinberg, D. H. 2004, \apj,
601, 666
Dolag et al. (2011)
Dolag, K., Kachelriess, M., Ostapchenko, S., & Tomàs, R. 2011,
\apjl, 727, L4
Dolphin et al. (2005)
Dolphin, A. E., Weisz, D. R., Skillman, E. D., & Holtzman, J. A. 2005,
astro-ph/050643
Donahue et al. (2011)
Donahue, M., de Messières, G. E., O’Connell, R. W., Voit, G. M.,
Hoffer, A., McNamara, B. R., & Nulsen, P. E. J. 2011, \apj, 732, 40
Dubois et al. (2010)
Dubois, Y., Devriendt, J., Slyz, A., & Teyssier, R. 2010, \mnras, 409,
985
Dunkley et al. (2011)
Dunkley, J., et al. 2011, \apj, 739, 52
Dursi & Pfrommer (2008)
Dursi, L. J., & Pfrommer, C. 2008, \apj, 677, 993
Efstathiou (1992)
Efstathiou, G. 1992, \mnras, 256, 43P
Efstathiou & Migliaccio (2011)
Efstathiou, G., & Migliaccio, M. 2011, arXiv:1106.3208
Eisenstein & Hu (1998)
Eisenstein, D. J., & Hu, W. 1998, \apj, 496, 605
Enßlin et al. (2011)
Enßlin, T., Pfrommer, C., Miniati, F., & Subramanian, K. 2011,
\aap, 527, 99
Enßlin & Vogt (2006)
Enßlin, T. A., & Vogt, C. 2006, \aap, 453, 447
Evrard & Henry (1991)
Evrard, A. E., & Henry, J. P. 1991, \apj, 383, 95
Evrard et al. (1996)
Evrard, A. E., Metzler, C. A., & Navarro, J. F. 1996, \apj, 469, 494
Fang & Haiman (2008)
Fang, W., & Haiman, Z. 2008, \apj, 680, 200
Farrar & Peebles (2004)
Farrar, G. R., & Peebles, P. J. E. 2004, \apj, 604, 1
Fenner et al. (2006)
Fenner, Y., Gibson, B. K., Gallino, R., & Lugaro, M. 2006, \apj, 646,
184
Font et al. (2011)
Font, A. S., et al. 2011, arXiv:1103.0024
Fowler et al. (2010)
Fowler, J. W., et al. 2010, \apj, 722, 1148
Frenk et al. (1999)
Frenk, C. S., et al. 1999, \apj, 525, 554
Gnedin (2000)
Gnedin, N. Y. 2000, \apj, 542, 535
Gnedin & Hui (1998)
Gnedin, N. Y., & Hui, L. 1998, \mnras, 296, 44
Gnedin et al. (2009)
Gnedin, N. Y., Tassis, K., & Kravtsov, A. V. 2009, \apj, 697, 55
Gottlöber et al. (2001)
Gottlöber, S., Klypin, A., & Kravtsov, A. V. 2001, \apj, 546, 223
Guo & Oh (2008)
Guo, F., & Oh, S. P. 2008, \mnras, 384, 251
Guo & Oh (2009)
—. 2009, \mnras, 400, 1992
Guo et al. (2010)
Guo, Q., White, S., Li, C., & Boylan-Kolchin, M. 2010, \mnras, 404,
1111
Hahn et al. (2007a)
Hahn, O., Carollo, C. M., Porciani, C., & Dekel, A.
2007a, \mnras, 381, 41
Hahn et al. (2007b)
Hahn, O., Porciani, C., Carollo, C. M., & Dekel, A.
2007b, \mnras, 375, 489
Haiman et al. (2000)
Haiman, Z., Abel, T., & Rees, M. J. 2000, \apj, 534, 11
Haiman et al. (1997)
Haiman, Z., Rees, M. J., & Loeb, A. 1997, \apj, 476, 458
Hicks et al. (2008)
Hicks, A. K., et al. 2008, \apj, 680, 1022
Hinton & Hofmann (2009)
Hinton, J. A., & Hofmann, W. 2009, \araa, 47, 523
Hoeft et al. (2006)
Hoeft, M., Yepes, G., Gottlöber, S., & Springel, V. 2006, \mnras,
371, 401
Holtzman et al. (2006)
Holtzman, J. A., Afonso, C., & Dolphin, A. 2006, \apjs, 166, 534
Hopkins et al. (2007)
Hopkins, P. F., Richards, G. T., & Hernquist, L. 2007, \apj, 654, 731
Hui & Gnedin (1997)
Hui, L., & Gnedin, N. Y. 1997, \mnras, 292, 27
Hui & Haiman (2003)
Hui, L., & Haiman, Z. 2003, \apj, 596, 9
Iliev et al. (2008)
Iliev, I. T., Mellema, G., Pen, U., Bond, J. R., & Shapiro, P. R.
2008, \mnras, 384, 863
Iliev et al. (2007)
Iliev, I. T., Pen, U., Bond, J. R., Mellema, G., & Shapiro, P. R.
2007, \apj, 660, 933
Inoue & Totani (2009)
Inoue, Y., & Totani, T. 2009, \apj, 702, 523
Jubelgas et al. (2008)
Jubelgas, M., Springel, V., Enßlin, T., & Pfrommer, C. 2008, \aap,
481, 33
Kaiser (1986)
Kaiser, N. 1986, \mnras, 222, 323
Kaiser (1991)
—. 1991, \apj, 383, 104
Kauffmann et al. (1993)
Kauffmann, G., White, S. D. M., & Guiderdoni, B. 1993, \mnras, 264, 201
Kaufmann et al. (2007)
Kaufmann, T., Wheeler, C., & Bullock, J. S. 2007, \mnras, 382, 1187
Keisler et al. (2011)
Keisler, R., et al. 2011, \apj, 743, 28
Kitayama & Ikeuchi (2000)
Kitayama, T., & Ikeuchi, S. 2000, \apj, 529, 615
Kneiske & Mannheim (2008)
Kneiske, T. M., & Mannheim, K. 2008, \aap, 479, 41
Komatsu et al. (2011)
Komatsu, E., et al. 2011, \apjs, 192, 18
Koposov et al. (2009)
Koposov, S. E., Yoo, J., Rix, H.-W., Weinberg, D. H., Macciò,
A. V., & Escudé, J. M. 2009, \apj, 696, 2179
Kravtsov (2010)
Kravtsov, A. 2010, Advances in Astronomy, 2010
Kravtsov et al. (2004)
Kravtsov, A. V., Gnedin, O. Y., & Klypin, A. A. 2004, \apj, 609, 482
Kulsrud & Pearce (1969)
Kulsrud, R., & Pearce, W. P. 1969, \apj, 156, 445
Kunz et al. (2011)
Kunz, M. W., Schekochihin, A. A., Cowley, S. C., Binney, J. J., &
Sanders, J. S. 2011, \mnras, 410, 2446
Lemoine & Pelletier (2010)
Lemoine, M., & Pelletier, G. 2010, \mnras, 402, 321
Lidz et al. (2010)
Lidz, A., Faucher-Giguère, C., Dall’Aglio, A., McQuinn, M.,
Fechner, C., Zaldarriaga, M., Hernquist, L., & Dutta, S. 2010, \apj,
718, 199
Lovell et al. (2012)
Lovell, M. R., et al. 2012, \mnras, 420, 2318
Lueker et al. (2010)
Lueker, M., et al. 2010, \apj, 719, 1045
Lyutikov (2006)
Lyutikov, M. 2006, \mnras, 373, 73
Mac Low & Ferrara (1999)
Mac Low, M.-M., & Ferrara, A. 1999, \apj, 513, 142
Macciò & Fontanot (2010)
Macciò, A. V., & Fontanot, F. 2010, \mnras, 404, L16
Macciò et al. (2010)
Macciò, A. V., Kang, X., Fontanot, F., Somerville, R. S.,
Koposov, S., & Monaco, P. 2010, \mnras, 402, 1995
Mahdavi et al. (2000)
Mahdavi, A., Böhringer, H., Geller, M. J., & Ramella, M. 2000,
\apj, 534, 114
Mao et al. (2004)
Mao, S., Jing, Y., Ostriker, J. P., & Weller, J. 2004, \apjl, 604, L5
Markevitch (1998)
Markevitch, M. 1998, \apj, 504, 27
Marriage et al. (2011)
Marriage, T. A., et al. 2011, \apj, 737, 61
Mashchenko et al. (2008)
Mashchenko, S., Wadsley, J., & Couchman, H. M. P. 2008, Science, 319,
174
McCarthy et al. (2008)
McCarthy, I. G., Babul, A., Bower, R. G., & Balogh, M. L. 2008,
\mnras, 386, 1309
McCarthy et al. (2010)
McCarthy, I. G., et al. 2010, \mnras, 406, 822
McCourt et al. (2012)
McCourt, M., Sharma, P., Quataert, E., & Parrish, I. J. 2012, \mnras,
419, 3319
McNamara & Nulsen (2007)
McNamara, B. R., & Nulsen, P. E. J. 2007, \araa, 45, 117
Miniati et al. (2000)
Miniati, F., Ryu, D., Kang, H., Jones, T. W., Cen, R., & Ostriker,
J. P. 2000, \apj, 542, 608
Mo & Mao (2002)
Mo, H. J., & Mao, S. 2002, \mnras, 333, 768
Mo et al. (2005)
Mo, H. J., Yang, X., van den Bosch, F. C., & Katz, N. 2005, \mnras,
363, 1155
Muñoz et al. (2009)
Muñoz, J. A., Madau, P., Loeb, A., & Diemand, J. 2009, \mnras,
400, 1593
Narumoto & Totani (2006)
Narumoto, T., & Totani, T. 2006, \apj, 643, 81
Neronov & Vovk (2010)
Neronov, A., & Vovk, I. 2010, Science, 328, 73
Nickerson et al. (2011)
Nickerson, S., Stinson, G., Couchman, H. M. P., Bailin, J., &
Wadsley, J. 2011, \mnras, 415, 257
Nusser et al. (2005)
Nusser, A., Gubser, S. S., & Peebles, P. J. 2005, \prd, 71, 083505
Oh & Benson (2003)
Oh, S. P., & Benson, A. J. 2003, \mnras, 342, 664
Okamoto et al. (2008)
Okamoto, T., Gao, L., & Theuns, T. 2008, \mnras, 390, 920
Orban et al. (2008)
Orban, C., Gnedin, O. Y., Weisz, D. R., Skillman, E. D., Dolphin,
A. E., & Holtzman, J. A. 2008, \apj, 686, 1030
Parrish et al. (2010)
Parrish, I. J., Quataert, E., & Sharma, P. 2010, \apjl, 712, L194
Paul et al. (2011)
Paul, S., Iapichino, L., Miniati, F., Bagchi, J., & Mannheim, K.
2011, \apj, 726, 17
Peebles (2001)
Peebles, P. J. E. 2001, \apj, 557, 495
Pen (1999)
Pen, U. 1999, \apjl, 510, L1
Pfrommer & Dursi (2010)
Pfrommer, C., & Dursi, L. J. 2010, Nature Physics, 6, 520
Pfrommer et al. (2006)
Pfrommer, C., Springel, V., Enßlin, T. A., & Jubelgas, M. 2006,
\mnras, 367, 113
Ponman et al. (1999)
Ponman, T. J., Cannon, D. B., & Navarro, J. F. 1999, \nat, 397, 135
Poole et al. (2008)
Poole, G. B., Babul, A., McCarthy, I. G., Sanderson, A. J. R., &
Fardal, M. A. 2008, \mnras, 391, 1163
Pratt et al. (2010)
Pratt, G. W., et al. 2010, \aap, 511, 85
Puchwein et al. (2011)
Puchwein, E., Pfrommer, C., Springel, V., Broderick, A. E., & Chang,
P. 2011, arXiv:1107.3837
Puchwein et al. (2008)
Puchwein, E., Sijacki, D., & Springel, V. 2008, \apjl, 687, L53
Quinn et al. (1996)
Quinn, T., Katz, N., & Efstathiou, G. 1996, \mnras, 278, L49
Rafferty et al. (2006)
Rafferty, D. A., McNamara, B. R., Nulsen, P. E. J., & Wise, M. W.
2006, \apj, 652, 216
Rees & Ostriker (1977)
Rees, M. J., & Ostriker, J. P. 1977, \mnras, 179, 541
Robertson & Kravtsov (2008)
Robertson, B. E., & Kravtsov, A. V. 2008, \apj, 680, 1083
Ruszkowski & Oh (2010)
Ruszkowski, M., & Oh, S. P. 2010, \apj, 713, 1332
Ryu et al. (2003)
Ryu, D., Kang, H., Hallman, E., & Jones, T. W. 2003, \apj, 593, 599
Sanderson et al. (2009)
Sanderson, A. J. R., O’Sullivan, E., & Ponman, T. J. 2009, \mnras, 395,
764
Scannapieco et al. (2001)
Scannapieco, E., Thacker, R. J., & Davis, M. 2001, \apj, 557, 605
Seljak et al. (2006)
Seljak, U., Makarov, A., McDonald, P., & Trac, H. 2006, Physical
Review Letters, 97, 191303
Shapiro et al. (2004)
Shapiro, P. R., Iliev, I. T., & Raga, A. C. 2004, \mnras, 348, 753
Shirokoff et al. (2011)
Shirokoff, E., et al. 2011, \apj, 736, 61
Sigward et al. (2005)
Sigward, F., Ferrara, A., & Scannapieco, E. 2005, \mnras, 358, 755
Sijacki et al. (2008)
Sijacki, D., Pfrommer, C., Springel, V., & Enßlin, T. A. 2008,
\mnras, 387, 1403
Sijacki & Springel (2006)
Sijacki, D., & Springel, V. 2006, \mnras, 366, 397
Sijacki et al. (2007)
Sijacki, D., Springel, V., Di Matteo, T., & Hernquist, L. 2007,
\mnras, 380, 877
Silk (1977)
Silk, J. 1977, \apj, 211, 638
Somerville (2002)
Somerville, R. S. 2002, \apjl, 572, L23
Spergel & Steinhardt (2000)
Spergel, D. N., & Steinhardt, P. J. 2000, Physical Review Letters, 84,
3760
Stanek et al. (2010)
Stanek, R., Rasia, E., Evrard, A. E., Pearce, F., & Gazzola, L.
2010, \apj, 715, 1508
Sun et al. (2009)
Sun, M., Voit, G. M., Donahue, M., Jones, C., Forman, W., &
Vikhlinin, A. 2009, \apj, 693, 1142
Sunyaev & Zel’dovich (1972)
Sunyaev, R. A., & Zel’dovich, I. B. 1972, Comments Astrophys. Space Phys.,
4, 173
Sunyaev & Zeldovich (1980)
Sunyaev, R. A., & Zeldovich, I. B. 1980, \araa, 18, 537
Sutherland & Dopita (1993)
Sutherland, R. S., & Dopita, M. A. 1993, \apjs, 88, 253
Takahashi et al. (2012)
Takahashi, K., Mori, M., Ichiki, K., & Inoue, S. 2012, \apjl, 744, L7
Tassis et al. (2008)
Tassis, K., Kravtsov, A. V., & Gnedin, N. Y. 2008, \apj, 672, 888
Tavecchio et al. (2011)
Tavecchio, F., Ghisellini, G., Bonnoli, G., & Foschini, L. 2011,
\mnras, 414, 3566
Tavecchio et al. (2010)
Tavecchio, F., Ghisellini, G., Foschini, L., Bonnoli, G., Ghirlanda,
G., & Coppi, P. 2010, \mnras, 406, L70
Taylor et al. (2011)
Taylor, A. M., Vovk, I., & Neronov, A. 2011, \aap, 529, A144
Teyssier et al. (2011)
Teyssier, R., Moore, B., Martizzi, D., Dubois, Y., & Mayer, L. 2011,
\mnras, 618
Thoul & Weinberg (1996)
Thoul, A. A., & Weinberg, D. H. 1996, \apj, 465, 608
Tikhonov & Klypin (2009)
Tikhonov, A. V., & Klypin, A. 2009, \mnras, 395, 1915
Tinker & Conroy (2009)
Tinker, J. L., & Conroy, C. 2009, \apj, 691, 633
Tozzi & Norman (2001)
Tozzi, P., & Norman, C. 2001, \apj, 546, 63
Trac et al. (2011)
Trac, H., Bode, P., & Ostriker, J. P. 2011, \apj, 727, 94
Trujillo-Gomez et al. (2011)
Trujillo-Gomez, S., Klypin, A., Primack, J., & Romanowsky, A. J. 2011,
\apj, 742, 16
Uhlig et al. (2012)
Uhlig, M., Pfrommer, C., Sharma, M., Nath, B., Enßlin, T. A., &
Springel, V. 2012, \mnras, subm.
Vanderlinde et al. (2010)
Vanderlinde, K., et al. 2010, \apj, 722, 1180
Venters (2010)
Venters, T. M. 2010, \apj, 710, 1530
Vera-Ciro et al. (2012)
Vera-Ciro, C. A., Helmi, A., Starkenburg, E., & Breddels, M. A. 2012,
arXiv:1202.6061
Viel et al. (2009)
Viel, M., Bolton, J. S., & Haehnelt, M. G. 2009, \mnras, 399, L39
Vikhlinin et al. (2007)
Vikhlinin, A., Burenin, R., Forman, W. R., Jones, C., Hornstrup, A.,
Murray, S. S., & Quintana, H. 2007, in Heating versus Cooling in
Galaxies and Clusters of Galaxies, ed. H. Böhringer, G. W. Pratt,
A. Finoguenov, & P. Schuecker , 48
Vikhlinin et al. (2006)
Vikhlinin, A., Kravtsov, A., Forman, W., Jones, C., Markevitch, M.,
Murray, S. S., & Van Speybroeck, L. 2006, \apj, 640, 691
Voit (2005)
Voit, G. M. 2005, Reviews of Modern Physics, 77, 207
Voit et al. (2003)
Voit, G. M., Balogh, M. L., Bower, R. G., Lacey, C. G., & Bryan,
G. L. 2003, \apj, 593, 272
Voit & Bryan (2001a)
Voit, G. M., & Bryan, G. L. 2001a, \apjl, 551, L139
Voit & Bryan (2001b)
—. 2001b, \nat, 414, 425
Voit et al. (2002)
Voit, G. M., Bryan, G. L., Balogh, M. L., & Bower, R. G. 2002, \apj,
576, 601
Voit et al. (2005)
Voit, G. M., Kay, S. T., & Bryan, G. L. 2005, \mnras, 364, 909
Vovk et al. (2012)
Vovk, I., Taylor, A. M., Semikoz, D., & Neronov, A. 2012, \apjl, 747,
L14
Wadepuhl & Springel (2011)
Wadepuhl, M., & Springel, V. 2011, \mnras, 410, 1975
Wechsler et al. (2002)
Wechsler, R. H., Bullock, J. S., Primack, J. R., Kravtsov, A. V., &
Dekel, A. 2002, \apj, 568, 52
White & Rees (1978)
White, S. D. M., & Rees, M. J. 1978, \mnras, 183, 341
Zavala et al. (2009)
Zavala, J., Jing, Y. P., Faltenbacher, A., Yepes, G., Hoffman, Y.,
Gottlöber, S., & Catinella, B. 2009, \apj, 700, 1779
Zentner & Bullock (2003)
Zentner, A. R., & Bullock, J. S. 2003, \apj, 598, 49
Zhao et al. (2009)
Zhao, D. H., Jing, Y. P., Mo, H. J., & Börner, G. 2009, \apj, 707,
354
Zwaan et al. (2010)
Zwaan, M. A., Meyer, M. J., & Staveley-Smith, L. 2010, \mnras, 403, 1969
Hadron-quark Pasta Phase in Massive Neutron Stars
Min Ju
School of Physics, Nankai University, Tianjin 300071, China
Jinniu Hu
School of Physics, Nankai University, Tianjin 300071, China; [email protected]
Hong Shen
School of Physics, Nankai University, Tianjin 300071, China; [email protected]
Abstract
The structured hadron-quark mixed phase, known as the pasta phase,
is expected to appear in the core of massive neutron stars.
Motivated by the recent advances in astrophysical observations,
we explore the possibility of the appearance of quarks inside neutron stars
and check its compatibility with current constraints.
We investigate the properties of the hadron-quark pasta phases and
their influences on the equation of state (EOS) for neutron stars.
In this work, we extend the energy minimization (EM) method to describe the
hadron-quark pasta phase, where the surface and Coulomb contributions are
included in the minimization procedure. By allowing different electron densities
in the hadronic and quark matter phases, the total electron chemical potential,
which includes the electric potential, remains constant, and local $\beta$ equilibrium is
achieved inside the Wigner–Seitz cell.
The mixed phase described by the EM method shows features intermediate between
the Gibbs and Maxwell constructions, which is helpful for understanding the
transition from the Gibbs construction (GC) to the Maxwell construction (MC) with
increasing surface tension.
We employ the relativistic mean-field model to describe the hadronic matter,
while the quark matter is described by the MIT bag model with vector interactions.
It is found that the vector interactions among quarks can significantly
stiffen the EOS at high densities and help enhance the maximum mass of neutron stars.
Other parameters like the bag constant can also affect
the deconfinement phase transition in neutron stars.
Our results show that hadron-quark pasta phases may appear in the core
of massive neutron stars that can be compatible with current observational
constraints.
Neutron stars — Nuclear astrophysics — Neutron star cores — Gravitational waves
1 Introduction
The appearance of deconfined quark matter, expected in the core of
massive neutron stars, has received increasing attention recently because
of its relevance to astrophysical observations (Lattimer & Prakash, 2016; Baym et al., 2018; Annala et al., 2020).
In the last decade, several breakthrough discoveries in astronomy provided
valuable information and constraints on the properties of neutron stars.
The precise mass measurements of
PSR J1614-2230 ($1.908\pm 0.016M_{\odot}$; Demorest et al. 2010;
Fonseca et al. 2016;
Arzoumanian et al. 2018),
PSR J0348+0432 ($2.01\pm 0.04M_{\odot}$; Antoniadis et al. 2013), and
PSR J0740+6620 ($2.08\pm 0.07M_{\odot}$; Cromartie et al. 2020;
Fonseca et al. 2020)
constrain the maximum neutron-star mass $M_{\rm{max}}$ to be larger than
about $2M_{\odot}$, which poses a challenge to our understanding of the
equation of state (EOS) of superdense matter.
The recent observations by the Neutron Star Interior Composition Explorer (NICER)
provided a simultaneous measurement of the mass and radius for
PSR J0030+0451, which was reported to have a mass of
$1.44_{-0.14}^{+0.15}M_{\odot}$ with a radius of $13.02_{-1.06}^{+1.24}$ km (Miller et al., 2019)
and a mass of $1.34_{-0.16}^{+0.15}M_{\odot}$ with a radius
of $12.71_{-1.19}^{+1.14}$ km (Riley et al., 2019) by two independent groups.
The new measurements by NICER for the most massive known neutron star,
PSR J0740+6620 ($2.08\pm 0.07M_{\odot}$),
showed that it has a radius of $13.7_{-1.5}^{+2.6}$ km by Miller et al. (2021)
and $12.39_{-0.98}^{+1.30}$ km by Riley et al. (2021).
In particular, the discovery of gravitational waves from a binary neutron-star merger
event GW170817 has opened a new era of multimessenger astronomy (Abbott et al., 2017).
Based on the observations of GW170817, the tidal deformability of a canonical $1.4M_{\odot}$
neutron star was estimated to be $70<\Lambda_{1.4}<580$ and the corresponding
radius was inferred to be $10.5<R_{1.4}<13.3$ km (Abbott et al., 2018).
More recently, the gravitational-wave events, GW190425 (Abbott et al., 2020a) and
GW190814 (Abbott et al., 2020b), were reported by LIGO and Virgo Collaborations.
The total mass of the GW190425 system is as large as $3.4^{+0.3}_{-0.1}M_{\odot}$,
which is even more massive than any neutron-star binary observed so far;
hence its gravitational-wave analyses may offer valuable information for the EOS
at high densities and possible phase transitions inside neutron stars.
The GW190814 event was detected from a compact binary coalescence involving a
22.2–24.3$M_{\odot}$ black hole and a 2.50–2.67$M_{\odot}$ compact object
that could be either the heaviest neutron star
or the lightest black hole ever observed (Abbott et al., 2020b).
The discovery of GW190814 has triggered many theoretical efforts exploring
the nature of the secondary object and its implications for high-density
EOS (Fattoyev et al., 2020; Huang et al., 2020; Tews et al., 2021).
The gravitational-wave analyses (Essick & Landry, 2020; Tews et al., 2021) suggested that
the secondary in GW190814 is more likely to be a black hole, but
its possibility as a neutron star cannot be ruled out.
Several studies have suggested that the secondary in GW190814 may be a rapidly
rotating neutron star (Li et al., 2020; Most et al., 2020; Tsokaros et al., 2020; Zhang & Li, 2020).
Moreover, it could also be considered as a heavy neutron star containing deconfined
quark matter, where the inclusion of quarks has a significant impact on
the observations (Tan et al., 2020; Demircik et al., 2021; Dexheimer et al., 2021).
In light of these recent developments in astronomy, it would be
interesting and informative to explore possible structures of
the hadron-quark mixed phase in massive neutron stars.
For the description of hadron-quark phase transition in neutron stars,
both Gibbs and Maxwell constructions are often used depending on the
surface tension at the interface (Bhattacharyya et al., 2010).
In the limit of zero surface tension, the Gibbs equilibrium conditions
are satisfied between the two coexisting
phases, and only global charge neutrality is imposed in the mixed
phase (Glendenning, 1992; Schertler et al., 2000; Yang & Shen, 2008; Xu et al., 2010; Wu & Shen, 2017).
On the other hand, the Maxwell construction (MC) is valid for sufficiently large
surface tension, where local charge neutrality is enforced and the transition
takes place at constant pressure (Bhattacharyya et al., 2010; Han et al., 2019; Wu & Shen, 2019).
It is noteworthy that only bulk contributions
are involved in the Gibbs and Maxwell constructions, whereas the finite-size
effects like surface and Coulomb energies are neglected.
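A minimal numerical toy can make the Maxwell construction concrete. The pressure functions below are schematic (arbitrary units, hypothetical coefficients), not the RMF or bag-model EOS employed in this work; they merely show how the transition point and the density jump follow from pressure balance at equal baryon chemical potential.

```python
# Toy Maxwell construction (schematic pressures in arbitrary units, NOT the
# RMF/bag-model EOS of this paper): the deconfinement transition sits where
# the two phases have equal pressure at equal baryon chemical potential.
from scipy.optimize import brentq

def P_hadron(mu):
    # schematic hadronic pressure
    return 0.05 * (mu - 0.9)**2

def P_quark(mu):
    # schematic quark pressure: stiffer slope, offset by a bag-like constant
    return 0.12 * (mu - 0.9)**2 - 0.004

# transition chemical potential where P_H(mu) = P_Q(mu)
mu_t = brentq(lambda mu: P_hadron(mu) - P_quark(mu), 1.0, 2.0)

def density(P, mu, h=1e-6):
    # baryon density n = dP/dmu by central difference
    return (P(mu + h) - P(mu - h)) / (2.0 * h)

n_H, n_Q = density(P_hadron, mu_t), density(P_quark, mu_t)
# n_Q > n_H: the baryon density jumps across the first-order transition.
```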
In a more realistic case, due to the competition between surface and Coulomb
energies, some geometric structures may be formed in the hadron-quark mixed phase,
known as hadron-quark pasta phases (Heiselberg et al., 1993; Endo et al., 2006; Maruyama et al., 2007; Yasutake et al., 2014; Spinella et al., 2016; Weber et al., 2019; Wu & Shen, 2019).
This structured mixed phase is analogous
to the nuclear pasta phase in the inner crust of neutron stars.
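The outcome of this competition can be sketched with the standard droplet estimate (hypothetical coefficients, not the EM functional used in this work): per unit volume, the surface energy of spherical droplets scales as $3\sigma/r$ and the Coulomb energy as $r^{2}$, so minimizing their sum fixes the droplet size, with the surface energy equal to twice the Coulomb energy at the optimum.

```python
# Schematic droplet-size estimate for a pasta phase (coefficients are
# placeholders, not this paper's EM functional): minimize the sum of
# surface and Coulomb energy densities, e(r) = 3*sigma/r + C*r**2.
from scipy.optimize import minimize_scalar

sigma, C = 1.0, 0.5  # hypothetical surface tension and Coulomb coefficient

def e(r):
    return 3.0 * sigma / r + C * r**2

res = minimize_scalar(e, bounds=(0.1, 10.0), method="bounded")
r_opt = res.x  # analytic optimum: (3*sigma/(2*C))**(1/3)
# At the minimum, the surface term equals twice the Coulomb term.
```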
Several methods have been developed to study the properties of hadron-quark pasta
phases. In the coexisting phases (CP) method (Wu & Shen, 2019), the hadronic and quark
phases are assumed to satisfy the Gibbs conditions for phase equilibrium,
while the surface and Coulomb energies are taken into account perturbatively.
A more realistic description of the pasta phase has been developed in a series of
works (Endo et al., 2006; Maruyama et al., 2007; Yasutake et al., 2014), where the Thomas–Fermi approximation was used to
describe the density profiles of hadrons and quarks in the Wigner–Seitz cell.
For simplicity, the particle densities in the two coexisting phases are generally
assumed to be spatially constant, and the charge screening effect is neglected.
In our previous works (Wu & Shen, 2019; Ju et al., 2021), we proposed an energy minimization (EM) method
for improving the treatment of surface and Coulomb energies, which play a key role in
determining the structure of the pasta phases. By incorporating the surface and Coulomb
contributions in the EM procedure, one can derive the equilibrium
conditions for coexisting phases that are different from the
Gibbs conditions used in the CP method.
In the present work, we further extend the EM method for exploring the hadron-quark
pasta phases in massive neutron stars.
In the EM method, the hadron-quark pasta phase is described within the Wigner–Seitz
approximation, where the whole space is divided into equivalent cells with a
geometric symmetry. The hadronic and quark phases in a charge-neutral cell are
assumed to be separated by a sharp interface. In the more realistic description
of Tatsumi et al. (2003), there is a thin boundary layer between two separate bulk phases,
which can lead to a difference in the electric potential between the two phases.
As a result, the electron densities in the hadronic and quark phases are allowed
to be different from each other, whereas the electron chemical potentials are equal.
According to this argument, the Maxwell construction (MC) could be stable even though the
electron densities in the two separate bulk phases are different.
The difference in the electron densities between two separate phases can be
understood as a result of the charge screening effect.
In the present study, we would like to evaluate how important the charge screening
effect is for the hadron-quark pasta phases. Meanwhile, the results in the EM method
will be compared to those obtained with a uniform electron gas in the CP method.
To describe the hadronic phase, we employ the relativistic mean-field (RMF) model
and choose the recently proposed BigApple parameterization (Fattoyev et al., 2020).
In the past decades, the RMF approach based on various energy density functionals
has been successfully applied in the description of finite nuclei and infinite nuclear
matter. Several popular RMF models, such as NL3 (Lalazissis et al., 1997), TM1 (Sugahara & Toki, 1994),
and IUFSU (Fattoyev et al., 2010), have been widely used in astrophysical applications,
since they can not only reproduce experimental data of finite nuclei
but also predict large enough neutron-star masses.
The BigApple model was proposed by Fattoyev et al. (2020) after the discovery of GW190814,
in which they considered various constraints from astrophysical observations as well as
the ground-state properties of finite nuclei.
The BigApple model predicts a maximum neutron-star mass of $M_{\rm{max}}=2.6M_{\odot}$,
while the resulting radius and tidal deformability of neutron stars are consistent
with GW170817 and NICER observations.
It is well known that nuclear symmetry energy and its slope play
an important role in understanding the properties of neutron stars (Oertel et al., 2017; Ji et al., 2019).
There exists a positive correlation between the symmetry energy slope $L$ and the
neutron-star radius (Alam et al., 2016).
The NL3 and TM1 models have a rather large slope parameter, which results in too large
a radius and tidal deformability for a canonical $1.4M_{\odot}$ neutron star
as compared to the estimations from astrophysical observations (Ji et al., 2019).
In the present work, we prefer to employ the BigApple model with a small slope
$L$, which is more consistent with the analysis of the GW170817 event.
Generally, the appearance of hyperons at high densities would considerably soften
the EOS and reduce the maximum neutron-star mass (Oertel et al., 2017).
However, the influence of hyperons may be suppressed by incorporating quark degrees
of freedom or introducing additional repulsion for hyperons (Lonardoni et al., 2015; Gomes et al., 2019).
Currently, there are large uncertainties in the hyperon–nucleon and hyperon–hyperon
interactions due to limited experimental data.
For simplicity, we do not include hyperons in the present calculations
and focus on the influence of deconfined quarks in the core of neutron stars.
We utilize a modified MIT bag model with vector interactions, often referred to as
the vMIT model (Gomes et al., 2019; Han et al., 2019), to describe the quark phase.
It has been shown in the literature that including vector interactions among quarks
can significantly stiffen the EOS at high densities and help to produce massive
neutron stars in agreement with the astronomical observations (Klähn & Fischer, 2015).
The vector interaction in the vMIT model is introduced via the exchange of a vector meson
that is analogous to the $\omega$ meson in the Walecka model (Lopes et al., 2021).
The vector coupling constant is usually treated as a free parameter
and a universal coupling is assumed for all quark flavors.
In Han et al. (2019) and Gomes et al. (2019), the influence of the vector interaction on the properties of compact stars
was studied using the Gibbs and Maxwell constructions to describe the hadron-quark mixed phase,
without considering possible geometric structures.
In the present work, we intend to investigate how the hadron-quark pasta phases are
affected by the vector interactions among quarks within the EM method.
We have two aims in this paper. The first is to explore the possibility of
the appearance of hadron-quark pasta phases in the core of massive neutron stars.
By comparing with current constraints, we investigate the compatibility between
the hadron-quark phase transition and astrophysical observations,
as well as the influence of the quark vector interactions.
The second aim is to extend the EM method for describing the hadron-quark
pasta phases with the charge screening effect.
By allowing different electron densities in separate bulk phases,
the local $\beta$ equilibrium is achieved inside the Wigner–Seitz cell.
Although in more precise Thomas–Fermi calculations the electron density
should depend on the position, a simplified treatment of the charge screening
effect with different constant electron densities in the two bulk phases
is helpful for understanding the transition from the Gibbs construction (GC)
to the MC.
Compared with the GC, the EM method used in this work
includes the effects of finite size and charge screening,
which are essential for determining the structure of the hadron-quark mixed phase.
This paper is arranged as follows.
In Section 2, we briefly describe the framework for describing
the hadronic and quark phases.
The EM method for the hadron-quark pasta phases is presented in Section 3.
In Section 4, we discuss numerical results of the hadron-quark pasta phases
and neutron-star properties.
Section 5 is devoted to a summary.
2 Models for hadronic and quark phases
In this section, we briefly describe the RMF model for the hadronic matter and
the vMIT model for the quark matter. In addition, we explain the model parameters
used in our calculations.
2.1 Hadronic Phase
We employ the RMF model with the BigApple parameterization to describe
the hadronic matter, where nucleons interact through the exchange of various mesons
including the isoscalar–scalar $\sigma$ meson, the isoscalar–vector $\omega$ meson,
and the isovector–vector $\rho$ meson.
The Lagrangian density for hadronic matter consisting of nucleons ($p$ and $n$) and
leptons ($e$ and $\mu$) is written as
$$\displaystyle\mathcal{L}=\sum_{i=p,n}\bar{\psi}_{i}\left\{i\gamma_{\mu}\partial^{\mu}-M-g_{\sigma}\sigma-\gamma_{\mu}\left[g_{\omega}\omega^{\mu}+\frac{g_{\rho}}{2}\tau_{a}\rho^{a\mu}\right]\right\}\psi_{i}$$
(1)
$$\displaystyle+\frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma-\frac{1}{2}m^{2}_{\sigma}\sigma^{2}-\frac{1}{3}g_{2}\sigma^{3}-\frac{1}{4}g_{3}\sigma^{4}$$
$$\displaystyle-\frac{1}{4}W_{\mu\nu}W^{\mu\nu}+\frac{1}{2}m^{2}_{\omega}\omega_{\mu}\omega^{\mu}+\frac{1}{4}c_{3}\left(\omega_{\mu}\omega^{\mu}\right)^{2}$$
$$\displaystyle-\frac{1}{4}R^{a}_{\mu\nu}R^{a\mu\nu}+\frac{1}{2}m^{2}_{\rho}\rho^{a}_{\mu}\rho^{a\mu}+\Lambda_{\rm{v}}\left(g_{\omega}^{2}\omega_{\mu}\omega^{\mu}\right)\left(g_{\rho}^{2}\rho^{a}_{\mu}\rho^{a\mu}\right)$$
$$\displaystyle+\sum_{l=e,\mu}\bar{\psi}_{l}\left(i\gamma_{\mu}\partial^{\mu}-m_{l}\right)\psi_{l},$$
where $W^{\mu\nu}$ and $R^{a\mu\nu}$ represent the antisymmetric field
tensors for $\omega^{\mu}$ and $\rho^{a\mu}$, respectively.
Under the mean-field approximation, the meson fields are treated as classical
fields, which are denoted by
$\sigma=\left\langle\sigma\right\rangle$,
$\omega=\left\langle\omega^{0}\right\rangle$,
and $\rho=\left\langle\rho^{30}\right\rangle$.
These mean fields can be obtained by solving a set of coupled equations in the RMF model.
For the hadronic matter in $\beta$ equilibrium, the chemical potentials
satisfy the relations $\mu_{p}=\mu_{n}-\mu_{e}$ and $\mu_{\mu}=\mu_{e}$.
At zero temperature, the chemical potentials of nucleons and leptons are expressed as
$$\displaystyle\mu_{i}=\sqrt{{k_{F}^{i}}^{2}+{M^{\ast}}^{2}}+g_{\omega}\omega+\frac{g_{\rho}}{2}\tau_{3}\rho,\hskip 14.22636pti=p,n,$$
(2)
$$\displaystyle\mu_{l}=\sqrt{{k_{F}^{l}}^{2}+m_{l}^{2}},\hskip 85.35826ptl=e,\mu,$$
(3)
with $\tau_{3}=+1$ and $-1$ for protons and neutrons, respectively.
The effective nucleon mass is defined as $M^{\ast}=M+g_{\sigma}{\sigma}$.
The total energy density and pressure in hadronic matter are written as
$$\displaystyle\varepsilon_{\rm{HP}}=\sum_{i=p,n}\varepsilon^{i}_{\rm{FG}}+\sum_{l=e,\mu}\varepsilon^{l}_{\rm{FG}}$$
(4)
$$\displaystyle+\frac{1}{2}m^{2}_{\sigma}{\sigma}^{2}+\frac{1}{3}{g_{2}}{\sigma}^{3}+\frac{1}{4}{g_{3}}{\sigma}^{4}+\frac{1}{2}m^{2}_{\omega}{\omega}^{2}$$
$$\displaystyle+\frac{3}{4}{c_{3}}{\omega}^{4}+\frac{1}{2}m^{2}_{\rho}{\rho}^{2}+3{\Lambda}_{\rm{v}}\left(g^{2}_{\omega}{\omega}^{2}\right)\left(g^{2}_{\rho}{\rho}^{2}\right),$$
$$\displaystyle P_{\rm{HP}}=\sum_{i=p,n}P^{i}_{\rm{FG}}+\sum_{l=e,\mu}P^{l}_{\rm{FG}}$$
(5)
$$\displaystyle-\frac{1}{2}m^{2}_{\sigma}{\sigma}^{2}-\frac{1}{3}{g_{2}}{\sigma}^{3}-\frac{1}{4}{g_{3}}{\sigma}^{4}+\frac{1}{2}m^{2}_{\omega}{\omega}^{2}$$
$$\displaystyle+\frac{1}{4}{c_{3}}{\omega}^{4}+\frac{1}{2}m^{2}_{\rho}{\rho}^{2}+\Lambda_{\rm{v}}\left(g^{2}_{\omega}{\omega}^{2}\right)\left(g^{2}_{\rho}{\rho}^{2}\right),$$
where $\varepsilon^{i}_{\rm{FG}}$ and $P^{i}_{\rm{FG}}$ denote the Fermi gas contributions
of species $i$ with a mass $m_{i}$ and degeneracy $N_{i}$,
$$\displaystyle\varepsilon^{i}_{\rm{FG}}=N_{i}\int_{0}^{k^{i}_{F}}\frac{d^{3}k}{(2\pi)^{3}}\sqrt{k^{2}+m_{i}^{2}},$$
(6)
$$\displaystyle P^{i}_{\rm{FG}}=\frac{N_{i}}{3}\int_{0}^{k^{i}_{F}}\frac{d^{3}k}{(2\pi)^{3}}\frac{k^{2}}{\sqrt{k^{2}+m_{i}^{2}}}.$$
(7)
For nucleons, the effective mass $m_{i}=M^{\ast}$ is used and $N_{i}=2$ represents the spin degeneracy.
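The Fermi-gas integrals in Equations (6) and (7) have standard closed forms. The following sketch (ours, not part of the original work; natural units with $\hbar c=197.327$ MeV fm) evaluates them and can be checked against the massless limit $P=\varepsilon/3$:

```python
import math

HBARC = 197.327  # hbar*c in MeV fm

def fermi_gas(kF, m, N):
    """Energy density and pressure (MeV fm^-3) of a cold Fermi gas,
    i.e. the closed forms of Eqs. (6)-(7).
    kF, m in MeV; N is the degeneracy factor."""
    EF = math.sqrt(kF**2 + m**2)
    s = math.asinh(kF / m) if m > 0 else 0.0
    eps = N * (kF * (2*kF**2 + m**2) * EF - m**4 * s) / (16 * math.pi**2)
    P = N * (kF * (2*kF**2 - 3*m**2) * EF + 3 * m**4 * s) / (48 * math.pi**2)
    return eps / HBARC**3, P / HBARC**3

# example: electron gas (N = 2) at kF = 100 MeV
eps_e, P_e = fermi_gas(100.0, 0.511, 2)
```

For $m=0$ the expressions reduce to $\varepsilon=N k_F^4/8\pi^2$ and $P=\varepsilon/3$, which is a convenient consistency check.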
In the present calculations, we employ the BigApple parameterization given in Fattoyev et al. (2020),
which could provide an accurate description of ground-state properties for finite nuclei
across the nuclear chart. Its predictions for infinite nuclear matter at the saturation density
$n_{0}=0.155\,\rm{fm}^{-3}$ are
$E_{0}=-16.34\,\rm{MeV}$ for energy per nucleon,
$K=227\,\rm{MeV}$ for incompressibility,
$E_{\rm{sym}}=31.3\,\rm{MeV}$ for symmetry energy,
and $L=39.8\,\rm{MeV}$ for the slope of symmetry energy.
The small slope parameter $L$ in the BigApple model leads to acceptable radius
and tidal deformability for a canonical $1.4M_{\odot}$ neutron star,
as compared to the estimations from astrophysical observations.
Moreover, the BigApple model predicts a maximum neutron-star mass of $2.6M_{\odot}$,
which is sufficiently large for exploring possible phase transitions in neutron stars.
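As a worked illustration of Equations (2)–(3) and the $\beta$-equilibrium relation $\mu_{p}=\mu_{n}-\mu_{e}$, the sketch below solves for the proton fraction by bisection at fixed baryon density. The mean-field values are hypothetical placeholders of ours (the real ones follow from solving the coupled RMF field equations), muons are omitted, and charge neutrality $n_{e}=n_{p}$ is imposed:

```python
import math

HBARC = 197.327   # hbar*c in MeV fm
M = 939.0         # nucleon mass (MeV)
m_e = 0.511       # electron mass (MeV)

# Hypothetical mean-field inputs (MeV): placeholders, NOT BigApple outputs.
g_sigma_sigma = -60.0   # g_sigma * sigma (negative: attraction)
g_omega_omega = 100.0   # g_omega * omega
g_rho_rho = -5.0        # (g_rho/2) * rho (negative for neutron-rich matter)

Mstar = M + g_sigma_sigma   # effective nucleon mass

def kF(n, N=2):
    """Fermi momentum (MeV) for density n (fm^-3), degeneracy N."""
    return HBARC * (6 * math.pi**2 * n / N)**(1/3)

def mu_nucleon(n, tau3):
    """Eq. (2): nucleon chemical potential (MeV)."""
    return math.sqrt(kF(n)**2 + Mstar**2) + g_omega_omega + tau3 * g_rho_rho

def beta_mismatch(Yp, nb):
    """mu_p + mu_e - mu_n with charge neutrality n_e = n_p (no muons)."""
    np_, nn = Yp * nb, (1 - Yp) * nb
    mu_e = math.sqrt(kF(np_)**2 + m_e**2)   # Eq. (3)
    return mu_nucleon(np_, +1) + mu_e - mu_nucleon(nn, -1)

def solve_Yp(nb, lo=1e-6, hi=0.5):
    """Bisection for the proton fraction in beta equilibrium;
    the mismatch is monotonically increasing in Yp."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if beta_mismatch(mid, nb) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

Yp = solve_Yp(0.155)   # at saturation density (fm^-3)
```

With these placeholder fields the resulting proton fraction comes out at the percent level, as expected for $\beta$-equilibrated neutron-star matter; the actual value depends on the self-consistent mean fields.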
2.2 Quark Phase
We use a modified MIT bag model with vector interactions (vMIT) to describe the quark phase.
The inclusion of repulsive vector interactions among quarks plays
a crucial role in obtaining a stiff high-density EOS that is required by
the observations of $\sim 2M_{\odot}$ neutron stars.
In the vMIT model, the vector interaction is introduced via the exchange of
a vector meson with the mass $m_{V}$. We consider the quark matter consisting
of three flavor quarks ($u$, $d$, and $s$) and leptons ($e$ and $\mu$).
The Lagrangian density of the vMIT model in the mean-field approximation is written as
$$\displaystyle\mathcal{L}=\sum_{i=u,d,s}\left[\bar{\psi}_{i}\left(i\gamma_{\mu}\partial^{\mu}-m_{i}-g_{V}\gamma_{\mu}V^{\mu}\right)\psi_{i}-B\right.$$
(8)
$$\displaystyle\left.+\frac{1}{2}m_{V}^{2}V_{\mu}V^{\mu}\right]\Theta+\sum_{l=e,\mu}\bar{\psi_{l}}\left(i\gamma_{\mu}\partial^{\mu}-m_{l}\right)\psi_{l},$$
where $B$ denotes the bag constant and $\Theta$ is the Heaviside step function
representing the confinement of quarks inside the bag.
The nonzero mean field $V_{0}$ is calculated from the equation of motion for the vector meson,
$$\displaystyle m_{V}^{2}V_{0}=g_{V}\sum_{i=u,d,s}n_{i},$$
(9)
with $n_{i}$ being the number density of the quark flavor $i$.
The quark chemical potential is then given by
$$\displaystyle\mu_{i}=\sqrt{{k^{i}_{F}}^{2}+m_{i}^{2}}+g_{V}V_{0},$$
(10)
which is clearly enhanced by the vector potential.
The total energy density and pressure in quark matter are written as
$$\displaystyle\varepsilon_{\rm{QP}}=\sum_{i=u,d,s}\varepsilon^{i}_{\rm{FG}}+\sum_{l=e,\mu}\varepsilon^{l}_{\rm{FG}}$$
(11)
$$\displaystyle+\frac{1}{2}\left(\frac{g_{V}}{m_{V}}\right)^{2}\left(n_{u}+n_{d}+n_{s}\right)^{2}+B,$$
$$\displaystyle P_{\rm{QP}}=\sum_{i=u,d,s}P^{i}_{\rm{FG}}+\sum_{l=e,\mu}P^{l}_{\rm{FG}}$$
(12)
$$\displaystyle+\frac{1}{2}\left(\frac{g_{V}}{m_{V}}\right)^{2}\left(n_{u}+n_{d}+n_{s}\right)^{2}-B,$$
where $\varepsilon^{i}_{\rm{FG}}$ and $P^{i}_{\rm{FG}}$ denote the Fermi gas contributions
of species $i$, as given by Equations (6) and (7).
For the quark flavor $i$, the degeneracy $N_{i}=6$ arises from the spin and color degrees
of freedom, while $m_{i}$ represents the current quark mass.
The vector interactions among quarks are crucial for high-density EOS.
In practice, we vary the vector coupling $G_{V}=\left(g_{V}/m_{V}\right)^{2}$
in the range of 0–0.3 fm${}^{2}$ in order to examine the impact of vector interactions.
We adopt the current quark masses $m_{u}=m_{d}=5.5$ MeV and $m_{s}=95$ MeV in our
calculations. As for the bag constant, we mainly use the value of $B^{1/4}=180$ MeV.
It is well known that the bag constant could significantly affect the EOS of quark matter
and consequently influence the hadron-quark phase transition.
We will compare the results with different choices of $B$ in the vMIT model for quark matter,
to evaluate the influence of the bag constant on the hadron-quark
pasta phases in neutron stars.
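Equations (11)–(12) translate directly into code. The sketch below (ours; quark-matter part only, leptons omitted, natural units) also shows the stiffening effect of the vector term: at fixed densities, increasing $G_{V}$ raises both $\varepsilon_{\rm{QP}}$ and $P_{\rm{QP}}$:

```python
import math

HBARC = 197.327  # hbar*c in MeV fm

def fermi_gas(kF, m, N):
    """Closed-form energy density and pressure (MeV fm^-3), Eqs. (6)-(7)."""
    EF = math.sqrt(kF**2 + m**2)
    s = math.asinh(kF / m) if m > 0 else 0.0
    eps = N * (kF * (2*kF**2 + m**2) * EF - m**4 * s) / (16 * math.pi**2)
    P = N * (kF * (2*kF**2 - 3*m**2) * EF + 3 * m**4 * s) / (48 * math.pi**2)
    return eps / HBARC**3, P / HBARC**3

def vmit_eos(n_u, n_d, n_s, GV=0.2, B4=180.0):
    """Eqs. (11)-(12) for pure quark matter (lepton terms omitted here).
    Densities in fm^-3, GV = (g_V/m_V)^2 in fm^2, B4 = B^{1/4} in MeV."""
    B = B4**4 / HBARC**3            # bag constant in MeV fm^-3
    eps, P = B, -B
    for n, m in ((n_u, 5.5), (n_d, 5.5), (n_s, 95.0)):
        kF = HBARC * (math.pi**2 * n)**(1/3)   # N = 6: kF = (6 pi^2 n / 6)^{1/3}
        e_i, p_i = fermi_gas(kF, m, 6)
        eps += e_i
        P += p_i
    vec = 0.5 * GV * (n_u + n_d + n_s)**2 * HBARC   # vector term, MeV fm^-3
    return eps + vec, P + vec
```

The vector contribution $\tfrac{1}{2}G_{V}n_{q}^{2}$ enters energy density and pressure with the same sign, which is what stiffens the EOS.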
3 Hadron-quark pasta phases
To describe the hadron-quark pasta phases, we employ the EM method
within the Wigner–Seitz approximation, where the whole space is divided into equivalent
cells with a geometric symmetry.
The coexisting hadronic and quark phases in a charge-neutral cell are assumed to be
separated by a sharp interface, while the particle densities in each phase are taken
to be uniform for simplicity. This is analogous to the compressible liquid-drop
model used in the study of nuclear liquid-gas phase transition at subnuclear
densities (Lattimer & Swesty, 1991; Bao et al., 2014).
It is well known that the surface tension at the interface plays a crucial role
in determining the structure of the mixed phase.
The Gibbs and Maxwell constructions, respectively, correspond to
the two extreme cases of zero and large surface tension.
The hadron-quark mixed phase described in the EM method can be understood as an
intermediate state lying between the Gibbs and Maxwell constructions.
For the EM method used in our previous works (Wu & Shen, 2019; Ju et al., 2021), the electrons
are assumed to be uniformly distributed throughout the whole cell.
This is consistent with the GC, but contradicts the MC
where the requirement of local charge neutrality leads to different
electron densities in the two phases.
In order to understand the transition from the GC to the MC
with increasing surface tension, we extend the EM method by allowing
different electron densities in the hadronic and quark phases in the present study.
Such a difference may be caused by the electric potential difference between the two phases,
which can be regarded as a manifestation of the charge screening effect.
In the EM method, the total energy density of the mixed phase is expressed as
$$\displaystyle\varepsilon_{\rm{MP}}=\chi\varepsilon_{\rm{QP}}+\left(1-\chi\right)\varepsilon_{\rm{HP}}+\varepsilon_{\rm{surf}}+\varepsilon_{\rm{Coul}},$$
(13)
where $\chi=V_{\rm{QP}}/(V_{\rm{QP}}+V_{\rm{HP}})$ denotes the volume fraction
of the quark phase. The energy densities, $\varepsilon_{\rm{HP}}$ and $\varepsilon_{\rm{QP}}$,
are given by Equations (4) and (11), respectively.
The first two terms of Equation (13) represent the bulk contributions,
while the last two terms come from the finite-size effects.
The surface and Coulomb energy densities are calculated from
$$\displaystyle{\varepsilon}_{\rm{surf}}=\frac{D\sigma\chi_{\rm{in}}}{r_{D}},$$
(14)
$$\displaystyle{\varepsilon}_{\rm{Coul}}=\frac{e^{2}}{2}\left(\delta n_{c}\right)^{2}r_{D}^{2}\chi_{\rm{in}}\Phi\left(\chi_{\rm{in}}\right),$$
(15)
with
$$\displaystyle\Phi\left(\chi_{\rm{in}}\right)=\left\{\begin{array}[]{ll}\frac{1}{D+2}\left(\frac{2-D\chi_{\rm{in}}^{1-2/D}}{D-2}+\chi_{\rm{in}}\right),&D=1,3,\\
\frac{\chi_{\rm{in}}-1-\ln{\chi_{\rm{in}}}}{D+2},&D=2,\\
\end{array}\right.$$
(18)
where $D=1,2,3$ denotes the geometric dimension of the cell, and $r_{D}$ represents
the size of the inner phase. $\chi_{\rm{in}}$ is the volume fraction of the inner phase,
i.e., $\chi_{\rm{in}}=\chi$ for droplet, rod, and slab configurations,
and $\chi_{\rm{in}}=1-\chi$ for tube and bubble configurations.
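The geometric function $\Phi(\chi_{\rm{in}})$ can be coded directly; a useful sanity check (ours, not from the paper) is that $\Phi\to 0$ as $\chi_{\rm{in}}\to 1$, so the Coulomb energy (15) vanishes when the inner phase fills the whole cell:

```python
import math

def Phi(chi_in, D):
    """Geometric function of Eq. (18) for cell dimension D = 1, 2, 3."""
    if D == 2:
        # separate branch: the D -> 2 limit of the generic formula is logarithmic
        return (chi_in - 1 - math.log(chi_in)) / (D + 2)
    return ((2 - D * chi_in**(1 - 2/D)) / (D - 2) + chi_in) / (D + 2)
```

$\Phi$ is positive for $0<\chi_{\rm{in}}<1$ and vanishes exactly at $\chi_{\rm{in}}=1$ in all three dimensions.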
The charge density difference between the hadronic and quark phases is defined as
$$\displaystyle\delta n_{c}=n_{c}^{\rm{HP}}-n_{c}^{\rm{QP}},$$
(19)
with
$$\displaystyle n_{c}^{\rm{HP}}=n_{p}-n_{e}^{\rm{HP}}-n_{\mu}^{\rm{HP}},$$
(20)
$$\displaystyle n_{c}^{\rm{QP}}=\frac{2}{3}n_{u}-\frac{1}{3}n_{d}-\frac{1}{3}n_{s}-n_{e}^{\rm{QP}}-n_{\mu}^{\rm{QP}}.$$
(21)
The surface tension $\sigma$ can significantly affect the structure of the mixed phase (Endo et al., 2006; Yasutake et al., 2014; Wu & Shen, 2019).
At present, the value of $\sigma$ is poorly known, so it is usually taken as a free parameter.
In this study, we use the value of $\sigma=40$ MeV fm${}^{-2}$, which is close to the
prediction of the MIT bag model using the multiple reflection expansion method (Ju et al., 2021).
The energy density of the mixed phase, given in Equation (13),
is calculated as a function of the following variables:
$n_{p}$, $n_{n}$, $n_{u}$, $n_{d}$, $n_{s}$, $n_{e}^{\rm{HP}}$, $n_{\mu}^{\rm{HP}}$,
$n_{e}^{\rm{QP}}$, $n_{\mu}^{\rm{QP}}$, $\chi$, and $r_{D}$.
The values of these variables are determined by solving a set of equilibrium equations
between the hadronic and quark phases at a given baryon density $n_{b}$.
In the EM method, the equilibrium conditions for coexisting two phases in the
cell are derived by minimizing the total energy density (13)
under the constraints of baryon number conservation and global charge neutrality,
which are written as
$$\displaystyle\frac{\chi}{3}\left(n_{u}+n_{d}+n_{s}\right)+\left(1-\chi\right)\left(n_{p}+n_{n}\right)=n_{b},$$
(22)
$$\displaystyle\chi n_{c}^{\rm{QP}}+\left(1-\chi\right)n_{c}^{\rm{HP}}=0.$$
(23)
We introduce the Lagrange multipliers $\mu_{n}$ and $\mu_{e}$
for the constraints, and then perform the minimization for the function
$$\displaystyle w=\varepsilon_{\rm{MP}}-\mu_{n}\left[\frac{\chi}{3}\left(n_{u}+n_{d}+n_{s}\right)+\left(1-\chi\right)\left(n_{p}+n_{n}\right)\right]$$
(24)
$$\displaystyle+\mu_{e}\left[\chi n_{c}^{\rm{QP}}+\left(1-\chi\right)n_{c}^{\rm{HP}}\right].$$
According to the definition of the chemical potential,
$\mu_{n}=\partial\varepsilon_{\rm{MP}}/\partial n_{n}^{\rm{MP}}$ and
$\mu_{e}=\partial\varepsilon_{\rm{MP}}/\partial n_{e}^{\rm{MP}}$
correspond to the chemical potentials of neutrons and electrons in the mixed phase, respectively.
By minimizing $w$ with respect to the particle densities, we obtain
the following equilibrium conditions for chemical potentials:
$$\displaystyle\mu_{e}=\mu_{e}^{\rm{HP}}-\frac{2\varepsilon_{\rm{Coul}}}{(1-\chi)\delta n_{c}}=\mu_{e}^{\rm{QP}}+\frac{2\varepsilon_{\rm{Coul}}}{\chi\delta n_{c}},$$
(25)
$$\displaystyle\mu_{p}=\mu_{n}-\mu_{e}^{\rm{HP}},$$
(26)
$$\displaystyle\mu_{u}=\frac{1}{3}\mu_{n}-\frac{2}{3}\mu_{e}^{\rm{QP}},$$
(27)
$$\displaystyle\mu_{d}=\mu_{s}=\frac{1}{3}\mu_{n}+\frac{1}{3}\mu_{e}^{\rm{QP}}.$$
(28)
It is necessary to clarify the definition of the chemical potential, especially for charged particles.
When the electric potential is taken into account, the chemical potential of a charged particle
is gauge dependent as discussed in Tatsumi et al. (2003). In the mixed phase studied here,
the hadronic and quark phases are charged separately, which leads to nonzero electric potentials
in the two phases. We can understand the terms associated with $\varepsilon_{\rm{Coul}}$ in
Equation (25) as the contribution from the electric potential.
The chemical potentials of nucleons and quarks in the above equations are defined by
Equations (2) and (10), where the electric potential is not taken into account.
For the electrons, Equation (25) implies that the total chemical potential including
the contribution from the electric potential remains constant throughout the cell,
although the electron densities in the hadronic and quark phases are different from each other.
Moreover, Equations (26)–(28) imply that local
$\beta$ equilibrium should be reached, which is also satisfied in the Gibbs and Maxwell constructions.
The equilibrium condition for the pressures is derived by minimizing $w$
with respect to the volume fraction $\chi$, which is expressed as
$$\displaystyle P_{\rm{HP}}=P_{\rm{QP}}-\frac{2\varepsilon_{\rm{Coul}}}{\delta n_{c}}\left[\frac{n_{c}^{\rm{QP}}}{\chi}+\frac{n_{c}^{\rm{HP}}}{1-\chi}\right]$$
(29)
$$\displaystyle\mp\frac{\varepsilon_{\rm{Coul}}}{\chi_{\rm{in}}}\left(3+\chi_{\rm{in}}\frac{\Phi^{\prime}}{\Phi}\right),$$
where the sign of the last term is “$-$” for droplet,
rod, and slab configurations, while it is “$+$” for
tube and bubble configurations. The pressure of the mixed phase is calculated from the
thermodynamic relation, $P_{\rm{MP}}=\mu_{n}n_{b}-\varepsilon_{\rm{MP}}$,
which is actually equal to $-w$ given in Equation (24).
Comparing with the Gibbs condition of equal pressures,
the additional terms in Equation (29) are caused by the finite-size effects.
When the surface and Coulomb energies are neglected by taking the limit
$\sigma\rightarrow 0$, all the equilibrium equations derived above
would reduce to the Gibbs conditions.
By minimizing $w$ with respect to the size $r_{D}$, we obtain the well-known
relation ${\varepsilon}_{\rm{surf}}=2{\varepsilon}_{\rm{Coul}}$, which leads to
the formula for the size of the inner phase,
$$\displaystyle r_{D}=\left[\frac{\sigma{D}}{e^{2}\left(\delta n_{c}\right)^{2}\Phi}\right]^{1/3},$$
(30)
whereas the size of the Wigner–Seitz cell is given by
$$\displaystyle r_{C}=\chi_{\rm{in}}^{-1/D}r_{D}.$$
(31)
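Equations (30)–(31), together with the relation $\varepsilon_{\rm{surf}}=2\varepsilon_{\rm{Coul}}$, can be verified numerically. In the sketch below the input numbers ($\delta n_{c}$, $\chi_{\rm{in}}$) are illustrative choices of ours, and we assume the convention $e^{2}=4\pi\alpha\,\hbar c$, which the text does not state explicitly:

```python
import math

HBARC = 197.327                         # hbar*c in MeV fm
E2 = 4 * math.pi / 137.036 * HBARC      # e^2 in MeV fm (our assumed convention)

def Phi(chi_in, D):
    """Geometric function of Eq. (18)."""
    if D == 2:
        return (chi_in - 1 - math.log(chi_in)) / (D + 2)
    return ((2 - D * chi_in**(1 - 2/D)) / (D - 2) + chi_in) / (D + 2)

def sizes(sigma, dnc, chi_in, D):
    """Eqs. (30)-(31): inner-phase size r_D and cell size r_C (fm).
    sigma in MeV fm^-2, dnc = delta n_c in fm^-3."""
    rD = (sigma * D / (E2 * dnc**2 * Phi(chi_in, D)))**(1/3)
    rC = chi_in**(-1/D) * rD
    return rD, rC

def e_surf(sigma, chi_in, rD, D):       # Eq. (14)
    return D * sigma * chi_in / rD

def e_coul(dnc, chi_in, rD, D):         # Eq. (15)
    return 0.5 * E2 * dnc**2 * rD**2 * chi_in * Phi(chi_in, D)

# illustrative droplet: sigma = 40 MeV fm^-2, delta n_c = 0.1 fm^-3, chi_in = 0.3
rD, rC = sizes(40.0, 0.1, 0.3, 3)
# at this r_D the virial-like relation e_surf = 2 * e_coul holds
```

Since $\varepsilon_{\rm{surf}}\propto r_{D}^{-1}$ and $\varepsilon_{\rm{Coul}}\propto r_{D}^{2}$, the minimization fixes their ratio at exactly $2$ independently of the input values.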
In practice, we solve the equilibrium equations at a given baryon density $n_{b}$
for all pasta configurations, and then determine the thermodynamically stable state that
has the lowest energy density. The hadron-quark mixed phase exists only in the density
range where its energy density is lower than that of both hadronic matter and quark matter.
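The search over pasta configurations described above can be illustrated with a toy criterion. At the optimal $r_{D}$, $\varepsilon_{\rm{surf}}+\varepsilon_{\rm{Coul}}=\tfrac{3}{2}\varepsilon_{\rm{surf}}\propto D^{2/3}\chi_{\rm{in}}\Phi(\chi_{\rm{in}})^{1/3}$ at fixed $\sigma$ and $\delta n_{c}$, so the favored shape minimizes this score. This heuristic of ours ignores the density dependence of the bulk terms and of $\delta n_{c}$, but it reproduces the droplet, rod, slab, tube, bubble sequence with increasing $\chi$:

```python
import math

def Phi(chi_in, D):
    """Geometric function of Eq. (18)."""
    if D == 2:
        return (chi_in - 1 - math.log(chi_in)) / (D + 2)
    return ((2 - D * chi_in**(1 - 2/D)) / (D - 2) + chi_in) / (D + 2)

# (name, dimension D, whether the inner phase is the quark phase)
SHAPES = [("droplet", 3, True), ("rod", 2, True), ("slab", 1, True),
          ("tube", 2, False), ("bubble", 3, False)]

def best_shape(chi):
    """Pick the shape minimizing the size-energy score
    D^{2/3} * chi_in * Phi(chi_in)^{1/3} at fixed sigma and delta n_c
    (heuristic: bulk terms are ignored)."""
    def score(shape):
        name, D, inner_is_quark = shape
        chi_in = chi if inner_is_quark else 1 - chi
        return D**(2/3) * chi_in * Phi(chi_in, D)**(1/3)
    return min(SHAPES, key=score)[0]

# quark droplets at small chi, slabs near chi = 0.5, bubbles at large chi
```

In the full calculation the winner is determined by the total energy density, but the same qualitative ordering of configurations emerges.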
4 Results and Discussion
In this section, we present numerical results for the hadron-quark pasta phases,
which are likely to appear in the core of massive neutron stars.
We employ the RMF model with the BigApple parameterization to describe the hadronic matter.
For the quark matter, we employ the MIT bag model with vector interactions (vMIT),
while the effects of the vector coupling $G_{V}$ and the bag constant $B$ are discussed.
The hadron-quark mixed phases are computed by the EM method described in the
previous section, and the properties of massive neutron stars are calculated
by using the EOS with quarks.
4.1 Hadron-quark mixed phases
In neutron-star matter, with increasing density the structured hadron-quark mixed phase
is expected to appear, which depends on the models used to describe the hadronic and
quark phases. In the present study, we employ the RMF model with the BigApple
parameterization for the hadronic phase, while the quark phase is described
by the vMIT model. To analyze the influence of model parameters on the deconfinement
phase transition, it is convenient to use the Maxwell construction in which the phase
transition appears at the crossing of the hadronic EOS with the quark EOS in the
pressure and chemical potential plane.
This is because the two coexisting phases in the Maxwell construction must have
the same pressure and baryon chemical potential.
We show in Figure 1 the pressure $P$ as a function of the
neutron chemical potential $\mu_{n}$ for the BigApple model and the vMIT
model with different vector coupling $G_{V}$ (left panel)
and different bag constant $B$ (right panel).
According to the Maxwell equilibrium conditions, the phase transition takes place at
the crossing of the hadronic and quark EOS curves.
One can see from the left panel that a larger $G_{V}$ in the vMIT model corresponds to a
higher transition pressure, which implies that the deconfinement phase transition is
delayed accordingly. In the right panel, we see that the transition pressure increases
as the bag constant $B$ increases. Therefore, it is reasonable to expect that
the formation of hadron-quark pasta phases may be delayed with increasing vector
coupling $G_{V}$ and bag constant $B$ in the vMIT model.
We compute the properties of a structured hadron-quark mixed phase in the EM method,
where the surface and Coulomb energies are included in the minimization procedure.
Compared to the Gibbs calculation without finite-size effects, we obtain a higher
energy for the pasta phase, since the surface and Coulomb terms are always positive.
In Figure 2, we present the energy densities of pasta phases obtained
in the EM method with $B^{1/4}=180$ MeV and $\sigma=40$ MeV fm${}^{-2}$
relative to those of the GC (i.e., $\sigma=0$).
The filled circles represent the transition points between different pasta phases.
It is shown that the pasta configuration changes from droplet to rod, slab, tube,
and bubble with increasing baryon density $n_{b}$.
For comparison, the energy densities of pure hadronic matter and pure quark matter
are displayed by black dotted-dashed and dashed lines, respectively, whereas
the results of the Maxwell construction are shown by green dotted lines.
It is seen that the energy density of the Maxwell construction is higher than
that of the pasta phase, which implies that the structured mixed phase is
more stable than the Maxwell construction in our present calculations.
However, for large enough surface tension $\sigma$, the Maxwell construction
may have lower energy than the pasta phase, which has been discussed
in the literature (Maslov et al., 2019).
To examine the influence of the quark vector interactions, we present the
results with $G_{V}=0.1$, 0.2, and 0.3 fm${}^{2}$ in the left, middle, and right
panels, respectively. One can see that as $G_{V}$ increases,
the mixed phase is shifted toward higher densities with a wider range.
This trend is in agreement with the behavior observed in the Maxwell
construction (see the left panel of Figure 1).
In Table 1, we present the onset densities of the hadron-quark pasta
phases and pure quark matter obtained using different model parameters and methods.
From this table, one can see the effect of $G_{V}$ and the difference between the
EM and CP methods.
It is interesting to study the geometric structure of the hadron-quark mixed phase,
which may be attributed to the competition between surface and Coulomb energies.
To check the influence of the method used, we compare the results using the EM
method developed in the present work with those obtained in the simple CP method (Wu & Shen, 2019).
We emphasize that in the CP method the Gibbs conditions for phase
equilibrium are enforced,
while the surface and Coulomb energies are taken into account perturbatively,
and therefore the charge screening effect is disregarded in this case.
In contrast, the EM method incorporates the surface and Coulomb contributions
self-consistently in deriving the equilibrium conditions, which leads to
the rearrangement of charged particles known as the charge screening effect.
In order to see how large the charge screening effect is, we display
in Figure 3 the particle density profiles in the Wigner–Seitz cell
for a slab configuration at $n_{b}=0.6\,\rm{fm}^{-3}$.
The calculations are carried out with $B^{1/4}=180$ MeV and $G_{V}=0.2\,\rm{fm}^{2}$.
The hadron-quark interface and the cell boundary are indicated by the vertical lines.
One can see that the structure size of the EM method is larger than that of the CP method.
This is because the charge screening effect tends to reduce the net charge density
in each phase in order to lower the Coulomb energy.
In the negatively charged quark matter and positively charged hadronic matter,
the particle densities $n_{d}$, $n_{s}$, and $n_{p}$ are reduced in the EM method,
whereas $n_{u}$ is enhanced in comparison to that of the CP method.
The electron densities in the quark and hadronic matter are, respectively, reduced and
enhanced due to the same reason. It is expected that as the surface tension $\sigma$
increases, the charge screening effect becomes more pronounced, and finally the local
charge neutrality is reached in the Maxwell construction.
In Figure 4, the size of the Wigner–Seitz cell ($r_{C}$) and that of
the inner phase ($r_{D}$) are plotted as a function of the baryon density $n_{b}$,
where the vMIT model with $B^{1/4}=180$ MeV is adopted for quark matter.
The results with $G_{V}=0$ and 0.2 fm${}^{2}$ are shown in the left and right
panels, respectively. We compare the results obtained from the EM method (thick lines)
with those from the CP method (thin lines) in order to check the charge screening effect.
It is found that both $r_{D}$ and $r_{C}$ of the EM method are larger than those of the CP
method, which can be explained by the rearrangement of charged particles
as shown in Figure 3.
According to Equation (30), a small charge density difference
$\delta n_{c}$ leads to a large pasta size $r_{D}$.
The value $\delta n_{c}=n_{c}^{\rm{HP}}-n_{c}^{\rm{QP}}$ can be observed in Figure 5,
where the charge densities $n_{c}^{\rm{HP}}$ and $n_{c}^{\rm{QP}}$ are shown
as a function of the baryon density $n_{b}$.
One can see that the magnitudes of $n_{c}^{\rm{HP}}$ and $n_{c}^{\rm{QP}}$
in the EM method are significantly reduced due to the charge screening effect,
and hence their difference $\delta n_{c}$ is also reduced in comparison to the CP results.
The reduction of $\delta n_{c}$ caused by the charge rearrangement in the EM method
leads to the increase of $r_{D}$ and $r_{C}$ in Figures 3 and 4.
In addition, a comparison between the EM and CP methods can be found in Table 1,
where the onset and configuration of pasta phases are somewhat different.
As shown in the last two lines of Table 1, due to relatively large Coulomb energies
of the CP method, the tube and bubble configurations are energetically disfavored and
will not appear before the transition to pure quark matter.
A similar analysis is available for understanding the impact of the vector
coupling $G_{V}$ on the pasta size. Compared to the case of $G_{V}=0$ in the left panel
of Figure 4, the values of $r_{D}$ and $r_{C}$ obtained with
$G_{V}=0.2\,\rm{fm}^{2}$ in the right panel are relatively small.
This can be understood from the behavior of $\delta n_{c}$ in Figure 5,
where the values of $\delta n_{c}=n_{c}^{\rm{HP}}-n_{c}^{\rm{QP}}$ with $G_{V}=0.2\,\rm{fm}^{2}$
are clearly larger than those with $G_{V}=0$, which leads to smaller $r_{D}$ and $r_{C}$
in the right panel of Figure 4.
Furthermore, we can see that as the density increases, $r_{D}$ in the droplet, rod, and slab
configurations increases, whereas it decreases in the tube and bubble phases.
These trends are related to a monotonic increase of the quark volume fraction $\chi$
during the phase transition.
4.2 Properties of Neutron Stars
In Figure 6, we show the pressure $P$ as a function of the baryon
density $n_{b}$ for hadronic, mixed, and quark phases. The calculations of
hadron-quark pasta phases are performed in the EM method, where the hadronic
matter is described by the BigApple model and the quark matter by the vMIT model
with $B^{1/4}=180$ MeV.
For comparison, the results of the Gibbs
and Maxwell constructions are displayed by red solid and green dotted lines, respectively.
The pressures with the Maxwell construction remain constant during
the phase transition, whereas the pressures with the GC increase
with $n_{b}$ over a broad range.
It is shown that the results of pasta phases lie between the Gibbs and
Maxwell constructions. As the vector coupling $G_{V}$ increases from left to right panels,
we see that the hadron-quark mixed phases appear at higher densities and pressures.
In particular, the end of the mixed phase shows a more pronounced $G_{V}$ dependence
than its onset, which may be attributed to the increasing quark fraction
during the phase transition.
In Figure 7, we display the particle fraction $Y_{i}=n_{i}/n_{b}$
as a function of the baryon density $n_{b}$.
The shaded area denotes the mixed phase region, where the results of hadron-quark
pasta phases are obtained in the EM method.
We employ the BigApple model for hadronic matter
and the vMIT model with $B^{1/4}=180$ MeV and $G_{V}=0.2\,\rm{fm}^{2}$ for quark matter.
At low densities, the matter consists of neutrons, protons, and electrons,
whereas muons appear at about $0.11\,\rm{fm}^{-3}$.
When the deconfined quarks are present in the mixed phase,
the quark fractions $Y_{u}$, $Y_{d}$, and $Y_{s}$ increase rapidly together with a decrease of
neutron fraction $Y_{n}$. Meanwhile, $Y_{e}$ and $Y_{\mu}$ decrease significantly
because the quark matter is negatively charged and takes over the role of electrons
in satisfying the constraint of global charge neutrality.
On the other hand, the hadronic matter in the mixed phase is positively charged,
so the proton fraction $Y_{p}$ increases slightly at the beginning of the mixed phase.
As the baryon density $n_{b}$ increases, $Y_{p}$ and $Y_{n}$ decrease to very low values
due to the increase of the quark volume fraction $\chi$ in the phase transition.
At sufficiently high densities, the matter turns into a pure quark phase,
where $Y_{u}\approx Y_{d}\approx Y_{s}\approx 1/3$ is nearly achieved under the
conditions of charge neutrality and chemical equilibrium.
The direct Urca (dUrca) process, known as the most efficient mechanism for neutron-star
cooling, is closely related to the particle fractions shown in Figure 7.
In the hadronic matter, the dUrca process, i.e., the electron capture by a proton and
the $\beta$-decay of a neutron, is mainly determined by the proton fraction $Y_{p}$,
which must exceed a critical value to allow simultaneous energy and momentum conservation (Fantina et al., 2013).
In the case of simple $npe$ matter, the dUrca process can occur only for $Y_{p}\geq 1/9$.
When muons are present under the equilibrium condition $\mu_{e}=\mu_{\mu}$,
the critical $Y_{p}$ for the dUrca process is in the range of $11.1\%-14.8\%$ (Lattimer et al., 1991).
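The quoted thresholds follow from momentum conservation at the Fermi surfaces, $k_{F,n}\leq k_{F,p}+k_{F,e}$, together with charge neutrality and $k_{F}\propto n^{1/3}$. A minimal sketch (the function name and interface are our own, not from the paper) reproduces both quoted limits:

```python
def durca_critical_yp(muon_fraction=0.0):
    """Critical proton fraction for the direct Urca process.

    Momentum conservation requires k_Fn <= k_Fp + k_Fe at the Fermi
    surfaces, and charge neutrality gives n_p = n_e + n_mu.  With
    k_F ~ n^(1/3), the threshold condition
    n_n^(1/3) = n_p^(1/3) + n_e^(1/3) yields a closed form.
    `muon_fraction` = n_mu / n_p: 0 for pure npe matter, 1/2 in the
    muon-saturated limit where n_e = n_mu.
    """
    ne_over_np = 1.0 - muon_fraction
    nn_over_np = (1.0 + ne_over_np ** (1.0 / 3.0)) ** 3
    return 1.0 / (1.0 + nn_over_np)

# Pure npe matter: Y_p^crit = 1/9, i.e. about 11.1%
print(f"npe matter:    {durca_critical_yp(0.0):.3f}")
# Muon-saturated limit (n_e = n_mu): about 14.8%
print(f"npe-mu matter: {durca_critical_yp(0.5):.3f}")
```

The two limiting values bracket the $11.1\%-14.8\%$ range of Lattimer et al. (1991) quoted above.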
According to observations of thermal radiation from neutron stars,
the dUrca process is unlikely to occur in neutron stars with masses below $1.5M_{\odot}$,
since it would lead to an unacceptably fast cooling in disagreement with the observations (Fantina et al., 2013).
In our present calculations, a $1.5M_{\odot}$ neutron star
remains in the pure hadronic phase with a central density of about $0.34\,\rm{fm}^{-3}$,
where the proton fraction is lower than the critical value for the dUrca process.
We note that the BigApple model predicts a rather large threshold density of
$0.64\,\rm{fm}^{-3}$ for the dUrca process, which is related to its small
symmetry energy slope $L$ (see the discussion in Ji et al. 2019).
This result is compatible with the cooling observations of neutron stars.
The properties of static neutron stars are obtained by solving the well-known
Tolman–Oppenheimer–Volkoff (TOV) equation with the EOS over a wide range of densities.
In the present calculations, we use the Baym–Pethick–Sutherland EOS (Baym et al., 1971)
for the outer crust below the neutron drip density, while the inner crust EOS is based
on a Thomas–Fermi calculation using the TM1e parameterization of the RMF model (Shen et al., 2020).
The crust EOS is matched to the EOS of uniform neutron-star matter at the crossing
of the two segments. At very high densities, the quark degrees of freedom are
taken into account using the EM method for the hadron-quark pasta phases.
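As a concrete illustration of this procedure, the following minimal sketch integrates the TOV equations outward from the center with a forward-Euler scheme. The polytropic toy EOS, step size, central pressure, and function names are our own illustrative assumptions, not the tabulated BigApple + vMIT EOS with crust matching used in the paper:

```python
import math

M_SUN_KM = 1.4766  # solar mass in geometrized units (G = c = 1), km

def tov_solve(eps_of_p, p_c, dr=1e-3):
    """Euler integration of the TOV equations from the center outward.

    Units: G = c = 1 with lengths in km, so energy density and pressure
    are in km^-2 and the enclosed mass m(r) is in km.  `eps_of_p` is the
    EOS mapping pressure to energy density; `p_c` is the central pressure.
    Returns (radius in km, gravitational mass in solar masses).
    """
    r, m, p = dr, 0.0, p_c
    while p > 1e-10 * p_c:
        eps = eps_of_p(p)
        dm_dr = 4.0 * math.pi * r**2 * eps
        dp_dr = -(eps + p) * (m + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * m))
        m += dm_dr * dr
        p += dp_dr * dr
        r += dr
    return r, m / M_SUN_KM

# Toy Gamma = 2 polytrope P = K * eps^2 with K = 100 km^2; central
# pressure 1e-4 km^-2 corresponds to a central energy density of
# roughly 1.3e15 g/cm^3.
K = 100.0
radius, mass = tov_solve(lambda p: math.sqrt(p / K), p_c=1e-4)
print(f"R = {radius:.1f} km, M = {mass:.2f} Msun")
```

Even this crude toy setup yields a configuration at the neutron-star scale (a radius of roughly ten kilometers and a mass of order one solar mass); the quantitative results in the paper require the realistic tabulated EOS and a finer integrator.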
In Figure 8, we display the predicted mass-radius relations of
neutron stars, together with several constraints from astrophysical observations.
Compared to the results using pure hadronic EOS (solid lines),
the inclusion of quarks leads to an obvious reduction of the maximum
neutron-star mass $M_{\mathrm{max}}$. As shown in the left panel,
the results of massive stars are strongly dependent on the vector coupling $G_{V}$.
It is found that both the onset of hadron-quark pasta phases (filled circles) and
the values of $M_{\mathrm{max}}$ (filled squares) increase with increasing $G_{V}$.
For a canonical $1.4M_{\odot}$ neutron star, only the pure hadronic phase is present.
The prediction of the BigApple model for the radius $R_{1.4}$ is compatible
with the constraints inferred from the GW170817 event and NICER for PSR J0030+0451.
In order to examine the impact of the bag constant $B$, we compare in the right panel
the results between $B^{1/4}=180$ MeV (blue dashed line) and $B^{1/4}=170$ MeV (green dashed line).
The reduction of $M_{\mathrm{max}}$ with $B^{1/4}=170$ MeV is more pronounced than
with $B^{1/4}=180$ MeV. This is because the formation of hadron-quark pasta phases
appears at lower densities for $B^{1/4}=170$ MeV (see Table 1),
which leads to a stronger softening of the EOS.
To analyze the effects of the hadron-quark phase transition in more detail, we show
in Table 2 the resulting properties of neutron stars with the maximum mass.
It is found that both $G_{V}$ and $B$ can significantly affect the structure and maximum mass
of neutron stars, whereas the influence of the charge screening (i.e., the difference between the
EM and CP methods) is relatively small.
In most cases, a structured hadron-quark mixed phase can be formed in massive neutron stars
inside the radius $R_{\mathrm{MP}}$. The pure quark matter appears in the interior of
neutron stars only for the cases with small $B$ where the central
density $n_{b}^{c}$ is larger than the onset of the pure quark phase.
We find that for all cases listed in Table 2, the central density of
a canonical $1.4M_{\odot}$ neutron star is not high enough to form hadron-quark pasta
phases, so it remains in a pure hadronic phase.
These results are qualitatively consistent with the arguments of Annala et al. (2020),
which suggested that the matter in the core of a $1.4M_{\odot}$ neutron star
is compatible with nuclear model calculations, whereas the matter in the interior
of a $2M_{\odot}$ neutron star exhibits characteristics of the deconfined quark phase.
In Figure 9, we plot the dimensionless tidal deformability $\Lambda$ as a function
of the neutron-star mass $M$. The results of the pure hadronic EOS
are compared to those including the hadron-quark mixed phase described in the EM method
with $B^{1/4}=180$ MeV and $G_{V}=0.2$ fm${}^{2}$.
It is shown that the inclusion of quarks leads to small differences in $\Lambda$ for
massive neutron stars, which is related to the reduction of the radius as shown in Figure 8.
Considering the constraints on $\Lambda_{1.4}$ inferred from the analysis of GW170817
and GW190814, the BigApple model provides acceptable results for the tidal deformability
of neutron stars. The effects of the hadron-quark phase transition can be observed only
for very massive stars as shown in the inset of Figure 9.
5 Summary
Motivated by the recent advances in astrophysical observations, we studied the
properties of the hadron-quark pasta phases, which may appear in the interior
of massive neutron stars. The structured mixed phase is described
within the Wigner–Seitz approximation, where the whole space is divided into
equivalent cells with a geometric symmetry. The coexisting hadronic and quark
phases inside the cell are assumed to be separated by a sharp interface
with constant densities in each phase.
We extended the EM method for describing the hadron-quark pasta phases
by allowing different electron densities in the two coexisting phases,
which is helpful for understanding the transition from the GC
to the MC. In the EM method, the surface and Coulomb energies
are included in the minimization procedure, which results in different
equilibrium equations from the Gibbs conditions.
Compared to the simple CP method, the treatment of surface and Coulomb
energies in the EM method leads to the rearrangement of charged particles,
known as the charge screening effect, which can significantly affect
the structure of the hadron-quark mixed phase.
The resulting EOS obtained from the EM method was found to lie between those
of the GC and MC.
In the present study, we have employed the RMF model with the BigApple
parameterization to describe the hadronic matter, while the quark matter
is described by a modified MIT bag model with vector interactions (vMIT).
The BigApple model provides a good description of finite nuclei across
the nuclear chart, while its prediction for the maximum neutron-star mass
is as large as $2.6M_{\odot}$. In addition, the BigApple model predicts
acceptable radius and tidal deformability for a canonical $1.4M_{\odot}$ neutron
star, as compared to the estimations from astrophysical observations.
For the quark matter in the vMIT model, the vector interactions among quarks
could significantly stiffen the EOS at high densities and help to enhance
the maximum mass of neutron stars. We found that as the vector coupling $G_{V}$
increases, the hadron-quark pasta phases appear at higher densities,
and in particular the end of the mixed phase shows a more pronounced $G_{V}$
dependence than its onset.
In addition, a larger $G_{V}$ leads to a smaller pasta size, which is caused
by the relatively large charge density difference $\delta n_{c}$. Meanwhile, the pasta size is also affected
by the charge screening effect in the EM method, where the rearrangement
of charged particles can lower the Coulomb energy and enhance the pasta size.
We investigated the properties of massive neutron stars containing the hadron-quark
pasta phases. It was found that the inclusion of quarks could
considerably soften the EOS and reduce the maximum mass of neutron stars.
The results of massive stars clearly depend on the vector coupling $G_{V}$
and the bag constant $B$. A larger $G_{V}$ corresponds to a weaker reduction of
$M_{\mathrm{max}}$ relative to that of pure hadronic stars.
In most cases considered in this study, a structured hadron-quark
mixed phase can be formed in the interior of massive neutron stars,
whereas a canonical $1.4M_{\odot}$ neutron star remains in a pure hadronic phase.
Our results showed that the presence of quarks inside neutron stars could be
compatible with current constraints inferred from NICER data and
gravitational-wave observations.
This work was supported in part by the National Natural Science Foundation of
China (grants Nos. 12175109 and 11775119).
References
Abbott et al. (2017)
Abbott, B. P., Abbott, R., Abbott, T. D., et al.
2017, Phys. Rev. Lett., 119, 161101
Abbott et al. (2018)
Abbott, B. P., Abbott, R., Abbott, T. D., et al.
2018, Phys. Rev. Lett., 121, 161101
Abbott et al. (2020a)
Abbott, B. P., Abbott, R., Abbott, T. D., et al.
2020, ApJ, 892, L3
Abbott et al. (2020b)
Abbott, B. P., Abbott, R., Abbott, T. D., et al.
2020, ApJ, 896, L44
Alam et al. (2016)
Alam, N., Agrawal, B. K., Fortin, M., Pais, H., Providência, C., Raduta, Ad. R., & Sulaksono, A.
2016, Phys. Rev. C, 94, 052801(R)
Annala et al. (2020)
Annala, E., Gorda, T., Kurkela, A., Nättilä, J., & Vuorinen, A.
2020, NatPh, 16, 907
Antoniadis et al. (2013)
Antoniadis, J., Freire, P. C. C., Wex, N., et al.
2013, Sci, 340, 448
Arzoumanian et al. (2018)
Arzoumanian, Z., et al. 2018, ApJS, 235, 37
Bao et al. (2014)
Bao, S. S., Hu, J. N., Zhang, Z. W., & Shen, H.
2014, Phys. Rev. C, 90, 045802
Baym et al. (1971)
Baym, G., Bethe, H. A., & Pethick, C. J.
1971, Nucl. Phys. A, 175, 225
Baym et al. (2018)
Baym, G., Hatsuda, T., Kojo, T., Powell, P. D., Song, Y., & Takatsuka, T.
2018, RPPh, 81, 056902
Bhattacharyya et al. (2010)
Bhattacharyya, A., Mishustin, I. N., & Greiner, W.
2010, JPhG, 37, 025201
Cromartie et al. (2020)
Cromartie, H. T., Fonseca, E., Ransom, S. M., et al.
2020, NaAs, 4, 72
Demircik et al. (2021)
Demircik, T., Ecker, C., & Järvinen, M.
2021, ApJ, 907, L37
Demorest et al. (2010)
Demorest, P. B., Pennucci, T., Ransom, S. M., Roberts, M. S. E., & Hessels, J. W. T.
2010, Natur, 467, 1081
Dexheimer et al. (2021)
Dexheimer, V., Gomes, R., Klähn, T., Han, S., & Salinas, M.
2021, Phys. Rev. C, 103, 025808
Endo et al. (2006)
Endo, T., Maruyama, T., Chiba, S., & Tatsumi, T.
2006, PThPh, 115, 337
Essick & Landry (2020)
Essick, R., & Landry, P.
2020, ApJ, 904, 80
Fantina et al. (2013)
Fantina, A. F., Chamel, N., Pearson, J. M., & Goriely, S.
2013, A&A, 559, A128
Fattoyev et al. (2020)
Fattoyev, F. J., Horowitz, C. J., Piekarewicz, J., & Reed, B.
2020, Phys. Rev. C, 102, 065805
Fattoyev et al. (2010)
Fattoyev, F. J., Horowitz, C. J., Piekarewicz, J., & Shen, G.
2010, Phys. Rev. C, 82, 055802
Fonseca et al. (2016)
Fonseca, E., et al. 2016, ApJ, 832, 167
Fonseca et al. (2021)
Fonseca, E., et al. 2021, ApJ, 915, L12
Glendenning (1992)
Glendenning, N. K.
1992, Phys. Rev. D, 46, 1274
Gomes et al. (2019)
Gomes, R. O., Char, P., & Schramm, S.
2019, ApJ, 877, 139
Han et al. (2019)
Han, S., Mamun, M. A. A., Lalit, S., Constantinou, C., & Prakash, M.
2019, Phys. Rev. D, 100, 103022
Heiselberg et al. (1993)
Heiselberg, H., Pethick, C. J., & Staubo, E. F.
1993, Phys. Rev. Lett., 70, 1355
Huang et al. (2020)
Huang, K. X., Hu, J. N., Zhang, Y., & Shen, H.
2020, ApJ, 904, 39
Ji et al. (2019)
Ji, F., Hu, J. N., Bao, S. S., & Shen, H.
2019, Phys. Rev. C, 100, 045801
Ju et al. (2021)
Ju, M., Wu, X. H., Ji, F., Hu, J. N., & Shen, H.
2021, Phys. Rev. C, 103, 025809
Klähn & Fischer (2015)
Klähn, T. & Fischer, T.
2015, ApJ, 810, 134
Lalazissis et al. (1997)
Lalazissis, G. A., König, J., & Ring, P.
1997, Phys. Rev. C, 55, 540
Lattimer & Swesty (1991)
Lattimer, J. M., & Swesty, F. D.
1991, Nucl. Phys. A, 535, 331
Lattimer et al. (1991)
Lattimer, J. M., Pethick, C. J., Prakash, M., & Haensel, P.
1991, Phys. Rev. Lett., 66, 2701
Lattimer & Prakash (2016)
Lattimer, J. M., & Prakash, M.
2016, PhR, 621, 127
Li et al. (2020)
Li, J. J., Sedrakian, A., & Weber, F.
2020, PhLB, 810, 135812
Lonardoni et al. (2015)
Lonardoni, D., Lovato, A., Gandolfi, S., & Pederiva, F.
2015, Phys. Rev. Lett., 114, 092301
Lopes et al. (2021)
Lopes, L. L., Biesdorf, C., & Menezes, D. P.
2021, PhyS, 96, 065303
Maruyama et al. (2007)
Maruyama, T., Chiba, S., Schulze, H.-J., & Tatsumi, T.
2007, Phys. Rev. D, 76, 123015
Maslov et al. (2019)
Maslov, K., Yasutake, N., Blaschke, D., Ayriyan, A., Grigorian, H.,
Maruyama, T., Tatsumi, T., & Voskresensky, D. N.
2019, Phys. Rev. C, 100, 025802
Miller et al. (2019)
Miller, M. C., Lamb, F. K., Dittmann, A. J., et al.
2019, ApJ, 887, L24
Miller et al. (2021)
Miller, M. C., Lamb, F. K., Dittmann, A. J., et al.
2021, ApJ, 918, L28
Most et al. (2020)
Most, E. R., Papenfort, L. J., Weih, L. R., & Rezzolla, L.
2020, MNRAS Lett, 499, L82
Oertel et al. (2017)
Oertel, M., Hempel, M., Klähn, T., & Typel, S.
2017, RvMP, 89, 015007
Riley et al. (2019)
Riley, T. E., Watts, A. L., Bogdanov, S., et al.
2019, ApJ, 887, L21
Riley et al. (2021)
Riley, T. E., Watts, A. L., Ray, P. S., et al.
2021, ApJ, 918, L27
Schertler et al. (2000)
Schertler, K., Greiner, C., Schaffner-Bielich, J., & Thoma, M. H.
2000, Nucl. Phys. A, 677, 463
Shen et al. (2020)
Shen, H., Fan, J., Hu, J. N., & Sumiyoshi, K.
2020, ApJ, 891, 148
Spinella et al. (2016)
Spinella, W. M., Weber, F., Contrera, G. A., & Orsaria, M. G.
2016, EPJA, 52, 61
Sugahara & Toki (1994)
Sugahara, Y., & Toki, H.
1994, Nucl. Phys. A, 579, 557
Tan et al. (2020)
Tan, H., Noronha-Hostler, J., & Yunes, N.
2020, Phys. Rev. Lett., 125, 261104
Tatsumi et al. (2003)
Tatsumi, T., Yasuhira, M., & Voskresensky, D.
2003, Nucl. Phys. A, 718, 359
Tews et al. (2021)
Tews, I., Pang, P. T. H., Dietrich, T., Coughlin, M. W., Antier, S.,
Bulla, M., Heinzel, J., & Issa, L.
2021, ApJ, 908, L1
Tsokaros et al. (2020)
Tsokaros, A., Ruiz, M., & Shapiro, S. L.
2020, ApJ, 905, 48
Weber et al. (2019)
Weber, F., Farrell, D., Spinella, W. M., Malfatti, G., Orsaria, M. G.,
Contrera, G. A., & Maloney, I.
2019, Univ, 5, 169
Wu & Shen (2017)
Wu, X. H. & Shen, H.
2017, Phys. Rev. C, 96, 025802
Wu & Shen (2019)
Wu, X. H. & Shen, H.
2019, Phys. Rev. C, 99, 065802
Xu et al. (2010)
Xu, J., Chen, L. W., Ko, C. M., & Li, B. A.
2010, Phys. Rev. C, 81, 055803
Yang & Shen (2008)
Yang, F. & Shen, H.
2008, Phys. Rev. C, 77, 025801
Yasutake et al. (2014)
Yasutake, N., Łastowiecki, R., Benić, S., Blaschke, D.,
Maruyama, T., & Tatsumi, T.
2014, Phys. Rev. C, 89, 065803
Zhang & Li (2020)
Zhang, N. B., & Li, B. A.
2020, ApJ, 902, 38 |
How Good Is NLP? A Sober Look at NLP Tasks through the
Lens of Social Impact
Zhijing Jin
Max Planck Institute for Intelligent Systems
Tübingen, Germany
[email protected]
Geeticka Chauhan
MIT
[email protected]
Brian Tse
Oxford
[email protected]
Mrinmaya Sachan
ETH Zürich
[email protected]
Rada Mihalcea
University of Michigan
[email protected]
Abstract
Recent years have seen many breakthroughs in natural language processing (NLP), transitioning it from a mostly theoretical field to one with many real-world applications. Noting the rising number of applications of other machine learning and AI techniques with pervasive societal impact, we anticipate the rising importance of developing NLP technologies for social good. Inspired by theories in moral philosophy and global priorities research, we aim to promote a guideline for social good in the context of NLP.
We lay the foundations via the moral philosophy definition of social good, propose a framework to evaluate the direct and indirect real-world impact of NLP tasks, and
adopt the methodology of global priorities research to identify priority causes for NLP research. Finally, we use our theoretical framework to provide some practical guidelines for future NLP research for social good. Our data and code are available at http://github.com/zhijing-jin/nlp4sg_acl2021. In addition, we curate a list of papers and resources on NLP for social good at https://github.com/zhijing-jin/NLP4SocialGood_Papers.
1 Introduction
Advances on multiple NLP fronts have given rise to a plethora of applications that are now integrated into our daily lives. NLP-based intelligent agents like Amazon Echo and Google Home have entered millions of households Voicebot (2020).
NLP tools are now prevalent on phones, in cars, and in many daily services such as Google search and electronic health record analysis Townsend (2013).
In the current COVID-19 context, NLP has already had important positive social impact in the face of a public health crisis. When the pandemic broke out, Allen AI collected the CORD-19 dataset Wang et al. (2020) with the goal of helping public health experts efficiently sift through the myriad of COVID-19 research papers that emerged in a short time period. Subsequently, NLP services such as Amazon Kendra were deployed to help organize the research knowledge around COVID-19 Bhatia et al. (2020). The NLP research community worked on several problems like the question-answering and summarization system CAiRE-COVID Su et al. (2020), the expressive interviewing conversational system Welch et al. (2020) and annotation schemas to help fight COVID-19 misinformation online Alam et al. (2020); Hossain et al. (2020).
As NLP transitions from theory into practice and into daily lives, unintended negative consequences that early theoretical researchers did not anticipate have also emerged, from the toxic language of Microsoft’s Twitter bot Tay Shah and Chokkattu (2016) to the privacy leaks of Amazon Alexa Chung et al. (2017).
A current highly-debated topic in NLP ethics is GPT-3 Brown et al. (2020), whose risks and harms include encoding gender and racist biases Bender et al. (2021).
It is now evident that we must consider the negative and positive impacts of NLP as two sides of the same coin, a consequence of how NLP and more generally AI pervade our daily lives. The consideration of the negative impacts of AI has engendered the recent and popular interdisciplinary field of AI ethics, which puts forth issues such as algorithmic bias, fairness, transparency and equity with an aim to provide recommendations for ethical development of algorithms.
Highly influential works in AI ethics include Buolamwini and Gebru (2018); Mitchell et al. (2019); Raji et al. (2020); Chen et al. (2019); Blodgett et al. (2020). AI for social good (AI4SG) Tomašev et al. (2020) is a related sub-field that benefits from the results of AI ethics and, while keeping ethical principles as a prerequisite, has the goal of creating positive impact and addressing society’s biggest challenges. Work in this space includes Wang et al. (2020); Bhatia et al. (2020); Killian et al. (2019); Lampos et al. (2020).
Active conversations about ethics and social good have expanded broadly, in the NLP community as well as the broader AI and ML communities. Starting with early discussions in works such as Hovy and Spruit (2016); Leidner and Plachouras (2017), the communities introduced the first workshop on ethics in NLP Hovy et al. (2017) and the AI for social good workshop Luck et al. (2018), which inspired various follow-up workshops at venues like ICML and ICLR. The upcoming NLP for Positive Impact Workshop Field et al. (2020) finds inspiration from these early papers and workshops. In 2020, NeurIPS required all research papers to submit broader impact statements Castelvecchi (2020); Gibney (2020). NLP conferences followed suit and introduced optional ethical and impact statements, starting with ACL in 2021 Association for Computational Linguistics (2021).
With the growing impact of our models in daily lives, we need comprehensive guidelines for following ethical standards to result in positive impact and prevent unnecessary societal harm. Tomašev et al. (2020) provide general guidelines for successful AI4SG collaborations through the lens of United Nations (UN) sustainable development goals (SDGs) United Nations (2015) and Hovy and Spruit (2016); Leidner and Plachouras (2017) begin the ethics discussions in NLP. However, there is room for iteration in terms of presenting a comprehensive picture of NLP for social good, with an evaluation framework and guidelines.
At the moment, researchers eager to make a beneficial contribution need to base their research agenda on intuition and word-of-mouth recommendations, rather than a scientific evaluation framework.
To this end, our paper presents a modest effort to the understanding of social good, and sketches thinking guidelines and heuristics for NLP for social good. Our main goal is to answer the question:
Given a specific researcher or team with skills $s$, and the set of NLP technologies $\bm{T}$ they can work on, what is the best technology $t\in\bm{T}$ for them to optimize the social good impact $I$?
In order to answer this overall question, we take a multidisciplinary approach in our paper:
•
section 2 relies on theories in moral philosophy to approach what is social good versus bad (i.e., the sign and rough magnitude of impact $I$ for a direct act $a$);
•
section 3 relies on causal structure models as a framework to estimate $I$ for $t\in\bm{T}$, considering that $t$ can be an indirect cause of impact;
•
section 4 relies on concepts from global priorities research and economics to introduce a high-level framework to choose a technology $t$ that optimizes the social impact $I$;
•
section 5 applies the above tools to analyze several example NLP directions, and provides a practical guide on how to reflect on the social impact of NLP.
We acknowledge the iterative nature of a newly emerging field in NLP for social good, requiring continuing discussions on definitions and the development of ethical frameworks and guidelines.
Echoing the history of scientific development Kuhn (2012), the goal of our work is not to provide a perfect, quantitative, and deterministic answer about how to maximize social good with our NLP applications. The scope of our work is to take one step closer to a comprehensive understanding, through high-level philosophies, thinking frameworks, together with heuristics and examples.
2 What is social good?
Defining social good can be controversial. For example, if we define saving energy as social good, then what about people who get sick because of not turning on the air-conditioner on a cold day?
Therefore, social good is context-dependent, relative to people, times, and states of nature Broome (2017).
This section provides a theoretical framework for the social impact $I$ of a direct act $a$.
2.1 Moral philosophy theories
We can observe that for some acts, it is relatively certain to judge whether the impact is positive or negative. For example, solving global hunger is in general a positive act. Such judgement is called intuitionism Sidgwick (1874), a school of moral philosophy.
There are, however, many areas of social impact on which intuitions alone cannot reach consensus. To find analytical solutions to these debatable topics, several moral philosophies have been proposed.
We introduce below three categories of philosophical perspectives to judge moral laws Kagan (2018), and provide the percentage of professional philosophers who endorse each theory Bourget and Chalmers (2014):
1.
Deontology: emphasizes duties or rules, endorsed by 25.9% philosophers;
2.
Consequentialism: emphasizes consequences of acts, endorsed by 23.6% philosophers;
3.
Virtue ethics: emphasizes virtues and moral character, endorsed by 18.2% philosophers.
Note that the above three schools, deontology, consequentialism, and virtue ethics, follow the standard textbook introduction to normative ethics in the analytic philosophy tradition. It is also possible for future research to consider different perspectives when defining social good.
A practical guide for using these philosophies.
The three perspectives give us dimensions along which to assess the impact $I$ of an act $a$, so that the final decision is (hopefully) more reliable than a single judgement, which is subject to biases. Such decomposition practices are often used in highly complicated analyses (e.g., business decisions), such as radar charts to rate a decision or candidate, or SMART goals.
A practical guide for using moral philosophies to judge an act $a$ is to think along each of the three perspectives, collect estimations of how good the act $a$ is from the three dimensions, and merge them. For example, using NLP for healthcare to save lives can be good from all three perspectives, and thus it is an overall social good act.
When merging judgements from the above philosophical views, there can be tradeoffs, such as sacrificing one life for five lives in the
Trolley problem Thomson (1976), which scores high on consequentialism but low on deontology and virtue ethics. One solution, from moral uncertainty theory MacAskill et al. (2020), is to favor acts with balanced judgements on all criteria, and to reject acts that are completely unacceptable on any criterion.
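This decision rule can be sketched as a toy scoring function. The numeric scores, the veto threshold, and the balance penalty below are illustrative assumptions of ours, not part of the cited theory:

```python
def evaluate_act(scores, veto_threshold=0.2):
    """Toy aggregation of moral judgements across perspectives.

    `scores` maps each perspective (e.g. deontology, consequentialism,
    virtue ethics) to a judgement in [0, 1].  Following the
    moral-uncertainty heuristic: reject any act that is completely
    unacceptable on some criterion; otherwise prefer balanced acts.
    """
    if min(scores.values()) < veto_threshold:
        return float("-inf")  # unacceptable on at least one criterion
    # Reward balance: average score penalized by the spread across criteria.
    avg = sum(scores.values()) / len(scores)
    spread = max(scores.values()) - min(scores.values())
    return avg - 0.5 * spread

# The trolley act scores high on consequentialism but is vetoed by its
# low deontology score; NLP for healthcare is balanced and accepted.
trolley = {"deontology": 0.1, "consequentialism": 0.9, "virtue": 0.2}
healthcare = {"deontology": 0.8, "consequentialism": 0.9, "virtue": 0.8}
print(evaluate_act(trolley) < evaluate_act(healthcare))  # True
```

The veto plus balance-penalty structure is one simple way to encode "favor balanced judgements, reject completely unacceptable acts"; other aggregation rules are equally consistent with the text.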
2.2 Principles for future AI
Many agencies from academia, government, and industries have proposed principles for future AI Jobin et al. (2019), which can be regarded as a practical guide by deontology.
Zeng et al. (2019) surveyed the principles of the governance of AI proposed by 27 agencies. The main areas are as follows (with keywords):
•
Humanity: beneficial, well-being, human right, dignity, freedom, education, human-friendly.
•
Privacy: personal information, data protection, explicit confirmation, control of the data, notice and consent.
•
Security: cybersecurity, hack, confidential.
•
Fairness: justice, bias, discrimination.
•
Safety: validation, test, controllability.
•
Accountability: responsibility.
•
Transparency: explainable, predictable, intelligible.
•
Collaboration: partnership, dialog.
•
Share: share, equal.
•
AGI: superintelligence.
3 Evaluating the indirect impact of NLP
Given the general moral guide to judge an act with direct impacts, we now step towards the second stage: understanding the downstream impact of scientific research, which typically has indirect effects. For example, it is not straightforward to estimate the impact of a linguistic theory.
To sketch a solution, this section will first classify NLP tasks by the dimension of theory$\rightarrow$application, and then provide an evaluation framework for $I$ of a technology $t$ that may have indirect real-life impacts.
3.1 Classifying tasks from upstream to downstream
To evaluate each NLP research topic, we propose four stages in the theory$\rightarrow$application development, as shown in Figure 1, and categorize the 570 long papers from ACL 2020 (https://www.aclweb.org/anthology/events/acl-2020/#2020-acl-main) according to the four stages in Figure 2. Details of the annotation are in Appendix A.
The four stages are as follows.
Stage 1. Fundamental theories.
Fundamental theories are the foundations of knowledge, such as
linguistic theories by Noam Chomsky.
In ACL 2020, the most prevalent topic for papers in Stage 1 is linguistics theory in Figure 2. Importantly, Stage 1’s main goal is the advancement of knowledge, and to widen the potentials for later-stage research.
Stage 2. Building block tools.
Moving one step from theory towards applications is the research on building block tools, which serve as important building blocks and toolboxes for downstream technologies. The most frequently researched Stage-2 topics at ACL 2020 are information extraction, model design, and interpretability (in Figure 2).
Stage 3. Applicable tools.
Applicable tools are pre-commercialized NLP systems which can serve as the backbones of real-world applications. This category includes NLP tasks such as dialog response generation, question answering, and machine translation.
The most common research topics in this category are dialog, machine translation, and question answering (in Figure 2).
Stage 4. Deployed applications/products.
Deployed applications often build upon tools in Stage 3, and wrap them with user interfaces, customer services, and business models.
Typical examples of Stage-4 technologies include Amazon Echo, Google Translate, and so on.
The top three topics of ACL 2020 papers in this category are ways to address misinformation (e.g., a fact checker for news bias), dialog, and NLP for healthcare.
3.2 Estimating impact
Direct impacts of Stage-4 technologies.
A direct impact of NLP development is allowing users more free time. This is evident in automatic machine translation, which saves the effort and time of human translators, or in NLP for healthcare, which allows doctors to more quickly sift through patient history. Automatic fake news detection likewise frees up time for human fact-checkers, helping them keep pace with the growing number of digital news articles being published.
The impact of more user free time is varied. In the case of healthcare, NLP can free up time for more personalized patient care, or allow free time for activities of choice, such as spending time on passion projects or more time with family. We recognize these varied impacts of NLP deployment, and recommend user productivity as one way to measure it.
Note that there can be positive as well as negative impact associated with rising productivity, and the polarity can be decided according to Section 2.1.
Typical positive impacts of NLP technology include better healthcare and well-being, and in some cases it indirectly helps with avoiding existential risks, sustainability, and so on. Typical negative impacts include more prevalent surveillance, propaganda, breach of privacy, and so on.
For example, intelligent bots can improve efficiency at work (to benefit economics), and bring generally better well-being for households, but they might leak user privacy Chung et al. (2017).
Thus, estimating the overall end impact of a technology $t$ in Stage 4 requires accumulating over a set of aspects $\bm{AS}$:
$$\displaystyle I(t)=\sum_{as\in\bm{AS}}\mathrm{scale}_{as}(t)\cdot\mathrm{impact}_{as}(t)~{},$$
(1)
where $\mathrm{scale}_{as}(t)$ is the usage scale of applications of technology $t$ used in the aspect $as$, and $\mathrm{impact}_{as}(t)$ is the impact of $t$ in this aspect.
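As a minimal sketch of Eq. (1), the accumulation over aspects is a scale-weighted sum; the aspect names and numbers below are hypothetical illustrations, not values from the paper:

```python
def direct_impact(scale, impact):
    """Eq. (1): I(t) = sum over aspects `as` of scale_as(t) * impact_as(t)."""
    assert scale.keys() == impact.keys()
    return sum(scale[a] * impact[a] for a in scale)

# Hypothetical intelligent-assistant technology with two impacted aspects:
scale = {"well-being": 0.6, "privacy": 0.3}    # usage scale per aspect
impact = {"well-being": 2.0, "privacy": -1.0}  # signed impact per aspect
total = direct_impact(scale, impact)           # 0.6*2.0 + 0.3*(-1.0), about 0.9
```

The signed impacts encode the polarity discussed above: a positive number for aspects such as well-being, a negative one for harms such as privacy leaks.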
Indirect impacts of early stage technologies.
Although the direct impact of Stage-4 technologies can be estimated by Eq. (1), it is difficult to calculate the impact of a technology in earlier stages (i.e., Stage 1-3).
We can approach the calculation of the indirect impact $I$ of an early-stage technology $t$ via a structural causal model. As shown in the causal graph $\mathcal{G}$ in Figure 3, each technology $t$ lies on a causal chain from its parent vertex set $\mathrm{PA}(t)$ (i.e., upstream technologies that directly cause the invention of $t$) to its children vertex set $\mathrm{CH}(t)$ (i.e., downstream technologies directly resulting from $t$). Formally, we denote
a directed (causal) path in $\mathcal{G}$ as a sequence of distinct vertices $(t_{1},t_{2},\dots,t_{n})$ such that $t_{i+1}\in\mathrm{CH}(t_{i})$ for all $i=1,\dots,n-1$. We call $t_{n}$ a descendant of $t_{1}$. After enumerating all paths, we denote the set of all descendants of $t$ as $\mathrm{DE}(t)$. Specifically, we denote all descendant nodes in Stage 4 as $\mathrm{Stage}$-4 $\mathrm{DE}(t)$.
Hence, the impact of any technology $t$ is the sum of impact of all its descendants in Stage 4:
$$\displaystyle I(t)$$
$$\displaystyle=\sum_{x\in\mathrm{Stage}\text{-4 }\mathrm{DE}(t)}p(x)\cdot c_{x}(t)\cdot I(x)~{},$$
(2)
where $p(x)$ is the probability that the descendant technology $x$ can be successfully developed, $c_{x}(t)$ is the contribution of $t$ to $x$, and $I(x)$ can be calculated by Eq. (1). This formula can also be interpreted in the light of do-calculus Pearl (1995) as $P(X|\mathrm{do}(t))-P(X)$, for $X\in\mathrm{Stage}\text{-4 }\mathrm{DE}(t)$, i.e., the effect of the intervention $\mathrm{do}(t)$ on Stage-4 descendants.
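The aggregation in Eq. (2) can be sketched by enumerating Stage-4 descendants in a small causal DAG; the graph, probabilities, and contributions below are hypothetical illustrations:

```python
def descendants(graph, t):
    """DE(t): all nodes reachable from t along directed edges of the DAG."""
    seen, stack = set(), list(graph.get(t, []))
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(graph.get(x, []))
    return seen

def indirect_impact(graph, stage4, t, p, c, I):
    """Eq. (2): I(t) = sum over Stage-4 descendants x of p(x) * c_x(t) * I(x)."""
    return sum(p[x] * c[(t, x)] * I[x]
               for x in descendants(graph, t) if x in stage4)

# Toy causal chain: theory -> parser -> assistant (a Stage-4 product).
graph = {"theory": ["parser"], "parser": ["assistant"]}
stage4 = {"assistant"}
p = {"assistant": 0.5}              # success probability of the descendant
c = {("theory", "assistant"): 0.2}  # contribution of t to the descendant
I = {"assistant": 10.0}             # direct impact from Eq. (1)
est = indirect_impact(graph, stage4, "theory", p, c, I)  # 0.5*0.2*10.0
```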
Note that Eq. (1) and (2) are meta frameworks, and we leave it to future work to instantiate them for assessing the social impact of specific research.
3.3 Takeaways for NLP tasks
With the growing interest of AI and NLP publication venues (e.g., NeurIPS, ACL) in ethical and broader impact statements, it will be useful and important for researchers to have practical guidelines on evaluating the impact of their NLP tasks.
We first introduce some thinking steps to estimate the impact of research on an NLP task $t$:
(S1)
Classify the NLP task $t$ into one of the four stages (Section 3.1).
(S2)
If $t$ is in Stage 4, think of the set of aspects $\bm{AS}$ that $t$ will impact, the scale of applications, and aspect-specific impact magnitude. Finally, estimate impact using Eq. (1).
(S2’)
If $t$ is in Stage 1-3, think of its descendant technologies, their success rate, and the contribution of $t$ to them. Finally, estimate impact using Eq. (1) and (2).
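The steps (S1)-(S2') above can be sketched as a simple dispatcher over the task's stage, with Eq. (1) for Stage 4 and a pre-aggregated form of Eq. (2) otherwise; all inputs below are hypothetical illustrations:

```python
def estimate_impact(stage, aspects=None, stage4_descendants=None):
    """(S2): Stage 4 uses Eq. (1); (S2'): Stages 1-3 use Eq. (2).

    aspects: {aspect: (scale, impact)} for a Stage-4 task.
    stage4_descendants: {x: (p, c, I)} with success rate p(x),
    contribution c_x(t), and direct impact I(x) per descendant x.
    """
    if stage == 4:
        return sum(s * i for s, i in aspects.values())
    return sum(p * c * i for p, c, i in stage4_descendants.values())

# (S2): a hypothetical Stage-4 tool impacting two aspects.
stage4_est = estimate_impact(4, aspects={"health": (0.5, 4.0),
                                         "privacy": (0.25, -2.0)})
# (S2'): a hypothetical Stage-2 tool with one Stage-4 descendant.
early_est = estimate_impact(2,
                            stage4_descendants={"assistant": (0.5, 0.2, 10.0)})
```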
Next, we introduce some high-level heuristics to facilitate fast decisions:
(H1)
For earlier stages (i.e., Stage 1-2), it is challenging to quantify the exact social impact. Their overall impact tends to lean towards positive as they create more knowledge that benefits future technology development.
(H2)
Developers of Stage-4 technologies should be the most careful about ethical concerns. Enumerate the use cases, and estimate the scale of each usage by thinking of the stakeholders, economic impact, and users in the market. Finally, evaluate the final impact before proceeding (e.g., if the final impact is very negative, abandon the project or proceed with restrictions).
(H3)
For Stage-3 technologies, if their Stage-4 descendants are tractable to enumerate and their impacts can be estimated, then aggregate the descendants' impacts by Eq. (2). Otherwise, treat them like (H1).
4 Deciding research priority
There are many directions for expansion of our efforts for social good; however, due to limited resources and availability of support for each researcher, we provide a research priority list. In this section, we are effectively trying to answer the overall question proposed in Section 1. Specifically, we adopt the practice of the research field of global priorities (GP) MacAskill (2015); Greaves and McAskill (2017). We first introduce the high-level decision-making framework in Section 4.1, and then formulate these principles in technical terms in Section 4.2.
4.1 Important/Neglected/Tractable (INT) framework
Our thinking framework to address the research priority follows the practice of existing cost-benefit analysis in GP MacAskill (2015); Greaves and McAskill (2017), which aligns with the norms in established fields such as development economics, welfare economics, and public policy.
We draw an analogy between the existing GP research and NLP for social good. Basically, GP addresses the following problem: given, for example, 500 billion US dollars (the annual worldwide expenditure on social good), what priority areas should we spend on? Inspired by this practical setting, we form an analogy to NLP research efforts, namely to answer the question proposed in Section 1 about how to allocate resources and efforts to NLP research for social good.
The high-level intuitions are drawn from the Important/Neglected/Tractable (INT) framework MacAskill (2015), a commonly adopted framework in global priorities research on social good. Assume each agent has something to contribute (e.g., money, effort, etc.).
It is generally effective to contribute to important, neglected, and tractable areas.
4.2 Calculation of priority
Although the INT framework is commonly used in the practice of many philanthropy organizations MacAskill (2015), it will be more helpful to formulate it in mathematical terms and economic concepts. Note that the terms we formulate in this section can be regarded as elements in our proposed thinking framework, but they are not directly calculable.333We adapted these terms from GP. Such terms for estimating priority have been successfully used by real-world social good organizations, e.g., GiveWell, Global Priorities Institute, the Open Philanthropy Project (a foundation with over 10 billion USD investment), ReThink Priorities, 80,000 Hours Organization. In the long run, the NLP community may potentially benefit from aligning with GP’s terminology. Still, we do not recommend applying our framework in high-stake settings yet, since it currently serves only as a starting point.
Our end goal is to estimate the cost-effectiveness of contributing a unit of time and effort of a certain researcher or team to research on the technology $t$. So far, we have a meta framework to estimate the impact $I$ brought by the successful development of a technology $t$; we introduce the remaining notation in Table 1.
For a researcher $r$, the action set per unit resource is $\{\Delta t|t\in\bm{T}(r)\}$. Equivalently, they can intervene at a node $t$ by an amount $\Delta t(r)$ in the structural causal graph $\mathcal{G}$ in Figure 3.
The first useful concept is $p(t;r)I(t)$, the expected social impact of research on a technology $t$. Here the success rate $p(t;r)$ is crucial because most research does not necessarily produce the expected outcome. However, if the impact of a technology can be extremely large (for example, prevention of extinction has impact near positive infinity), then even with a very small success rate, we should still devote considerable effort to it.
The second useful concept is the marginal impact Pindyck et al. (1995) of one more unit of researcher $r$'s resources devoted to the technology $t$, calculated as
$$\displaystyle\Delta I(t;r):=I\big(t+\Delta t(r)\big)-I(t)~{},$$
(3)
For example, if the field associated with the technology is almost saturated, or if many other researchers working on this field are highly competent, then, for a certain research group, blindly devoting time to the field may have little marginal impact. On the other hand, if a field is important but neglected, the marginal impact of pushing it forward can be large. This also explains why researchers are passionate about creating new research fields.
The third useful concept is the opportunity cost Palmer and Raftery (1999) to devote researcher $r$’s resources into the technology $t$ instead of a possibly more optimal technology $t^{\star}$. Formally, the opportunity cost is calculated as
$$\displaystyle t^{\star}(r)$$
$$\displaystyle:=\operatorname*{arg\,max}_{x}\Delta I(x(r)),$$
(4)
$$\displaystyle\mathrm{Cost}(t;r)$$
$$\displaystyle:=\Delta I(t^{\star}(r);r)-\Delta I(t;r)~{},$$
(5)
where $t^{\star}$ is the optimal technology that can bring the largest expected improvement of social impact.
The opportunity cost conveys the important message that we should not just do good, but do the best, because the gap between good and best can amount to a large loss.
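A minimal sketch of Eqs. (4)-(5): given per-technology marginal impacts (Eq. 3), the optimal technology and the opportunity cost follow directly; the field names and numbers below are hypothetical:

```python
def opportunity_cost(marginal, t):
    """Cost(t;r) = dI(t*;r) - dI(t;r), with t* = argmax_x dI(x;r) (Eqs. 4-5)."""
    t_star = max(marginal, key=marginal.get)  # Eq. (4): the optimal technology
    return marginal[t_star] - marginal[t]     # Eq. (5): the forgone improvement

# Marginal impacts dI(x;r) per technology, from Eq. (3):
marginal = {"saturated_field": 0.1, "neglected_field": 2.0}

cost_saturated = opportunity_cost(marginal, "saturated_field")  # about 1.9
cost_neglected = opportunity_cost(marginal, "neglected_field")  # 0.0: optimal
```

A zero cost means the researcher is already working on the technology with the largest marginal impact available to them.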
Estimating the variables.
Note that the frameworks we have proposed so far are at the meta level, useful for guiding thought experiments, and future research. Exact calculations are not possible with the current state of research in NLP for social good, although achievable in the future.
In practice, NLP researchers estimate the impact of their research via qualitative explanations (in natural language) or rough quantitative estimates. For example, the introduction section of most NLP papers or funding proposals is a natural language-based estimation of the impact of the research.
Such estimations can be useful to some extent Hubbard and Drummond (2011), although precise indicators of impact can motivate the work more strongly.
We can also borrow some criteria from effective altruism, a global movement that establishes a philosophical framework, as well as statistical calculations, for social good. One of the established metrics for calculating impact is "quality-adjusted life years" (QALYs), popularized by MacAskill (2015). QALYs count the number of life years (calibrated by life quality such as health conditions) that an act helps to increase.
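A minimal sketch of the QALY computation described above, with hypothetical quality weights:

```python
def qalys(years_by_quality):
    """Sum of (life-years gained) * (quality weight in [0, 1]) pairs."""
    return sum(years * quality for years, quality in years_by_quality)

# A hypothetical intervention adding 2 years at full health (weight 1.0)
# and 4 years at half quality (weight 0.5):
gained = qalys([(2, 1.0), (4, 0.5)])  # 2*1.0 + 4*0.5 = 4.0 QALYs
```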
5 Evaluating NLP tasks
In this section, we will first try to categorize the current state of NLP research for social good based on ACL 2020 papers, and then highlight NLP topics that are aligned with the UN’s SDGs. We will conclude with a practical checklist and case studies of common NLP tasks using this checklist.
5.1 Current state of NLP research for social good – ACL 2020 as a case study
We want to compare the ideal priority list with the current distribution of NLP papers for social good. As a case study of the current research frontier, we plot the topic distribution of the 89 ACL 2020 papers that are related to NLP for social good in Figure 4. We also show the portion of papers by the 10 countries with the most social-good papers. Our annotation details are in Appendix A.
As illustrated in Figure 4, most social-good papers work on interpretability, tackling misinformation (e.g., fact-checking for news), and healthcare (e.g., increasing the capacity of doctors). In terms of countries, the US has the most papers on interpretability, and no papers on NLP for education, NLP for legal applications, and some other topics. China has few papers on interpretability, although interpretability is the largest topic. India has no papers on fighting misinformation, although it is the second largest topic. Only 5 countries have publications across more than two social-good topics.
Please refer to Appendix B for more analyses such as social-good papers by academia vs. industries.
However, compared with the UN’s SDGs United Nations (2015), the current NLP research (at least in the scope of ACL conference submissions) lacks attention to other important cause areas such as tackling global hunger, extreme poverty, clean water and sanitation, and clean energy. There are also too few research papers on NLP for education, although education is the 4th most important area in SDGs.
One cause of this difference is value misalignment. Most NLP research is supported by stakeholders and funding agencies, which have a large impact on current research trends and preferences in the NLP community. The social-good perspective, together with a framework for calculating the priority list, has not yet reached many in the NLP community.
Although we do not have data on expenditure in each NLP subarea, we can get a glimpse of the value misalignment in general. Table 2 shows the annual spending in some cause areas. Note that the ranking of the expenditure does not align with our priority list for social good. For example, luxury goods are not as important as global poverty, but luxury goods cost 1.3 trillion USD each year, almost five times the expenditure on global poverty.
5.2 Aligning NLP with social good
In this subsection, we list the top priorities according to UN’s SDGs United Nations (2015). For each goal, in Table 3 we include examples of existing NLP research, and suggest potential NLP tasks that can be developed (labeled as (proposed)).
5.3 Checklist
As a practical guide, we compile the takeaways of this paper into a list of heuristics that might be helpful for future practitioners of NLP for social good.
To inspect the social goodness of an NLP research direction (especially in Stage 3-4), the potential list of questions to answer is as follows:
(Q1)
What kind of people/process will benefit from the technology?
(Q2)
Does it reinforce the traditional structure of beneficiaries?
I.e., which groups of underprivileged people can benefit
(e.g., by gender, demographics, socio-economic status, country, native language, disability type)?
(Q3)
Does it contribute to SDG priority goals such as poverty, hunger, health, education, equality, clean water, and clean energy?
(Q4)
Can it directly improve quality of lives? E.g., how many QALYs might it result in?
(Q5)
Does it count as (a) mitigating problems brought by NLP, or (b) proactively helping out-of-NLP social problems?
5.4 Case studies by the checklist
We conduct some case studies of NLP technologies using the checklist.
Low-resource NLP & machine translation.
This category includes NLP on low-resource languages, such as NLP for Filipino Sagum et al. (2019); Cruz et al. (2020), and machine translation in general. Because this direction expands the user base of NLP technologies from English speakers to speakers of other languages, it benefits speakers of those languages (Q1), and helps to narrow the gap between English-speaking and non-English-speaking end users (Q2), although it is still likely that people who can afford intelligent devices will benefit more than those who cannot. This category can contribute directly to goals such as equality and education, and indirectly to other goals, because translation of documents in general helps the sharing of information and knowledge (Q3). It directly improves quality of lives, for example, for immigrants who may have difficulties with the local language (Q4). Thus, it counts as social good category (b) in (Q5).
Transparency, interpretability, algorithmic fairness and bias.
Research in this direction can impact users who need more reliable decision-making NLP, such as the selection process for loans, jobs, criminal judgements, and medical treatments (Q1). It can shorten the waiting time of candidates and still make fair decisions regardless of spurious correlations (Q2) (Q4). It reduces inequality introduced by AI, but does not increase equality relative to human-made decisions, at least with current technology (Q2). Thus, it is social good category (a) in (Q5).
Green NLP.
Green NLP reduces the energy consumption of large-scale NLP models. Although it works towards the goal of affordable and clean energy (Q3) by neutralizing the negative impact of training NLP models, it does not address out-of-NLP energy problems. Green NLP belongs to social good category (a) in (Q5). It does not have large impacts directly targeted at (Q1), (Q2), and (Q4).
QA & dialog.
These technologies can be used by people who can afford devices embedded with intelligent agents, about 48.46% of the global population BankMyCell (2021) (Q1). They thus benefit people with higher socio-economic status, and English speakers more than others, not to mention causing job replacement for labor-intensive service positions (Q2). They do not contribute to priority goals except for education and healthcare for people who can afford intelligent devices (Q3). Nonetheless, they can improve the quality of lives of their user group (Q4). This category can be regarded as social good of category (b) in (Q5).
Information extraction, NLP-powered search engine & summarization.
This direction speeds up the information compilation process, which can increase productivity in many areas. About 50% of the world's population has access to the Internet and thus can use it Meeker (2019) (Q1) (Q2). This category indirectly helps education, and the information compilation process for other goals (Q3).
It can largely improve the lives of its user group because people gather information very frequently (e.g., do at least one Google search every day) (Q4). Thus, it belongs to social good category (b) in (Q5).
NLP for social media.
Research on social media provides tools for multiple parties. Social scientists can mine interesting trends and cultural phenomena; politicians can survey constituents’ opinions and influence them; companies can investigate user interests and expand their markets (Q1). The caveat of dual use is large, and heavily relies on the stakeholders’ intent: exploitation of the tools can lead to breaches of user privacy and information manipulation, whereas good use of the tools can help evidence-based policy makers (social good category (a) in (Q5)), and help to understand the driving principles of democratic behavior and combat the mechanisms that undermine it (social good category (b) in (Q5)). The diversity of parties who may use these tools leaves (Q2) and (Q4) unanswerable. Also, this research direction has limited (and often indirect) contribution to priorities such as poverty and hunger, unless the related policies are in heated discussion online (Q3).
6 Conclusion
This paper presented a meta framework to evaluate NLP tasks in the light of social good, and proposed a practical guide for practitioners in NLP. We call for more attention towards awareness and categorization of social impact of NLP research, and we envision future NLP research taking on an important social role and contributing to multiple priority areas. We also acknowledge the iterative nature of this emerging field, requiring continuing discussions, improvements to our thinking framework and different ways to implement it in practice. We highlight that the goal of our work is to take one step closer to a comprehensive understanding of social good rather than introducing a deterministic answer about how to maximize social good with NLP applications.
Acknowledgments
We thank Bernhard Schoelkopf, Kevin Jin, and Qipeng Guo for insightful discussions on the main ideas and methodology of the paper. We thank Osmond Wang for checking the economic concepts in the paper. We also thank Chris Brockett for checking many details in the paper. We thank the labmates in the LIT lab at University of Michigan, especially Laura Biester, Ian Stewart, Ashkan Kazemi, and Andrew Lee, for constructive feedback. We also thank labmates at the MIT MEDG group, especially William Boag and Peter Szolovits, for their constructive feedback. We also received much feedback on the first version of the paper; we thank Niklas Stoehr for constructive suggestions that helped make some arguments more comprehensive in the current version. We thank Jingwei Ni for the help with the annotation of the country and affiliation of the ACL 2020 papers.
Ethical and societal implications
Our paper establishes a framework to better understand the definition of social good in the context of NLP research, and lays out a recommended direction on how to achieve it. The contributions of our paper could benefit a focused, organized and accountable development of NLP for social good.
The data used in our work is public, and without privacy concerns.
References
Agrawal et al. (2010)
Rakesh Agrawal, Sreenivas Gollapudi, Krishnaram Kenthapadi, Nitish Srivastava,
and Raja Velu. 2010.
Enriching textbooks through data mining.
In Proceedings of the First ACM Symposium on Computing for
Development, pages 1–9.
Alam et al. (2020)
Firoj Alam, Fahim Dalvi, Shaden Shaar, Nadir Durrani, Hamdy Mubarak, Alex
Nikolov, Giovanni Da San Martino, Ahmed Abdelali, Hassan Sajjad, Kareem
Darwish, et al. 2020.
Fighting the covid-19 infodemic in social media: a holistic
perspective and a call to arms.
arXiv preprint arXiv:2007.07996.
Asai et al. (2018)
Akari Asai, Sara Evensen, Behzad Golshan, Alon Y. Halevy, Vivian Li, Andrei
Lopatenko, Daniela Stepanov, Yoshihiko Suhara, Wang-Chiew Tan, and Yinzhan
Xu. 2018.
HappyDB: A corpus of 100,000 crowdsourced happy moments.
In Proceedings of the Eleventh International Conference on
Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12,
2018. European Language Resources Association (ELRA).
Association for Computational Linguistics (2021)
Association for Computational Linguistics. 2021.
Ethics FAQ at
ACL-IJCNLP 2021.
Atapattu et al. (2015)
Thushari Atapattu, Katrina Falkner, and Nickolas Falkner. 2015.
Educational question answering motivated by question-specific concept
maps.
In International Conference on Artificial Intelligence in
Education, pages 13–22. Springer.
BankMyCell (2021)
BankMyCell. 2021.
How many
smartphones are in the world?
Belinkov et al. (2017)
Yonatan Belinkov, Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim
Dalvi, and James Glass. 2017.
Evaluating layers
of representation in neural machine translation on part-of-speech and
semantic tagging tasks.
In Proceedings of the Eighth International Joint Conference on
Natural Language Processing (Volume 1: Long Papers), pages 1–10, Taipei,
Taiwan. Asian Federation of Natural Language Processing.
Bender et al. (2021)
Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret
Shmitchell. 2021.
On the dangers of stochastic parrots: Can language models be too big?
In Proceedings of the 2021 Conference on Fairness,
Accountability, and Transparency.
Bhatia et al. (2020)
Parminder Bhatia, Kristjan Arumae, Nima Pourdamghani, Suyog Deshpande, Ben
Snively, Mona Mona, Colby Wise, George Price, Shyam Ramaswamy, and Taha
Kass-Hout. 2020.
AWS CORD19-search: A scientific literature search engine for
COVID-19.
arXiv preprint arXiv:2007.09186.
Biester et al. (2020)
Laura Biester, Katie Matton, Janarthanan Rajendran, Emily Mower Provost, and
Rada Mihalcea. 2020.
Quantifying
the effects of COVID-19 on mental health support forums.
In Proceedings of the 1st Workshop on NLP for COVID-19
(Part 2) at EMNLP 2020, Online. Association for Computational Linguistics.
Blodgett et al. (2020)
Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020.
Language
(technology) is power: A critical survey of “bias” in NLP.
In Proceedings of the 58th Annual Meeting of the Association
for Computational Linguistics, pages 5454–5476, Online. Association for
Computational Linguistics.
Bourget and Chalmers (2014)
David Bourget and David J Chalmers. 2014.
What do philosophers believe?
Philosophical studies, 170(3):465–500.
Broome (2017)
John Broome. 2017.
Weighing goods: Equality, uncertainty and time.
John Wiley & Sons.
Brown et al. (2020)
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan,
Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom
Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens
Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott
Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners.
In Advances in Neural Information Processing Systems 33: Annual
Conference on Neural Information Processing Systems 2020, NeurIPS 2020,
December 6-12, 2020, virtual.
Buchner et al. (2014)
Barbara Buchner, Morgan Herve-Mignucci, Chiara Trabacchi, Jane Wilkinson,
Martin Stadelmann, Rodney Boyd, Federico Mazza, Angela Falconer, and Valerio
Micale. 2014.
Global landscape of climate finance 2015.
Climate Policy Initiative, 32.
Buolamwini and Gebru (2018)
Joy Buolamwini and Timnit Gebru. 2018.
Gender shades: Intersectional accuracy disparities in commercial
gender classification.
In Conference on fairness, accountability and transparency,
pages 77–91. PMLR.
Castelvecchi (2020)
Davide Castelvecchi. 2020.
Prestigious AI meeting takes steps to improve ethics of research.
Nature.
Chen et al. (2019)
Irene Y Chen, Peter Szolovits, and Marzyeh Ghassemi. 2019.
Can AI help reduce disparities in general medical and mental health
care?
AMA journal of ethics, 21(2):167–179.
Chong et al. (2010)
Miranda Chong, Lucia Specia, and Ruslan Mitkov. 2010.
Using natural language processing for automatic detection of
plagiarism.
In Proceedings of the 4th International Plagiarism Conference
(IPC-2010).
Chung et al. (2017)
Hyunji Chung, Michaela Iorga, Jeffrey Voas, and Sangjin Lee. 2017.
Alexa, can I trust you?
Computer, 50(9):100–104.
Cruz et al. (2020)
Jan Christian Blaise Cruz, Jose Kristian Resabal, James Lin, Dan John Velasco,
and Charibeth Cheng. 2020.
Investigating the true
performance of transformers in low-resource languages: A case study in
automatic corpus creation.
CoRR, abs/2010.11574.
D’arpizio et al. (2015)
Claudia D’arpizio, Federica Levato, Daniele Zito, and Joelle de Montgolfier.
2015.
Luxury goods worldwide market study.
Bain & Company’s report.
Dernoncourt et al. (2017a)
Franck Dernoncourt, Ji Young Lee, and Peter Szolovits. 2017a.
NeuroNER: an
easy-to-use program for named-entity recognition based on neural networks.
In Proceedings of the 2017 Conference on Empirical Methods in
Natural Language Processing: System Demonstrations, pages 97–102.
Association for Computational Linguistics.
Dernoncourt et al. (2017b)
Franck Dernoncourt, Ji Young Lee, Ozlem Uzuner, and Peter Szolovits.
2017b.
De-identification of patient notes with recurrent neural networks.
Journal of the American Medical Informatics Association,
24(3):596–606.
Evensen et al. (2019)
Sara Evensen, Yoshihiko Suhara, Alon Y. Halevy, Vivian Li, Wang-Chiew Tan,
and Saran Mumick. 2019.
Happiness
entailment: Automating suggestions for well-being.
In 8th International Conference on Affective Computing and
Intelligent Interaction, ACII 2019, Cambridge, United Kingdom, September
3-6, 2019, pages 62–68. IEEE.
Ferrara (2011)
Peter Ferrara. 2011.
America’s ever expanding welfare empire.
Forbes, April, 22.
Field et al. (2020)
Anjalie Field, Shrimai Prabhumoye, Maarten Sap, Zhijing Jin, Jieyu Zhao, and
Chris Brockett. 2020.
1st
Workshop on NLP for Positive Impact.
Gibney (2020)
Elizabeth Gibney. 2020.
The battle for ethical AI at the world’s biggest machine-learning
conference.
Nature, 577(7792):609–609.
Gopinath et al. (2020)
Divya Gopinath, Monica Agrawal, Luke Murray, Steven Horng, David Karger, and
David Sontag. 2020.
Fast, structured clinical documentation via contextual autocomplete.
In Machine Learning for Healthcare Conference, pages 842–870.
PMLR.
Greaves and McAskill (2017)
Hilary Greaves and William McAskill. 2017.
A research agenda for the Global Priorities Institute.
University of Oxford, London.
Hossain et al. (2020)
Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean
Young, and Sameer Singh. 2020.
COVIDLies: Detecting COVID-19 misinformation on social media.
In Proceedings of the 1st Workshop on NLP for COVID-19
(Part 2) at EMNLP 2020, Online. Association for Computational Linguistics.
Hovy et al. (2017)
Dirk Hovy, Shannon Spruit, Margaret Mitchell, Emily M. Bender, Michael Strube,
and Hanna Wallach, editors. 2017.
Proceedings of the
First ACL Workshop on Ethics in Natural Language Processing. Association
for Computational Linguistics, Valencia, Spain.
Hovy and Spruit (2016)
Dirk Hovy and Shannon L Spruit. 2016.
The social impact of natural language processing.
In Proceedings of the 54th Annual Meeting of the Association
for Computational Linguistics (Volume 2: Short Papers), pages 591–598.
Hubbard and Drummond (2011)
Douglas W Hubbard and David Drummond. 2011.
How to measure anything.
Wiley Online Library.
Jiang et al. (2017)
Ye Jiang, Xingyi Song, Jackie Harrison, Shaun Quegan, and Diana Maynard. 2017.
Comparing attitudes to
climate change in the media using sentiment analysis based on latent
Dirichlet allocation.
In Proceedings of the 2017 Workshop: Natural Language
Processing meets Journalism, NLPmJ@EMNLP, Copenhagen, Denmark, September 7,
2017, pages 25–30. Association for Computational Linguistics.
Jobin et al. (2019)
Appendix A ACL 2020 paper annotations
For the case study on ACL 2020 papers, such as Figures 2 and 4, we collect the 570 long papers at ACL 2020. An NLP researcher with four years of research experience conducted the entire annotation, so that the categorization is consistent across all papers. (The annotation file has been uploaded to the softconf system.)
The first annotation task is to categorize all papers into one of the four stages in the theory$\rightarrow$application development. We showed the annotator the description of the four stages in Section 3.1. Next, provided with the title, abstract, and PDF of each paper, the annotator was asked to annotate which of the four stages each paper belongs to. The annotator had passed a test batch before starting the large-scale annotation.
The second annotation task is to annotate the research topics of the papers related to social good at ACL 2020. If a paper has a clear social-good impact (89 out of 570 papers), the annotator classifies its topic into one of the given categories: bias mitigation, education, equality, fighting misinformation, green NLP, healthcare, interpretability, legal applications, low-resource language, mental healthcare, robustness, science literature parsing, and others. For other meta information, such as country or academia vs. industry, we decide based on the affiliation of the first author.
Appendix B More statistics about ACL 2020 papers
For the case study on ACL 2020 papers, we further investigate the following statistics.
Stage 1-4 by countries.
Recall that in Figure 2 of the main paper, we plot the distributions of papers by the four stages, and highlight the most frequent topics in each stage. Additionally, it is also interesting to explore the distribution of stages for different countries. In Figure 5, we have the following observations:
China has no Stage-1 papers (i.e., fundamental theories), although it has the second largest total number of papers. The reason might be that few Chinese researchers working on linguistic theories publish at English-language conferences.
For most countries, the number of papers in the four stages follows the overall trend (i.e., Stage-2 papers $>$ Stage-3 papers $>$ Stage-4 papers $>$ Stage-1 papers), with a few exceptions. For example, China has almost the same number of papers in Stages 2 and 3, Germany has more papers in Stage 4 (i.e., deployed applications) than in Stage 3, and Canada has the most papers in Stage 3.
Social good topics by academia vs. industry.
As we call for more research attention to NLP for social good, it is important to understand the affiliations behind the current social good papers. A coarse way is to look at the affiliation of the first author, and inspect whether the main work of the paper is done by people from academia or industry.
As shown in Figure 6, academia overall publishes several times more papers on social good than industry. This ratio is higher than the share of academia-led papers among all ACL 2020 papers (389 out of 570). Industry has no ACL 2020 papers on topics such as NLP ethics. Note that statistics from ACL papers alone can be limiting: researchers in academia typically present almost all of their research through publications, whereas many industry researchers do not publish in venues such as ACL even though their research may impact various products.
The Mauna Kea Observatories Near-Infrared Filter Set. II.
Specifications for a New $JHKL^{\prime}M^{\prime}$ Filter Set for
Infrared Astronomy
A. T. Tokunaga
Institute for Astronomy, University of Hawaii,
Honolulu, HI 96822
[email protected]
D.A. Simons
Gemini Observatory, Northern Operations Center, 670 N. A‘ohoku
Place,
Hilo, HI 96720
[email protected]
W. D. Vacca
Max-Planck-Institut für extraterrestrische Physik
Postfach 1312, D-85741 Garching, Germany
[email protected]
Abstract
We present a description of a new 1–5 $\mu$m filter set similar to
the long-used $JHKLM$ filter set derived from that of Johnson. The
new Mauna Kea Observatories Near-Infrared (MKO-NIR) filter set is
designed to reduce background noise, improve photometric
transformations from observatory to observatory, provide greater
accuracy in extrapolating to zero air mass, and reduce the color
dependence in the extinction coefficient in photometric reductions.
We have also taken into account the requirements of adaptive optics
in setting the flatness specification of the filters. A complete
technical description is presented to facilitate the production of
similar filters in the future.
infrared: general — instrumentation: photometers
PASP, in press. 17 Oct. 2001
1 Introduction
The rationale for a new set of infrared filters was presented by
Simons & Tokunaga (2001; hereafter Paper I). The goals of the design
of this new filter set are similar to those of Young, Milone, & Stagg
(1994), namely to construct a filter set that minimized color terms in
the transformations between photometric systems and that reduced the
uncertainty in determining the absolute calibration of photometric
systems. However, in contrast to the filters proposed by Young et
al., the filters in this new set were also designed to maximize
throughput, in addition to minimizing the effects of atmospheric
absorption. The reduced dependence on atmospheric absorption also
permits photometry to be less sensitive to water vapor variations and
to the altitude of the observatory. Most importantly, the new filters
permit extrapolation to zero air mass with small errors.
We required large-size, high-quality filters for several facility
instruments for the Gemini and Subaru Telescopes.
Since the cost of each filter is dominated by the
technical difficulties of the coating process, it is desirable to place
as many substrates as possible into the coating chamber in order to
reduce the cost per filter. Hence there is a strong economic driver
to produce custom filters in a consortium production run. This has
the added benefit that with more observatories involved there would be
greater standardization among the observatories.
Although this filter set was designed for the Gemini and Subaru
Telescopes, all of the optical/infrared observatories at Mauna Kea are
presently using these filters (NASA Infrared Telescope Facility,
United Kingdom Infrared Telescope, Canada-France-Hawaii Telescope,
Keck, Gemini, Subaru). In addition, the filter set was discussed
informally by the IAU Working Groups on IR Photometry and on Standard
Stars at the 2000 General Assembly, and endorsed as the preferred
“standard” near-infrared photometric system, to be known as the
Mauna Kea Observatories Near-Infrared (MKO-NIR) photometric system.
In this paper we describe a set of filters that were fabricated
according to the definitions presented in Paper I. We include a list
of specifications so that similar filters can be produced in the
future. It is our hope that if enough observatories adopt these
filters, we will have greater uniformity among photometric systems, as
well as reduced systematic errors when comparing observations from
different observatories.
2 Filter Specifications
We present here the list of technical specifications that were
required to be satisfied by the filter manufacturer. The center,
cut-on, and cut-off wavelengths are given in Table 1, and they follow
the filter definitions given in Paper I. The specifications for
substrate flatness, parallelism of the filters, and use of a single
substrate are required for use with adaptive optics systems.
1.
Out-of-band transmission: $<$10${}^{-4}$ out to 5.6 $\mu$m.
This specification is required for use with InSb detectors. For
HgCdTe 2.5 $\mu$m cut-off detectors, the out-of-band blocking can be
specified for wavelengths less than 3.0 $\mu$m instead of 5.6 $\mu$m.
The desired blocking is better than 10${}^{-4}$, but practical
considerations such as cost and manufacturing difficulty make it
impossible to go lower. In cases where blocking to this level or
better is not possible, a separate blocking filter should be used.
PK-50 is a suitable blocker for wavelengths less than 2.0 $\mu$m. For
longer wavelengths, an interference blocking filter may be required
for InSb detectors.
2.
Operating temperature: 65 K. Cold filter scans of witness samples to be provided, together with prediction of wavelength shift
with temperature.
The filters are expected to be used at 50–77 K, but the specified
filter temperature was set at 65 K primarily because the Gemini and
Subaru instruments were designed for use with InSb detectors and with
cryogenic motors inside the cryostat. This requires cooling to
65 K to avoid excess thermal emission from the motors
within the cryostat.
3.
Average transmission: $>$80% (goal $>$90%).
4.
Transmission between the 80% points to have a ripple of less than $\pm$5%.
5.
Cut-on and cut-off: $\pm$0.5% wavelength error.
6.
Roll-off: %slope $\leq$2.5%.
The %slope $=$
$[\lambda(80\%)-\lambda(5\%)]/\lambda(5\%)\times 100$, where
$\lambda(80\%)$ is the wavelength at 80% transmission and
$\lambda(5\%)$ the wavelength at 5% transmission.
Ideally the filters should have a square “boxcar” shape for maximum
throughput. The specifications on the average transmission, ripple,
cut-on and cut-off uncertainty, and roll-off uncertainty are a
compromise between the desired sharp edge, practical manufacturing
specifications, and cost.
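As a concreteness check, the %slope definition in item 6 can be evaluated numerically from a measured transmission curve. The function below is our illustrative sketch, not part of the specification; it assumes the transmission is normalized to a peak of 1 and that the cut-on edge rises monotonically with wavelength.

```python
import numpy as np

def percent_slope(wavelength, transmission):
    """Roll-off %slope = [lambda(80%) - lambda(5%)] / lambda(5%) * 100,
    evaluated on a monotonically rising cut-on edge by interpolation."""
    lam5 = np.interp(0.05, transmission, wavelength)   # wavelength at 5% transmission
    lam80 = np.interp(0.80, transmission, wavelength)  # wavelength at 80% transmission
    return (lam80 - lam5) / lam5 * 100.0

# e.g. a hypothetical linear edge rising from 0 at 2.00 um to 1 at 2.10 um
edge_wl = np.linspace(2.00, 2.10, 101)
edge_tr = np.linspace(0.0, 1.0, 101)
slope = percent_slope(edge_wl, edge_tr)  # about 3.7%, which would fail the 2.5% spec
```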
7.
Substrate surfaces parallel to $\leq$5${}^{\prime\prime}$.
8.
Substrate flatness: $<$$0.0138\lambda/(n-1)$, where $n$ is the index of refraction of the substrate (compatible with
AO systems).
For example, for $n=1.5$, $\lambda=2200$ nm, the substrate flatness
should be $<$61 nm. For $n=3.4$, $\lambda=2200$ nm, the substrate
flatness should be $<$13 nm.
This is a flatness specification, not a roughness specification. The
Strehl ratio (SR) can be approximated as SR $\sim 1-[(2\pi/\lambda)w]^{2}$, where $w$ is the rms wavefront error (see Schroeder
1987). This expression is valid for a single reflective surface. For
two surfaces in transmission, we have SR $\sim 1-[(n-1)(2\pi/\lambda)w\sqrt{2}]^{2}$, where $n$ is the index of refraction of
the substrate. For SR = 0.985, $w=0.01378[\lambda/(n-1)]$.
Alternatively, one could specify $\lambda/10$ peak-to-valley at 0.63
$\mu$m, which is approximately $\lambda/40$ rms, or $\sim$15 nm rms.
This would be suitable for glass substrates ($n=1.5$) at 1.0–2.2
$\mu$m as well as silicon substrates ($n=3.4$) at 2.2–4.8 $\mu$m.
It should be noted that after coating, the substrate will deform under
the stress of the coatings, taking on a concave or convex shape.
However, this does not have any effect on the wavefront error in
transmission since the surfaces are parallel.
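To make the flatness requirement concrete, the Strehl-ratio relation above can be inverted for the maximum rms wavefront error $w$. The short sketch below is our illustration (not part of the original specification); it reproduces the two numerical examples quoted above.

```python
import math

def max_rms_flatness_nm(n, wavelength_nm, strehl=0.985):
    """Maximum rms substrate flatness w for a filter used in transmission
    with two surfaces, from SR ~ 1 - [(n-1)(2*pi/lambda) * w * sqrt(2)]**2.
    Solving for w gives w = sqrt(1-SR) * lambda / ((n-1) * 2*pi*sqrt(2)),
    i.e. w ~ 0.0138 * lambda / (n-1) at SR = 0.985."""
    return (math.sqrt(1.0 - strehl) * wavelength_nm
            / ((n - 1.0) * 2.0 * math.pi * math.sqrt(2.0)))

glass = max_rms_flatness_nm(1.5, 2200.0)    # about 61 nm, as quoted above
silicon = max_rms_flatness_nm(3.4, 2200.0)  # about 13 nm, as quoted above
```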
9.
Single-substrate to be used (cemented filters not acceptable).
Cemented filters are not considered acceptable because of potential
large surface deformations when cooled. In cases where a high SR is
not needed, cemented filters may be acceptable.
Items 7–9 are the main requirements for filters to be used in adaptive
optics. The specified flatness conforms to an SR of 0.985.
10.
Filter to be designed for a tilt of 5${}^{\circ}$ (to suppress ghost images).
Items 9–10 assume that these filters will be placed at a pupil image
within a camera and that the filters are tilted relative to the optical axis by
5${}^{\circ}$ to avoid ghost images in the focal plane. Alternatively,
these filters could be designed to be used with no tilt, but with a
wedged substrate.
11.
Scratch/Dig: 40/20.
This item specifies the maximum size of scratches and digs (pits) that
are permitted on the surface of the substrate. The specification
40/20, which is acceptable in
most ground-based astronomy applications, requires that
scratches be no wider than 4 $\mu$m and that the pits be no wider
than 200 $\mu$m in diameter. A technical description of this
specification is given by Bennett & Mattsson (1999).
12.
Diameter: 60 mm, or cut to requested size.
The coating is typically not good near the edge and some of the filter
is blocked by the filter mount. Therefore the filter diameter is
typically specified to be about 10% larger than the actual size
required by the optics.
13.
Maximum thickness: 5 mm, including blocker for InSb detectors.
Items 12–13 depend on the application. The
thickness/diameter ratio must be of a practical value for polishing
and mechanical strength and is usually specified to be about 0.1.
14.
No radioactive materials such as thorium to be used (to avoid spurious detector noise spikes).
3 Filter Fabrication and Filter Profiles
A number of vendors were contacted to solicit bids on the filter
production for the consortium of buyers. The vendor selected was OCLI
of Santa Rosa, California. This selection was determined both by
price of production, ability to meet the specifications, maturity of
the production facilities, and willingness to accept individual orders
from the consortium. The filters were cut to size by the vendor for
each order.
A production run was ordered to accommodate the filter needs of
the following observatories and institutions:
Anglo-Australian Observatory,
California Institute of Technology,
Canada-France-Hawaii Telescope, Carnegie Institution,
Center for Astrophysics, Cornell University,
European Southern Observatory,
Gemini Telescopes, Kiso Observatory,
Korean Astronomy Observatory, Kyoto University,
MPE-Garching, MPI-Heidelberg,
NASA Goddard Space Flight Center,
NASA Infrared Telescope Facility,
National Astronomical Observatory of Japan,
National Optical Astronomy Observatories,
Nordic Optical Telescope,
Ohio State University, Osservatorio Astrofisico di Arcetri,
Subaru Telescope, United Kingdom Infrared Telescope,
University of California Berkeley,
University of California Los Angeles, Univ. of Cambridge,
University of Grenoble, Univ. of Hawaii,
University of Wyoming,
and the William Herschel Telescope.
Figures 1(a)–(g) show the transmission profiles of the filters
that were produced. The profiles are shown at 65 K, as obtained
by extrapolating the filter shift with temperature from room temperature
to 65 K. Figure 2 shows the $J,H,K_{s},L^{\prime}$, and $M^{\prime}$ filter
profiles superimposed on the atmospheric transmission at Mauna Kea.
4 Photometric properties
Using a modified version of the ATRAN program (Lord 1992), we
constructed models of the telluric absorption as a function of air
mass to investigate the photometric properties of the new filter set.
Absorption spectra covering the range 0.8–5.0 $\mu$m were generated
for 10 zenith angles between 0${}^{\circ}$ and 70${}^{\circ}$ for each of
five values of the precipitable water vapor content (0.5, 1.0, 2.0,
3.0, and 4.0 mm). The zenith angles correspond to a range of air
masses between 1.0 and 2.9. The values of the water vapor content
span the range typically found on Mauna Kea (see, e.g., Morrison et
al. 1973; Warner 1977; Wallace & Livingston 1984). To examine the
photometric characteristics down to an air mass of 0.0, we also
calculated spectra for five additional air mass values $<$1.0 (0.05,
0.10, 0.25, 0.50, and 0.75) for each of the water vapor values. These
spectra were generated with ATRAN using a model atmosphere in which
both the gas and water vapor content were given by the standard model
values (appropriate for Mauna Kea at an air mass of 1) scaled by the
air mass value. The water content was then scaled by an additional
factor given by the ratio of the desired water vapor value to the
standard model value computed by ATRAN for the atmosphere above Mauna
Kea. All spectra were computed for an altitude of 4158 m (which is
the altitude of the NASA Infrared Telescope Facility) and with a
resolution of approximately 100,000.
The telluric absorption spectra were then multiplied by a similar
resolution model spectrum of Vega obtained from R. Kurucz (2001). For
each filter, we computed synthetic magnitudes by multiplying the
resulting spectra by the normalized filter transmission curves and
integrating over the wavelength range spanned by the filter.
Magnitude differences relative to the original model spectrum of Vega
were calculated. In Fig. 3 we show the magnitude differences versus
air mass for the case of 2.0 mm precipitable water.
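The synthetic-magnitude step can be sketched as follows. This is a schematic reconstruction of the procedure described above, not the authors' actual code; the spectra and filter curve are assumed to share a common wavelength grid.

```python
import numpy as np

def synthetic_mag_diff(wl, absorbed_vega, model_vega, filter_trans):
    """Magnitude difference between the telluric-absorbed Vega spectrum and
    the original model spectrum, integrated over one filter band."""
    dwl = np.gradient(wl)                              # per-sample wavelength step
    f_abs = np.sum(absorbed_vega * filter_trans * dwl) # band flux, absorbed spectrum
    f_ref = np.sum(model_vega * filter_trans * dwl)    # band flux, reference spectrum
    return -2.5 * np.log10(f_abs / f_ref)

# sanity check: a grey 10% absorption dims every band by -2.5*log10(0.9) mag
wl = np.linspace(2.0, 2.4, 200)
vega = np.ones_like(wl)
trans = np.ones_like(wl)
dm = synthetic_mag_diff(wl, 0.9 * vega, vega, trans)
```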
For each filter and water vapor value, the run of magnitude
differences as a function of air mass was fitted with a least-squares
routine with two functional forms: (1) a linear fit between air masses
of 1 and 3 and (2) the rational expression given by Young et al.
(1994; see their eq. 7) for the entire range of air masses. The
latter expression has the form
$$\Delta m=(a+bX+cX^{2})/(1+dX)$$
(1)
where $a$, $b$, $c$, and $d$ are fitting constants and $X$
is the air mass. A detailed discussion of this function is given by
Young (1989). Both fits are shown in Fig. 3, and the coefficients
are given in Tables 2 and 3.
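Equation (1) looks nonlinear, but multiplying both sides by $(1+dX)$ makes it linear in the coefficients $a,b,c,d$, so an illustrative fit can be done with an ordinary linear least-squares solve. This linearization trick is our sketch, not necessarily the routine the authors used (a direct nonlinear fit weights the residuals slightly differently).

```python
import numpy as np

def fit_young_extinction(X, dm):
    """Fit dm = (a + b X + c X^2) / (1 + d X) by rewriting it as the
    linear system  a + b X + c X^2 - d (X dm) = dm."""
    A = np.column_stack([np.ones_like(X), X, X**2, -X * dm])
    coeffs, *_ = np.linalg.lstsq(A, dm, rcond=None)
    return coeffs  # a, b, c, d

# recover known (hypothetical) coefficients from noiseless synthetic data
X = np.linspace(0.05, 3.0, 30)
dm = (0.02 + 0.08 * X + 0.003 * X**2) / (1 + 0.2 * X)
a, b, c, d = fit_young_extinction(X, dm)
```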
The linear fit simulates the typical photometric reduction technique
employed by ground-based observers to obtain the extinction
coefficient in magnitude per air mass. The constant term (the
intercept of the linear fit) yields the systematic error in the
photometric magnitude incurred by adopting this particular functional
form of the atmospheric extinction and extrapolating to zero air mass.
Inspection of the values given in Table 2 indicates that systematic
errors $>$0.05 mag result from a linear extrapolation for
the $K^{\prime}$ and $M^{\prime}$ filters for all values of the water vapor. The
errors are decreased only slightly if the fit is restricted to the
air mass range 1–2.
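The systematic zero-air-mass error of the linear reduction can be illustrated with made-up numbers: fit a straight line over air masses 1–3 to a curve of the form of equation (1) and compare the fitted intercept with the true value at $X=0$. The coefficients below are hypothetical, chosen only to show the effect.

```python
import numpy as np

X = np.linspace(1.0, 3.0, 20)            # observable air-mass range
a, b, c, d = 0.02, 0.08, 0.003, 0.2      # hypothetical eq.-(1) coefficients
dm = (a + b * X + c * X**2) / (1 + d * X)

slope, intercept = np.polyfit(X, dm, 1)  # the usual linear extinction fit
systematic_error = intercept - a          # true dm at X = 0 is a; offset is the error
```

Even on noiseless data the intercept misses the true zero-air-mass value, which is exactly the effect quantified in Table 2.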
As can be seen in Fig. 3, equation (1) provides an excellent fit to
all the data points. However, the determination of the four
coefficients of such a function from actual observations of standard
stars is impractical for most observing programs, as a large number of
measurements over a range of air masses are required. Even if these
observations were carried out, reliable values could be obtained only if
measurements in the completely inaccessible air mass range between 0
and 1 were available.
comparison to other photometric filters, such as those discussed by
Young et al. (1994).
The proposed filters show greatly improved photometric properties over
older filter sets. Krisciunas et al. (1987) summarized the average
extinction coefficients at Mauna Kea. Comparison of our values given
in column 3 of Table 2 with those given in Table I by Krisciunas et
al. (1987) reveals that at the typical water vapor of 2–3 mm of
precipitable water for Mauna Kea, the reduction in the extinction
coefficient is about a factor of 8 at J. In addition, comparison of
the values given in column 2 of Table 2 with those given in Table III
of Manduca & Bell (1979) shows that the reduction in the error of
extrapolation to zero air mass (which they denote as $\Delta$) is
about a factor of 9 at J compared to the KPNO winter values.
As was pointed out by Manduca & Bell (1979), the extinction error
upon extrapolation to zero air mass does not affect differential
photometry, in which one compares observed fluxes of target objects to
those of established standard stars. The extinction errors in this
case will be absorbed into the photometric zero points. It should be
noted, however, that the extinction error may vary with the
temperature of the object being observed. This variation leads to a
color term in the extinction coefficient. However, as demonstrated by
Manduca & Bell, this term is generally small, amounting to an error
of $<$0.001 mag for most filters. Only the $J$ band filter
analyzed by Manduca & Bell exhibited a significant color dependence.
Given that the new $J$ filter is less affected by the atmospheric
absorption, we expect that the extinction color term for this filter
will be as small as those for the other filters.
The extinction errors introduced by adopting a linear fit may,
however, affect absolute photometric measurements as, for example, in
the establishment of an exo-atmospheric magnitude system. The latter
subject has a number of additional complications beyond the scope of
the present paper. We simply wish to point out that such a system
requires some way to eliminate the extinction error (for example, with
narrowband filters; see Young et al. 1994).
5 Summary
A set of 1–5 $\mu$m filters designed for improved photometry and for
use with adaptive optics is described. Although the filter profiles are
optimized for maximum throughput, they avoid most of the detrimental
effects of atmospheric absorption bands. A production run of
these filters has been completed. It is our hope that widespread
use of these filters will help produce more uniform photometric
results, reduce the magnitude of the color transformation among
observatories, and reduce the uncertainty in reductions to zero or
unit air mass.
We thank E. Atad, J. Elias, S. Leggett, K. Matthews, and especially T.
Hawarden for discussions and input on the filter specifications. We
also thank S. Lord for providing a modified version of ATRAN that
allowed the calculation of the atmospheric transmission between
air masses of 0.0 and 1.0. Filter profiles in electronic form may be
found at http://irtf.ifa.hawaii.edu/Facility/nsfcam/hist/newfilters.html.
ATT acknowledges the support of NASA Contract NASW-5062 and NASA Grant
No. NCC5-538. DS acknowledges the support of the Gemini Observatory,
which is operated by the Association of Universities for Research in
Astronomy, Inc., under a cooperative agreement with the NSF on behalf
of the Gemini partnership: the National Science Foundation (United
States), the Particle Physics and Astronomy Research Council (United
Kingdom), the National Research Council (Canada), CONICYT (Chile), the
Australian Research Council (Australia), CNPq (Brazil), and CONICET
(Argentina).
References
Bennett & Mattsson (1999)
Bennett, J. M., & Mattsson, L. 1999,
Introduction to Surface Roughness and Scattering (2d ed., Washington,
DC: Opt. Soc. Am.)
Krisciunas et al. (1987)
Krisciunas, K., et al. 1987, PASP,
99, 887
Kurucz (2001)
Kurucz, R. 2001,
http://kurucz.harvard.edu/stars.html
Lord (1992)
Lord, S. D. 1992, NASA Tech. Mem. 103957
Manduca & Bell (1979)
Manduca, A., & Bell, R. A. 1979,
PASP, 91, 848
Morrison (1973)
Morrison, D., Murphy, R. E., Cruikshank, D. P., Sinton, W. M., & Martin,
T. Z., 1973, PASP, 85, 255
Schroeder (1987)
Schroeder, D.J. 1987, Astronomical
Optics (San Diego: Academic Press) 191
Simons & Tokunaga (2001)
Simons, D., & Tokunaga, A. T. 2001,
PASP, in press (Paper I)
Wallace & Livingston (1984)
Wallace, L., & Livingston, W.
1984, PASP, 96, 182
Warner (1977)
Warner, J. W. 1977, PASP, 89, 724
Young (1989)
Young, A. T. 1989, in Infrared
Extinction and Standardization, Lecture Notes in Physics Vol. 341,
ed. E. F. Milone (Berlin: Springer), 6
Young et al. (1994)
Young, A.T., Milone, E.F., &
Stagg, C.R. 1994, A&AS, 105, 259
Fig. 1(a)–(g). Filter profiles at a
temperature of 65 K. These were measured by the vendor at room
temperature, then shifted according to the measured change with
temperature of a witness sample.
Fig. 2. $J,H,K_{s},L^{\prime},M^{\prime}$ filter profiles
superimposed on the atmospheric transmission at Mauna Kea kindly
provided by G. Milone for 1 mm precipitable water vapor and an
air mass of 1.0.
Fig. 3. Extinction plot for 2.0 mm precipitable water.
The dashed lines are the linear fits to the magnitudes in the air mass
range of 1.0–3.0. The solid lines are the fit to equation (1) in
the air mass range of 0.0–3.0.
Renormalization group approach to interacting
boson-fermion systems
T. Domański${}^{(1)}$ and J. Ranninger${}^{(2)}$
${}^{(1)}$ Institute of Physics, M. Curie Skłodowska
University, 20-031 Lublin, Poland
${}^{(2)}$ Centre de Recherches sur les Très Basses Températures,
CNRS, 38-042 Grenoble, France
(December 2, 2020)
Abstract
We study the pseudogap region of the mixed boson-fermion system
using a recent formulation of the renormalization group technique
based on a set of infinitesimal unitary transformations.
Renormalization of the fermion energies gives rise to a depletion of
the low-energy states (a pseudogap) for temperatures $T^{*}>T>T_{c}$,
which signals the appearance of pairing correlations before
their long-range phase coherence sets in. With the help of
the flow equations for the boson and fermion operators we analyze
the spectral weights and finite lifetimes that these quasiparticles
acquire through the interactions.
Many experimental probes, e.g. angle-resolved
photoemission (ARPES), tunneling spectroscopy (STM), specific
heat, as well as optical and magnetic characteristics
(see [1] for an overview), point to the presence of
a pseudogap in the normal-phase spectrum of underdoped cuprate
superconductors. It is believed that a proper description
of this phenomenon would be crucial for understanding
the HTSC mechanism.
The model of itinerant fermions coupled via a charge-exchange
interaction to boson particles [2],
for example of bipolaronic origin, has been shown
to possess such a pseudogap in the normal-phase fermion
spectrum. Its occurrence has been obtained on the basis
of self-consistent perturbation theory [3],
diagrammatic expansion [4], and also by solving
the dynamical mean-field equations for this model
within the non-crossing approximation (NCA) [5].
Unfortunately, these techniques have not been able
to treat the superconducting phase of the model, so
the evolution of the pseudogap into the true superconducting
gap could not be investigated. Only very recently
have we reported preliminary results for the boson-fermion
(BF) model using the flow equation method [6],
in which the superconducting gap and the normal-phase pseudogap
can be treated on an equal footing. Both gaps originate
from the same source (the exchange interaction
between fermions and bosons), but they scale differently
with the magnitude of this interaction and
with the total concentration of charge carriers. Still,
the most intriguing question, the evolution
from the pseudogap to the superconducting gap, remains poorly
understood due to technical difficulties (solving the flow
equations for a $2+\epsilon$ dimensional system is a very
tough problem). Some attempts in
this direction are currently under consideration.
In this paper we address another important issue,
namely the lifetime effects of the quasiparticles.
The high-resolution ARPES data [7]
reveal that the normal phase of underdoped cuprates
has marginal-Fermi-liquid-type properties,
with quasiparticle weights and lifetimes
that are very sensitive to temperature. We want to check
how the lifetimes of fermions and bosons depend
on temperature $T$ and momentum within the BF model
scenario.
The flow equation for an arbitrary operator, say $\hat{A}(l)$,
is given by
$$\displaystyle\frac{d\hat{A}(l)}{dl}=\left[\hat{\eta}(l),\hat{A}(l)\right]$$
(1)
where $\hat{\eta}$ is the generating operator of the continuous
canonical transformation. The initial BF model Hamiltonian
consists of the free part
$$\displaystyle\hat{H}_{0}=\sum_{k,\sigma}(\varepsilon_{k}^{\sigma}-\mu)\hat{c}_%
{k\sigma}^{\dagger}\hat{c}_{k\sigma}+\sum_{q}(E_{q}-2\mu)\hat{b}_{q}^{\dagger}%
\hat{b}_{q}$$
(2)
and the interaction
$$\displaystyle\hat{H}_{int}=\frac{1}{\sqrt{N}}\sum_{k,p}\left(v_{k,p}\hat{b}^{%
\dagger}_{k+p}\hat{c}_{k\downarrow}\hat{c}_{p\uparrow}+v^{*}_{k,p}\hat{b}_{k+p%
}\hat{c}^{\dagger}_{p\uparrow}\hat{c}^{\dagger}_{k\downarrow}\right).$$
(3)
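The mechanics of Eq. (1) are easy to demonstrate on a finite matrix. The sketch below is purely illustrative (it is not the BF model and not the generator (4) below): it integrates $dH/dl=[\hat{\eta},H]$ with Wegner's canonical choice $\hat{\eta}=[H_{\mathrm{diag}},H]$ by explicit Euler steps, driving the off-diagonal coupling to zero while preserving the spectrum.

```python
import numpy as np

def wegner_flow(h0, dl=1e-3, steps=20000):
    """Integrate dH/dl = [eta, H] with eta = [diag(H), H] by explicit Euler."""
    h = np.array(h0, dtype=float)
    for _ in range(steps):
        hd = np.diag(np.diag(h))
        eta = hd @ h - h @ hd            # canonical (Wegner) generator
        h = h + dl * (eta @ h - h @ eta) # flow equation dH/dl = [eta, H]
    return h

# toy 2x2 "Hamiltonian" with an off-diagonal coupling of strength 0.3
h0 = np.array([[1.0, 0.3],
               [0.3, -0.5]])
hf = wegner_flow(h0)
print(abs(hf[0, 1]))                                   # coupling eliminated for large l
print(np.linalg.eigvalsh(h0), np.linalg.eigvalsh(hf))  # spectrum (nearly) preserved
```

The Euler step is chosen much smaller than the inverse square of the level splitting, so the isospectral character of the flow is preserved to good accuracy.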
In order to decouple the fermion and boson subsystems
we have previously chosen $\eta$ in the form [6]
$$\displaystyle\hat{\eta}(l)=-\sum_{k,q}\left(\alpha_{k,q}(l)v_{k,q}(l)\hat{b}^{\dagger}_{k+q}(l)\hat{c}_{k,\downarrow}(l)\hat{c}_{q,\uparrow}(l)-h.c.\right)$$
(4)
with $\alpha_{k,q}=\varepsilon_{k}^{\downarrow}+\varepsilon_{q}^{\uparrow}-E_{k+q}$. Using the general flow
equation (1), we transformed the Hamilton operator
$\hat{H}(l)$ so that in the limit $l\rightarrow\infty$
the charge-exchange interaction is eliminated, $v_{k,q}(\infty)=0$.
In the course of the continuous canonical transformation
the initial boson and fermion operators change according
to the following flow equations
$$\displaystyle\frac{d\hat{b}_{q}(l)}{dl}$$
$$\displaystyle=$$
$$\displaystyle\sum_{k}\alpha_{k,q-k}(l)v_{k,q-k}(l)\hat{c}_{k,\downarrow}(l)%
\hat{c}_{q-k,\uparrow}(l)$$
(5)
$$\displaystyle\frac{d\hat{c}_{k,\uparrow}(l)}{dl}$$
$$\displaystyle=$$
$$\displaystyle-\sum_{q}\alpha_{q,k}(l)v_{q,k}(l)\hat{b}_{q+k}(l)\hat{c}_{q,\downarrow}^{\dagger}(l).$$
(6)
Similar equations can be derived for the Hermitian conjugates
$\hat{b}_{q}^{\dagger}(l)$, $\hat{c}_{k,\uparrow}^{\dagger}(l)$.
Inspecting (5), we notice that the initial boson
annihilation operator is coupled to the electron pair operator
$\hat{c}_{k,\downarrow}(l)\hat{c}_{q-k,\uparrow}(l)$.
In a next step one should therefore determine the corresponding
flow equation for the finite-momentum pair operator; however,
this process never terminates (much like the equations of motion
for the Green's functions): products of operators of higher and
higher order get involved in the differential flow equations.
Following other works that use such continuous canonical
transformations [8]-[10], we
terminate the flow equations by postulating some (physical)
Ansatz. In the context of the BF model it seems natural
to postulate the following decompositions
$$\displaystyle\hat{b}_{q}(l)$$
$$\displaystyle=$$
$$\displaystyle X_{q}(l)\hat{b}_{q}+\sum_{k}Y_{q,k}(l)\hat{c}_{k,\downarrow}\hat%
{c}_{q-k,\uparrow},$$
(7)
$$\displaystyle\hat{c}_{k,\sigma}(l)$$
$$\displaystyle=$$
$$\displaystyle A_{k}(l)\hat{c}_{k,\sigma}+\sum_{q}B_{k,q}(l)\hat{b}_{q+k}\hat{c%
}_{q,-\sigma}^{\dagger},$$
(8)
where $\hat{b}_{q}\equiv\hat{b}_{q}(l=0)$ and
$\hat{c}_{k,\sigma}\equiv\hat{c}_{k,\sigma}(l=0)$.
The initial $l=0$ values are
$$\displaystyle\begin{array}[]{ll}A_{k}(0)=1,&\hskip 28.452756ptB_{k,q}(0)=0,\\
X_{q}(0)=1,&\hskip 28.452756ptY_{q,k}(0)=0.\end{array}$$
(9)
The unknown parameters $Y_{q,k}(l)$, $B_{k,q}(l)$ can be
found directly from the flow equations (5), (6):
$$\displaystyle\frac{dB_{k,q}(l)}{dl}$$
$$\displaystyle=$$
$$\displaystyle-\alpha_{q,k}(l)v_{q,k}(l)X_{q+k}(l)A_{q}(l)$$
(10)
$$\displaystyle\frac{dY_{q,k}(l)}{dl}$$
$$\displaystyle=$$
$$\displaystyle\alpha_{k,q-k}(l)v_{k,q-k}(l)A_{k}(l)A_{q-k}(l).$$
(11)
There is no unambiguous way to determine the remaining
coefficients $A_{k}(l)$, $X_{q}(l)$, because with
the Ansatz (7), (8) the flow
equations still do not close. In what follows
we determine these parameters using the constraints
$$\displaystyle\left[b_{q}(l),b_{q^{\prime}}^{\dagger}(l)\right]$$
$$\displaystyle=$$
$$\displaystyle\delta_{q,q^{\prime}},$$
(12)
$$\displaystyle\left\{c_{k,\sigma}(l),c_{k^{\prime},\sigma^{\prime}}^{\dagger}(l%
)\right\}$$
$$\displaystyle=$$
$$\displaystyle\delta_{k,k^{\prime}}\delta_{\sigma,\sigma^{\prime}}.$$
(13)
These commutation/anticommutation relations yield
$$\displaystyle A_{k}(l)^{2}+\sum_{q}B_{k,q}(l)^{2}\left(f_{q,\downarrow}-b_{k+q%
}\right)$$
$$\displaystyle=$$
$$\displaystyle 1,$$
(14)
$$\displaystyle X_{q}(l)^{2}+\sum_{k}Y_{q,k}(l)^{2}\left(1-f_{k,\downarrow}-f_{q%
-k,\uparrow}\right)$$
$$\displaystyle=$$
$$\displaystyle 1.$$
(15)
Here $f_{k,\sigma}$ and $b_{q}$ denote the Fermi and Bose
distribution functions. They appear upon normal ordering
the operators on the left-hand sides of the commutator (12)
and anticommutator (13) after substituting (7), (8).
We numerically solved the set of coupled equations
(10), (11), (14), (15),
simultaneously with the additional flow equations
for $\varepsilon_{k}^{\sigma}(l)$, $E_{k}(l)$ and
$v_{k,q}(l)$ given in Ref. [6].
Initially, at $l=0$, we assumed: a) the one-dimensional
tight-binding dispersion $\varepsilon_{k}^{\sigma}(l=0)=-2t\cos{k}$, b) localized bosons, $E_{q}(l=0)=\Delta_{B}$, and c) a local exchange interaction,
$v_{k,q}(l=0)=v$. For the sake of comparison with
previous studies of this model we used $D=4t\equiv 1$,
$\Delta_{B}=-0.6$, $v=0.1$, and the total concentration of
carriers fixed at 1.
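To show how the system (10), (11), (14), (15) closes, the sketch below integrates a drastically simplified single-pair caricature: one fermion pair coupled to one boson mode, with the energies (and hence $\alpha$) frozen so that $v(l)=v\,e^{-\alpha^{2}l}$, and with fixed, made-up occupation factors. This is not the full momentum-grid calculation of Ref. [6]; it only illustrates the structure of the equations.

```python
import numpy as np

# Single (k, q) pair caricature of the coupled flow (10), (11), (14), (15).
# Illustrative assumptions: alpha is frozen (no energy renormalization),
# v flows as dv/dl = -alpha^2 v, and the occupation factors are fixed numbers.
alpha, v0 = 1.0, 0.1
f_pair = 0.5   # stands in for f_{q,down} - b_{k+q} in (14)
p_pair = 0.4   # stands in for 1 - f_{k,down} - f_{q-k,up} in (15)

dl, steps = 1e-3, 40000
v, B, Y = v0, 0.0, 0.0
for _ in range(steps):
    A = np.sqrt(1.0 - f_pair * B**2)   # constraint (14)
    X = np.sqrt(1.0 - p_pair * Y**2)   # constraint (15)
    v, B, Y = (v - dl * alpha**2 * v,        # coupling is eliminated
               B - dl * alpha * v * X * A,   # eq. (10)
               Y + dl * alpha * v * A * A)   # eq. (11)

A = np.sqrt(1.0 - f_pair * B**2)
X = np.sqrt(1.0 - p_pair * Y**2)
print(f"v(inf)={v:.2e}  A={A:.4f}  B={B:.4f}  X={X:.4f}  Y={Y:.4f}")
```

At the end of the flow the exchange coupling is gone, while $A<1$ and $X<1$ record the spectral weight transferred to the composite terms of the Ansatz (7), (8).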
In Fig. 1 we plot the momentum dependence of the
coefficient $A_{k}$ at $l=\infty$ (when the fermion subsystem is
decoupled from the boson one). A considerable deviation
of the parameter $A_{k}$ from its initial value $1$
is observed for momenta located near $k^{*}\simeq 0.89$. As discussed
in Ref. [6], this situation corresponds
to resonant scattering processes, for which $\varepsilon_{k^{*}}+\varepsilon_{-k^{*}}=\Delta_{B}$. It is worth mentioning
that for $n_{tot}=1$ the Fermi momentum approaches
this $k^{*}$ from below as the temperature decreases.
We may think of the coefficient $A_{k}$ as a
measure of the spectral weight of the fermions. The
missing part $|1-A_{k}|$ is transferred to
composite objects (mixtures of fermions and
bosons, or fermion pairs). Taking into account
that for decreasing temperature $k_{F}$
shifts closer and closer to $k^{*}$, we thus expect
a diminishing spectral weight of the fermions.
This is in agreement with the interpretation of the ARPES
experimental results [7].
For completeness we also illustrate the resulting
($l=\infty$) values of the boson coefficients defined in
(7). The parameter $X_{q}$ shows only a small
deviation from its initial value $1$; this
change is most pronounced for boson momenta close to
$2k^{*}$. This is exactly the region of the Brillouin
zone where the effective boson dispersion shows a kink
(see Fig. 3).
The missing part of the boson spectral weight
is transferred to the fermion pairs, as described
by the Ansatz (7). The bottom panel of Fig. 2
shows that $\sum_{k}Y_{q,k}$ (which roughly
measures the spectral weight of the fermion pairs)
changes considerably for $q\sim 2k^{*}$. Again,
recalling that for decreasing temperature
$k_{F}\rightarrow k^{*}$, we conclude
that there is an increasing probability of
finding more and more fermion pairs, even though
the system is not in the superconducting phase.
This is one possible way of observing
the precursor effect.
In conclusion, we studied the evolution of the fermion
and boson operators in the BF model. During the
continuous transformation which eliminates the
exchange interaction, we observe a transformation
of the initial fermion and boson particles
into composite objects. In particular,
we find that the fermion (boson) spectral weights
at $k_{F}$ ($2k_{F}$) are reduced when the temperature
decreases. This effect is in agreement with
the recent interpretation of the ARPES measurements
for underdoped Bi2212 cuprates. To clarify the
problem of the quasiparticle lifetimes
we need a more detailed study of the correlation
functions. Such an analysis is currently under
way and the results will be published
elsewhere.
References
[1]
T. Timusk and B. Statt, Rep. Prog. Phys. 62, 61 (1999).
[2]
J. Ranninger and S. Robaszkiewicz, Physica
B 135, 468 (1985).
[3]
J. Ranninger, J.M. Robin and M. Eschrig,
Phys. Rev. Lett. 74, 4027 (1995);
P. Devillard and J. Ranninger,
Phys. Rev. Lett. 84, 5200 (2000).
[4]
H.C. Ren, Physica C 303, 115 (1998).
[5]
J.M. Robin, A. Romano and J. Ranninger,
Phys. Rev. Lett. 81, 2756 (1998).
[6]
T. Domański and J. Ranninger, Phys. Rev. B 63, 134505 (2001).
[7]
M.R. Norman, A. Kamiński, J. Mesot and J.C. Campuzano,
Phys. Rev. B 63, 140508 (2001).
[8]
S. Kehrein and A. Mielke, Ann. Phys. (Leipzig) 6, 90 (1997).
[9]
M. Ragwitz and F. Wegner, Eur. Phys. J. B
8, 9 (1999).
[10]
W. Hofstetter and S. Kehrein, Phys. Rev. B 63, 140402 (2001).
Multi-mode Interferometer for Guided Matter Waves
Erika Andersson${}^{1}$
Tommaso Calarco${}^{2}$
Ron Folman${}^{3}$
Mauritz Andersson${}^{4}$
Björn Hessmo${}^{5}$
Jörg Schmiedmayer${}^{3}$
${}^{1}$ Department of Physics, Royal Institute of Technology,
SE-10044 Stockholm, Sweden
(Present address: Department of Physics and Applied Physics,
University of Strathclyde, Glasgow G4 0NG, Scotland)
${}^{2}$ Institut für Theoretische Physik, Universität Innsbruck,
A-6020 Innsbruck, Austria
European Centre for Theoretical Studies in Nuclear Physics and Related
Areas, 38050 Villazzano (TN) Italy
${}^{3}$ Physikalisches Institut, Universität Heidelberg, D-69120
Heidelberg Germany
${}^{4}$Department of Quantum Chemistry, Uppsala University, S-75120
Uppsala, Sweden
${}^{5}$ Dept. of Microelectronics and Information Technology,
Royal Institute of Technology, S-164 40 Kista, Sweden
(November 25, 2020)
Abstract
Atoms can be trapped and guided with electromagnetic fields,
using nano-fabricated structures. We describe the fundamental
features of an interferometer for guided matter waves, built of
two combined Y-shaped beam splitters. We find that such a device
is expected to exhibit high contrast fringes even in a multi-mode
regime, analogous to a white light interferometer.
PACS numbers: 03.75.Be, 03.65.Nk
Interferometers are very sensitive
devices, and have provided both insights into fundamental
questions and valuable instruments for applications. The
sensitivity of matter-wave interferometers [1] has been
shown to be much better than that of light interferometers in
several areas such as the observation of inertial effects
[2]. Because of this high sensitivity, interferometers
have to be built in a robust manner to be applicable. This could
be achieved by guiding the matter waves with microfabricated
structures, and by integration of the components into a single
compact device, as has been done with optical devices.
In this Letter, we describe such an interferometer for matter
waves propagating in a time-independent guiding potential, in
analogy with propagation of light in optical fibers (for recent
microtrap proposals see [3]). This interferometer has
the surprising feature that interference is observed even if many
levels in the guide are occupied, as is the case when the source
is thermal, or for cold fermions, even below the Fermi
temperature. The multi-mode interferometer is built by combining
two Y-shaped beam splitters [4], capable of coherently
splitting or recombining many incoming transverse modes, and
arranging the interferometer geometry so that all the different
transverse states give the same phase shift pattern. The beam
splitters are formed by splitting the atom guiding potential
symmetrically into two identical output guides. This was recently
demonstrated on an atom chip [4, 5]; alternatively,
this may be realized by optical confinement [6].
The analysis is done in two dimensions [7], similar to
solid state electron interferometers [8].
The shape of the guiding potential in the transverse direction $x$ changes
with the longitudinal coordinate $z$, from a single harmonic well of
frequency $\omega$ to two wells of frequency $2\omega$ separated by a
distance $d(z)$, and then back again to a single guide (see Fig. 1a and b).
Let us consider a particle entering in the transverse ground
state and with longitudinal kinetic energy $E_{\rm kin}$. In the
limit where the atomic transverse motion in the guide (related to
$\omega$ and the ground state size) is very fast with respect to
the rate of transverse guide displacement $\frac{d}{dt}d(z)$ as
seen by the moving atom (related to the beam splitter opening
angle $\frac{d}{dz}d(z)$ and $E_{\rm kin}$), the particle will
adiabatically follow the lowest energy level throughout the
interferometer. This corresponds to the condition $E_{\rm kin}[\frac{d}{dz}d(z)]^{2}\ll\hbar\omega$. As can be seen in Fig.
1b, when the two guide arms are far apart (i.e., when
$d(z)\gg\sqrt{\hbar/m\omega}$), any eigenstate of the incoming
guide potential evolves into a superposition of eigenstates
$|n\rangle_{l}$ and $|n\rangle_{r}$, corresponding to the left and
right arms, respectively:
$$\displaystyle|2n\rangle\longrightarrow\frac{1}{\sqrt{2}}\left[|n\rangle_{l}+|n%
\rangle_{r}\right]$$
$$\displaystyle|2n+1\rangle\longrightarrow\frac{1}{\sqrt{2}}\left[|n\rangle_{l}-%
|n\rangle_{r}\right].$$
(1)
Levels $|2n\rangle$ and $|2n+1\rangle$ become practically
degenerate inside the interferometer when the arms are widely
split, as Fig. 1c shows. For example, it is easy to visualize how
a transverse odd state $|1\rangle$ splits in the middle to become
a superposition of two ground states having between them a $\pi$
phase, while the even state $|0\rangle$ does the same, but with a
$0$ relative phase. As the states become degenerate, an
asymmetric perturbation (e.g., a differential phase shift) will
couple the odd and even symmetries, thus inducing a mixing between
the two states (see also [3]).
Numerical two-dimensional wave-packet calculations for the lowest
35 modes [9] confirmed that no transitions between
transverse states occur, as long as the motion is adiabatic
in the sense discussed above, and as long as no asymmetric
perturbation is present. In an ideal case, coherent splitting and
recombination for all transverse modes can be achieved
[10].
As mentioned, mixing within pairs of degenerate states inside the
interferometer can occur if the wavefunction experiences a phase
difference between the two arms. Let us therefore introduce a
phase shift $\Delta\phi$ between the interferometer arms, either
by making one arm longer by $\Delta l$, giving $\Delta\phi=k\Delta l$, where $\hbar k$ is the momentum in the longitudinal
($z$) direction, or by applying an additional potential $U$,
resulting in $\Delta\phi=\frac{m}{\hbar^{2}k}\int Udz$. Both
phase shifts are independent of the transverse state in the
guide, and they are dispersive, meaning that the resulting phase
shift
depends on $k$. In the following, we shall refer to the phase shift caused by a
path length difference $\Delta l$.
Following Eq. (1) and its time inverse, an
incoming even state is transformed as follows while traversing
the two beam splitters BS1 and BS2:
$$\displaystyle|2n\rangle$$
$$\displaystyle\stackrel{{\scriptstyle BS1+\Delta\phi}}{{\longrightarrow}}$$
$$\displaystyle\frac{1}{\sqrt{2}}[e^{i\Delta\phi/2}|n\rangle_{l}+e^{-i\Delta\phi%
/2}|n\rangle_{r}]$$
(2)
$$\displaystyle\stackrel{{\scriptstyle BS2}}{{\longrightarrow}}$$
$$\displaystyle\cos(\Delta\phi/2)|2n\rangle+i\sin(\Delta\phi/2)|2n+1\rangle.$$
By taking into account the analogous transformation rule for the nearby odd
state, we find that we can describe the interferometer
in terms of a matrix
$$\begin{array}[c]{cc}\left(\begin{array}[c]{ll}\cos(\Delta\phi/2)&i\sin(\Delta\phi/2)\\
i\sin(\Delta\phi/2)&\cos(\Delta\phi/2)\end{array}\right)\end{array}$$
(3)
in the basis of the two incoming and outgoing transverse
eigenmodes $|2n\rangle$ and $|2n+1\rangle$. Figure
2 shows the above qualitative behavior in an
actual simulation.
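The pairwise mixing can be checked in a few lines by applying the matrix of Eq. (3), taken in its unitary (probability-conserving) form, to a state entering in $|2n\rangle$:

```python
import numpy as np

def bs_pair(dphi):
    """Mixing matrix for one {|2n>, |2n+1>} pair after BS1 + phase + BS2."""
    c, s = np.cos(dphi / 2), np.sin(dphi / 2)
    return np.array([[c, 1j * s],
                     [1j * s, c]])

even_in = np.array([1.0, 0.0])          # enter in |2n>
for dphi in (0.0, np.pi / 2, np.pi):
    p_even, p_odd = np.abs(bs_pair(dphi) @ even_in) ** 2
    print(f"dphi={dphi:.3f}  P(|2n>)={p_even:.3f}  P(|2n+1>)={p_odd:.3f}")
```

A phase shift of $0$ returns the particle to $|2n\rangle$, $\pi$ transfers it entirely to $|2n+1\rangle$, and intermediate phases split the population, independently of $n$.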
Let us now discuss the longitudinal degree of freedom. Consider a
very cold wave packet with longitudinal kinetic energy
$E_{kin}=k^{2}\hbar^{2}/2m$, where $k$ is the mean longitudinal
momentum of the wave packet. In the transverse direction it
occupies the transverse ground state $|0\rangle$ of the incoming
guide. We have chosen the ground state energy in the
interferometer arms, $\hbar\omega^{\prime}$, to be twice that of
the input and output guides, $\hbar\omega$ (see Fig.
1). This implies that if the longitudinal kinetic
energy is too small, $E_{kin}<\frac{1}{2}\hbar(\omega^{\prime}-\omega)=\frac{1}{2}\hbar\omega$, the wave packet will be
reflected already at the first beam splitter. For
$E_{kin}>\frac{1}{2}\hbar\omega$, it will split between the two
interferometer arms to occupy the transverse ground states in both
arms, slowing down in the longitudinal direction to ensure energy
conservation. If the wave packet experiences a phase shift
$\Delta\phi=k\Delta l$ in one of the arms, part of it will, after the recombination at
the second beam splitter, exit in the first excited transverse
state $|1\rangle$ of the outgoing guide, as described by matrix
(3), again provided the longitudinal kinetic energy
is large enough. The phase shift is dependent on $k$, and thus
the different $k$ components of the wave packet will obtain
different phase shifts. The components with phase shifts close to
$2N\pi$ will exit in the transverse ground state, and those with
phase shifts close to $(2N+1)\pi$ will exit in the first excited
state. If $\frac{1}{2}\hbar\omega<E_{kin}<\hbar\omega$, the
transition to the first excited transverse state will be
forbidden, and this part of the wave packet will be reflected. If
the kinetic energy of the wave packet fulfills $E_{kin}\gg\hbar\omega$, transitions to the first excited transverse state at the
recombination beam splitter are possible, and no sizeable back
reflection occurs. The interference pattern may then be observed
by looking at the populations in the states $|0\rangle$ and
$|1\rangle$ in the outgoing guide, as presented in figure
2.
We note that the part of the wave packet making the transition to
the state $|1\rangle$ will lose kinetic energy to compensate for
the additional transverse energy $\hbar\omega$ needed for the
transition. Therefore the part of the wave packet in state
$|1\rangle$ will travel slower than the part in $|0\rangle$, by
an amount $\Delta v\simeq\omega/k$. As explicitly shown in the
third column of Fig. 2 for $\Delta\phi=3\pi/2$, the two outgoing components will separate longitudinally.
Similarly, a wave packet entering in the first excited transverse
state, acquiring an odd phase shift $(2N+1)\pi$, will exit in the
ground state, gaining potential energy, and propagating faster
than the components exiting in the first excited transverse
state. To observe this separation between the two outgoing parts
of the wave packet, one has to introduce a pulsed source, which
is what we will assume from now on.
The reasoning above is easily extended to higher transverse modes.
Components of a wave packet acquiring an even phase shift will
exit in the same transverse state as they entered, while
components acquiring an odd phase shift will exit in the
neighboring transverse state ($|2n\rangle\rightarrow|2n+1\rangle$, or $|2n+1\rangle\rightarrow|2n\rangle$), lose
(gain) kinetic energy accordingly, and propagate slower (faster)
by the same $\Delta v$, independent of the transverse
energy level. Thus, for a system at temperature $T$, filling
$2n+1$ transverse modes, the interferometer is actually composed
of $n$ disjoint interferometers, all giving rise to the same
longitudinal pattern.
If the energy spread of the longitudinal wave packet is large
enough, $\Delta k\gg\pi/\Delta l$, a longitudinal interference
pattern will form within the wave packet, as shown in
Fig. 3. This pattern, shown here for the
states $|0\rangle$ and $|1\rangle$, will be the same for all
transverse states $|2n\rangle$ and $|2n+1\rangle$. The two
density patterns will, at the exit of the interferometer, add up
to the same wave packet shape one would expect if the
interferometer was not there [11]. Looking at the total
wave packet, not distinguishing between the different transverse
levels, one only sees its envelope. The two transverse state
components will, however, propagate with different velocities, as
outlined above, and after some time they will rephase as shown in
Fig. 3. This will happen once enough time
has passed for $|\Delta v|$ to overcome the difference in the
position of the pattern peaks. Since $|\Delta v|$ is independent
of the incoming transverse mode, all patterns will re-phase at
the same time and position, and a multi transverse mode operation
could be achieved.
In an actual multi-mode experiment, the simplest input state will
be a thermal atomic cloud. Such a state is described by a mixed
density matrix.
The longitudinal states are obtained in the following way: At the
start of the experiment, we imagine a thermal atomic cloud
trapped in both transverse and longitudinal directions, with
transverse confinement by the same trapping frequency $\omega$ as
in the incoming guide. In the longitudinal direction, the initial
trap is approximated by an infinitely high box of width $L$. The
states we consider are the eigenfunctions of this trap. At time
$t=0$, one wall of the box is opened up and the longitudinal
states start propagating in the positive $z$ direction according
to their momentum distributions. Each longitudinal state
can be approximated by a wave packet with mean
momentum $k=(n+1)\pi/L$ and momentum spread $\Delta k=2\sqrt{\pi}/L$ [14].
We carry out a numerical calculation of the interference pattern
after the interferometer, starting from a thermal state as
described. We take into account the $k$-dependent phase shifts
acquired by the plane wave components of the longitudinal wave
packets, resulting in transitions between different transverse
states according to Eq. (3). The plane wave
components are slowed down or sped up, if necessary, to ensure
energy conservation. The emergence of an interference pattern,
despite the incoherent sum over the transverse and longitudinal
states, is confirmed by this numerical calculation. Figure
4 gives an example of an interference pattern
obtained with a typical “hot” atom ensemble with a temperature
of $200$ $\mu$K and guides with a trap frequency of
$10^{5}$s${}^{-1}$, already realized on an atom chip [12, 13].
Even though hundreds of transverse levels are populated, high
contrast fringes are observed.
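The basic mechanism behind such fringes can be illustrated by a back-of-the-envelope incoherent average: each longitudinal component $k$ exits the even channel with probability $\cos^{2}(k\Delta l/2)$, and the thermal average retains high contrast as long as $\Delta k\,\Delta l\ll\pi$. The numbers below are made up for illustration (dimensionless, not the 200 $\mu$K example), and the sketch omits the wave-packet propagation and energy bookkeeping of the full calculation:

```python
import numpy as np

# Incoherent average of the exit populations over a thermal spread of
# longitudinal momenta; each k component acquires dphi = k * dl and exits
# even/odd with probability cos^2, sin^2 as in Eq. (3).
rng = np.random.default_rng(0)
k = rng.normal(loc=20.0, scale=0.5, size=200_000)  # illustrative momentum spread

def populations(dl):
    p_even = np.mean(np.cos(k * dl / 2) ** 2)
    return p_even, 1.0 - p_even

for dl in np.linspace(0.0, np.pi / 10.0, 5):
    p0, p1 = populations(dl)
    print(f"dl={dl:.3f}  P_even={p0:.3f}  P_odd={p1:.3f}")
```

Scanning the path-length difference $\Delta l$ sweeps the populations through a full fringe, even though the $k$ components are summed incoherently.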
To summarize, we have shown that a multi-mode interferometer may
be realized in a two-dimensional geometry, using two symmetric
Y-shaped beam splitters. Furthermore, though beyond the scope
of the present work, we expect three-dimensional evolution to
exhibit qualitatively the same behavior under certain conditions.
As Y-shaped microfabricated beam splitters have been realized, we
expect the road to be open for the experimental realization of
robust guided matter wave interferometers.
We would like to thank Peter Zoller for enlightening discussions.
R.F. is grateful to Yoseph Imry for his insight into mesoscopic
systems. E.A. would like to thank Helmut Ritsch for his kind
hospitality. This work was supported by the Austrian Science
Foundation (FWF), project SFB 15-07, by the Istituto Trentino di
Cultura (ITC) and by the European Union, contract Nr.
IST-1999-11055 (ACQUIRE), HPMF-CT-1999-00211 and
HPMF-CT-1999-00235.
References
[1]
Proceedings of the International Workshop on Matter Wave
Interferometry, ed. by G. Badurek, H. Rauch, and A. Zeilinger
[Physica (Amsterdam) 151B, No. 1-2 (1988)]; Atom
Interferometry, ed. by P. Berman, Academic Press (1997).
[2]
For example see
A. Peters, K.Y. Chung and S. Chu, Nature 400, 849
(1999);
T.L. Gustavson, A. Landragin and M.A. Kasevich,
Class. Quantum Grav. 17, 2385 (2000); and
references therein.
[3]
Recently, several proposals for matter-wave
interferometers have been made in the context of splitting and
combining a microtrap ground state with a time dependent
potential:
E.A. Hinds, C.J. Vale and M.B. Boshier, Phys. Rev. Lett. 86,
1462 (2001);
W. Hänsel, J. Reichel, P. Hommelhoff and T. W. Hänsch,
quant-ph/0106162.
[4]
D. Cassettari, B. Hessmo, R. Folman, T. Maier, J. Schmiedmayer,
Phys. Rev. Lett. 85, 5483 (2000).
[5]
D. Müller, et al., Phys. Rev. A 63, 041602(R) (2001).
[6]
O. Houde et al., Phys. Rev. Lett. 85, 5543 (2000).
[7]
In 2D confinement the out-of-plane transverse
dimension is either subject to a much stronger confinement, or
the potential is separable. For an experimental realization, see
H. Gauck et al. Phys. Rev. Lett. 81, 5298 (1998);
T. Pfau priv. comm. (2000);
E.A. Hinds, M.G. Boshier, I.G. Hughes Phys. Rev. Lett. 80,
645 (1998);
R.J.C. Spreeuw, et al. Phys. Rev. A 61, 053604 (2000).
[8]
E. Buks et al., Nature 391, 871-874 (1998).
[9]
The numerical calculation is based on the split-operator method with
a pseudospectral method for derivatives. See e.g.
M. Feit, J. Fleck Jr, and A. Steiger, J. Comput. Phys. 47,
412 (1982);
B. M. Garraway, K.-A. Suominen, Rep. Prog. Phys 58, 365
(1995).
[10]
This is an advantage over four-port beam splitter designs relying
on tunneling through a barrier between two guides – see
E. Andersson, M. T. Fontenelle, and S. Stenholm, Phys. Rev. A
59, 3841 (1999).
In the latter, the splitting ratios for incoming wave
packets are very different for different transverse modes, since
the tunneling probability depends strongly on the energy of the
particle. Even for a single mode, the splitting amplitudes,
determined by the barrier width and height, are extremely
sensitive to experimental noise. By contrast, the operation
of the Y-shaped beam splitter is based on a symmetric barrier,
rising in the center of the transverse wave function, splitting
it independently of the mode number.
[11]
This is only approximately true, because inside the interferometer
there are different propagation velocities due to the different
potential energies of even and odd incoming states. In the
calculations, this small effect is included.
[12]
R. Folman et al., Phys. Rev. Lett. 84, 4749
(2000); current traps on our chips achieve $\omega>10^{6}$ s${}^{-1}$.
[13]
For other atom chip experiments see
J. Reichel et al., Phys. Rev. Lett. 83, 3398 (1999);
D. Müller et al., Phys. Rev. Lett. 83, 5194 (1999);
N.H. Dekker et al., Phys. Rev. Lett. 84, 1124 (2000).
[14]
In this model, and for a reasonable source size $L$, the wave
packet energy spread will be relatively small and no complete
pattern of several oscillations is expected to be observable
within a single packet. This however, does not alter the final
result as no coherence between different longitudinal
$k$-components is needed. The only coherence evoked in the
interferometer is that between the right and left paths, which in
turn requires that a single wave packet have a coherence length
longer than the path length difference $\Delta l$. The number of
observed oscillations will then only depend on the longitudinal
energy spread of the source.
Linear quantum state diffusion for non-Markovian
open quantum systems
Walter T. Strunz (e-mail: [email protected])
Department of Physics, Queen Mary and Westfield College, University
of London, Mile End Road, London E1 4NS, United Kingdom
(October 7, 1996)
Abstract
We demonstrate the relevance of complex Gaussian stochastic
processes to the stochastic state vector description of non-Markovian
open quantum systems. These processes express the general
Feynman-Vernon path integral propagator for open quantum systems as the
classical ensemble average over stochastic pure state propagators
in a natural way.
They are the coloured generalization of complex Wiener processes
in quantum state diffusion stochastic Schrödinger equations.
PACS numbers: 03.65.Bz, 05.40.+j, 42.50.Lc
Preprint: QMW-PH-96-17
I Introduction
The reduced density operator of a quantum subsystem
is obtained from the total density operator by
tracing over the environmental degrees of freedom.
Feynman and Vernon [1] derive
the propagator ${\cal J}(x_{f},x^{\prime}_{f},t;x_{0},x^{\prime}_{0},0)$ of the reduced density
matrix $\rho(x,x^{\prime},t)$ in terms of a
double path integral [see also Feynman and Hibbs [2]]
$${\cal J}(x_{f},x^{\prime}_{f},t;x_{0},x^{\prime}_{0},0)=\int_{x_{0},0}^{x_{f},t}{\cal D}[x]\int_{x^{\prime}_{0},0}^{x^{\prime}_{f},t}{\cal D}[x^{\prime}]\exp\left\{\frac{i}{\hbar}(S[x]-S[x^{\prime}])\right\}\;{\cal F}[x,x^{\prime}],$$
(1)
where $S[x]$ is the classical action functional of the subsystem alone.
The influence functional ${\cal F}[x,x^{\prime}]$
combines the effects of the environmental initial state, its Hamiltonian and
the interaction Hamiltonian between subsystem and environment, on the
subsystem.
It is also shown in [1, 2] that the most
general influence functional which is at most quadratic in
the coordinates must be of the form
$${\cal F}[x,x^{\prime}]=\exp\left\{-\int_{0}^{t}d\tau\int_{0}^{\tau}d\sigma\;[x_{\tau}-x^{\prime}_{\tau}][\alpha(\tau,\sigma)x_{\sigma}-\alpha^{*}(\tau,\sigma)x^{\prime}_{\sigma}]\right\},$$
(2)
with a positive, Hermitian kernel
$$\alpha(\tau,\sigma)=\alpha^{*}(\sigma,\tau).$$
(3)
I.1 Real kernels
Feynman and Vernon [1, 2] emphasize that if the kernel is not
only Hermitian but real, the influence functional
can be obtained from a real Gaussian stochastic process
$F(\tau)$
(a fluctuating force), with statistical properties
$$\langle F(\tau)\rangle=0\,,\qquad\langle F(\tau)F(\sigma)\rangle=\alpha(\tau,\sigma)=\alpha^{*}(\tau,\sigma).$$
(4)
Here, and throughout the paper, $\langle\ldots\rangle$ denotes the classical
ensemble average over the stochastic processes.
Using the general formula
$$\langle\exp\int_{0}^{t}d\tau[f(\tau)F(\tau)]\rangle=\exp\frac{1}{2}\int_{0}^{t}d\tau\int_{0}^{t}d\sigma[f(\tau)\alpha(\tau,\sigma)f(\sigma)]$$
(5)
for arbitrary functions $f(\tau)$,
the propagator ${\cal J}$ for
the density operator can be stochastically decoupled
into stochastic pure-state propagators $G_{F}$,
$${\cal J}(x_{f},x^{\prime}_{f},t;x_{0},x^{\prime}_{0},0)=\langle G_{F}(x_{f},t;x_{0},0)G^{*}_{F}(x^{\prime}_{f},t;x^{\prime}_{0},0)\rangle,$$
(6)
with
$$G_{F}(x_{f},t;x_{0},0)=\int_{x_{0},0}^{x_{f},t}{\cal D}[x]\exp\left\{\frac{i}{\hbar}S[x]+i\int_{0}^{t}d\tau\,x_{\tau}F(\tau)\right\}.$$
(7)
The total action functional in the exponent of the path integral
propagator (7) now represents the
additional influence of the stochastic force
$\hbar F(\tau)$ on the subsystem.
Thus, real kernels are equivalent to ordinary unitary, but stochastic,
quantum dynamics.
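Identity (5), which underlies this decoupling, can be verified by brute-force Monte Carlo on a time grid. The sketch below does so for an illustrative stationary kernel $\alpha(\tau,\sigma)=e^{-|\tau-\sigma|}$ and test function $f(\tau)=\sin\tau$ (both choices are assumptions for the demo, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize [0, t]; alpha(tau, sigma) = exp(-|tau - sigma|) is an
# illustrative positive, real kernel (not a choice made in the paper)
n, t = 30, 1.0
dt = t / n
tau = (np.arange(n) + 0.5) * dt
C = np.exp(-np.abs(tau[:, None] - tau[None, :]))

# Sample the real Gaussian process F(tau) with covariance alpha(tau, sigma)
L = np.linalg.cholesky(C + 1e-12 * np.eye(n))
F = rng.standard_normal((100_000, n)) @ L.T

# Left-hand side of (5): ensemble average of exp(int f F dtau)
f = np.sin(tau)
lhs = np.mean(np.exp(F @ f * dt))

# Right-hand side of (5): exp((1/2) int int f alpha f)
rhs = np.exp(0.5 * f @ C @ f * dt**2)
print(lhs, rhs)
```

With $10^{5}$ samples the two numbers agree within Monte Carlo error.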
I.2 Complex kernels
Genuine environments, however, not only induce fluctuations
in the subsystem but also dissipation. These are represented by complex
kernels $\alpha(\tau,\sigma)$
and can therefore not be simulated
by a stochastic potential. As an example, Feynman and Vernon
[1, 2]
derive the influence functional (2) analytically
for the case of a linear coupling to a heat bath of
harmonic oscillators with temperature $T$.
Caldeira and Leggett further elaborate
this approach in [3] [see also Grabert [4] and Weiss
[5]], resulting in the complex
quantum Brownian motion kernel
$$\alpha(\tau,\sigma)=\frac{\gamma m}{\pi\hbar}\int_{0}^{\Omega}d\omega\;\omega\left\{\coth\left(\frac{\hbar\omega}{2kT}\right)\cos\left(\omega(\tau-\sigma)\right)-i\sin\left(\omega(\tau-\sigma)\right)\right\},$$
(8)
where $\Omega$ is a bath cut-off frequency, $m$ the mass of
the particle and $\gamma$ the damping rate.
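The kernel (8) is easy to evaluate by direct quadrature, and doing so makes its Hermiticity (3) explicit. The sketch below uses units $\hbar=k=m=1$ and illustrative parameter values (temperature, cut-off, and damping are assumptions for the demo):

```python
import numpy as np

# Numerical evaluation of the quantum Brownian motion kernel (8);
# units hbar = k = m = 1 and all parameter values are illustrative only
def alpha(tau, sigma, T=2.0, Omega=50.0, gamma=1.0, n=200_000):
    dw = Omega / n
    w = (np.arange(n) + 0.5) * dw        # midpoint rule in omega
    d = tau - sigma
    integrand = w * (np.cos(w * d) / np.tanh(w / (2.0 * T)) - 1j * np.sin(w * d))
    return gamma / np.pi * np.sum(integrand) * dw

# Hermiticity (3): alpha(tau, sigma) = alpha(sigma, tau)*
a, b = alpha(0.3, 0.1), alpha(0.1, 0.3)
print(a, np.conj(b))
```

Since the real part of the integrand is even in $\tau-\sigma$ and the imaginary part is odd, the two printed values coincide.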
Remarkably, it has been shown only recently by Diósi [6] that
even in the general
case of a complex kernel like (8), the Feynman-Vernon
propagator (1)
allows a stochastic decoupling similar
to (6). His result is based on an intricate construction of
a real Gaussian process whose correlation function
is given only implicitly in terms of $\alpha$. From the point of view
of applications, however,
these processes appear rather difficult to use. The
deeper reason behind
this construction comes from relativistic measurement theory [7].
It is the aim of this paper to present an alternative and much
simpler stochastic
decoupling of the general Feynman-Vernon influence functional, based on
complex Gaussian stochastic processes. In their white noise
version, these processes have been introduced
from symmetry considerations
in the quantum state diffusion (QSD) stochastic Schrödinger
equation [8, 9, 10, 11, 12], describing
Markovian open quantum systems. They also appear in measurement
theories, particularly in cases where the apparatus is represented
by a Bosonic reservoir [see [13] for more references].
Markovian stochastic state vector methods
have proven indispensable for many applications, particularly
in quantum optics [14].
Our result represents a first step
towards an applicable non-Markovian stochastic state vector theory,
required for instance in solid state theory [4, 5].
In a recent related work, Kleinert and Shabanov [15]
derive operator quantum Langevin equations
in the Heisenberg picture corresponding to the propagator (1).
In this paper, however, we stick to path integrals and state vectors
in the Schrödinger picture.
We review basic properties of complex Gaussian processes
in Sect. 2. In Sect. 3 we show how they enable the
stochastic decoupling of the general Feynman-Vernon propagator,
resulting in linear
non-Markovian quantum state diffusion.
We conclude with a short discussion and further comments in the
final Sect. 4.
II Complex Gaussian stochastic processes
Here we review complex Gaussian processes $Z(\tau)$ with
stochastic properties
$$\langle Z(\tau)\rangle=0\,,\qquad\langle Z(\tau)Z(\sigma)\rangle=0\,,\qquad\langle Z(\tau)Z^{*}(\sigma)\rangle=\gamma(\tau,\sigma).$$
(9)
Such processes $Z(\tau)$ can only be constructed
if the complex correlation $\gamma$ is
Hermitian and positive, which is automatically
fulfilled by the quantum environments we are interested in.
The processes $Z(\tau)$ with properties (9) are the natural
coloured generalization of complex Wiener processes $\xi(\tau)$
with corresponding Itô increments $d\xi$ with properties
$$(d\xi)^{2}=0\;\mbox{and}\;|d\xi|^{2}=dt,$$
(10)
as they arise from symmetry considerations
in the quantum state diffusion theory
of Markovian open quantum systems [8, 9, 10, 11].
Complex processes with properties (9) are also common in quantum
measurement theories [13].
Writing $Z(\tau)=X(\tau)+iY(\tau)$ we find that conditions
(9) are fulfilled for
$$\langle X(\tau)X(\sigma)\rangle=\langle Y(\tau)Y(\sigma)\rangle=\frac{1}{2}\,\mbox{Re}\left\{\gamma(\tau,\sigma)\right\}\,,\qquad\langle X(\tau)Y(\sigma)\rangle=-\langle Y(\tau)X(\sigma)\rangle=-\frac{1}{2}\,\mbox{Im}\left\{\gamma(\tau,\sigma)\right\}.$$
(11)
We see that the crucial advantage of complex processes is that,
regarding the real processes $X(\tau)$ and $Y(\tau)$ as one joint real Gaussian
process $(X(\tau),Y(\tau))$, they allow a non-vanishing cross-correlation
between their real and imaginary parts. It is this cross-correlation that
gives rise to the imaginary part of the correlation function
$\gamma(\tau,\sigma)$ in (9).
The relevant formula for complex processes $Z(\tau)$ that replaces
formula (5) for real processes is
$$\langle\exp\int_{0}^{t}d\tau\left[f(\tau)Z(\tau)+g(\tau)Z^{*}(\tau)\right]\rangle=\exp\int_{0}^{t}d\tau\int_{0}^{t}d\sigma\left[f(\tau)\gamma(\tau,\sigma)g(\sigma)\right],$$
(12)
valid for arbitrary functions $f(\tau)$ and $g(\tau)$.
Notice that this implies
$$\langle\exp\int_{0}^{t}d\tau[f(\tau)Z(\tau)]\rangle=1$$
(13)
in contrast to equation (5) for real processes.
If one wants to generate such processes $Z(\tau)$ numerically, one
can use the construction
$$Z(t)=\int d\tau\;\;\gamma_{\frac{1}{2}}(t,\tau)\xi(\tau),$$
(14)
where $\xi(\tau)$ is an easily generated white complex process and
$$\int d\tau\;\gamma_{\frac{1}{2}}(t,\tau)\gamma_{\frac{1}{2}}(\tau,s)=\gamma(t,s).$$
(15)
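On a discrete time grid, the construction (14)-(15) amounts to applying a matrix square root of the sampled kernel to complex white noise. The sketch below (with an assumed, illustrative Hermitian positive kernel $\gamma$, chosen only for the demo) generates such trajectories and checks the defining correlations (9):

```python
import numpy as np

rng = np.random.default_rng(1)

# Time grid and an illustrative Hermitian, positive kernel gamma
n, dt = 20, 0.1
t = np.arange(n) * dt
tt, ss = np.meshgrid(t, t, indexing="ij")
gamma = np.exp(-np.abs(tt - ss)) * np.exp(-2j * (tt - ss))

# gamma_{1/2} on the grid, discretizing (15):
# sum_j G[i,j] G[j,k] dt = gamma[i,k]  =>  G = sqrt(gamma/dt)
w, V = np.linalg.eigh(gamma / dt)
G_half = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

# White complex noise with <xi_j xi_k*> = delta_jk/dt, <xi_j xi_k> = 0
M = 50_000
xi = (rng.standard_normal((M, n)) + 1j * rng.standard_normal((M, n))) / np.sqrt(2 * dt)
Z = (xi @ G_half.T) * dt                       # discrete analogue of (14)

ZZstar = Z.T @ Z.conj() / M                    # estimates <Z(t) Z*(s)> -> gamma
ZZ = Z.T @ Z / M                               # estimates <Z(t) Z(s)>  -> 0
print(np.max(np.abs(ZZstar - gamma)), np.max(np.abs(ZZ)))
```

Both deviations shrink like $M^{-1/2}$ with the number of samples, confirming (9) for the generated trajectories.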
III Stochastic decoupling of the Feynman-Vernon influence functional
We show how the processes $Z(\tau)$
naturally lead to a stochastic decoupling of the Feynman-Vernon
influence functional. We choose processes with
the complex conjugate of the Feynman-Vernon kernel from (2) as
correlation function,
$$\gamma(\tau,\sigma)=\alpha^{*}(\tau,\sigma).$$
(16)
According to formula (12) we find
$$\langle\exp\int_{0}^{t}d\tau\left[x_{\tau}Z(\tau)+x^{\prime}_{\tau}Z^{*}(\tau)\right]\rangle=\exp\int_{0}^{t}d\tau\int_{0}^{t}d\sigma\;[x_{\tau}\alpha^{*}(\tau,\sigma)x^{\prime}_{\sigma}]=\exp\int_{0}^{t}d\tau\int_{0}^{\tau}d\sigma\;\left[x_{\tau}\alpha^{*}(\tau,\sigma)x^{\prime}_{\sigma}+x^{\prime}_{\tau}\alpha(\tau,\sigma)x_{\sigma}\right],$$
(17)
where we used (3) to obtain the second equality.
This last expression is just the part of the influence
functional (2) that couples $x$ and $x^{\prime}$. Notice, however,
that $x$ and $x^{\prime}$ are decoupled on the left-hand side of equation (17).
We conclude that we can express the propagator of the density matrix
in the decoupled form
$${\cal J}(x_{f},x^{\prime}_{f},t;x_{0},x^{\prime}_{0},0)=\langle G_{Z}(x_{f},t;x_{0},0)G^{*}_{Z}(x^{\prime}_{f},t;x^{\prime}_{0},0)\rangle,$$
(18)
with the stochastic path integral propagator for state vectors,
$$G_{Z}(x_{f},t;x_{0},0)=\int_{x_{0},0}^{x_{f},t}{\cal D}[x]\exp\left\{\frac{i}{\hbar}S[x]+\int_{0}^{t}d\tau\,x_{\tau}Z(\tau)-\int_{0}^{t}d\tau\int_{0}^{\tau}d\sigma\left[x_{\tau}\alpha(\tau,\sigma)x_{\sigma}\right]\right\}.$$
(19)
Thus, the Feynman-Vernon path integral propagator (1) for
the density operator is equivalent to the ensemble of pure state
propagators (19).
This is the main result of this paper. The non-local action functional
in (19) reflects the non-Markovian nature of the problem.
Result (18) with (19) allows us to describe the
non-Markovian dynamics of the
subsystem in terms of an ensemble of stochastic state vectors
$$|\psi_{Z}(t)\rangle=G_{Z}(t;0)|\psi_{0}\rangle.$$
(20)
If we assume an initial
pure state
$$\rho_{0}=P_{\psi_{0}},$$
(21)
where we use the notation $P_{\psi}=|\psi\rangle\langle\psi|$
for pure state projectors,
we recover the density operator at time $t$ by taking the ensemble average
according to (18),
$$\rho(t)=\langle P_{\psi_{Z}(t)}\rangle.$$
(22)
In Markovian subsystem dynamics [8, 9, 10, 11],
the time evolution of the stochastic state
vectors is given by a stochastic Schrödinger equation.
In fact, in the white noise case
$$\alpha(t,s)=\kappa\delta(t-s),$$
(23)
the path integral propagator (19) becomes local in time,
$$G_{\xi}(x_{f},t;x_{0},0)=\int_{x_{0},0}^{x_{f},t}{\cal D}[x]\exp\left\{\frac{i}{\hbar}S[x]+\sqrt{\kappa}\int_{0}^{t}d\xi(\tau)\,x_{\tau}-\frac{\kappa}{2}\int_{0}^{t}d\tau\,x_{\tau}^{2}\right\},$$
(24)
with a delta-correlated normalized complex process
$\xi(t)=\kappa^{-\frac{1}{2}}Z(t)$.
This is the propagator of the linear quantum state
diffusion stochastic Schrödinger equation
$$|d\psi\rangle=\left(-\frac{i}{\hbar}\hat{H}-\frac{\kappa}{2}{\hat{x}}^{2}\right)|\psi\rangle dt+\sqrt{\kappa}{\hat{x}}|\psi\rangle d\xi$$
(25)
with complex Itô increments (10). The path integral theory
of general linear quantum state diffusion equations was developed
in [12], where a rigorous definition of stochastic path
integrals like (24) is given.
As the non-Markovian generalization
of (24), the propagator (19) represents
linear non-Markovian quantum state diffusion.
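In the white-noise case, the equivalence of the trajectory ensemble (20)-(22) with the reduced dynamics can be checked numerically, since the Markovian linear QSD equation reproduces a Lindblad master equation on average. The sketch below does this for a two-level toy model standing in for (25), with $\hat{H}\to\sigma_{x}$, $\hat{x}\to\sigma_{z}$, $\kappa=1$, $\hbar=1$ (all illustrative choices for the demo), using a vectorized Euler-Maruyama integrator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-level stand-in for eq. (25): H -> sigma_x, coupling -> sigma_z
H = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Lop = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
kappa, dt, steps, M = 1.0, 2e-3, 250, 4000
A = -1j * H - 0.5 * kappa * (Lop.conj().T @ Lop)    # deterministic part

# Euler-Maruyama for the linear QSD equation, M trajectories at once
psi = np.zeros((M, 2), dtype=complex)
psi[:, 0] = 1.0
for _ in range(steps):
    # complex Ito increments with <dxi dxi*> = dt, <dxi dxi> = 0, cf. (10)
    dxi = (rng.standard_normal((M, 1)) + 1j * rng.standard_normal((M, 1))) * np.sqrt(dt / 2)
    psi = psi + dt * (psi @ A.T) + np.sqrt(kappa) * dxi * (psi @ Lop.T)

# Ensemble average of pure-state projectors, eq. (22)
rho_qsd = np.einsum("mi,mj->ij", psi, psi.conj()) / M

# Reference: RK4 integration of the corresponding Lindblad master equation
def lindblad_rhs(r):
    return (-1j * (H @ r - r @ H)
            + kappa * (Lop @ r @ Lop.conj().T
                       - 0.5 * (Lop.conj().T @ Lop @ r + r @ Lop.conj().T @ Lop)))

rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
for _ in range(steps):
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(np.max(np.abs(rho_qsd - rho)))
```

With 4000 trajectories the ensemble average reproduces the master-equation solution to a few percent; the residual combines Monte Carlo error with the $O(dt)$ weak error of the Euler-Maruyama scheme.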
IV Conclusions
We use complex Gaussian stochastic processes to find
a non-Markovian quantum state diffusion theory
which is equivalent to the Feynman-Vernon density matrix formulation.
Our result offers a helpful tool, since
state vectors are simpler than density operators.
Such a reduction in complexity is essential to tackle many
realistic problems and is most significant numerically, as
is well recognized in the Markovian case, for instance in quantum optics
[14].
For practical use, one must overcome the
difficulties arising from the fact that the stochastic state vectors
$|\psi_{Z}\rangle$ of (20) are not normalized. Moreover, an efficient
algorithm to propagate state vectors
with the non-local path integral (19) remains to be established.
From a quantum foundational point of view
the question arises in what sense the stochastic
state vectors $|\psi_{Z}\rangle$ can be related to the non-Markovian dynamics
of individual open quantum systems.
In the well established Markovian
case, the use of white complex noise arose from symmetry considerations
in the space of the environment operators of the corresponding master
equation [8, 9, 10, 11].
For a single, Hermitian environment operator like $\hat{x}$
in (25), however,
this symmetry argument does not apply. In this paper we have shown
that even in this case, complex Gaussian processes appear
indispensable as soon as non-Markovian features are taken into
account. These independent arguments underline their relevance
to a stochastic description of general open quantum systems.
V Acknowledgment
I would like to thank Ian C. Percival for helpful discussions and advice.
I am also grateful to Lajos Diósi and Todd A. Brun for detailed comments
on the manuscript. This work was made possible by a Feodor Lynen fellowship
of the Alexander von Humboldt foundation.
References
[1]
R. P. Feynman and F. L. Vernon,
Ann. Phys. 24, 118 (1963).
[2]
R. P. Feynman and A. R. Hibbs,
Quantum mechanics and path integrals
(McGraw-Hill, New York, 1965).
[3]
A. O. Caldeira and A. J. Leggett,
Physica A 121, 587 (1983).
[4]
H. Grabert et al.,
Phys. Rep. 168, 115 (1988).
[5]
U. Weiss, Quantum dissipative systems
(World Scientific, Singapore, 1993).
[6]
L. Diósi, in
Stochastic evolution of quantum states in
open quantum systems and in measurement processes, eds. L. Diósi and
B. Lukács, (World Scientific, Singapore, 1994).
[7]
L. Diósi,
Phys. Rev. A 42, 5086 (1990).
[8]
N. Gisin and I. C. Percival,
Phys. Lett. A 167, 315 (1992).
[9]
N. Gisin and I. C. Percival,
J. Phys. A: Math. Gen. 25, 5677 (1992).
[10]
N. Gisin and I. C. Percival,
J. Phys. A: Math. Gen. 26, 2233 (1993).
[11]
N. Gisin and I. C. Percival,
J. Phys. A: Math. Gen. 26, 2245 (1993).
[12]
W. T. Strunz,
Phys. Rev. A 54, 2664 (1996).
[13]
V. P. Belavkin et al. (eds.),
Quantum communications and measurement,
(Plenum Press, New York, 1995).
[14]
H. Carmichael,
An open system approach to quantum optics
(Springer, Berlin, 1994).
[15]
H. Kleinert and S.V. Shabanov,
Phys. Lett. A 200, 224 (1995).
Sign reversal diode effect in superconducting Dayem nanobridges
Daniel Margineda
[email protected]
NEST Istituto Nanoscienze-CNR and Scuola Normale Superiore, I-56127, Pisa, Italy
Alessandro Crippa
NEST Istituto Nanoscienze-CNR and Scuola Normale Superiore, I-56127, Pisa, Italy
Elia Strambini
NEST Istituto Nanoscienze-CNR and Scuola Normale Superiore, I-56127, Pisa, Italy
Yuri Fukaya
SPIN-CNR, IT-84084 Fisciano (SA), Italy
Maria Teresa Mercaldo
Dipartimento di Fisica “E. R. Caianiello”, Università di Salerno, IT-84084 Fisciano (SA), Italy
Mario Cuoco
SPIN-CNR, IT-84084 Fisciano (SA), Italy
Francesco Giazotto
[email protected]
NEST Istituto Nanoscienze-CNR and Scuola Normale Superiore, I-56127, Pisa, Italy
Superconducting Diode Effect, Superconducting electronics
Supercurrent diodes are nonreciprocal electronic elements whose switching current depends on their flow direction.
Recently, a variety of composite systems combining different materials and engineered asymmetric superconducting devices have been proposed.
Yet, ease of fabrication and a tunable sign of supercurrent rectification combined with large efficiency have not been assessed in a single platform so far.
Here, we demonstrate that all-metallic superconducting Dayem nanobridges naturally exhibit nonreciprocal supercurrents in the presence of an external magnetic field, with a rectification efficiency up to $\sim 27\%$.
Our niobium nanostructures are tailored so that the diode polarity can be tuned by varying the amplitude of an out-of-plane magnetic field or the temperature in a regime without magnetic screening.
We show that sign reversal of the diode effect may arise from
the high-harmonic content of the current-phase relation of the nanoconstriction, in combination with vortex phase windings present in the bridge or with an anomalous phase shift compatible with anisotropic spin-orbit interactions.
Non-reciprocal charge transport is an essential element in modern electronics as a building block for multiple components such as rectifiers, photodetectors, and logic circuits.
For instance, pn-junctions and Schottky-barrier devices are archetypal semiconductor-based examples of systems, known as diodes, with direction-selective charge propagation. Their operation stems from the spatial asymmetry of the heterojunction that provides inversion symmetry breaking.
Likewise, dissipationless rectification refers to the asymmetric switching of the critical current $(I_{sw})$ required to turn a superconductor into the normal state depending on the current bias polarity.
Breaking both inversion and time-reversal symmetry, which are preserved in conventional superconductors, is the foundational aspect to enable the diode effect, as recently observed in superconducting materials wak17 ; mer23 ; lin22 ; paolucci2023gate and heterostructures and20 ; pal22 ; baur22 ; bau22 ; jeo22 ; pal22 ; sun23 ; lot23 ; tur22 .
Recent experimental findings have boosted a number of theoretical investigations
in superconductors dai22 ; ber22 ; yua22 ; he22 and Josephson junctions (JJs) mis21 ; dav22 ; zha22 .
In particular, several mechanisms have been proposed to account for the supercurrent diode effect (SDE). On the one hand, those based on intrinsic depairing currents focus on finite momentum pairing that arise from the combination of spin-orbit coupling and Zeeman field dai22 ; ber22 ; yua22 ; he22 ; Scammell_2022 , or from Meissner currents dav22 .
On the other hand, other works underline the role of Abrikosov vortices, magnetic fluxes, and screening currents as key elements for establishing non-reciprocal charge transport in superconductors wambaugh99 ; vodolazov05 ; villegas05 ; devondel05 ; car09 ; cerbu_2013 , such as in systems with trapped Abrikosov vortices lyu21 ; gol22 or in micron-sized Nb-based strips with asymmetric edges sur22 ; cha23 ; hou22 ; sat23 .
So far, most research efforts have aimed at realizing a SDE that maximizes the rectification efficiency, while a change of its polarity has been reported in a few cases only gol22 ; lot23 ; pal22 ; cos22 ; sun23 ; kawarazaki2022 . The SDE sign reversal has been interpreted as a consequence of
finite momentum pairing pal22 ; cos22 ; kawarazaki2022 ; lot23 requiring in-plane magnetic fields, or of diamagnetic currents and Josephson vortices gol22 ; sun23 , and has also been ascribed to vortex ratchet and asymmetric pinning effects Gillijns07 ; He19 ; Ideue20 ; Ji21 . All these outcomes point to the need for effective control over the polarity of the SDE, and for
its implementation in a simple and monolithic platform suitable for nanoscale miniaturization, which has not been
accomplished yet.
Here, we experimentally demonstrate a sign-reversal tunable SDE in elemental superconducting weak links made of niobium (Nb). Nano-sized constrictions of Nb realize Dayem bridges whose switching currents for positive and negative sweep directions, $I^{+}_{sw}$ and $I^{-}_{sw}$, respectively, differ in absolute value. This difference can be tuned both in amplitude and in sign by an out-of-plane magnetic field $(B_{z})$, without inverting the polarity of $B_{z}$.
Thermal effects can lead to two different energy scales for the maximal amplitude and the sign reversal of the diode efficiency.
We show that sign reversal of the non-reciprocal response may arise from the phase shift due to the vortex phase winding or from spin-orbit effects due to the material granularity, in either case jointly with a few-harmonic content of the current-phase relation (CPR) of the weak link.
I Metallic diode architectures
We analyze two different geometries of Nb Dayem bridges, i.e., weak links made of a constant-thickness and all-metallic constriction between two superconducting banks lik79 . The schematics of the electronic circuitry and false-color scanning electron micrographs of the devices are shown in Fig. 1a.
In the first type of samples, 25-nm-thick micrometer-wide banks are connected via a link whose length $l$ is $\sim 80$ nm and width $w\sim 180$ nm.
The second type consists of 55-nm-thick banks connected via a quasi-one-dimensional wire with $l\simeq 1$ $\mu$m, and $w\simeq 80$ nm.
Hereafter, we shall refer to the first and second type of bridges as “short” and “long”, respectively.
Both device families are patterned through a single electron-beam lithography step followed by sputter deposition of the Nb thin film and lift-off. A 4-nm-thick Ti layer is pre-sputtered for adhesion purposes.
The differential resistance $R=dV/dI$ versus temperature $T$ of two representative bridges is shown in Fig. 1b.
The first abrupt reduction of $R$ marks the critical temperature of the Nb films $T_{TF}\simeq 8.1(7.9)$ K for the 55(25)-nm-thick sample.
The resistance drops to zero at the critical temperature of the weak link $(T_{c})$, which, along with its normal-state resistance $R_{N}$, strongly depends on the geometry lik79 . While the “short” bridge exhibits $R_{N}\sim$ 40 $\Omega$, the “long” one has $R_{N}\sim$ 270 $\Omega$.
Below $T_{c}$, dissipationless transport occurs in the bridges owing to the supercurrent of Cooper pairs.
The temperature dependence of the switching current $I_{sw}$ of both devices is displayed in Fig. 1c.
From the fit to the Bardeen equation $I_{sw}(T)=I_{sw}^{0}[1-(\frac{T}{T_{c}})^{2}]^{\frac{3}{2}}$,
we extract a zero-temperature switching current $I_{sw}^{0}\simeq 720$ $\mu$A and a critical temperature $T_{c}^{S}\simeq 4.3$ K for the “short” bridge.
Similarly, for the “long” weak link we obtain $I_{sw}^{0}\simeq 42\,\mu$A and $T_{c}^{L}\simeq 2.1$ K.
From these values, we determine a zero-temperature BCS energy gap $\Delta_{0}$=1.764 $k_{B}T_{c}^{S(L)}\simeq$ 650(320) $\mu$eV for the “short”(“long”) bridge, where $k_{B}$ is the Boltzmann constant.
For the “long” bridge, we deduce a superconducting coherence length $\xi_{0}=\sqrt{\hbar l/(R_{N}wte^{2}N_{F}\Delta_{0})}\simeq$ 11 nm, where $t$ is the film thickness, $N_{F}\simeq 5.33\times 10^{47}J^{-1}m^{-3}$ is the density of states at the Fermi level of Nb jan88 , and $e$ is the electron charge.
Similarly, we can evaluate the London penetration depth $\lambda_{L}=\sqrt{\hbar R_{N}wt/(\pi l\mu_{0}\Delta_{0})}\simeq$ 790 nm, where $\mu_{0}$ is the vacuum magnetic permeability.
Since $w,t\ll\lambda_{L}$, the bridges can be uniformly penetrated by an external magnetic field.
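The Bardeen fit quoted above linearizes after raising $I_{sw}$ to the power $2/3$, since $I_{sw}^{2/3}=\left(I_{sw}^{0}\right)^{2/3}\left[1-(T/T_{c})^{2}\right]$ is linear in $T^{2}$. The sketch below reproduces the extraction on synthetic data generated from the quoted "short"-bridge values (not the measured dataset), together with the derived gap $\Delta_{0}=1.764\,k_{B}T_{c}$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic switching-current data from the Bardeen formula used in the
# text, with the quoted "short"-bridge values plus 1% multiplicative noise
I0_true, Tc_true = 720.0, 4.3                    # uA, K
T = np.linspace(0.3, 3.8, 15)
Isw = I0_true * (1 - (T / Tc_true) ** 2) ** 1.5 * (1 + 0.01 * rng.standard_normal(T.size))

# Linearized fit: Isw^(2/3) = a + b*T^2 with a = I0^(2/3), b = -a/Tc^2
b, a = np.polyfit(T**2, Isw ** (2 / 3), 1)       # slope (negative), intercept
Tc_fit = np.sqrt(-a / b)
I0_fit = a ** 1.5

# Zero-temperature BCS gap as in the text: Delta0 = 1.764 k_B Tc
kB = 8.617e-5                                    # eV/K
Delta0 = 1.764 * kB * Tc_fit * 1e6               # ueV
print(I0_fit, Tc_fit, Delta0)
```

The recovered values land close to the quoted $I_{sw}^{0}\simeq 720$ $\mu$A, $T_{c}^{S}\simeq 4.3$ K, and $\Delta_{0}\simeq 650$ $\mu$eV.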
The current vs voltage $(IV)$ characteristics of the “short” and “long” bridges are shown in Fig. 1d,e, respectively, for selected values of bath temperature.
The devices show an abrupt transition to the normal state at the switching current $I_{sw}$, and display the typical hysteresis of metallic junctions which originates from Joule heating induced in the bridge when the bias current is swept back from the resistive to the dissipationless state sko76 .
II “Short” Dayem bridge diode performance
Let us now discuss how the “short” Dayem bridge in Fig.1a can be used as a supercurrent diode.
Non-reciprocal dissipationless transport is revealed by comparing the switching currents while sweeping the biasing current from zero to positive values ($I_{sw}^{+}$) or from zero to negative values ($I_{sw}^{-}$) in the presence of an out-of-plane magnetic field $B_{z}$.
The switching currents at $T=0.3$ K are reported in Fig. 2a. The magnetic field increasingly reduces the superconducting gap and thereby both the switching currents.
A linear decrease of $I_{sw}^{+}$ and $|I_{sw}^{-}|$ with $B_{z}$ is observed up to $\sim 0.07$ T.
At larger fields, the dependence of the switching currents on $B_{z}$ is sublinear. Notably, both $I_{sw}(B_{z})$ are not symmetric with respect to the magnetic field ($I_{sw}(B_{z})\neq I_{sw}(-B_{z})$), while the symmetry relation $I_{sw}^{+}(B_{z})\simeq-I_{sw}^{-}(-B_{z})$ is respected within the small experimental fluctuations, as theoretically expected.
This symmetry relation is further confirmed in the switching currents difference $\Delta I_{sw}\equiv I_{sw}^{+}-|I_{sw}^{-}|$ displaying an odd-in-$B_{z}$ superconducting diode effect ($\Delta I_{sw}(B_{z})\simeq-\Delta I_{sw}(-B_{z})$) as shown in Figure 2b. $\Delta I_{sw}$ is characterized by a maximum at $B_{max}\simeq 0.05$ T and a sign inversion at $B_{R}\simeq 0.1$ T where $I_{sw}^{+}$ and $|I_{sw}^{-}|$ have a crossing (see Fig. 2a). From now on, $B_{max}$ indicates the position in field of the rectification peak.
Two $IV$ curves, recorded for magnetic fields lower and larger than $B_{R}$, are plotted in Fig. 2c to emphasize the sign change in the rectification.
Nonreciprocal transport can be conveniently quantified by the rectification efficiency defined as $\eta=\frac{I_{sw}^{+}-|I_{sw}^{-}|}{I_{sw}^{+}+|I_{sw}^{-}|}$.
Figure 2d shows the evolution of $\eta$ versus $B_{z}$ and $T$. $\eta(B_{z})$ is substantially unaffected by thermal effects up to $T\simeq 1.75$ K $=0.41\,T_{c}^{S}$ where a maximum rectification $\eta_{max}\sim 27\%$ is obtained. The evolution of $\eta_{max}$ in temperature is displayed in the top panel of Fig. 2e (left vertical axis).
In addition, we parametrize the diode sensitivity to the magnetic field in the vicinity of the abrupt sign change as $\Gamma=\eta_{max}/(|B_{max}-B_{R}|)$.
A maximum value $\Gamma\sim 650$ T${}^{-1}$ is achieved around $2.25$ K (see Fig. 2e, top panel and right vertical axis).
At higher temperatures, the quantities $\eta_{max}$, $\Gamma$, and the characteristic magnetic fields $B_{max}$ and $B_{R}$ related to rectification (see Fig. 2e, bottom panel)
all decrease in a similar fashion.
The full profile of the rectification efficiency versus $B_{z}$ is better visualized in Fig.2f where $\eta(B_{z})$ is plotted for a few selected values of temperature.
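The figures of merit introduced above ($\eta$, $B_{max}$, $B_{R}$, $\Gamma$) can be extracted automatically from a pair of switching-current curves. The sketch below applies the definitions to toy $I_{sw}^{\pm}(B_{z})$ curves (illustrative shapes, not the measured data) built so that $\eta$ peaks near $0.05$ T and reverses sign near $0.1$ T, mimicking the "short"-bridge phenomenology:

```python
import numpy as np

# Toy switching-current curves vs out-of-plane field (illustrative only)
B = np.linspace(0.0, 0.12, 241)                   # T
env = 700.0 * (1.0 - 4.5 * B)                     # common field decay (uA)
s = 0.05 * np.sin(10.0 * np.pi * B)               # built-in asymmetry
Ip = env * (1.0 + s)                              # I_sw^+
Im = -env * (1.0 - s)                             # I_sw^-

# Rectification efficiency eta = (I+ - |I-|)/(I+ + |I-|)
eta = (Ip - np.abs(Im)) / (Ip + np.abs(Im))

i_max = np.argmax(np.abs(eta))
B_max, eta_max = B[i_max], eta[i_max]
# Sign-reversal field B_R: first sign change of eta beyond the peak
idx = np.nonzero(np.diff(np.sign(eta[i_max:])))[0]
B_R = B[i_max + idx[0]]
Gamma = abs(eta_max) / abs(B_max - B_R)           # field sensitivity (1/T)
print(eta_max, B_max, B_R, Gamma)
```

For these toy curves the extraction returns $\eta_{max}=0.05$ at $B_{max}=0.05$ T, $B_{R}\simeq 0.1$ T, and $\Gamma\simeq 1$ T$^{-1}$; the same code applied to measured sweeps would yield the numbers quoted in the text.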
III “Long” Dayem bridge diode performance
Next, we characterize the “long” nanobridge shown in Fig. 1a.
Figure 3a reports the decay of $I_{sw}^{+}$ and $|I_{sw}^{-}|$ as a function of $B_{z}$.
At first, we notice that the switching currents are damped down to $\sim 60$% of their zero-field value at $B_{z}\simeq 0.3$ T, whereas in the previous sample the same damping is achieved at lower fields ($B_{z}\simeq 0.07$ T, see Fig. 2a).
Figure 3b displays $\Delta I_{sw}$ versus $B_{z}$. For low magnetic fields, $\Delta I_{sw}(B_{z})$ exhibits a linear relation.
While increasing $|B_{z}|$ further, $\Delta I_{sw}$ bends and then inverts its trend: an abrupt jump realizes a sign reversal at $|B_{R}|\sim 0.34$ T. A local peak at $|B_{max}|\simeq 0.38$ T, marked by a red dashed line, then identifies the field of maximum rectification efficiency, as before. Close to the sign change, $\Delta I_{sw}$ departs from the clean trend and becomes noisy.
Such small jumps are reproducible, thus ruling out a stochastic nature of the underlying processes.
Finally, $\Delta I_{sw}$ oscillates at higher magnetic fields, as shown in the inset of Fig. 3b.
Two $IV$ curves, for fields lower and larger than $B_{R}$, are plotted in Fig. 3c to highlight that the rectification sign changes from negative to positive as the field $B_{z}>0$ increases, contrary to the “short” bridge. This change in symmetry is attributed to vortex nucleation, as discussed later.
The magnetic field and temperature dependence of the rectification efficiency $\eta$ is presented as a color plot in Fig. 3d.
The sign change and the maximum rectification are affected by temperature in a different way as compared to the short constriction.
The linear increase of the rectification at low fields smears out with temperature, reducing $B_{R}$ until it vanishes at $T\simeq 1.1$ K $=0.5\,T_{c}^{L}$. Figure 3e shows that $B_{max}$ is more robust in temperature than $B_{R}$: it is still observable at $T\simeq 1.8$ K $=0.8\,T_{c}^{L}$.
The sudden change of sign is quantified by a maximum $\Gamma\sim 360$ T${}^{-1}$ at $0.15$ K.
As before, the profile of rectification efficiency as a function of $B_{z}$ is shown in Fig. 3f for a few selected values of temperature.
The difference in the temperature trend between $B_{R}$ and $B_{max}$ (see bottom panel of Fig. 3e) suggests two different energy scales responsible for the sign reversal and the maximum rectification, as confirmed by measurements obtained on another similar sample (see Extended Data Fig. 1).
Rectification in the second sample exhibits a similar $\eta(B_{z})$ lineshape with almost identical $B_{max}$ and $\eta_{max}$ values and temperature dependence. In that sample, the low-field features fade more rapidly with temperature, which appears to be sample dependent.
IV Modeling the sign reversal of the diode effect
We propose two physical scenarios compatible with our devices that may explain our experimental findings. Both of them rely on non-sinusoidal CPRs, typical of superconducting nanobridges lik79 , combined with a source of an inversion-symmetry breaker. In model I, this is represented by a supercurrent vortex, while in model II by spin-orbit couplings.
An out-of-plane magnetic field is considered, parametrized by the field scale $B^{*}$ defined by the vanishing of the rectification, $\eta(B^{*})=0$, and of the high-harmonic amplitudes.
In model I, the Dayem nanobridge is schematized as a one-dimensional chain of weak links of width $w$ formed by the Nb grains. A supercurrent vortex can nucleate in one of these weak links Aranson1994 ; Aranson_1996 , as sketched in Fig. 4a, which induces a phase winding in the superconducting order parameter.
These vortices have a typical size of the order of $\xi$, so only a few of them can be accommodated within the bridge.
Notice that such vorticity is not a screening of the $B$-field, since the small dimensions of the bridge ($w\ll\lambda_{L}$) allow full penetration of $B_{z}$.
In this framework, the CPR is affected by two phase shifts: the conventional vector potential associated to $B_{z}$ and the phase winding of the vortex.
It is indeed the interplay of these two contributions that is responsible for a sign change of the rectification parameter.
Though on a different length scale, this physical scenario is similar to that of Josephson phase vortices krasnov20 ; gol22 .
The rectification parameter $\eta$ is then evaluated by determining the maximum and minimum values of the Josephson current with respect to the phase bias, see Methods for details.
Figure 4b reports the evolution of $\eta$ in $B$ for different amplitudes of the second harmonic, $I_{2}$, of the CPR. The magnitude of $\eta$ scales with $I_{2}$, showing multiple nodes whose position in $B$ is independent of $I_{2}$.
The sign change also depends on the position of the vortex, as displayed by Fig. 4c, where $\eta(B)$ is evaluated for a vortex nucleated at different distances from the lateral edge of the bridge ($x_{\nu}$).
This behavior suggests a phase-shift competition dominated at low fields by the vortex phase slip, and at large fields by the vector potential.
Another scenario able to describe the sign reversal of the diode rectification can be envisioned by combining the non-sinusoidal CPR of the nanobridge with an anomalous phase shift ass19 ; str20 ; may20 ; mar23 ; ber22 induced by spin-orbit interactions and magnetic fields.
In particular, we can expect that mirror symmetry can be locally or globally broken in polycrystalline films Park19 , thereby leading to spin-orbit interactions
of both Rashba and Dresselhaus types (see Methods).
Figure 4d shows a sketch of the bridge modeled as an effective $SS^{\prime}S$ structure, where the $S$ and $S^{\prime}$ components have different amplitudes of the superconducting gap and different spin-orbit couplings breaking horizontal and vertical mirror symmetries.
The anisotropic spin-orbit interaction generates an anomalous phase shift in the CPR that varies with the magnetic field, as explicitly shown in Extended Data Fig. 2.
The anomalous phase is then introduced in the CPR via a phenomenological parameter $\Gamma_{B}$ providing a first-order cosine component in the Fourier expansion, i.e., $I=\sum_{n}I_{n}\sin(n\varphi)+\Gamma_{B}\cos(\varphi)$ (see Methods for details).
The anomalous phase $\varphi_{0}$ is related to the amplitude of $\Gamma_{B}$, while we assume a linear damping of the high-order harmonics, $I_{n}=I_{n,0}(1-B/B^{*})$, which defines the field scale $B^{*}$.
Within model II, the diode sign reversal takes place only in the presence of a sizable third-harmonic component.
Figure 4e reports $\eta(B)$ for some values of $I_{3,0}$. By increasing $I_{3,0}$, the sign inversion gets more pronounced, whereas the maxima and minima of the rectification ($B_{max}^{s}\simeq 0.74B^{*}$, $B_{min}^{s}\simeq 0.89B^{*}$) are barely affected by the weight of the harmonic.
Moreover, by including more harmonics in the CPR, the lineshape of $\eta(B)$ is modified. For instance, Figure 4f shows that a fourth-order harmonic affects the magnetic field dependence by substantially removing the sign change.
V Discussion
The comparison between our experimental findings and the proposed models reveals some important features supporting the proposed mechanisms.
In particular, for both bridges $\eta$ shows an almost monotonic damping with temperature, which can be explained in both models by the reduction of high-order harmonics. This is expected in long metallic weak links, where the CPR evolves from a highly distorted to a sinusoidal-like shape at high temperatures lik79 ; gol04 .
Moreover, as shown in Figs. 2e and 3e,
$B_{max}$ remains temperature resilient up to $T\simeq 0.5\,T_{c}$.
This feature is fairly well captured by both models, as the maximum rectification is almost independent of the harmonic content (see Fig. 4b for model I and Fig. 4e for model II).
However, “long” bridges exhibit features that are mostly accounted for by model I, while “short” ones are more compatible with model II.
For example, in long (short) bridges the sign reversal is present below (above) $B_{max}$ as shown in Fig. 3b and Fig. 2b to be compared with Fig. 4b and e, respectively.
Multiple sign reversal nodes appear at high fields only for long bridges, as shown in Fig. 3b and well described by the interferometric mechanism of model I, while the rectification lineshape given by model II presents only one inversion node.
Moreover, the quick damping of the rectification inversion observed at low fields ($<$ 0.3 T) in Fig. 3f is captured by the vortex dynamics described in Fig. 4c.
The relative size of the vortex, $\xi/w$, is temperature dependent and influences the vortex position in the bridge. Thus, it is plausible that variations of the vortex size mostly affect the rectification lineshape
at low fields, while leaving it substantially unchanged at larger fields, as shown in Fig. 4c.
Finally, it is interesting to note that, extending the proposed models to in-plane magnetic fields, a sizable supercurrent rectification is anticipated but without sign reversal. In particular, for model I, no phase shift is expected from the spatial dependence of the vector potential, since the orbital coupling between an in-plane field and the electron momentum becomes negligible; the source of phase interference with the vortex winding is thus eliminated. For model II, the anomalous phase and the harmonic content would be affected differently by an in-plane Zeeman field compared to the out-of-plane orientation, without producing a sign reversal.
VI Conclusions
In summary, we have demonstrated the implementation of supercurrent diodes in Nb Dayem nanobridges. By breaking time-reversal symmetry with an out-of-plane magnetic field, we demonstrate that both the amplitude and the sign of the rectification can be tuned without inverting the polarity of the applied field.
We have developed two theoretical models to account for the sources of time- and inversion-symmetry breaking, one based on a vortex phase winding, and one that takes into account the spin-orbit interactions present in polycrystalline heavy materials.
Yet, a quantitative description of the supercurrent diode effect in metallic nanoconstrictions should account for both scenarios, which complement each other and can coexist.
Furthermore, the fabrication process is simple when compared to that of other platforms, a compelling step towards scalability.
Analogous nanobridges can be realized from several elemental superconductors currently at the base of other architectures, such as nanocryotrons mccaughan14 , rapid single-flux quanta (RSFQ) likharev91 and memories ligato21 , which would ease a potential integration.
Finally, the sharp sign reversal of the diode rectification allows us to envisage applications of Dayem nanobridges as $B$-field threshold detectors. When biased in the vicinity of the rectification node, small variations of an environmental magnetic field would result in modifications of the sign of the rectification parameter.
References
(1)
Wakatsuki, R. et al.
Nonreciprocal charge transport in noncentrosymmetric
superconductors.
Sci. Adv. 3,
e1602390 (2017).
(2)
Díez-Mérida, J. et al.
Symmetry-broken Josephson junctions and
superconducting diodes in magic-angle twisted bilayer graphene.
Nat. Commun. 14,
2396 (2023).
(3)
Lin, J.-X. et al.
Zero-field superconducting diode effect in
small-twist-angle trilayer graphene.
Nat. Phys. 18,
1221–1227 (2022).
(4)
Paolucci, F., De Simoni, G. &
Giazotto, F.
A gate-and flux-controlled supercurrent diode
effect.
Appl. Phys. Lett.
122, 042601
(2023).
(5)
Ando, F. et al.
Observation of superconducting diode effect.
Nature 584,
373–376 (2020).
(6)
Pal, B. et al.
Josephson diode effect from Cooper pair momentum in
a topological semimetal.
Nat. Phys. 18,
1228–1233 (2022).
(7)
Bauriedl, L. et al.
Supercurrent diode effect and magnetochiral
anisotropy in few-layer NbSe2.
Nat. Commun. 13,
4266 (2022).
(8)
Baumgartner, C. et al.
Supercurrent rectification and magnetochiral effects
in symmetric Josephson junctions.
Nat. Nanotechnol.
17, 39–44
(2022).
(9)
Jeon, K.-R. et al.
Zero-field polarity-reversible Josephson
supercurrent diodes enabled by a proximity-magnetized Pt barrier.
Nat. Mater. 21,
1008–1013 (2022).
(10)
Sundaresh, A., Väyrynen, J. I.,
Lyanda-Geller, Y. & Rokhinson, L. P.
Diamagnetic mechanism of critical current
non-reciprocity in multilayered superconductors.
Nat. Commun. 14,
1628 (2023).
(11)
Lotfizadeh, N. et al.
Superconducting diode effect sign change in epitaxial
Al-InAs Josephson junctions.
arXiv:2303.01902 (2023).
(12)
Turini, B. et al.
Josephson diode effect in high-mobility InSb
nanoflags.
Nano Lett. 22,
8502–8508 (2022).
(13)
Daido, A., Ikeda, Y. &
Yanase, Y.
Intrinsic superconducting diode effect.
Phys. Rev. Lett.
128, 037001
(2022).
(14)
Ilić, S. &
Bergeret, F. S.
Theory of the supercurrent diode effect in Rashba
superconductors with arbitrary disorder.
Phys. Rev. Lett.
128, 177001
(2022).
(15)
Yuan, N. F. Q. & Fu, L.
Supercurrent diode effect and finite-momentum
superconductors.
Proc. Natl Acad. Sci.
119, e2119548119
(2022).
(16)
He, J. J., Tanaka, Y. &
Nagaosa, N.
A phenomenological theory of superconductor diodes.
New J. Phys. 24,
053014 (2022).
(17)
Misaki, K. & Nagaosa, N.
Theory of the nonreciprocal Josephson effect.
Phys. Rev. B
103, 245302
(2021).
(18)
Davydova, M., Prembabu, S. &
Fu, L.
Universal Josephson diode effect.
Sci. Adv. 8,
eabo0309 (2022).
(19)
Zhang, Y., Gu, Y., Li,
P., Hu, J. & Jiang, K.
General theory of Josephson diodes.
Phys. Rev. X 12,
041013 (2022).
(20)
Scammell, H. D., Li, J. I. A. &
Scheurer, M. S.
Theory of zero-field superconducting diode effect in
twisted trilayer graphene.
2D Materials 9,
025027 (2022).
(21)
Wambaugh, J. F., Reichhardt, C.,
Olson, C. J., Marchesoni, F. &
Nori, F.
Superconducting fluxon pumps and lenses.
Phys. Rev. Lett.
83, 5106–5109
(1999).
(22)
Vodolazov, D. Y. & Peeters, F. M.
Superconducting rectifier based on the asymmetric
surface barrier effect.
Phys. Rev. B 72,
172508 (2005).
(23)
Villegas, J. E., Gonzalez, E. M.,
Gonzalez, M. P., Anguita, J. V. &
Vicent, J. L.
Experimental ratchet effect in superconducting films
with periodic arrays of asymmetric potentials.
Phys. Rev. B 71,
024519 (2005).
(24)
Van de Vondel, J., de Souza Silva, C. C.,
Zhu, B. Y., Morelle, M. &
Moshchalkov, V. V.
Vortex-rectification effects in films with periodic
asymmetric pinning.
Phys. Rev. Lett.
94, 057003
(2005).
(25)
Carapella, G., Granata, V.,
Russo, F. & Costabile, G.
Bistable Abrikosov vortex diode made of a
Py–Nb ferromagnet-superconductor bilayer structure.
Appl. Phys. Lett.
94, 242504
(2009).
(26)
Cerbu, D. et al.
Vortex ratchet induced by controlled edge roughness.
New J. Phys. 15,
063022 (2013).
(27)
Lyu, Y.-Y. et al.
Superconducting diode effect via conformal-mapped
nanoholes.
Nat. Commun. 12,
2703 (2021).
(28)
Golod, T. & Krasnov, V. M.
Demonstration of a superconducting diode-with-memory,
operational at zero magnetic field with switchable nonreciprocity.
Nat. Commun. 13,
3658 (2022).
(29)
Suri, D. et al.
Non-reciprocity of vortex-limited critical current in
conventional superconducting micro-bridges.
Appl. Phys. Lett.
121, 102601
(2022).
(30)
Chahid, S., Teknowijoyo, S.,
Mowgood, I. & Gulian, A.
High-frequency diode effect in superconducting
$\mathrm{Nb}_{3}\mathrm{Sn}$ microbridges.
Phys. Rev. B
107, 054506
(2023).
(31)
Hou, Y. et al.
Ubiquitous superconducting diode effect in
superconductor thin films.
arXiv:2205.09276 (2022).
(32)
Satchell, N., Shepley, P. M.,
Rosamond, M. C. & Burnell, G.
Supercurrent diode effect in thin film Nb tracks.
arXiv:2301.02706 (2023).
(33)
Costa, A. et al.
Sign reversal of the AC and DC supercurrent diode
effect and 0-$\pi$-like transitions in ballistic Josephson junctions.
arXiv:2212.13460 (2022).
(34)
Kawarazaki, R. et al.
Magnetic-field-induced polarity oscillation of
superconducting diode effect.
Appl. Phys. Express.
15, 113001
(2022).
(35)
Gillijns, W., Silhanek, A. V.,
Moshchalkov, V. V., Reichhardt, C. J. O.
& Reichhardt, C.
Origin of reversed vortex ratchet motion.
Phys. Rev. Lett.
99, 247002
(2007).
(36)
He, A., Xue, C. & Zhou,
Y.-H.
Switchable reversal of vortex ratchet with dynamic
pinning landscape.
Appl. Phys. Lett.
115 (2019).
(37)
Ideue, T., Koshikawa, S.,
Namiki, H., Sasagawa, T. &
Iwasa, Y.
Giant nonreciprocal magnetotransport in bulk trigonal
superconductor $\mathrm{PbTaSe}_{2}$.
Phys. Rev. Res.
2, 042046 (2020).
(38)
Jiang, J. et al.
Reversible ratchet effects in a narrow
superconducting ring.
Phys. Rev. B
103, 014502
(2021).
(39)
Likharev, K. K.
Superconducting weak links.
Rev. Mod. Phys.
51, 101–159
(1979).
(40)
Jani, A. R., Brener, N. E. &
Callaway, J.
Band structure and related properties of bcc
niobium.
Phys. Rev. B 38,
9425–9433 (1988).
(41)
Skocpol, W. J., Beasley, M. R. &
Tinkham, M.
Self-heating hotspots in superconducting
thin-film microbridges.
J. Appl. Phys.
45, 4054–4066
(1974).
(42)
Aranson, I., Gitterman, M. &
Shapiro, B. Y.
Motion of vortices in thin superconducting strips.
J. Low Temp. Phys.
97, 215–228
(1994).
(43)
Aranson, I., Gitterman, M.,
Shapiro, B. Y. & Vinokur, V.
Nucleation, growth and kinetics of the vortex phase
in thin superconducting strips.
Phys. Scrip.
1996, 125 (1996).
(44)
Krasnov, V. M.
Josephson junctions in a local inhomogeneous magnetic
field.
Phys. Rev. B
101, 144507
(2020).
(45)
Assouline, A. et al.
Spin-orbit induced phase-shift in Bi2Se3
Josephson junctions.
Nat. Commun. 10,
126 (2019).
(46)
Strambini, E. et al.
A Josephson phase battery.
Nat. Nanotechnol.
15, 656–660
(2020).
(47)
Mayer, W. et al.
Gate controlled anomalous phase shift in Al/InAs
Josephson junctions.
Nat. Commun. 11,
212 (2020).
(48)
Margineda, D., Claydon, J. S.,
Qejvanaj, F. & Checkley, C.
Observation of anomalous Josephson effect in
nonequilibrium Andreev interferometers.
Phys. Rev. B
107, L100502
(2023).
(49)
Park, J.-S.
Stabilization and self-passivation of symmetrical
grain boundaries by mirror symmetry breaking.
Phys. Rev. Mater.
3, 014602 (2019).
(50)
Golubov, A. A., Kupriyanov, M. Y. &
Il’ichev, E.
The current-phase relation in Josephson junctions.
Rev. Mod. Phys.
76, 411–469
(2004).
(51)
McCaughan, A. N. & Berggren, K. K.
A superconducting-nanowire three-terminal
electrothermal device.
Nano Lett. 14,
5748–5753 (2014).
(52)
Likharev, K. K. & Semenov, V. K.
RSFQ logic/memory family: A new Josephson-junction
technology for sub-terahertz-clock-frequency digital systems.
IEEE Trans. Appl. Supercond.
1, 3–28 (1991).
(53)
Ligato, N., Strambini, E.,
Paolucci, F. & Giazotto, F.
Preliminary demonstration of a persistent Josephson
phase-slip memory cell with topological protection.
Nat. Commun. 12,
5200 (2021).
VII Acknowledgments
This work was funded by the EU’s Horizon 2020 Research and Innovation Framework Program under Grant
Agreement No. 964398 (SUPERGATE), No. 101057977 (SPECTRUM),
and by the PNRR MUR project PE0000023-NQSTI.
VIII Author contributions
D.M. fabricated the samples, conducted the experiments, and analyzed data with inputs from A.C, E.S., and F.G.. Y.F., M.T.M., and M.C. developed the theoretical models describing the experiment.
D.M., A.C., E.S., M.C., and F.G. wrote the manuscript with input from all the authors. E.S. and F.G. conceived the experiment. F.G. supervised and coordinated the project.
All authors discussed the results and their implications equally at all stages.
IX Competing interests
The authors declare no competing financial interests.
X Additional information
Extended data are available for this paper at xxxx.
Correspondence and requests for materials should be addressed to D.M.
XI Methods
XI.1 Sample fabrication
Nb strips and constrictions are patterned by e-beam lithography on AR-P 679.04 (PMMA) resist. PMMA residuals are removed by O${}_{2}$-plasma etching after development. Nb thin films were deposited by sputtering, with a base pressure of 2 $\times$ 10${}^{-8}$ Torr, in a 4 mTorr Ar (6N purity) atmosphere, and lifted off in acetone or AR-P 600.71 remover. A thin Ti layer was sputtered beforehand to improve Nb adhesion and the base pressure in the deposition chamber, resulting in nominal thicknesses of Ti(4 nm)/Nb(25 nm) and Ti(4 nm)/Nb(55 nm) for the so-called “short” and “long” nanobridges, respectively.
XI.2 Transport measurements
Transport measurements were carried out in filtered (two-stage RC and $\pi$ filters) cryogen-free ${}^{3}$He-${}^{4}$He dilution refrigerators by a standard 4-point probe technique. DC current-voltage characteristics were measured by sweeping a low-noise current bias positively and negatively, and by measuring the voltage drop across the weak links with a room temperature low-noise pre-amplifier.
The switching currents and error bars were obtained from 5–10 repetitions of the $IV$ curves,
and their accuracy is mostly set by the current step, $\Delta I<0.002I_{c}(B=0)$, where $I_{c}$ is the switching current of the bridge.
Joule heating is minimized by automatically switching the current off once the device turns into the normal state.
A delay between sweeps was optimized to keep the stability of the fridge temperature lower than 50 mK.
Furthermore, no changes in the switching currents (within the accuracy given by the standard deviation) were observed in different cooling cycles, upon changing the order of the sweeps, or upon adding an extra delay in the acquisition protocol, leading us to conclude that hysteretic behavior and local heating are negligible.
XI.3 Theoretical models
The current phase relation for the model I is given by $I=\sum_{n=1,2}\int_{0}^{w}I_{n}\sin\left(\phi(x)+n\varphi\right)dx$. The critical current in both directions is evaluated by determining the maximum and minimum values with respect to the phase bias $\varphi$. Here, the spatial-dependent phase difference $\phi(x)$ is given by the magnetic field and the phase winding contribution due to the vortex krasnov20 . We have assumed that the supercurrent has a subdominant second harmonic contribution, as expected in long, diffusive weak links lik79 ; gol04 .
The spatial dependence of the phase along the transverse direction $x$ is expressed as $\phi(x)=\frac{2\pi d_{b}B_{z}x}{\Phi_{0}}+\phi_{v}(x)$, where $d_{b}$ is a characteristic length of the weak link, related to the width of the junction, and $\phi_{v}(x)=\gamma_{v}\arctan\left[\frac{(x-x_{v})}{y_{v}}\right]$ is the spatially inhomogeneous phase offset due to the vortex structure, in which $\gamma_{v}$ indicates the sign and amplitude of the winding and $(x_{v},y_{v})$ the position of the vortex core with respect to the boundaries (see Fig. 4a). We also assume that the second-harmonic amplitude of the supercurrent vanishes at $B=B^{*}$.
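A numerical sketch of model I follows (in arbitrary units, with hypothetical parameter values; the field damping of the second harmonic, which vanishes at $B=B^{*}$, is omitted for brevity):

```python
import numpy as np

# Illustrative sketch of model I in arbitrary units; the parameter
# values below are hypothetical, not the fitted ones.
W = 1.0                                    # bridge width
X = np.linspace(0.0, W, 400)               # transverse coordinate
PHI = np.linspace(0.0, 2.0 * np.pi, 1001)  # phase bias values

def eta_model_i(b, i2=0.3, gamma_v=1.0, xv=0.5, yv=0.2):
    """Rectification for I = sum_n int_0^w I_n sin(phi(x) + n*varphi) dx,
    with phi(x) = b*x + gamma_v*arctan((x - xv)/yv); b stands for
    2*pi*d_b*Bz/Phi_0 in the notation of the text."""
    phi_x = b * X + gamma_v * np.arctan((X - xv) / yv)
    i_tot = np.zeros_like(PHI)
    for n, amp in ((1, 1.0), (2, i2)):
        # integrate sin(phi(x) + n*varphi) over the bridge width
        i_tot += amp * W * np.sin(phi_x[None, :] + n * PHI[:, None]).mean(axis=1)
    ic_plus, ic_minus = i_tot.max(), abs(i_tot.min())
    return (ic_plus - ic_minus) / (ic_plus + ic_minus)

etas = [eta_model_i(b) for b in np.linspace(0.0, 20.0, 41)]
```

Without the vortex winding and at zero field the CPR is antisymmetric and $\eta=0$; scanning $b$ with the vortex present reproduces the interference pattern qualitatively.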
Concerning model II, the current-phase relation is given by $I=\sum_{n}I_{n}\sin(n\varphi)+\Gamma_{B}\cos(\varphi)$. The critical currents are evaluated by determining the maximum and minimum values of the Josephson current. For this model, we assume that the amplitude of the $n$-th harmonic $I_{n}$ is suppressed by $B$ at a linear rate, i.e. $I_{n}=I_{n,0}(1-\frac{B}{B^{*}})$ for $n\geq 1$. The linear trend is compatible with the observed behavior of the overall supercurrent amplitude, shown in Figs. 2a and 3a, in the range of applied field where the rectification is nonvanishing. Furthermore, the suppression of the harmonic amplitudes with the magnetic field can be related to the reduction of the effective transmission across the grains due to depairing and magnetic interference; a decrease in the transmission implies a reduction of the higher-harmonic amplitudes lik79 ; gol04 .
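A corresponding sketch for model II (arbitrary units; the harmonic amplitudes and the assumed linear dependence $\Gamma_{B}\propto B$ are illustrative, not fitted values):

```python
import numpy as np

def eta_model_ii(b, b_star=1.0, i_n0=(1.0, 0.4, 0.2), gamma0=0.3):
    """Rectification for I = sum_n I_n sin(n*phi) + Gamma_B*cos(phi),
    with I_n = I_n0*(1 - B/B*) and, as an illustrative assumption,
    Gamma_B = gamma0*B."""
    phi = np.linspace(0.0, 2.0 * np.pi, 4001)
    damp = max(1.0 - b / b_star, 0.0)
    i = sum(a * damp * np.sin((n + 1) * phi) for n, a in enumerate(i_n0))
    i = i + gamma0 * b * np.cos(phi)   # anomalous-phase cosine term
    ic_plus, ic_minus = i.max(), abs(i.min())
    return (ic_plus - ic_minus) / (ic_plus + ic_minus)

etas_ii = [eta_model_ii(b) for b in np.linspace(0.0, 0.95, 20)]
```

At $B=0$ the cosine term vanishes and the CPR is antisymmetric, so $\eta=0$; at finite $B$ the interplay of $\Gamma_{B}$ with the damped harmonics yields a finite rectification.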
In this model, we have performed a real space simulation for the examined geometry. By solving the Bogoliubov-de Gennes equations on a finite size slab in the presence of an out-of-plane magnetic field, we demonstrate that an anomalous phase shift can be obtained. The simulation is performed for a system size $N_{x}\times N_{y}$ with $N_{x}=150$ and $N_{y}=100$. The employed tight-binding model includes a nearest neighbor hopping amplitude, $t$, and the conventional spin-singlet local pairing amplitude. We apply a phase bias across the weak link and determine the free energy shown in the Extended Data Fig. 2a. The resulting anomalous phase increases with the magnetic field and depends on the strength of the Rashba and Dresselhaus interactions as shown in the Extended Data Fig. 2b.
The linear Rashba term on a lattice for a two-dimensional geometry is expressed as $H_{R}=\alpha_{R}[\sin(k_{x})\sigma_{y}-\sin(k_{y})\sigma_{x}]$ while the Dresselhaus term is given by $H_{D}=\alpha_{D}[\sin(k_{x})\sigma_{x}-\sin(k_{y})\sigma_{y}]$ with $\sigma_{i}(i=x,y)$ being the Pauli matrices associated with the spin angular momentum. We notice that $H_{R}$ breaks the horizontal mirror symmetry while $H_{D}$ breaks both the vertical and horizontal mirror symmetries. This reduced mirror symmetry is expected to be locally or globally broken in granular films Park19 .
Notably, the presence of the Dresselhaus term is crucial to induce an anomalous phase shift in the supercurrent in the presence of an out-of-plane magnetic field.
XII Extended Data
Processing challenges in the XMM-Newton slew survey
Richard D. Saxton\supita
Bruno Altieri\supita
Andrew M. Read\supitb
Michael J. Freyberg\supitc
M. Pilar Esquej\supita and Diego Bermejo\supita
\skiplinehalf\supitaXMM-SOC
ESAC
Villafranca del Castillo
Apartado 50727
28080 Madrid
Spain;
\supitbDepartment of Physics and Astronomy
University of Leicester
Leicester LE1 7RH
England;
\supitcMax-Planck-Institut fuer Extraterrestrische Physik
PO Box 1312
85741 Garching
Germany.
Abstract
The great collecting area of the mirrors, coupled with the high
quantum efficiency of the EPIC detectors, has made XMM-Newton
the most sensitive X-ray observatory flown to date. This is
particularly evident during slew
exposures which, while giving only 15 seconds of on-source time, actually
constitute a 2-10 keV survey ten times deeper than
current “all-sky” catalogues. Here we report on progress towards
making a catalogue of slew detections constructed from the full,
0.2-12 keV energy band and discuss the challenges associated
with processing the slew data. The fast (90 degrees per hour) slew
speed results in images which are smeared, by different amounts
depending on the readout mode, effectively changing the form of the
point spread function. The extremely low background in slew images changes
the optimum source searching criteria such that searching a single image using
the full energy band is seen to be more sensitive than splitting the
data into discrete energy bands. False detections due to
optical loading by bright stars, the wings of the PSF in very bright sources
and single-frame detector flashes are
considered and techniques for identifying and removing these spurious
sources from the final catalogue are outlined.
Finally, the attitude reconstruction of the satellite during the slewing
manoeuvre is complex. We discuss the implications of this for the
positional accuracy of the catalogue.
keywords: XMM-Newton, X-rays, sky surveys, data analysis, slew
\authorinfo
Further author information: (Send correspondence to R.D.S.)
R.D.S.: E-mail: [email protected], Telephone: +34 91 8131306
B.A.: E-mail: [email protected], Telephone: +34 91 8131340
1 INTRODUCTION
XMM-Newton [1] performs slewing manoeuvres between observation targets
with the EPIC cameras open and the other instruments closed. Both
EPIC-pn[2] and EPIC-MOS[3] are operated during slews with
the Medium filter in place and the observing mode set to that of the
previous observation.
The satellite moves between targets by performing
an open-loop slew along the roll and pitch axes
and a closed-loop slew, where measurements from the star tracker are used in addition to the
Sun–sensor measurements to provide a controlled slew about all three axes, to correct for residual errors in the
long open-loop phase. The open-loop slew is performed at a steady rate
of about 90 degrees per hour and it is data from this phase which may be
used to give a uniform survey of the X-ray sky.
Slew Data Files (SDF) have been stored in the XMM-Newton Science Archive (XSA)
from revolution 314 and there are currently 465 SDFs stored
with a mean slew length of 86 degrees (Fig. 1).
Not all of these data are scientifically useful and the sky coverage will
be discussed in Sect. 2.
The data are being used to perform three independent surveys, a soft band
(0.2–2 keV) X-ray survey with strong parallels to the ROSAT all–sky survey
[4](RASS), a hard band (2–12 keV) survey and an XMM-Newton full-band
(0.2–12 keV) survey.
Theoretically the good point spread function of the X-ray telescopes
[5] should allow source positions to be determined to an
accuracy of better than 6 arcseconds, similar to that found for faint
objects in the 1XMM catalogue of serendipitous sources detected in pointed
observations (The First XMM-Newton Serendipitous Source Catalogue, XMM-Newton Survey Science Centre (SSC), 2003). Any errors in the attitude reconstruction
for the slew could seriously degrade this performance and a major technical
challenge of the data processing is to achieve the nominal accuracy. We
address this issue in Sect. 4.
2 Observations and data analysis
The appearance of a source in the slew depends on the frame time
of the observing mode as photons can only be positioned in space to
an accuracy of one frame. This has major implications for MOS, where the
relatively long frame time of 2.6 seconds spreads out a source into a 4 arcminute
long streak (Fig. 2). EPIC-pn has a much faster readout and
source extension in the slew direction is less than 18 arcseconds
in all observing modes.
The relatively large source profile in the MOS cameras
and their lower effective area make the EPIC-pn a much superior instrument
for performing a slew survey. For this reason only pn data are being used
in the data analysis.
Sources pass through the field of view of EPIC-pn in about 15 seconds.
This short exposure time leads to a generally very low background,
averaging 0.1 c/arcminute${}^{2}$
in normal conditions. However, some slews taken at times of enhanced
solar activity do exhibit higher background (Fig. 3) and can
give rise to a large number of spurious sources.
For each slew we have computed average band rates in 6 energy bands
to characterize the general rate level.
In this first processing, slews with high background, i.e. with an average
count rate exceeding 5.5 c/s in the $7.5-12$ keV band, are
being discarded.
In later processings we will also use low-background periods
of contaminated slews using time selections, which will yield
another $\sim 10\%$ exposure time.
A total of 605 slew datasets have been processed and stored in the XSA.
Of these, 424 were made with pn in the useful FF (295), eFF (88),
and LW (41) modes, respectively. After removal of the high background
slews we are left with 312 slews, with a mean sky area of the useable
part of the data of $\sim 25$ square degrees, giving a total sky coverage
of some 8,000
square degrees, ignoring overlaps. The sky coverage is uniform but
subject to the vignetting function such that sources passing directly
through the centre of the detector receive an equivalent of 11 seconds
of on-axis exposure while sources further from the centre receive less.
The mean equivalent on-axis exposure time over the sky is 6.3 seconds
and the sky area covered as a function of exposure time is shown in
Figure 4.
Events are recorded initially in RAW or detector coordinates and have to
be transformed, using the satellite attitude history, into sky coordinates.
The tangential plane geometry commonly used to define a coordinate grid for
flat images is only valid for distances of 1–2 degrees from a reference
position, usually placed at the centre of the image. To avoid this
limitation, slew datasets are divided into roughly one square degree
event files, attitude corrected and then converted into images.
This relies on the attitude history of the satellite being accurately known
during the slew, a point which is addressed in Sect. 4.
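The tangential (gnomonic) projection and its limited validity can be sketched as follows; the function below is an illustrative implementation, not the actual SAS code:

```python
import numpy as np

def gnomonic(ra, dec, ra0, dec0):
    """Project sky coordinates (degrees) onto a tangent plane at
    (ra0, dec0), returning standard coordinates (xi, eta) in degrees.

    The projection is accurate only near the tangent point, which is
    why slew event lists are split into roughly one-square-degree
    sub-images before being converted into flat images.
    """
    ra, dec, ra0, dec0 = map(np.radians, (ra, dec, ra0, dec0))
    cos_c = (np.sin(dec0) * np.sin(dec)
             + np.cos(dec0) * np.cos(dec) * np.cos(ra - ra0))
    xi = np.cos(dec) * np.sin(ra - ra0) / cos_c
    eta = (np.cos(dec0) * np.sin(dec)
           - np.sin(dec0) * np.cos(dec) * np.cos(ra - ra0)) / cos_c
    return np.degrees(xi), np.degrees(eta)
```

Near the tangent point the mapping is nearly linear (an offset of 1 degree in RA at dec = 20 degrees maps to roughly cos(20°) ≈ 0.94 degrees in xi); the distortion grows with separation, driving the 1–2 degree validity limit quoted above.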
2.1 Instrumental aspects
The XMM-Newton Slew Data Files (SDFs) for EPIC-pn
were processed using the epchain package of the
public xmmsas-6.1 plus a small modification for
the oal library.
For diagnostic reasons a few parameters were set to
non-default values (e.g., keeping also events below
150 eV).
For the Slew Survey catalogue
we selected only EPIC-pn exposures
performed in Full Frame (FF),
Extended Full Frame (eFF), and Large Window
(LW) modes, i.e. modes where all 12 CCDs are
integrating (in LW mode only half of each CCD).
The corresponding cycle times are
73.36 ms, 199.19 ms, and 47.66 ms, which converts
to a scanned distance of 6.6 arcseconds, 17.9 arcseconds, and
4.3 arcseconds per cycle time, respectively.
In the Small Window mode only the
central CCD is operated and a window of $64\times 64$
pixels is read out, i.e. only about 1/3 of a CCD.
In the fast modes, Timing and Burst,
only 1-dimensional spatial information
for the central CCD is available
and thus these modes are not very well suited for
source detection. Therefore for these three modes the
Closed filter position will be used in the future instead of
the Medium position, to utilize this unusable
exposure time for calibration purposes.
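The quoted scanned distances follow directly from the open-loop slew rate of 90 degrees per hour, i.e. 90 arcseconds per second; a quick check:

```python
# Open-loop slew rate: 90 degrees per hour = 90 arcseconds per second.
SLEW_RATE_ARCSEC_S = 90.0 * 3600.0 / 3600.0

# EPIC-pn cycle times for the modes used in the survey (ms).
CYCLE_TIME_MS = {"FF": 73.36, "eFF": 199.19, "LW": 47.66}

# Distance scanned on the sky during one readout cycle (arcsec).
smear_arcsec = {mode: SLEW_RATE_ARCSEC_S * t / 1000.0
                for mode, t in CYCLE_TIME_MS.items()}
# FF: 6.6, eFF: 17.9, LW: 4.3 arcsec, to one decimal place
```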
2.2 Source search procedure
Pilot studies were performed to investigate the optimum
processing and source-search strategies. By making small changes
to the XMM-Newton standard analysis software
(SAS) we have been able to successfully create and use correct exposure maps
in the source searching. These produced
no unusual effects, although uneven (and heightened) slew exposure is
observed at the end of slews (the ‘closed-loop’ phase). We
tested a number of source-searching techniques and found that the
optimum source-searching strategy was usage of a semi-standard
‘eboxdetect (local) + esplinemap + eboxdetect (map) + emldetect’
method, tuned to $\sim$zero background, and performed on a single
image containing just the single events (pattern=0) in the
0.2$-$0.5 keV band, plus single and double events (pattern=0$-$4) in
the 0.5$-$12.0 keV band. This is similar to the technique used for
producing the RASS catalogue [6] and resulted in the
largest numbers of
detected sources, whilst minimising the number of spurious sources
due to detector anomalies (usually caused by non-single, very soft
($<$0.5 keV) events). The source density was found to be $\approx$0.5
sources per square degree to an emldetect detection likelihood threshold
(DET_ML) of 10 (approx 3.9$\sigma$).
In the current and on-going slew pipeline, images and exposure maps
have been created and source-searched. This is being done in 3
separate energy bands: full band (0.2$-$0.5 keV [pattern=0] +
0.5$-$12.0 keV [pattern=0$-$4]), soft band (0.2$-$0.5 keV
[pattern=0] + 0.5$-$2.0 keV [pattern=0$-$4]), and hard band
(2.0$-$12.0 keV [pattern=0$-$4]). In this processing we are now
recording detections down
to an emldetect detection likelihood threshold of 8 (approx
3.4$\sigma$), and detecting $\approx$0.7 sources per square degree.
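The quoted Gaussian equivalents follow from interpreting the emldetect likelihood as DET_ML = $-\ln P$, with $P$ the false-detection probability, converted to a one-sided normal significance; a minimal sketch:

```python
from math import exp
from statistics import NormalDist

def detml_to_sigma(det_ml):
    """Equivalent one-sided Gaussian significance of an emldetect
    detection likelihood, taking DET_ML = -ln(P_false)."""
    p_false = exp(-det_ml)
    return NormalDist().inv_cdf(1.0 - p_false)

# detml_to_sigma(10.0) is about 3.9 and detml_to_sigma(8.0) about 3.4,
# matching the thresholds quoted in the text.
```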
3 Spurious sources
Systematic effects exist with the instrument and detection software which
lead to a number of spurious detections. The three principal causes are
outlined below.
3.1 Optical loading
EPIC-pn slew exposures are possibly affected by optical loading contamination.
This effect is due to several optical photons (each creating a 3.65 eV charge) piling-up
above the low-energy threshold of 20 ADUs and creating fake X-ray counts.
In pointed observations this effect is removed by an offset map acquired at the start
of each exposure and subtracted on-board, but this is not the case for slew exposures,
where the offset map of the previous (pointed) exposure is applied.
As a consequence, very bright stars could be affected by optical loading in the XMM
slew survey.
Based on the measured optical transmission of the Medium filter and theoretical
considerations, optical loading is expected for stars brighter than magnitude V=3.7,
where more than 5 counts would be due to optical photons.
The optical loading has been assessed using bright USNO stars detected in the
slew survey. Figure 5 shows soft band slew counts plotted against their R magnitude.
Stars fainter than R=4 are not affected by optical loading, as no correlation is found
between their count rate and magnitude.
Stars brighter than R=4 could possibly be affected by optical loading counts, although it is not yet clear to what extent. Some evidence suggests that it plays
only a minor role for stars down to R=2: two V=2.7 stars have been detected so far with fewer than 10 counts, much
less than expected from optical loading alone, so optical loading might not be an issue
at all.
3.2 Detector flashes
We have created lightcurves with short time bin size ($<1$ s)
in the softest channels to identify short-duration CCD flashes
that occur only for $<200$ ms in several adjacent CCD columns
with a very soft spectrum. Projected onto the sky these can lead
to spurious sources. Figure 6 shows
4 consecutive readout frames containing one of these flashes.
These effects are minimised by only using single-pixel (pattern 0)
events for photon energies less than 500 eV. Nevertheless, some
flashes may be manifest in slew images and so sanity checks of data
and detector performance are made on the
basis of diagnostic images and lightcurves of each individual CCD.
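A minimal version of such a short-bin lightcurve check (with synthetic event times and an illustrative bin size and threshold; not the pipeline code) is:

```python
import numpy as np

def flag_flash_bins(event_times, bin_s=0.2, nsigma=5.0):
    """Bin event arrival times into short (<1 s) bins and flag bins
    whose counts exceed the mean by nsigma Poisson deviations."""
    t0, t1 = event_times.min(), event_times.max()
    edges = np.arange(t0, t1 + bin_s, bin_s)
    counts, _ = np.histogram(event_times, bins=edges)
    mean = counts.mean()
    threshold = mean + nsigma * np.sqrt(max(mean, 1.0 / len(counts)))
    return edges[:-1][counts > threshold]   # start times of flagged bins

# Synthetic example: a quiet Poisson-like stream over 100 s with a
# 0.2 s burst of extra events injected near t = 50 s.
rng = np.random.default_rng(0)
times = np.sort(np.concatenate([rng.uniform(0.0, 100.0, 500),
                                rng.uniform(50.0, 50.2, 80)]))
flash_starts = flag_flash_bins(times)
```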
3.3 The wings of very bright sources
It was noticed in the creation of the 1XMM serendipitous source catalogue
that, due to the imperfect modelling of the PSF, a halo of false detections
is often seen around bright sources. The same effect is seen in slew
exposures but, due to the reduced exposure time, is only important for
very bright sources ($\gg 10$ c/s). In addition, large extended sources
often result in multiple detections of the same object. It is fairly easy to identify
such occurrences by searching for images with a large number of sources.
A histogram of source counts (Fig. 7) shows several outliers
from the main distribution including one image containing 46 sources;
which is actually due to Puppis-A (see Fig 15).
4 Attitude reconstruction and positional accuracy
This section describes the issue of attitude reconstruction in slew
observations,
which is crucial in the determination of source coordinates.
After showing how the
attitude reconstruction is generally performed, we will concentrate on that
of the open-loop slews which have been used in this survey.
The attitude information of the XMM-Newton satellite is provided by
the Attitude and Orbit Control Subsystem (AOCS). A star tracker co-aligned
with the telescopes allows up to five stars to be continuously
tracked, giving accurate star position data every 0.5 seconds; it operates
in addition to a Sun
sensor that provides a precise Sun-line determination. Such information is
processed resulting in an absolute accuracy of the reconstructed astrometry
of typically 1 arcsecond. For the open-loop slews, large slews outside the
star-tracker field of view of 3 x 4 degrees, the on-board software generates a
three-axis momentum reference profile and a two-axis (roll and pitch)
Sun-sensor profile, both based on the ground slew telecommanding. During
slew manoeuvring a momentum correction is superimposed onto the reference
momentum profile and, as there are no absolute measurements for the yaw axis,
a residual yaw attitude error exists at the end of each slew that may be
corrected in the final closed-loop slew.
So far, two types of attitude data can be used as the primary
source of spacecraft positioning during event file processing.
They are the Raw Attitude File (RAF) and the Attitude History File (AHF).
For pointed observations, the RAF provides the attitude information at the maximum possible rate,
with one entry every 0.5 seconds while the AHF is a smoothed and filtered version
of the RAF, with times rounded to the nearest second. In slew datasets the
RAF stores attitude information every 40–60 seconds while the AHF
contains identical positions with timing information in integer seconds.
The user can select which one to use for data processing by
setting an environment variable.
In a pilot study where the AHF was used for attitude reconstruction,
source detection was performed, and correlations with the ROSAT and 2MASS
catalogues indicated a slew relative pointing accuracy of $\sim 10$ arcseconds,
sufficient for good optical follow-up of the sources. However, an absolute
accuracy of 0-60 arcseconds (30 arcseconds mean) was obtained in the slew direction,
resulting in a thin, slew-oriented error ellipse around each source.
This error appears to be consistent with the error introduced by the
quantisation of the time to 1 second in the attitude file, and it led us to change the processing software,
since better accuracy should be obtainable. Investigating the errors further, the
RAF was used to compute the astrometry for some observations as a test.
In this case, an offset of $\sim 1$ arcminute from the ROSAT positions was found,
but with a smaller scatter compared with the positions returned by the AHF
processing. The consistency of these offsets suggested that
they could be due to a timing issue. This was confirmed by the flight dynamics
team, who stated that the tracking of up to five stars, mentioned above, introduces
a delay of approximately 0.75 seconds between the CCD exposure and data availability.
Directly subtracting this 0.75 seconds from every entry in the
RAF yields an optimal attitude file for the processing.
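The timing correction is a simple shift of every attitude timestamp. A minimal sketch, using a hypothetical tuple layout for RAF records (time, RA, Dec, roll), could look like this:

```python
# Hypothetical RAF records: (time_s, ra_deg, dec_deg, roll_deg).
raf = [
    (1000.00, 10.001, -5.002, 70.0),
    (1040.50, 10.950, -5.100, 70.0),
    (1081.00, 11.900, -5.199, 70.0),
]

# Star-tracker delay from CCD exposure to data availability, as
# stated by flight dynamics.
DELAY_S = 0.75

def correct_raf(records, delay=DELAY_S):
    """Shift every attitude timestamp earlier by the tracker delay."""
    return [(t - delay, ra, dec, roll) for (t, ra, dec, roll) in records]

corrected = correct_raf(raf)
print(corrected[0][0])  # 999.25
```

Only the timestamps change; the attitude quantities themselves are untouched, so downstream interpolation onto event times picks up the corrected epochs automatically.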
Other issues affecting the astrometric performance appeared after a careful
visual examination of the RAF files: two types of peculiarities
were found in some of the slews, affecting either a localised region or the totality
of a slew. This means that if a source lies in a
problematic region, its position has not been correctly reconstructed.
On the one hand, 5 observations presented sharp discontinuities revealing the
existence of single bad RAF points that have to be determined and removed from
the attitude file before performing the source searching. As an example
a source in the slew 9073300002 was discovered to have a closest ROSAT counterpart at
a distance of 8 arcminutes. Investigation showed that the source was observed at a time
coincident with a large error in the attitude file (Fig. 8). After the bad RAF point
was removed the recalculated source position lies at 11 arcseconds from the ROSAT position.
On the other hand, in 7 observations the attitude reconstruction appeared
turbulent rather than smooth (Fig. 9); this case is still under investigation.
Slew observations have been reprocessed using corrected RAFs and a
subsample of 1260 non-extended sources (defined as having an extent parameter
$<2$ from the emldetect source fitting) with DET_ML$>10$ has been correlated with
several catalogues within a 60 arcsecond offset.
The correlation with the RASS reveals that $\sim 60$% of the slew sources
have an X-ray counterpart of which 68% (90%) lie within 16 (31) arcseconds
(Fig. 10).
This gives confidence that the
majority of slews have well reconstructed attitude. Tests on some of
the outliers show that closer matches are sometimes available using
ROSAT pointed data from the 2RXP and 1RXH catalogues.
To form a sample of catalogues with highly accurate positions
but which minimise the number of false matches, we used the Astrophysical Virtual Observatory (AVO)
to correlate the slew positions against non-X-ray SIMBAD catalogues. This
gave 508 matches of which 68% (90%) were contained within 8 (17) arcseconds (Fig. 11),
showing that the positional accuracy of the slew is not much worse than the
observed limit for low significance XMM-Newton sources.
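The 68% and 90% containment radii quoted above are simple order statistics of the cross-match offset distribution. A sketch of how such radii could be computed (the Rayleigh-distributed synthetic offsets below are illustrative, not the survey data):

```python
import numpy as np

def containment_radii(offsets_arcsec, fractions=(0.68, 0.90)):
    """Radii containing the given fractions of cross-match offsets."""
    r = np.sort(np.asarray(offsets_arcsec, dtype=float))
    return tuple(
        r[min(int(np.ceil(f * r.size)) - 1, r.size - 1)] for f in fractions
    )

# Synthetic offsets drawn from a Rayleigh distribution; the scale is
# chosen so the 68% radius is of order the ~8 arcsec quoted in the text.
rng = np.random.default_rng(1)
offsets = rng.rayleigh(scale=5.3, size=508)
r68, r90 = containment_radii(offsets)
print(round(r68, 1), round(r90, 1))
```

For an isotropic Gaussian position error, the offsets follow a Rayleigh distribution, which is why the 68% containment radius is a convenient "1 sigma" position error for the survey.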
5 Results
To date 138 datasets have been processed giving 2370 sources with
DET_ML$>8$ (1600 with DET_ML$>10$) in the total band and 440 in
the hard X-ray band (220 with DET_ML$>10$). A small pilot study,
visualising all DET_ML$>10$ sources from ten slews, showed that,
apart from the problems detailed in Section 3, the sources appeared
to be real. More sophisticated statistical tests or simulations
will have to be applied to calculate the fraction of sources with DET_ML
between 8 and 10 which are due to background fluctuations.
A great variety of sources have been detected, including stars,
galaxies, both interacting and normal, AGN, clusters of galaxies and SNR
plus extremely bright low-mass X-ray binaries (LMXB),
with several hundred c/s. These are bright enough to give a
useful spectrum although they suffer seriously from photon pile-up.
As we are essentially
performing three separate surveys, we have immediate access
to hardness ratios for many of the detected sources, and a large
variation in source hardness is seen. About one percent of sources are
detected in more than one slew, yielding short to medium term
(days to months) variability information. One
source, so far detected in three separate slews, appears to have
varied in flux by a factor of $\sim$2.
Figure 12 shows the distribution of sources over the sky
indicating the paths of slews processed so far.
The flux limits for the three surveys are compared with those
of other missions in Fig. 13. At a DET_ML of 10 (8), sources are
detected to a flux limit of 6(4.5)$\times 10^{-13}$ergs s${}^{-1}$ cm${}^{-2}$
in the soft band and 4(3)$\times 10^{-12}$ergs s${}^{-1}$ cm${}^{-2}$
in the hard band. The mean flux limit over the whole
survey, taking into account the variable effective on-axis exposure
time, is 60% higher than these values.
5.1 Global correlations with the ROSAT all-sky survey
There is a strong overlap between the soft XMM slew catalogue and the
ROSAT all-sky survey (RASS) which is limited by statistics at the
faint flux end of
each survey and by intrinsic source variability. Of the non-extended,
DET_ML$>10$ sources in the soft band slew survey 64% have counterparts within 1
arcminute in the RASS. The fraction drops to 53% for hard band slew sources.
A comparison of the two surveys reveals a mean count rate ratio of $\sim 10$
(Fig. 14),
with one percent of the sources detected in both surveys showing variability
by a
factor in excess of 10. Many more variable sources will be
identified by performing an upper limits analysis on data from both surveys.
The combination of these surveys will enable the long term X-ray variability
of several thousand sources to be studied over a baseline of 10–15 years.
The highly variable sources identified so far comprise blazars, low-mass X-ray binaries, eclipsing binaries, and Seyfert 1 galaxies.
5.2 Extended sources
The good spatial resolution and low background of XMM-Newton allows the
slew survey to usefully image bright extended sources. The very bright, large SNR Puppis-A was slewed over in 2002, and the X-ray emission shows structure
in a smoothed
image which correlates well with that seen in a pointed ROSAT HRI observation
(Fig. 15).
Nearby clusters of galaxies, such as Abell 3581, can also be clearly
detected as extended (Fig. 16) and there is the possibility that at the
faint end of the survey new clusters or galaxy groups, too
small to be detected as extended in the RASS, will be discovered.
6 Summary
The XMM-Newton slew data constitute a wide-area (currently 20% of the sky) shallow
survey whose soft band flux limits are sufficiently deep to provide an
interesting comparison with the RASS and whose hard band limits
represent an order of magnitude improvement over previous missions.
Several technical challenges have been overcome, particularly in understanding
and refining the astrometry and in rejecting spurious sources. The
astrometry is good, with a 1 sigma position error of 8 arcseconds,
easily sufficient to allow an optical follow-up of these high flux
X-ray sources. Data processing is progressing well and the final
total energy band catalogue should contain between three and five
thousand sources,
depending on the final choice of maximum likelihood detection threshold
employed. The hard band catalogue will contain between 400 and 800
sources.
Acknowledgements. This research has made use of the SIMBAD database,
operated at CDS, Strasbourg, France. We thank the Astrophysical Virtual Observatory (AVO) for providing software tools. AVO has been awarded financial support by the European Commission through contract HPRI-CT-2001-50030 under the 5th Framework Programme for research, technological development and demonstration activities. Based on data obtained with XMM-Newton, an ESA science mission with
instruments and contributions directly funded by ESA Member States and NASA.
Congratulations are due to the designers and operators of the
attitude control system for providing an accurate satellite
pointing position throughout slewing manoeuvers.
We thank Georg Lammers and Herman Brunner for providing source
search software which proved to be robust on slew data.
We also thank Mark Tuttlebee, Pedro Rodriguez, John Hoar and Aitor Ibarra
for their help with understanding the slew attitude history files.
References
[1]
F. Jansen, D. Lumb, B. Altieri, J. Clavel, M. Ehle, C. Erd, C. Gabriel,
M. Guainazzi, P. Gondoin, R. Much, R. Munoz, M. Santos, N. Schartel,
D. Texier, and G. Vacanti, “XMM-Newton observatory,” A&A 365,
pp. L1–6, 2001.
[2]
L. Struder, U. Briel, K. Dennerl, R. Hartmann, E. Kendziorra, N. Meidinger,
E. Pfeffermann, C. Reppin, B. Aschenbach, W. Bornemann, et al., “The
European Photon Imaging Camera on XMM-Newton: The pn-CCD camera,” A&A 365, pp. L18–26, 2001.
[3]
M. Turner, A. Abbey, M. Arnaud, M. Balasini, M. Barbera, E. Belsole, P. Bennie,
J. Bernard, G. Bignami, M. Boer, et al., “The European Photon Imaging
Camera on XMM-Newton: The MOS cameras,” A&A 365, pp. L27–35,
2001.
[4]
W. Voges, B. Aschenbach, T. Boller, H. Braeuninger, W. Burkert, K. Dennerl,
J. Englhauser, R. Gruber, F. Haberl, et al., “The ROSAT All-Sky Survey
Bright Source Catalogue,” A&A 349, pp. 389–405, 1999.
[5]
B. Aschenbach, U. Briel, F. Haberl, H. Braeuninger, W. Burkert, A. Andreas,
P. Gondoin, and D. Lumb, “Imaging performance of the XMM-Newton X-ray
telescopes,” in X-Ray Optics, Instruments, and Missions III,
J. Truemper and B. Aschenbach, eds., Proc. SPIE 4012,
pp. 731–739, 2000.
[6]
R. Cruddace, G. Hasinger, and J. Schmitt, “The application of a maximum
likelihood analysis to detection of sources in the ROSAT data base,” in Astronomy from Large Databases: Scientific Objectives and Methodological
Approaches, F. Murtagh and A. Heck, eds., ESO Conference and Workshop
Proceedings, pp. 177–182, 1988.
MPC-Based Hierarchical Control of a Multi-Zone Commercial HVAC System
Naren Srivaths Raman
Corresponding author. Email: [email protected]
Rahul Umashankar Chaturvedi
Zhong Guo
and Prabir Barooah
Department of Mechanical and Aerospace Engineering
University of Florida
Gainesville, FL 32611
Abstract
This paper presents a novel architecture for model predictive control (MPC) based indoor climate control of multi-zone buildings for energy efficiency. Unlike prior works, we do not assume the availability of a high-resolution multi-zone building model, which is challenging to obtain. Instead, the architecture uses a low-resolution model of the building, which is divided into a small number of “meta-zones” that can be easily identified using existing data-driven modeling techniques. The proposed architecture is hierarchical. At the higher level, an MPC controller uses the low-resolution model to make decisions for the air handling unit (AHU) and the meta-zones. Since the meta-zones are fictitious, a lower-level controller converts the high-level MPC decisions into commands for the individual zones by solving a projection problem that strikes a trade-off between two potentially conflicting goals: the AHU-level decisions made by the MPC are respected while the climate of the individual zones is maintained within the comfort bounds. The performance of the proposed controller is assessed via simulations in a high-fidelity simulation testbed and compared to that of a rule-based controller that is used in practice. Simulations in multiple weather conditions show the effectiveness of the proposed controller in terms of energy savings, climate control, and computational tractability.
1 Introduction
The application of model predictive control (MPC) for commercial heating, ventilation, and air conditioning (HVAC) systems for both energy efficiency and demand flexibility has been an active area of research; see the review articles [1, 2] and the references therein.
Several of the MPC formulations proposed in the past are for buildings with one zone [3, 4, 5, 6] or a small number of zones [7, 8]. A direct extension of such formulations for large multi-zone buildings has two main challenges. First, solving the underlying optimization problem in MPC for a building with a large number of zones is computationally complex because of the large number of decision variables. To reduce the computational complexity, several distributed and hierarchical approaches have been proposed [9, 10, 11, 12, 13, 14, 15]. The second challenge, which has attracted far less attention, is that MPC requires a “high-resolution” model of the thermal dynamics of a multi-zone building.
High-resolution means that the temperature of every zone in the building is a state in the model and the control commands for every zone are inputs in the model. One way of obtaining such a multi-zone model is by first constructing a “white box” model, such as by using building energy modeling software, and then simplifying it to make it suitable for MPC, e.g., [16]. But constructing a white box model is expensive; it requires significant effort [17]. Moreover, the resulting model may not reflect the building as is. Another way of obtaining a high-resolution multi-zone model is by utilizing data-driven techniques, which use input-output measurements. Getting reliable estimates using data-driven modeling is challenging even for a single-zone building, as a building’s thermal dynamics is affected by a non-trivial and unmeasurable disturbance, the heat gain from occupants and their use of equipment, which strongly affects the quality of the identified model [18]. Multi-zone model identification becomes intractable since the model has too many degrees of freedom: as many unknown disturbance signals as there are zones. To the best of our knowledge, there are no works on reliable identification of multi-zone building models without making assumptions on the nature of the disturbance affecting individual zones [19].
In addition to the challenges mentioned above, most of the prior works—whether on single-zone or on multi-zone buildings—ignore humidity and latent heat in their MPC formulations. The inclusion of moisture requires a computationally convenient cooling and dehumidifying coil model. MPC formulations which exclude humidity can lead to poor humidity control, or higher energy usage as they are unaware of the latent load on the cooling coil [5].
In this work, we propose a humidity-aware MPC formulation for a multi-zone building with a variable air volume (VAV) HVAC system. Figure 1 shows the schematic of such a system.
To overcome the challenges mentioned above, we propose a two-level control architecture. The high-level controller (HLC) decides on the AHU-level control commands. The HLC is an MPC controller that uses a “low-resolution” model of the building with a small number of “meta-zones”, with each meta-zone being a single-zone equivalent of a part of the building consisting of several zones. In the case study presented here, a 33-zone, three-floor building is aggregated into a 3 meta-zone model, with each meta-zone corresponding to a floor. The advantage of such an approach is that a high-resolution multi-zone model is not needed as a starting point. Rather, a single-zone equivalent model of each meta-zone, in which the disturbances in all the zones are aggregated into one signal, can be directly identified from measurements collected from the building. The identification problem of such a single-zone equivalent model is more tractable [20]. In this paper, we use the system identification method from [20], though other identification methods can also be used. Since the HLC uses a low-resolution model with a much smaller number of meta-zones than the number of zones in the building, its computational complexity is low. However, this reduction of computational complexity creates a different challenge. Since the decision variables of the optimization problem in the HLC correspond to the meta-zones (air flow rate, temperature, etc.), they do not correspond to those for the actual zones of the building. The low-level controller (LLC) is therefore used to compute the control commands for individual zones. It does so by solving a projection problem that appropriately distributes the aggregate quantities computed by the HLC to individual zones. The LLC uses feedback from each zone to assess their needs and ensures the indoor climate of each zone is maintained.
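One simple way such a distribution step could be realized is sketched below: zone-level feedback produces per-zone airflow requests, and the aggregate airflow decided by the HLC is projected onto them subject to per-zone bounds. The function name and the scale-and-clip heuristic are illustrative assumptions, not the paper's actual projection problem.

```python
def project_airflows(m_total, m_req, m_min, m_max, iters=50):
    """Distribute an aggregate airflow decision among zones.

    Finds zone flows close to the feedback-driven requests m_req while
    summing to the aggregate value m_total and respecting per-zone
    bounds, by repeatedly shifting the unconstrained zones and clipping.
    """
    n = len(m_req)
    m = list(m_req)
    for _ in range(iters):
        total = sum(m)
        if abs(total - m_total) < 1e-9:
            break
        # Adjust only zones not pinned at a bound; fall back to all zones.
        free = [i for i in range(n) if m_min[i] < m[i] < m_max[i]] or list(range(n))
        adj = (m_total - total) / len(free)
        for i in free:
            m[i] = min(max(m[i] + adj, m_min[i]), m_max[i])
    return m

# Three zones requesting 3.5 kg/s in total, projected onto an
# aggregate HLC decision of 3.0 kg/s (all numbers illustrative).
m = project_airflows(3.0, [0.8, 1.5, 1.2], [0.3, 0.3, 0.3], [2.0, 2.0, 2.0])
print([round(x, 3) for x in m])
```

The even-shift heuristic is the minimizer of the sum of squared deviations from the requests when no bound is active; clipping and re-shifting handles the bounded case approximately.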
The proposed controller—which includes the HLC and the LLC—is hereafter referred to as $MZHC$, which stands for multi-zone hierarchical controller. Its performance is assessed through simulations on a “virtual building” plant. The plant is representative of a 33-zone section of the Innovation Hub building located on the University of Florida campus. The plant is constructed using Modelica [21]. The performance of the proposed controller is compared with that of a dual maximum controller as a baseline [22]. The dual maximum controller—which is referred to as $BL$ (for baseline)—is a rule-based controller, and is one of the more energy-efficient controllers among those used in practice [22]. Simulation results show that the proposed controller provides significant energy savings when compared to $BL$ while maintaining indoor climate.
Compared to the literature on MPC design for multi-zone building HVAC systems, our work makes four principal contributions, with details discussed in Section 1.1. (i) The first contribution is that the proposed method does not assume availability of a high-resolution model of the multi-zone building which is difficult to obtain. Instead, it can utilize existing data-driven methods that can quickly identify a low-resolution model of the multi-zone building from measurements. (ii) Since the MPC part of the proposed controller uses a low-resolution model with a small number of meta-zones, the method is scalable to buildings with a large number of zones.
Although distributed iterative computation has been proposed in the literature as an alternative approach to reducing computational complexity, our formulation can be solved in a centralized setting.
(iii) The third contribution is the incorporation of humidity and latent heat in our multi-zone MPC formulation, which has been largely ignored in the literature on MPC for buildings, and especially so in the literature on multi-zone building MPC. Our simulations show that when using $MZHC$, the indoor humidity constraint is active, especially during hot-humid weather. Without humidity being explicitly considered, the controller would have caused high space humidity in an effort to reduce energy use. (iv) The fourth contribution is a realistic evaluation of the proposed controller in a high-fidelity simulation platform that introduces a large plant-model mismatch. In many prior works on multi-zone MPC, the model used by the controller is the same as that used in simulating the plant. In contrast, the only information provided to the proposed controller about the building is sensor measurements (past data for model identification and real-time data during control) and design parameters such as expected occupancy, minimum design airflow rates for each VAV box, etc.
The rest of this paper is organized as follows. Section 1.1 discusses our work in relation to the literature on multi-zone MPC. Section 2 describes a multi-zone building equipped with a VAV HVAC system and the models we use in simulating the plant (the system to be controlled). Section 3 presents the proposed MPC-based hierarchical controller. Section 4 describes a rule-based baseline controller with which the performance of the proposed controller is compared. The simulation setup is described in Section 5. Simulation results are presented and discussed in Section 6. Finally, the main conclusions are provided in Section 7.
1.1 Comparison With Literature on Multi-Zone MPC
Several distributed and hierarchical approaches have been proposed to reduce the computational complexity of MPC for multi-zone buildings [9, 10, 11, 12, 13, 14, 15]. In [9], a hierarchical distributed algorithm called token-based scheduling is proposed to vary the supply airflow rate to the zones. A modified version of this algorithm is used in [10] to minimize the energy consumption of a multi-zone building located at the Nanyang Technological University, Singapore campus.
In [12], a two-layered control architecture is proposed for operating a VAV HVAC system. The upper layer is an open loop controller, while the lower layer is based on MPC and it varies the supply airflow rates to the zones. Similar to [9], [10], and [12], the works [14, 13, 15] consider varying only the zone-level control inputs such as the supply airflow rates and zone temperature set points. These works exclude the AHU-level control inputs such as the conditioned air temperature and outside airflow rate.
Unlike the works mentioned above, the work [23] uses MPC to vary only the AHU-level control inputs; the zone-level control inputs are excluded in this formulation.
One of the few works similar to ours is [11], as they consider both the zone-level and AHU-level control inputs in their formulation. But their algorithm requires a high-resolution multi-zone model, and they do not consider humidity and latent heat in their formulation.
2 System and Problem Description, and Plant Simulator
Our focus is a multi-zone building equipped with a variable-air-volume (VAV) HVAC system, whose schematic is shown in Figure 1. In such a system, part of the air exhausted from the zones is recirculated and mixed with outdoor air. This mixed air is sent through the cooling coil where the air is cooled and dehumidified to the conditioned air temperature ($T_{ca}$) and humidity ratio ($W_{ca}$). This conditioned air is then sent through the supply ducts to the VAV boxes, which have a damper to control airflow, and finally supplied to the zones. Some VAV boxes have reheat coils; they can change the temperature of the supply air but not its humidity, i.e., $T_{sa,i}\geq T_{ca}$ and $W_{sa,i}=W_{ca}$, where $T_{sa,i}$ and $W_{sa,i}$ are the temperature and humidity ratio of the supply air to the $i^{th}$ zone. If a VAV box is not equipped with a reheat coil (cooling only), then the temperature of the air supplied by it to its zone will be at the conditioned air temperature, i.e., $T_{sa,i}=T_{ca}$.
The control commands for a multi-zone VAV HVAC system with $n_{z}$ zones (i.e., VAV boxes) are:
$$\displaystyle u$$
$$\displaystyle:=(m_{oa},T_{ca},m_{sa,i},T_{sa,i},\,\,i=1,\dots,n_{z}),$$
(1)
where $m_{oa}$ is the outdoor airflow rate, $T_{ca}$ is the conditioned air temperature, $m_{sa,i}$ is the supply airflow rate to the $i^{th}$ zone, and $T_{sa,i}$ is the supply air temperature to the $i^{th}$ zone. Note that the humidity of conditioned air ($W_{ca}$) which is supplied to all the zones is indirectly controlled through $T_{ca}$. Of the $n_{z}$ VAVs/zones in the building, $n_{z}^{rh}$ VAVs are equipped with a reheat coil and $n_{z}-n_{z}^{rh}$ VAVs do not have a reheat coil (cooling only). For the latter, the supply air temperature will be the same as the conditioned air temperature, i.e., $T_{sa,i}(k)=T_{ca}(k)$.
The control commands in (1) are sent as set points to the low-level control loops which are typically comprised of proportional integral (PI) controllers. The role of a climate control system is to vary these control commands so that three main goals are satisfied: (i) ensure thermal comfort, (ii) maintain indoor air quality, and (iii) use minimum amount of energy/cost.
In an HVAC system such as the one shown in Figure 1, the supply duct pressure setpoint, $p_{duct}$, is also usually a command that the climate control system has to decide. We assume that the supply duct static pressure setpoint ($p_{duct}$) is controlled based on the “trim and respond” strategy [24], which is commonly used in VAV systems, including in the Innovation Hub building that we use as a case study.
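As a rough illustration of the trim-and-respond logic referenced above: each period the pressure setpoint is trimmed downward; if enough zone-pressure requests are active, it responds upward instead, proportionally to the excess requests. This is one common variant of the strategy, and all numerical parameters below are hypothetical, not those of the Innovation Hub building.

```python
def trim_and_respond(p_set, requests, p_min=50.0, p_max=400.0,
                     ignore=2, trim=-10.0, respond=15.0, max_response=50.0):
    """One update step of a generic trim-and-respond reset (units: Pa).

    Requests beyond the `ignore` count drive the setpoint up by
    `respond` Pa each, capped at `max_response`; otherwise the setpoint
    is trimmed by `trim` Pa. The result is clamped to [p_min, p_max].
    """
    excess = max(requests - ignore, 0)
    delta = min(excess * respond, max_response) if excess > 0 else trim
    return min(max(p_set + delta, p_min), p_max)

# Simulate a few update periods with a varying number of zone requests.
p = 200.0
for reqs in [0, 0, 5, 1]:
    p = trim_and_respond(p, reqs)
print(p)
```

The slow trim combined with a fast, capped response keeps the duct pressure (and hence fan power) as low as the zones allow while still recovering quickly when dampers saturate.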
2.1 Virtual Building (Simulator)
The virtual building (VB) is a high-fidelity model of a building’s thermal dynamics and its HVAC system that will act as the plant for the controllers. The VB is chosen to mimic part of the Innovation Hub building in Gainesville, FL, USA, which is serviced by AHU-2 (among the two AHUs that serve Phase I). Figures 2 and 3 show photos of the building and the relevant floor plans, respectively. The rooms supplied by the same VAV box are grouped together to form one large space (zone); the zones are enclosed by dotted lines in Figure 3. The first floor has 15 rooms which are grouped into 9 zones, the second floor has 20 rooms which are grouped into 12 zones, and the third floor has 21 rooms which are grouped into 12 zones. In total, there are 56 rooms grouped into 33 zones. The virtual building thus consists of an air handling unit and 33 VAV boxes, of which 29 are equipped with reheat coils, and the remaining 4 do not have reheat coils (cooling only). The zones primarily consist of offices and labs.
We use the Modelica library IDEAS (Integrated District Energy Assessment by Simulation) [25] to model the building’s thermal dynamics.
In order to model a zone we use the RectangularZoneTemplate from the IDEAS library. It consists of six components—which are a ceiling, a floor, and four walls—and an optional window. There are also external connections for each of the walls and the ceiling. Depending on the usage, there are three types of walls: (i) inner wall, which is used as a boundary between zones, (ii) outer wall, which is used as a boundary between outside (atmosphere) and the zone, and (iii) boundary wall, which can be specified a fixed temperature or heat flow. To define a wall, dimensions, type of material, type of wall, and the azimuth angle are required. The dimensions are obtained from the mechanical drawings, the material type is chosen from the predefined materials available in the IDEAS library, the type of wall is chosen based on the zone’s location in the building, and the azimuth angle is computed from the zone’s orientation. Windows are specified according to the drawings, with the glazing material chosen from the IDEAS library. In this way, we model all the zones, which are then connected appropriately to form the overall building; Figure 4 shows the model of floor 1. Since we are only interested in modeling the southern half of Phase-1, the walls that are adjacent to zones in the northern half are assumed to be at $22.22\degree C$ ($72\degree F$).
Inputs to the building thermal dynamics portion of the VB are supply airflow rate ($m_{sa,i}$), supply air temperature ($T_{sa,i}$), and supply air humidity ratio ($W_{sa,i}$) for all the zones. These are implemented using the MassFlowSource$\_$T block from the IDEAS library; an ideal flow source that produces a specified mass flow with specified temperature, composition, and trace substances. Outputs of the simulator are temperature ($T_{z,i}$) and humidity ratio ($W_{z,i}$) of all the zones. The zone temperature and humidity are also influenced by several exogenous inputs: (i) outdoor weather conditions such as solar irradiation ($\eta_{sol}$), outdoor air temperature ($T_{oa}$), etc. which are provided using the ReaderTMY3 block from the IDEAS library, (ii) internal sensible and latent heat loads due to occupants, which are computed based on the number of occupants provided to the zone block, and (iii) internal heat load due to lighting and equipment which is given using the PrescribedHeatFlow from the Modelica standard library.
Cooling and Dehumidifying Coil Model
The cooling coil model has five inputs and two outputs. The inputs are supply airflow rate ($m_{sa}$), mixed air temperature ($T_{ma}$) and humidity ratio ($W_{ma}$), chilled water flow rate ($m_{w}$), and chilled water inlet temperature ($T_{wi}$). The outputs are conditioned air temperature ($T_{ca}$) and humidity ratio ($W_{ca}$).
We use a gray box data-driven model developed in our prior work [5]. The interested readers are referred to Section 2.1.2 of [5] for details regarding the model.
Power Consumption Models
For the HVAC system configuration presented in Figure 1, there are three main components which consume power. They are supply fan, cooling and dehumidifying coil, and reheating coils.
The fan power consumption is modeled as:
$$\displaystyle P_{fan}(k)=\alpha_{fan}m_{sa}(k)^{3},$$
(2)
where $m_{sa}(k)$ is the total supply airflow rate at the AHU [26].
The cooling and dehumidifying coil power consumption is modeled to be proportional to the heat it extracts from the mixed air stream:
$$\displaystyle P_{cc}(k)=\frac{m_{sa}(k)\big{[}h_{ma}(k)-h_{ca}(k)\big{]}}{\eta_{cc}COP_{c}},$$
(3)
where $h_{ma}(k)$ and $h_{ca}(k)$ are the specific enthalpies of the mixed and conditioned air respectively, $\eta_{cc}$ is the cooling coil efficiency, and $COP_{c}$ is the chiller coefficient of performance. Since a part of the return air is mixed with the outside air, the specific enthalpy of the mixed air is:
$$\displaystyle h_{ma}(k)$$
$$\displaystyle=r_{oa}(k)h_{oa}(k)+(1-r_{oa}(k))h_{ra}(k),$$
(4)
where $h_{oa}(k)$ and $h_{ra}(k)$ are the specific enthalpies of the outdoor and return air respectively, and $r_{oa}(k)$ is the outside air ratio: $r_{oa}(k):=\frac{m_{oa}(k)}{m_{sa}(k)}$. The specific enthalpy of moist air with temperature $T$ and humidity ratio $W$ is given by [27]: $h(T,W)=C_{pa}T+W(g_{H_{2}O}+C_{pw}T)$,
where $g_{H_{2}O}$ is the heat of evaporation of water at 0 $\degree C$, and $C_{pa},C_{pw}$ are the specific heats of air and water at constant pressure.
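As an illustration, the enthalpy relations above can be sketched in Python; the constant values below are typical psychrometric textbook values (in kJ units), not parameters reported in this paper.

```python
# Sketch of the moist-air enthalpy relations (values in kJ/kg).
# Constants are standard textbook values, not taken from this paper.
C_PA = 1.006    # specific heat of dry air at constant pressure [kJ/(kg K)]
C_PW = 1.86     # specific heat of water vapor at constant pressure [kJ/(kg K)]
G_H2O = 2501.0  # heat of evaporation of water at 0 deg C [kJ/kg]

def enthalpy(T, W):
    """Specific enthalpy of moist air at temperature T [deg C] and humidity ratio W [kg/kg]."""
    return C_PA * T + W * (G_H2O + C_PW * T)

def mixed_air_enthalpy(r_oa, T_oa, W_oa, T_ra, W_ra):
    """Enthalpy of the outdoor/return air mixture, as in eq. (4)."""
    return r_oa * enthalpy(T_oa, W_oa) + (1.0 - r_oa) * enthalpy(T_ra, W_ra)
```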
The reheating coil power consumption in the $i^{th}$ VAV box is modeled to be proportional to the heat it adds to the conditioned air stream:
$$\displaystyle P_{reheat,i}(k)$$
$$\displaystyle=\frac{m_{sa,i}(k)C_{pa}\big{[}T_{sa,i}(k)-T_{ca}(k)\big{]}}{\eta_{reheat}COP_{h}},$$
(5)
where $\eta_{reheat}$ is the reheating coil efficiency, and $COP_{h}$ is the boiler coefficient of performance.
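The three power models (2), (3), and (5) are simple algebraic maps. A minimal sketch, with parameter names chosen here for illustration:

```python
def fan_power(alpha_fan, m_sa):
    """Eq. (2): fan power, cubic in the total supply airflow rate."""
    return alpha_fan * m_sa ** 3

def cooling_coil_power(m_sa, h_ma, h_ca, eta_cc, cop_c):
    """Eq. (3): proportional to the enthalpy drop across the cooling coil."""
    return m_sa * (h_ma - h_ca) / (eta_cc * cop_c)

def reheat_power(m_sa_i, T_sa_i, T_ca, c_pa, eta_reheat, cop_h):
    """Eq. (5): proportional to the sensible heat added in the VAV box."""
    return m_sa_i * c_pa * (T_sa_i - T_ca) / (eta_reheat * cop_h)
```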
Overall Plant
The overall plant, i.e., virtual building—consisting of the building thermal model, cooling and dehumidifying coil model, and power consumption models—is simulated using SIMULINK and MATLAB©. The building thermal model is constructed in DYMOLA 2021 and it is exported into an FMU (Functional Mockup Unit). It is then imported into SIMULINK using the FMI Kit for SIMULINK. The remaining models are constructed directly in SIMULINK.
3 Proposed Multi-Zone Hierarchical Control ($MZHC$)
Recall that both the proposed and the baseline controllers need to decide the following control commands:
$$\displaystyle u(k)$$
$$\displaystyle:=[m_{oa}(k),T_{ca}(k),m_{sa,i}(k),T_{sa,i}(k)]^{T}\in\Re^{2+n_{z}+n_{z}^{rh}}.$$
Figure 5 shows the structure of the proposed $MZHC$. The high-level controller is based on MPC and decides the control commands for the AHU: outdoor air flow rate ($m_{oa}$) and conditioned air temperature ($T_{ca}$). The low-level controller is a projection-based feedback controller and decides the control commands for each of the VAV boxes/zones: supply air flow rate ($m_{sa,i}$) and supply air temperature ($T_{sa,i}$). These controllers are described in detail next.
3.1 MPC-Based High-Level Controller (HLC)
The high-level controller (HLC) is based on MPC that uses a low-resolution model of the building which is divided into a small number of meta-zones. Each meta-zone is an aggregation of multiple zones in the real building. This aggregation can be done in any number of ways. In this work we aggregate all the zones in a floor into a meta-zone, which is denoted by $f\in\mathbf{F}:=\{1,\dots,n_{f}\}$, where $n_{f}$ is the total number of floors/meta-zones. The Innovation Hub building has three floors, so we aggregate it into three meta-zones. The set of all VAVs/zones in floor $f$ is denoted as $\mathbf{I_{f}}$ (so $|\cup_{\text{f}\in\mathbf{F}}\mathbf{I_{f}}|=n_{z}$), of which those equipped with reheat coils are denoted as $\mathbf{I_{rh,f}}$ (so $|\cup_{\text{f}\in\mathbf{F}}\mathbf{I_{rh,f}}|=n_{z}^{rh}$). The HLC decides on the following control commands based on the aggregate models:
$$\displaystyle u^{HLC}(k)$$
$$\displaystyle:=\big{(}m_{oa}(k),T_{ca}(k),m_{sa,f}(k),T_{sa,f}(k),$$
$$\displaystyle\qquad\qquad\qquad\qquad\forall f\in\mathbf{F}\big{)}\in\Re^{2+(2\times n_{f})},$$
(6)
where $m_{sa,f}(k):=\sum\limits_{i\in\mathbf{I_{f}}}m_{sa,i}(k)$ is the aggregate (total) supply airflow rate to all the zones in floor/meta-zone $f$ and $T_{sa,f}(k)$ is the aggregate supply air temperature. Of the control commands computed in (6), $m_{oa}(k)$ and $T_{ca}(k)$ can be directly sent to the plant. The remaining information computed by the HLC, including $m_{sa,f}(k)$ and $T_{sa,f}(k)$, is used by the low-level controller (LLC), described in Section 3.2, to decide on the supply airflow rate ($m_{sa,i}(k)$) and supply air temperature ($T_{sa,i}(k)$) for the individual zones/VAV boxes in each floor.
A comment on notation: all variables with a subscript $i$ are for the individual zones, while the variables with a subscript $f$ represent the aggregate quantities for each meta-zone.
For the MPC formulation, we use a model interval of $\Delta t=5$ minutes, a control interval of $\Delta T=15$ minutes, and a prediction/planning interval of $T=24$ hours. So we have $T=N\Delta T$ and $\Delta T=M\Delta t$, where $N=96$ (planning horizon) and $M=3$. The control inputs for $N$ time steps are obtained by solving an optimization problem of minimizing the energy consumption subject to thermal comfort, indoor air quality, and actuator constraints. Then the control commands obtained for the first time step are sent to the plant and the LLC. The optimization problem is solved again for the next $N$ time steps with the initial states of the model obtained from a state estimator, which uses measurements from the plant. This process is repeated at the next control time step, i.e., after an interval of $\Delta T$.
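The receding-horizon procedure described above can be sketched as follows; `solve_mpc`, `estimate_state`, and `send_to_plant` are placeholders for the optimization (solved over the planning horizon), the state estimator, and the plant interface.

```python
# Receding-horizon timing sketch. solve_mpc(), estimate_state(), and
# send_to_plant() are placeholders for the optimization, the Kalman
# filter, and the plant interface of Section 3.1.
DT_MODEL = 5   # model interval, minutes
M = 3          # control interval = M model steps (15 minutes)
N = 96         # planning horizon (24 hours of control steps)

def mpc_loop(solve_mpc, estimate_state, send_to_plant, measurements):
    j = 0
    for y in measurements:                 # one iteration per control interval
        x_hat = estimate_state(y)          # initial state from measurements
        plan = solve_mpc(x_hat, j, N, M)   # commands for N control steps
        send_to_plant(plan[0])             # apply only the first step's commands
        j += M                             # advance by one control interval
```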
To describe the optimization problem, first we define the state vector $x(k)$ and the vector of control commands and internal variables $v(k)$ as:
$$\displaystyle x(k)$$
$$\displaystyle:=\big{(}T_{z,f}(k),T_{w,f}(k),W_{z,f}(k),\,\,\forall f\in\mathbf{F}\big{)}\in\Re^{3\times n_{f}},$$
(7)
$$\displaystyle v(k)$$
$$\displaystyle:=\big{(}u^{HLC}(k),m_{w}(k),W_{ca}(k)\big{)}\in\Re^{2+(2\times n_{f})+2},$$
(8)
where $T_{z,f}(k)$, $T_{w,f}(k)$, and $W_{z,f}(k)$ are the aggregate zone temperature, wall temperature, and humidity ratio of floor/meta-zone $f$, respectively; $u^{HLC}(k)$ is the control command vector defined in (6) and $m_{w}(k)$ is the chilled water flow rate into the cooling coil. The exogenous input vector is:
$$\displaystyle w(k)$$
$$\displaystyle:=\big{(}\eta_{sol}(k),T_{oa}(k),W_{oa}(k),q_{int,f}(k),\omega_{int,f}(k),$$
$$\displaystyle\qquad\qquad\qquad\qquad\qquad\quad\forall f\in\mathbf{F}\big{)}\in\Re^{3+(2\times n_{f})},$$
(9)
where $\eta_{sol}(k)$ is the solar irradiance, $T_{oa}(k)$ is the outdoor air temperature, $W_{oa}(k)$ is the outdoor air humidity ratio, $q_{int,f}(k)$ is the aggregate internal heat load in floor/meta-zone $f$ due to occupants, lights, equipment, etc., and $\omega_{int,f}(k)$ is the aggregate rate of water vapor generation in floor/meta-zone $f$ due to occupants and other sources. We denote the forecast of these exogenous inputs as $\hat{w}$; in Section 5, we discuss how these forecasts are obtained. The vector of nonnegative slack variables $\zeta(k):=\big{(}\zeta_{T,f}^{low}(k),\zeta_{T,f}^{high}(k),\zeta_{W,f}^{low}(k),\zeta_{W,f}^{high}(k),\,\,\forall f\in\mathbf{F}\big{)}\in\Re^{4\times n_{f}},$ is introduced to ensure feasibility of the optimization problem.
The optimization problem at time index $j$ is:
$$\displaystyle\min_{V,X,Z}$$
$$\displaystyle\sum\limits_{k=j}^{j+NM-1}\bigg{(}\bigg{[}P_{fan}(k)+P_{cc}(k)+\sum\limits_{f\in\mathbf{F}}P_{reheat,f}(k)\bigg{]}\Delta t+P_{slack}(k)\bigg{)},$$
(10a)
where $P_{fan}(k)$ is given by (2), $P_{cc}(k)$ is given by (3), $P_{reheat,f}(k):=\frac{m_{sa,f}(k)C_{pa}\big{[}T_{sa,f}(k)-T_{ca}(k)\big{]}}{\eta_{reheat}COP_{h}}$, $V:=[v^{T}(j),v^{T}(j+1),\dots,v^{T}(j+NM-1)]^{T}$, $X:=[x^{T}(j+1),x^{T}(j+2),\dots,x^{T}(j+NM)]^{T}$, and $Z:=[\zeta^{T}(j+1),\zeta^{T}(j+2),\dots,\zeta^{T}(j+NM)]^{T}$. The last term, $P_{slack}$, penalizes the aggregate zone temperature and humidity slack variables:
$$\displaystyle P_{slack}(k):=$$
$$\displaystyle\sum\limits_{f\in\mathbf{F}}\bigg{[}\lambda_{T}^{low}\zeta_{T,f}^{low}(k+1)+\lambda_{T}^{high}\zeta_{T,f}^{high}(k+1)+$$
$$\displaystyle\quad\lambda_{W}^{low}\zeta_{W,f}^{low}(k+1)+\lambda_{W}^{high}\zeta_{W,f}^{high}(k+1)\bigg{]},$$
where the $\lambda$s are penalty parameters. The total supply airflow rate $m_{sa}(k)$ used in $P_{fan}(k)$ and $P_{cc}(k)$ is given by $m_{sa}(k)=\sum\limits_{f\in\mathbf{F}}m_{sa,f}(k)=\sum\limits_{\text{f}\in\mathbf{F}}\sum\limits_{i\in\mathbf{I_{f}}}m_{sa,i}(k)$. The optimal control commands are obtained by solving the optimization problem (10a) subject to the following constraints:
$$\displaystyle T_{z,f}(k+1)=T_{z,f}(k)+\Delta t\bigg{[}\frac{T_{oa}(k)-T_{z,f}(k)}{\tau_{za,f}}+$$
$$\displaystyle\,\,\,\frac{T_{w,f}(k)-T_{z,f}(k)}{\tau_{zw,f}}+A_{z,f}\eta_{sol}(k)+\frac{q_{int,f}(k)+q_{ac,f}(k)}{C_{z,f}}\bigg{]}$$
(10b)
$$\displaystyle T_{w,f}(k+1)=T_{w,f}(k)+\Delta t\bigg{[}\frac{T_{oa}(k)-T_{w,f}(k)}{\tau_{wa,f}}+$$
$$\displaystyle\qquad\qquad\qquad\qquad\frac{T_{z,f}(k)-T_{w,f}(k)}{\tau_{wz,f}}+A_{w,f}\eta_{sol}(k)\bigg{]}$$
(10c)
$$\displaystyle W_{z,f}(k+1)=W_{z,f}(k)+\frac{\Delta tR_{g}T_{z,f}(k)}{V_{f}P^{da}}\bigg{[}\omega_{int,f}(k)+$$
$$\displaystyle\qquad\qquad\qquad\qquad\qquad m_{sa,f}(k)\frac{W_{ca}(k)-W_{z,f}(k)}{1+W_{ca}(k)}\bigg{]}$$
(10d)
$$\displaystyle T_{ca}(k)=T_{ma}(k)+m_{w}(k)f\big{(}T_{ma}(k),W_{ma}(k),m_{sa}(k),m_{w}(k)\big{)}$$
(10e)
$$\displaystyle W_{ca}(k)=W_{ma}(k)+m_{w}(k)g\big{(}T_{ma}(k),W_{ma}(k),m_{sa}(k),m_{w}(k)\big{)}$$
(10f)
$$\displaystyle T_{z,f}^{low}(k)-\zeta_{T,f}^{low}(k)\leq T_{z,f}(k)\leq T_{z,f}^{high}(k)+\zeta_{T,f}^{high}(k)$$
(10g)
$$\displaystyle a^{low}T_{z,f}(k)+b^{low}-\zeta_{W,f}^{low}(k)\leq W_{z,f}(k)$$
$$\displaystyle\qquad\qquad\qquad\qquad\leq a^{high}T_{z,f}(k)+b^{high}+\zeta_{W,f}^{high}(k)$$
(10h)
$$\displaystyle m_{oa}^{min}\leq m_{oa}(k)\leq m_{oa}^{max}$$
(10i)
$$\displaystyle T_{ca}(k+1)\leq min\big{(}T_{ca}(k)+T_{ca}^{rate}\Delta t,T_{ma}(k+1),T_{ca}^{high}\big{)}$$
(10j)
$$\displaystyle T_{ca}(k+1)\geq max\big{(}T_{ca}(k)-T_{ca}^{rate}\Delta t,T_{ca}^{low}\big{)}$$
(10k)
$$\displaystyle W_{ca}(k)\leq W_{ma}(k)$$
(10l)
$$\displaystyle r_{oa}(k)=m_{oa}(k)/m_{sa}(k)$$
(10m)
$$\displaystyle r_{oa}(k+1)\leq min\big{(}r_{oa}(k)+r_{oa}^{rate}\Delta t,r_{oa}^{high}\big{)}$$
(10n)
$$\displaystyle r_{oa}(k+1)\geq max\big{(}r_{oa}(k)-r_{oa}^{rate}\Delta t,r_{oa}^{low}\big{)}$$
(10o)
$$\displaystyle m_{sa,f}^{low}\leq m_{sa,f}(k)\leq m_{sa,f}^{high}$$
(10p)
$$\displaystyle T_{ca}(k)\leq T_{sa,f}(k)\leq T_{sa}^{high}$$
(10q)
$$\displaystyle\zeta_{T,f}^{low}(k+1),\zeta_{T,f}^{high}(k+1),\zeta_{W,f}^{low}(k+1),\zeta_{W,f}^{high}(k+1)\geq 0$$
(10r)
where constraints (10b)-(10d), (10g)-(10h), and (10p)-(10r) are $\forall f\in\mathbf{F}$. Constraints (10b)-(10f), (10i), (10l)-(10m), and (10p)-(10r) are for $k\in\{j,\dots,j+NM-1\}$, constraints (10g) and (10h) are for $k\in\{j+1,\dots,j+NM\}$, and constraints (10j)-(10k) and (10n)-(10o) are for $k\in\{j-1,\dots,j+NM-2\}$. The control commands remain the same for $M$ time steps, as the control interval $\Delta T=M\Delta t$, i.e., $u^{HLC}(k)=u^{HLC}(k+1)=\dots=u^{HLC}(k+M-1),\,\,\forall k\in\{j,j+M,\dots,j+NM-1\}$.
Constraints (10b) and (10c) are due to the aggregate thermal dynamics of floor/meta-zone $f$, which is a discretized form of an RC (resistor-capacitor) network model, specifically a 2R2C model. The two states of the model are the aggregate zone temperature ($T_{z,f}$) and the wall temperature ($T_{w,f}$, a fictitious state). In constraint (10b), $q_{ac,f}(k)$ is the heat influx due to the HVAC system and is given by $q_{ac,f}(k):=m_{sa,f}(k)C_{pa}(T_{sa,f}(k)-T_{z,f}(k))$. The model has seven parameters $\{C_{z,f},\tau_{zw,f},\tau_{za,f},A_{z,f},\tau_{wz,f},\tau_{wa,f},A_{w,f}\}$. In the evaluation study, they are estimated using the algorithm presented in [20], as discussed in Section 5.
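A direct transcription of the discretized 2R2C update (10b)-(10c) is sketched below; the parameter dictionary keys are an assumed naming that mirrors the seven parameters listed above, and the HVAC heat influx $q_{ac,f}$ is passed in precomputed.

```python
def rc2_update(Tz, Tw, Toa, eta_sol, q_int, q_ac, p, dt):
    """One step of the 2R2C meta-zone model, eqs. (10b)-(10c).
    p holds the seven parameters {C_z, tau_zw, tau_za, A_z, tau_wz, tau_wa, A_w}
    (assumed key names, one set per meta-zone)."""
    Tz_next = Tz + dt * ((Toa - Tz) / p["tau_za"] + (Tw - Tz) / p["tau_zw"]
                         + p["A_z"] * eta_sol + (q_int + q_ac) / p["C_z"])
    Tw_next = Tw + dt * ((Toa - Tw) / p["tau_wa"] + (Tz - Tw) / p["tau_wz"]
                         + p["A_w"] * eta_sol)
    return Tz_next, Tw_next
```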
The constraint (10d) is for the aggregate humidity dynamics of floor/meta-zone $f$, where $W_{z,f}$ is the aggregate zone humidity ratio, $V_{f}$ is the volume of meta-zone $f$, $R_{g}$ is the specific gas constant of dry air, $P^{da}$ is the partial pressure of dry air, and $W_{ca}$ is the conditioned air humidity ratio [28].
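The humidity update (10d) similarly transcribes to a one-line function; the default values of $R_{g}$ and $P^{da}$ below are nominal dry-air values assumed for illustration, and the zone temperature must be supplied in absolute units for the ideal-gas relation.

```python
def humidity_update(Wz, Tz_abs, omega_int, m_sa, W_ca, V, dt,
                    R_g=287.0, P_da=101325.0):
    """One step of the aggregate humidity dynamics, eq. (10d).
    R_g [J/(kg K)] and P_da [Pa] defaults are nominal dry-air values,
    not taken from the paper; Tz_abs is the zone temperature in kelvin."""
    return Wz + dt * R_g * Tz_abs / (V * P_da) * (
        omega_int + m_sa * (W_ca - Wz) / (1.0 + W_ca))
```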
Constraints (10e) and (10f) are for the control-oriented cooling and dehumidifying coil model, which was developed in our prior work [5]. The specific functional form in (10e) and (10f) is chosen so that when the chilled water flow rate is zero, no cooling or dehumidification of the air occurs, so the conditioned air temperature and humidity ratio are equal to the mixed air temperature and humidity ratio: $T_{ca}=T_{ma}$ and $W_{ca}=W_{ma}$, when $m_{w}=0$. The interested readers are referred to [5, Section 3.1.1] for details regarding the model.
Constraints (10g) and (10h) are box constraints to maintain the temperature and humidity of the meta-zones within the allowed comfort limits. The constraints are softened using slack variables $\zeta_{T,f}^{low}(k)$, $\zeta_{T,f}^{high}(k)$, $\zeta_{W,f}^{low}(k)$, and $\zeta_{W,f}^{high}(k)$; constraint (10r) ensures that these slack variables are nonnegative. Imposing constraints directly on the relative humidity of zones ($RH_{z}$) is difficult, as relative humidity is a highly nonlinear function of dry bulb temperature and humidity ratio [27, Chapter 1]. So we linearize this function which gives us $a^{low}$, $b^{low}$, $a^{high}$, and $b^{high}$ in (10h), and helps in converting the constraints on relative humidity to humidity ratio ($W_{z}$).
Constraint (10i) is for the outdoor airflow rate, where the minimum allowed value ($m_{oa}^{min}$) is computed based on the ventilation requirements specified in ASHRAE 62.1 [29] and to maintain positive building pressurization.
Constraints (10j)-(10k) and (10n)-(10o) take into account the capabilities of the cooling coil and damper actuators. In constraints (10j) and (10l), the inequalities $T_{ca}(k+1)\leq T_{ma}(k+1)$ and $W_{ca}(k)\leq W_{ma}(k)$ ensure that the cooling coil can only cool and dehumidify the mixed air stream; it cannot add heat or moisture. Similarly, in constraint (10q) the inequality $T_{sa,f}(k)\geq T_{ca}(k)$ ensures that the reheat coils can only add heat; they cannot cool.
Constraint (10p) is to take into account the capabilities of the fan and aggregate capabilities of the VAV boxes. The limits $m_{sa,f}^{low}$ and $m_{sa,f}^{high}$ are computed using the VAV schedule from the mechanical drawings of a building as follows: $m_{sa,f}^{low}:=\sum\limits_{i\in\mathbf{I_{f}}}m_{sa,i}^{low}$ and $m_{sa,f}^{high}:=\sum\limits_{i\in\mathbf{I_{f}}}m_{sa,i}^{high}$.
Note that of the states $x(k)$ defined in (7), $T_{w,f}$ is a fictitious state that cannot be measured, while the other two states aggregate zone temperature ($T_{z,f}$) and aggregate zone humidity ratio ($W_{z,f}$) are measured. So we estimate the current state $\hat{x}(k)$ using a Kalman filter.
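A minimal predict/correct step of a linear Kalman filter for this purpose might look as follows; the system matrices $A$, $C$ and the noise covariances $Q$, $R$ are assumed given, and the known-input terms of the model are omitted for brevity.

```python
import numpy as np

def kalman_update(x_hat, P, y, A, C, Q, R):
    """One predict/correct step of a linear Kalman filter.
    Here y would hold the measured (T_zf, W_zf); the fictitious wall
    temperature T_wf is reconstructed through the model. The known-input
    terms of the dynamics are omitted in this sketch."""
    # Predict
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + Q
    # Correct
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x_hat)) - K @ C) @ P_pred
    return x_new, P_new
```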
3.2 Projection-Based Low-Level Controller (LLC)
The role of the low-level controller (LLC) is to appropriately distribute the aggregate quantities—such as the total supply airflow rate and reheat power consumption—computed by the HLC to individual zones/VAVs. The LLC needs to do so while satisfying two important properties: (i) it should consider the needs of individual zones and distribute accordingly, and (ii) it should act in coherence with the HLC, so that there is minimal mismatch for the MPC optimization in the next round.
The LLC is a projection-based feedback controller that decides on the supply airflow rate and supply air temperature for each VAV box/zone. That is, the control command vector that the LLC needs to decide is:
$$\displaystyle u^{LLC}(k)$$
$$\displaystyle:=[m_{sa,i}(k),T_{sa,i}(k)]^{T}\in\Re^{n_{z}+n_{z}^{rh}},$$
where for $m_{sa,i}$, $i\in\mathbf{I_{f}},\,\,\forall\text{f}\in\mathbf{F}$, and for $T_{sa,i}$, $i\in\mathbf{I_{rh,f}},\,\,\forall\text{f}\in\mathbf{F}$. It decides these control commands by using the following information from the HLC: (i) the total allowed supply airflow rate to all the zones, $m_{sa}(k)=\sum\limits_{f\in\mathbf{F}}m_{sa,f}(k)$, (ii) the total allowed reheat power consumption, $P_{reheat}(k)=\sum\limits_{f\in\mathbf{F}}P_{reheat,f}(k)$, (iii) the temperature $T_{z,f}(k+1)$ at which the zones in each meta-zone should be maintained, and (iv) the conditioned air temperature $T_{ca}(k)$. From here on in this section, we use the superscript HLC ($\bullet^{HLC}$) for these variables to make it clear that they are obtained from the high-level controller.
First, the needs of each zone are assessed based on the current measured temperature $T_{z,i}(k)$ and the range it should be in, $[T_{z,i}^{htg}(k),T_{z,i}^{clg}(k)]$, and are translated into the desired supply airflow rate $m_{sa,i}^{d}(k)$ and supply air temperature $T_{sa,i}^{d}(k)$.
Then these desired values along with the information obtained from the HLC are used to solve a projection problem to compute the control commands for all the zones, $u^{LLC}(k)$.
The procedure used to compute the desired values $m_{sa,i}^{d}(k)$ and $T_{sa,i}^{d}(k)$ is explained below. This is similar to the Dual Maximum control logic presented in Section 4; a schematic representation of it is shown in Figure 6.
1.
First, the temperature range $[T_{z,i}^{htg}(k),\,T_{z,i}^{clg}(k)]$ in which each zone should be maintained is computed as follows: $T_{z,i}^{htg}(k)=max\big{(}T_{z,f}^{HLC}(k+1)-\tilde{T}_{z}^{db}/2,\,T_{z,f}^{low}\big{)}\,\,\forall i\in\mathbf{I_{f}}$ and $T_{z,i}^{clg}(k)=min\big{(}T_{z,f}^{HLC}(k+1)+\tilde{T}_{z}^{db}/2,\,T_{z,f}^{high}\big{)}\,\,\forall i\in\mathbf{I_{f}}$, where $T_{z,f}^{HLC}(k+1)$ is obtained from the HLC, $\tilde{T}_{z}^{db}$ is a deadband, and $T_{z,f}^{low}$ and $T_{z,f}^{high}$ are the limits used in constraint (10g).
2.
If the zone temperature is between the cooling and heating setpoints ($T_{z,i}(k)\in[T_{z,i}^{htg}(k),\,T_{z,i}^{clg}(k)]$), then the controller is in deadband mode. The supply airflow rate is desired to be at its minimum and no heating is required, i.e., $m_{sa,i}^{d}(k)=m_{sa,i}^{low}$ and $T_{sa,i}^{d}(k)=T_{ca}^{HLC}(k)$.
3.
If the zone temperature is warmer than the cooling setpoint ($T_{z,i}(k)>T_{z,i}^{clg}(k)$), then the controller is in cooling mode. The supply airflow rate is desired to be increased as needed and no heating is required, i.e., $m_{sa,i}^{d}(k)=min\big{(}m_{sa,i}^{low}+K_{m,i}^{clg}(T_{z,i}(k)-T_{z,i}^{clg}(k)),\,m_{sa,i}^{high}\big{)}$ and $T_{sa,i}^{d}(k)=T_{ca}^{HLC}(k)$.
4.
If the zone temperature is cooler than the heating setpoint ($T_{z,i}(k)<T_{z,i}^{htg}(k)$), then the controller is in heating mode. Heating is required, and the supply airflow rate is desired to be increased only if additional heating is needed, i.e., $T_{sa,i}^{d}(k)=min\big{(}T_{ca}^{HLC}(k)+K_{T,i}^{htg}(T_{z,i}^{htg}(k)-T_{z,i}(k)),\,T_{sa}^{high}\big{)}$; if $T_{sa,i}^{d}(k)=T_{sa}^{high}$, then $m_{sa,i}^{d}(k)=min\big{(}m_{sa,i}^{low}+K_{m,i}^{htg}(T_{z,i}^{htg}(k)-T_{z,i}(k)),\,m_{sa,i}^{high,reheat}\big{)}$, otherwise $m_{sa,i}^{d}(k)=m_{sa,i}^{low}$.
5.
Finally, we impose the following rate constraints:
$$\displaystyle m_{sa,i}(k-M)-m_{sa,i}^{rate}\Delta T\leq m_{sa,i}^{d}(k)$$
$$\displaystyle\qquad\qquad\qquad\qquad\qquad\leq m_{sa,i}(k-M)+m_{sa,i}^{rate}\Delta T,$$
$$\displaystyle T_{sa,i}(k-M)-T_{sa,i}^{rate}\Delta T\leq T_{sa,i}^{d}(k)$$
$$\displaystyle\qquad\qquad\qquad\qquad\qquad\leq T_{sa,i}(k-M)+T_{sa,i}^{rate}\Delta T,$$
where $m_{sa,i}(k-M)$ and $T_{sa,i}(k-M)$ are the supply airflow rate and supply air temperature from the previous control time step.
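The mode logic of steps 1-4 can be sketched per zone as follows; the `zone` dictionary of limits and gains is an assumed representation, and the rate limiting of step 5 is omitted for brevity.

```python
def desired_setpoints(Tz, T_htg, T_clg, T_ca, zone):
    """Desired airflow and supply air temperature for one zone, following
    the deadband/cooling/heating logic of steps 1-4. `zone` is a dict of
    limits and gains (assumed names): m_low, m_high, m_high_reheat,
    T_sa_high, K_clg, K_T_htg, K_m_htg. Step 5 (rate limiting) is omitted."""
    if Tz > T_clg:                                     # cooling mode
        m_d = min(zone["m_low"] + zone["K_clg"] * (Tz - T_clg), zone["m_high"])
        T_d = T_ca
    elif Tz < T_htg:                                   # heating mode
        T_d = min(T_ca + zone["K_T_htg"] * (T_htg - Tz), zone["T_sa_high"])
        if T_d == zone["T_sa_high"]:                   # more heat needed: raise airflow
            m_d = min(zone["m_low"] + zone["K_m_htg"] * (T_htg - Tz),
                      zone["m_high_reheat"])
        else:
            m_d = zone["m_low"]
    else:                                              # deadband mode
        m_d, T_d = zone["m_low"], T_ca
    return m_d, T_d
```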
These desired values—$m_{sa,i}^{d}(k)$ and $T_{sa,i}^{d}(k)$—along with information from the HLC are used to solve the following projection problem to obtain the control commands for all the zones, $u^{LLC}(k)$:
$$\displaystyle\min_{u^{LLC}(k)}$$
$$\displaystyle\sum\limits_{\text{f}\in\mathbf{F}}\sum_{i\in\mathbf{I_{f}}}\lambda_{m,i}(m_{sa,i}(k)-m_{sa,i}^{d}(k))^{2}+$$
$$\displaystyle\qquad\qquad\sum\limits_{\text{f}\in\mathbf{F}}\sum_{i\in\mathbf{I_{rh,f}}}\lambda_{T,i}(T_{sa,i}(k)-T_{sa,i}^{d}(k))^{2}$$
(11a)
subject to the following constraints:
$$\displaystyle\sum\limits_{\text{f}\in\mathbf{F}}\sum_{i\in\mathbf{I_{f}}}m_{sa,i}(k)\leq m_{sa}^{HLC}(k)$$
(11b)
$$\displaystyle\sum\limits_{\text{f}\in\mathbf{F}}\sum_{i\in\mathbf{I_{rh,f}}}\frac{m_{sa,i}(k)C_{pa}\Big{(}T_{sa,i}(k)-T_{ca}^{HLC}(k)\Big{)}}{\eta_{reheat}COP_{h}}\leq P_{reheat}^{HLC}(k)$$
(11c)
$$\displaystyle m_{sa,i}^{low}\leq m_{sa,i}(k)\leq m_{sa,i}^{high},\quad\forall i\in\mathbf{I_{f}},\,\,\forall\text{f}\in\mathbf{F}$$
(11d)
$$\displaystyle T_{ca}(k)\leq T_{sa,i}(k)\leq T_{sa}^{high},\quad\forall i\in\mathbf{I_{rh,f}},\,\,\forall\text{f}\in\mathbf{F}$$
(11e)
where the sets $\mathbf{I_{f}}$ and $\mathbf{I_{rh,f}}$ are defined at the beginning of this section, $\lambda$s are weights, $m_{sa}^{HLC}(k)=\sum\limits_{f\in\mathbf{F}}m_{sa,f}^{HLC}(k)$, and $P_{reheat}^{HLC}(k)=\sum\limits_{f\in\mathbf{F}}P_{reheat,f}^{HLC}(k)$.
Constraints (11b) and (11c) ensure that the total supply airflow rate and reheat power consumption do not exceed the limits computed by the HLC. Constraints (11d) and (11e) take into account the capabilities of the VAV boxes and reheat coils. In constraint (11e), the inequality $T_{sa,i}(k)\geq T_{ca}(k)$ ensures that the reheat coils can only add heat to the conditioned air and cannot cool. The upper limit on supply air temperature ($T_{sa}^{high}$) in constraint (11e) is to prevent thermal stratification [22].
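A sketch of the projection (11) using SciPy's generic SLSQP solver (the paper's implementation uses CasADi and IPOPT instead); for brevity, every zone is assumed to have a reheat coil and all weights $\lambda$ are set to 1.

```python
import numpy as np
from scipy.optimize import minimize

def llc_projection(m_d, T_d, m_lims, T_ca, T_sa_high,
                   m_sa_hlc, P_reheat_hlc, c_pa=1006.0,
                   eta_reheat=1.0, cop_h=1.0):
    """Projection (11): airflows and supply temperatures closest to the
    desired values subject to the HLC budgets. Simplifying assumptions:
    every zone has a reheat coil and all lambda weights equal 1."""
    n = len(m_d)

    def cost(z):  # (11a) with unit weights; z = [m_sa, T_sa]
        m, T = z[:n], z[n:]
        return np.sum((m - m_d) ** 2) + np.sum((T - T_d) ** 2)

    cons = [
        {"type": "ineq", "fun": lambda z: m_sa_hlc - np.sum(z[:n])},   # (11b)
        {"type": "ineq", "fun": lambda z: P_reheat_hlc -               # (11c)
            np.sum(z[:n] * c_pa * (z[n:] - T_ca)) / (eta_reheat * cop_h)},
    ]
    bounds = [(lo, hi) for lo, hi in m_lims] + [(T_ca, T_sa_high)] * n  # (11d)-(11e)
    z0 = np.concatenate([m_d, T_d])  # warm start at the desired values
    res = minimize(cost, z0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x[:n], res.x[n:]
```

When the HLC budgets are loose, the projection simply returns the desired values; it only deviates from them when a budget binds.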
4 Baseline Control ($BL$)
For zone climate control, we consider the Dual Maximum algorithm [22] as the baseline; a schematic representation of this algorithm is shown in Figure 6. Even though Single Maximum is more commonly used, including in the Innovation Hub building, we choose Dual Maximum for the baseline, as it is the more energy-efficient of the two [22, 30]. The Dual Maximum controller operates in three modes based on the zone temperature ($T_{z,i}$): (i) cooling, (ii) heating, and (iii) deadband.
The zone’s supply airflow rate ($m_{sa,i}$) and supply air temperature ($T_{sa,i}$) are varied based on the mode, as explained below.
1.
Cooling mode: If the zone temperature is warmer than the cooling setpoint, then the controller is in cooling mode. The supply airflow rate ($m_{sa,i}$) is varied between the minimum ($m_{sa,i}^{low}$) and maximum ($m_{sa,i}^{high}$) as needed, and the supply air temperature ($T_{sa,i}$) is equal to the conditioned air temperature ($T_{ca}$), i.e., no reheat.
2.
Heating mode: If the zone temperature is below the heating setpoint, then the controller is in heating mode. First, the supply air temperature ($T_{sa,i}$) is increased up to the maximum allowed value ($T_{sa}^{high}$) as needed to maintain the zone temperature at the heating setpoint. If the zone temperature still cannot be maintained at the heating setpoint, then the supply airflow rate is increased between the minimum ($m_{sa,i}^{low}$) and the heating maximum ($m_{sa,i}^{high,reheat}$) values.
3.
Deadband mode: If the zone temperature is between the heating and cooling setpoints, then the controller is in deadband mode. The supply airflow rate is kept at the minimum, and the supply air temperature is equal to the conditioned air temperature, i.e., no reheat.
In the case of VAV boxes that do not have reheat coils, the logic during cooling and deadband modes is the same. In heating mode, the supply airflow rate is kept at the minimum and the supply air temperature is equal to the conditioned air temperature, as the VAV box cannot heat.
For the AHU-level commands, the $BL$ controller uses a fixed conditioned air temperature, determined based on the expected thermal (sensible and latent) load, and a fixed outdoor airflow rate based on ventilation requirements, e.g., ASHRAE 62.1 [29]. Another consideration in choosing the outdoor airflow rate is the building's positive pressurization requirement [27].
5 Simulation Setup
Recall that the plant is based on an air handling unit serving 33 zones, of which 29 are equipped with reheat coils, and the remaining 4 do not have reheat coils (cooling only).
See Table 1 and Figure 3 for the entire list of VAV boxes/zones. Of the 29 VAV boxes with reheat, three of them serve laboratories which are equipped with fume hoods (209, 303, and 310), and one of them serves restrooms (103). The VAV boxes serving these labs need to be controlled to satisfy the negative pressurization requirements with respect to the corridor, so we assume that they operate according to the existing rule-based feedback control strategy. Therefore, $n_{z}=29$ and $n_{z}^{rh}=25$; for $m_{sa,i}$, $i\in\mathbf{I_{1}}\cup\mathbf{I_{2}}\cup\mathbf{I_{3}}$, where $\mathbf{I_{1}}:=\{101\text{-}102,104\text{-}109\}$, $\mathbf{I_{2}}:=\{201\text{-}208,210\text{-}212\}$, and $\mathbf{I_{3}}:=\{301\text{-}302,304\text{-}309,311\text{-}312\}$; and for $T_{sa,i}$, $i\in\mathbf{I_{rh,1}}\cup\mathbf{I_{rh,2}}\cup\mathbf{I_{rh,3}}$, where $\mathbf{I_{rh,1}}:=\mathbf{I_{1}}\backslash\{106,107\}$, $\mathbf{I_{rh,2}}:=\mathbf{I_{2}}\backslash\{205\}$, and $\mathbf{I_{rh,3}}:=\mathbf{I_{3}}\backslash\{307\}$. The sets $\mathbf{I_{1}}$, $\mathbf{I_{2}}$, and $\mathbf{I_{3}}$ defined above are the VAVs/zones in floors 1, 2, and 3, respectively. The sets $\mathbf{I_{rh,1}}$, $\mathbf{I_{rh,2}}$, and $\mathbf{I_{rh,3}}$ exclude the VAV boxes which do not have a reheat coil.
The outdoor weather data used in simulations is obtained from the National Solar Radiation Database [31] for Gainesville, Florida. As mentioned in Section 2.1, the internal heat load due to occupants is computed based on the number of occupants provided to the zone block. We assume that the zones are occupied from Monday to Friday between 8:00 a.m. and noon and between 1:00 p.m. and 5:00 p.m., with the total numbers of occupants ($n_{p,f}$) in floors 1, 2, and 3 being 24, 26, and 22, respectively. We assume a power density of 12.92 W/m${}^{2}$ (1.2 W/ft${}^{2}$) for internal heat load due to lighting and equipment. For special purpose rooms like electrical and telecommunication, we use a higher power density of 53.82 W/m${}^{2}$ (5 W/ft${}^{2}$). These heat loads from lighting and equipment are assumed to be halved during weekends.
The following zone temperature and humidity limits are used in the simulations: $T_{z}^{low}$ = $21.1\degree$C (70$\degree$F), $T_{z}^{high}$ = $23.3\degree$C (74$\degree$F), $RH_{z}^{low}$ = 20%, and $RH_{z}^{high}$ = 65%. The chosen thermal comfort envelope is shown in Figure 7. Typically the zone temperature limits during unoccupied mode (unocc) are relaxed when compared to the occupied mode (occ), i.e., $[T_{z}^{low,occ},T_{z}^{high,occ}]\subseteq[T_{z}^{low,unocc},T_{z}^{high,unocc}]$. Due to its usage, the Innovation Hub building is always operated in occupied mode, so we assume the same in simulations. For the simulation results reported later, the zone temperature violation is computed as $max\big{(}T_{z}(k)-T_{z}^{high},\,\,T_{z}^{low}-T_{z}(k),\,\,0\big{)}$ and the zone relative humidity violation is computed as $max\big{(}RH_{z}(k)-RH_{z}^{high},\,\,RH_{z}^{low}-RH_{z}(k),\,\,0\big{)}$, with the upper and lower limits mentioned above.
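The violation metrics just defined can be computed per sample as follows, with the comfort limits above as defaults.

```python
import numpy as np

def comfort_violations(Tz, RHz, T_low=21.1, T_high=23.3,
                       RH_low=20.0, RH_high=65.0):
    """Per-sample zone temperature and relative humidity violations of the
    comfort envelope, as defined above. Defaults are the limits used in the
    simulations."""
    Tz, RHz = np.asarray(Tz, float), np.asarray(RHz, float)
    t_viol = np.maximum.reduce([Tz - T_high, T_low - Tz, np.zeros_like(Tz)])
    rh_viol = np.maximum.reduce([RHz - RH_high, RH_low - RHz, np.zeros_like(RHz)])
    return t_viol, rh_viol
```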
The fan power coefficient $\alpha_{fan}$ in (2) is 14.2005 $W/(kg/s)^{3}$, which is obtained using a least squares fit to data collected from the building.
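The single coefficient $\alpha_{fan}$ admits a closed-form least-squares fit; the data arrays below stand in for the building measurements.

```python
import numpy as np

def fit_alpha_fan(m_sa, P_fan):
    """Least-squares fit of the single coefficient in P_fan = alpha * m_sa^3.
    m_sa and P_fan are measurement arrays (here hypothetical stand-ins for
    the building data)."""
    x = np.asarray(m_sa, float) ** 3
    y = np.asarray(P_fan, float)
    return float(x @ y / (x @ x))  # closed-form 1-D least squares
```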
The parameters of the cooling and dehumidifying coil model used in the plant are fit using the procedure explained in Section 2.1.2 of [5]. The root mean square errors for the validation data set are 0.25$\degree$C (0.46$\degree$F, 2%) for $T_{ca}$ and 0.22$\times 10^{-4}kg_{w}/kg_{da}$ (2.6%) for $W_{ca}$.
The AHU in the building is equipped with a draw-through supply fan and therefore the fan is located after the cooling coil. The fan emits heat, which leads to a slight increase in the conditioned air temperature before it is supplied to the VAV boxes. For the simulations, we assume this increase in temperature to be 1.11$\degree$C (2$\degree$F).
MZHC Parameters: The optimization problems in the HLC and LLC are solved using CasADi [32] and IPOPT [33], a nonlinear programming (NLP) solver, on a desktop Windows computer with 16GB RAM and a 3.60 GHz $\times$ 8 CPU. As mentioned in Section 3.1, $\Delta t=5$ minutes, $\Delta T=15$ minutes, $T=24$ hours, $N=96$, and $M=3$. The number of decision variables is 7008 for the HLC and 54 for the LLC. On average, it takes only 3.28 seconds to solve the optimization problem in the HLC and 0.018 seconds to solve the optimization problem in the LLC.
The parameters for the control-oriented cooling and dehumidifying coil model are fit using the procedure explained in [5]. For the validation data set, the root mean square errors are 0.97$\degree$C (1.75$\degree$F, 7.6%) for $T_{ca}$ and 0.63$\times 10^{-4}kg_{w}/kg_{da}$ (7.6%) for $W_{ca}$.
Since the Innovation Hub building has three floors, we aggregate it into three meta-zones, i.e., $f\in\{1,2,3\}=:\mathbf{F}$. The parameters of the aggregate thermal dynamics model for each meta-zone are estimated using the algorithm presented in [20]. The parameters are shown in Table 2. Figure 8 shows the out of sample prediction results using the estimated aggregate RC network model.
For the aggregate humidity dynamics model, floor volumes used are $V_{1}=1036.6~{}m^{3}$, $V_{2}=1504.1~{}m^{3}$, and $V_{3}=1330.8~{}m^{3}$, which are obtained from mechanical drawings of the building.
The following limits are used for the zone temperature constraint (10g): $T_{z,f}^{low}=$21.1$\degree$C (70$\degree$F) and $T_{z,f}^{high}=$23.3$\degree$C (74$\degree$F). The coefficients for the humidity constraint in (10h) are $a^{high}=0.000621~{}kg_{w}/kg_{da}/\degree C$ and $b^{high}=-0.173323~{}kg_{w}/kg_{da}$, which corresponds to a relative humidity of 60%, and $a^{low}=0.000203~{}kg_{w}/kg_{da}/\degree C$ and $b^{low}=-0.056516~{}kg_{w}/kg_{da}$, which corresponds to a relative humidity of 20%. We introduce a factor of safety by using a slightly tighter higher limit of 60% for the relative humidity of the zones when compared to the thermal comfort envelope presented in Figure 7.
The minimum allowed value for the outdoor airflow rate ($m_{oa}^{min}$) is 3.24 kg/s (5700 cfm), which is obtained from the AHU schedule in the mechanical drawings for the building. The maximum possible value for the outdoor airflow rate ($m_{oa}^{max}$) is 8.52 kg/s (15000 cfm). The various limits on the supply airflow rates are obtained using the VAV schedule presented in Table 1. The remaining limits used in the controllers are as follows: $r_{oa}^{low}=0\%$, $r_{oa}^{high}=100\%$, $\tilde{T}_{z}^{db}=0.56\degree$C (1$\degree$F), $T_{ca}^{low}=11.67\degree$C (53$\degree$F), $T_{ca}^{high}=17.2\degree$C (63$\degree$F), and $T_{sa}^{high}=30\degree$C (86$\degree$F). The higher limit on the conditioned air temperature ($T_{ca}^{high}$) is to introduce a factor of safety and make the controller robust.
The MPC controller requires predictions of the various exogenous inputs specified in (9). We compute the loads due to occupants in $q_{int,f}$ and $\omega_{int,f}$ based on the occupancy profile used in simulating the plant. The outdoor weather-related exogenous inputs are assumed to be fully known.
BL Parameters: The cooling and heating setpoints are chosen to be 21.1$\degree$C (70$\degree$F) and 23.3$\degree$C (74$\degree$F), respectively. The minimum, maximum, and heating maximum values for the supply airflow rate of all the VAV boxes are listed in Table 1. The maximum allowed value for the supply air temperature ($T_{sa}^{high}$) is 30$\degree$C (86$\degree$F). The conditioned air temperature ($T_{ca}$) is kept at a constant value of 11.67$\degree$C (53$\degree$F). Typically $T_{ca}$ is kept at 12.8$\degree$C (55$\degree$F), especially in hot-humid climates, to ensure humidity control [34], but recall that we assume there is a 1.11$\degree$C (2$\degree$F) increase in temperature because of the heat from the draw-through supply fan in the AHU, so we keep it at 11.67$\degree$C (53$\degree$F) to compensate for it. The outdoor airflow rate is kept at 3.24 kg/s (5700 cfm), which is obtained from mechanical drawings for the building.
6 Results and Discussions
Performance of the controllers is compared using three types of outdoor weather conditions: hot-humid (Jul/06 to Jul/13), mild (Feb/19 to Feb/26), and cold (Jan/30 to Feb/06). The proposed controller reduces energy use significantly compared to $BL$, from approximately 11% to 68% depending on weather; see Figure 9. The indoor climate control performance of $MZHC$ and $BL$ are nearly identical. With $MZHC$, the RMSE (root mean square error) of zone temperature violation is 0.1$\degree$C (0.18$\degree$F) and the RMSE of zone relative humidity violation is 0.05%, while with $BL$ they are 0.01$\degree$C (0.02$\degree$F) and 0% respectively. The computational cost of the proposed $MZHC$ is small: just a few seconds are needed to compute decisions at every control update. On average, it takes 3.28 seconds to solve the optimization problem in the HLC and 0.018 seconds to solve the optimization problem in the LLC.
Simulation results for the different weather conditions are discussed in detail next.
6.1 Hot-Humid Week
Figure 10 shows the simulation time traces for a hot-humid week. It is found that using the proposed $MZHC$ leads to 11% energy savings when compared to $BL$, as presented in Figure 9.
Both controllers lead to negligible violations of the aggregate zone temperature ($T_{z,f}$) and relative humidity ($RH_{z,f}$) constraints. Data for the three meta-zones are shown in Figure 10(c), and for three individual zones, one from each floor, in Figure 11. $BL$ ensures that sufficiently dry air is supplied to the zones at all times by keeping the conditioned air temperature ($T_{ca}$) at a constant low value of 11.67$\degree$C (53$\degree$F), and hence the humidity limit is not violated. In the case of $MZHC$, the humidity constraint is found to be always active. This can be seen in Figure 10(c); recall that we use a tighter constraint of 60% instead of 65% to introduce a factor of safety. This active constraint limits the increase in $T_{ca}$, which can be seen in Figure 10(d). One of the reasons reheating is required even during such a hot week (the outdoor air temperature is as high as 32.2$\degree$C/90$\degree$F) is this active humidity constraint, which requires dry, and thus cold, air to be supplied to the zones. This could also be one of the reasons $MZHC$ decides to maintain the zone temperatures at the lower limit (see Figure 10(c)): keeping the zones at a higher temperature would require an increase in reheating energy.
Most of the prior works on using MPC for HVAC control ignore humidity and latent heat in their formulations. In an attempt to reduce energy/cost, such controllers are likely to make decisions during these hot-humid conditions that lead to poor indoor humidity, since they are unaware of the factors mentioned above [5]. Such controllers are therefore particularly unsuitable for hot-humid climates.
The energy savings by $MZHC$ are due to two main reasons. First, $MZHC$ increases $T_{ca}$ as long as the humidity constraints are not violated, while $BL$ uses a conservatively designed value as explained above. This leads to a reduction in cooling energy consumption by $MZHC$; see Figures 9 and 10(b). Second, the warmer $T_{ca}$ supplied to the VAV boxes requires less reheating in the case of $MZHC$. This leads to a reduction in the reheat energy consumption, which can be seen in Figures 9 and 10(b). The decisions regarding the supply airflow rates are found to be the same for both controllers, and thus the fan energy consumptions are identical.
Since the outdoor air is hot and humid (see Figure 10(a)), bringing in more than the minimum outdoor airflow rate ($m_{oa}$) required will increase the sensible and latent loads on the cooling coil. So $MZHC$ decides to keep $m_{oa}$ at the minimum; see Figure 10(d).
6.2 Mild Week
Figure 12 shows the simulation results for a mild week. Using $MZHC$ leads to $\sim$60% energy savings when compared to $BL$, as shown in Figure 9. This significant reduction in energy consumption can be attributed to three main reasons. Two of them are the same as those explained in Section 6.1; their effects here are more prominent, and the details are discussed in the subsequent paragraph. The third is that, since the outdoor weather is mild and dry (see Figure 12(a)), $MZHC$ also decides to use “free” cooling when possible by bringing in more than the minimum outdoor air required, which leads to a further reduction in cooling energy consumption.
Figure 12(c) shows the aggregate zone temperature ($T_{z,f}$) and relative humidity ($RH_{z,f}$) for all three floors/meta-zones. Unlike the results for the hot-humid week, the humidity constraint is found to be only intermittently active, as the outdoor weather is relatively dry. This provides more room for optimizing the conditioned air temperature, which has two important implications: (i) a reduction in cooling energy consumption, and (ii) minimal reheat energy consumption. For example, see 13:00-22:00 h in Figure 12(d), where $T_{ca}$ is at its upper limit, and Figure 12(b), where $P_{cc}$ is significantly reduced and $P_{reheat}$ is almost zero.
6.3 Cold Week
The energy savings when using $MZHC$ are found to be significant, as can be seen in Figure 9. Since the outdoor weather is cold and dry, there is a lot of room for optimizing the control commands. The reasons for the energy reduction when using $MZHC$ are the same as those explained in Section 6.2, so we do not discuss them in detail here.
Comment 1.
Recall that, as mentioned in Section 4, the Innovation Hub building uses the Single Maximum algorithm for zone climate control, as opposed to the Dual Maximum algorithm considered here [22]. In the Single Maximum algorithm, the minimum allowed value of the supply airflow rate has to be high enough that the heating load can be met with a supply air temperature low enough to prevent thermal stratification [22]. In the Dual Maximum algorithm, by contrast, the airflow rate is varied during the heating mode, so the minimum allowed airflow rate is not limited by stratification. The Single Maximum algorithm therefore leads to higher fan, cooling, and reheating energy consumption, which implies that the energy savings of the proposed controller would be even higher.
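The difference between the two zone-control logics can be sketched as follows. This is an illustrative simplification only: the actual sequences in [22] also schedule the discharge-air temperature, and the function name and all numbers below are ours.

```python
def vav_airflow(load, m_min, m_max, m_heat_max=None):
    """Normalized supply-airflow command for a VAV box, given a signed
    load in [-1, 1] (positive = cooling, negative = heating).
    Passing m_heat_max gives Dual Maximum behavior; omitting it gives
    Single Maximum behavior (airflow pinned at m_min in heating)."""
    if load >= 0:                       # cooling mode: modulate min -> max
        return m_min + load * (m_max - m_min)
    if m_heat_max is None:              # Single Maximum: constant airflow
        return m_min
    # Dual Maximum: airflow rises toward a separate heating maximum
    return m_min + (-load) * (m_heat_max - m_min)

# Dual Maximum tolerates a low minimum (0.1 here); Single Maximum needs a
# higher minimum (0.3 here) so the heating load can be met without a very
# hot supply air temperature -- hence more fan and reheat energy.
print(round(vav_airflow(-0.2, 0.1, 1.0, m_heat_max=0.5), 4))  # 0.18
print(vav_airflow(-0.2, 0.3, 1.0))                            # 0.3
```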
7 Conclusion
The proposed control architecture is designed to address a number of limitations in the existing literature on multi-zone building control using MPC. The main one is the reliance on a high-resolution multi-zone model, which can be challenging to obtain.
A low-resolution model of the building is more convenient, since such a model can be identified in a tractable manner from measurements. The challenge then is to convert the MPC decisions, which are computed for the fictitious zones in the model, into commands for the VAV boxes of the actual building. The proposed architecture does so by posing this conversion as a projection problem that uses not only what the MPC computes but also feedback from the zones. The result is a principled method of computing VAV box commands that are consistent with the optimal decisions made by the MPC, without needing dynamic models of individual zones. At the same time, the use of feedback from the zones ensures that the zone climate states stay close to the aggregate climate states computed by the MPC.
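As a rough illustration of the projection idea (not the paper's actual formulation, which is a constrained optimization using zone feedback), the simplest version distributes an aggregate airflow command among the VAV boxes of a meta-zone while staying close to their current flows:

```python
import numpy as np

def project_airflows(m_total, m_prev, m_min, m_max):
    """Split an aggregate (meta-zone) airflow command m_total among the
    VAV boxes, staying close to each box's current flow m_prev and within
    box limits. The equality-constrained least-squares problem
        min ||m - m_prev||^2  s.t.  sum(m) = m_total
    has the closed-form "equal shift" solution used below; clipping to
    the box limits may then perturb the sum slightly.  Illustrative
    sketch only -- the names and formula here are ours."""
    shift = (m_total - np.sum(m_prev)) / len(m_prev)
    return np.clip(m_prev + shift, m_min, m_max)

m = project_airflows(0.8, np.array([0.2, 0.4]), 0.0, 1.0)
print(m, round(float(m.sum()), 6))  # [0.3 0.5] 0.8
```

A real implementation would also weight boxes by zone temperature error, which is the role the zone feedback plays in the proposed architecture.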
The positive simulation results provide confidence in the effectiveness of the proposed controller, especially because of the large plant-model mismatch. The simulation testbed mimics a real building closely, including the heterogeneous nature of the zones in the building.
Another observation from the simulations that should be emphasized is the need to incorporate humidity and latent heat. The indoor humidity constraint is seen to be active when using the proposed controller, especially during hot-humid weather. Without humidity being explicitly considered, the controller would likely have caused high space humidity in an effort to reduce energy use.
There are many avenues for further exploration, such as experimental verification, modifying the formulation to minimize cost instead of energy and to include demand charges, improving methods to forecast disturbances, etc.
{acknowledgment}
The research reported here was partially supported by the NSF through award #1934322 and by the State of Florida through a REET (Renewable Energy and Energy Efficient Technologies) grant. We thank David Brooks for help in understanding the design and operation of Innovation Hub’s HVAC system.
References
[1]
Serale, G., Fiorentini, M., Capozzoli, A., Bernardini, D., and Bemporad, A.,
2018.
“Model predictive control (MPC) for enhancing building and HVAC
system energy efficiency: Problem formulation, applications and
opportunities”.
Energies, 11(3), p. 631.
[2]
Shaikh, P. H., Nor, N. B. M., Nallagownden, P., Elamvazuthi, I., and Ibrahim,
T., 2014.
“A review on optimized control systems for building energy and
comfort management of smart sustainable buildings”.
Renewable and Sustainable Energy Reviews, 34, pp. 409 –
429.
[3]
Goyal, S., and Barooah, P., 2013.
“Energy-efficient control of an air handling unit for a single-zone
VAV system”.
In IEEE Conference on Decision and Control, pp. 4796 – 4801.
[4]
Joe, J., and Karava, P., 2019.
“A model predictive control strategy to optimize the performance of
radiant floor heating and cooling systems in office buildings”.
Applied Energy, 245, pp. 65 – 77.
[5]
Raman, N., Devaprasad, K., Chen, B., Ingley, H. A., and Barooah, P., 2020.
“Model predictive control for energy-efficient HVAC operation with
humidity and latent heat considerations”.
Applied Energy, 279, December, p. 115765.
[6]
Chen, X., Wang, Q., and Srebric, J., 2016.
“Occupant feedback based model predictive control for thermal
comfort and energy optimization: A chamber experimental evaluation”.
Applied Energy, 164, pp. 341 – 351.
[7]
Ma, J., Qin, J., Salsbury, T., and Xu, P., 2012.
“Demand reduction in building energy systems based on economic model
predictive control”.
Chemical Engineering Science, 67(1), pp. 92 – 100.
Dynamics, Control and Optimization of Energy Systems.
[8]
Bengea, S. C., Kelman, A. D., Borrelli, F., Taylor, R., and Narayanan, S.,
2013.
“Implementation of model predictive control for an HVAC system in
a mid-size commercial building”.
HVAC&R Research, 20, pp. 121–135.
[9]
Radhakrishnan, N., Su, Y., Su, R., and Poolla, K., 2016.
“Token based scheduling for energy management in building HVAC
systems”.
Applied Energy, 173, pp. 67 – 79.
[10]
Png, E., Srinivasan, S., Bekiroglu, K., Chaoyang, J., Su, R., and Poolla, K.,
2019.
“An internet of things upgrade for smart and scalable heating,
ventilation and air-conditioning control in commercial buildings”.
Applied Energy, 239, pp. 408 – 424.
[11]
Ma, Y., Richter, S., and Borrelli, F., 2012.
“Chapter 14: Distributed model predictive control for building
temperature regulation”.
In Control and Optimization with Differential-Algebraic
Constraints, 22, March, pp. 293–314.
[12]
Mei, J., and Xia, X., 2018.
“Multi-zone building temperature control and energy efficiency using
autonomous hierarchical control strategy”.
In 2018 IEEE 14th International Conference on Control and Automation
(ICCA), pp. 884–889.
[13]
Patel, N. R., Risbeck, M. J., Rawlings, J. B., Wenzel, M. J., and
Turney, R. D., 2016.
“Distributed economic model predictive control for large-scale
building temperature regulation”.
In 2016 American Control Conference (ACC), pp. 895–900.
[14]
Yang, Y., Hu, G., and Spanos, C. J., 2020.
“HVAC energy cost optimization for a multizone building via a
decentralized approach”.
IEEE Transactions on Automation Science and Engineering, 17(4), pp. 1950–1960.
[15]
Yushen Long, Shuai Liu, Lihua Xie, and Johansson, K. H., 2016.
“A hierarchical distributed MPC for HVAC systems”.
In 2016 American Control Conference (ACC), pp. 2385–2390.
[16]
Jorissen, F., and Helsen, L., 2016.
“Towards an automated tool chain for MPC in multi-zone
buildings”.
In 4th International Conference on High Performance Buildings,
pp. 1–10.
[17]
Li, X., and Wen, J., 2014.
“Review of building energy modeling for control and operation”.
Renewable and Sustainable Energy Review, 37,
p. 517–537.
[18]
Kim, D., Cai, J., Ariyur, K. B., and Braun, J. E., 2016.
“System identification for building thermal systems under the
presence of unmeasured disturbances in closed loop operation: Lumped
disturbance modeling approach”.
Building and Environment, 107, pp. 169 – 180.
[19]
Zeng, T., and Barooah, P., 2020.
“Identification of network dynamics and disturbance for a multi-zone
building”.
IEEE Transactions on Control Systems Technology, 28,
August, pp. 2061 – 2068.
[20]
Guo, Z., Coffman, A. R., Munk, J., Im, P., Kuruganti, T., and Barooah, P.,
2020.
“Aggregation and data driven identification of building thermal
dynamic model and unmeasured disturbance”.
Energy and Buildings.
In press, available online September 2020.
[21]
Fritzson, P., and Engelson, V., 1998.
“Modelica — A unified object-oriented language for system
modeling and simulation”.
In ECOOP’98 — Object-Oriented Programming, E. Jul, ed., Springer
Berlin Heidelberg, pp. 67–90.
[22]
ASHRAE, 2011.
The ASHRAE Handbook: Applications (SI Edition).
[23]
Liang, W., Quinte, R., Jia, X., and Sun, J.-Q., 2015.
“MPC control for improving energy efficiency of a building air
handler for multi-zone VAVs”.
Building and Environment, 92, pp. 256 – 268.
[24]
Taylor, S. T., 2007.
“VAV system static pressure setpoint reset”.
ASHRAE journal, 6.
[25]
Jorissen, F., Reynders, G., Baetens, R., Picard, D., Saelens, D., and Helsen,
L., 2018.
“Implementation and Verification of the IDEAS Building Energy
Simulation Library”.
Journal of Building Performance Simulation, 11,
pp. 669–688.
[26]
Roulet, C.-A., Heidt, F., Foradini, F., and Pibiri, M.-C., 2001.
“Real heat recovery with air handling units”.
Energy and Buildings, 33(5), pp. 495 – 502.
[27]
ASHRAE, 2017.
The ASHRAE Handbook: Fundamentals (SI Edition).
[28]
Goyal, S., and Barooah, P., 2012.
“A method for model-reduction of non-linear thermal dynamics of
multi-zone buildings”.
Energy and Buildings, 47, April, pp. 332–340.
[29]
ASHRAE, 2016.
ANSI/ASHRAE standard 62.1-2016, ventilation for acceptable air
quality.
[30]
Goyal, S., Ingley, H., and Barooah, P., 2013.
“Occupancy-based zone climate control for energy efficient
buildings: Complexity vs. performance”.
Applied Energy, 106, June, pp. 209–221.
[31]
National Solar Radiation Database (NSRDB).
https://nsrdb.nrel.gov.
[32]
Andersson, J. A. E., Gillis, J., Horn, G., Rawlings, J. B., and Diehl, M.,
2019.
“CasADi: A software framework for nonlinear optimization and optimal
control”.
Mathematical Programming Computation, 11(1), Mar,
pp. 1–36.
[33]
Wächter, A., and Biegler, L. T., 2006.
“On the implementation of an interior-point filter line-search
algorithm for large-scale nonlinear programming”.
Mathematical Programming, 106(1), Mar, pp. 25–57.
[34]
Williams, J., 2013.
Why is the supply air temperature 55F?
http://8760engineeringblog.blogspot.com/2013/02/why-is-supply-air-temperature-55f.html.
Last accessed: Aug 03, 2020.
Metrics in the space of curves
A. Yezzi
Georgia Institute of Technology, Atlanta, USA
A. C. G. Mennucci
Scuola Normale Superiore, Pisa, Italy
Abstract
In this paper we study geometries on the manifold of curves.
We define a manifold $M$ whose objects $c\in M$ are curves, which we
parameterize as $c:S^{1}\to\mathbb{R}^{n}$ ($n\geq 2$, $S^{1}$ is the circle).
Given a curve $c$, we define the tangent space $T_{c}M$ of $M$ at $c$
to include all deformations $h:S^{1}\to\mathbb{R}^{n}$ of $c$.
We discuss Riemannian and Finsler metrics $F(c,h)$ on this manifold
$M$, and in particular the case of the geometric $H^{0}$ metric
$F(c,h)=\int|h|^{2}\,ds$ of normal deformations $h$ of $c$; we study
the existence of minimal geodesics of $H^{0}$ under constraints; we
moreover propose a conformal version of the $H^{0}$ metric.
Date: January 18, 2021
2000 Mathematics Subject Classification: 58B20, 58D15, 58E10
This work is dedicated to the memory of Anthony J. Yezzi, Sr.,
father of Anthony Yezzi, Jr., who passed away shortly after its
initial completion. May he rest in peace and be remembered always
as a loving husband, father, and grandfather who is and will continue
to be dearly missed.
1 Introduction
In this paper we study geometries on the manifold $M$ of curves.
This manifold contains curves $c$, which we parameterize as
$c:S^{1}\to\mathbb{R}^{n}$ ($S^{1}$ is the circle).
Given a curve $c$,
we define the tangent space $T_{c}M$ of $M$ at $c$ to include
deformations $h:S^{1}\to\mathbb{R}^{n}$, so that an infinitesimal deformation of the
curve $c$ in direction $h$ yields the curve
$c(\theta)+\varepsilon h(\theta)$.
This manifold $M$ is the Shape Space that is studied in
this paper.
We would like to define a Riemannian metric
on the manifold $M$ of curves:
this means that, given two deformations $h,k\in T_{c}M$, we want to define
a scalar product $\langle h,k\rangle_{c}$, possibly dependent on $c$.
The Riemannian metric would then entail a distance $d(c_{0},c_{1})$ between
the curves in $M$, defined as the infimum of the length $\mathop{\operator@font Len}\nolimits(\gamma)$
of all smooth paths $\gamma:[0,1]\to M$ connecting $c_{0}$ to $c_{1}$.
We call minimal geodesic a path providing the minimum of $\mathop{\operator@font Len}\nolimits(\gamma)$
in the class of $\gamma$ with fixed endpoints.
A number of methods have been proposed in Shape Analysis to
define distances between shapes, averages of shapes and optimal
morphings between shapes;
some of these approaches are reviewed in §2. At the same time, there has been much
previous work in Shape Optimization, for example Image Segmentation
via Active Contours and 3D Stereo Reconstruction via Deformable Surfaces;
in these latter methods, many authors have defined energy functionals
on curves (or surfaces) and utilized the Calculus of Variations to
derive curve evolutions minimizing the energy functionals, often
referring to these evolutions as gradient flows. For example, the
well known Geometric Heat Flow, popular for its smoothing effect on
contours, is often referred to as the gradient flow for length.
The reference to these flows as gradient flows implies a
certain Riemannian metric on the space of curves; but this fact has
been largely overlooked. We call this metric $H^{0}$ henceforth. If one
wishes to have a consistent view of the geometry of the space of
curves in both Shape Optimization and Shape Analysis, then one should
use the $H^{0}$ metric when computing distances, averages and morphs
between shapes.
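For a discretized closed curve, the $H^{0}$ quantity $\int|h|^{2}\,ds$ can be approximated directly; a minimal numerical sketch (the discretization below is our own, not from the paper):

```python
import numpy as np

def h0_norm_sq(c, h):
    """Discrete H^0 (geometric L^2) norm squared of a deformation h of a
    closed curve c: the integral of |h|^2 with respect to arclength,
    approximated on a closed polygon.
    c, h: arrays of shape (N, n), with c[i] the i-th sample of the curve."""
    ds = np.linalg.norm(np.roll(c, -1, axis=0) - c, axis=1)  # edge lengths
    return float(np.sum(np.sum(h ** 2, axis=1) * ds))

# Unit circle with a unit normal deformation: integral of |h|^2 ds
# reduces to the curve's length, 2*pi.
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
c = np.stack([np.cos(t), np.sin(t)], axis=1)
h = c.copy()                       # outward unit normal of the unit circle
print(round(h0_norm_sq(c, h), 3))  # 6.283, close to 2*pi
```

The arclength weight $ds$ is what makes the quantity geometric, i.e. independent of the parameterization of $c$.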
In this paper we first introduce the metric $H^{0}$ in §2.1; we immediately remark that, surprisingly, it
does not yield a well-defined metric structure, since the associated
distance is identically zero. (This striking fact was first described
in [Mum].) In §4 we analyse this
metric; we show that the lower-semicontinuous relaxation of the associated energy functional is
identically zero (see §3.1 and 4.11);
but we prove in Thm. 4.12 that, under
additional constraints on the curvature of admissible curves, the
metric $H^{0}$ admits minimal geodesics; we propose in §4.6 an
example that justifies some of the hypotheses of 4.12. We
can then define in §4.5 the
Shape Space of curves with bounded curvature, where the metric
$H^{0}$ entails a positive distance. These hypotheses on curvature,
however, are not compatible with the classical definition of a
Riemannian geometry.
More recently, a Riemannian metric was proposed in
[MM] for the space $M$ of curves (see
§2.1.3 here); this metric may fix the above
problems; but it would significantly alter the nature of gradient
flows used thus far in various Shape Optimization problems (assuming that
one wishes to make those gradient flows consistent with this
new metric). In this metric, distances measured between curves are
defined using first and second order derivatives of the curves (and
therefore the resulting optimality conditions involve up to fourth
order derivatives); as a consequence, flows designed to converge
towards these optimality conditions are necessarily fourth order,
thereby precluding the use of Level Set Methods which have
become popular in the field of Computer Vision and Shape Optimization.
We propose instead in §5 a class of
conformal metrics that fix the above problems while minimally
altering the earlier flows: in fact the new gradient flows will amount
to a simple time reparameterization of the earlier flows. In addition
the conformal metrics that we propose have some nice numerical and
computational properties: distances measured between curves are
defined using only first order derivatives (and therefore the
resulting optimality conditions involve only second order
derivatives); as a consequence, flows designed to converge towards
these optimality conditions are second order, thereby allowing the use
of Level Set methods: we indeed show such an implementation and a
numerical example in §5.3. We also proposed in
[YM04] a differential operator that is well adapted to the
problem at hand: we review it here as well, in §5.1.1.
1.1 Riemannian and Finsler geometries
Let $M$ be a smooth connected differentiable manifold. (If $M$ is infinite dimensional, we suppose that
it is modeled on a separable Banach or Hilbert space;
see [Lan99], ch. II.)
For any $c\in M$,
let $T_{c}M$ be the tangent space at $c$, that is
the set of all vectors tangent to $M$ at $c$;
let $TM$ be the tangent bundle, that is the bundle of all tangent spaces.
Definition 1.1
Let $X$ be a vector space; a norm $|\cdot|$ satisfies
1.
$|v|$ is positively homogeneous, i.e.
$$|\lambda v|=\lambda|v|\qquad\forall\,\lambda\geq 0\,,$$
2.
$$|v+w|\leq|v|+|w|$$
(which, by (1), is equivalent to asking that $|v|$ be convex), and
3.
$|v|=0$ only when $v=0$.
If the last condition is not satisfied, then $|\cdot|$ is a
seminorm.
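As a concrete illustration of the definition, the map $v\mapsto|v_{0}|$ on $\mathbb{R}^{2}$ satisfies properties 1 and 2 but not 3, so it is a seminorm and not a norm (numerical sketch, ours):

```python
import numpy as np

def seminorm(v):
    """|v| = |v_0|: positively homogeneous and subadditive, but it
    vanishes on nonzero vectors, so it is a seminorm, not a norm."""
    return abs(v[0])

v = np.array([1.0, 2.0]); w = np.array([-0.5, 3.0])
print(seminorm(2 * v) == 2 * seminorm(v))            # homogeneity (1)
print(seminorm(v + w) <= seminorm(v) + seminorm(w))  # triangle ineq. (2)
print(seminorm(np.array([0.0, 1.0])))                # 0 on a nonzero vector
```

This degeneracy is precisely what will occur later for the metric $F$ on the space of immersions (see 1.13): deformations tangent to the curve are "invisible" to a geometric metric.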
Definition 1.2
We define a Finsler metric to be a
function $F:TM\to\mathbb{R}^{+}$ such that
•
$F$ is lower semicontinuous and locally bounded from above,
and,
•
for all $c\in M$, $v\mapsto F(c,v)$ is a norm on
$T_{c}M$. (Sometimes $F$ is called a “Minkowski norm.”)
We will also consider it to be symmetric, that is, $F(c,-v)=F(c,v)$.
We will write $|v|_{c}$ for $F(c,v)$, or simply $|v|$ when the base
point $c$ can be easily inferred from the context.
We define the length of any locally Lipschitz path $\gamma:[0,1]\to M$ as
$$\mathop{\mathrm{Len}}\gamma=\int_{0}^{1}|\dot{\gamma}(v)|_{\gamma(v)}\,dv$$
and the energy as
$$E(\gamma)=\int_{0}^{1}|\dot{\gamma}(v)|_{\gamma(v)}^{2}\,dv\,.$$
(As suggested in [MM], we avoid referring to $\gamma$ as a curve, because
confusion would arise once we introduce the manifold $M$ of closed curves;
so we always speak of paths in the infinite dimensional manifold $M$.
Note also that these paths are open-ended, while the curves comprising $M$ are closed.)
We define the geodesic distance
$d(x,y)$ as the infimum
$$d(x,y)=\inf\mathop{\operator@font Len}\nolimits\gamma$$
(1.1)
for all locally Lipschitz paths $\gamma$ connecting $x$ to $y$.
The path connecting $x$ to $y$ that provides $\min E(\gamma)$
in the class of all paths $\gamma$ connecting $x$ to $y$
is the (minimal length) geodesic connecting $x$ to $y$.
(As explained in Proposition A.1 in Appendix A,
we may equivalently define a minimal geodesic to be a minimizer of
$\mathop{\mathrm{Len}}(\gamma)$ that has been reparameterized to constant velocity.)
Note that, in the classical books on Finsler Geometry
(see for example [BCS]),
$M$ is finite dimensional, and $F(c,v)^{2}$ is considered to be
smooth and strongly convex in the $v$ variable (for $v\neq 0$)
(see also 1.1.2);
this hypothesis is not needed,
though, for the theorems in §A.
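The length and energy of a path can be approximated from uniformly spaced samples; the sketch below (our own discretization, with the Euclidean norm standing in for $F$) also illustrates the Cauchy–Schwarz relation $\mathop{\mathrm{Len}}(\gamma)^{2}\leq E(\gamma)$, with equality exactly for constant-speed paths:

```python
import numpy as np

def path_length_and_energy(gamma):
    """Discrete length and energy of a path given as samples gamma[j]
    at uniform times v_j in [0, 1]; the norm is Euclidean here."""
    dv = 1.0 / (len(gamma) - 1)
    speeds = np.linalg.norm(np.diff(gamma, axis=0), axis=1) / dv
    length = float(np.sum(speeds) * dv)
    energy = float(np.sum(speeds ** 2) * dv)
    return length, energy

# Constant-speed segment from (0,0) to (3,4): Len = 5 and E = Len^2 = 25,
# the equality case of Len(gamma)^2 <= E(gamma).
v = np.linspace(0, 1, 101)[:, None]
gamma = (1 - v) * np.array([0.0, 0.0]) + v * np.array([3.0, 4.0])
L, E = path_length_and_energy(gamma)
print(round(L, 6), round(E, 6))  # 5.0 25.0
```

This equality case is why minimizers of $E$ coincide with constant-speed minimizers of $\mathop{\mathrm{Len}}$, as noted above.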
1.1.1 Riemannian geometry
Definition 1.3
Suppose that $M$ is modeled on a separable Hilbert space $H$.
A Riemannian geometry is defined by
associating to $M$ a
Riemannian metric $g$; for any $c\in M$ , $g(c)$ is a
positive definite
bilinear form on the tangent space $T_{c}M$ of $M$ at $c$: that is, if
$h,k$ are tangent to $M$ at $c$, then $g(c)$ defines
a scalar product $\langle h,k\rangle_{c}$.
If the form $g$ is positive semi-definite then the
geometry is degenerate,
and we will speak of a pseudo-Riemannian metric.
If it is positive definite, then the
tangent space $T_{c}M$ is isomorphic to $H$, by
means of the metric $g$.
See [Lan99], ch.VII.
A Riemannian geometry is a special case of a Finsler geometry:
we define the norm
$$|h|_{c}=\sqrt{\langle h,h\rangle_{c}}$$
(pseudo-Riemannian geometries produce a seminorm $|h|_{c}$).
In the following we often drop the base point $c$ from
$\langle h,k\rangle_{c}$ and $|v|_{c}$.
If $M$ is finite dimensional, then we can write
$$\langle h,k\rangle_{c}=h_{i}g^{i,j}(c)k_{j}$$
in a choice of local coordinates;
then the matrix $g^{i,j}(c)$ is smooth and positive definite.
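In finite dimensions the formula above is just a quadratic form; a minimal numerical sketch:

```python
import numpy as np

def inner(h, k, g):
    """Scalar product <h, k>_c = h_i g^{i,j}(c) k_j in local coordinates;
    g must be a symmetric positive definite matrix."""
    return float(h @ g @ k)

g = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # a positive definite metric at c
h = np.array([1.0, 0.0]); k = np.array([0.0, 1.0])
print(inner(h, k, g))               # 0.5
print(inner(h, h, g) > 0)           # positive definiteness on h != 0
```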
1.1.2 Geodesics and the exponential map
Suppose that the metric $F$ is a regular Finsler metric, that is,
$F$ is of class $C^{2}$ and $F(c,\cdot)^{2}$ is
strongly convex (for $v\neq 0$);
such is the case when $F(x,v)^{2}=|v|^{2}=\langle v,v\rangle$ in
a smooth Riemannian manifold.
Let
$$\ddot{\gamma}=\Gamma(\dot{\gamma},\gamma)$$
be the Euler–Lagrange O.D.E. characterizing critical paths $\gamma$ of
$$E(\gamma)=\int_{0}^{1}|\dot{\gamma}(v)|^{2}\,dv\,.$$
Define the exponential map $\exp_{c}:T_{c}M\to M$
by $\exp_{c}(\eta)=\gamma(1)$, where $\gamma$ is the
geodesic solving
$$\ddot{\gamma}(v)=\Gamma(\dot{\gamma}(v),\gamma(v))\,,\qquad\gamma(0)=c\,,\qquad\dot{\gamma}(0)=\eta\,.$$
(1.2)
Then we may state the following extended version of the Hopf–Rinow theorem.
Theorem 1.4 (Hopf-Rinow)
Suppose that
$M$ is finite dimensional and connected;
then the following are equivalent:
•
$(M,d)$ is complete
•
closed bounded sets are compact
•
$M$ is geodesically complete at a point $c$, that is,
(1.2) can be solved for all $v\in\mathbb{R}$ and all $\eta$,
that is, the map $\eta\mapsto\exp_{c}(\eta)$ is well defined
•
for any $c$, the map $\eta\mapsto\exp_{c}(\eta)$ is well defined and
surjective;
and each of these implies that for all $x,y\in M$ there exists a minimal geodesic
connecting $x$ to $y$.
1.1.3 Submanifolds
The simplest examples of Riemannian
manifolds are the submanifolds of a Hilbert space $H$.
We think of the finite dimensional case, when $H=\mathbb{R}^{n}$,
or the infinite dimensional case, when we assume that $H$ is
separable.
Proposition 1.5
Define the distance $d(x,y)=\|x-y\|$ in $H$.
Suppose that $M\subset H$ is a closed submanifold.
We may view $M$ as a metric space, $(M,d)$: then it is complete.
We may moreover induce a Riemannian structure on $M$ using the
scalar product of $H$: this in turn induces the geodesic distance
$d^{g}$, as defined in (1.1).
Then $d^{g}\geq d$, and $(M,d^{g})$ is complete as well.
If $M$ is moreover of class $C^{2}$,
then $d$ and $d^{g}$ are locally equivalent.
(Proof by standard arguments; see for example sec. VIII.§6 in [Lan99].)
It is not guaranteed that $d$ and $d^{g}$ are globally equivalent,
as the following example shows.
Example 1.6
(We thank A. Abbondandolo for this remark.)
Let $H=\mathbb{R}^{2}$ and $M=\{(s,\sin(s^{2}))\}$.
Let $x_{n}=(\sqrt{\pi n},0)\in M$. Then $d(x_{n},x_{n+1})\to 0$ whereas
$d^{g}(x_{n},x_{n+1})\geq 2$.
In a certain sense, infinite dimensional Riemannian
manifolds are simpler than their finite-dimensional
counterparts: indeed, by [EE70],
Theorem 1.7 (Eells-Elworthy)
Any smooth differentiable manifold $M$ modeled on an
infinite dimensional Hilbert space $H$
may be embedded as an open subset of a Hilbert space.
With respect to geodesics, though, the matter is much more complicated.
Suppose $M$ is infinite dimensional.
In this case, if $(M,d)$ is complete,
then the equation (1.2) of geodesics
can be solved for all $v\in\mathbb{R}$;
but (unfortunately) many other important implications
contained in the Hopf–Rinow theorem are false.
The most important example is due to Atkin
[Atk75]:
Example 1.8 (Atkin)
There exists an infinite dimensional
complete connected
smooth Riemannian manifold $M$ and $x,y\in M$ such that there is no geodesic
connecting $x$ to $y$.
We give a simpler example of
an infinite dimensional Riemannian manifold $M$ such
that the metric space $(M,d)$ is complete, but
there exist two points $x,y\in M$ that cannot be connected
by a minimal geodesic.
Example 1.9 (Grossman)
(This example also appears in a remark in sec. VIII.§6 in [Lan99].)
Let $l^{2}(\mathbb{N})$ be the Hilbert space of real
sequences $x=(x_{0},x_{1},\ldots)$
with the scalar product
$$\langle x,y\rangle=\sum_{i=0}^{\infty}x_{i}y_{i}\,.$$
Let $e_{i}=(0,\ldots,0,1,0,\ldots)$, where the $1$ is in the $i$-th position.
We build an ellipsoid
$$M=\Big\{x\in l^{2}\ \Big|\ x_{0}^{2}+\sum_{i=1}^{\infty}\frac{x_{i}^{2}}{(1+1/i)^{2}}=1\Big\}$$
in $l^{2}(\mathbb{N})$.
Since $M$ is closed, it is complete (with the induced metric).
Let $N=e_{0}=(1,0,0,\ldots)$, $S=-e_{0}=(-1,0,0,\ldots)$.
Let $\gamma_{i}$ be the geodesic starting from
$\gamma_{i}(0)=N$ and with starting speed $\dot{\gamma}_{i}(0)=e_{i}$; then there exists a first $\lambda_{i}>0$
such that $\gamma_{i}(\lambda_{i})=S$
(moreover $\mbox{Len}(\gamma_{i})=\lambda_{i}$).
Then
$\mbox{Len}(\gamma_{i})\to\pi$, but the sequence $\gamma_{i}$
does not have a limit.
Note that we may think of using weak convergence: but
$\gamma_{i}$ weakly converges to the diameter; and $e_{i}$
weakly converges to $0$.
See also [Eke78].
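The lengths $\lambda_{i}$ in this example can be computed numerically: by the reflection symmetry of the ellipsoid, the geodesic $\gamma_{i}$ is the planar half-meridian through $e_{0}$ and $e_{i}$, an arc of an ellipse with semi-axes $1$ and $1+1/i$ (the quadrature below is our own sketch):

```python
import numpy as np

def half_meridian_length(b, n=100000):
    """Length of the half-meridian t -> (cos t, b sin t), t in [0, pi],
    which joins N to S on the ellipsoid when b = 1 + 1/i; by symmetry
    this planar section is a geodesic, so its length is lambda_i.
    Trapezoidal quadrature of the arclength integral."""
    t = np.linspace(0.0, np.pi, n)
    speed = np.sqrt(np.sin(t) ** 2 + (b * np.cos(t)) ** 2)
    return float(np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t)))

for i in (1, 10, 100):
    print(i, round(half_meridian_length(1 + 1 / i), 4))
# the lengths decrease toward pi as i grows, yet no limiting
# geodesic of length pi exists on M
```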
It is then, in general, quite difficult to prove that
an infinite dimensional manifold admits minimal geodesics
(even when it is known to be complete); a known result is
Theorem 1.10 (Cartan-Hadamard)
Suppose that
$M$ is connected, simply connected, and has seminegative curvature;
then the following are equivalent:
•
$(M,d)$ is complete
•
for some $c$, the map $\eta\mapsto\exp_{c}(\eta)$ is well defined;
and then there exists a unique minimal geodesic connecting any
two points.
(See Corollaries 3.9 and 3.11 in sec. IX.§3 of [Lan99].)
1.2 Geometries of curves
Now suppose that $c(\theta)$ is an immersed curve $c:S^{1}\to\mathbb{R}^{n}$,
where $S^{1}$ is the circle; we want to define
a geometry on $M$, the space of all such immersions $c$.
$M$ is a manifold;
the tangent space $T_{c}M$ of $M$ at $c$ contains all the deformations
$h\in T_{c}M$ of the curve $c$, that is, all the vector fields along $c$. Then
an infinitesimal deformation of the
curve $c$ in “direction” $h$ will yield (to first order) the curve
$c(\theta)+\varepsilon h(\theta)$.
If $\gamma:[0,1]\to M$ is a path connecting curves,
then we may define a homotopy $C:S^{1}\times[0,1]\to\mathbb{R}^{n}$
associated to $\gamma$ by $C(\theta,v)=\gamma(v)(\theta)$
(more on this in §3.2.1).
1.2.1 Finsler geometry of curves
Any energy that we will study in this paper can be recast
in the general form
$$E(\gamma)=\int_{0}^{1}F\Big(\gamma(\cdot,v),\partial_{v}\gamma(\cdot,v)\Big)^{2}\,dv$$
(1.3)
where $F(c,h)$ is defined when $c$ is a curve
and $h\in T_{c}M$ is a deformation of $c$;
note that $F$ will often be a Minkowski seminorm, and not a
norm, on the space $M$ of immersions (that is, it will fail to satisfy
property 3 in Definition 1.1; see 1.13).
We look mainly for metrics on the space $M$ that are independent
of the parameterization of the curves $c$: to this end, [MM]
define
$$B_{i}=B_{i}(S^{1},\mathbb{R}^{2})=\mathrm{Imm}(S^{1},\mathbb{R}^{2})/\mathrm{Diff}(S^{1})$$
and
$$B_{i,f}=B_{i,f}(S^{1},\mathbb{R}^{2})=\mathrm{Imm}_{f}(S^{1},\mathbb{R}^{2})/\mathrm{Diff}(S^{1})\,,$$
the quotients
of the space $\mathrm{Imm}(S^{1},\mathbb{R}^{2})$ of smooth immersions, and of the
space $\mathrm{Imm}_{f}(S^{1},\mathbb{R}^{2})$ of smooth free immersions,
by $\mathrm{Diff}(S^{1})$ (the group of diffeomorphisms of $S^{1}$).
$B_{i,f}$ is a manifold, the base of a principal fiber bundle, as
proved in §2.4.3 in [MM], while $B_{i}$ is not.
Any metric that does not
depend on the parameterization of the curves $c$ (as defined in
eq.(1.4)) may be projected to $B_{i}$
by means of the results in §2.5 in [MM]
(the most important step appears also here as 3.10).
Remark 1.11 (extending $M$)
$\mbox{Imm}(S^{1},{\mathrm{l\hskip-1.5ptR}}^{n})$ is an open subset of the Banach space
$C^{1}(S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n})$, where it is connected iff $n\geq 3$; whereas
in the case $n=2$ of planar curves, it is divided into connected
components, each containing curves with the same
winding number.
To define a Riemannian geometry on $M=\mbox{Imm}(S^{1},{\mathrm{l\hskip-1.5ptR}}^{n})$,
it may be convenient to view it as a subset of a Hilbert
space such as $H^{1}(S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n})$, and to complete it there.
1.3 Abstract approach
As a first part of this paper, we want to cast the problem
in an abstract setting.
There are some general properties that we may ask of a metric
defined as in sec. 1.2.1.
We start with a fundamental property (that is a prerequisite
to most of the others).
0.
[well-posedness and existence of minimal geodesics] The
Finsler metric $F$ induces a well defined geodesic distance $d$
(well defined meaning: the distance between different points is positive, and
$d$ generates the same topology that the atlas of the manifold $M$ induces);
$(M,d)$ is complete (or, it may be completed inside the space of
mappings $c:S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n}$);
for any two curves in $M$, there exists a minimal
geodesic connecting them.
Let $C$ be a minimal geodesic connecting $c_{0}$ and $c_{1}$.
We assume that $C$ is a homotopy of $c_{0}$ to $c_{1}$.
1.
[rescaling]
if $\lambda>0$
and we rescale $c_{0},c_{1}$ to $\lambda c_{0},\lambda c_{1}$,
then we would like $\lambda C$ to be a minimal geodesic.
A sufficient condition is that
$F(\lambda c,h)=\lambda^{a}F(c,h)$
for some $a\geq 0$ and all $\lambda\geq 0$.
(We could ask
$F(\lambda c,h)=l(\lambda)F(c,h)$
for some monotone increasing $l:{\mathrm{l\hskip-1.5ptR}}^{+}\to{\mathrm{l\hskip-1.5ptR}}^{+}$
with $l(x)>0$ for $x>0$; but,
if $l$ is continuous, then $l(x)=x^{a}$ for some
$a\geq 0$.)
2.
[Euclidean invariance]
If we apply a Euclidean transformation $A$ to $c_{0}$, $c_{1}$,
we would like $AC$ to be a minimal geodesic connecting $Ac_{0}$ to
$Ac_{1}$.
If $F(Ac,Ah)=F(c,h)$ for all $c,h$, then the above is satisfied.
3.
[parameterization invariance]
there are two versions of this: we define
curve-wise parameterization invariance
when the metric does not depend on the parameterization of the curve,
that is,
$$F(\tilde{c},\tilde{h})=F(c,h)$$
(1.4)
when $\tilde{c}(t)=c(\varphi(t))$ and $\tilde{h}(t)=h(\varphi(t))$
are reparameterizations of $c,h$;
homotopy-wise parameterization invariance
define
$$\tilde{C}(\theta,v)=C(\varphi(\theta,v),v)$$
where $\varphi:S^{1}\times[0,1]\to S^{1}$, $\varphi\in C^{1}$, and
$\varphi(\cdot,v)$ is a diffeomorphism for all fixed $v$; we
would like that, in this case, $E(\tilde{C})=E(C)$.
If $F$ can be written as
$$F=F(c,\pi_{N}h)$$
(1.5)
(that is, $F$ depends only on the normal part of the deformation)
and if it satisfies (1.4), then, by proposition 3.8, $E(\tilde{C})=E(C)$.
In both cases, the geometric structure we are building depends
only on the embedding of the curves, and not on the parameterization.
Properties 1 and 2 above hold for all the examples that we will
show; property 3 is satisfied by some of them.
Property 0 is possibly the most important requirement.
Definition 1.12
Any metric satisfying the above 1,2,3 is called a geometric
metric.
Note that
Remark 1.13
If $F$ satisfies (1.5) then $F(c,\cdot)$ is
necessarily a seminorm and not a norm (that is, it does not satisfy
property 3 in Definition 1.1) on the space $M$:
we should then talk of a pseudo-Riemannian geometry of curves.
The projection of $F$ to the space $B_{i,f}$ may nonetheless be
a norm.
These other properties would be very important in applications in
Computer Vision.
(Some of these properties are much trickier:
we do not know sufficient conditions that imply them.)
4.
[finite projection]
There should be a finite dimensional approximation of our metric,
for the purposes of numerical computation.
A sufficient condition is that the energy $E(C)$ be well defined and
continuous with respect to the norm of a Sobolev space $W^{k,p}(I)$
(with $k\in{\mathrm{l\hskip-1.5ptN}}$ and $p\in[1,\infty)$):
this would imply that we may approximate $C$ by smooth functions $C_{h}$
and $E(C_{h})\to E(C)$.
5.
[embedding preserving]
if $c_{0}$ and $c_{1}$ are embedded, we would like $C(\cdot,v)$
to be embedded for all $v$
6.
[maximum principle]
In the following, suppose that curves are embedded in ${\mathrm{l\hskip-1.5ptR}}^{2}$, and
write $c\subset c^{\prime}$ to mean that $c^{\prime}$ is contained in the bounded
region of the plane enclosed by $c$. (By the Jordan curve theorem, any embedded closed
curve in the plane divides the plane into two regions, one bounded and
one unbounded.)
If $c_{0}\subset c^{\prime}_{0}$ and $c_{1}\subset c^{\prime}_{1}$,
then we would like that there exist
a minimal geodesic $C^{\prime}$ connecting $c^{\prime}_{0}$ to $c^{\prime}_{1}$ such that
$C^{\prime}(\cdot,v)\subset C(\cdot,v)$ for $v\in[0,1]$.
This is but another version of the Maximum Principle;
it would imply that the minimal geodesic is unique.
It is an important prerequisite if we want to implement
numerical algorithms by using Level Set Methods.
7.
[convexity preserving]
if $c_{0}$ and $c_{1}$ are convex, we would like $C(\cdot,v)$
to be convex for all $v$
8.
[convex bounding]
we would like that, for all $v$, (the image of) the curve
$C(\cdot,v)$
be contained in the convex envelope of the curves $c_{0},c_{1}$
9.
[translation]
If $c_{1}=a+c_{0}$ is a translation of $c_{0}$ by a vector $a\in{\mathrm{l\hskip-1.5ptR}}^{n}$,
we would like the
uniform movement $C(\theta,v)=c_{0}(\theta)+va$ to be
a minimal geodesic from $c_{0}$ to $c_{1}$
So we state the abstract problem. (Solving this problem in the abstract
would be comparable to what Shannon did for communication theory, where in [Sha49] he
asserted that there exists a code for communication on a noisy
channel, without actually showing an efficient algorithm to compute it.)
Problem 1.14
Consider the space of curves $M$, and the family $\mathcal{G}$ of all
Riemannian (or regular Finsler) geometries $F$ on $M$. ($\mathcal{G}$ is non empty:
see [Lan99]. By using 1.7, it would seem that there
exist Riemannian metrics $F$ on $M$ such that $(M,F)$ is geodesically
complete; we did not carry out a detailed proof.)
Does there exist a metric $F\in\mathcal{G}$, satisfying
the above properties 0,1,2,3?
Consider metrics $F\in\mathcal{G}$ that
may be written in integral form
$$F(c,h)=\int_{c}f\big(c(s),\partial_{s}c(s),\ldots,\partial_{s}^{j}c(s),\,h(s),\ldots,\partial_{s}^{i}h(s)\big)\,ds$$
what is the relationship between the
degrees $i,j$ and the properties in this section?
2 Examples, and different approaches and results
We now present some approaches and ideas that have been proposed
to define a metric and a distance on the space of curves;
we postpone exact definitions to section §3.2.
2.1 Riemannian geometries of curves
A Riemannian geometry is obtained by associating to
$T_{c}M$ the scalar product of
a Hilbert space $H$ of square integrable functions.
We actually have many choices for the definition of $H$.
2.1.1 Parametric (non-geometric) form of $H^{0}$
•
We may define
$$H=H^{0}(S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n})=L^{2}(S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n})$$
endowed with the scalar product
$$\langle h,k\rangle=\int_{S^{1}}\langle h(\theta),k(\theta)\rangle\,d\theta$$
(2.1)
for all $h,k\in T_{c}M$;
this is a common choice in analysis and geometry texts
(see ch. 2.3 in [Kli82], or the long list of references at
the end of II§1 in [Lan99]);
however, the resulting metric is not invariant with respect to
reparameterization of curves (see (3) on page
3), and is therefore not geometric.
Remark 2.1
This is the most common choice in numerical applications:
each curve is numerically
represented by a finite number $m$ of sample points; thereby
discretizing the geometry of curves to the geometry of
${\mathrm{l\hskip-1.5ptR}}^{nm}$.
Therefore (1.3) takes the form
$$\int_{0}^{1}\|\partial_{v}\gamma(v)\|_{H^{0}}^{2}\,dv=\int_{0}^{1}\int_{S^{1}}|\partial_{v}\gamma(\theta,v)|^{2}\,d\theta\,dv$$
(2.2)
which is defined when $\gamma\in H^{1}([0,1]\to M)$.
The energy of a homotopy is then
$$E_{s}(C)=\int_{I}|\partial_{v}C|^{2}$$
Definition 2.2
We define the space
$$H^{1,0}\big(I\to{\mathrm{l\hskip-1.5ptR}}^{n}\big)=H^{1}\Big([0,1]\to H^{0}(S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n})\Big)$$
and define the norm on
the above space $H^{1,0}$ to be
$$\int_{0}^{1}\int_{S^{1}}|\gamma(\theta,v)|^{2}+|\partial_{v}\gamma(\theta,v)|^{2}\,d\theta\,dv$$
(2.3)
then
Proposition 2.3
$H^{1,0}$ is the space of all finite energy homotopies,
and the norm (2.3) above is equivalent
to the energy (2.2) on families $\gamma$ with
fixed end points (by prop. 3.14).
$E_{s}(C)$ is strongly continuous in $H^{1,0}$, convex,
and coercive (proved using 3.14).
So $E_{s}$ has a very simple unique minimal geodesic,
namely, the pointwise linear interpolation
$$C^{*}(\theta,v)=(1-v)c_{0}(\theta)+vc_{1}(\theta)$$
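As a hedged numerical illustration (the two curves, the sample counts, and the "detour" path below are our own choices, not taken from the text), one can discretize curves as in Remark 2.1 and check that the pointwise linear interpolation has exactly the energy $\|c_{1}-c_{0}\|_{H^{0}}^{2}$, while a path making a detour costs more:

```python
import numpy as np

# Discretize S^1 by m sample points (Remark 2.1: curves become points of R^{nm}).
m = 200
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
dtheta = 2.0 * np.pi / m

# Two illustrative planar curves: a unit circle and a shifted ellipse.
c0 = np.stack([np.cos(theta), np.sin(theta)], axis=1)
c1 = np.stack([2.0 * np.cos(theta) + 1.0, np.sin(theta)], axis=1)

def energy_H0(C):
    """Discrete E_s(C) = int_0^1 int_{S^1} |d_v C|^2 dtheta dv, C of shape (k, m, n)."""
    dv = 1.0 / (C.shape[0] - 1)
    dC = np.diff(C, axis=0) / dv          # finite-difference approximation of d_v C
    return float(np.sum(dC ** 2) * dtheta * dv)

v = np.linspace(0.0, 1.0, 50)[:, None, None]
C_lin = (1.0 - v) * c0 + v * c1           # the pointwise linear interpolation C*
C_detour = C_lin + np.sin(np.pi * v) * np.ones_like(c0)  # a path making a detour

E_lin = energy_H0(C_lin)
E_exact = float(np.sum((c1 - c0) ** 2) * dtheta)  # ||c1 - c0||^2 in the discrete H^0
E_detour = energy_H0(C_detour)
```

For the linear path, $\partial_{v}\gamma=c_{1}-c_{0}$ is constant in $v$, so the energy reduces to the squared $H^{0}$ distance; by convexity any competing path costs at least as much.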
•
As a second choice, we may define in $H$ the scalar product
$$\langle h,k\rangle=\int_{0}^{l}\langle h(s),k(s)\rangle\,ds=\int_{S^{1}}\langle h(\theta),k(\theta)\rangle\,|\dot{c}(\theta)|\,d\theta$$
(2.4)
where $l$ is the length of $c$, $ds$ is the arc element, and
$\dot{c}=\partial_{\theta}c$. (There is an abuse of
notation in (2.4): intuitively, we would like to define $h$ and $k$ on the immersed curve
$c$; to this end, we define $h(s)$ and $k(s)$ on the arc-parameterization
$c(s):[0,l]\to{\mathrm{l\hskip-1.5ptR}}^{n}$, with $l=\operatorname{len}(c)$, and pull them back to
$c(\theta):S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n}$, writing $h(\theta),k(\theta)$
instead of $h(s(\theta)),k(s(\theta))$.)
This scalar product does not depend on the parameterization of the curve $c$;
but the resulting metric is still not invariant with respect to
reparameterization of homotopies
(see 3 in 3).
By projecting this metric onto $B_{i,f}$ and lifting it back to
$M$ (using 3.10),
we then devise an appropriate geometric metric, as follows.
2.1.2 Geometric form of $H^{0}$
We then propose the scalar product
$$\langle h,k\rangle=\int_{0}^{l}\langle\pi_{N}h(s),\pi_{N}k(s)\rangle ds$$
(2.5)
where $\pi_{N}$ is the projection to the normal space $N$ to the curve
(see 3.2).
From now on, when we speak of the $H^{0}$ metric, we will mean
this last definition.
Note that we may equivalently define the scalar product as in
(2.4), and only accept in $T_{c}M$ deformations
that are orthogonal to the tangent of the curve. This would
be akin to working in the quotient manifold $B_{i,f}$; or it may
be viewed as a sub-Riemannian geometry on $M$ itself.
This geometric metric generates the energy
$$E^{N}(C)\doteq\int_{I}|\pi_{N}\partial_{v}C|^{2}\,|\dot{C}|\,d\theta\,dv$$
(2.6)
which is invariant under reparameterizations of homotopies
(see 3 on page 3).
By proposition 3.10,
the distance induced by this metric is equal to
the distance induced by the previous one (2.4).
Unfortunately, it has been noted in [Mum]
that the metric $H^{0}$ does not define a distance between curves, since
$$\inf E^{N}(C)=0$$
(see C.1 and C.3).
We will study this metric further in section 4.
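A small numerical sanity check of (2.6) may help here; the example (a radial homotopy between concentric circles, with our own discretization) is an assumption of ours, not from the text. For $C(\theta,v)=(1+v)(\cos\theta,\sin\theta)$, the deformation $\partial_{v}C$ is everywhere normal, $|\pi_{N}\partial_{v}C|=1$ and $|\dot{C}|=1+v$, so $E^{N}(C)=\int_{0}^{1}2\pi(1+v)\,dv=3\pi$:

```python
import numpy as np

# Radial homotopy C(theta, v) = (1+v)(cos theta, sin theta) between circles
# of radii 1 and 2; the energy E^N of (2.6) should come out close to 3*pi.
m, k = 400, 400
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
v = np.linspace(0.0, 1.0, k)
dtheta, dv = 2.0 * np.pi / m, 1.0 / (k - 1)

E_N = 0.0
for vi in v[:-1]:
    c_theta = (1.0 + vi) * np.stack([-np.sin(theta), np.cos(theta)], axis=1)  # C_theta
    c_v = np.stack([np.cos(theta), np.sin(theta)], axis=1)                    # d_v C
    T = c_theta / np.linalg.norm(c_theta, axis=1, keepdims=True)              # unit tangent
    pi_N = c_v - np.sum(c_v * T, axis=1, keepdims=True) * T                   # normal part
    speed = np.linalg.norm(c_theta, axis=1)                                   # |C_theta|
    E_N += float(np.sum(np.sum(pi_N ** 2, axis=1) * speed)) * dtheta * dv
```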
There is a good reason to focus our attention
on the properties of this metric (2.5) for curves. Namely,
this is precisely the metric that is implicitly assumed in formulating
gradient flows of contour based energy functionals in the vast
literature on shape optimization. Consider for example the well known
geometric heat flow $\partial_{t}C=\partial_{ss}C$
(we will sometimes write $C_{v}$ for $\partial_{v}C$, and so on,
to simplify the derivations),
in which a curve evolves along the inward
normal with speed equal to its signed curvature. This flow is widely
considered to be the gradient descent of the Euclidean arclength
functional. Its smoothing properties have led to its widespread use
within the fields of computer vision and image processing. The only
sense, however, in which this is truly a gradient flow is with respect
to the $H^{0}$ metric as we see in the following calculation
(where $L(t)$ denotes the time varying arclength of an evolving curve $C(u,t)$
with parameter $u\in[0,1]$).
$$L(t)=\int_{C}ds=\int_{0}^{1}|C_{u}|\,du$$
$$L^{\prime}(t)=\int_{0}^{1}\frac{C_{ut}\cdot C_{u}}{|C_{u}|}\,du=\int_{0}^{1}C_{tu}\cdot C_{s}\,du=-\int_{0}^{1}C_{t}\cdot C_{su}\,du=-\int_{C}C_{t}\cdot C_{ss}\,ds=-\big\langle C_{t},C_{ss}\big\rangle_{H^{0}}$$
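This identity can be sanity-checked on the one family where the flow is explicit, the shrinking circle (a numerical example of our own, not from the text): a circle of radius $r(t)=\sqrt{r_{0}^{2}-2t}$ solves $C_{t}=C_{ss}$, and $-\langle C_{t},C_{ss}\rangle_{H^{0}}=-\int_{C}|C_{ss}|^{2}\,ds=-(1/r^{2})(2\pi r)=-2\pi/r$, which must match $L'(t)$:

```python
import numpy as np

# Circle solution of the geometric heat flow: r(t) = sqrt(r0^2 - 2 t),
# arclength L(t) = 2 pi r(t).  Check L'(t) = -<C_t, C_ss>_{H^0} = -2 pi / r(t).
r0, t0, dt = 2.0, 0.5, 1e-6
r = lambda t: np.sqrt(r0 ** 2 - 2.0 * t)
L = lambda t: 2.0 * np.pi * r(t)

dL_numeric = (L(t0 + dt) - L(t0 - dt)) / (2.0 * dt)  # finite-difference L'(t0)
dL_formula = -2.0 * np.pi / r(t0)                    # -<C_t, C_ss>_{H^0}
```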
If we were to change the metric then the inner-product shown
above would no longer correspond to the inner-product associated to
the metric. As a consequence, the above flow could no longer be
considered as the gradient flow with respect to the new metric.
In other words, the gradient flow would be different. We will consider
this consequence at greater length in §5.
2.1.3 Michor-Mumford
To overcome the pathologies of the $H^{0}$ metric,
Michor and Mumford [MM] propose the metric
$$G^{A}_{c}(h,k)=\int_{0}^{1}(1+A|\kappa_{c}|^{2})\,\langle\pi_{N}h(u),\pi_{N}k(u)\rangle\,|\dot{c}(u)|\,du$$
(2.7)
on planar curves,
where $\kappa_{c}(u)$ is the curvature of $c(u)$, and $A>0$ is fixed.
This may be generalized to the energy
$$E^{A}(C)\doteq\int_{0}^{1}\int_{S^{1}}|\pi_{N}\partial_{v}C|^{2}\,(1+A|H|^{2})\,|\dot{C}|\,d\theta\,dv=E^{N}(C)+AJ(C)$$
(2.8)
on space homotopies $C(u,v):[0,1]\times[0,1]\to{\mathrm{l\hskip-1.5ptR}}^{n}$.
($J(C)$ is defined in (4.1);
$H$ is the mean curvature, defined in 3.3.
Note that both $E^{N}(C)$ and $J(C)$ are invariant with respect to
reparameterization, in the sense defined in 3.)
This approach is discussed in §4.4.1.
2.1.4 Srivastava et al.
We consider here planar curves of length $2\pi$ and parameterized by arclength,
using the notation $\xi(s):S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{2}$. Such curves are
Lipschitz continuous.
If $\xi\in C^{1}$, then $|\dot{\xi}|=1$, so we may lift the equality
$$\dot{\xi}(s)=\big(\cos(\theta(s)),\ \sin(\theta(s))\big)$$
(2.9)
to obtain a continuous function $\theta:{\mathrm{l\hskip-1.5ptR}}\to{\mathrm{l\hskip-1.5ptR}}$.
This continuous lifting $\theta$ is unique up to addition of
a constant $2\pi h$, $h\in{\mathbb{Z}}$; and
$\theta(s+2\pi)-\theta(s)=2\pi i$, where $i\in{\mathbb{Z}}$ is the
winding number, or rotation index of $\xi$.
The addition of a generic constant to $\theta$ is equivalent to
a rotation of $\xi$.
We then understand that we may represent arc-parameterized curves $\xi(s)$,
up to translation, scaling, and rotations, by
considering a suitable class of liftings $\theta(s)$ for $s\in[0,2\pi]$.
Two spaces are defined in [KSMJ03];
we present the case of
“Shape Representation using Direction Functions”,
where the space of (pre)-shapes is
defined as the closed subset $M$ of $L^{2}=L^{2}([0,2\pi])$,
$$M=\left\{\theta\in L^{2}([0,2\pi])\ \big|\ \phi(\theta)=(2\pi^{2},0,0)\right\}$$
where $\phi:L^{2}\to{\mathrm{l\hskip-1.5ptR}}^{3}$ is defined by
$$\phi_{1}(\theta)=\int_{0}^{2\pi}\theta(s)\,ds\ ,\qquad\phi_{2}(\theta)=\int_{0}^{2\pi}\cos\theta(s)\,ds\ ,\qquad\phi_{3}(\theta)=\int_{0}^{2\pi}\sin\theta(s)\,ds$$
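As a quick consistency check (our own example, not from [KSMJ03]): the direction function of a unit circle traversed anticlockwise can be taken to be $\theta(s)=s$, since then $\dot{\xi}(s)=(\cos s,\sin s)$; numerically, $\phi(\theta)=(2\pi^{2},0,0)$, so this $\theta$ lies in $M$:

```python
import numpy as np

# Direction function theta(s) = s of a unit circle; verify phi(theta) = (2 pi^2, 0, 0).
n = 100000
s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
ds = 2.0 * np.pi / n
theta = s

phi1 = float(np.sum(theta) * ds)          # int_0^{2pi} theta ds       -> 2 pi^2
phi2 = float(np.sum(np.cos(theta)) * ds)  # int_0^{2pi} cos(theta) ds  -> 0
phi3 = float(np.sum(np.sin(theta)) * ds)  # int_0^{2pi} sin(theta) ds  -> 0
```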
Define $Z$ as the set of representations in $M$ of flat curves;
then $Z$ is closed in $L^{2}$, and $M\setminus Z$ is a manifold.
(The details and the proofs of 2.4 and 2.5 are in appendix §B.)
Proposition 2.4
By the implicit function theorem, $M\setminus Z$ is a smooth immersed
submanifold of codimension 3 in $L^{2}$.
Note that $M\setminus Z$ contains the (representation of) all smooth
immersed curves.
The manifold $M\setminus Z$ inherits a Riemannian structure, induced
by the scalar product of $L^{2}$; geodesics may be prolonged smoothly as
long as they do not meet $Z$. Even if $M$ may not be a manifold at
$Z$, we may define the geodesic distance $d^{g}(x,y)$ in $M$ as the
infimum of the length of Lipschitz paths $\gamma:[0,1]\to L^{2}$ whose
image is contained in $M$. (It seems that $M$ is Lipschitz-arc-connected, so
$d^{g}(x,y)<\infty$; but we did not carry out a detailed proof.) Since
$d^{g}(x,y)\geq\|x-y\|_{L^{2}}$ and $M$ is closed in ${L^{2}}$, the
metric space $(M,d^{g})$ is complete.
We don’t know if $(M,d^{g})$ admits minimal geodesics, or if it
falls in the category of examples such as 1.9.
For any $\theta\in M$, it is possible to reconstruct the
curve by integrating
$$\xi(s)=\int_{0}^{s}\big(\cos(\theta(t)),\sin(\theta(t))\big)\,dt$$
(2.10)
This means that $\theta$ identifies a unique curve (of length $2\pi$,
and arc-parameterized) up to
rotations, translations, and the choice of the base point
$\xi(0)$; for this last reason, $M$ is called in
[KSMJ03] a preshape space.
The shape space $S$ is obtained by quotienting $M$
with the relation $\theta\sim\hat{\theta}$ iff
$\theta(s)=\hat{\theta}(s-a)+b$, $a,b\in{\mathrm{l\hskip-1.5ptR}}$.
We do not discuss this quotient here.
We may represent any Lipschitz closed arc-parameterized curve $\xi$
using a measurable $\theta\in M$:
let $\mbox{arc}:S^{1}\to[0,2\pi)$ be the inverse of
$\theta\mapsto(\cos(\theta),\sin(\theta))$; $\mbox{arc}$ is a Borel function;
then $(\mbox{arc}\circ\dot{\xi})(s)+a\in M$, for some $a\in{\mathrm{l\hskip-1.5ptR}}$.
We remark that the
measurable representation is never unique: for any measurable
$A,B\subset[0,2\pi]$ with $|A|=|B|$,
$\theta(s)+2\pi({\mathbf{1}}_{A}(s)-{\mathbf{1}}_{B}(s))$ will
represent $\xi$ in $M$ as well. This implies that the family
$A_{\xi}$ of the $\theta$ representing the same curve $\xi$
is infinite. It may then be advisable to define a
quotient distance as follows:
$$\hat{d}(\xi,\xi^{\prime})\doteq\inf_{\theta\in A_{\xi},\,\theta^{\prime}\in A_{\xi^{\prime}}}d(\theta,\theta^{\prime})$$
(2.11)
where $d(\theta,\theta^{\prime})=\|\theta-\theta^{\prime}\|_{L^{2}}$, or alternatively
$d=d^{g}$ is the geodesic distance on $M$.
If $\xi\in C^{1}$, we have a unique continuous representation $\theta\in M$.
(Indeed, the continuous lifting is unique up to addition of a
constant to $\theta(s)$, which is equivalent to
a rotation of $\xi$; and the constant is fixed by the condition
$\phi_{1}(\theta)=2\pi^{2}$.) But note that,
even if $\xi,\xi^{\prime}\in C^{1}$, the infimum (2.11)
may not be given by the continuous representations $\theta,\theta^{\prime}$
of $\xi,\xi^{\prime}$. Moreover there are curves $\xi$ that do not
admit a continuous representation $\theta$. As a consequence, it will not be
possible to define the rotation index of such curves $\xi$; indeed
we prove this result:
Proposition 2.5
For any $h\in{\mathbb{Z}}$, the set of closed smooth curves
$\xi$ with rotation index $h$, when represented in $M$
using (2.9), is dense in $M\setminus Z$.
2.1.5 Higher order Riemannian geometry
If we want a higher order model,
we may define a metric mimicking the definition of the Hilbert space
$H^{1}$, by setting
$$\langle h,k\rangle_{H^{1}}=\langle h,k\rangle_{H^{0}}+\langle\dot{h},\dot{k}\rangle_{H^{0}}$$
(2.12)
We have again many different choices, since
•
we may use in the RHS of (2.12)
the parametric $H^{0}$ scalar product
(2.1), in which case the scalar product
$\langle h,k\rangle_{H^{1}}$ is the standard scalar product
of $H^{1}(S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n})$; then
homotopies are in the space
$$H^{1}\big([0,1]\to H^{1}(S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n})\big)$$
with norm
$$\int_{I}|\gamma|^{2}+\langle\partial_{v}\gamma,\partial_{v}\gamma\rangle+\langle\partial_{v}\dot{\gamma},\partial_{v}\dot{\gamma}\rangle$$
•
we may use
in the RHS of (2.12) the scalar product (2.4)
or (2.5). Unfortunately none of these choices
is invariant with respect to reparameterization of homotopies.
We don’t know of many applications of this idea;
the only one may be [You98].
2.2 Finsler geometries of curves
To conclude, we present two examples of Finsler geometries of curves
that have been used (sometimes covertly) in the literature.
2.2.1 $L^{\infty}$ and Hausdorff metric
If we wish to define a norm on
$T_{c}M$ that is modeled on the norm of
the Banach space $L^{\infty}(S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n})$, we define
$$F^{\infty}(c,h)=\|\pi_{N}h\|_{L^{\infty}}=\operatorname*{ess\,sup}_{\theta}|\pi_{N}h(\theta)|$$
This Finsler metric is geometric.
The length of a homotopy is then
$$\operatorname{Len}(C)=\int_{0}^{1}\operatorname*{ess\,sup}_{\theta}|\pi_{N}\partial_{v}C(\theta,v)|\,dv$$
Hausdorff metric
We recall the definition of the Hausdorff metric
$$d_{H}(A,B)\doteq\max\left\{\max_{x\in A}d(x,B),\,\max_{y\in B}d(A,y)\right\}$$
where $A,B\subset{\mathrm{l\hskip-1.5ptR}}^{n}$ are closed, and
$$d(x,A)\doteq\min_{y\in A}|x-y|$$
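For finite point sets, the two nested extrema can be computed directly; the following sketch (the sample counts and test sets are our own choices) evaluates $d_{H}$ for two sampled concentric circles, whose Hausdorff distance is 1:

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between finite point sets A, B (arrays of shape (k, n))."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # all pairwise |x - y|
    d_AB = D.min(axis=1).max()   # max_{x in A} d(x, B)
    d_BA = D.min(axis=0).max()   # max_{y in B} d(A, y)
    return float(max(d_AB, d_BA))

t = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
A = np.stack([np.cos(t), np.sin(t)], axis=1)   # unit circle
B = 2.0 * A                                    # circle of radius 2
d = hausdorff(A, B)
```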
Let $\Xi$ be the
collection of all compact subsets of ${\mathrm{l\hskip-1.5ptR}}^{n}$.
We define the length of any continuous path
$\xi:[0,1]\to\Xi$
by using the total variation, as follows:
$$\operatorname{len}^{H}\xi\doteq\sup_{T}\sum_{i=1}^{j}d_{H}\big(\xi(t_{i-1}),\xi(t_{i})\big)$$
(2.13)
where the sup is carried out over all finite subsets
$T=\{t_{0},\cdots,t_{j}\}$ of $[0,1]$ with $t_{0}\leq\cdots\leq t_{j}$.
The metric space $(\Xi,d_{H})$ is complete and path-metric
(path-metric meaning: $d_{H}(A,B)=\inf\operatorname{len}^{H}\gamma$, where the
infimum is computed in the class of Lipschitz curves
$\gamma:[0,1]\to\Xi$ connecting $A$ to $B$),
and it is possible to connect any two $A,B\in\Xi$
by a minimal geodesic of length $d_{H}(A,B)$. (To prove this, note that $(\Xi,d_{H})$
is locally compact and complete, and apply A.2.)
Let $\Xi_{c}$ be the class of compact connected $A\subset{\mathrm{l\hskip-1.5ptR}}^{n}$;
$\Xi_{c}$ is a closed subset of $(\Xi,d_{H})$; $\Xi_{c}$ is Lipschitz-path-connected
(that is, any $A,B\in\Xi_{c}$ can be connected
by a Lipschitz arc $\gamma:[0,1]\to\Xi_{c}$);
for all the above reasons, it is possible to connect any two $A,B\in\Xi_{c}$
by a minimal geodesic moving in $\Xi_{c}$;
but note that $\Xi_{c}$ is not geodesically convex in $\Xi$. (There exist two points
$A,B\in\Xi_{c}$ and a minimal geodesic $\xi$
connecting $A$ to $B$ in the metric space $(\Xi,d_{H})$, such that
the image of $\xi$ is not contained inside $\Xi_{c}$.)
We don’t know if $(\Xi_{c},d_{H})$ is path-metric.
Projection of $F^{\infty}$ into Hausdorff metric space
When we associate to a continuous curve $c\in M$ its image
$Im(c)\subset{\mathrm{l\hskip-1.5ptR}}^{n}$, we are actually defining
a natural projection
$$Im:M\to\Xi_{c}$$
This projection transforms a path $\gamma$ in $M$
into a path $\xi:[0,1]\to\Xi_{c}$; if
the homotopy $C(\theta,v)=\gamma(v)(\theta)$
is continuous, then $\xi$ is continuous.
Moreover the projection of all embedded curves $c$
is dense in $\Xi_{c}$.
It is possible to prove that
$$\operatorname{Len}(\gamma)=\operatorname{len}^{H}(\xi)$$
for a large class of paths; the
distance induced by the metric $F^{\infty}$ then coincides with
$d_{H}(Im(c_{0}),Im(c_{1}))$.
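A numerical sketch of this coincidence, on an example of our own choosing: for the radial homotopy $C(\theta,v)=(1+v)(\cos\theta,\sin\theta)$ between circles of radii 1 and 2, $|\pi_{N}\partial_{v}C|\equiv 1$, so $\operatorname{Len}(C)=1$, which is exactly the Hausdorff distance between the two circles:

```python
import numpy as np

# Len(C) = int_0^1 sup_theta |pi_N d_v C| dv for the radial homotopy
# C(theta, v) = (1+v)(cos theta, sin theta); expected value: 1 = d_H of the circles.
m, k = 200, 200
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
v = np.linspace(0.0, 1.0, k)
dv = 1.0 / (k - 1)

Len = 0.0
for vi in v[:-1]:
    c_theta = (1.0 + vi) * np.stack([-np.sin(theta), np.cos(theta)], axis=1)
    T = c_theta / np.linalg.norm(c_theta, axis=1, keepdims=True)  # unit tangent
    c_v = np.stack([np.cos(theta), np.sin(theta)], axis=1)        # d_v C
    pi_N = c_v - np.sum(c_v * T, axis=1, keepdims=True) * T       # normal part
    Len += float(np.max(np.linalg.norm(pi_N, axis=1))) * dv       # sup over theta
```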
This is quite useful, since in
the metric space generated by the metric $F^{\infty}$, it is possible
to find two curves that cannot be connected by a geodesic
(this is due to topological obstructions); whereas, the minimal geodesic will
exist in the space $(\Xi_{c},d_{H})$.
For this reason, [CFK03] proposed an approximation method
to compute $\operatorname{len}^{H}(\xi)$ by means of a family of energies
defined using a smooth integrand (the approximation is mainly
based on the property $\|f\|_{L^{p}}\to_{p}\|f\|_{L^{\infty}}$,
valid for any measurable function $f$ defined on a bounded domain),
and subsequently to find approximations of geodesics.
Unfortunately, the geometry of $(\Xi,d_{H})$ is highly non regular:
for example, it is possible to find two compact sets such that
there are uncountably many minimal geodesics connecting them.
(This is an unpublished result due to Alessandro Duci.)
2.2.2 $L^{1}$ and Plateau problem
If we wish to define a geometric norm on
$T_{c}M$ that is modeled on the norm of
the Banach space $L^{1}(S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n})$, we may define
the metric
$$F^{1}(c,h)=\|\pi_{N}h\|_{L^{1}}=\int|\pi_{N}h(\theta)||\dot{c}(\theta)|d\theta$$
The length of a homotopy is then
$$\operatorname{Len}(C)=\int_{I}|\pi_{N}\partial_{v}C(\theta,v)|\,|\dot{C}(\theta,v)|\,d\theta\,dv$$
which coincides with
$$\operatorname{Len}(C)=\int_{I}|\partial_{v}C(\theta,v)\times\partial_{\theta}C(\theta,v)|\,d\theta\,dv$$
This last is easily recognizable as the surface area of the homotopy
(up to multiplicity); the problem of finding a minimal geodesic
connecting $c_{0}$ and $c_{1}$
in the $F^{1}$ metric may be reduced to
the Plateau problem of finding a surface
which is an immersion of $I=S^{1}\times[0,1]$ and whose
boundary is fixed to the curves $c_{0}$ and $c_{1}$.
The Plateau problem is a wide and well studied subject
upon which Fomenko expounds in the monograph [Fom90].
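The identification of $\operatorname{Len}(C)$ with a swept area can be checked on a simple example of ours: the radial homotopy between concentric circles of radii 1 and 2 sweeps the annulus $1\leq|x|\leq 2$, of area $3\pi$:

```python
import numpy as np

# Len(C) = int_I |d_v C x d_theta C| dtheta dv for C(theta, v) = (1+v)(cos, sin);
# the swept region is the annulus 1 <= |x| <= 2, so we expect 3*pi.
m, k = 400, 400
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
v = np.linspace(0.0, 1.0, k)
dtheta, dv = 2.0 * np.pi / m, 1.0 / (k - 1)

area = 0.0
for vi in v[:-1]:
    c_v = np.stack([np.cos(theta), np.sin(theta)], axis=1)                  # d_v C
    c_th = (1.0 + vi) * np.stack([-np.sin(theta), np.cos(theta)], axis=1)   # d_theta C
    cross = np.abs(c_v[:, 0] * c_th[:, 1] - c_v[:, 1] * c_th[:, 0])         # |2D cross product|
    area += float(np.sum(cross)) * dtheta * dv
```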
3 Basics
3.1 Relaxation of functionals
To prove that
a Riemannian manifold admits minimal geodesics, we
will study the energy $E(\gamma)$ by means of methods
in Calculus of Variations; we review some basic ideas.
Let $X$ be a topological space, endowed with a topology $\tau$,
and ${\Omega}\subset X$, and $f:{\Omega}\to{\mathrm{l\hskip-1.5ptR}}$.
The function $f$ is lower semicontinuous if, for all $x\in{\Omega}$,
$$\liminf_{y\to x}f(y)\geq f(x)$$
Note that, if the topology $\tau$ is not defined by a metric,
then we can introduce a different condition:
$f$ is sequentially lower semicontinuous if, for all $x\in{\Omega}$
and all sequences $x_{n}\to x$,
$$\liminf_{n}f(x_{n})\geq f(x)$$
We define the sequential relaxation $\Gamma f$ of $f$ on ${\Omega}$
to be the function
$$\Gamma f:\overline{\Omega}\to{\mathrm{l\hskip-1.5ptR}}$$
that is the supremum of all $f^{\prime}:\overline{\Omega}\to{\mathrm{l\hskip-1.5ptR}}$
that are sequentially lower semicontinuous on
$\overline{\Omega}$
and satisfy $f^{\prime}\leq f$ in ${\Omega}$. We have that, for all $x\in\overline{\Omega}$,
$$\Gamma f(x)=\min_{(x_{n}),\,x_{n}\to x}\{\liminf_{n}f(x_{n})\}$$
where the minimum is taken over all sequences $x_{n}$ converging to $x$,
with $x_{n}\in{\Omega}$.
$f$ is sequentially lower semicontinuous iff $\Gamma f|_{{\Omega}}=f$.
If $X$ is a metric space, then sequentially lower semicontinuous
functions are lower semicontinuous, and vice versa; so we may drop
the adjective “sequentially”.
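A one-dimensional toy example (ours, not from the text) may make the relaxation concrete: take $f(x)=x^{2}$ for $x\neq 0$ but $f(0)=1$; then $f$ is not lower semicontinuous at $0$, and $\Gamma f(0)=0$ is the smallest $\liminf$ along sequences $x_{n}\to 0$. A crude numerical approximation:

```python
# f(x) = x^2 away from 0, but f(0) = 1: not lower semicontinuous at 0.
def f(x):
    return 1.0 if x == 0.0 else x * x

def gamma_f(x, eps=1e-6):
    """Rough approximation of the relaxation: compare f(x) with the infimum of f
    on a small punctured neighborhood of x (standing in for sequences x_n -> x)."""
    near = min(f(x + t) for t in (-eps, eps, -eps / 2.0, eps / 2.0))
    return min(f(x), near)
```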
Consider again a function $f:X\to{\mathrm{l\hskip-1.5ptR}}$:
it is called coercive in $X$ if, for all $M\in{\mathrm{l\hskip-1.5ptR}}$, the sublevel set
$$\{x\in X:f(x)\leq M\}$$
is contained in a compact set.
Proposition 3.1
If $f$ is coercive in $X$ and sequentially lower semicontinuous,
then it admits a minimum on any closed set; that is,
for all closed $C\subset X$ there exists $x\in C$ s.t.
$$f(x)=\min_{y\in C}f(y)$$
This result is one of the pillars of the modern Calculus of
Variations. We will see that unfortunately this result may not be applied
directly to the problem at hand (see 4.1).
3.2 Curves and notations
Consider in the following a curve $c(\theta)$
defined as $c:S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n}$.
We write $\dot{c}$ for $\partial_{\theta}c$.
Definition 3.2
Suppose that $c$ is $C^{1}$; or suppose that $c\in W^{1,1}_{\text{loc}}$,
that is, $c$ admits a weak derivative $\dot{c}$.
At all points where $\dot{c}(\theta)\neq 0$, we define the
tangent vector
$$T(\theta)=\frac{\dot{c}(\theta)}{|\dot{c}(\theta)|}$$
At the points where $\dot{c}=0$ we define $T=0$.
Let $v\in{\mathrm{l\hskip-1.5ptR}}^{n}$. We define the projection onto the normal
space $N=T^{\perp}$
$$\pi_{N}v=v-\langle v,T\rangle T$$
and on the tangent
$$\pi_{T}v=\langle v,T\rangle T$$
so $\pi_{N}v+\pi_{T}v=v$ and $|\pi_{N}v|^{2}+|\pi_{T}v|^{2}=|v|^{2}$
(which implies $|\pi_{N}v|^{2}=|v|^{2}-\langle v,T\rangle^{2}$).
If $c$ admits the weak derivative $\partial_{\theta}c$,
then $T$ is measurable and bounded, so $T\in L^{\infty}$
and $\pi_{T},\pi_{N}\in L^{\infty}(S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n\times n})$.
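A minimal numerical sketch of Definition 3.2 (on a hypothetical ellipse of our choosing), verifying the identity $|\pi_{N}v|^{2}+|\pi_{T}v|^{2}=|v|^{2}$ at each sample point:

```python
import numpy as np

# Sampled ellipse c(theta) = (2 cos theta, sin theta) and its derivative.
m = 300
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
c_dot = np.stack([-2.0 * np.sin(theta), np.cos(theta)], axis=1)   # dc/dtheta (never 0)
T = c_dot / np.linalg.norm(c_dot, axis=1, keepdims=True)          # unit tangent

v = np.array([0.3, -1.2])                         # a fixed deformation vector
pi_T = np.sum(v * T, axis=1, keepdims=True) * T   # <v, T> T
pi_N = v - pi_T                                   # v - <v, T> T

err = float(np.max(np.abs(np.sum(pi_N ** 2 + pi_T ** 2, axis=1) - np.dot(v, v))))
```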
A curve $c\in C^{1}(S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n})$
is immersed when $|\dot{c}|>0$ at all points.
In this case, we can always define the
arc parameter $s$ so that
$$ds=|\dot{c}(\theta)|\,d\theta$$
and the derivative with respect to the arc parameter
$$\partial_{s}=\frac{1}{|\dot{c}|}\,\partial_{\theta}$$
We will also consider curves $c\in W^{1,1}(S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{n})$
such that $|\dot{c}|>0$ at almost all points.
There are two different definitions of curvature of an immersed curve:
the mean curvature $H$, and the signed curvature $\kappa$, which is
defined when $c$ is valued in ${\mathrm{l\hskip-1.5ptR}}^{2}$.
$H$ and $\kappa$ are extrinsic curvatures
(see also: Eric W. Weisstein, “Extrinsic Curvature”, from MathWorld–A
Wolfram Web Resource,
http://mathworld.wolfram.com/ExtrinsicCurvature.html):
they are properties of the embedding of $c$ into ${\mathrm{l\hskip-1.5ptR}}^{n}$.
Definition 3.3 (H)
If $c$ is $C^{2}$ regular and immersed,
we can define the mean curvature $H$ of
$c$ as
$$H=\partial_{s}T=\frac{1}{|\dot{c}|}\,\partial_{\theta}T$$
In general, we will say that a curve $c\in W^{1,1}_{\text{loc}}(S^{1})$ admits
mean curvature in the measure sense if there exists a
vector valued measure $H$ on $S^{1}$
such that
$$\int_{I}T(s)\,\partial_{s}\phi(s)\,ds=-\int_{I}\phi(s)\,H(ds)\qquad\forall\phi\in C^{\infty}(S^{1})\,,$$
that is,
$$\int_{I}T(\theta)\,\partial_{\theta}\phi(\theta)\,d\theta=-\int_{I}\phi(\theta)\,|\dot{c}|\,H(d\theta)\qquad\forall\phi\in C^{\infty}(S^{1})\,.$$
Note that the two definitions are related: when $c\in C^{2}$,
the measure is $H=\partial_{s}T\,ds$.
See also [Sim83], §7 and §16.
We can then define the projection onto the curvature vector $H$ by
$$\pi_{H}v=\frac{1}{|H|^{2}}\langle v,H\rangle H$$
Definition 3.4 (N)
When the curve $c$ is in ${\mathrm{l\hskip-1.5ptR}}^{2}$ and is immersed,
we can define a unit vector $N$
such that $N\perp T$ and $N$ is $\pi/2$ radians anticlockwise
with respect to $T$. (There is a slight abuse of notation here,
since in the definition $N=T^{\perp}$ in 3.2
we defined $N$ to be a vector space and not a vector.)
In this case, for any vector $V\in{\mathrm{l\hskip-1.5ptR}}^{2}$,
$\pi_{N}V=N\langle N,V\rangle$, and,
$$|\pi_{N}V|=|\langle N,V\rangle|$$
Definition 3.5 ($\kappa$)
If $c$ is in ${\mathrm{l\hskip-1.5ptR}}^{2}$, then we can define a signed
scalar curvature
$$\kappa=\langle H,N\rangle$$
If $H$ is a measure, then $\kappa$ is a real valued measure
defined by
$$\kappa(A)=\sum_{i=1}^{n}\int_{A}N_{i}(\theta)\,H_{i}(d\theta)\,.$$
Note that $|H|=|\kappa|$.
When $c\in C^{2}(S^{1}\to{\mathrm{l\hskip-1.5ptR}}^{2})$ is immersed,
$$\partial_{s}T=\kappa N\quad\text{ and }\quad\partial_{s}N=-\kappa T\,.$$
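These formulas can be checked discretely on a circle of radius $r$ (a numerical example of ours): with the anticlockwise parameterization, $H=-\frac{1}{r}(\cos\theta,\sin\theta)$, $N=-(\cos\theta,\sin\theta)$, and $\kappa=\langle H,N\rangle=1/r$:

```python
import numpy as np

# Circle of radius r: expect signed curvature kappa = <H, N> = 1/r everywhere.
r, m = 3.0, 2000
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
dtheta = 2.0 * np.pi / m

c_dot = r * np.stack([-np.sin(theta), np.cos(theta)], axis=1)  # dc/dtheta
speed = np.linalg.norm(c_dot, axis=1, keepdims=True)           # |dc/dtheta| = r
T = c_dot / speed                                              # unit tangent
# H = (1/|c_dot|) d_theta T (Definition 3.3), via periodic central differences:
dT = (np.roll(T, -1, axis=0) - np.roll(T, 1, axis=0)) / (2.0 * dtheta)
H = dT / speed
N = np.stack([-T[:, 1], T[:, 0]], axis=1)   # T rotated by pi/2 anticlockwise
kappa = np.sum(H * N, axis=1)               # <H, N>
```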
Remark 3.6
When the curve $c$ is in ${\mathrm{l\hskip-1.5ptR}}^{2}$ and is immersed, then
$$\langle H,v\rangle=\kappa\,\langle N,v\rangle\,,\qquad|\langle H,v\rangle|=|H|\,|\pi_{N}v|\,,$$
whereas for immersed curves $c$ in ${\mathrm{l\hskip-1.5ptR}}^{n}$
$$|\langle H,v\rangle|\leq|H||\pi_{N}v|$$
(3.1)
and we do not expect to have equality in general when $n\geq 3$,
since $H$ is only a vector in the $(n-1)$-dimensional
subspace $N=T^{\perp}$.
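For plane curves these identities are easy to check numerically. The sketch below is our own illustration, not part of the text: it takes the ellipse $c(\theta)=(2\cos\theta,\sin\theta)$ as a sample immersed curve, computes $T$, $N$, $H=\partial_{s}T$ and $\kappa=\langle H,N\rangle$ by central finite differences, and verifies that $H=\kappa N$, so that $\langle H,v\rangle=\kappa\langle N,v\rangle$ and equality holds in (3.1) when $n=2$.

```python
import numpy as np

def c(t):
    # A sample immersed plane curve: the ellipse (2 cos t, sin t).
    return np.array([2*np.cos(t), np.sin(t)])

def T(t, eps=1e-6):
    # Unit tangent via a central finite difference.
    d = (c(t + eps) - c(t - eps)) / (2*eps)
    return d / np.linalg.norm(d, axis=0)

theta = np.linspace(0, 2*np.pi, 9)[:-1]
eps = 1e-5
speed = np.linalg.norm((c(theta + eps) - c(theta - eps)) / (2*eps), axis=0)
H = (T(theta + eps) - T(theta - eps)) / (2*eps) / speed   # H = d_s T
N = np.array([-T(theta)[1], T(theta)[0]])                 # T rotated pi/2 anticlockwise
kappa = (H*N).sum(axis=0)                                 # kappa = <H, N>
# In the plane H has no tangential component, so H = kappa N exactly.
```

At $\theta=0$ the signed curvature of this ellipse is $ab/b^{3}=2$, which the finite-difference value reproduces.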
3.2.1 Homotopies
Let $I=S^{1}\times[0,1]$.
We define a homotopy to be a continuous function
$$C:S^{1}\times[0,1]\to\mathbb{R}^{n}\,,\quad(\theta,v)\mapsto C(\theta,v)\,.$$
This homotopy is a path connecting
$c_{0}=C(\cdot,0)$ and $c_{1}=C(\cdot,1)$
in the space $M$ of curves:
indeed any path $\gamma$ in $M$ is associated
to a homotopy $C$ by $C(\theta,v)=\gamma(v)(\theta)$.
We extend the above definitions to homotopies $C$,
isolating any curve $c$ in the homotopy by defining
$c(\theta)=C(\theta,v)$ for the corresponding fixed $v$;
for example, when
$C$ admits the weak derivative $\partial_{\theta}C$, then
$$\pi_{T},\pi_{N}\in L^{\infty}(S^{1}\times[0,1]\to\mathbb{R}^{n\times n})\,.$$
Remark 3.7
We extend the measure $H(\cdot,v)$ on $S^{1}$ (that is, the curvature of
any single curve $C(\cdot,v)$) to a Borel measure $\hat{H}$ on $I$,
by
$$\hat{H}(A)=\int_{0}^{1}H(A_{v})\,dv$$
(3.2)
where $A_{v}=\{\theta:(\theta,v)\in A\}$ is the section of $A$.
$\hat{H}$ can be defined equivalently using the formula
$$\int_{I}T(\theta,v)\,\partial_{\theta}\phi(\theta,v)=-\int_{I}\phi(\theta,v)\,|\dot{C}(\theta,v)|\,\hat{H}(d\theta,dv)\qquad\forall\phi\in C^{\infty}_{c}(I)\,.$$
Moreover we define the length
$$\operatorname{len}(C)(v)=\int_{S^{1}}|\dot{C}(\theta,v)|\,d\theta$$
so that $\operatorname{len}(C):[0,1]\to\mathbb{R}^{+}$ is a function of $v$.
3.3 Preliminary results
Definition 3.8 (Reparameterization)
We define the reparameterization of $C$ to $\widetilde{C}$ as
$$\widetilde{C}(\theta,v)=C(\varphi(\theta,v),v)$$
(3.3)
where $\varphi:S^{1}\times[0,1]\to S^{1}$ is $C^{1}$ regular
with $\partial_{\theta}\varphi\neq 0$.
Then (by direct computation)
$$\partial_{\theta}\widetilde{C}(\theta,v)=\partial_{\theta}C(\tau,v)\,\partial_{\theta}\varphi(\theta,v)$$
(3.4)
so that $T=\widetilde{T}\,\operatorname{sign}(\partial_{\theta}\varphi)$; and
$$\pi_{\widetilde{N}}\partial_{v}\widetilde{C}(\theta,v)=\pi_{N}\partial_{v}C(\tau,v)$$
(3.5)
whereas
$$\pi_{\widetilde{T}}\partial_{v}\widetilde{C}(\theta,v)=\pi_{T}\partial_{v}C(\tau,v)+\dot{C}(\tau,v)\,\partial_{v}\varphi(\theta,v)$$
(3.6)
where $\tau\doteq\varphi(\theta,v)$.
We may choose to reparameterize using the arclength parameter as in the
following proposition.
Proposition 3.9 (Arc parameter)
For any $C^{1}$ regular homotopy $C$ such that all curves $\theta\mapsto C(\theta,v)$
are immersed,
there exists a $\varphi$ as in (3.3) above
such that $|\partial_{\theta}\widetilde{C}|$ is constant in $\theta$ for any $v$
(that is, there exists $l:[0,1]\to\mathbb{R}$ such that
$|\partial_{\theta}\widetilde{C}(\theta,v)|=l(v)$).
{proof}
We just choose $\varphi(\cdot,v)$ to be the inverse of the normalized arclength map
$$\theta\mapsto\frac{2\pi}{\operatorname{len}(C)(v)}\int_{0}^{\theta}|\dot{C}(t,v)|\,dt\,.$$
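The proof translates directly into a numerical procedure. The following sketch is our own illustration (the sampling scheme and the use of `np.interp` to invert the arclength map are assumptions, not part of the text): it resamples one closed curve of a homotopy at constant speed.

```python
import numpy as np

def arc_reparam(c, n=2048):
    # Resample the closed curve c: [0, 2*pi) -> R^2 at constant speed:
    # phi is the inverse of the normalized arclength map
    # theta |-> (2*pi/len(c)) * int_0^theta |c'(t)| dt.
    t = np.linspace(0, 2*np.pi, n, endpoint=False)
    pts = c(t).T                                            # (n, 2) samples
    seg = np.linalg.norm(np.diff(pts, axis=0, append=pts[:1]), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)]) * (2*np.pi / seg.sum())
    phi = np.interp(t, s, np.concatenate([t, [2*np.pi]]))   # phi = s^{-1}
    return c(phi).T

# The ellipse has speed varying between 1 and 2 along the parameter ...
ellipse = lambda t: np.array([2*np.cos(t), np.sin(t)])
new_pts = arc_reparam(ellipse)
# ... but after reparameterization the discrete speeds are nearly constant.
speeds = np.linalg.norm(np.diff(new_pts, axis=0, append=new_pts[:1]), axis=1)
```

The residual variation of `speeds` is only the discretization error of the polygonal arclength.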
On the other hand, we may reparameterize to eliminate $\pi_{T}\partial_{v}C$.
Proposition 3.10
For any $C^{2}$ regular homotopy $C$ such that all curves $\theta\mapsto C(\theta,v)$
are immersed,
there exists a $\varphi$ as in (3.3) above
such that $\pi_{\widetilde{T}}\partial_{v}\widetilde{C}=0$.
{proof}
Both $\pi_{T}\partial_{v}C$ and
$\dot{C}\,\partial_{v}\varphi$ are parallel to $T$:
so, if $\partial_{v}\varphi=-{\langle\partial_{v}C,T\rangle}/{|\dot{C}|}$, then
$\pi_{\widetilde{T}}\partial_{v}\widetilde{C}=0$.
The O.D.E.
$$\begin{cases}\partial_{v}\varphi(\theta,v)=-\dfrac{\langle\partial_{v}C(\tau,v),T(\tau,v)\rangle}{|\dot{C}(\tau,v)|}\,,\quad\tau\doteq\varphi(\theta,v)\\ \varphi(\theta,0)=\theta\,,\quad\theta\in S^{1}\end{cases}$$
(3.7)
can be solved for $v\in[0,1]$, since
$$M(\tau,v)=-\frac{\langle\partial_{v}C(\tau,v),T(\tau,v)\rangle}{|\dot{C}(\tau,v)|}$$
is periodic in $\tau$ and continuous, hence bounded:
$$\max_{S^{1}\times[0,1]}|M|<\infty\,.$$
Defining $\psi=\partial_{\theta}\varphi$ we compute
$$\frac{d}{dv}\psi(\theta,v)=\frac{d}{d\theta}\frac{d}{dv}\varphi=-\frac{d}{d\theta}\frac{\langle\partial_{v}C(\tau,v),T(\tau,v)\rangle}{|\dot{C}(\tau,v)|}=-\psi(\theta,v)\,\frac{d}{d\tau}\left(\frac{\langle\partial_{v}C(\tau,v),T(\tau,v)\rangle}{|\dot{C}(\tau,v)|}\right)$$
so $\psi$ solves the linear O.D.E.
$$\begin{cases}\partial_{v}\psi(\theta,v)=-\psi(\theta,v)\,\dfrac{d}{d\tau}\dfrac{\langle\partial_{v}C(\tau,v),T(\tau,v)\rangle}{|\dot{C}(\tau,v)|}\\ \psi(\theta,0)=1\,,\quad\theta\in S^{1}\end{cases}$$
(3.8)
whose solution is an exponential, and then $\partial_{\theta}\varphi>0$ at all times $v$.
Note that $\varphi$ is not unique: we may change the initial condition in
(3.7) and simply require that
$\varphi(\cdot,0)$ be a diffeomorphism. The above result is stated in section 2.8 in [MM]; there
$\widetilde{C}$ is called a horizontal path, since it
is the canonical parallel lifting of a path in $B_{i,f}$ to a path in $M$.
For example, consider the unit circle translating with unit speed
along the $x$-axis, giving rise to the homotopy
$$C(\theta,v)=(v+\cos\theta,\sin\theta)\,.$$
While this is certainly the simplest way to parameterize the homotopy,
it does not yield a motion purely in the normal direction at each
point along each curve. However, the following reparameterization of
the homotopy yields exactly the same family of translating circles (and
therefore the same homotopy), but such that each point on each circle
along the homotopy flows exclusively along the normal to the corresponding
circle (i.e. in the radial direction):
$$\widetilde{C}(\theta,v)=(\widetilde{x},\widetilde{y})\,,\qquad\widetilde{x}(\theta,v)=v+\frac{(1-e^{2v})+(1+e^{2v})\cos\theta}{(1+e^{2v})+(1-e^{2v})\cos\theta}\,,\qquad\widetilde{y}(\theta,v)=\frac{2e^{v}\sin\theta}{(1+e^{2v})+(1-e^{2v})\cos\theta}\,.$$
In figure 1 we see a comparison between the
trajectories of various points (fixed values of $\theta$)
along the translating circle in the original homotopy $C$ and in
its reparameterization $\widetilde{C}$.
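The claimed properties of $\widetilde{C}$ can be verified numerically. The sketch below is our own check, not part of the text: using central finite differences, it confirms that each curve $\widetilde{C}(\cdot,v)$ is the unit circle centered at $(v,0)$ and that the motion is purely normal, i.e. $\langle\partial_{v}\widetilde{C},\partial_{\theta}\widetilde{C}\rangle=0$.

```python
import numpy as np

def C_tilde(theta, v):
    # Reparameterized homotopy of unit circles translating along the x-axis.
    a, b = 1.0 - np.exp(2*v), 1.0 + np.exp(2*v)
    den = b + a*np.cos(theta)
    return np.array([v + (a + b*np.cos(theta))/den,
                     2*np.exp(v)*np.sin(theta)/den])

theta = np.linspace(0.0, 2*np.pi, 13)[:-1]
v, eps = 0.37, 1e-6            # an arbitrary interior time, finite-difference step
d_theta = (C_tilde(theta + eps, v) - C_tilde(theta - eps, v)) / (2*eps)
d_v     = (C_tilde(theta, v + eps) - C_tilde(theta, v - eps)) / (2*eps)
# Each curve is the unit circle centered at (v, 0):
radial = C_tilde(theta, v) - np.array([[v], [0.0]])
# The motion is purely normal: its tangential component vanishes.
tangential = (d_theta * d_v).sum(axis=0)
```

Behind the check is the closed-form relation $\tan(\phi/2)=e^{v}\tan(\theta/2)$ for the angular position $\phi$ of the point $\widetilde{C}(\theta,v)$ on the circle.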
The above result is surprising, a kind of “black magic”: it seems to
suggest that, while modeling the motion of curves, we can
neglect the tangential part of the motion $\pi_{T}\partial_{v}C$.
Unfortunately, however, this is not the case.
We begin by providing a simple example.
Example 3.11
The curve $C(\cdot,v)$ is translating as in figure 2,
with unit speed:
$$C(\theta,v)\doteq c_{0}(\theta)+ve_{1}\,.$$
After the reparameterization 3.10,
the points in the tracts $BC$ and $AD$
are motionless, whereas the parameterization in $AB$ is stretching to
produce new points for the curve, and in $CD$ it is absorbing
points: consequently, $\partial_{\theta}\widetilde{C}\to\infty$ if the
curvature in $AB$ goes to infinity.
We now consider the mathematics in more detail.
Proposition 3.12
Suppose that $n=2$ for simplicity:
define the curvature $\kappa$ as per definition 3.5.
Suppose in particular that $|\dot{C}|=1$ at all points: then
$$0=\partial_{v}|\dot{C}|^{2}=2\langle\partial_{v}\partial_{\theta}C,T\rangle\,.$$
By differentiating (3.3) we obtain
$$\partial_{\theta}\widetilde{C}(\theta,v)=\partial_{\theta}C(\tau,v)\,\psi(\theta,v)$$
where $\psi=\partial_{\theta}\varphi$
solves (3.8), which we can rewrite
(using $|\dot{C}|=1$) as
$$\partial_{v}\psi=-\psi\,\kappa\,\langle\partial_{v}C,N\rangle$$
(where $C,\kappa,N,T$ are evaluated at $(\varphi(\theta,v),v)$).
Note that
$|\langle\partial_{v}C,N\rangle|=|\pi_{N}\partial_{v}C|$.
This implies that the parameterization of $\widetilde{C}$ will be strongly affected
at points where both $\kappa$ and $\pi_{N}\partial_{v}C$ are large.
So the above teaches us that there are two different approaches to the
reparameterization: we may either use it to control $\dot{C}$ or
$\pi_{T}\partial_{v}C$, but not both.
3.4 Homotopy classes
3.4.1 Class $\mathbb{C}$
Given an energy and two continuous curves $c_{0}$ and $c_{1}$,
we will try to find a homotopy that
minimizes this energy, searching in the class $\mathbb{C}$
defined as follows.
Definition 3.13 (class $\mathbb{C}$)
Let $I=S^{1}\times[0,1]$. Let $\mathbb{C}$ be the class of all homotopies
$C:I\to\mathbb{R}^{n}$, continuous on $I$ and
locally Lipschitz in $S^{1}\times(0,1)$, such that
$c_{0}=C(\cdot,0)$ and $c_{1}=C(\cdot,1)$.
Such a minimum will be a minimal geodesic
connecting $c_{0}$ and $c_{1}$ in the space of curves.
In this class $\mathbb{C}$ we can state
Proposition 3.14 (Poincaré inequality)
There are two constants $a^{\prime},a^{\prime\prime}>0$, depending on $c_{0}$ and $c_{1}$, such that $\forall C\in\mathbb{C}\cap H^{1,0}$,
$$\|C\|_{H^{1,0}(I)}^{2}\leq a^{\prime}+a^{\prime\prime}\int_{I}|\partial_{v}C|^{2}\,.$$
(3.9)
3.4.2 Class ${\mathbb{F}}$ of prescribed-parameter curves
In some of the following sections we will
change our point of view with respect to the above assumption 3.13,
by fixing a measurable positive function $l:[0,1]\to\mathbb{R}^{+}$
and restricting our attention
to the family of homotopies such that $|\partial_{\theta}C(\theta,v)|=l(v)$.
Definition 3.15 (class ${\mathbb{F}}$)
The class ${\mathbb{F}}$ contains all $C\in\mathbb{C}$ such that
$$|\partial_{\theta}C(\theta,v)|=l(v)$$
for all $\theta,v$.
In particular, $\operatorname{len}(C)(v)=2\pi l(v)$.
Note that, by 4.2, if $l$ is not Hölder continuous
then the class ${\mathbb{F}}$ will be too poor to be useful (for example,
it will not contain smooth functions).
Unfortunately the class ${\mathbb{F}}$ is not closed with respect to weak convergence
in $W^{1,p}$.
Proposition 3.16
Suppose that $l$ is bounded and
$|\partial_{\theta}C_{h}(\theta,v)|=l(v)$ for all $h,\theta,v$;
assume that $1\leq p<\infty$ and
$\partial_{\theta}C_{h}\rightharpoonup V$ weakly in $L^{p}(I\to\mathbb{R}^{n})$, or
$p=\infty$ and $\partial_{\theta}C_{h}\rightharpoonup^{*}V$ weakly-* in
$L^{\infty}(I\to\mathbb{R}^{n})$.
Then $|V(\theta,v)|\leq l(v)$.
{proof}
Since $|\partial_{\theta}C_{h}|\leq\sup l$, we may always assume that
$\partial_{\theta}C_{h}\rightharpoonup^{*}V$ weakly-* in
$L^{\infty}(I\to\mathbb{R}^{n})$; the result follows immediately from
theorem 1.1 in [Dac82].
We conclude that
Theorem 3.17
Let $1\leq p\leq\infty$.
The closure of ${\mathbb{F}}\cap W^{1,p}$ with respect to weak convergence in $W^{1,p}$
is contained in
the class $\overline{\mathbb{F}}$ of all $C\in W^{1,p}$ such that $C(\cdot,0)=c_{0}$ and $C(\cdot,1)=c_{1}$ are given and $|\partial_{\theta}C|\leq l(v)$.
{proof}
Suppose $(C_{h})\subset{\mathbb{F}}$ and $C_{h}\rightharpoonup C$: we prove that $C\in\overline{\mathbb{F}}$.
$(C_{h})$ is bounded in $W^{1,p}$: by the
Rellich–Kondrachov theorem (see thm. IV.16 in [Bre86]),
$(C_{h})$ is pre-compact in $L^{p}$: up to a subsequence, we may
assume that $C_{h}\to C$ in $L^{p}$ and that $C_{h}\to C$
at almost all points: then $C(\cdot,0)=c_{0}$ and $C(\cdot,1)=c_{1}$.
The bound $|\partial_{\theta}C|\leq l(v)$ follows from proposition 3.16.
Remark 3.18
We could just as well have defined a class of
homotopies with prescribed lengths, where we fix
a function $\hat{l}$ and include in the class all $C$ such that
$\operatorname{len}(C)(v)=\hat{l}(v)$ for all $v$. However, if the energy $E$ is geometric,
then, using the reparameterization 3.9, we can always
replace any such $C$ with a $\widetilde{C}\in{\mathbb{F}}$, and
$E(C)=E(\widetilde{C})$.
3.4.3 Factoring out reparameterizations
If we only consider curves $c$ such that $|\dot{c}|\equiv 1$, then
(as pointed out in [MM])
the group of reparameterizations is $S^{1}\rtimes{\mathbb{Z}}_{2}$, where
•
the group $S^{1}$ is associated to the change of the initial
point $c(0)$ of the parameterization, that is,
the reparameterization $\tau\in S^{1}$ changes the curve $c$ to
$c(\theta+\tau)$,
•
and ${\mathbb{Z}}_{2}$ represents the operation of changing direction,
that is, $c(\theta)$ becomes $c(-\theta)$.
We extend the above to homotopies satisfying
$|\partial_{\theta}C(\theta,v)|=l(v)$;
the second reparameterization is not significant, and the first one
does not affect the normal velocity $\pi_{N}\partial_{v}C$;
so the reparameterization in the class ${\mathbb{F}}$
does not affect energies that depend only on $\pi_{N}\partial_{v}C$
(as per eq. (1.5)).
4 Analysis of $E^{N}(C)$
In this section we will focus our attention on the energy
$$E^{N}(C)\doteq\int_{I}|\pi_{N}\partial_{v}C|^{2}\,|\dot{C}|\,d\theta\,dv$$
which is associated with the geometric version of the $H^{0}$ metric.
We will derive an existence result for minima of $E^{N}$ in
a class of homotopies whose curvature is bounded.
Remark 4.1 (winding)
Unfortunately, any energy that is independent of the parameterization
cannot be coercive. It is then difficult to prove existence of geodesics
by means of the standard procedure in the Calculus of Variations
(see 3.1).
Indeed, suppose that $C$ is a homotopy, and define
$$C_{k}(\theta,v)=C(\theta+2\pi kv,v)$$
for all $k\in\mathbb{Z}$; then
$$E^{N}(C_{k})=E^{N}(C)\,.$$
(As a special case, consider two curves $c_{0}=c_{1}$, and
$C_{k}(\theta,v)=c_{0}(\theta+2\pi kv)$; each $C_{k}$
is a minimal geodesic connecting $c_{0},c_{1}$, since $E^{N}(C_{k})=0$.)
This proves that $E^{N}$ is not coercive in $H^{1}(I)$, since
$\int|\partial_{v}C_{k}|^{2}\to\infty$ when $k\to\infty$.
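The winding construction is easy to observe numerically. The sketch below is our own illustration, with $c_{0}$ the unit circle: $E^{N}(C_{k})$ vanishes for every $k$ (the velocity $\partial_{v}C_{k}$ is purely tangential), while the seminorm $\int|\partial_{v}C_{k}|^{2}=(2\pi k)^{2}\cdot 2\pi$ grows like $k^{2}$.

```python
import numpy as np

def winding_demo(k, n=256):
    # C_k(theta, v) = c0(theta + 2*pi*k*v), c0 the unit circle: every C_k
    # connects c0 to itself, winding k extra times in the parameter.
    t = np.linspace(0, 2*np.pi, n, endpoint=False)
    s = (np.arange(n) + 0.5) / n
    theta, v = np.meshgrid(t, s)
    phase = theta + 2*np.pi*k*v
    Ct = np.stack([-np.sin(phase), np.cos(phase)], axis=-1)            # d_theta C_k
    Cv = 2*np.pi*k*np.stack([-np.sin(phase), np.cos(phase)], axis=-1)  # d_v C_k
    speed = np.linalg.norm(Ct, axis=-1)
    T = Ct / speed[..., None]
    CvN = Cv - (Cv*T).sum(-1)[..., None]*T          # pi_N d_v C_k
    dA = (2*np.pi/n) * (1.0/n)
    EN = ((CvN**2).sum(-1) * speed).sum() * dA      # E^N(C_k)
    H1 = (Cv**2).sum(-1).sum() * dA                 # int |d_v C_k|^2
    return EN, H1
```

A bounded-energy sequence can thus be unbounded in $H^{1}(I)$, which is exactly the failure of coercivity.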
4.1 Knowledge base
Some of the following results apply
in the class ${\mathbb{F}}$ of homotopies with
prescribed arc parameter $|\partial_{\theta}C(\theta,v)|=l(v)$, while
others apply in the more general class $\mathbb{C}$.
Consider the integral
$$J(C)\doteq\int_{I}|H|^{2}\,|\pi_{N}\partial_{v}C|^{2}\,|\dot{C}|\,d\theta\,dv$$
(4.1)
where $H$ is the curvature of $C$ along $\theta$.
We now deduce the following result from [MM].
Proposition 4.2
Suppose that the homotopy $C$ is smooth and
immersed: then by extending to $\mathbb{R}^{n}$
the computation in sec. 3.3 of [MM] we get
$$\frac{d}{dv}\sqrt{\operatorname{len}(C)(v)}\leq\frac{1}{2}\sqrt{\int_{S^{1}}\langle H,\partial_{v}C\rangle^{2}\,|\dot{C}|\,d\theta}$$
and then, for any $0\leq v^{\prime}<v^{\prime\prime}\leq 1$,
$$\sqrt{\operatorname{len}(C)(v^{\prime\prime})}-\sqrt{\operatorname{len}(C)(v^{\prime})}\leq\frac{1}{2}\int_{v^{\prime}}^{v^{\prime\prime}}\left(\int_{S^{1}}\langle H,\partial_{v}C\rangle^{2}\,|\dot{C}|\,d\theta\right)^{1/2}dv\leq\frac{1}{2}\sqrt{v^{\prime\prime}-v^{\prime}}\left(\int_{v^{\prime}}^{v^{\prime\prime}}\int_{S^{1}}\langle H,\partial_{v}C\rangle^{2}\,|\dot{C}|\,d\theta\,dv\right)^{1/2}\leq\frac{\sqrt{J(C)}}{2}\sqrt{v^{\prime\prime}-v^{\prime}}$$
(by Cauchy–Schwarz and (3.1)),
and this implies that $\sqrt{\operatorname{len}(C)}$ is Hölder continuous
when $J(C)$ is finite.
Proposition 4.3 (l.s.c. and polyconvex function)
Let $W,V\in\mathbb{R}^{n}$. Let $W\times V$ be the vector of all $n(n-1)/2$
determinants of all 2 by 2 minors of the matrix having $W,V$ as
columns.
Consider a continuous function
$f:\mathbb{R}^{n\times 2}\to\mathbb{R}$ such that
$$f(W,V)=g(W,V,W\times V)$$
where $g:\mathbb{R}^{n(n+3)/2}\to\mathbb{R}$ is convex: then $f$ is polyconvex.
(33)The more general definition and the properties may be found
in sec. 4.1 in [But89], or
in 2.5 and 5.4 in [Dac82].
Let $p\geq 2$ and suppose $f\geq 0$:
by theorem 4.1.5 and remark 4.1.6 in [But89],
$$\int_{I}f(\partial_{\theta}C,\partial_{v}C)\,d\theta\,dv$$
is $W^{1,p}$-weakly-lower-semi-continuous. (34)Sequentially
weakly-* if $p=\infty$.
This means that,
if $C_{h}\rightharpoonup C$ weakly in $W^{1,p}$, that is,
$$C_{h}\to C\,,\quad\partial_{v}C_{h}\rightharpoonup\partial_{v}C\,,\quad\partial_{\theta}C_{h}\rightharpoonup\partial_{\theta}C$$
(4.2)
in $L^{p}$,
(35)we can write equivalently $C_{h}\to C$ or $C_{h}\rightharpoonup C$
in (4.2), thanks to the
Rellich–Kondrachov theorem (see thm. IV.16 in [Bre86])
then
$$\liminf_{h}\int_{I}f(\partial_{\theta}C_{h},\partial_{v}C_{h})\geq\int_{I}f(\partial_{\theta}C,\partial_{v}C)\,.$$
4.2 Compactness
We now list some simple lemmas.
Lemma 4.4
Let $\widetilde{C}(\theta,v)=C(\theta+\varphi(v),v)$ be
a reparameterization; then
$$\inf_{\varphi}\int|\pi_{T}\partial_{v}\widetilde{C}|^{2}=\int_{0}^{1}\int_{S^{1}}\left(\pi_{T}\partial_{v}C(\theta,v)-\fint_{S^{1}}\pi_{T}\partial_{v}C(s,v)\,ds\right)^{2}\,d\theta\,dv\,.$$
(4.3)
Lemma 4.5 (Poincaré inequality in $S^{1}$)
If $f:S^{1}\to\mathbb{R}^{n}$ is $C^{1}$, then
$$\max|f|-\min|f|\leq\frac{1}{\pi}\int_{S^{1}}|df|$$
(4.4)
so that
$$\sqrt{\int\left|f-\fint f\right|^{2}}\leq\frac{1}{2\pi}\int_{S^{1}}|df|\,.$$
(4.5)
For any $a\in\mathbb{R}$, $a\neq 0$,
$$\max|f|-\min|f|\leq\frac{2}{\pi}\int_{S^{1}}|df+a|$$
(4.6)
so that
$$\sqrt{\int\left|f-\fint f\right|^{2}}\leq\frac{1}{\pi}\int_{S^{1}}|df+a|\,.$$
(4.7)
Lemma 4.6
Suppose $C\in C^{2}$.
Differentiating $|\dot{C}|^{2}$ in $v$ we get
$$\partial_{v}(|\dot{C}|^{2})=2\langle T,\partial_{v}\partial_{\theta}C\rangle\,|\dot{C}|$$
so, when $|\dot{C}|\neq 0$, (36)when $|\dot{C}|=0$, then $T=0$ by our definition,
so equation (4.8) holds as well in the distributional sense
$$\partial_{v}(|\dot{C}|)=\langle T,\partial_{v}\partial_{\theta}C\rangle\,.$$
(4.8)
If the curves are in $C^{2}$ and
we differentiate $\pi_{T}\partial_{v}C$ in $\theta$, we get
$$\partial_{\theta}(\pi_{T}\partial_{v}C)=\partial_{\theta}\langle T,\partial_{v}C\rangle=\langle H,\partial_{v}C\rangle\,|\dot{C}|+\langle T,\partial_{\theta}\partial_{v}C\rangle=\langle H,\partial_{v}C\rangle\,|\dot{C}|+\partial_{v}(|\dot{C}|)$$
(4.9)
(where $\langle H,\partial_{v}C\rangle=\kappa\langle N,\partial_{v}C\rangle$
for curves in the plane).
Proposition 4.7
Let $M>0$ be a constant.
Suppose that a $C^{2}$ homotopy $C:I\to\mathbb{R}^{n}$ satisfies
$|\dot{C}(\theta,v)|=l(v)$ and
$$\int_{0}^{1}\left(\int_{S^{1}}\langle H,\partial_{v}C\rangle\,|\dot{C}|\,d\theta\right)^{2}dv\leq M$$
(4.10)
where $H$ is the curvature of $C$ along $\theta$.
Then there exists a suitable reparameterization
$$\widetilde{C}(\theta,v)=C(\theta+\varphi(v),v)$$
(4.11)
such that
$$\int|\pi_{T}\partial_{v}\widetilde{C}|^{2}\leq 2M\,.$$
{proof}
Suppose that $l\in C^{1}$ (the general proof being obtained by an
approximation argument).
Summarizing the derivations in 4.6,
$$\partial_{\theta}(\pi_{T}\partial_{v}C)=\langle H,\partial_{v}C\rangle\,|\dot{C}|+\partial_{v}l\,.$$
Applying eqns (4.3) and (4.7)
(with $a=-\partial_{v}l$) we obtain
$$\inf_{\varphi}\int|\pi_{T}\partial_{v}\widetilde{C}|^{2}=\int_{0}^{1}\int_{S^{1}}\left|\pi_{T}\partial_{v}C(\theta,v)-\fint_{S^{1}}\pi_{T}\partial_{v}C(\tilde{\theta},v)\,d\tilde{\theta}\right|^{2}\,d\theta\,dv\leq\int_{0}^{1}\left(\int_{S^{1}}|\partial_{\theta}(\pi_{T}\partial_{v}C)+a|\,d\theta\right)^{2}dv\leq\int_{0}^{1}\left(\int_{S^{1}}|\langle H,\partial_{v}C\rangle\,|\dot{C}|+\partial_{v}l-\partial_{v}l|\,d\theta\right)^{2}dv\,.$$
Note that the reparametrization (4.11) may be viewed as an
unwinding, when compared with 4.1.
Remark 4.8
Note that, if $|\dot{C}(\theta,v)|=l(v)$, then, by Cauchy–Schwarz and (3.1),
$$\int_{0}^{1}\left(\int_{S^{1}}\langle H,\partial_{v}C\rangle\,|\dot{C}|\,d\theta\right)^{2}dv\leq 2\pi\int_{I}\langle H,\partial_{v}C\rangle^{2}\,l(v)^{2}\leq 2\pi(\sup l)\,J(C)\,.$$
So a bound on $J(C)$ should provide compactness.
4.3 Lower semicontinuity
Let $\alpha>0$, $\beta>0$, $V,W\in\mathbb{R}^{n}$, and define
$$e(W,V)=|\pi_{W^{\perp}}V|^{\alpha}\,|W|^{\beta}$$
and
$$E_{\alpha,\beta}(C)\doteq\int_{I}e(\partial_{\theta}C,\partial_{v}C)\,.$$
Note that $E^{N}(C)$ is obtained by choosing $\alpha=2,\beta=1$.
In general, if $\beta=1$ then $E_{\alpha,\beta}(C)$ is a geometric energy
(see 1.12).
Let $W\times V$ be the vector of all $n(n-1)/2$
determinants of all 2 by 2 minors of the matrix having $W,V$ as
columns.
The identity
$$|\pi_{W^{\perp}}V|^{2}\,|W|^{2}=|V|^{2}\,|W|^{2}-\langle V,W\rangle^{2}=|V\times W|^{2}$$
is easily checked.
(37)In $\mathbb{R}^{3}$ we have
$\langle V,W\rangle=|V||W|\cos\alpha$ and $|V\times W|=|V||W|\sin\alpha$,
where $\alpha$ is the angle between the two vectors.
Let
$$f(W,V)=|V\times W|\,.$$
Note that $f$ is a polyconvex function (see 4.3)
and that
$$e(W,V)=|\pi_{W^{\perp}}V|^{\alpha}\,|W|^{\beta}=|W\times V|^{\alpha}\,|W|^{\beta-\alpha}$$
(4.12)
and
$$E_{\alpha,\beta}(C)=\int_{I}f(\partial_{\theta}C,\partial_{v}C)^{\alpha}\,|\partial_{\theta}C|^{\beta-\alpha}\,.$$
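The identity above, together with the minor-determinant definition of $W\times V$, can be checked numerically in any dimension. The sketch below is our own check (random vectors in $\mathbb{R}^{5}$ are an arbitrary choice): it verifies that the three expressions agree.

```python
import itertools
import numpy as np

def minor_cross(W, V):
    # W x V: the vector of all n(n-1)/2 determinants of the 2x2 minors
    # of the n x 2 matrix having W and V as columns.
    n = len(W)
    return np.array([W[i]*V[j] - W[j]*V[i]
                     for i, j in itertools.combinations(range(n), 2)])

rng = np.random.default_rng(0)
W, V = rng.normal(size=5), rng.normal(size=5)
P = V - (V @ W) / (W @ W) * W                 # pi_{W^perp} V
lhs = (P @ P) * (W @ W)                       # |pi_{W^perp} V|^2 |W|^2
mid = (V @ V) * (W @ W) - (V @ W)**2          # |V|^2 |W|^2 - <V,W>^2
rhs = minor_cross(W, V) @ minor_cross(W, V)   # |V x W|^2
```

For $n=3$ the vector `minor_cross(W, V)` coincides, up to order and signs of the components, with the classical cross product, so its squared norm is the same.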
We can provide the following lower semicontinuity result in the class ${\mathbb{F}}$.
Proposition 4.9
Let $p\geq 2$ and suppose $\alpha>\beta>0$.
Fix a continuous function $l:[0,1]\to\mathbb{R}^{+}$, $l\geq 0$, and use it to
build the class ${\mathbb{F}}$.
Then $E_{\alpha,\beta}$ is $W^{1,p}$-weakly-lower-semi-continuous in the class
${\mathbb{F}}$.
More precisely: let
$$C\in{\mathbb{F}}\,,\qquad(C_{h})_{h}\subset{\mathbb{F}}\,;$$
if $C_{h}\rightharpoonup C$ weakly in $W^{1,p}$, that is,
$$C_{h}\to C\,,\quad\partial_{v}C_{h}\rightharpoonup\partial_{v}C\,,\quad\partial_{\theta}C_{h}\rightharpoonup\partial_{\theta}C$$
in $L^{p}$, and
$$l(v)=|\dot{C}(\theta,v)|=|\dot{C}_{h}(\theta,v)|\,,$$
then
$$\liminf_{h}E_{\alpha,\beta}(C_{h})\geq E_{\alpha,\beta}(C)\,.$$
{proof}
We prove the theorem in steps:
•
Let $\lambda>0$.
Suppose $|\dot{C}|\equiv|\dot{C}_{h}|\equiv\lambda$; then
$$E_{\alpha,\beta}(C)=\lambda^{\beta-\alpha}\int f(\partial_{\theta}C,\partial_{v}C)^{\alpha}$$
is l.s.c. (by 4.3).
•
For any homotopy $C$ and any continuous $g:[0,1]\to\mathbb{R}^{+}$,
define $e_{g}(C):[0,1]\to\mathbb{R}^{+}$,
$$e_{g}(C)(v)\doteq\begin{cases}g(v)^{\beta-\alpha}\int_{S^{1}}f(\partial_{\theta}C,\partial_{v}C)^{\alpha}\,d\theta&\text{if }g(v)>0\\ 0&\text{if }g(v)=0\,.\end{cases}$$
If $C\in{\mathbb{F}}$ (that is, $l(v)=|\dot{C}(\theta,v)|$) then
$$E_{\alpha,\beta}(C)=\int_{0}^{1}e_{l}(C)\,dv\,.$$
•
Consider a piecewise constant function $g\geq 0$ defined by
$$g=\sum_{i=1}^{m}g_{i}\chi_{[a_{i},a_{i+1})}$$
(4.13)
and let
$$\hat{E}(C)\doteq\int_{0}^{1}e_{g}(C)\,dv=\sum_{i=1}^{m}\int_{a_{i}}^{a_{i+1}}e_{g_{i}}(C)\,dv\,;$$
then we apply the previous reasoning to all addends
and conclude that
$$\liminf_{h}\hat{E}(C_{h})\geq\hat{E}(C)\,.$$
•
Suppose $l$ is continuous and $l\geq 0$.
Let $\tau$ be the class of piecewise constant functions $g\geq 0$ defined
as in (4.13),
such that on any interval $[a_{i},a_{i+1})$, either
$g_{i}=0$ or
$$g_{i}\geq\sup_{[a_{i},a_{i+1})}l\,.$$
Then for any such $g$ and $C\in{\mathbb{F}}$,
$$\int_{0}^{1}e_{g}(C)\,dv\leq E_{\alpha,\beta}(C)\,.$$
•
Choose $g$ in the class $\tau$; then
$$\liminf_{h}E_{\alpha,\beta}(C_{h})\geq\liminf_{h}\int_{0}^{1}e_{g}(C_{h})\,dv\geq\int_{0}^{1}e_{g}(C)\,dv\,.$$
•
Fix $C$: it is possible to find a sequence $(g_{j})\subset\tau$
such that
$$e_{g_{j}}(C)(v)\to_{j}e_{l}(C)(v)$$
for almost all points $v$, monotonically increasing:
indeed, let
$$A_{j,i}=[i2^{-j},(i+1)2^{-j})\,,\qquad I_{j,i}=\inf_{v\in A_{j,i}}l(v)\,,\qquad S_{j,i}=\sup_{v\in A_{j,i}}l(v)$$
and
$$g_{j}(v)=\begin{cases}0&\text{if }v\in A_{j,i}\text{ and }I_{j,i}=0\\ S_{j,i}&\text{if }v\in A_{j,i}\text{ and }I_{j,i}>0\,.\end{cases}$$
Then
$$E_{\alpha,\beta}(C)=\sup_{j}\int_{0}^{1}e_{g_{j}}(C)\,dv\,,$$
and taking the supremum over $j$ in the previous step concludes the proof.
We would like to prove the following more general statement.
Conjecture 4.10
Choose Lipschitz homotopies
$$C:I\to\mathbb{R}^{n}\,,\qquad C_{h}:I\to\mathbb{R}^{n}$$
and define
$$l(v)\doteq\operatorname{len}C(v)\,,\qquad l_{h}(v)\doteq\operatorname{len}C_{h}(v)\,.$$
If $l_{h}\to l$ uniformly and
$C_{h}\rightharpoonup C$ weakly in $W^{1,p}$, then
$$\liminf_{h}E^{N}(C_{h})\geq E^{N}(C)\,.$$
On the other hand, we cannot generalize the theorem
further: this is shown by example 4.3.1.
4.3.1 Example
We want to show that $E^{N}$ is not l.s.c.
if we do not control the length.
Let $\alpha>\beta>0$, and define
$$e(W,V)=|\pi_{W^{\perp}}V|^{\alpha}\,|W|^{\beta}=|W\times V|^{\alpha}\,|W|^{\beta-\alpha}$$
and
$$E_{\alpha,\beta}(C)\doteq\int_{I}e(\partial_{\theta}C,\partial_{v}C)\,.$$
We will actually show that $E_{\alpha,\beta}$ is not l.s.c., and that
Proposition 4.11
$$\Gamma E_{\alpha,\beta}(C)=0$$
where the relaxation is computed with respect to weak-* $L^{\infty}$
convergence of the derivatives.
1.
For simplicity, we temporarily drop the
requirement that the curves $C(\cdot,v)$ be closed.
We then redefine $I=[0,1]\times[0,1]$.
2.
Let $\widetilde{C}:I\to I$ be a Lipschitz map such that
$$\widetilde{C}(\theta,0)=(\theta,0)\,,\quad\widetilde{C}(\theta,1)=(\theta,1)\,,\quad\widetilde{C}(0,v)=(0,v)\,,\quad\widetilde{C}(1,v)=(1,v)\,.$$
(4.14)
3.
Let $h\geq 1$ be an integer.
We rescale $\tilde{C}$ and glue many copies of it to build $C_{h}$, as
follows:
$$C_{h}(\theta,v)\doteq\frac{1}{h}\tilde{C}\Big{(}(h\theta)\bmod 1,(hv)\bmod 1\Big{)}+b_{h}(\theta,v)$$
where $b_{h}:I\to\mathbb{R}^{2}$ is the piecewise constant function
$$b_{h}(\theta,v)\doteq\left(\frac{1}{h}\lfloor h\theta\rfloor,\frac{1}{h}\lfloor hv\rfloor\right)$$
(in particular, $C_{1}=\tilde{C}$).
We represent this process in figure 3.
Then
$$\partial_{\theta}C_{h}(\theta,v)=\partial_{\theta}\tilde{C}\Big{(}(h\theta)\bmod 1,(hv)\bmod 1\Big{)}\,,\qquad\partial_{v}C_{h}(\theta,v)=\partial_{v}\tilde{C}\Big{(}(h\theta)\bmod 1,(hv)\bmod 1\Big{)}\,.$$
We may think of the $C_{h}$ as
homotopies that connect the same two curves, namely,
$$C_{h}(\theta,0)=(\theta,0)=\tilde{C}(\theta,0)\,,\qquad C_{h}(\theta,1)=(\theta,1)=\tilde{C}(\theta,1)\,,$$
while the extreme points move in a controlled way, namely
$$C_{h}(0,v)=(0,v)=\tilde{C}(0,v)\,,\qquad C_{h}(1,v)=(1,v)=\tilde{C}(1,v)\,.$$
4.
Let $C(\theta,v)\doteq(\theta,v)$ be the identity.
The sequence $C_{h}$ has the following properties:
(a)
$C_{h}\to C$ in $L^{\infty}$, and more precisely
$$\sup_{I}|C_{h}-C|\leq\frac{2}{h}\,;$$
(b)
$\partial_{\theta}C_{h}$ and $\partial_{v}C_{h}$
are bounded in $L^{\infty}$, and more precisely,
$$\sup_{I}|\partial_{\theta}C_{h}|\leq\sup_{I}|\partial_{\theta}\tilde{C}|\,,\qquad\sup_{I}|\partial_{v}C_{h}|\leq\sup_{I}|\partial_{v}\tilde{C}|\,,$$
and then all $C_{h}$ are equi-Lipschitz;
(c)
$$\partial_{\theta}C_{h}\rightharpoonup e_{1}=(1,0)=\partial_{\theta}C$$
and
$$\partial_{v}C_{h}\rightharpoonup e_{2}=(0,1)=\partial_{v}C$$
weakly-* in $L^{\infty}(I)$; (38)proof by lemma 1.2 in [Dac82]
(d)
let $\tilde{l}(v)\doteq\operatorname{len}(\tilde{C})(v)$, $a\doteq\int\tilde{l}$ and
$$l_{h}(v)\doteq\operatorname{len}(C_{h})(v)=\tilde{l}\big{(}(hv)\bmod 1\big{)}\,;$$
then $l_{h}\rightharpoonup a$ weakly-* in $L^{\infty}([0,1])$ (again by lemma 1.2 in [Dac82]);
(e)
suppose $\tilde{C}$ is piecewise smooth, so that the curvature
$H_{h}$ of $C_{h}$ can be defined almost everywhere.
By (4.14), $\tilde{l}(0)=\tilde{l}(1)=1$.
If $\tilde{l}(v)>1$ at some points, then the sequence
$l_{h}(v)$ is not equicontinuous: then, by
proposition 4.2, the sequence of integrals
$$\int_{0}^{1}\int_{S^{1}}\langle H_{h},\partial_{v}C_{h}\rangle^{2}\,|\dot{C}_{h}|\,d\theta\,dv$$
is unbounded in $h$; (39)a fortiori,
$J(C_{h})$ is unbounded — $J(C)$ was defined in (4.1)
(f)
$E_{\alpha,\beta}(C_{h})$ is constant in $h$:
indeed
$$\int_{I}e(\partial_{\theta}C_{h},\partial_{v}C_{h})=h^{2}\int_{0}^{1/h}\int_{0}^{1/h}e\big{(}\partial_{\theta}C_{h},\partial_{v}C_{h}\big{)}\,d\theta\,dv=h^{2}\int_{0}^{1/h}\int_{0}^{1/h}e\big{(}\partial_{\theta}\tilde{C}(h\theta,hv),\partial_{v}\tilde{C}(h\theta,hv)\big{)}\,d\theta\,dv=\int_{I}e\big{(}\partial_{\theta}\tilde{C}(\theta,v),\partial_{v}\tilde{C}(\theta,v)\big{)}\,d\theta\,dv\,,$$
that is,
$$E_{\alpha,\beta}(C_{h})=E_{\alpha,\beta}(\tilde{C})\,.$$
(4.15)
5.
Let $j\geq 1$ be a fixed integer.
Let $\widetilde{C}:I\to\mathbb{R}^{2}$ be defined by
$$\gamma(v)=\begin{cases}v&\text{if }v\in[0,1/2]\\ 1-v&\text{if }v\in[1/2,1]\end{cases}$$
(4.16)
and
$$\tilde{C}(\theta,v)=\big{(}\theta,v+\gamma(v)\sin(2\pi j\theta)\big{)}$$
(4.17)
(note that $\tilde{C}$ is a graph).
This choice enjoys the following further properties.
•
We know that $l_{h}\rightharpoonup a$ with $a=\int\operatorname{len}\tilde{C}$;
but this limit $a$ is strictly bigger than $1$, whereas
$\operatorname{len}C\equiv 1$.
•
For $\theta\in[0,1/h]$ and $v\in[0,1/h]$,
$$\partial_{v}C_{h}(\theta,v)=\partial_{v}\tilde{C}(h\theta,hv)=(0,\ 1+\gamma^{\prime}(hv)\sin(2\pi jh\theta))\,,\qquad\partial_{\theta}C_{h}(\theta,v)=\partial_{\theta}\tilde{C}(h\theta,hv)=(1,\ 2\pi j\gamma(hv)\cos(2\pi jh\theta))\,.$$
Then
$$\sup_{I}|\partial_{\theta}C_{h}|\leq 1+2\pi j\,,\qquad\sup_{I}|\partial_{v}C_{h}|\leq 2\,.$$
•
We compute
$$E_{\alpha,\beta}(\tilde{C})=\int_{I}e\big{(}\partial_{\theta}\tilde{C}(\theta,v),\partial_{v}\tilde{C}(\theta,v)\big{)}\,d\theta\,dv=\int_{0}^{1}\int_{0}^{1}|1+\gamma^{\prime}(v)\sin(2\pi j\theta)|^{\alpha}\,|1+4\pi^{2}j^{2}\gamma(v)^{2}\cos^{2}(2\pi j\theta)|^{(\beta-\alpha)/2}\,d\theta\,dv=2\int_{0}^{1/2}\int_{0}^{1}|1+\sin(2\pi j\theta)|^{\alpha}\,|1+4\pi^{2}j^{2}v^{2}\cos^{2}(2\pi j\theta)|^{(\beta-\alpha)/2}\,d\theta\,dv\,;$$
then
$$\lim_{j\to\infty}E_{\alpha,\beta}(\tilde{C})=0\,.$$
(4.18)
6.
Combining (4.15) and (4.18)
we prove that $E_{\alpha,\beta}$ is not l.s.c. Indeed, for
$j$ large,
$$\lim_{h}E_{\alpha,\beta}(C_{h})=E_{\alpha,\beta}(\tilde{C})<E_{\alpha,\beta}(C)=1$$
whereas $C_{h}\rightharpoonup C$.
7.
To prove 4.11, consider any homotopy $C$;
this may be approximated by a piecewise linear homotopy,
which in turn may be approximated by many replicas of the above construction.
4.4 Existence of minimal geodesics
Theorem 4.12
Let $M>0$.
Let $\mathcal{A}$ be the set of admissible curves $c:S^{1}\to\mathbb{R}^{n}$ such that
•
$c:S^{1}\to\mathbb{R}^{n}$ is Lipschitz, and $c$ admits curvature $H$
in the measure sense (see 3.3); and moreover
•
the total mass $|H|(S^{1})$ of the curvature $H$ of $c$
is bounded uniformly
$$|H|(S^{1})\leq M\ .$$
(4.19)
Let $c_{0},c_{1}$ be curves in $\mathcal{A}$.
Fix a bounded continuous function $l:[0,1]\to\mathbb{R}^{+}$, with $\inf l>0$.
Let $\mathcal{B}$ be the class
of homotopies $C:I\to\mathbb{R}^{n}$ such that
•
$C\in H^{1}(I\to\mathbb{R}^{n})$
•
any given curve $\theta\mapsto C(\theta,v)$ is in $\mathcal{A}$
•
the curvature can be extended to the homotopy (see 3.7),
$\partial_{v}C$ is continuous, and
$$\int_{0}^{1}\left(\int_{S^{1}}\langle H,\partial_{v}C\rangle|\dot{C}|\,d\theta\right)^{2}dv\leq M$$
•
$l(v)=\operatorname{len}C(v)=\int_{S^{1}}|\dot{C}|\,d\theta$
•
$C(\theta,0)=c_{0}(\theta)$ and $C(\theta,1)=c_{1}(\theta)$
If $\mathcal{B}$ is nonempty, then the functional $E^{N}$ admits a minimizing
homotopy $C^{*}$;
this minimum $C^{*}$ satisfies all the above requirements, except possibly
condition (4.10).
{proof}
The proof is divided into a few important (and independent) steps.
•
Let $C_{h}$ be a sequence such that
$$\lim E^{N}(C_{h})=\inf_{C\in\mathcal{B}}E^{N}(C)$$
Up to reparameterization, assume
$$|\dot{C}_{h}(\theta,v)|=l(v)\ .$$
By this bound, and the compactness result 4.7,
we reparameterize any term $C_{h}$ to $\widetilde{C}_{h}$ by
$$\widetilde{C}_{h}(\theta,v)=C_{h}(\theta+\varphi_{h}(v),v)$$
so that
$$\int|\pi_{T}\partial_{v}\widetilde{C}_{h}|^{2}\leq 2M\ ;$$
moreover
$$\int|\pi_{N}\partial_{v}\widetilde{C}_{h}|^{2}\leq\Big(\max\frac{1}{l}\Big)\int|\pi_{N}\partial_{v}\widetilde{C}_{h}|^{2}\,|\dot{C}|\leq\Big(\max\frac{1}{l}\Big)(1+\inf E^{N})$$
(4.19)
(eventually in $h$), and then
$$\int|\partial_{v}\widetilde{C}_{h}|^{2}\leq 2M+(\max\frac{1}{l})(1+\inf E^{N})$$
whereas
$$\int_{I}|\dot{C}_{h}|^{2}=\int_{0}^{1}l(v)^{2}\,dv\ ;$$
then (by the Banach–Alaoglu–Bourbaki theorem; see Thm. III.15, III.25 and Cor. III.26 in [Bre86])
up to a subsequence, $\widetilde{C}_{h}$ converges
weakly in $H^{1}$ to a homotopy $C^{*}$.
•
We want to prove that
$|\dot{C}^{*}(\theta,v)|=l(v)$ for almost all $\theta,v$.
We know that $|\dot{C}^{*}(\theta,v)|\leq l(v)$,
by 3.16. Suppose, on the contrary, that
$|\dot{C}^{*}(\theta,v)|<l(v)$: then there exist $\varepsilon>0$ and
a measurable subset $A\subset I$ with positive measure, such that
$|\dot{C}^{*}(\theta,v)|<l(v)-\varepsilon$ for $(\theta,v)\in A$.
Let $A_{v}=\{\theta:(\theta,v)\in A\}$ be the slice of $A$:
then, by Fubini-Tonelli, there is a $v$ such that the measure
of $A_{v}$ is positive. We fix that $v$. Suppose that $H_{h}$
is the curvature of $C_{h}(\cdot,v)$: then
$$H_{h}=l(v)\partial_{\theta\theta}C_{h}$$
in the sense of measures.
We know that $H$ has bounded mass, so $\partial_{\theta\theta}C_{h}$ does too:
by Theorem 3.23 in [AFP00], $\partial_{\theta}C_{h}(\cdot,v)$
is compact in $L^{1}(S^{1})$; so, up to a
subsequence, we would have that $\dot{C}_{h}(\cdot,v)\to\dot{C}^{*}(\cdot,v)$ strongly in $L^{1}(S^{1})$:
then $|\dot{C}^{*}(\theta,v)|=l(v)$ for that particular choice of $v$,
reaching a contradiction.
•
By 4.9,
$\liminf_{h}E^{N}(C_{k_{h}})\geq E^{N}(C^{*})$: then $C^{*}$ is the minimum.
Remark 4.13
If we wish to extend the above theorem, we face some obstacles.
•
If we do not enforce some bounds on curvature
(as (4.19) and
(4.10)), then
the example in §4.6 shows that we cannot achieve
compactness of a minimizing subsequence
•
If we wish to remove the hypothesis “$\inf l>0$” (note that the bound (4.19) does not imply that $\inf l>0$),
we are faced with the following problem: if $l(v)=0$, then
the curve $C(\cdot,v)$ collapses to a point; consequently
if $l(v)=0$ on an interval $[a,b]$,
then the homotopy collapses to a path on that
interval; moreover, the length and the energy of $C$
restricted to $v\in[a,b]$ are necessarily $0$, so that $E(C)$ provides
no bound on the behaviour of $C$: we again lose
compactness of a minimizing subsequence
(indeed, the inequality
(4.19) needs $\inf_{v}\operatorname{len}(C)(v)>0$ in order to
control $\int|\partial_{v}C|^{2}$).
4.4.1 Michor-Mumford
Let $d(c_{0},c_{1})$ be the geodesic distance induced by
the metric $G^{A}$ defined in [MM]
(see eq.(2.7) here).
By the results in [MM], we know that (in
general) this distance is nondegenerate:
Proposition 4.14
Consider a homotopy $C$ connecting two curves
$c_{0}=C(\cdot,0)$, $c_{1}=C(\cdot,1)$,
and its energy $E^{A}(C)\doteq E^{N}(C)+A\,J(C)$
(see eq. (2.8) here).
By the Hölder inequality,
$$\left(\int_{I}|\dot{C}|\right)^{1/2}\left(\int_{I}|\pi_{N}\partial_{v}C|^{2}|\dot{C}|\right)^{1/2}\geq\int_{I}|\pi_{N}\partial_{v}C|\,|\dot{C}|\geq\int_{I}|\partial_{v}C\times\dot{C}|$$
We then obtain the area swept out bound (in a form slightly better than the one in §3.4 in [MM])
$$\left(\int_{I}|\partial_{v}C\times\dot{C}|\right)^{2}\leq E^{N}(C)\,\int_{0}^{1}\operatorname{len}(C(\cdot,v))\,dv$$
Indeed the leftmost term is the area swept by the
homotopy.
By Proposition 4.2, we know that, if $J(C)$ is
bounded, then $\operatorname{len}(C)$ is continuous and bounded.
So, if $c_{0}\neq c_{1}$ and there is no zero-area homotopy
connecting $c_{0},c_{1}$, then $d(c_{0},c_{1})>0$.
(Note that there are curves $c_{0}\neq c_{1}$ in the completion of the
space $B_{i}$, described in §3.11 in [MM], that can be connected by a
zero-area homotopy.)
See also prop. 5.1 here.
We currently cannot prove a theorem on the existence of
minimal geodesics for the energy $E^{A}(C)$;
we have, though, derived some insight in this section,
so we discuss the following conjecture.
Conjecture 4.15
Fix two curves $c_{0}$ and $c_{1}$.
The energy $E^{A}(C)$ admits a minimum in the class $\mathcal{C}$ of
homotopies connecting $c_{0}$ and $c_{1}$.
Can we improve the proof of Theorem 4.12
to prove this conjecture? We discuss what works
and what fails.
•
We may want to use $J(C)$ to drop the requirement (4.19),
which is, in turn, used to obtain l.s.c. of the functional $E^{N}$;
so we may think of proving this lemma:
“Suppose that we are given a sequence of
smooth homotopies $C_{h}$,
with $J(C_{h})\leq M$ and $C_{h}\rightharpoonup C$ weakly in $W^{1,p}$:
then $\operatorname{len}(C_{h})\to\operatorname{len}(C)$”;
but this is false, as seen in this example.
Example 4.16
Let $C_{h}:[0,1]\times[0,1]\to\mathbb{R}^{2}$ be defined as
$$C_{h}(u,v)=\left(u,\ \frac{1}{h}\sin(2\pi hu)\right)$$
and
$$C(u,v)=(u,0)\ .$$
These homotopies
do not depend on $v$: then
$J(C_{h})=0$. On the other hand, $C_{h}\rightharpoonup C$, but $\operatorname{len}(C_{h})$ is
constant and bigger than $1=\operatorname{len}(C)$.
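A numerical sketch of this example (the grid resolution and the values of $h$ are illustrative): the length of $C_{h}$ is independent of $h$ and strictly bigger than $1$, while $\sup|C_{h}-C|=1/h\to 0$.

```python
import numpy as np

def length_Ch(h, n=200000):
    # arclength of u -> (u, sin(2*pi*h*u)/h); the speed sqrt(1 + (2*pi*cos(2*pi*h*u))^2)
    # does not depend on the amplitude 1/h, so len(C_h) is constant in integer h
    u = (np.arange(n) + 0.5) / n            # midpoint rule on [0,1]
    speed = np.sqrt(1.0 + (2*np.pi*np.cos(2*np.pi*h*u))**2)
    return speed.mean()

lengths = [length_Ch(h) for h in (1, 2, 4, 8)]
assert max(lengths) - min(lengths) < 1e-6   # len(C_h) is constant in h
assert min(lengths) > 1.0                   # ... and strictly bigger than len(C) = 1
# meanwhile C_h -> C uniformly, since sup |C_h - C| = 1/h -> 0
```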
•
We need a way to make sure that curves in the homotopy do
not collapse to points (as discussed in 4.13);
so we may think of proving this lemma:
“Suppose that the homotopy $C$ admits curvature
and $J(C)<\infty$: then $\inf_{v}\operatorname{len}(C)(v)>0$”;
but this is false, as seen in this example.
Example 4.17
Let
$$c_{1}(\theta)=(\sin(\theta),\cos(\theta))$$
be the circle in $\mathbb{R}^{2}$, and build the homotopy
$$C(\theta,v)=v^{4}c_{1}(\theta)$$
Then
$$J(C)=\int_{0}^{1}\frac{1}{v^{8}}(4v^{3})^{2}\,2\pi v^{4}\,dv=32\pi\int_{0}^{1}v^{2}\,dv=\frac{32\pi}{3}<\infty\ ,$$
although $\operatorname{len}(C(\cdot,0))=0$.
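The arithmetic of this integral can be double-checked symbolically (a minimal sympy sketch):

```python
import sympy as sp

v = sp.symbols('v', positive=True)
# the integrand written out in the text: (1/v^8)*(4 v^3)^2 * 2*pi*v^4 = 32*pi*v^2
integrand = (1/v**8) * (4*v**3)**2 * 2*sp.pi*v**4
assert sp.simplify(integrand - 32*sp.pi*v**2) == 0
J = sp.integrate(integrand, (v, 0, 1))
assert sp.simplify(J - 32*sp.pi/3) == 0     # J(C) = 32*pi/3 as claimed
```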
•
We need a semicontinuity result for $J(C)$.
Indeed, we cannot hope for any cancellation effect
in the sum $E^{N}(C)+J(C)$, because the two energies scale
in different ways (and this suggests that the energy $E^{A}(C)$
should not satisfy the rescaling property 1):
Remark 4.18 (rescaling)
Let $\varepsilon>0$ and
$\tilde{C}=\varepsilon C$; then $E^{N}(\tilde{C})=\varepsilon^{3}E^{N}(C)$,
but $J(\tilde{C})=\varepsilon J(C)$.
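This scaling behaviour can be verified on a concrete homotopy, the concentric circles $C(\theta,v)=(1+v)(\cos\theta,\sin\theta)$. The sketch below assumes for $J$ the form $J(C)=\int_{0}^{1}\operatorname{len}^{-1}\big(\int\langle H,\partial_{v}C\rangle|\dot{C}|\,d\theta\big)^{2}dv$, consistent with the computation in Example 4.17 (this is an assumption, since the normalization of $J$ is fixed elsewhere in the text):

```python
import sympy as sp

v, eps = sp.symbols('v epsilon', positive=True)
r = 1 + v                         # radius of the circle C(., v)

# E^N = int_0^1 int |pi_N C_v|^2 |C_theta| dtheta dv; here |pi_N C_v| = 1, |C_theta| = r
EN     = sp.integrate(2*sp.pi * 1**2 * r, (v, 0, 1))
EN_eps = sp.integrate(2*sp.pi * eps**2 * (eps*r), (v, 0, 1))
assert sp.simplify(EN_eps - eps**3 * EN) == 0          # E^N scales as epsilon^3

# J in the assumed form: len^{-1} (int <H, C_v> |C_theta| dtheta)^2;
# here <H, C_v> = -1/r, len = 2*pi*r, and H scales as 1/epsilon under C -> epsilon*C
J     = sp.integrate((2*sp.pi * (-1/r) * r)**2 / (2*sp.pi*r), (v, 0, 1))
J_eps = sp.integrate((2*sp.pi * (-1/(eps*r)) * eps * (eps*r))**2 / (2*sp.pi*eps*r), (v, 0, 1))
assert sp.simplify(J_eps - eps * J) == 0               # J scales as epsilon
```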
•
On the bright side, we do not have a counterexample
showing that $J(C)$ is not l.s.c.; actually,
remark (4e)
suggests that if $C_{h}\to C$, the homotopies have a common border
condition (such as (4.14)),
and $J(C_{h})$ is bounded, then $\liminf E^{N}(C_{h})\geq E^{N}(C)$.
•
Moreover, by 4.14, we know that
the induced distance is, in general, nondegenerate.
•
As pointed out in 4.8,
the term $J(C)$ in the metric provides compactness in $H^{1}(I)$.
•
Moreover the example in §4.6 shows
that we do need to control the curvature of curves to be able
to prove existence of minimal geodesics: this justifies the term $J(C)$
in the Michor–Mumford energy $E^{A}$ (as
well as the bound (4.10) in Theorem 4.12).
4.5 Space of Curves
By using the previous Theorem 4.12, we immediately obtain
a metric on a Space of Shapes.
Fix $M>0$.
Let $\mathcal{S}$ be the space of unit length
closed immersed $C^{2}$ curves such
that, for any $c\in\mathcal{S}$, the curvature $\kappa$ of $c$ is
bounded by $M$, as
$$|\kappa|\leq M\ .$$
(If the conjecture 4.10 holds true, then
the “unit length” constraint may be dropped;
note that the formula of the metric is “geometric”, as defined in
1.12, but the minimal geodesics and the
distance are not invariant
with respect to rescaling, due to the bound $|\kappa|\leq M$.)
We may think of $\mathcal{S}$ as a “submanifold with border” in the
manifold $M$ of all closed unit length immersed $C^{2}$ curves.
Then we can use
the Riemannian metric
$$\langle h,k\rangle=\int_{S^{1}}\langle\pi_{N}h(\theta),\pi_{N}k(\theta)\rangle\,d\theta$$
to define a positive geodesic distance in $\mathcal{S}$:
by the theorem in the previous section, this distance admits minimal
geodesics:
{proof}
indeed, we fix $\operatorname{len}(C)\equiv 1$; we may write
$$J(C)\leq K^{2}E^{N}(C)$$
then we may use the remark 4.8
to obtain compactness of minimizing subsequences.
Unfortunately, since $\mathcal{S}$ has a border (given by the
constraint $|\kappa|\leq M$) then the minimal geodesic will
not, in general, satisfy the Euler-Lagrange ODE defined by $E^{N}$.
4.6 The pulley
We show that there exists a sequence of Lipschitz
functions $C_{h}:I\to{\mathrm{l\hskip-1.5ptR}}^{2}$ such that $|\pi_{N}\partial_{v}C_{h}|\leq 1$,
$|\partial_{\theta}C_{h}|=1$,
but $\int_{I}|\partial_{v}C_{h}|\to\infty$.
Consider Figure 4. The thick line $ABCDEF$ is
the curve $\theta\mapsto C_{5}(\theta,0)$. The thick arrows represent
the normal part $\pi_{N}\partial_{v}C_{5}$ of the velocity, while the thin
dashed arrows represent the tangent part $\pi_{T}\partial_{v}C_{5}$ of the
velocity. The circles are just for fun, and represent the wheels of
the pulley.
The movement of $C_{5}(\theta,v)$, that is, its evolution in $v$, is
described as follows: the part $ABCD$ is still, that is, it is constant in $v$;
in the part $DE$ (respectively $FA$) of the curve, vertical segments move
apart (resp., together) as the thick arrows indicate,
with horizontal velocity with norm $|\pi_{N}\partial_{v}C_{5}|=1$;
as the curve unravels, it is forced to move also parallel to itself.
The generic curve $C_{h}$ has $2h$ wheels: $h$ wheels in section $DE$,
to pull apart, and $h$ wheels in section $AF$, to pull together;
the horizontal tracts in $AF$ and $DF$ are of length
$\propto 1/h$, so the tract $AF$ straightens up in a time
$\Delta v\propto 1/h$: at that moment, the movement inverts;
so, while $v\in[0,1]$, the cycle repeats $h$ times.
In this case, the tangent velocity in section $DF$ is $h$ times the
normal velocity in sections $AF$ and $DE$. Then, if we choose the
normal velocity to have norm 1, the tangent velocity will explode
when $h\to\infty$.
This means that the family of homotopies $C_{h}$
will not be compact in $H^{1}(I\to\mathbb{R}^{2})$.
The first objection that comes to mind when reading
the above is “this example is not showing any problem with
the curve itself, it is just giving problems with the
parameterization of the curve”.
Indeed we may reparameterize the curves so that
the tangent velocity will not explode
when $h\to\infty$: by using 3.10,
we obtain that $\pi_{\tilde{T}}\partial_{v}\tilde{C}=0$.
Remembering remark 3.12, we understand that
this is not going to help, though. So we point out this other problem.
Let $\lambda=\lambda(v,h)$ be the distance from feature point $D$ to feature
point $E$. Then $\partial_{v}\lambda\to\infty$ if $h\to\infty$.
5 Conformal metrics
We recall at this point that the well known geometric heat flow
($C_{t}=C_{ss}$) is truly the gradient descent for the Euclidean
arclength of a curve with respect to the $H^{0}$ metric.
(Recall that we write $C_{v}$ for $\partial_{v}C$, and so on,
to simplify the derivations.)
Unfortunately, given the pathologies encountered thus far with $H^{0}$, we
see that this famous flow is not a gradient flow with respect to
a well behaved Riemannian metric. If we propose a different metric,
the new gradient descent flow for the Euclidean arclength of a curve
will of course be entirely different.
For example, the metric (2.7) proposed
by [MM] yields the following gradient flow for arclength:
$$C_{t}=\frac{C_{ss}}{1+A\,C_{ss}\cdot C_{ss}}=\frac{\kappa}{1+A\kappa^{2}}N$$
(5.1)
Notice that the normal speed in (5.1) is not
monotonic in the curvature; therefore the flow
(5.1) will not
share the nice properties of the geometric heat flow
($\partial_{t}C=\partial_{ss}C$). For example, embedded curves
do not always remain embedded under this new flow, as illustrated
in Fig. 5.
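The non-monotonicity of the normal speed $\kappa/(1+A\kappa^{2})$ is elementary: it increases up to $\kappa=1/\sqrt{A}$ and decreases afterwards. A symbolic check (sympy sketch):

```python
import sympy as sp

kappa, A = sp.symbols('kappa A', positive=True)
speed = kappa / (1 + A*kappa**2)
d = sp.together(sp.diff(speed, kappa))
# the sign of the derivative is the sign of 1 - A*kappa^2:
assert sp.simplify(d - (1 - A*kappa**2)/(1 + A*kappa**2)**2) == 0
# increasing below kappa = 1/sqrt(A), decreasing above: not monotonic
low  = d.subs(kappa, 1/(2*sp.sqrt(A)))
high = d.subs(kappa, 2/sp.sqrt(A))
assert sp.simplify(low).is_positive and sp.simplify(high).is_negative
```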
Given the pathologies of $H^{0}$ we have no choice but to propose a new
metric if we wish to construct a well behaved Riemannian geometry on
the space of curves. However, we may seek a new metric whose gradient
structure is as similar as possible to that of the $H^{0}$ metric. In
particular, for any functional $E:M\to\mathbb{R}$ we may ask that the
gradient flow of $E$ with respect to our new metric be related to
the gradient flow of $E$ with respect to $H^{0}$ by only a time
reparameterization. In other words, if $C(t)$ represents a gradient
flow trajectory according to $H^{0}$ and if $\hat{C}(t)$ represents the
gradient flow trajectory according to our proposed new metric, then
we wish that
$$\hat{C}(t)=C(f(t))$$
for some positive time reparameterization $f:\mathbb{R}\to\mathbb{R}$, $\dot{f}>0$.
The resulting gradient flows will then be related as follows.
$$\hat{C}_{t}=\dot{f}(t)\,C_{t}$$
(5.2)
The only class of new metrics that satisfy (5.2) consists of
conformal modifications of the original $H^{0}$ metric, which we
will denote by $H^{0}_{\phi}$. Such metrics
are completely defined by combining the original $H^{0}$ metric with a
positive conformal factor $\phi:M\to\mathbb{R}$, where $\phi(c)>0$ may
depend upon the curve $c$. The relationship between the inner products
is given as follows.
$$\big\langle h_{1},h_{2}\big\rangle_{H^{0}_{\phi}}=\phi(c)\,\big\langle h_{1},h_{2}\big\rangle_{H^{0}}$$
(5.3)
Note that for any energy functional $E$ of curves $C(t)$ we have
the following equivalent expressions, where the first and last expressions
are by definition of the gradient and the middle expression comes from
the definition (5.3) of a conformal metric.
$$\frac{d}{dt}E(C(t))=\Big\langle\frac{\partial C}{\partial t},\underbrace{\nabla^{\phi}E(C)}_{\text{conformal gradient}}\Big\rangle_{H^{0}_{\phi}}=\phi\,\Big\langle\frac{\partial C}{\partial t},\nabla^{\phi}E(C)\Big\rangle_{H^{0}}=\Big\langle\frac{\partial C}{\partial t},\underbrace{\nabla E(C)}_{\text{original gradient}}\Big\rangle_{H^{0}}$$
(5.4)
We see from (5.4) that the
conformal gradient differs only in magnitude from the original
$H^{0}$ gradient
$$\nabla^{\phi}E=\frac{1}{\phi}\nabla E$$
and therefore the conformal gradient flow differs only in speed
from the $H^{0}$ gradient flow.
$$\frac{\partial C}{\partial t}=-\nabla^{\phi}E(C)=-\frac{1}{\phi(C)}\nabla E(C)$$
As such, and as we desired, the
solutions differ only by a time reparameterization $f$ given by
$$\dot{f}=\frac{1}{\phi(C)}$$
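The time-reparameterization relationship can be illustrated on a one-dimensional toy gradient flow (the factor $\phi(x)=1+x^{2}$ and the data below are illustrative choices, not taken from the text): the conformal flow $\dot{\hat{x}}=-\hat{x}/\phi(\hat{x})$ of $E(x)=x^{2}/2$ agrees with $x(f(t))$, where $x(t)=x_{0}e^{-t}$ is the original $H^{0}$-type flow and $\dot{f}=1/\phi(x(f))$.

```python
import numpy as np

phi = lambda x: 1.0 + x*x           # illustrative positive conformal factor
x0, T, n = 2.0, 1.0, 20000
dt = T / n

xhat, f = x0, 0.0                   # conformal-flow state and time reparameterization
for _ in range(n):                  # explicit Euler; the step size is small enough here
    xhat += dt * (-xhat / phi(xhat))
    f    += dt * (1.0 / phi(x0 * np.exp(-f)))

# the conformal trajectory is the original trajectory, run on the rescaled clock
assert abs(xhat - x0 * np.exp(-f)) < 1e-3
```

The explicit Euler scheme is crude but sufficient here; the assertion checks $\hat{x}(T)\approx x(f(T))$.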
The obvious question now is how to choose the conformal factor.
A first suggestion is given by the following proposition.
Proposition 5.1
(We thank Prof. Mumford for suggesting this result.)
Suppose that
$$\min_{c}\big(\phi(c)/\operatorname{len}(c)\big)=a>0$$
(5.5)
Consider a homotopy $C$ connecting two curves
$c_{0}=C(\cdot,0)$, $c_{1}=C(\cdot,1)$,
and its $H^{0}_{\phi}$–energy
$$\int_{0}^{1}\phi(C(\cdot,v))\int_{S^{1}}|\pi_{N}\partial_{v}C|^{2}\,|\dot{C}|\,d\theta\,dv$$
Up to reparameterization 3.9,
$|\dot{C}(\theta,v)|=\operatorname{len}(C(\cdot,v))/2\pi$, so we can
rewrite the energy (using (4.12)) as
$$\int_{0}^{1}\frac{2\pi\,\phi(C(\cdot,v))}{\operatorname{len}(C(\cdot,v))}\int_{S^{1}}|\partial_{v}C\times\dot{C}|^{2}\,d\theta\,dv\geq 2\pi a\int_{0}^{1}\int_{S^{1}}|\partial_{v}C\times\dot{C}|^{2}\,d\theta\,dv\geq a\left(\int_{0}^{1}\int_{S^{1}}|\partial_{v}C\times\dot{C}|\,d\theta\,dv\right)^{2}$$
The rightmost term is the square of the area swept by the
homotopy.
So, if $c_{0}\neq c_{1}$ and there does not exist a homotopy
connecting $c_{0},c_{1}$ with zero area, then $d(c_{0},c_{1})>0$.
Although
we already know that the $H^{0}$ metric is not very useful, we may obtain
a lot of insight into how to choose the conformal factor $\phi$ by
observing the structure of the minimizing flow (which turns out to
be unstable) for the $H^{0}$ energy in the space of homotopies. We will
then try to choose the conformal factor in order to counteract the
unstable elements of the $H^{0}$ flow.
5.1 The Unstable $H^{0}$ Flow
5.1.1 Geometric parameters $s$ and ${v_{*}}$
We have denoted by $u\in[0,1]$ a parameter which traces out
each curve in a parameterized homotopy $C(u,v)$ and we have denoted
by $v\in[0,1]$ the parameter which moves us from curve to curve
along the homotopy. Note that both of these parameters are arbitrary
and not unique to the geometry of the curves comprising the homotopy.
We now wish to construct more geometric parameters for the homotopy
which will yield a more meaningful and intuitive expression for the
minimizing flow we are about to derive. The most natural substitute
for the curve parameter $u$ is the arclength parameter $s$. We must
also address the parameter $v$, however. While
$v$ as a parameter ranging from 0 to 1 seems to have little to do
with the arbitrary choice of the curve parameter $u$,
the differential operator $\frac{\partial}{\partial v}$ depends heavily upon this
prior choice. The desired effect of differentiating along the homotopy
is mixed with the undesired effect of differentiating along the contour
if flowing along corresponding values of $u$ between curves in the
homotopy requires some motion along the tangent direction.
To see the dependence of $\frac{\partial}{\partial v}$ on $u$, note that $C(u,v)$ and
$\hat{C}(u,v)$ where
$$\hat{C}(u,v)=C\left(u^{(1+v)},v\right)$$
constitute the same homotopy geometrically,
and yet $\frac{\partial C}{\partial v}\neq\frac{\partial\hat{C}}{\partial v}$.
We will therefore introduce the more geometric parameter ${v_{*}}$ whose
corresponding differential operator $\frac{\partial}{\partial{v_{*}}}$ yields the most
efficient transport from one curve to another curve along the homotopy
regardless of “correspondence” between values of the curve parameters.
It is clear that such a transport must always move in the normal direction
to the underlying curve, since tangential motion along any curve does not
contribute to movement along the homotopy. More precisely, we define
the parameters $s$ and ${v_{*}}$ in terms of $u$ and $v$ as follows.
$$\frac{\partial}{\partial s}=\frac{1}{\|C_{u}\|}\frac{\partial}{\partial u}\qquad\mbox{and}\qquad\frac{\partial}{\partial{v_{*}}}=\frac{\partial}{\partial v}-\big(C_{v}\cdot C_{s}\big)\frac{\partial}{\partial s}$$
5.1.2 $H^{0}$ Minimizing Flow
Suppose we now consider a time varying family of
homotopies $C(u,v,t):[0,1]\times[0,1]\times(0,\infty)\to\mathbb{R}^{n}$
and compute the derivative of the $H^{0}$ energy along
this family. Note that the $H^{0}$ energy, in terms of
the new parameters $s$ and ${v_{*}}$ may be simply expressed
as follows (since $\pi_{N}C_{{v_{*}}}=C_{{v_{*}}}$).
$$E(t)=\int_{0}^{1}\int_{0}^{L}\big{\|}C_{{v_{*}}}\big{\|}^{2}\,ds\,dv$$
(5.6)
In the appendix, we show that the derivative of $E$ may be
expressed as follows.
$$E^{\prime}(t)=-2\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big(C_{{v_{*}}{v_{*}}}-(C_{{v_{*}}{v_{*}}}\cdot C_{s})C_{s}-(C_{{v_{*}}}\cdot C_{ss})C_{{v_{*}}}+\frac{1}{2}\|C_{v_{*}}\|^{2}C_{ss}\Big)\,ds\,dv$$
(5.7)
In the planar case, $C_{{v_{*}}}$ and $C_{ss}$ are linearly dependent
(as both are orthogonal to $C_{s}$) which means that
$$(C_{{v_{*}}}\cdot C_{ss})C_{{v_{*}}}=(C_{{v_{*}}}\cdot C_{{v_{*}}})C_{ss}=\|C_{v_{*}}\|^{2}C_{ss}$$
(5.8)
and therefore
$$E^{\prime}(t)=-2\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big(\big(C_{{v_{*}}{v_{*}}}-(C_{{v_{*}}{v_{*}}}\cdot C_{s})C_{s}\big)-\frac{1}{2}\|C_{v_{*}}\|^{2}C_{ss}\Big)\,ds\,dv$$
(5.9)
by which we derive the minimization flow
$$C_{t}=C_{{v_{*}}{v_{*}}}-(C_{{v_{*}}{v_{*}}}\cdot C_{s})C_{s}-\frac{1}{2}\|C_{v_{*}}\|^{2}C_{ss}$$
which is geometrically equivalent to the following simpler
flow (obtained by adding a tangential component):
$$C_{t}=C_{{v_{*}}{v_{*}}}-\frac{1}{2}\|C_{v_{*}}\|^{2}C_{ss}$$
(5.10)
Note that the flow (5.10) consists of two orthogonal
diffusion terms. The first term $C_{{v_{*}}{v_{*}}}$ is stable as it
represents a forward diffusion along the homotopy,
while the second term $-\|C_{v_{*}}\|^{2}C_{ss}$ is an unstable
backward diffusion term along each curve.
Indeed, numerical experiments show a behaviour that parallels
the phenomenon described in §4.3.1.
5.2 Conformal Versions of $H^{0}$
We now define the conformal $H^{0}_{\phi}$ energy (when the conformal
factor $\phi$ is a function of the arclength $L$ of each curve) as
$$E_{\phi}(t)=\int_{0}^{1}\phi(L)\int_{0}^{L}\big\|C_{{v_{*}}}\big\|^{2}\,ds\,dv$$
(5.11)
Once again we compute (in the appendix) the derivative of this
energy along a time varying family of homotopies $C(u,v,t)$.
$$E_{\phi}^{\prime}(t)=-\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big(2\phi^{\prime}L_{{v_{*}}}C_{{v_{*}}}+2\phi C_{{v_{*}}{v_{*}}}-2\phi(C_{{v_{*}}{v_{*}}}\cdot C_{s})C_{s}-2\phi(C_{{v_{*}}}\cdot C_{ss})C_{{v_{*}}}+(\phi m+\phi^{\prime}M)C_{ss}\Big)\,ds\,dv$$
where
$$m=\|C_{v_{*}}\|^{2}\qquad\mbox{and}\qquad M=\int_{0}^{L}m\,ds=\int_{0}^{L}\|C_{v_{*}}\|^{2}\,ds.$$
As before, we now consider the planar case in
which $C_{{v_{*}}}$ and $C_{ss}$ are linearly dependent
and therefore $(C_{{v_{*}}}\cdot C_{ss})C_{{v_{*}}}=mC_{ss}$,
yielding
$$E_{\phi}^{\prime}(t)=-2\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big(\phi\big(C_{{v_{*}}{v_{*}}}-(C_{{v_{*}}{v_{*}}}\cdot C_{s})C_{s}\big)+\phi^{\prime}L_{{v_{*}}}C_{{v_{*}}}+\frac{1}{2}(\phi^{\prime}M-\phi m)C_{ss}\Big)\,ds\,dv$$
(5.12)
from which we obtain the following minimizing flow
$$C_{t}=\phi\big(C_{{v_{*}}{v_{*}}}-(C_{{v_{*}}{v_{*}}}\cdot C_{s})C_{s}\big)+\phi^{\prime}L_{{v_{*}}}C_{{v_{*}}}+\frac{1}{2}(\phi^{\prime}M-\phi m)C_{ss}$$
which is geometrically equivalent (by adding a tangential
term) to
$$C_{t}=\phi C_{{v_{*}}{v_{*}}}+\phi^{\prime}L_{{v_{*}}}C_{{v_{*}}}+\frac{1}{2}(\phi^{\prime}M-\phi m)C_{ss}$$
(5.13)
5.2.1 Stable conformal factor
To stabilize the flow in the last equation, we look for a $\phi$ such that
$$\phi^{\prime}M-\phi m\geq 0\qquad\mbox{for all }(s,{v_{*}})$$
(5.14)
or (assuming $M\neq 0$)
$$\frac{\phi^{\prime}}{\phi}=(\log\phi)^{\prime}\geq\frac{m}{M}\qquad\mbox{for all }(s,{v_{*}})$$
(5.15)
One way to satisfy this is to choose
$$(\log\phi)^{\prime}=\max_{s,{v_{*}}}\frac{m}{M}\doteq\lambda$$
(5.16)
giving us
$$\phi=e^{\lambda L}$$
(5.17)
yielding the following flow of homotopies (up to an overall positive factor, which only rescales time):
$$C_{t}=e^{\lambda L}\Big(2C_{{v_{*}}{v_{*}}}+2\lambda L_{{v_{*}}}C_{{v_{*}}}+(\lambda M-m)C_{ss}\Big)$$
(5.18)
The choice $\phi=e^{\lambda L}$
satisfies (5.5); it agrees
also with the discussion in §5.4,
which hints that the energy $E(C)$ associated to the $H^{0}_{\phi}$ metric
may be lower semicontinuous when
$\phi(c)\geq\operatorname{len}(c)$.
The above conformal metric does not entail a unique Riemannian metric
on the space of curves: indeed, the choice of $\lambda$ depends on
the homotopy itself.
In the numerical experiments shown in this paper, we chose $\lambda$
to satisfy (5.16) at time $t=0$ and found that
this was enough to stabilize the flow up to convergence. However,
we have no mathematical proof of this phenomenon.
5.3 Numerical results
We note that the minimizing flow (5.13) consists of
two stable diffusion terms and a transport term. As such, we have
the option to utilize level set methods in the implementation of
(5.13).
We represent the evolving homotopy $C(u,v,t)$ as an evolving surface
$S(u,v,t)$
$$S(u,v,t)=\big(C(u,v,t),v\big)$$
We then perform a Level Set Embedding of this surface into a 4D scalar
function $\psi$ such that
$$\psi\big{(}C(u,v,t),v,t\big{)}=0.$$
The goal is now to determine an evolution for $\psi$ which yields
the evolution (5.13) for the level sets of each of
its 2D cross-sections. Differentiating,
$$\frac{d}{dt}\Big(\psi\big(x(u,v,t),y(u,v,t),v,t\big)=0\Big)\quad\longrightarrow\quad\psi_{t}+\nabla\psi\cdot C_{t}=0$$
where $\nabla\psi=(\psi_{x},\psi_{y})$ denotes the 2D spatial gradient of each
2D cross-section of $\psi$, and substituting (5.13),
noting that $N=\nabla\psi/\|\nabla\psi\|$, yields the corresponding
Level Set Evolution.
$$\psi_{t}=\psi_{vv}-\frac{2\psi_{v}}{\|\nabla\psi\|^{2}}(\nabla\psi_{v}\cdot\nabla\psi)+\frac{\psi_{v}^{2}}{\|\nabla\psi\|^{4}}\big(\nabla^{2}\psi\,\nabla\psi\big)\cdot\nabla\psi-\frac{1}{2}\left(\frac{\psi_{v}^{2}}{\|\nabla\psi\|^{2}}-\lambda\int_{0}^{L}\frac{\psi_{v}^{2}}{\|\nabla\psi\|^{2}}\,ds\right)\nabla\cdot\left(\frac{\nabla\psi}{\|\nabla\psi\|}\right)\|\nabla\psi\|+\lambda L_{v}\psi_{v}$$
Note that for simplicity we have dropped the factor
$e^{\lambda L}$ from (5.13) since we are guaranteed
that this factor is always positive. As a result, we do not change
the steady-state of the flow by omitting this factor.
If we numerically compute the geodesic between the two curves $c_{0},c_{1}$ in Figure
6,
we obtain the geodesic represented by slices in
Figure 7 and as a surface in Figure 8.
5.4 Example
This example shows why we think that the
conformal energy $E(C)$ may be lower semicontinuous (on planar curves)
in the case when $\phi(c)\geq\operatorname{len}(c)$.
Fix $0<\varepsilon<1/2$ and $\lambda\geq 0$.
Suppose that
$$c(u)=\begin{cases}(u,u\lambda)&\text{if }u\in[0,\varepsilon]\\(u,(2\varepsilon-u)\lambda)&\text{if }u\in[\varepsilon,2\varepsilon]\\(u,0)&\text{if }u\in[2\varepsilon,1]\end{cases}$$
(5.19)
is the curve in figure 9.
We define the homotopy $\tilde{C}:[0,1]\times[0,1]\to\mathbb{R}^{2}$ by
$$\tilde{C}(u,v)=c(u)+(0,v)$$
and let $C(u,v)=(u,v)$ be the identity.
We may tessellate, as explained in point
3 in 4.3.1, to
build a sequence of homotopies $C_{h}$ such that
$C_{h}\to_{h}C$ in $L^{\infty}$,
$$\partial_{u,v}C_{h}\rightharpoonup^{*}_{h}\partial_{u,v}C\ \mbox{weakly* in }L^{\infty},$$
and $E(C_{h})=E(\tilde{C})$.
Now we compute
$$\operatorname{len}(c)=1+2\varepsilon(\sqrt{1+\lambda^{2}}-1)=1+2\varepsilon\alpha$$
where we define $\alpha=\sqrt{1+\lambda^{2}}-1$ for convenience.
Note that $\alpha\geq 0$.
We compute the energy (using the identity (4.12)):
$$E(C_{h})=E(\tilde{C})=\int_{0}^{1}\int_{0}^{1}\frac{\phi(\tilde{C})}{|\partial_{u}\tilde{C}|}\,du\,dv=(1-2\varepsilon)\,\phi(\tilde{C})+2\varepsilon\,\frac{\phi(\tilde{C})}{\sqrt{1+\lambda^{2}}}=\phi(\tilde{C})\left(1-2\varepsilon+\frac{2\varepsilon}{\alpha+1}\right)=\phi(\tilde{C})\left(1-2\varepsilon\frac{\alpha}{\alpha+1}\right)\geq(1+2\varepsilon\alpha)\left(1-2\varepsilon\frac{\alpha}{\alpha+1}\right)=1+2\varepsilon\left(\alpha-\frac{\alpha}{\alpha+1}-\frac{2\varepsilon\alpha^{2}}{\alpha+1}\right)=1+2\varepsilon\,\frac{\alpha^{2}-2\varepsilon\alpha^{2}}{\alpha+1}=1+2\varepsilon\alpha^{2}\,\frac{1-2\varepsilon}{\alpha+1}\geq 1=E^{N}(C)$$
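The algebraic chain above can be double-checked symbolically (a sympy sketch of the two identities used):

```python
import sympy as sp

eps, alpha = sp.symbols('varepsilon alpha', positive=True)

# E(C_h)/phi = 1 - 2*eps + 2*eps/(alpha+1), since 1/sqrt(1+lambda^2) = 1/(alpha+1)
factor = 1 - 2*eps + 2*eps/(alpha + 1)
assert sp.simplify(factor - (1 - 2*eps*alpha/(alpha + 1))) == 0

# with phi >= len(c) = 1 + 2*eps*alpha, the borderline case phi = len(c) gives
E = (1 + 2*eps*alpha) * factor
target = 1 + 2*eps*alpha**2*(1 - 2*eps)/(alpha + 1)
assert sp.simplify(E - target) == 0
# and target >= 1 whenever eps < 1/2, i.e. E(C_h) >= 1 = E^N(C)
```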
Appendix A More on Finsler metrics
We now provide two more results
on Finsler metrics 1.2, for the convenience of the reader.
The first result explains the relationship between the
length functional $\operatorname{len}(\xi)$ and the energy functional $E(\xi)$.
Proposition A.1
Fix $x,y$ in the following, and let ${\mathcal{A}}$ be
the class of all locally Lipschitz paths $\gamma:[0,1]\to M$
connecting $x$ to $y$.
These are known properties of the length and the energy.
•
If $\xi,\gamma$ are locally Lipschitz and $\phi$ is a monotone
continuous function such that $\xi=\gamma\circ\phi$
and $\phi(0)=0,\phi(1)=1$ then
$\operatorname{Len}\gamma=\operatorname{Len}\xi$.
•
In general, by the Hölder inequality, $E(\gamma)\geq\operatorname{Len}(\gamma)^{2}$.
•
If $\gamma$ provides a minimum of $\min_{\mathcal{A}}E(\gamma)$, then it
is also a minimum of $\min_{\mathcal{A}}\operatorname{Len}(\gamma)$ in the same class,
$E(\gamma)=\operatorname{Len}(\gamma)^{2}$, and moreover $|\dot{\gamma}(t)|_{\gamma(t)}$ is constant in $t$, that is, $\gamma$ has
constant velocity.
•
If $\xi$ provides a minimum of $\min_{\mathcal{A}}\operatorname{Len}(\gamma)$, then
there exist a monotone continuous function $\phi$ and a path
$\gamma$ such that $\xi=\gamma\circ\phi$, and $\gamma$ is a
minimum of $\min_{\mathcal{A}}E(\gamma)$. (In writing
$\xi=\gamma\circ\phi$ we, in a sense, define $\gamma$ by a
pullback of $\xi$: see 2.29 in [Men]. Note
that we could not write, in general, $\gamma=\xi\circ\phi^{-1}$: indeed, it is possible that a minimum of $\min_{\mathcal{A}}\operatorname{Len}(\gamma)$ may stay still for an interval of time; that is,
we must allow for the case when $\phi$ is not invertible.)
Proof. By 2.27, 2.42, 3.8, 3.9 in [Men] and
4.2.1 in [AT00].
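The inequality $E(\gamma)\geq\mathop{\operator@font Len}\nolimits(\gamma)^{2}$ from the second item (Cauchy–Schwarz on $[0,1]$) can be illustrated numerically; the parabola below is a hypothetical test curve of our choosing, and the midpoint-rule quadrature is only a sketch.

```python
import math

# Test curve gamma(t) = (t, t^2) on [0,1]; |gamma'(t)| = sqrt(1 + 4 t^2).
def speed(t):
    return math.sqrt(1 + 4 * t * t)

n = 100000
h = 1.0 / n
length = sum(speed((i + 0.5) * h) for i in range(n)) * h       # Len = Int |gamma'|
energy = sum(speed((i + 0.5) * h) ** 2 for i in range(n)) * h  # E = Int |gamma'|^2
assert energy >= length ** 2
```

Equality would require $|\gamma'|$ constant, consistent with the third item: energy minimizers have constant velocity.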
This second result extends the Hopf–Rinow Theorem
1.4 to the case of generic metric spaces.
Theorem A.2 (Hopf-Rinow)
Suppose that the metric space
$(M,d)$ is locally compact and path-metric; then the following are equivalent:
•
the metric space $(M,d)$ is complete,
•
closed bounded sets are compact;
and either condition implies that any two points can be connected by a minimal-length
geodesic.
A proof is in §1.11 and §1.12 in Gromov’s [Gro99],
or in Theorem 1.2 in [Men] (which holds
also in the asymmetric case).
Appendix B Proofs of §2.1.4
Let $Z$ be the set of all $\theta\in L^{2}([0,2\pi])$ such that
$\theta(s)=a+k(s)\pi$ where $k(s)\in{\mathbb{Z}}$ is
measurable, $(a=2\pi-\int k)$,
and
$$|\{k(s)=0\ \mathrm{mod}\ 2\}|=|\{k(s)=1\ \mathrm{mod}\ 2\}|=\pi$$
$Z$ is closed (by thm. 4.9 in [Bre86]).
We see that $Z$ contains the (representations $\theta$ of) flat curves
$\xi$, that is, curves $\xi$ whose image is contained in a line;
one such curve is
$$\xi_{1}(s)=\xi_{2}(s)=\begin{cases}s/\sqrt{2}&s\in[0,\pi]\\
(2\pi-s)/\sqrt{2}&s\in(\pi,2\pi]\end{cases},\qquad\theta=\begin{cases}\pi/2&s\in[0,\pi]\\
3\pi/2&s\in(\pi,2\pi]\end{cases}$$
We provide here the proof of
2.4: $M\setminus Z$ is a manifold.
Proof. Suppose by contradiction that
$\nabla\phi_{1},\nabla\phi_{2},\nabla\phi_{3}$
are linearly dependent at $\theta\in M$,
that is, there exists $a\in\mathbb{R}^{3}$, $a\neq 0$, such that
$$a_{1}\cos(\theta(s))+a_{2}\sin(\theta(s))+a_{3}=0$$
for almost all $s$; then, by integrating, $a_{3}=0$,
therefore $a_{1}\cos(\theta(s))+a_{2}\sin(\theta(s))=0$, which means
that $\theta\in Z$.
See also §3.1 in [KSMJ03].
This is the proof of 2.5.
Proof.
Fix $\theta_{0}\in M\setminus Z$. Let $T=T_{\theta_{0}}M$ be the
tangent at $\theta_{0}$. $T$ is the vector space orthogonal to
$\nabla\phi_{i}(\theta_{0})$ for $i=1,2,3$.
Let $e_{i}=e_{i}(s)\in L^{2}\cap C^{\infty}_{c}$
be near $\nabla\phi_{i}(\theta_{0})$ in $L^{2}$, so that the map
$(x,y):T\times\mathbb{R}^{3}\to L^{2}$
$$(x,y)\mapsto\theta=\theta_{0}+x+\sum_{i=1}^{3}e_{i}y_{i}$$
(B.1)
is an isomorphism.
Let $M^{\prime}$ be $M$ in these coordinates; by the Implicit Function
Theorem (5.9 in [Lan99]), there exists an open set
$U^{\prime}\subset T$, $0\in U^{\prime}$, an open $V^{\prime}\subset\mathbb{R}^{3}$, $0\in V^{\prime}$, and a smooth function $f:U^{\prime}\to\mathbb{R}^{3}$ such that the local
part $M^{\prime}\cap(U^{\prime}\times V^{\prime})$ of the manifold $M^{\prime}$ is the graph of
$y=f(x)$.
We immediately define a smooth projection $\pi^{\prime}:U^{\prime}\times V^{\prime}\to M^{\prime}$
by setting $\pi^{\prime}(x,y)=(x,f(x))$; this may be expressed in
$L^{2}$; let $(x(\theta),y(\theta))$ be the inverse of
(B.1) and $U=x^{-1}(U^{\prime})$; we define the
projection $\pi:U\to M$ by setting
$$\pi(\theta)=\theta_{0}+x(\theta)+\sum_{i=1}^{3}e_{i}f_{i}(x(\theta))$$
Then
$$\pi(\theta)(s)-\theta(s)=\sum_{i=1}^{3}e_{i}(s)a_{i}\,,\qquad a_{i}:=f_{i}(x(\theta))-y_{i}\in\mathbb{R}$$
(B.2)
so if $\theta(s)$ is smooth, then $\pi(\theta)(s)$ is smooth.
Let $\theta_{n}$ be smooth functions such that $\theta_{n}\to\theta$ in $L^{2}$; then $\pi(\theta_{n})\to\pi(\theta)=\theta$; if we choose
them to satisfy $\theta_{n}(2\pi)-\theta_{n}(0)=2\pi h$, then, by
the formula (B.2), $\pi(\theta_{n})(2\pi)-\pi(\theta_{n})(0)=2\pi h$, so that
$\pi(\theta_{n})\in M$ and it represents a smooth curve with the
assigned rotation index $h$.
Appendix C $E^{N}$ is ill-posed
The following result C.1
was inspired by the description
of a similar phenomenon found on page 16 of the
slides [Mum] of D. Mumford:
it is possible to connect the two segments
$c_{0}(u)=(u,0)$ and $c_{1}(u)=(u,1)$ with a family of
homotopies $C_{k}:[0,1]\times[0,1]\to\mathbb{R}^{2}$ such that $E^{N}(C_{k})\to_{k}0$.
We represent the idea in figure 10.
We use the above idea to show that the distance induced
by $E^{N}$ is zero.
(Footnote: we have recently discovered an identical proposition in
[MM]; we nevertheless propose this proof, since it is more
detailed.)
Proposition C.1 ($E^{N}$ is ill-posed)
Fix $c_{0}$ and $c_{1}$ to be two regular curves.
For all $\epsilon>0$, there is a homotopy $C$
connecting $c_{0}$ to $c_{1}$ such that $E^{N}(C)<\epsilon$.
1.
To start, suppose that $c_{1}$ is contained in the surface of a
sphere, that is, $|c_{1}|=1$ is constant. Suppose also that
$|\dot{c}_{1}|=1$.
Consider the linear
interpolant, from the origin to $c_{1}$:
$$C(\theta,v)=vc_{1}(\theta)$$
The image of this homotopy is a cone.
We want to play a bad trick on the linear interpolant: we define
a homotopy whose image is the cone, but that moves points with
different speeds and times. Let $\epsilon=\pi/k$ in the following;
we define the sawtooth $Z:S^{1}\to[0,\epsilon]$
$$Z(\theta)=\begin{cases}\theta&\text{if }\theta\in[0,\epsilon]\\
2\epsilon-\theta&\text{if }\theta\in[\epsilon,2\epsilon]\\
\theta-2\epsilon&\text{if }\theta\in[2\epsilon,3\epsilon]\\
4\epsilon-\theta&\text{if }\theta\in[3\epsilon,4\epsilon]\\
\cdots\end{cases}$$
(note that $Z(\theta)+Z(\theta+\epsilon)=\epsilon$ and
$Z(\theta)=Z(-\theta)$).
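These two sawtooth properties can be verified numerically; the sketch below is ours, and the choice $k=7$ (hence $\epsilon=\pi/7$) is arbitrary, for illustration only.

```python
import math

# Sawtooth of period 2*eps: rises 0 -> eps on [0, eps], falls back on [eps, 2*eps].
def Z(theta, eps):
    t = theta % (2 * eps)
    return t if t <= eps else 2 * eps - t

eps = math.pi / 7  # eps = pi/k with the arbitrary choice k = 7
for i in range(200):
    th = -5.0 + 0.05 * i
    assert abs(Z(th, eps) + Z(th + eps, eps) - eps) < 1e-9  # Z(t) + Z(t+eps) = eps
    assert abs(Z(th, eps) - Z(-th, eps)) < 1e-9             # Z(t) = Z(-t)
```

The first property is what makes the two halves $v\in[0,1/2]$ and $v\in[1/2,1]$ of the homotopy match at $v=1/2$.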
Let
$$C_{k}(\theta,v)=c_{1}(\theta)\frac{2v}{\epsilon}Z(\theta)$$
for
$v\in[0,1/2]$, and
$$\displaystyle C_{k}(\theta,v)=c_{1}(\theta)\left(1-2\frac{1-v}{\epsilon}Z(%
\theta+\epsilon)\right)$$
for $v\in[1/2,1]$.
The energy $E(C_{k})$ splits into two parts in $v$ and into
$2k$ equal parts in $\theta$, so we compute the energy only on
two regions, and then multiply by $2k$.
•
In region $\theta\in[0,\epsilon]$ $v\in[0,1/2]$, we have
$$C_{k}(\theta,v)=c_{1}(\theta)2\frac{1}{\epsilon}v\theta$$
$$\partial_{v}C_{k}=c_{1}(\theta)2\frac{1}{\epsilon}\theta$$
$$\partial_{\theta}C_{k}=\dot{c}_{1}(\theta)2\frac{v}{\epsilon}\theta+c_{1}(\theta)2\frac{v}{\epsilon}$$
and
$$|\partial_{\theta}C_{k}|^{2}=4v^{2}\frac{1}{\epsilon^{2}}(\theta^{2}+1)$$
Since
$$|\pi_{N}v|^{2}=|v-\langle v,T\rangle T|^{2}=|v|^{2}-(\langle v,T\rangle)^{2}$$
then
$$\displaystyle|\pi_{N}\partial_{v}C_{k}|^{2}=\left|\frac{2}{\epsilon}\theta\pi_%
{N}c_{1}(\theta)\right|^{2}=\frac{4}{\epsilon^{2}}\theta^{2}\Big{(}|c_{1}(%
\theta)|^{2}-(\langle T,c_{1}(\theta)\rangle)^{2}\Big{)}=$$
$$\displaystyle=\frac{4}{\epsilon^{2}}\theta^{2}\left(1-\langle\partial_{\theta}%
C,c_{1}(\theta)\rangle^{2}\frac{1}{|\partial_{\theta}C|^{2}}\right)=$$
$$\displaystyle=\frac{4}{\epsilon^{2}}\theta^{2}\left(1-\left\langle c_{1}(%
\theta)2\frac{v}{\epsilon},c_{1}(\theta)\right\rangle^{2}\frac{1}{|\partial_{%
\theta}C|^{2}}\right)=\frac{4}{\epsilon^{2}}\theta^{2}\left(1-4\frac{1}{%
\epsilon^{2}}v^{2}\frac{1}{|\partial_{\theta}C|^{2}}\right)=$$
$$\displaystyle=\frac{4}{\epsilon^{2}}\theta^{2}\left(1-\frac{1}{1+\theta^{2}}%
\right)=\frac{4}{\epsilon^{2}}\theta^{4}\frac{1}{1+\theta^{2}}$$
so that the energy for the part $v\in[0,1/2]$ of the homotopy is
$$\displaystyle E^{N}(C_{k})=4k\int_{0}^{1/2}\int_{0}^{\epsilon}\left|\pi_{N}%
\partial_{v}C_{k}\right|^{2}\left|\partial_{\theta}C_{k}\right|\leavevmode%
\nobreak\ d\theta\leavevmode\nobreak\ dv=$$
$$\displaystyle=4k\int_{0}^{1/2}\int_{0}^{\epsilon}\frac{4}{\epsilon^{2}}\theta^%
{4}\frac{1}{1+\theta^{2}}\left|\partial_{\theta}C_{k}\right|\leavevmode%
\nobreak\ d\theta\leavevmode\nobreak\ dv=$$
$$\displaystyle=16k\frac{1}{\epsilon^{2}}\int_{0}^{1/2}\int_{0}^{\epsilon}\theta%
^{4}\frac{1}{1+\theta^{2}}\sqrt{4v^{2}\frac{1}{\epsilon^{2}}(\theta^{2}+1)}%
\leavevmode\nobreak\ d\theta\leavevmode\nobreak\ dv=$$
$$\displaystyle=32k\frac{1}{\epsilon^{3}}\int_{0}^{1/2}\int_{0}^{\epsilon}\theta%
^{4}\frac{1}{1+\theta^{2}}v\sqrt{(\theta^{2}+1)}\leavevmode\nobreak\ d\theta%
\leavevmode\nobreak\ dv=$$
$$\displaystyle=32k\frac{1}{\epsilon^{3}}\frac{1}{8}\int_{0}^{\epsilon}\theta^{4%
}\frac{1}{\sqrt{(\theta^{2}+1)}}\leavevmode\nobreak\ d\theta\leq$$
$$\displaystyle\leq 4k\frac{1}{\epsilon^{3}}\frac{\epsilon^{5}}{5}=\frac{4}{5}%
\pi^{2}\frac{1}{k}$$
•
Similarly, in the region $\theta\in[0,\epsilon]$, $v\in[1/2,1]$, we have
$$C_{k}(\theta,v)=c_{1}(\theta)\left(1-2\frac{1-v}{\epsilon}(\epsilon-\theta)\right)$$
but we implicitly change variables $\theta\mapsto\theta-\epsilon$,
$v\mapsto v-1$
to write
$$C_{k}(\theta,v)=c_{1}(\theta)\left(1-2\frac{v}{\epsilon}\theta\right)$$
(this means that we will
integrate on $\theta\in[-\epsilon,0]$ $v\in[-1/2,0]$). Then
$$\partial_{v}C_{k}=-c_{1}(\theta)2\theta\frac{1}{\epsilon}$$
and
$$\partial_{\theta}C_{k}=\dot{c}_{1}(\theta)\left(1-2\frac{v}{\epsilon}\theta\right)-c_{1}(\theta)2\frac{v}{\epsilon}$$
and
$$|\partial_{\theta}C_{k}|^{2}=\left(1-2\frac{v}{\epsilon}\theta\right)^{2}+4%
\frac{v^{2}}{\epsilon^{2}}=\frac{1}{\epsilon^{2}}\Big{(}(\epsilon-2v\theta)^{2%
}+4v^{2}\Big{)}$$
$$\displaystyle|\pi_{N}\partial_{v}C_{k}|^{2}=\left|\frac{2}{\epsilon}\theta\pi_%
{N}c_{1}(\theta)\right|^{2}=\frac{4}{\epsilon^{2}}\theta^{2}\Big{(}|c_{1}(%
\theta)|^{2}-\langle T,c_{1}(\theta)\rangle^{2}\Big{)}=$$
$$\displaystyle=\frac{4}{\epsilon^{2}}\theta^{2}\left(1-\left\langle c_{1}(%
\theta)2\frac{v}{\epsilon},c_{1}(\theta)\right\rangle^{2}\frac{1}{|\partial_{%
\theta}C_{k}|^{2}}\right)=$$
$$\displaystyle=\frac{4}{\epsilon^{2}}\theta^{2}\left(1-4\frac{v^{2}}{\epsilon^{%
2}}\frac{1}{|\partial_{\theta}C_{k}|^{2}}\right)=\frac{4}{\epsilon^{2}}\theta^%
{2}\left(\frac{\left(\epsilon-2v\theta\right)^{2}}{(\epsilon-2v\theta)^{2}+4v^%
{2}}\right)$$
Note that
$$(\epsilon-2v\theta)^{2}+4v^{2}\geq\epsilon^{2}(1+2v)^{2}+4v^{2}\geq\frac{2%
\epsilon^{4}}{(1+\epsilon^{2})^{2}}$$
(C.1)
(the positive minimum is reached at $v=-\epsilon^{2}/(2+2\epsilon^{2}),\theta=-\epsilon$)
Since
$$\displaystyle|\pi_{N}\partial_{v}C_{k}|^{2}|\partial_{\theta}C_{k}|=\frac{4}{%
\epsilon^{2}}\theta^{2}\left(\frac{\left(\epsilon-2v\theta\right)^{2}}{(%
\epsilon-2v\theta)^{2}+4v^{2}}\right)\sqrt{\frac{1}{\epsilon^{2}}\Big{(}(%
\epsilon-2v\theta)^{2}+4v^{2}\Big{)}}=$$
$$\displaystyle=\frac{4}{\epsilon^{3}}\theta^{2}\left(\frac{\left(\epsilon-2v%
\theta\right)^{2}}{\sqrt{(\epsilon-2v\theta)^{2}+4v^{2}}}\right)=\frac{4}{%
\epsilon}\tau^{2}\frac{\epsilon^{2}(1-2v\tau)^{2}}{\sqrt{\epsilon^{2}(1-2v\tau%
)^{2}+4v^{2}}}$$
where $\tau=\theta/\epsilon$
so the energy for the part $v\in[1/2,1]$ of the homotopy
becomes
$$\displaystyle E^{N}(C_{k})=2k\int_{-1/2}^{0}\int_{-\epsilon}^{0}|\pi_{N}%
\partial_{v}C_{k}|^{2}|\partial_{\theta}C_{k}|\leavevmode\nobreak\ d\theta%
\leavevmode\nobreak\ dv=$$
$$\displaystyle=2k\int_{-1/2}^{0}\int_{-1}^{0}4\tau^{2}\frac{\epsilon^{2}(1-2v%
\tau)^{2}}{\sqrt{\epsilon^{2}(1-2v\tau)^{2}+4v^{2}}}\leavevmode\nobreak\ d\tau%
\leavevmode\nobreak\ dv=$$
$$\displaystyle=8\pi\int_{-1/2}^{0}\int_{-1}^{0}\tau^{2}\frac{(1-2v\tau)^{2}}{%
\sqrt{(1-2v\tau)^{2}+4v^{2}/\epsilon^{2}}}\leavevmode\nobreak\ d\tau%
\leavevmode\nobreak\ dv$$
since by (C.1) the integrand is continuous and
positive, and it decreases pointwise when $\epsilon\to 0$,
it follows that $E^{N}(C_{k})\to 0$ as $\epsilon\to 0$ (by the Beppo Levi lemma,
or the Lebesgue theorem).
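The two estimates above can be sanity-checked numerically. The following sketch (our own midpoint-rule quadrature, with arbitrary sample values of $k$) evaluates the region-one energy against the bound $\tfrac{4}{5}\pi^{2}/k$, and confirms that the region-two integral decreases as $\epsilon\to 0$.

```python
import math

def region1_energy(k, n=20000):
    # 32k/eps^3 * (1/8) * Int_0^eps theta^4 / sqrt(1 + theta^2) dtheta, eps = pi/k.
    eps = math.pi / k
    h = eps / n
    integral = sum(((i + 0.5) * h) ** 4 / math.sqrt(1 + ((i + 0.5) * h) ** 2)
                   for i in range(n)) * h
    return 4 * k / eps ** 3 * integral

def region2_energy(eps, n=120):
    # 8*pi * Int_{-1/2}^0 Int_{-1}^0 tau^2 (1-2 v tau)^2
    #        / sqrt((1-2 v tau)^2 + 4 v^2/eps^2) dtau dv
    hv, ht = 0.5 / n, 1.0 / n
    total = 0.0
    for i in range(n):
        v = -0.5 + (i + 0.5) * hv
        for j in range(n):
            tau = -1.0 + (j + 0.5) * ht
            q = (1 - 2 * v * tau) ** 2
            total += tau * tau * q / math.sqrt(q + 4 * v * v / eps ** 2)
    return 8 * math.pi * total * hv * ht

for k in (4, 8, 16):
    assert region1_energy(k) <= 0.8 * math.pi ** 2 / k + 1e-9  # the (4/5) pi^2/k bound
vals = [region2_energy(math.pi / k) for k in (2, 4, 8, 16)]
assert all(a > b for a, b in zip(vals, vals[1:]))              # decreases as eps -> 0
```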
2.
As a second step, consider a generic smooth curve $c_{1}$; we can
approximate it with a piecewise smooth curve $c_{1}^{\prime}$, where
each piece of $c_{1}^{\prime}$ is either contained in a sphere, or in a radius
exiting from $0$: by using the above homotopy on each spherical piece,
and translating and scaling each radial piece to $0$, we can
build a homotopy from $c_{1}^{\prime}$ to $0$ with small energy.
3.
Then, given two generic smooth curves $c_{0},c_{1}$, we can
approximate them as above to obtain $c_{0}^{\prime},c_{1}^{\prime}$, and build
a homotopy
$$c_{0}\to c_{0}^{\prime}\to 0\to c_{1}^{\prime}\to c_{1}$$
This final homotopy can be built with small energy.
Remark C.2
More generally, if $\alpha>\beta>0$, then the energy
$$\int_{I}|\pi_{N}\partial_{v}C|^{\alpha}|\dot{C}|^{\beta}$$
is ill-posed as well, as is shown in 4.3.1.
Proposition C.3 ($E$ is ill-posed)
Consider the energy $E(C)$ associated to the metric
(2.4).
Fix $c_{0}$ and $c_{1}$ to be two regular curves.
For all $\epsilon>0$, there is a homotopy $C\in\mathbb{C}$
connecting $c_{0}$ to $c_{1}$ such that $E(C)<\epsilon$.
Proof. Consider a homotopy defined as in the previous proposition; if we
mollify it,
we can obtain a regular homotopy $C^{\prime}$ such that
$\|C^{\prime}-C\|_{W^{1,3}}$ and $\|C^{\prime}-C\|_{\infty}$ are arbitrarily
small; then we can reconnect $C^{\prime}$ to $c_{0}$ and $c_{1}$ with a small
cost, to create $C^{\prime\prime}$: by some direct
computation, $E(C^{\prime\prime})<2\epsilon$.
By using prop. 3.8 on $C^{\prime\prime}$
we obtain a $\widetilde{C}$ such that $E^{N}(\widetilde{C})=E^{N}(C^{\prime\prime})$, and
since $\pi_{\widetilde{T}}\partial_{v}\widetilde{C}=0$,
$$E(\widetilde{C})=E^{N}(\widetilde{C})=E^{N}(C^{\prime\prime})\leq 2\epsilon$$
Appendix D Derivation of Flows
In this section we show the details of the calculations of the
minimizing flows for both the $H^{0}$ and conformal energies.
D.1 Some preliminary calculus
First we develop in the following subsections some of the calculus
that we will need to work with the geometric parameters $s$
and ${v_{*}}$ introduced in section 5.
D.1.1 Commutation of derivatives
Note that the parameters $s$ and ${v_{*}}$ do not form true coordinates
and therefore have a non-trivial commutator. The third parameter
$t$ will come into play later when we consider a time varying family of
homotopies $C(u,v,t)$ and take the resulting time derivative of either
the $H^{0}$ or the conformal energy along this family.
$$\displaystyle\frac{\partial}{\partial t}\,\frac{\partial}{\partial{v_{*}}}$$
$$\displaystyle=$$
$$\displaystyle\frac{\partial}{\partial t}\left(\frac{\partial}{\partial v}-%
\frac{C_{u}\cdot C_{v}}{C_{u}\cdot C_{u}}\,\frac{\partial}{\partial u}\right)$$
$$\displaystyle=$$
$$\displaystyle\frac{\partial}{\partial t}\,\frac{\partial}{\partial v}-\frac{C_%
{u}\cdot C_{v}}{C_{u}\cdot C_{u}}\,\frac{\partial}{\partial t}\,\frac{\partial%
}{\partial u}-\left(\frac{C_{ut}\cdot C_{v}+C_{u}\cdot C_{vt}}{C_{u}\cdot C_{u%
}}-2\frac{(C_{u}\cdot C_{v})(C_{ut}\cdot C_{u})}{(C_{u}\cdot C_{u})^{2}}\right%
)\frac{\partial}{\partial u}$$
$$\displaystyle=$$
$$\displaystyle\left(\frac{\partial}{\partial v}-\frac{C_{u}\cdot C_{v}}{C_{u}%
\cdot C_{u}}\,\frac{\partial}{\partial u}\right)\frac{\partial}{\partial t}-%
\frac{C_{u}\cdot\left(C_{tv}-\frac{C_{u}\cdot C_{v}}{C_{u}\cdot C_{u}}C_{tu}%
\right)+C_{tu}\cdot\left(C_{v}-\frac{C_{u}\cdot C_{v}}{C_{u}\cdot C_{u}}C_{u}%
\right)}{C_{u}\cdot C_{u}}\frac{\partial}{\partial u}$$
$$\displaystyle=$$
$$\displaystyle\framebox{$\displaystyle\frac{\partial}{\partial{v_{*}}}\,\frac{%
\partial}{\partial t}-\big{(}C_{s}\cdot C_{t{v_{*}}}+C_{ts}\cdot C_{{v_{*}}}%
\big{)}\frac{\partial}{\partial s}$}$$
$$\displaystyle\frac{\partial}{\partial t}\,\frac{\partial}{\partial s}$$
$$\displaystyle=$$
$$\displaystyle\frac{\partial}{\partial t}\left(\frac{1}{\|C_{u}\|}\,\frac{%
\partial}{\partial u}\right)=\frac{1}{\|C_{u}\|}\,\frac{\partial}{\partial t}%
\,\frac{\partial}{\partial u}-\frac{C_{ut}\cdot C_{u}}{\|C_{u}\|^{3}}\,\frac{%
\partial}{\partial u}=\framebox{$\displaystyle\frac{\partial}{\partial s}\,%
\frac{\partial}{\partial t}-C_{ts}\cdot C_{s}\frac{\partial}{\partial s}$}$$
$$\displaystyle\frac{\partial}{\partial{v_{*}}}\,\frac{\partial}{\partial s}$$
$$\displaystyle=$$
$$\displaystyle\frac{\partial}{\partial v}\left(\frac{1}{\|C_{u}\|}\,\frac{%
\partial}{\partial u}\right)-C_{v}\cdot C_{s}\frac{\partial}{\partial s}\left(%
\frac{1}{\|C_{u}\|}\,\frac{\partial}{\partial u}\right)$$
(D.3)
$$\displaystyle=$$
$$\displaystyle\frac{1}{\|C_{u}\|}\,\frac{\partial}{\partial v}\,\frac{\partial}%
{\partial u}-\frac{C_{uv}\cdot C_{u}}{\|C_{u}\|^{3}}\,\frac{\partial}{\partial
u%
}-C_{v}\cdot C_{s}\frac{\partial}{\partial s}\,\frac{\partial}{\partial s}$$
$$\displaystyle=$$
$$\displaystyle\frac{\partial}{\partial s}\,\frac{\partial}{\partial v}-C_{vs}%
\cdot C_{s}\frac{\partial}{\partial s}-C_{v}\cdot C_{s}\frac{\partial}{%
\partial s}\,\frac{\partial}{\partial s}$$
$$\displaystyle=$$
$$\displaystyle\frac{\partial}{\partial s}\left(\frac{\partial}{\partial v}-C_{v%
}\cdot C_{s}\frac{\partial}{\partial s}\right)+C_{v}\cdot C_{ss}\frac{\partial%
}{\partial s}=\framebox{$\displaystyle\frac{\partial}{\partial s}\,\frac{%
\partial}{\partial{v_{*}}}+C_{{v_{*}}}\cdot C_{ss}\frac{\partial}{\partial s}$}$$
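The middle boxed commutator, $\partial_{t}\partial_{s}=\partial_{s}\partial_{t}-(C_{ts}\cdot C_{s})\partial_{s}$, can be checked by finite differences; the family $C(u,t)$ and the scalar test function $f$ below are hypothetical choices of ours, and the whole block is only a numerical sketch.

```python
import math

# Hypothetical smooth family C(u,t) and test function f(u,t).
def C(u, t): return (u + t * u * u, t * u)
def f(u, t): return math.sin(u + t)

h = 1e-5
def du(g, u, t): return (g(u + h, t) - g(u - h, t)) / (2 * h)
def dt(g, u, t): return (g(u, t + h) - g(u, t - h)) / (2 * h)
def Cu(u, t): return tuple((a - b) / (2 * h) for a, b in zip(C(u + h, t), C(u - h, t)))
def norm(v): return math.hypot(v[0], v[1])
def ds(g): return lambda u, t: du(g, u, t) / norm(Cu(u, t))  # d/ds = (1/|C_u|) d/du

u0, t0 = 0.4, 0.3
cu = Cu(u0, t0)
cut = tuple((a - b) / (2 * h) for a, b in zip(Cu(u0, t0 + h), Cu(u0, t0 - h)))  # C_ut
cts_dot_cs = (cut[0] * cu[0] + cut[1] * cu[1]) / norm(cu) ** 2  # C_ts . C_s

lhs = dt(ds(f), u0, t0)                                          # d/dt d/ds f
rhs = ds(lambda u, t: dt(f, u, t))(u0, t0) - cts_dot_cs * ds(f)(u0, t0)
assert abs(lhs - rhs) < 1e-4
```

Here $C_{ts}\cdot C_{s}$ is computed as $(C_{ut}\cdot C_{u})/\|C_{u}\|^{2}$, which follows directly from $\partial_{s}=(1/\|C_{u}\|)\partial_{u}$.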
D.1.2 Some identities
Here we write down some useful identities regarding various
derivatives of the homotopy with respect to the geometric
parameters $s$ and ${v_{*}}$.
1.
$C_{s}\cdot C_{s}=1$
2.
$C_{s}\cdot C_{{v_{*}}}=0$
3.
$C_{{v_{*}}s}\cdot C_{{v_{*}}}=C_{s{v_{*}}}\cdot C_{{v_{*}}}=-C_{{v_{*}}{v_{*}}%
}\cdot C_{s}$
4.
$C_{{v_{*}}s}\cdot C_{s}=-C_{ss}\cdot C_{{v_{*}}}$
5.
$C_{s{v_{*}}}\cdot C_{s}=0$
6.
$C_{ss}\cdot C_{s}=0$
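Identities 1 and 6, which involve the arclength parameter alone, can be checked by finite differences on any unit-speed curve; the circle below is an arbitrary example of ours, not one from the text.

```python
import math

# Unit-speed curve on the circle of radius 2: c(s) = 2*(cos(s/2), sin(s/2)).
def c(s): return (2 * math.cos(s / 2), 2 * math.sin(s / 2))

def d(g, s, h=1e-5):  # central finite difference, componentwise
    a, b = g(s + h), g(s - h)
    return ((a[0] - b[0]) / (2 * h), (a[1] - b[1]) / (2 * h))

dot = lambda x, y: x[0] * y[0] + x[1] * y[1]
s = 0.7
cs = d(c, s)                       # C_s
css = d(lambda t: d(c, t), s)      # C_ss
assert abs(dot(cs, cs) - 1) < 1e-6  # identity 1: C_s . C_s = 1
assert abs(dot(css, cs)) < 1e-4     # identity 6: C_ss . C_s = 0
```

Identities 3–5 follow from the first two by differentiating in $s$ and $v_{*}$, which is why only these two need an independent check.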
D.1.3 Commutation of derivatives with integrals
Finally, we write down how to commute derivatives and integrals
when differentiating with respect to $t$ or ${v_{*}}$.
$$\displaystyle\frac{\partial}{\partial t}\int_{0}^{L}f\,ds$$
$$\displaystyle=$$
$$\displaystyle\frac{\partial}{\partial t}\int_{0}^{1}f\|C_{u}\|\,du=\int_{0}^{1%
}f_{t}\|C_{u}\|+f(C_{ut}\cdot C_{s})\,du=\int_{0}^{L}f_{t}+f(C_{ts}\cdot C_{s}%
)\,ds$$
$$\displaystyle=$$
$$\displaystyle\framebox{$\displaystyle\int_{0}^{L}f_{t}-f_{s}(C_{t}\cdot C_{s})%
-f(C_{t}\cdot C_{ss})\,ds$}$$
$$\displaystyle\frac{\partial}{\partial{v_{*}}}\int_{0}^{L}f\,ds$$
$$\displaystyle=$$
$$\displaystyle\frac{\partial}{\partial v}\int_{0}^{1}f\|C_{u}\|\,du=\int_{0}^{1%
}f_{v}\|C_{u}\|+f(C_{uv}\cdot C_{s})\,du=\int_{0}^{L}f_{v}+f(C_{vs}\cdot C_{s}%
)\,ds$$
(D.5)
$$\displaystyle=$$
$$\displaystyle\int_{0}^{L}f_{v}-f_{s}(C_{v}\cdot C_{s})-f(C_{v}\cdot C_{ss})\,%
ds=\framebox{$\displaystyle\int_{0}^{L}f_{{v_{*}}}-f(C_{{v_{*}}}\cdot C_{ss})%
\,ds$}$$
D.1.4 Intermediate Expressions
The last step before beginning the flow calculation is to
introduce a few “intermediate” expressions that will help keep
the expressions in the upcoming derivations from becoming
too lengthy.
$$\displaystyle m$$
$$\displaystyle=$$
$$\displaystyle C_{{v_{*}}}\cdot C_{{v_{*}}}$$
(D.6)
$$\displaystyle m_{t}$$
$$\displaystyle=$$
$$\displaystyle 2\,C_{{v_{*}}t}\cdot C_{{v_{*}}}=2\,C_{t{v_{*}}}\cdot C_{{v_{*}}}$$
(D.7)
$$\displaystyle m_{s}$$
$$\displaystyle=$$
$$\displaystyle 2\,C_{{v_{*}}s}\cdot C_{{v_{*}}}=2\,C_{s{v_{*}}}\cdot C_{{v_{*}}%
}=-2\,C_{{v_{*}}{v_{*}}}\cdot C_{s}$$
(D.8)
$$\displaystyle m_{{v_{*}}}$$
$$\displaystyle=$$
$$\displaystyle 2\,C_{{v_{*}}{v_{*}}}\cdot C_{{v_{*}}}$$
(D.9)
$$\displaystyle m_{{v_{*}}{v_{*}}}$$
$$\displaystyle=$$
$$\displaystyle 2\,C_{{v_{*}}{v_{*}}{v_{*}}}\cdot C_{{v_{*}}}+2\,C_{{v_{*}}{v_{*%
}}}\cdot C_{{v_{*}}{v_{*}}}$$
(D.10)
$$\displaystyle M$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{L}C_{{v_{*}}}\cdot C_{{v_{*}}}ds$$
(D.11)
$$\displaystyle M_{t}$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{L}2\,C_{t{v_{*}}}\cdot C_{{v_{*}}}+2\,(C_{{v_{*}}{v_{*}%
}}\cdot C_{s})(C_{t}\cdot C_{s})-m\,(C_{t}\cdot C_{ss})\,ds$$
(D.12)
$$\displaystyle M_{{v_{*}}}$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{L}2\,C_{{v_{*}}{v_{*}}}\cdot C_{{v_{*}}}-m\,(C_{{v_{*}}%
}\cdot C_{ss})\,ds$$
(D.13)
$$\displaystyle M_{{v_{*}}{v_{*}}}$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{L}2\,C_{{v_{*}}{v_{*}}{v_{*}}}\cdot C_{{v_{*}}}+2\,C_{{%
v_{*}}{v_{*}}}\cdot C_{{v_{*}}{v_{*}}}-m\,(C_{{v_{*}}}\cdot C_{ss{v_{*}}})$$
$$\displaystyle -m\,(C_{{v_{*}}{v_{*}}}\cdot C_{ss})-4\,(C_{{v_{*}}{v_{*}}}%
\cdot C_{{v_{*}}})(C_{{v_{*}}}\cdot C_{ss})+m\,(C_{{v_{*}}}\cdot C_{ss})^{2}\,ds$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{L}2\,C_{{v_{*}}{v_{*}}{v_{*}}}\cdot C_{{v_{*}}}+2\,C_{{%
v_{*}}{v_{*}}}\cdot C_{{v_{*}}{v_{*}}}+2\,(C_{s{v_{*}}}\cdot C_{{v_{*}}})^{2}+%
m\,(C_{s{v_{*}}}\cdot C_{s{v_{*}}})$$
$$\displaystyle -m\,(C_{{v_{*}}{v_{*}}}\cdot C_{ss})-4\,(C_{{v_{*}}{v_{*}}}%
\cdot C_{{v_{*}}})(C_{{v_{*}}}\cdot C_{ss})\,ds$$
$$\displaystyle L$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{L}ds$$
(D.15)
$$\displaystyle L_{t}$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{L}-C_{t}\cdot C_{ss}\,ds$$
(D.16)
$$\displaystyle L_{{v_{*}}}$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{L}-C_{{v_{*}}}\cdot C_{ss}\,ds$$
(D.17)
$$\displaystyle L_{{v_{*}}{v_{*}}}$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{L}-C_{{v_{*}}{v_{*}}}\cdot C_{ss}-C_{{v_{*}}}\cdot C_{%
ss{v_{*}}}+(C_{{v_{*}}}\cdot C_{ss})^{2}\,ds$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{L}C_{s{v_{*}}}\cdot C_{s{v_{*}}}\,ds-C_{{v_{*}}{v_{*}}}%
\cdot C_{ss}$$
D.2 $H^{0}$ flow calculation
We are now ready to begin the flow calculation. We’ll start with the
case of the $H^{0}$ energy in this subsection and then proceed to
the conformal case in the following subsection.
We begin by considering a time-varying family of homotopies
$C(u,v,t):[0,1]\times[0,1]\times(0,\infty)\to\mathbb{R}^{n}$
and write the $H^{0}$ energy as
$$E(t)=\int_{0}^{1}\int_{0}^{L}\big{\|}C_{{v_{*}}}\big{\|}^{2}\,ds\,dv=\int_{0}^%
{1}Mdv$$
(D.19)
Then the variation of $E$ is
$$\displaystyle E^{\prime}(t)$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{1}M_{t}\,dv=\int_{0}^{1}\int_{0}^{L}2\,C_{t{v_{*}}}%
\cdot C_{{v_{*}}}+2\,(C_{{v_{*}}{v_{*}}}\cdot C_{s})(C_{t}\cdot C_{s})-m\,(C_{%
t}\cdot C_{ss})\,ds\,dv$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{1}\int_{0}^{L}2\,\Big{(}C_{tv}-(C_{v}\cdot C_{s})C_{ts}%
\Big{)}\cdot C_{{v_{*}}}\,ds\,dv+\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}2\,(%
C_{{v_{*}}{v_{*}}}\cdot C_{s})C_{s}-m\,C_{ss}\Big{)}\,ds\,dv$$
$$\displaystyle=$$
$$\displaystyle 2\int_{0}^{1}\int_{0}^{1}\,C_{tv}\cdot C_{{v_{*}}}\|C_{u}\|\,du%
\,dv$$
$$\displaystyle+2\int_{0}^{1}\int_{0}^{L}(C_{v}\cdot C_{s})(C_{t}\cdot C_{{v_{*}%
}s})+(C_{vs}\cdot C_{s}+C_{v}\cdot C_{ss})(C_{t}\cdot C_{{v_{*}}})\,ds\,dv$$
$$\displaystyle+\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}2\,(C_{{v_{*}}{v_{*}}}%
\cdot C_{s})C_{s}-m\,C_{ss}\Big{)}\,ds\,dv$$
$$\displaystyle=$$
$$\displaystyle 2\int_{0}^{1}\int_{0}^{1}-\,(C_{t}\cdot C_{{v_{*}}})(C_{uv}\cdot
C%
_{s})-\,C_{t}\cdot C_{{v_{*}}v}\|C_{u}\|\,du\,dv$$
$$\displaystyle+2\int_{0}^{1}\int_{0}^{L}(C_{v}\cdot C_{s})(C_{t}\cdot C_{{v_{*}%
}s})+(C_{vs}\cdot C_{s}+C_{v}\cdot C_{ss})(C_{t}\cdot C_{{v_{*}}})\,ds\,dv$$
$$\displaystyle+\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}2\,(C_{{v_{*}}{v_{*}}}%
\cdot C_{s})C_{s}-m\,C_{ss}\Big{)}\,ds\,dv$$
$$\displaystyle=$$
$$\displaystyle 2\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}-C_{{v_{*}}v}+(C_{v}%
\cdot C_{s})C_{{v_{*}}s}+(C_{v}\cdot C_{ss})C_{{v_{*}}}\Big{)}\,ds\,dv$$
$$\displaystyle+\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}2\,(C_{{v_{*}}{v_{*}}}%
\cdot C_{s})C_{s}-m\,C_{ss}\Big{)}\,ds\,dv$$
$$\displaystyle=$$
$$\displaystyle-\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}2C_{{v_{*}}{v_{*}}}-2(C%
_{{v_{*}}{v_{*}}}\cdot C_{s})C_{s}-2(C_{{v_{*}}}\cdot C_{ss})C_{{v_{*}}}+m\,C_%
{ss}\Big{)}\,ds\,dv$$
In the planar case, $C_{{v_{*}}}$ and $C_{ss}$ are linearly dependent
(as both are orthogonal to $C_{s}$) which means that
$$(C_{{v_{*}}}\cdot C_{ss})C_{{v_{*}}}=(C_{{v_{*}}}\cdot C_{{v_{*}}})C_{ss}=mC_{ss}$$
(D.20)
and therefore
$$E^{\prime}(t)=-\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}2\big{(}C_{{v_{*}}{v_{%
*}}}-(C_{{v_{*}}{v_{*}}}\cdot C_{s})C_{s}\big{)}-m\,C_{ss}\Big{)}\,ds\,dv$$
(D.21)
by which we derive the minimization flow
$$C_{t}=2\big{(}C_{{v_{*}}{v_{*}}}-(C_{{v_{*}}{v_{*}}}\cdot C_{s})C_{s}\big{)}-%
mC_{ss}$$
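The planar reduction (D.20) used above rests on an elementary fact: in the plane, two vectors orthogonal to the same unit vector are parallel, hence $(a\cdot b)\,a=(a\cdot a)\,b$. A quick numerical check of ours, with random vectors standing in for $C_{v_{*}}$ and $C_{ss}$:

```python
import math, random

random.seed(0)
for _ in range(100):
    phi = random.uniform(0, 2 * math.pi)
    t = (math.cos(phi), math.sin(phi))   # unit tangent, plays the role of C_s
    n = (-t[1], t[0])                    # unit normal
    alpha, beta = random.uniform(-2, 2), random.uniform(-2, 2)
    a = (alpha * n[0], alpha * n[1])     # stands in for C_{v*}
    b = (beta * n[0], beta * n[1])       # stands in for C_ss
    ab = a[0] * b[0] + a[1] * b[1]
    aa = a[0] * a[0] + a[1] * a[1]
    assert abs(ab * a[0] - aa * b[0]) < 1e-9  # (a.b) a = (a.a) b, first component
    assert abs(ab * a[1] - aa * b[1]) < 1e-9  # second component
```

In higher codimension this identity fails, which is why the flow above is specific to the planar case.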
D.3 Conformal flow calculation
We now define the $H^{0}_{\phi}$ energy as
$$E_{\phi}(t)=\int_{0}^{1}\phi(L)\int_{0}^{L}\big{\|}C_{{v_{*}}}\big{\|}^{2}\,ds%
\,dv=\int_{0}^{1}\phi Mdv$$
(D.22)
and compute its derivative as
$$\displaystyle E^{\prime}(t)$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{1}\phi^{\prime}L_{t}M+\phi M_{t}\,dv$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{1}\left(-\phi^{\prime}M\int_{0}^{L}C_{t}\cdot C_{ss}\,%
ds+\phi\int_{0}^{L}2\,C_{t{v_{*}}}\cdot C_{{v_{*}}}+2\,(C_{{v_{*}}{v_{*}}}%
\cdot C_{s})(C_{t}\cdot C_{s})-m\,(C_{t}\cdot C_{ss})\,ds\right)\,dv$$
$$\displaystyle=$$
$$\displaystyle\int_{0}^{1}\int_{0}^{L}2\phi\Big{(}C_{tv}-(C_{v}\cdot C_{s})C_{%
ts}\Big{)}\cdot C_{{v_{*}}}\,ds\,dv$$
$$\displaystyle+\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}2\phi(C_{{v_{*}}{v_{*}}%
}\cdot C_{s})C_{s}-(\phi m+\phi^{\prime}M)C_{ss}\Big{)}\,ds\,dv$$
$$\displaystyle=$$
$$\displaystyle 2\int_{0}^{1}\int_{0}^{1}\phi\,C_{tv}\cdot C_{{v_{*}}}\|C_{u}\|%
\,du\,dv$$
$$\displaystyle+2\phi\int_{0}^{1}\int_{0}^{L}(C_{v}\cdot C_{s})(C_{t}\cdot C_{{v%
_{*}}s})+(C_{vs}\cdot C_{s}+C_{v}\cdot C_{ss})(C_{t}\cdot C_{{v_{*}}})\,ds\,dv$$
$$\displaystyle+\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}2\phi(C_{{v_{*}}{v_{*}}%
}\cdot C_{s})C_{s}-(\phi m+\phi^{\prime}M)C_{ss}\Big{)}\,ds\,dv$$
$$\displaystyle=$$
$$\displaystyle 2\int_{0}^{1}\int_{0}^{1}-(\phi\,C_{t}\cdot C_{{v_{*}}})(C_{uv}%
\cdot C_{s})-C_{t}\cdot(\phi^{\prime}L_{v}C_{{v_{*}}}+\phi\,C_{{v_{*}}v})\|C_{%
u}\|\,du\,dv$$
$$\displaystyle+2\phi\int_{0}^{1}\int_{0}^{L}(C_{v}\cdot C_{s})(C_{t}\cdot C_{{v%
_{*}}s})+(C_{vs}\cdot C_{s}+C_{v}\cdot C_{ss})(C_{t}\cdot C_{{v_{*}}})\,ds\,dv$$
$$\displaystyle+\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}2\phi(C_{{v_{*}}{v_{*}}%
}\cdot C_{s})C_{s}-(\phi m+\phi^{\prime}M)C_{ss}\Big{)}\,ds\,dv$$
$$\displaystyle=$$
$$\displaystyle 2\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}-\phi^{\prime}L_{v}C_{%
{v_{*}}}-\phi\,C_{{v_{*}}v}+\phi(C_{v}\cdot C_{s})C_{{v_{*}}s}+\phi(C_{v}\cdot
C%
_{ss})C_{{v_{*}}}\Big{)}\,ds\,dv$$
$$\displaystyle+\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}2\phi(C_{{v_{*}}{v_{*}}%
}\cdot C_{s})C_{s}-(\phi m+\phi^{\prime}M)C_{ss}\Big{)}\,ds\,dv$$
$$\displaystyle=$$
$$\displaystyle-\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}2\phi^{\prime}L_{{v_{*}%
}}C_{{v_{*}}}+2\phi C_{{v_{*}}{v_{*}}}-2\phi(C_{{v_{*}}{v_{*}}}\cdot C_{s})C_{%
s}-2\phi(C_{{v_{*}}}\cdot C_{ss})C_{{v_{*}}}+(\phi m+\phi^{\prime}M)C_{ss}\Big%
{)}\,ds\,dv$$
In the planar case, $C_{{v_{*}}}$ and $C_{ss}$ are linearly dependent
(as both are orthogonal to $C_{s}$) which means that
$$(C_{{v_{*}}}\cdot C_{ss})C_{{v_{*}}}=(C_{{v_{*}}}\cdot C_{{v_{*}}})C_{ss}=mC_{ss}$$
(D.23)
and therefore
$$E^{\prime}(t)=-\int_{0}^{1}\int_{0}^{L}C_{t}\cdot\Big{(}2\phi\big{(}C_{{v_{*}}%
{v_{*}}}-(C_{{v_{*}}{v_{*}}}\cdot C_{s})C_{s}\big{)}+2\phi^{\prime}L_{{v_{*}}}%
C_{{v_{*}}}+(\phi^{\prime}M-\phi m)C_{ss}\Big{)}\,ds\,dv$$
(D.24)
which entails the flow
$$C_{t}=2\phi\big{(}C_{{v_{*}}{v_{*}}}-(C_{{v_{*}}{v_{*}}}\cdot C_{s})C_{s}\big{%
)}+2\phi^{\prime}L_{{v_{*}}}C_{{v_{*}}}+(\phi^{\prime}M-\phi m)C_{ss}$$
Contents
1 Introduction
1.1 Riemannian and Finsler geometries
1.1.1 Riemannian geometry
1.1.2 Geodesics and the exponential map
1.1.3 Submanifolds
1.2 Geometries of curves
1.2.1 Finsler geometry of curves
1.3 Abstract approach
2 Examples, and different approaches and results
2.1 Riemannian geometries of curves
2.1.1 Parametric (non-geometric) form of $H^{0}$
2.1.2 Geometric form of $H^{0}$
2.1.3 Michor-Mumford
2.1.4 Srivastava et al.
2.1.5 Higher order Riemannian geometry
2.2 Finsler geometries of curves
2.2.1 $L^{\infty}$ and Hausdorff metric
2.2.2 $L^{1}$ and Plateau problem
3 Basics
3.1 Relaxation of functionals
3.2 Curves and notations
3.2.1 Homotopies
3.3 Preliminary results
3.4 Homotopy classes
3.4.1 Class $\mathbb{C}$
3.4.2 Class ${\mathbb{F}}$ of prescribed-parameter curves
3.4.3 Factoring out reparameterizations
4 Analysis of $E^{N}(C)$
4.1 Knowledge base
4.2 Compactness
4.3 Lower semicontinuity
4.3.1 Example
4.4 Existence of minimal geodesics
4.4.1 Michor-Mumford
4.5 Space of Curves
4.6 The pulley
5 Conformal metrics
5.1 The Unstable $H^{0}$ Flow
5.1.1 Geometric parameters $s$ and ${v_{*}}$
5.1.2 $H^{0}$ Minimizing Flow
5.2 Conformal Versions of $H^{0}$
5.2.1 Stable conformal factor
5.3 Numerical results
5.4 Example
A More on Finsler metrics
B Proofs of §2.1.4
C $E^{N}$ is ill-posed
D Derivation of Flows
D.1 Some preliminary calculus
D.1.1 Commutation of derivatives
D.1.2 Some identities
D.1.3 Commutation of derivatives with integrals
D.1.4 Intermediate Expressions
D.2 $H^{0}$ flow calculation
D.3 Conformal flow calculation
List of Figures
1 Reparameterization to $\pi_{\widetilde{T}}\partial_{v}\widetilde{C}=0$
2 Stretching the parameterization
3 Tesselation of homotopy $\tilde{C}$ to form $C_{h}$
4 The pulley $C_{h}$, in case $h=5$
5 Intersections induced by flow.
6 Curves $c_{0}$ and $c_{1}$
7 Slices of homotopy
8 Surface of homotopy
9 The curve from eq. (5.19)
10 Artistic rendition of the homotopy $C_{k}$, from [Mum]
References
[AFP00]
Luigi Ambrosio, Nicola Fusco, and Diego Pallara, Functions of bounded
variation and free discontinuity problems, Oxford Mathematical Monographs,
The Clarendon Press Oxford University Press, New York, 2000. MR MR1857292
(2003a:49002)
[AT00]
L. Ambrosio and P. Tilli, Selected topics in “Analysis in metric
spaces”, appunti, Edizioni Scuola Normale Superiore, Pisa, 2000.
[Atk75]
C. J. Atkin, The Hopf-Rinow theorem is false in infinite dimensions,
Bull. London Math. Soc. 7 (1975), no. 3, 261–266. MR MR0400283 (53
#4118)
[BCS]
D. Bao, S. S. Chern, and Z. Shen, An introduction to Riemann-Finsler
geometry, (October 1, 1999 version).
[Bre86]
H. Brezis, Analisi funzionale, Liguori Editore, Napoli, 1986, (italian
translation of Analyse fonctionalle, Masson, 1983, Paris).
[But89]
G. Buttazzo, Semicontinuity, relaxation and integral representation in
the calculus of variation, scientific & technical, no. 207, Longman, 1989.
[CFK03]
G. Charpiat, O. Faugeras, and R. Keriven, Approximations of shape metrics
and application to shape warping and empirical shape statistics, INRIA
report 4820, 2003.
[Dac82]
Bernard Dacorogna, Weak continuity and weak lower semicontinuity of
non-linear functionals, Lecture Notes in Mathematics, vol. 922,
Springer-Verlag, 1982.
[EE70]
J. Eells and K. D. Elworthy, Open embeddings of certain Banach
manifolds, Ann. of Math. (2) 91 (1970), 465–485. MR MR0263120 (41
#7725)
[Eke78]
Ivar Ekeland, The Hopf-Rinow theorem in infinite dimension, J.
Differential Geom 13 (1978), no. 2, 287–301.
[Fom90]
A. T. Fomenko, The Plateau problem, Studies in the development of
modern mathematics, Gordon and Breach, 1990.
[Gro99]
M. Gromov, Metric structures for Riemannian and non-Riemannian
spaces, Birkhäuser, 1999.
[Kli82]
Wilhelm Klingenberg, Riemannian Geometry, W. de Gruyter, Berlin,
1982.
[KSMJ03]
Eric Klassen, Anuj Srivastava, Washington Mio, and Shantanu Joshi,
Analysis of planar shapes using geodesic paths on shape spaces, 2003.
[Lan99]
Serge Lang, Fundamentals of differential geometry, Springer, 1999.
[Men]
A. C. G. Mennucci, On asymmetric distances, preprint,
http://cvgmt.sns.it/papers/and04/.
[MM]
Peter W. Michor and David Mumford, Riemannian geometries on spaces of
plane curves, http://front.math.ucdavis.edu/math.DG/0312384.
[Mum]
D. Mumford, Slides of the Gibbs lectures,
http://www.dam.brown.edu/people/mumford/Papers/Gibbs.pdf.
[Sha49]
C. E. Shannon, A mathematical theory of communication, The Mathematical
Theory of Communication, University of Illinois Press, 1949.
[Sim83]
L. Simon, Lectures on Geometric Measure Theory, Proc. Center for
Mathematical Analysis, vol. 3, Australian National University, Canberra,
1983.
[YM04]
A. Yezzi and A. Mennucci, Conformal Riemannian metrics in space of
curves, 2004, EUSIPCO04, MIA
http://www.ceremade.dauphine.fr/ cohen/mia2004/.
[You98]
Laurent Younes, Computable elastic distances between shapes, SIAM
Journal of Applied Mathematics 58 (1998), 565–586.
Motion of classical charged particles with magnetic moment in external plane-wave electromagnetic fields
Martin Formanek
[email protected]
Andrew Steinmetz
[email protected]
Johann Rafelski
[email protected]
Department of Physics,
The University of Arizona,
Tucson, AZ, 85721, USA
Abstract
We study the motion of a charged particle with magnetic moment in external electromagnetic fields utilizing covariant unification of Gilbertian and Amperian descriptions of particle magnetic dipole moment. Considering the case of a current loop, our approach is verified by comparing classical dynamics with the classical limit of relativistic quantum dynamics.
We obtain the motion of a charged particle in the presence of an external linearly polarized EM (laser) plane wave field, incorporating the effect of spin dynamics. For specific laser-particle initial configurations, we determine that the Stern-Gerlach force can have a cumulative effect on the trajectory of charged particles.
PACS numbers
13.40.Em Electric and magnetic moments,
03.30.+p Special relativity
I Introduction
In the context of high intensity laser-matter interaction, including particle acceleration, much attention is being paid to the classical dynamics of charged particles, in particular electrons and positrons. We consider here the contribution of the Stern-Gerlach force due to the magnetic moment, further advancing the work of Wen, Keitel and Bauke Wen:2017zer . Since this force is much smaller than the Lorentz force, and a well defined dynamical formulation was presented only recently Rafelski:2017hce , much work remains to be done.
Our theoretical formulation is building upon this covariant unification of Amperian and Gilbertian dynamics and the study of neutral particle dynamics in presence of the Stern-Gerlach force Formanek:2018mbv ; Formanek:2019cga . Study of Stern-Gerlach particle dynamics along this conceptual approach was initiated by Good Good , and Nyborg Nyborg , see Ref. Bagrov1980 for review of this work.
We begin by presenting the formulation of the model for particle motion and spin dynamics in Section II, showing how the magnetic moment force effect on the particle’s translational motion is included. Our Stern-Gerlach force is a natural extension of the Thomas-Bargmann-Michel-Telegdi (TBMT) spin precession dynamics Thomas1927 ; Bargmann:1959gz . It shows how the magnetic moment interacts with the inhomogeneities of the external electromagnetic (EM) field and how this interaction affects the particle’s trajectory.
In Section III, we check the validity of our approach by comparing with Ref. Wen:2017zer for the case of a charged particle with a magnetic moment moving across a current loop. In Section IV, we turn to our main objective, the study of dynamics in the presence of (laser) plane waves. To solve this intricate dynamical problem, we adopt covariant techniques employed in the study of exact charged spin-0 particle dynamics Sarachik1970 and further developed in the study of radiation reaction effects Dipiazza2008 ; Hadad:2010mt .
These methods allow us to identify and use conservation laws along with differential equations for the covariant projections to reduce the coupled equations for particle motion and spin dynamics to a greatly simplified and analytically solvable set. Our solution including spin dynamics is analytical and transparent, allowing applications to environments where magnetic moment dynamics could be relevant.
We summarize, discuss, and evaluate the achieved results in Section V.
II Dynamics of Charged particle with magnetic moment
The covariant and unified (Amperian=Gilbertian) ‘dipole charge model’ was formulated by us in Ref. Rafelski:2017hce and previously in Good ; Nyborg . The magnetic moment interaction is incorporated through the ‘magnetic 4-potential’ $B^{\mu}$
$$B_{\mu}\equiv F^{*}_{\mu\nu}s^{\nu},\quad\text{where}\quad F^{*}_{\mu\nu}\equiv\frac{1}{2}\varepsilon_{\mu\nu\alpha\beta}F^{\alpha\beta}\,,$$
(1)
where $\varepsilon_{0123}=+1$; $F^{*}_{\mu\nu}$ is the dual of the EM field tensor $F^{\mu\nu}$. The 4-potential $B_{\mu}$ was constructed in Ref. Rafelski:2017hce in such a way that in the co-moving frame $u^{\mu}=(c,0)$ the quantity
$$d_{m}B\cdot u=cd_{m}F^{*}_{0\nu}s^{\nu}=-\boldsymbol{\mu}_{m}\cdot\boldsymbol{\mathcal{B}}$$
(2)
gives the correct potential energy of an elementary magnetic moment $\boldsymbol{\mu}_{m}$ in an external magnetic field $\boldsymbol{\mathcal{B}}$. We have introduced the conserved ‘magnetic dipole charge’ $d_{m}$: in the rest frame of the particle it is the proportionality constant between the magnetic moment and the particle spin
$$\boldsymbol{\mu}_{m}=cd_{m}\boldsymbol{s},\quad cd_{m}=\frac{e}{m}+\tilde{a}\;,$$
(3)
where $\tilde{a}=ae/m$ is proportional to the anomalous magnetic moment $a$. For electrons $a\approx\alpha/2\pi=1.16\times 10^{-3}$.
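As a numerical aside, the values above can be checked with a short sketch of Eq. (3) for the electron; the SI constants used are standard reference values, not quantities quoted in this paper.

```python
import math

# Standard SI constants (illustrative; not quoted from this paper)
e = 1.602176634e-19       # elementary charge [C]
m = 9.1093837015e-31      # electron mass [kg]
alpha = 7.2973525693e-3   # fine-structure constant

a = alpha / (2 * math.pi)   # leading-order anomaly, a ~ 1.16e-3
tilde_a = a * e / m         # anomalous part of Eq. (3)
cdm = e / m + tilde_a       # Eq. (3): c d_m = e/m + a e/m

print(f"a     = {a:.3e}")
print(f"c d_m = {cdm:.4e} C/kg")
```

Multiplying $cd_{m}$ by $s_{0}=\hbar/2$ recovers a magnetic moment close to the Bohr magneton, as expected.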
With the 4-potential $B^{\mu}$, we can formulate the equation of motion as
$$m\dot{u}^{\mu}=eF^{\mu\nu}u_{\nu}+d_{m}G^{\mu\nu}u_{\nu}\,,$$
(4)
where the ‘dot’ denotes a derivative with respect to proper time $\tau$. The tensor $G_{\mu\nu}$ reads
$$G_{\mu\nu}\equiv\partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}\,.$$
(5)
Substituting the definition of the magnetic 4-potential $B_{\mu}$ from Eq. (1) and performing the usual algebra to obtain the regular Lorentz force terms, we arrive at
$$\dot{u}^{\mu}=\frac{e}{m}F^{\mu\nu}u_{\nu}-\frac{d_{m}}{m}u\cdot\partial(F^{*\mu\nu}s_{\nu})\\
+\frac{d_{m}}{m}\partial^{\mu}(u\cdot F^{*}\cdot s)\,.$$
(6)
The last two terms can be simplified using the covariant Maxwell equation
$$\partial_{\mu}F^{*}_{\alpha\beta}+\partial_{\alpha}F^{*}_{\beta\mu}+\partial_{\beta}F^{*}_{\mu\alpha}=0\,,$$
(7)
while considering that the partial derivatives commute with the $u^{\mu}$ and $s^{\mu}$ 4-vectors, since the derivatives act only on the field quantities. We are left with the equation of motion
$$\dot{u}^{\mu}=\frac{1}{m}\left(eF^{\mu\nu}-d_{m}s\cdot\partial F^{*\mu\nu}\right)u_{\nu}\,.$$
(8)
The first term on the RHS of Eq. (8) is the standard Lorentz force, while the second is the covariant version of the Stern-Gerlach force term. Since the magnetic 4-potential $B^{\mu}$ is gauge invariant, a $d_{m}B\cdot u$ term can be added with impunity to the $eA\cdot u$ term in the Lagrangian action, resulting in the variational principle origin of Eq. (8), up to sub-leading higher order spin dynamics terms arising from the $ds^{\mu}/d\tau$ term in the Euler-Lagrange equations.
Now using Schwinger’s method schwinger1974 , we construct the spin dynamics equations by applying the constraints
$$\displaystyle u\cdot s=0,\quad$$
$$\displaystyle\Rightarrow\quad\dot{u}\cdot s+u\cdot\dot{s}=0\,,$$
(9)
$$\displaystyle s^{2}=\text{const},\quad$$
$$\displaystyle\Rightarrow\quad s\cdot\dot{s}=0\,.$$
(10)
The exact solution linear in external fields for the dynamics considered in Eq. (8) is
$$\dot{s}^{\mu}=\frac{e}{m}F^{\mu\nu}s_{\nu}+\widetilde{a}\left(F^{\mu\nu}s_{\nu}-\frac{u^{\mu}}{c^{2}}u\cdot F\cdot s\right)\\
-\frac{d_{m}}{m}s\cdot\partial F^{*\mu\nu}s_{\nu}\;.$$
(11)
The first term assures consistency with the Lorentz force component, Eq. (9), the second term encompasses the anomalous magnetic moment behavior, and the third ensures consistency with the Stern-Gerlach force term in Eq. (8). Equation (11) is a ‘minimal’ solution of the Schwinger consistency requirements, in the sense that additional terms preserving the conditions Eqs. (9) and (10) are possible, but not necessary without additional physical requirements.
III Particle in an inhomogeneous magnetic field
We consider a magnetic field pointing along the $z$-axis $\boldsymbol{\mathcal{B}}=\mathcal{B}_{z}(z)\hat{z}$.
The initial 4-velocity and 4-spin are oriented along the $z$-axis as well
$$\displaystyle u^{\mu}(0)$$
$$\displaystyle=\gamma_{0}c(1,0,0,\beta_{0})\,,$$
(12)
$$\displaystyle s^{\mu}(0)$$
$$\displaystyle=\gamma_{0}s_{0}(\beta_{0},0,0,1)\,,$$
(13)
where $s_{0}=\pm\hbar/2$ is positive for spin oriented along the positive $z$-axis and negative for the opposite case. Initially there is no Lorentz force on the particle, since it moves parallel to the magnetic field. In fact, in this configuration the motion remains 1D: all products $F^{\mu\nu}u_{\nu}$ and $F^{\mu\nu}s_{\nu}$ start at zero and remain zero, since the Stern-Gerlach terms contribute only to the zeroth and $z$-components. We can then effectively rewrite Eqs. (8) and (11) as
$$\displaystyle\dot{u}^{\mu}$$
$$\displaystyle=-\frac{d_{m}}{m}s\cdot\partial(F^{*\mu\nu})u_{\nu}\,,$$
(14)
$$\displaystyle\dot{s}^{\mu}$$
$$\displaystyle=-\frac{d_{m}}{m}s\cdot\partial(F^{*\mu\nu})s_{\nu}\,.$$
(15)
The torque Eq. (15) in this case does not change the direction of the spin; only its velocity dependence is modified so that $u\cdot s=0$ remains satisfied. From this argument alone the solution for the spin is
$$s^{\mu}(\tau)=\gamma s_{0}(\beta,0,0,1)\,,$$
(16)
where $\gamma$ is the relativistic Lorentz factor. The consistent solution for the change of the particle velocity $\beta$ as a function of position can be derived from either of Eqs. (14) and (15) as
$$\frac{d\beta}{dt}=\frac{d_{m}s_{0}}{m}\frac{\partial_{z}\mathcal{B}_{z}(z)}{\gamma^{2}}\,.$$
(17)
Without a magnetic moment ($d_{m}=0$) the particle would pass through the region of the magnetic field unimpeded, with constant velocity equal to its initial velocity $\beta_{0}$. The Stern-Gerlach force on the magnetic moment accelerates or decelerates the particle based on the direction of the spin $s_{0}$ and the sign of the gradient $\partial_{z}\mathcal{B}_{z}$.
As in Ref. Wen:2017zer we will model the magnetic field as the field generated by a current loop with a radius $L/\pi$. Along the $z$-axis, the magnetic field has the form
$$\mathcal{B}_{z}=\frac{\mathcal{B}_{0}}{(1+\pi^{2}z^{2}/L^{2})^{3/2}}\,.$$
(18)
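For orientation, a minimal sketch of the profile Eq. (18) and its analytic gradient, which drives the force in Eq. (17); $\mathcal{B}_{0}=10$ T and a loop radius $L/\pi=1$ cm are the example values used below in the text.

```python
import math

B0 = 10.0            # peak on-axis field [T]
L = math.pi * 0.01   # loop radius L/pi = 1 cm, so L [m]

def B_z(z):
    """Eq. (18): on-axis field of a current loop of radius L/pi."""
    return B0 / (1.0 + (math.pi * z / L) ** 2) ** 1.5

def dBz_dz(z):
    """Analytic z-derivative of Eq. (18)."""
    u = 1.0 + (math.pi * z / L) ** 2
    return -3.0 * B0 * (math.pi / L) ** 2 * z / u ** 2.5

print(B_z(0.0))      # peak value B0 at the loop center
print(dBz_dz(0.0))   # the gradient vanishes at the center
```

The gradient is positive for $z<0$ and negative for $z>0$, which is the sign structure behind the slow-down/speed-up pattern discussed next.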
The radial component of the magnetic field $\mathcal{B}_{r}$ vanishes on the axis, although its derivative $\partial_{r}\mathcal{B}_{r}$ is non-zero, thus satisfying the Maxwell equation $\nabla\cdot\boldsymbol{\mathcal{B}}=0$ (see p. 181 in Ref. jackson ). The derivative $\partial_{z}\mathcal{B}_{r}$ is zero on the axis, ensuring that for perfectly polarized electrons the motion remains 1D. The field and its derivative are plotted in Figure 1. For electrons $d_{m}<0$, since the direction of their magnetic moment is opposite to the spin. Thus electrons entering the field from the left with aligned spin $s_{0}=+\hbar/2$ are first slowed down by the increasing field and then accelerated back again as they leave. This is consistent with the textbook Stern-Gerlach force $\boldsymbol{F}=\nabla(\boldsymbol{\mu}_{m}\cdot\boldsymbol{\mathcal{B}})$. The velocity of a particle with aligned spin is thus smaller than $\beta_{0}$ throughout the motion. An electron with anti-aligned spin $s_{0}=-\hbar/2$ is first accelerated and then decelerated, so that its velocity is greater than $\beta_{0}$ throughout the motion. This means that electrons with aligned spin ($+$) lag behind electrons with anti-aligned spin ($-$) when moving through the same region. We can compare their trajectories with the motion of electrons in the absence of the magnetic field using
$$\Delta z_{\pm}=z_{\pm}(t)-(z_{0}+v_{0}t)\,.$$
(19)
The plots for the numerical solutions with maximum magnetic field strength $\mathcal{B}_{0}=10\ \text{T}$ and radius $L/\pi=1\ \text{cm}$ are presented in Figure 2. The numerical integration was initialized at time $t_{0}=0$ for the electron position $z_{0}/L=-5.0$ and initial velocity $v_{0}=2\times 10^{8}\ \text{m/s}$. Qualitatively, the spread in distance between the two oppositely polarized electron types
$$\Delta z\equiv\Delta z_{-}-\Delta z_{+}\sim\frac{1}{\gamma^{2}}\,,$$
(20)
has the same behavior as found from the Foldy-Wouthuysen model tracking the classical limit of quantum magnetic moment dynamics; see the discussion in Section 3.2 of Wen:2017zer , which shows the same $1/\gamma^{2}$ dependence, unlike the classical model used there, which has $1/\gamma$ behavior. As was also pointed out in Wen:2017zer , distinguishing between magnetic moment models based on this experiment is a challenge. This is because the difference becomes substantial only for high gamma factors, when the flight time of the electrons is much shorter and thus the trajectory differences due to the magnetic moment interaction decrease. Any experiment would also be limited by how well the electrons can be polarized along the $z$-axis and by their displacement from this axis.
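The 1D transit described in this section can be sketched with a minimal Euler integration of Eq. (17); the integration scheme, step size, and standard SI constants below are our illustrative choices, not those of Ref. Wen:2017zer . The sketch shows the sub-m/s slowdown of an aligned-spin electron inside the field and the recovery of the initial velocity after the transit.

```python
import math

e, m, c = 1.602176634e-19, 9.1093837015e-31, 2.99792458e8
hbar = 1.054571817e-34
a = 7.2973525693e-3 / (2 * math.pi)
d_m = -(e / m) * (1 + a) / c      # electron: moment anti-parallel to spin
B0, L = 10.0, math.pi * 0.01      # field parameters used in the text

def dBz_dz(z):
    """Gradient of the on-axis loop field, Eq. (18)."""
    u = 1.0 + (math.pi * z / L) ** 2
    return -3.0 * B0 * (math.pi / L) ** 2 * z / u ** 2.5

def transit(s0, v0=2e8, dt=1e-13):
    """Euler-integrate Eq. (17) from z = -5L to z = +5L;
    return (minimum velocity, final velocity)."""
    z, beta = -5.0 * L, v0 / c
    vmin = v0
    while z < 5.0 * L:
        gamma2 = 1.0 / (1.0 - beta ** 2)
        beta += (d_m * s0 / m) * dBz_dz(z) / gamma2 * dt
        z += beta * c * dt
        vmin = min(vmin, beta * c)
    return vmin, beta * c

vmin, vfin = transit(+hbar / 2)   # aligned spin: slowed inside the field
print(2e8 - vmin)                 # sub-m/s slowdown near the field center
print(vfin - 2e8)                 # the initial velocity is recovered
```

The conservative character of the force is visible in the near-exact recovery of $v_{0}$; the residual difference is Euler integration error.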
IV Charged particle in linearly polarized plane wave field
IV.1 Problem definition
In our previous work Formanek:2019cga , we presented an analytical solution for neutral particle motion in an external plane wave field based on the covariant approach of Dipiazza2008 ; Hadad:2010mt . Here, the situation is more complicated because the particle is charged and feels a corresponding Lorentz force. Nevertheless, we will demonstrate that an analytical solution can still be found.
In the neutral particle case there is no Lorentz force, and the magnetic moment interaction is a first order effect. Previously, we discussed such interactions for hypothetical neutrino magnetic moments and for neutron beam control Formanek:2018mbv , which required enormous field intensities to produce a measurable effect. Electrons have the advantage of an orders of magnitude larger magnetic moment than all other stable charged particles, neutrons, or the yet-to-be-found magnetic moment of neutrinos. Thus, we look here specifically at electron dynamics, allowing the Stern-Gerlach force to affect the trajectory to a much greater extent than is the case for all other particles.
The relativistic effects for particles in external laser fields are controlled by a Lorentz-invariant parameter $a_{0}$, the so-called unitless normalized laser amplitude Mourou2006 , given by
$$a_{0}=\frac{e\sqrt{|A^{\mu}A_{\mu}|}}{mc}=\frac{e\mathcal{E}\lambdabar}{mc^{2}}=1.2\frac{\sqrt{I[10^{20}\text{ W/cm}^{2}]}}{\hbar\omega[1\text{ eV}]}\,,$$
(21)
where $I$ is the intensity of the laser and $\omega$ its frequency. This quantity compares the work done by the laser’s electric field $\mathcal{E}$ over one reduced wavelength $\lambdabar$ to the particle’s rest mass energy $mc^{2}$. For $a_{0}\sim 1$ we enter the relativistic regime, and for $a_{0}\gg 1$ the ultra-relativistic regime.
IV.2 Classical vs quantum dynamics
The classical particle dynamics should arise as a limit of relativistic quantum theory. The most common approach to deriving these equations in the classical limit relies on a Foldy-Wouthuysen transformation Foldy1950 of the spin-1/2 Dirac equation. This is followed by introducing a correspondence principle for the time evolution of observables such as position, kinematic momentum, and spin in the Heisenberg picture Silenko:2007wi .
In the situation of strong external EM fields for particles with gyromagnetic ratio $g\neq 2$, we have argued that the correct quantum relativistic description of particle dynamics is not necessarily based on the Dirac equation. As we pointed out in Ref. Steinmetz:2018ryf , a more natural approach to the anomalous magnetic moment is found in the Klein-Gordon-Pauli (KGP) equation, which incorporates a Pauli term, capturing the dynamics of the magnetic moment, into the Klein-Gordon equation.
This insight is not compatible with the currently most common approach, the use of the Dirac-Pauli (DP) equation, where the Dirac equation is supplemented by a Pauli term. The primary difference between these two approaches is that while in KGP the entire magnetic moment, and thus the spin dynamics, is described by a single mathematical object, the DP approach breaks the magnetic moment apart into a natural $g=2$ part embedded in the spinor structure and an anomalous part described by the Pauli term.
In prior work Steinmetz:2018ryf we presented an argument that the difference between these two approaches becomes apparent in strong EM fields, which can be found around magnetars or in high-$Z$ atoms. However, for the external fields considered in this work the difference between the DP and KGP formulations is not apparent, consistent with our prior assumption to neglect subleading spin dynamics effects; see Section II.
The parameter space controlling the classical domain is described in Ref. Khokonov . Apart from the normalized laser amplitude $a_{0}$ of Eq. (21), we invoke the Lorentz invariant parameter
$$a_{q}=\frac{\hbar(k\cdot p)}{m^{2}c^{2}}\,,$$
(22)
where $k^{\mu}$ and $p^{\mu}$ are the 4-momenta of the photon and the electron, respectively. In this section we consider the example of an electron at rest, $a_{q}=\hbar\omega/mc^{2}\approx 10^{-6}$ for 1 eV visible laser light. From the diagram in Ref. Khokonov we see that any $a_{0}$ satisfying $\ln a_{0}<4$ allows us to treat the problem classically. Later we consider the example of $a_{0}=0.1$, which is squarely in the classical domain. In this work we do not consider the radiation of the electrons due to their motion.
In the literature one often sees the Lorentz invariant parameter $\chi$ as defined by Ritus1985
$$\chi=\frac{e\hbar\sqrt{|u\cdot F\cdot F\cdot u|}}{m^{2}c^{3}}=\left.\frac{\mathcal{E}}{\mathcal{E}_{\text{S}}}\right|_{CF}=a_{0}a_{q}\\
=5.9\times 10^{-2}E[\text{GeV}]\sqrt{I[10^{20}\text{W/cm}^{2}]}\,,$$
(23)
where $E$ is the electron energy. This parameter represents the electric field strength $\mathcal{E}$ in units of the critical Schwinger field in the co-moving frame of the electron; for an electron $\mathcal{E}_{\text{S}}=1.3\times 10^{18}$ V/m. This parameter is the product of the two previous ones, Eq. (21) and Eq. (22). It has been shown that quantum effects become non-negligible already for $\chi\gtrsim 0.1$ Uggerhoj .
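For orientation, the engineering forms of Eqs. (21)-(23) can be evaluated in a few lines for an electron at rest in a 1 eV, $10^{20}\ \text{W/cm}^{2}$ field; note in passing that the value $a_{0}=0.1$ used later corresponds to an intensity of roughly $7\times 10^{17}\ \text{W/cm}^{2}$ at $\hbar\omega=1$ eV.

```python
import math

def a0(I_1e20_Wcm2, hbar_omega_eV):
    """Eq. (21): unitless normalized laser amplitude."""
    return 1.2 * math.sqrt(I_1e20_Wcm2) / hbar_omega_eV

def a_q(hbar_omega_eV):
    """Eq. (22) for an electron initially at rest: a_q = hbar*omega/(m c^2)."""
    return hbar_omega_eV / 0.51099895e6   # m c^2 = 0.511 MeV

# electron at rest in a 1 eV, 1e20 W/cm^2 laser field
A0 = a0(1.0, 1.0)
Aq = a_q(1.0)
chi = A0 * Aq            # Eq. (23): chi = a0 * a_q

print(A0, Aq, chi)       # chi far below 0.1: classical domain
```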
IV.3 Dynamical equations
The covariant 4-potential for a plane wave field is given by
$$A^{\mu}=\varepsilon^{\mu}\mathcal{A}_{0}f(\xi),\quad\xi=\frac{\omega}{c}\hat{k}\cdot x\;,$$
(24)
where the unitless wave 4-vector $\hat{k}^{\mu}$ is light-like and orthogonal to the space-like polarization vector $\varepsilon^{\mu}$. We impose the following constraints which are satisfied by plane waves
$$\quad\hat{k}^{2}=0,\quad\hat{k}\cdot\varepsilon=0,\quad\varepsilon^{2}=-1\,.$$
(25)
The amplitude of the field is given by $\mathcal{A}_{0}$, and $\xi$ denotes its invariant phase. The oscillatory part of the laser field and the pulse envelope are encoded in the function $f(\xi)$, unique to the laser.
Substituting the 4-potential into the EM field tensor yields
$$F^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}=\frac{\mathcal{A}_{0}\omega}{c}f^{\prime}(\xi)(\hat{k}^{\mu}\varepsilon^{\nu}-\varepsilon^{\mu}\hat{k}^{\nu})\;.$$
(26)
In our notation, prime marks (such as $f^{\prime}$) are used to denote derivatives with respect to the phase $\xi$.
The properties of Eq. (25) ensure that the contraction of the $F^{\mu\nu}$ tensor with $\hat{k}^{\mu}$ is zero. It is also useful to calculate
$$(s\cdot\partial)F^{*\mu\nu}=\frac{\mathcal{A}_{0}\omega^{2}}{c^{2}}f^{\prime\prime}(\xi)(\hat{k}\cdot s)\epsilon^{\mu\nu\alpha\beta}\hat{k}_{\alpha}\varepsilon_{\beta}\;,$$
(27)
since this term appears in both the particle and spin dynamics equations (8) and (11). Notice that contracting Eq. (27) with either $\hat{k}_{\mu}$ or $\varepsilon_{\mu}$ yields zero because of the antisymmetry of $\epsilon^{\mu\nu\alpha\beta}$. This means that in the projections onto $\hat{k}^{\mu}$ and $\varepsilon^{\mu}$ the Stern-Gerlach term does not play a role.
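These contraction properties are easy to verify numerically. The sketch below builds the tensor structures of Eqs. (26) and (27) with the scalar prefactors dropped, and checks the vanishing projections.

```python
import itertools
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])         # metric (+,-,-,-)

# Levi-Civita symbol with eps[0,1,2,3] = +1
eps = np.zeros((4, 4, 4, 4))
for p in itertools.permutations(range(4)):
    inv = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
    eps[p] = -1.0 if inv % 2 else 1.0

k_up = np.array([1.0, 0.0, 0.0, 1.0])          # light-like wave vector
e_up = np.array([0.0, 1.0, 0.0, 0.0])          # polarization, Eq. (25)
k_dn, e_dn = eta @ k_up, eta @ e_up

# tensor structures of Eq. (26) and Eq. (27), scalar prefactors dropped
F = np.outer(k_up, e_up) - np.outer(e_up, k_up)
T = np.einsum('mnab,a,b->mn', eps, k_dn, e_dn)

print(np.allclose(F @ k_dn, 0))   # F^{mu nu} k_nu = 0
print(np.allclose(k_dn @ T, 0))   # k-projection of the SG structure vanishes
print(np.allclose(e_dn @ T, 0))   # epsilon-projection vanishes too
```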
The dynamical equations (8) and (11) in terms of the plane wave potential (24) are then
$$\displaystyle\dot{u}^{\mu}$$
$$\displaystyle=\frac{e\mathcal{A}_{0}\omega}{mc}f^{\prime}(\xi)(\hat{k}^{\mu}(\varepsilon\cdot u)-\varepsilon^{\mu}(\hat{k}\cdot u))$$
$$\displaystyle-\frac{\mathcal{A}_{0}d_{m}\omega^{2}}{mc^{2}}f^{\prime\prime}(\xi)(\hat{k}\cdot s)\epsilon^{\mu\nu\alpha\beta}u_{\nu}\hat{k}_{\alpha}\varepsilon_{\beta}\;,$$
(28)
$$\displaystyle\dot{s}^{\mu}$$
$$\displaystyle=\omega d_{m}\mathcal{A}_{0}f^{\prime}(\xi)(\hat{k}^{\mu}\varepsilon\cdot s-\varepsilon^{\mu}\hat{k}\cdot s)-u^{\mu}(u\cdot F\cdot s)\frac{\widetilde{a}}{c^{2}}$$
$$\displaystyle-\frac{\mathcal{A}_{0}d_{m}\omega^{2}}{mc^{2}}f^{\prime\prime}(\xi)(\hat{k}\cdot s)\epsilon^{\mu\nu\alpha\beta}s_{\nu}\hat{k}_{\alpha}\varepsilon_{\beta}\;.$$
(29)
These two equations are coupled through the Stern-Gerlach interaction; the quantity of interest is the function $\hat{k}\cdot s(\tau)$, which appears in the coupling term. In Section IV.4 we present an analytical solution for this function, and in Section IV.5 we find the analytical solution for the 4-velocity $u^{\mu}(\tau)$.
IV.4 Spin precession in $\hat{k}^{\mu}$ and $\varepsilon^{\mu}$ projections
As we mentioned above, the coupling through the Stern-Gerlach force between the equation of motion Eq. (28) and the spin dynamics Eq. (29) disappears when projected onto $\hat{k}^{\mu}$ and $\varepsilon^{\mu}$. In that case the motion and spin precession are governed by the TBMT equations alone Bargmann:1959gz . The situation here is more complex than the neutral particle case presented in Formanek:2019cga , as the equations of motion gain additional terms proportional to the particle charge. In the charged case, the projection of the 4-velocity onto the laser polarization, $\varepsilon\cdot u(\tau)$, is no longer a constant of motion. Instead, multiplying Eq. (28) by $\varepsilon^{\mu}$ we obtain
$$\varepsilon\cdot\dot{u}=\frac{d}{d\tau}(\varepsilon\cdot u)=\frac{e\mathcal{A}_{0}\omega}{mc}\hat{k}\cdot u(0)f^{\prime}(\xi)\;,$$
(30)
which can be integrated as
$$\varepsilon\cdot u(\tau)=\varepsilon\cdot u(0)+\frac{e}{m}\mathcal{A}_{0}(f(\xi(\tau))-f(\xi_{0}))\;.$$
(31)
The projection $\varepsilon\cdot u(\tau)$ becomes sensitive to the laser profile as a function of the laser phase and is proportional to $e/m$.
Similarly to the neutral particle case, the projection of the wave 4-vector onto the 4-velocity, $\hat{k}\cdot u$, remains a constant of motion, as can be seen by multiplying Eq. (28) by $\hat{k}^{\mu}$, yielding
$$\hat{k}\cdot\dot{u}=\frac{d}{d\tau}(\hat{k}\cdot u)=0,\quad\Rightarrow\quad\hat{k}\cdot u=\hat{k}\cdot u(0)\;.$$
(32)
This expression also allows us to find the relationship between the proper time of the particle and phase of the wave $\xi$ as
$$\frac{d\xi}{d\tau}=\frac{\omega}{c}\frac{d}{d\tau}(\hat{k}\cdot x)=\frac{\omega}{c}\hat{k}\cdot u(0),\;\Rightarrow\;\xi=\frac{\omega}{c}(\hat{k}\cdot u(0))\tau+\xi_{0}\;.$$
(33)
Now we turn our attention to the spin dynamics of Eq. (29). Using the integral of motion Eq. (32) we can evaluate the contraction of torque with $\hat{k}^{\mu}$ resulting in
$$\hat{k}\cdot\dot{s}=-(\hat{k}\cdot u(0))(u\cdot F\cdot s)\frac{\widetilde{a}}{c^{2}}\,.$$
(34)
By taking another proper time derivative of this equation, and after some algebra which also uses the projection $\varepsilon\cdot\dot{s}$ (for details see Formanek:2019cga ), we arrive at a second order differential equation for the function $\hat{k}\cdot s(\tau)$
$$\hat{k}\cdot\ddot{s}=\frac{\ddot{f}(\xi(\tau))}{\dot{f}(\xi(\tau))}(\hat{k}\cdot\dot{s})-\frac{\widetilde{a}^{2}\mathcal{A}_{0}^{2}}{c^{2}}\dot{f}^{2}(\xi(\tau))(\hat{k}\cdot s)\;.$$
(35)
We introduce a set of known initial conditions
$$\displaystyle\hat{k}\cdot s(\tau=0)$$
$$\displaystyle=\hat{k}\cdot s(0)\,,$$
(36)
$$\displaystyle\hat{k}\cdot\dot{s}(\tau=0)$$
$$\displaystyle=-(\hat{k}\cdot u(0))(u(0)\cdot F\cdot s(0))\frac{\widetilde{a}}{c^{2}}\;,$$
(37)
which can be used to construct a solution
$$\hat{k}\cdot s(\tau)=\hat{k}\cdot s(0)\cos\left[\frac{\widetilde{a}\mathcal{A}_{0}}{c}\psi(\tau)\right]-\frac{W(0)}{c}\sin\left[\frac{\widetilde{a}\mathcal{A}_{0}}{c}\psi(\tau)\right]\;,$$
(38)
where, for simplicity, we denote the difference between the laser amplitude profile at a given time and its initial value by
$$\psi(\tau)\equiv f(\xi(\tau))-f(\xi_{0})\,.$$
(39)
We note that $\psi(\tau)$ also appears in Eq. (31). The function $W(\tau)$ is proportional to $u\cdot F\cdot s$ and is defined as
$$W(\tau)\equiv(\hat{k}\cdot u(0))(\varepsilon\cdot s(\tau))-(\varepsilon\cdot u(\tau))(\hat{k}\cdot s(\tau))\;.$$
(40)
Analogously, we can obtain the solution for $\varepsilon\cdot s(\tau)$ as
$$\varepsilon\cdot s(\tau)=\left(\varepsilon\cdot s(0)+\frac{\hat{k}\cdot s(0)}{\hat{k}\cdot u(0)}\frac{e\mathcal{A}_{0}}{m}\psi(\tau)\right)\cos\left[\frac{\widetilde{a}\mathcal{A}_{0}}{c}\psi(\tau)\right]\\
+\left(\frac{c\hat{k}\cdot s(0)}{\hat{k}\cdot u(0)}-\frac{W(0)}{c}\frac{\varepsilon\cdot u(\tau)}{\hat{k}\cdot u(0)}\right)\sin\left[\frac{\widetilde{a}\mathcal{A}_{0}}{c}\psi(\tau)\right]\;.$$
(41)
Note that the precession of the spin projections is governed only by the value of the anomalous magnetic moment $\widetilde{a}=ae/m$. For no magnetic anomaly the precession vanishes:
$$\displaystyle\hat{k}\cdot s(\tau)$$
$$\displaystyle=\hat{k}\cdot s(0)\,,$$
(42)
$$\displaystyle\varepsilon\cdot s(\tau)$$
$$\displaystyle=\varepsilon\cdot s(0)+\frac{\hat{k}\cdot s(0)}{\hat{k}\cdot u(0)}\frac{e\mathcal{A}_{0}}{m}\psi(\tau)\,.$$
(43)
Let us finally consider a particle with an initial configuration (at $\tau=0$) long before the arrival of the pulse, such that the envelope function is $f(\xi_{0})=0$. Then, long after the pulse leaves, $\psi(\tau\rightarrow\infty)=0$, and the projections of the wave 4-vector and polarization onto the spin relax back to their original values:
$$\displaystyle\hat{k}\cdot s(\tau\rightarrow\infty)$$
$$\displaystyle=\hat{k}\cdot s(0)\,,$$
(44)
$$\displaystyle\varepsilon\cdot s(\tau\rightarrow\infty)$$
$$\displaystyle=\varepsilon\cdot s(0)\,.$$
(45)
These quantities are only reversibly altered by the presence of a plane wave, excluding any deviations that would arise if the particle radiates due to its motion, an effect not considered in this work.
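The relaxation property can be checked with a short sketch of Eq. (38); the pulse profile and the dimensionless parameter values below are illustrative choices, not quantities taken from the paper.

```python
import math

# Illustrative dimensionless parameter values (not from the paper)
atilA0_c = 0.3          # the combination (a~ A0 / c) in Eq. (38)
ks0 = 0.7               # k.s(0)
W0_c = 0.45             # W(0)/c

def f(xi):
    """Illustrative pulse: oscillation under a Gaussian envelope."""
    return math.exp(-((xi / 20.0) ** 2)) * math.sin(xi)

def k_dot_s(psi):
    """Eq. (38) as a function of psi = f(xi) - f(xi0)."""
    return ks0 * math.cos(atilA0_c * psi) - W0_c * math.sin(atilA0_c * psi)

xi0 = -1000.0                      # long before the pulse, f(xi0) ~ 0
psi_after = f(1000.0) - f(xi0)     # long after the pulse: psi -> 0

print(k_dot_s(psi_after))          # returns to k.s(0), as in Eq. (44)
print(k_dot_s(1.0))                # mid-pulse the projection differs
```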
IV.5 Particle 4-velocity $u^{\mu}(\tau)$
Our ultimate goal in discussing this test case of the charged particle with spin under the influence of a plane wave is to derive how the particle’s trajectory and motion are altered, especially by the presence of the anomalous magnetic moment. Our first step in deriving the 4-velocity directly is to construct another integral of motion by considering the 4-vector
$$L^{\mu}\equiv\epsilon^{\mu\nu\alpha\beta}u_{\nu}(0)\hat{k}_{\alpha}\varepsilon_{\beta}$$
(46)
and projecting the equation of motion Eq. (28) along the direction of $L^{\mu}$, yielding
$$L\cdot\dot{u}(\tau)=\frac{d_{m}\mathcal{A}_{0}\omega^{2}}{mc^{2}}f^{\prime\prime}(\xi)(\hat{k}\cdot s)(\hat{k}\cdot u(0))^{2}\,,$$
(47)
where we used the contraction identity
$$\epsilon^{\mu\nu\alpha\beta}\epsilon_{\mu\rho\gamma\delta}=-\left|\begin{matrix}\delta^{\nu}_{\rho}&\delta^{\nu}_{\gamma}&\delta^{\nu}_{\delta}\\
\delta^{\alpha}_{\rho}&\delta^{\alpha}_{\gamma}&\delta^{\alpha}_{\delta}\\
\delta^{\beta}_{\rho}&\delta^{\beta}_{\gamma}&\delta^{\beta}_{\delta}\\
\end{matrix}\right|\,,$$
(48)
and the constant of motion Eq. (32). Equation (47) can be formally integrated with the initial condition $L\cdot u(0)=0$, which holds due to the antisymmetry of $\epsilon^{\mu\nu\alpha\beta}$ in Eq. (46). This results in
$$L\cdot u(\tau)=-h(\tau)(\hat{k}\cdot u(0))^{2}\,.$$
(49)
The unitless integral $h(\tau)$ is given by
$$h(\tau)\equiv-\frac{d_{m}\mathcal{A}_{0}\omega^{2}}{mc^{2}}\int_{\tau_{0}=0}^{\tau}\hat{k}\cdot s(\widetilde{\tau})f^{\prime\prime}(\xi(\widetilde{\tau}))d\widetilde{\tau}\,,$$
(50)
and depends on the known solution for the spin projection $\hat{k}\cdot s(\tau)$ from Eq. (38). For a constant $\hat{k}\cdot s(\tau)=\hat{k}\cdot s(0)$, which is realized in the case of no magnetic anomaly $\widetilde{a}=0$, this integral can be evaluated as
$$h(\tau)=-\frac{d_{m}\mathcal{A}_{0}\omega}{mc}\frac{\hat{k}\cdot s(0)}{\hat{k}\cdot u(0)}[f^{\prime}(\xi(\tau))-f^{\prime}(\xi_{0})]\,,$$
(51)
satisfying the initial condition $h(\tau=0)=0$. In this situation the function $h(\tau)$ is oscillatory, as it is proportional to the derivative $f^{\prime}(\xi(\tau))$. Thus, the value of $h(\tau)$ for no magnetic anomaly does not accumulate over the interaction with many plane wave cycles.
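A numerical sketch confirms that the integral of Eq. (50) reduces to the closed form Eq. (51) when $\hat{k}\cdot s$ is constant; all prefactors are set to unity here and $f(\xi)=\sin\xi$ is an illustrative profile, so that $\xi=\tau$ by Eq. (33).

```python
import math

# Dimensionless sketch: omega = c = k.u(0) = d_m = A0 = m = 1, xi0 = 0,
# so xi = tau by Eq. (33); f(xi) = sin(xi) is an illustrative profile.
tau_max, N = 10.0, 100000
dtau = tau_max / N
ks0 = 1.0                                  # constant k.s for zero anomaly

fpp = lambda xi: -math.sin(xi)             # f''(xi) for f = sin
fp = lambda xi: math.cos(xi)               # f'(xi)

# Eq. (50): h = -(unit prefactor) * integral of (k.s) f''(xi(tau)) dtau
h_numeric = -sum(ks0 * fpp(i * dtau) * dtau for i in range(N))
# Eq. (51): closed form with the same unit prefactors
h_analytic = -ks0 * (fp(tau_max) - fp(0.0))

print(h_numeric, h_analytic)               # agree to integration accuracy
```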
This function is responsible for irreversible effects, in which the particle is changed after the passage of an EM plane wave, but only when an anomalous magnetic moment is present. In Section IV.6 we will discuss under which circumstances the integral for $h(\tau)$ in Eq. (50) is cumulative for a particle initially at rest.
We will look for the 4-velocity by assuming an ansatz
$$u^{\mu}(\tau)=u^{\mu}(0)+C_{1}(\tau)\varepsilon^{\mu}+C_{2}(\tau)\hat{k}^{\mu}+C_{3}(\tau)L^{\mu}\,.$$
(52)
The norm of the last 4-vector is manifestly negative
$$L^{2}=-(\hat{k}\cdot u(0))^{2}\,,$$
(53)
and therefore this 4-vector is space-like.
The solution ansatz Eq. (52) automatically preserves the projection $\hat{k}\cdot u(0)$ defined in Eq. (32) as a constant of motion. The integral of motion for $\varepsilon\cdot u(\tau)$ given by Eq. (31) yields
$$C_{1}(\tau)=-(\varepsilon\cdot u(\tau)-\varepsilon\cdot u(0))=-\frac{e}{m}\mathcal{A}_{0}\psi(\tau)\,.$$
(54)
If we contract the ansatz Eq. (52) with the 4-vector $L^{\mu}$ we obtain for the coefficient $C_{3}(\tau)$
$$C_{3}(\tau)=h(\tau)\,.$$
(55)
Finally, by invoking the condition $u^{2}=u^{2}(0)=c^{2}$ we get for the coefficient $C_{2}(\tau)$
$$C_{2}(\tau)=\frac{1}{2}h^{2}(\tau)\hat{k}\cdot u(0)\\
+\frac{e}{m}\frac{\mathcal{A}_{0}\psi(\tau)}{\hat{k}\cdot u(0)}\left(\varepsilon\cdot u(0)+\frac{1}{2}\frac{e}{m}\mathcal{A}_{0}\psi(\tau)\right)\,.$$
(56)
By substituting all the coefficients back into our ansatz Eq. (52) we obtain the final result
$$u^{\mu}(\tau)=u^{\mu}(0)-\frac{e}{m}\mathcal{A}_{0}\psi(\tau)\varepsilon^{\mu}+\frac{1}{2}h^{2}(\tau)\hat{k}\cdot u(0)\hat{k}^{\mu}\\
+\frac{e}{m}\frac{\mathcal{A}_{0}\psi(\tau)}{\hat{k}\cdot u(0)}\left(\varepsilon\cdot u(0)+\frac{1}{2}\frac{e}{m}\mathcal{A}_{0}\psi(\tau)\right)\hat{k}^{\mu}\\
+h(\tau)\epsilon^{\mu\nu\alpha\beta}u_{\nu}(0)\hat{k}_{\alpha}\varepsilon_{\beta}\;.$$
(57)
It can be easily checked that the solution Eq. (57) solves the dynamical equation Eq. (28). Since a first order differential equation has a unique solution for given initial conditions, this is also the general solution for the particle motion. Moreover, this solution has a very clear limit: if the magnetic dipole charge $d_{m}$ vanishes, then $h(\tau)$ vanishes, removing all effects of spin on the particle’s motion.
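As a sanity check, a dimensionless numerical sketch (with $c=1$ and arbitrary illustrative values for $\psi$, $h$ and the initial 4-velocity, none taken from the paper) verifies that the coefficients of Eqs. (54)-(56) preserve the normalization $u^{2}=c^{2}$, and that $L^{2}=-(\hat{k}\cdot u(0))^{2}$ as in Eq. (53).

```python
import itertools
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b

eps = np.zeros((4, 4, 4, 4))
for p in itertools.permutations(range(4)):
    inv = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
    eps[p] = -1.0 if inv % 2 else 1.0

k = np.array([1.0, 0.0, 0.0, 1.0])          # k^2 = 0, k.pol = 0
pol = np.array([0.0, 1.0, 0.0, 0.0])        # pol^2 = -1
u0 = np.array([0.0, 0.1, 0.2, 0.3])
u0[0] = np.sqrt(1.0 + u0[1:] @ u0[1:])      # enforce u0^2 = c^2 = 1

# Eq. (46): L^mu (overall sign irrelevant for this check)
L4 = np.einsum('mnab,n,a,b->m', eps, eta @ u0, eta @ k, eta @ pol)

ku0, eu0 = dot(k, u0), dot(pol, u0)
eAm, psi, h = 0.8, 0.37, 0.12   # illustrative eA0/m, psi(tau), h(tau)

C1 = -eAm * psi                                                   # Eq. (54)
C2 = 0.5 * h**2 * ku0 + (eAm * psi / ku0) * (eu0 + 0.5 * eAm * psi)  # Eq. (56)
C3 = h                                                            # Eq. (55)
u = u0 + C1 * pol + C2 * k + C3 * L4                              # Eq. (52)

print(dot(u0, u0), dot(u, u))   # both equal c^2 = 1
print(dot(L4, L4), -ku0**2)     # Eq. (53)
```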
Once we set the magnetic dipole charge $d_{m}$ to zero, we effectively uncouple the equations for particle motion and spin dynamics, because the Stern-Gerlach force is no longer present. In this case $h(\tau)\equiv 0$ and the solution Eq. (57) reduces to
$$u^{\mu}(\tau)=u^{\mu}(0)-\frac{e}{m}\mathcal{A}_{0}\psi(\tau)\varepsilon^{\mu}\\
+\frac{e}{m}\frac{\mathcal{A}_{0}\psi(\tau)}{\hat{k}\cdot u(0)}\left(\varepsilon\cdot u(0)+\frac{1}{2}\frac{e}{m}\mathcal{A}_{0}\psi(\tau)\right)\hat{k}^{\mu}\,,$$
(58)
which is the well-known classical solution for a spinless charged particle in an external plane wave field Itzykson2005 .
We can easily evaluate the invariant acceleration as the square of Eq. (28). After contraction of the antisymmetric tensors using Eq. (48) we get
$$\dot{u}^{2}(\tau)=-\frac{\mathcal{A}_{0}^{2}\omega^{2}}{m^{2}c^{2}}(\hat{k}\cdot u(0))^{2}\bigg{(}e^{2}f^{\prime}(\xi)^{2}\\
\left.+d_{m}^{2}\frac{\omega^{2}}{c^{2}}f^{\prime\prime}(\xi)^{2}(\hat{k}\cdot s(\tau))^{2}\right)\;.$$
(59)
This expression depends only on the solution for $\hat{k}\cdot s(\tau)$ given by Eq. (38) and is manifestly negative, as $\dot{u}^{\mu}$ is a space-like vector. The invariant acceleration is therefore only a function of the alignment and orientation of the spin 4-vector relative to the plane wave 4-vector, which makes physical sense, as the Stern-Gerlach force is sensitive to the alignment of the spin with the external magnetic field.
Finally, in order to assert the uniqueness of the solution (57), we comment on a basis set which can be constructed in Minkowski spacetime from the available 4-vectors. A good start is the selection
$$\displaystyle u^{\mu}(0),\quad s^{\mu}(0)\,$$
$$\displaystyle F^{\mu}\equiv\epsilon^{\mu\nu\alpha\beta}u_{\nu}(0)\hat{k}_{\alpha}s_{\beta}(0)\,$$
$$\displaystyle G^{\mu}\equiv\epsilon^{\mu\nu\alpha\beta}u_{\nu}(0)\varepsilon_{\alpha}s_{\beta}(0)\,.$$
(60)
These four 4-vectors are all mutually orthogonal except for the product
$$F\cdot G=(\varepsilon\cdot s(0))(\hat{k}\cdot s(0))c^{2}+(\varepsilon\cdot u(0))(\hat{k}\cdot u(0))s^{2}\,,$$
(61)
which allows us to use Gram-Schmidt orthogonalization to define a new 4-vector $H^{\mu}$ given by
$$H^{\mu}\equiv G^{\mu}-F\cdot G\frac{F^{\mu}}{F^{2}}\,,$$
(62)
which together with $u^{\mu}(0)$, $s^{\mu}(0)$ and $F^{\mu}$ forms an orthogonal basis. This basis set becomes degenerate if either $F^{\mu}$ or $G^{\mu}$ is identically zero, which happens if the quantity
$$\Omega\equiv\epsilon^{\mu\nu\alpha\beta}\hat{k}_{\mu}u_{\nu}(0)\varepsilon_{\alpha}s_{\beta}(0)\,,$$
(63)
vanishes. In that case another 4-vector, dependent on the specific situation, has to be included to complete the orthogonal basis set. In general we can always construct a basis composed of the time-like vector $u^{\mu}(0)$ and three mutually orthogonal space-like vectors.
Once the basis set is defined, all 4-vectors can be expressed as linear combinations of the basis elements, including the time-dependent 4-velocity $u^{\mu}(\tau)$. After solving the differential equations for the expansion coefficients, we always recover the solution of Eq. (57), which we obtained by choosing the right ansatz.
IV.6 Case of a particle initially at rest
In this section we will address the situation of a particle initially at rest, $u^{\mu}(0)=(c,0,0,0)$, with respect to the laboratory observer. This case is of particular importance because it gives us an idea of how the particle will react to the external field in the co-moving frame for situations involving a beam of charged particles subjected to a plane wave. For motion where the wave propagation and the particle beam are along the same axis, the results described here will differ only by the application of a Lorentz boost. Without loss of generality, we can choose to orient the coordinate system so that the wave unit vector is along the $z$-axis, $\hat{\boldsymbol{k}}=\hat{z}$, and the polarization unit vector is along the $x$-axis, $\boldsymbol{\varepsilon}=\hat{x}$. The initial spin is oriented in an arbitrary direction $\boldsymbol{s}_{0}=(s_{0x},s_{0y},s_{0z})$. The associated 4-vectors read
$$\displaystyle\hat{k}^{\mu}=(1,$$
$$\displaystyle 0,0,1),\quad\varepsilon^{\mu}=(0,1,0,0)\,,$$
$$\displaystyle s^{\mu}(0)$$
$$\displaystyle=(0,s_{0x},s_{0y},s_{0z})\,.$$
(64)
Given the general 4-velocity solution from Eq. (57), in the special case of a particle initially at rest and a plane wave described by the 4-vectors of Eq. (64) we get
$$u^{\mu}(\tau)=c\left(\begin{matrix}1+\frac{1}{2}(h^{2}(\tau)+a_{0}^{2}\psi^{2}(\tau))\\
-a_{0}\psi(\tau)\\
h(\tau)\\
\frac{1}{2}(h^{2}(\tau)+a_{0}^{2}\psi^{2}(\tau))\end{matrix}\right)\,.$$
(65)
Here the terms with $a_{0}\psi(\tau)$ are the standard solution for the Lorentz interaction with the plane wave fields, and the terms with $h(\tau)$ correspond to the magnetic moment interaction. We see that a particle initially at rest will move in the direction normal to the plane wave propagation; we will refer to this velocity gain as a drift velocity induced by the plane wave. For a better idea of the geometry of this situation, see Figure 3. We devote the rest of this section to the study of the forces acting on the particle.
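As a consistency check, the 4-velocity of Eq. (65) must satisfy the mass-shell condition $u\cdot u=c^{2}$ for any values of $h(\tau)$ and $\psi(\tau)$: the $h^{2}+a_{0}^{2}\psi^{2}$ contributions cancel exactly in the Minkowski norm. A minimal numerical sketch (the sample values of $h$ and $a_{0}\psi$ below are arbitrary, chosen only for illustration):

```python
# Check that the 4-velocity of Eq. (65) preserves u.u = c^2.
# The values of h(tau) and a0*psi(tau) are arbitrary test inputs.
c = 1.0                      # units where c = 1
h, a0psi = 0.37, 0.12        # h(tau) and a0*psi(tau) at some proper time

K = 0.5 * (h**2 + a0psi**2)
u = [c * (1.0 + K), -c * a0psi, c * h, c * K]   # (u^0, u^x, u^y, u^z)

# Minkowski norm with (+,-,-,-) signature
norm = u[0]**2 - (u[1]**2 + u[2]**2 + u[3]**2)
print(norm)  # equals c^2 up to rounding, independent of h and psi
```

Algebraically, $(1+K)^{2}-K^{2}-h^{2}-a_{0}^{2}\psi^{2}=1+2K-(h^{2}+a_{0}^{2}\psi^{2})=1$, so the cancellation is exact.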
We start by investigating the function $h(\tau)$ from Eq. (50), which governs the magnetic moment interaction. With our choice of the laser-particle configuration given by Eq. (64), the function $W(0)$ from Eq. (40) and the projection $\hat{k}\cdot s(0)$ are
$$\displaystyle W(0)$$
$$\displaystyle=-cs_{0x}\,,$$
(66)
$$\displaystyle\hat{k}\cdot s(0)$$
$$\displaystyle=-s_{0z}\,.$$
(67)
These two constants control the spin projection $\hat{k}\cdot s(\tau)$ of Eq. (38) yielding
$$\hat{k}\cdot s(\tau)=-s_{0z}\cos\left[\frac{\widetilde{a}\mathcal{A}_{0}}{c}\psi(\tau)\right]+s_{0x}\sin\left[\frac{\widetilde{a}\mathcal{A}_{0}}{c}\psi(\tau)\right]\,.$$
(68)
We see that this function is zero and remains zero in the case of the initial spin oriented only along the $y$-axis, i.e., for $s_{0x}=s_{0z}=0$. When that happens, the integral $h(\tau)$ of Eq. (50) is identically zero and there is no Stern-Gerlach force acting on the particle. The solution for the particle's motion then reduces to the classical plane wave solution for the spinless electron of Eq. (58).
The constants controlling the arguments of the sine and cosine functions can be rewritten for a charged particle as
$$\frac{\widetilde{a}\mathcal{A}_{0}}{c}=\frac{ae\mathcal{A}_{0}}{mc}=-aa_{0}\,,$$
(69)
where $a$ is the anomalous magnetic moment and $a_{0}=|e|\mathcal{A}_{0}/mc$ is the unitless normalized laser amplitude defined earlier in Eq. (21) as the parameter controlling the relativistic effects.
We will model the plane wave as a sine wave which is adiabatically switched on as the wave arrives at the particle position and then adiabatically switched off when the wave leaves. Throughout the motion the function $f(\xi)$ and all its derivatives are bounded by 1 and the initial condition at $\tau=0$ is
$$f(\xi_{0})=f^{\prime}(\xi_{0})=f^{\prime\prime}(\xi_{0})=0\,.$$
(70)
With this, we have for $\psi(\tau)$ from Eq. (39)
$$\psi(\tau)=f(\xi(\tau))-f(\xi_{0})=f(\xi(\tau))\,.$$
(71)
In this case, the integral for $h(\tau)$ in Eq. (50) reads
$$h(\xi)=(1+a)a_{0}\frac{\omega}{mc^{2}}\int_{\xi_{0}}^{\xi(\tau)}\bigg{[}-s_{0z}\cos[aa_{0}f(\widetilde{\xi})]\\
-s_{0x}\sin[aa_{0}f(\widetilde{\xi})]\bigg{]}f^{\prime\prime}(\widetilde{\xi})d\widetilde{\xi}\,,$$
(72)
which we will split in the following analysis into two parts
$$h(\xi)\equiv h_{1}(\xi)+h_{2}(\xi)\,,$$
(73)
corresponding to the first and second terms present in the integrand. For an electron, we typically have $aa_{0}\ll 1$ in which case we can evaluate the first part of the integral in Eq. (72) as
$$h_{1}(\xi)=-(1+a)a_{0}\frac{\omega s_{0z}}{mc^{2}}f^{\prime}(\xi)+O(a^{2}a_{0}^{2})\,,$$
(74)
and we see that the function $h_{1}(\xi)$ is in this case oscillatory for an oscillatory wave. The absolute value of this expression can be bounded by
$$|h_{1}(\xi)(aa_{0}\ll 1)|\leq(1+a)a_{0}\frac{\omega|s_{0z}|}{mc^{2}}\approx a_{0}\times 10^{-6}\,,$$
(75)
where the value is given for the electron's initial spin aligned with the $z$-direction, $s_{0z}=\pm\hbar/2$, and 1 eV laser light. Note that this term is present even for zero anomalous magnetic moment $a=0$, but since it oscillates around zero it does not accumulate over many laser field oscillations.
The second part of the integral for $h(\xi)$ Eq. (72) can be evaluated in the lowest order in $aa_{0}$ as
$$h_{2}(\xi)=-aa_{0}^{2}\frac{\omega s_{0x}}{mc^{2}}\int_{\xi_{0}}^{\xi(\tau)}f(\widetilde{\xi})f^{\prime\prime}(\widetilde{\xi})d\widetilde{\xi}+O(a^{2}a_{0}^{3})\,.$$
(76)
This integral starts accumulating only once the particle reaches the phase $\xi(\tau_{a})=\xi_{a}$, where $\tau_{a}$ is the time when the pulse arrives and the interaction is switched on. Neglecting the time interval of the laser plane wave ramp-on as short compared to the duration of the pulse, and approximating $f(\xi)=\sin(\xi)$, we have
$$h_{2}(\xi)=aa_{0}^{2}\frac{\omega s_{0x}}{mc^{2}}\left(\frac{\xi-\xi_{a}}{2}-\frac{1}{4}\sin 2\xi\right)+O(a^{2}a_{0}^{3})\,.$$
(77)
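The step from Eq. (76) to Eq. (77) rests on the identity $\int_{\xi_{a}}^{\xi}f\,f^{\prime\prime}\,d\tilde{\xi}=-\left[(\xi-\xi_{a})/2-(\sin 2\xi-\sin 2\xi_{a})/4\right]$ for $f=\sin$, whose overall minus sign is absorbed by the prefactor of Eq. (76). A short numerical quadrature confirming the secular linear term (pure-Python sketch, taking $\xi_{a}=0$):

```python
import math

# Trapezoidal quadrature of int f * f'' dxi for f(xi) = sin(xi),
# compared against the closed form behind Eq. (77).
xi_a, xi_end, n = 0.0, 40 * math.pi, 200_000   # 20 wave periods
dx = (xi_end - xi_a) / n

numeric = 0.0
for i in range(n):
    x0 = xi_a + i * dx
    x1 = x0 + dx
    g0 = math.sin(x0) * (-math.sin(x0))   # f * f'' with f'' = -sin
    g1 = math.sin(x1) * (-math.sin(x1))
    numeric += 0.5 * (g0 + g1) * dx

analytic = -((xi_end - xi_a) / 2 - (math.sin(2 * xi_end) - math.sin(2 * xi_a)) / 4)
print(numeric, analytic)  # both close to -20*pi: the integral grows linearly in xi
```

The oscillatory $\sin 2\xi$ piece averages away, while the $(\xi-\xi_{a})/2$ piece keeps accumulating, which is the cumulative effect discussed below.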
The oscillatory part of this expression can be again bounded, this time by
$$|h_{2}(\xi)(aa_{0}\ll 1)|_{\text{osc}}\leq\frac{1}{4}aa_{0}^{2}\frac{\omega|s_{0x}|}{mc^{2}}\approx\frac{1}{4}aa_{0}^{2}\times 10^{-6}\,,$$
(78)
where the value is given for the electron's spin along the $x$-direction, $s_{0x}=\pm\hbar/2$, and 1 eV laser light. This contribution is much smaller than the $z$-direction spin polarization contribution of Eq. (75), because it depends linearly on the value of the anomalous magnetic moment $a$. Again, the oscillations are around zero and do not contribute over many plane wave periods.
The most important part of the function $h_{2}(\xi)$ is the linear term, which keeps accumulating over the interaction with many laser oscillations. The cumulative part is given by the expression
$$h_{2}(\xi)(aa_{0}\ll 1)_{\text{cum}}=aa_{0}^{2}\frac{\omega s_{0x}}{mc^{2}}\frac{\xi-\xi_{a}}{2}\\
=aa_{0}^{2}\frac{\omega s_{0x}}{mc^{2}}\frac{\omega(\tau-\tau_{a})}{2}\,,$$
(79)
where the relationship between the phase and proper time Eq. (33) was used. The plot of the whole function $h_{2}(\xi)$ from Eq. (76) is presented in Figure 4. We clearly see the overall linear trend.
For an electron and 1 eV laser light we have
$$h_{2}(\tau)(aa_{0}\ll 1)_{\text{cum}}\approx aa_{0}^{2}\frac{\omega(\tau-\tau_{a})}{2}\times 10^{-6}\\
=\pi aa_{0}^{2}N\times 10^{-6}\,,$$
(80)
where
$$N=\frac{\omega(\tau-\tau_{a})}{2\pi}$$
(81)
is the number of plane wave oscillations the particle interacted with before leaving the laser beam. This cumulative effect becomes dominant with respect to other contributions from Eqs. (75,78) when
$$N>\frac{2}{\pi}\frac{a_{0}}{a}\,.$$
(82)
For an electron with $a\approx 10^{-3}$ and with laser amplitude $a_{0}=0.1$, this happens after about 65 oscillations. When this condition is satisfied, the whole function $h(\tau)$ can be approximated just by the cumulative term (80).
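The threshold of Eq. (82) is simple arithmetic; a one-line check with the values quoted in the text (treating $a\approx 10^{-3}$ as an order-of-magnitude input for the electron):

```python
import math

a = 1e-3    # electron anomalous magnetic moment (order of magnitude)
a0 = 0.1    # normalized laser amplitude

N_threshold = (2 / math.pi) * (a0 / a)
print(N_threshold)  # roughly 64, consistent with the "about 65 oscillations" quoted above
```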
In the following, we will take $a_{0}\ll 1$, which our $a_{0}=0.1$ roughly satisfies. In such a case, we can approximate the $\gamma$ factor from the zeroth component of Eq. (65) with
$$\gamma(\tau)=1+\frac{1}{2}(h^{2}(\tau)+a_{0}^{2}\psi^{2}(\tau))\approx 1\,,$$
(83)
where we dropped the $h^{2}(\tau)$ and $a_{0}^{2}\psi^{2}(\tau)$ terms, which are negligible in the non-relativistic limit.
In the same limit we neglect the drift motion in the $\hat{z}$-direction. The $a_{0}^{2}\psi^{2}(\tau)$ term corresponds to the intermittent particle acceleration/deceleration by the laser wave front in the direction of the wave vector. The $h^{2}(\tau)$ term is a similar effect induced by the magnetic moment, this time cumulative with an $a^{2}a_{0}^{4}N^{2}$ dependence. See Figure 3 for reference.
In the direction of the polarization ($\hat{x}$), the 4-velocity oscillates due to the interaction of the particle charge with the plane wave. This behavior has already been described in detail in the literature Rafelski2017 ; Esaray1993 and the velocity can be bounded by
$$|\beta_{x}|\leq a_{0}\,.$$
(84)
Although this velocity can be substantial, it does not accumulate, and it does not cause a drift in the trajectory, since the oscillations in velocity are around zero for a particle starting with zero velocity in the $\hat{x}$-direction.
Turning our attention to the magnetic moment contribution to 3-velocity, the drift velocity in the $\hat{y}$-direction can be approximated as
$$\beta_{y}(\tau)\approx h(\tau)\approx aa_{0}^{2}\frac{\omega\tau}{2}\times 10^{-6}\,,$$
(85)
where only the cumulative contribution of Eq. (80) is considered.
The maximum velocity caused by the Lorentz force oscillations Eq. (84) and the Stern-Gerlach drift velocity Eq. (85) become comparable after
$$N\approx\frac{2}{\pi}\frac{10^{6}}{aa_{0}}\approx 10^{10}$$
(86)
oscillations. This would require keeping an electron that was initially at rest within the laser beam for about 20 $\mu$s, a challenging laser beam control task.
Due to the cumulative Stern-Gerlach force, the $\hat{x}$-polarized electron drifts out from the typical laser beam radius $r_{y}=1.5\,\mu\text{m}$ region after about
$$N\approx\sqrt{\frac{\omega r_{y}}{10^{-6}\pi^{2}caa_{0}^{2}}}\approx 3\times 10^{5}$$
(87)
oscillations. During this time it acquires a transverse velocity in the $\hat{y}$-direction of approximately 3,000 m/s and the corresponding laser pulse length is roughly 1 ns.
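The estimates of Eqs. (86) and (87) and the quoted time scales can be reproduced numerically. The sketch below assumes 1 eV laser light, the electron anomaly $a\approx 1.16\times 10^{-3}$, $a_{0}=0.1$, and the beam radius $r_{y}=1.5\,\mu$m used in the text:

```python
import math

hbar = 1.0546e-34   # J*s
eV = 1.602e-19      # J
c = 2.998e8         # m/s

omega = eV / hbar   # angular frequency of 1 eV laser light, rad/s
a = 1.16e-3         # electron anomalous magnetic moment
a0 = 0.1            # normalized laser amplitude
r_y = 1.5e-6        # laser beam radius, m

# Eq. (86): oscillations until the SG drift matches the Lorentz oscillation amplitude
N86 = (2 / math.pi) * 1e6 / (a * a0)
# Eq. (87): oscillations until the SG drift carries the electron out of the beam radius
N87 = math.sqrt(omega * r_y / (1e-6 * math.pi**2 * c * a * a0**2))

T = 2 * math.pi / omega   # one laser period, s
print(f"N86 ~ {N86:.1e}, time {N86 * T * 1e6:.0f} us")   # ~10^10 oscillations, ~20 us
print(f"N87 ~ {N87:.1e}, time {N87 * T * 1e9:.1f} ns")   # ~3e5 oscillations, ~1 ns
```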
The electron bunch is typically randomly polarized along the $\hat{x}$-direction, and the spin projection can classically take any value from $-\hbar/2$ to $\hbar/2$. The magnetic moment interaction described in this paper would result in a beam splitting along the $\hat{y}$-direction.
V Summary, Discussion and Conclusions
In this work, we have added to the understanding of the contribution of the magnetic moment to electron dynamics in the presence of an external EM plane wave field in an analytical fashion. Our classical model differs from the one used by Wen et al. Wen:2017zer , since we avoid introducing a particle mass modification. This model of particle motion when spin is involved was proposed a century ago by Frenkel Frenkel1926 (for a reformulation in modern notation see Ternov1980 ), and a discussion of the spin-dependent mass (even in homogeneous fields) is presented in Kassandrov:2009jd .
We have considered two test cases of experimental relevance, which we can compare with the results of Wen et al. Wen:2017zer , who used the Frenkel mass-modifying Stern-Gerlach force: (a) the motion of a particle traveling along the axis of a current loop in Section III, and (b) the motion of a particle in an EM plane wave in Section IV.
(a) In our approach, motion in the presence of a current loop also leads to Stern-Gerlach trajectory splitting. We showed that electrons polarized in the direction of motion are delayed with respect to electrons with spin against the direction of motion. Our model qualitatively agrees with the classical limit of the DP equation. This result is also consistent with the KGP approach discussed above, as the quantum KGP and DP equations are equivalent in the limit of ‘weak’ external fields.
(b) We have explored the motion of a charged particle in an external EM plane wave field. Previously, we presented an analytical solution for such behavior for neutral particles, where the magnetic moment interaction is a first-order effect Formanek:2019cga . In this work we extend our solution to the case of charged particles and discuss the implications for a particle initially at rest in the laboratory frame, see Section IV.6. We focused on the case of a particle at rest, since this is the situation when a laser shot hits matter at rest. This case can be extended by means of a Lorentz transformation to incorporate another experimentally relevant class of situations: a particle beam moving parallel to the laser pulse.
We showed that for a particle initially polarized in the direction of the plane wave’s polarization, the Stern-Gerlach force pushes the particle in a direction perpendicular to both wave polarization and propagation. This would allow us to spatially separate electrons based on their polarization using a laser.
Any electron beam consisting of particle bunches experiences intrinsic Coulomb repulsion forces in the transverse direction as well. This collective behavior, beyond the dynamics of a single particle, could overshadow the magnetic moment effect. However, the Coulomb effect acts in a radial direction rather than along a plane. For a dedicated study of the spin Stern-Gerlach force, unbunched continuous beams are suggested in order to avoid Coulomb-driven beam spreading.
In this work we have not considered the process of emission of radiation by electrons due to their motion in an external field. The possibility of a spin contribution to the electron radiation has been studied theoretically Khokhonov2 and demonstrated experimentally kirsebom . Here we draw attention to the expression for the invariant acceleration we obtained in Eq. (59): the magnetic moment radiation, expressed through the squared acceleration, can be compared to electric dipole radiation. We see that the magnetic acceleration strength acquires an extra derivative of the light wave, $f^{\prime}\to f^{\prime\prime}$, and a cofactor $\omega/m$. This suggests (since the expression is exact for a plane wave and not for light pulses) that magnetic moment radiation can become comparable in strength to electric dipole radiation for particles within highly singular light pulses.
The domain in which we have explored the Stern-Gerlach force is governed by classical physics criteria, as discussed in Section IV.2. We argued in Ref. Rafelski:2017hce that the ‘magnetic dipole’ charge of a particle is a fundamental property alongside its rest mass and electric charge. We interpret the magnetic moment in terms of the anomaly $a=(g-2)/2$, since the effect we describe depends on $a$. For an electron the anomaly $a_{e}$, the deviation from the Bohr magneton, happens to be small; its magnitude is characterized by the fine structure constant and originates in the well-known Schwinger QED diagram. However, this should not be interpreted as meaning that QED is part of the effects considered here.
That our results have no relation to quantum effects is best recognized by considering, instead of an electron, a proton, i.e., a particle with a magnetic moment that is quite different from the (nuclear) Bohr magneton. In fact we do not expect any QED effects to appear in particle dynamics in the soft field of a continuous beam laser, let alone the cumulative effect we see for the Stern-Gerlach force spin dynamics.
However, it can be anticipated that more intense laser beams will become available, and/or that the physics we developed here will be ported to crystal channeling of electrons and/or protons. Therefore, in the future we would like to extend the magnetic moment interaction to the quantum domain by incorporating the Stern-Gerlach potential into a quantum-mechanical framework. A useful tool on this path would be a semi-classical treatment, which shows great promise for accurately describing ultra-relativistic motion Bagrov ; Wistisen .
The dynamical examples presented demonstrate that electron beam control in some environments requires understanding and incorporation of the magnetic moment interaction due to the Stern-Gerlach type force in particle dynamics. Considering specific laser-particle initial configurations, we have shown that the Stern-Gerlach force due to a plane (laser) wave influences the velocity of charged particles in a cumulative way. This differs from the transverse effect due to the Lorentz force, which primarily causes oscillatory motion. One may wonder whether this effect can be used to measure the anomalous magnetic moment of charged particles. Unlike spin precession experiments, it would use the trajectory modification by the Stern-Gerlach force, but a study of the achievable precision is still required.
To conclude: in order to fully describe the behavior of electrons in external fields, the magnetic moment interaction cannot be neglected. We believe that our results will become relevant whenever electron beam control requires full account of the magnetic moment dynamics.
Acknowledgements. We would like to thank the anonymous referees for presenting several references helping our discussion of the parameters controlling the validity of the classical approach and addressing prior work.
References
(1)
M. Wen, C. H. Keitel and H. Bauke,
“Spin-one-half particles in strong electromagnetic fields: Spin effects and radiation reaction,”Phys. Rev. A 95, no.4, 042102 (2017)
doi:10.1103/PhysRevA.95.042102
(2)
J. Rafelski, M. Formanek and A. Steinmetz,
“Relativistic Dynamics of Point Magnetic Moment,”Eur. Phys. J. C 78, no.1, 6 (2018)
doi:10.1140/epjc/s10052-017-5493-2
(3)
M. Formanek, S. Evans, J. Rafelski, A. Steinmetz and C. T. Yang,
“Strong fields and neutral particle magnetic moment dynamics,”Plasma Phys. Control. Fusion 60, 074006 (2018)
doi:10.1088/1361-6587/aac06a
(4)
M. Formanek, A. Steinmetz and J. Rafelski,
“Classical neutral point particle in linearly polarized EM plane wave field,”Plasma Phys. Control. Fusion 61, no.8, 084006 (2019)
doi:10.1088/1361-6587/ab242e
(5)
R. H. Good, Jr.,
“Classical Equations of Motion for a Polarized Particle in an Electromagnetic Field,”Physical Review, 125(6), 2112 (1962).
(6)
P. Nyborg,
“On Classical Theories of Spinning Particles, ”Nuovo Cimento, 31, 1209 (1962)
(7)
V. G. Bagrov, and V. .A. Bordovitsyn,
“Classical spin theory,”Sov. Phys. J. 23, 128 (1980).
(8)
L. H. Thomas,
“The kinematics of an electron with an axis,”Philos. Mag. Ser. 7(3), 1 (1927).
(9)
V. Bargmann, L. Michel and V. L. Telegdi,
“Precession of the polarization of particles moving in a homogeneous electromagnetic field,”Phys. Rev. Lett. 2, 435-436 (1959)
doi:10.1103/PhysRevLett.2.435
(10)
E. S. Sarachik, and G. T. Schappert,
“Classical theory of the scattering of intense laser radiation by free electrons.”Physical Review D 10, 2738 (1970).
(11)
A. Di Piazza,
“Exact solution of the landau-lifshitz equation in a plane wave,”Lett. Math. Phys., 83, 305-13 (2008)
(12)
Y. Hadad, L. Labun, J. Rafelski, N. Elkina, C. Klier and H. Ruhl,
“Effects of Radiation-Reaction in Relativistic Laser Acceleration,”Phys. Rev. D, 82, 096012 (2010)
doi:10.1103/PhysRevD.82.096012
(13)
J. S. Schwinger,
“Spin precession - a dynamical discussion,”Am. J. Phys., 42, 510 (1974)
(14)
J. D. Jackson,
Classical Electrodynamics, Third Edition,
John Wiley & Sons, Inc., Hoboken, N.J. (1999).
(15)
G. A. Mourou, T. Tajima, and S. V. Bulanov,
“Optics in the relativistic regime,”Rev. Mod. Phys., 78, 309 (2006).
(16)
L. L. Foldy and S. A. Wouthuysen,
“On the Dirac theory of spin 1/2 particles and its non-relativistic limit,”Physical Review, 78(1), p. 29. (1950)
(17)
A. J. Silenko,
“Foldy-Wouthyusen Transformation and Semiclassical Limit for Relativistic Particles in Strong External Fields,”Phys. Rev. A 77, 012116 (2008)
doi:10.1103/PhysRevA.77.012116
(18)
A. Steinmetz, M. Formanek and J. Rafelski,
“Magnetic Dipole Moment in Relativistic Quantum Mechanics,”Eur. Phys. J. A 55, no.3, 40 (2019)
doi:10.1140/epja/i2019-12715-5
(19)
A. Kh. Khokonov, and M. Kh. Khokonov,
“Classification of the Interactions of Relativistic Electrons with Laser Radiation,”Tech. Phys. Lett. 31(2), 154-156 (2005).
(20)
V. I. Ritus,
“Quantum effects of the interaction of elementary particles with an intense electromagnetic field,”J. of Sov. Laser Research, 6(5), 497-617 (1985).
(21)
U. I. Uggerhøj,
“The interaction of relativistic particles with strong crystalline fields,”Rev. Mod. Phys. 77(4), 1131 (2005).
(22)
C. Itzykson and J. B. Zuber,
Quantum Field Theory,
Dover, Mineola, N.Y. (2005).
(23)
J. Rafelski,
Relativity Matters: From Einstein’s EMC2 to Laser Particle Acceleration and Quark-Gluon Plasma,
Springer, Heidelberg, Germany (2017).
(24)
E. Esarey,
“Nonlinear Thomson scattering of intense laser pulses from beams and plasmas,”Phys. Rev. E, 48(4), 3003 (1993).
(25)
J. Frenkel,
“The dynamics of spinning electron,”Z. Phys. 37, 243 (1926).
(26)
I. M. Ternov and V. A. Bordovitsyn,
“Modern interpretation of J. I. Frenkel’s classical spin theory,”Sov. Phys. Usp. 23, 679 (1980)
(27)
V. Kassandrov, N. Markova, G. Schaefer, and A. Wipf,
“On the model of a classical relativistic particle of unit mass and spin,”J. Phys. A 42, 315204 (2009)
doi:10.1088/1751-8113/42/31/315204
(28)
A. Kh. Khokhonov, M. Kh. Khokhonov, and A. A. Kizdermishov,
“Possibility of Generating High-Energy Photons by Ultrarelativistic Electrons in the Field of a Terrawatt Laser and in Crystals,”Tech. Phys. 47(11), 1413 (2002).
(29)
K. Kirsebom, U. Mikkelsen, E. Uggerhøj, K. Elsener, S. Ballestrero, P. Sona, and Z. Z. Vilakazi,
“First measurements of the unique influence of spin on the energy loss of ultrarelativistic electrons in strong electromagnetic fields,”Phys. Rev. Lett. 87(5), 054801 (2001).
(30)
V. G. Bagrov, V. V. Belov, and A. Yu. Trifonov,
“Theory of spontaneous radiation by electrons in trajectory-coherent approximation,”J. Phys. A 26, 6341 (1993).
(31)
T. N. Wistisen,
“Interference effect in nonlinear Compton scattering,”Phys. Rev. D 90(12), 125008 (2014). |
The SimbolX view of the unresolved X–ray background
A. Comastri (1) ([email protected]),
R. Gilli (1) ([email protected]),
F. Fiore (2),
C. Vignali (3,1) ([email protected]),
R. Della Ceca (4),
G. Malaguti (5)
(1) INAF – Osservatorio Astronomico di Bologna, Via Ranzani 1, I-40127 Bologna, Italy
(2) INAF – Osservatorio Astronomico di Roma, Via Frascati 33, I-00040 Monteporzio Catone (RM), Italy
(3) Dipartimento di Astronomia, Università di Bologna, Via Ranzani 1, I-40127 Bologna, Italy
(4) INAF – Osservatorio Astronomico di Brera, Via Brera 28, I-20121 Milano, Italy
(5) INAF–IASF, via Gobetti 101, I-40129 Bologna, Italy
Abstract
We briefly discuss the importance of sensitive X–ray observations above 10 keV for a better understanding of the physical mechanisms associated with the primary emission of Supermassive Black Holes and of the cosmological evolution of the most obscured Active Galactic Nuclei.
keywords: Galaxies: active – X-rays – Cosmology: observations
1 Introduction
The fraction of the hard X-ray background (XRB) resolved into
discrete sources by deep Chandra and XMM-Newton observations
smoothly decreases from practically 100% below 2–3 keV
to about 50% in the 6–10 keV energy range (Worsley et al. 2005),
becoming negligible at energies above 10 keV, where the bulk of the
energy density is produced.
The shape and intensity of the unresolved XRB calculated by Worsley et al. (2005) are well described by a “peaky” spectrum similar to that computed
by folding the average spectrum of heavily obscured ($N_{H}>10^{23}$ cm${}^{-2}$)
and Compton Thick ($N_{H}>10^{24}$ cm${}^{-2}$; hereinafter CT) AGN over the
redshift range $z\sim$ 0.5–1.5 (e.g., Comastri 2004a).
The search for and characterization of this population has become increasingly important in recent years and represents the last obstacle to a complete census of accreting Supermassive Black Holes (SMBHs).
According to the most recent version of XRB synthesis models (Gilli et al. 2007), heavily obscured and CT AGN are thought to be numerically as relevant as less obscured Compton-thin AGN. As a consequence, their impact on the astrophysics and evolution of the AGN population as a whole cannot be neglected.
It is convenient to consider absorption in the CT regime in two
classes: mildly and heavily CT (see Comastri 2004b).
The column density of the absorbing gas in the former class corresponds to an optical depth for Compton scattering just above unity, while for the latter it is much larger than unity.
The primary radiation in mildly CT AGN is able to penetrate the
obscuring gas and is visible above $\sim$ 10 keV, while Compton
down-scattering mimics absorption over the
entire energy range for heavily CT AGN.
As far as the XRB is concerned, the most relevant contribution is expected to come from mildly CT AGN, which should be as bright as unobscured AGN above 10–15 keV. Though the flux of their primary emission is strongly depressed, heavily CT AGN may also provide some contribution to the XRB, which depends on their scattering/reflection efficiency, most likely correlated with the geometry of the obscuring gas.
At present, the best estimates of the basic properties of mildly
CT AGN rely on the observations obtained with the PDS
instrument onboard BeppoSAX (Risaliti et al. 1999;
Guainazzi et al. 2005). CT absorption, at
least among nearby AGN, appears to be rather common.
The relative fraction depends on the adopted selection criterion and can be as high as 50–60% for optically selected Seyfert 2 galaxies.
More recently, BAT/Swift (Ajello et al. 2007)
and IBIS/INTEGRAL (e.g., Bird et al. 2007) have surveyed
the hard X–ray sky.
Although these surveys have provided relatively large numbers of
hard X–ray sources (e.g., Krivonos et al. 2007), they are limited to
bright fluxes ($S_{10-100keV}>10^{-11}$ erg cm${}^{-2}$ s${}^{-1}$),
thus sampling only the very local Universe. As a consequence, the resolved
fraction of the hard ($>$ 10 keV) XRB is of the order of a few
percent (Sazonov et al. 2007).
Assuming the absorption distribution determined in the
local Universe and folding it
with cosmological evolution in AGN synthesis models, it is
possible to predict the
relative number of CT AGN in a purely hard X–ray selected sample.
Though the error bars suffer from small number statistics,
there is a fairly
good agreement with the model predictions (Figure 1), implying that CT
absorption should be common also
at high redshift and luminosities. However, only a few CT AGN
are known beyond the local Universe (e.g. Comastri 2004b;
Della Ceca et al. 2007).
Even the deepest X–ray
surveys in the 2–10 keV band (e.g., Tozzi et al. 2006;
Georgantopoulos et al. 2007)
uncovered only a dozen candidate heavily obscured CT AGN, which await better spectroscopic X–ray data for confirmation.
It has recently been suggested that selection via mid-IR and optical colors is a promising way to pick up high-$z$ heavily obscured AGN (Fiore et al. 2007a; Daddi et al. 2007). Stacking the Chandra counts of sources selected on the basis of mid-infrared (24$\mu$m) excess emission with respect to the near-infrared/optical flux, a strong signal is revealed in the hard band (up to 4–6 keV), which implies, at the average source redshift ($z$$\sim$2), $N_{H}$$\sim 10^{24}$ cm${}^{-2}$.
Despite the similarities in the source selection and stacking techniques, the estimated space densities of the candidate high-$z$ CT AGN differ by up to a factor of 3–5. The difference can be ascribed at least in part to the slightly different luminosity ranges sampled.
According to Fiore et al. (2007a) the “missing” population of luminous
($L_{X}>10^{44}$ erg s${}^{-1}$) candidate CT AGN at $z\sim$ 2 would
have the right size to explain the
unresolved XRB, while following Daddi et al. (2007) it may be
significantly more numerous than predicted by synthesis models especially
at low ($L_{X}<10^{43}$ erg s${}^{-1}$) luminosities.
The above estimates are affected by large uncertainties; however, they strongly point towards the existence of a sizable population of accreting, heavily obscured SMBHs at high redshift.
The direct detection of the CT AGN population over the redshift range most densely populated by the less obscured sources contributing to the XRB ($z\sim$ 0.7–1) bears important implications for our understanding of SMBH evolution. Moreover, a reliable determination of their luminosity function is a key ingredient in reconciling the relic SMBH mass function with the local one estimated through the $M_{BH}-\sigma$ and $M_{BH}-M_{bulge}$ relationships and the galaxy luminosity function (Marconi et al. 2004; Merloni et al. 2004).
2 Current hard X–ray observations
The expected fraction of CT AGN in the 15–200 keV band as a function of flux is reported in Figure 1. The advantage of the hard X–ray selection with respect to the 2–10 keV band is evident.
The INTEGRAL and Swift all sky hard X–ray surveys have provided the first
flux limited samples which can be compared with model predictions.
There is fairly good agreement, but the statistics are still dominated by small-number fluctuations. Moreover, the CT nature of the Swift and INTEGRAL sources is assessed by combining low signal-to-noise-ratio hard X–ray ($>$ 10 keV) spectra with observations at lower energies, mainly with Chandra and XMM–Newton.
Follow–up observations of candidate CT AGN from the surveys described above
were performed with Suzaku. The hard X–ray detector and the XIS CCD camera
onboard the Japanese satellite are sensitive over a broad energy range
($\sim$ 0.5–60 keV) and return good quality spectra for relatively bright
hard X–ray sources.
Interestingly enough, Suzaku follow–up observations
of hard X–ray selected INTEGRAL and Swift sources (Ueda et al. 2007;
Comastri et al. 2007),
only barely detected
below 10 keV, suggest that a population of extremely hard sources,
appearing only above 5–6 keV, may have escaped detection by surveys at lower energies.
Their high-energy spectrum ($>$ 5–10 keV) is dominated by a strong reflection component from optically thick material, while there is no evidence for the soft X–ray component, presumably due to scattering of the nuclear radiation, that is common among Seyfert 2 galaxies.
A possible explanation is that the geometrical distribution of the obscuring/reflecting gas is different from that of known Type 2 AGN.
By requiring that their integrated emissivity does not exceed the XRB level and its associated uncertainty at 30 keV (e.g., Churazov et al. 2007; Frontera et al. 2007), and assuming the same evolution as X–ray selected AGN (e.g., Hasinger et al. 2005; La Franca et al. 2005), it is, in principle, possible to constrain their number density.
The hypothetical population of reflection-dominated AGN could be up to a factor of 4 more numerous than CT AGN (Figure 2).
Such an estimate is highly uncertain and strongly depends upon the assumption of a
reflection dominated spectrum.
The bottom line of such an exercise is that there might be room for a
previously unknown population of obscured AGN emerging only above 5–10 keV.
3 Future imaging X–ray surveys
Imaging observations above 10 keV will offer a unique opportunity
to address at least some of the issues mentioned above. The expected SimbolX
capabilities should make possible the detection of a statistically
significant sample of heavily obscured and CT AGN up to $z\sim$ 1.
The expected quality of the spectral data over the range of fluxes accessible to
SimbolX is extensively discussed in a companion paper (Della Ceca et al. 2007).
The expected number counts in the 10–40 keV band (Figure 3) are obtained by extrapolating
the spectral energy and absorption distributions of the Gilli et al. (2007)
synthesis model.
The hard X–ray source surface density keeps a steep (close to Euclidean) slope
down to the expected sensitivity limits of SimbolX observations
($\sim 10^{-14}$ erg cm${}^{-2}$ s${}^{-1}$)
in the 10–40 keV band.
More specifically, a few hundred CT AGN
per square degree are predicted at the limits of a deep survey.
Scaling the estimated space densities to the SimbolX
field of view, our estimates
translate into a few up to about ten CT AGN in a deep (1 Msec) pointing.
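The scaling above is a simple solid-angle conversion, sketched below in Python. Both input numbers are illustrative assumptions of ours (a surface density of $\sim$300 deg${}^{-2}$ and a 13x13 arcmin field of view), not SimbolX specifications.

```python
# Illustrative solid-angle scaling: surface density (deg^-2) times the
# field of view (converted from arcmin^2 to deg^2) gives the expected
# number of sources per pointing. Both inputs are assumed values.
ARCMIN2_PER_DEG2 = 3600.0

def agn_per_pointing(surface_density_deg2, fov_arcmin2):
    """Expected number of sources in one field of view."""
    return surface_density_deg2 * fov_arcmin2 / ARCMIN2_PER_DEG2

# ~300 CT AGN per square degree over an assumed 13x13 arcmin field:
n = agn_per_pointing(300.0, 13.0 * 13.0)
print(round(n, 1))  # 14.1, i.e. of order ten per deep pointing
```

The result is insensitive to the exact field-of-view shape; only its solid angle enters.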
Taking into account the off-axis sensitivity and the fact that the faintest
sources will be detected with a number of counts insufficient for X–ray spectral analysis,
the search for the most obscured AGN cannot be pursued only with
deep pointings.
A comprehensive understanding of the physics and evolution of distant and
obscured AGN, as well as of different classes of cosmic sources, requires
a close synergy between deep and wide area surveys.
Indeed, the evolution
of X–ray selected AGN was determined by a large number of surveys which
have extensively sampled the solid angle vs. depth discovery space
(see Fig. 1 of Brandt & Hasinger 2005).
A possible strategy is discussed in the Fiore et al. (2007b) companion paper.
The predicted redshift distribution of CT AGN at three different limiting
fluxes is reported in Figure 4. Not surprisingly, the peak moves to higher redshifts
as the sensitivity increases.
A pronounced high redshift ($z\sim$ 0.5–2) tail
starts to develop at fluxes around 10${}^{-13}$ erg cm${}^{-2}$ s${}^{-1}$ and lower.
As far as the search for obscured accretion at high redshift is concerned,
the best trade–off would be to maximize the area covered close
to these fluxes.
4 Conclusions
Deep SimbolX surveys will be able to resolve about half of the XRB in the 10–40
keV energy range (Figure 5). This achievement will represent a major step forward.
In fact, at present, less than a few percent of the XRB in that band is resolved.
A comparison with an extrapolation of an AGN synthesis model (Comastri 2004a)
tuned to reproduce the resolved background below 10 keV is reported in Fig 5.
Although the level of the XRB predicted to be resolved by SimbolX
is close to the model extrapolation, it will be possible, with the present configuration,
to probe the entire range of absorption of the XRB sources.
Several “new” CT AGN (see Fiore et al. 2007b), which are undetected
even in the deepest XMM–Newton and Chandra surveys (Fig. 1), will be revealed.
Moreover, given that the fraction of CT AGN is expected to increase steeply just below
$\sim 10^{-14}$ erg cm${}^{-2}$ s${}^{-1}$, it would be highly rewarding
to push the survey to somewhat lower limiting fluxes.
It is worth remarking that the resolution of a significant fraction
of the XRB has always brought significant advances in our understanding of AGN evolution.
ROSAT deep surveys showed that luminous unobscured quasars at high redshift ($z\sim$ 1.5–2)
were responsible for most of the soft (around 1 keV) XRB (Lehmann et al. 2001).
Later on, thanks to Chandra and XMM–Newton
deep surveys, it was demonstrated that the bulk of the XRB at least up to 5–6 keV is originating
in relatively low luminosity sources, most of them obscured, at $z\sim$ 1 (Brandt & Hasinger 2005).
While we expect to uncover the so far elusive population of CT AGN up to relatively
high redshift, it may well be possible that the content of the X–ray sky
above $\sim$ 10 keV is different from what is predicted.
We look forward to new unexpected findings which could be obtained by pushing imaging
observations in the so far unexplored hard X–ray band.
Acknowledgements.
We acknowledge financial contribution from contracts ASI–INAF I/023/05/0, ASI–INAF I/088/06/0
and PRIN–MUR grant 2006–02–5203.
References
Ajello, M., et al. 2007, \apj, in press (arXiv:0709.4333)
Bird, A.J., et al. 2007, \aaps, 170, 175
Brandt, W.N., & Hasinger, G. 2005, \araa, 43, 827
Churazov, E., et al. 2007, \aap, 467, 529
Comastri, A. 2004a, in Multiwavelength AGN Surveys, proceedings of the Guillermo Haro
Conference held December 8–12, 2003, in Cozumel, Mexico, ed. R. Mujica & R. Maiolino
(Singapore: World Scientific), p. 323
Comastri, A. 2004b, in “Supermassive Black Holes in the Distant Universe”,
Astrophysics and Space Science Library, 308, p. 245
Comastri, A., Gilli, R., Vignali, C., et al. 2007,
in “The Extreme Universe in the Suzaku Era”, in press (arXiv:0704.1253)
Daddi, E., et al. 2007, \apj, in press (arXiv:0705.2832)
Della Ceca, R., et al. 2007, these proceedings (arXiv:0709.3060)
Fiore, F., et al. 2007a, \apj, in press (arXiv:0705.2864)
Fiore, F., et al. 2007b, these proceedings
Frontera, F., et al. 2007, \apj, 666, 86
Georgantopoulos, I., Georgakakis, A., & Akylas, A. 2007, \aap, 466, 823
Gilli, R., Comastri, A., & Hasinger, G. 2007, \aap, 463, 73
Gruber, D.E., et al. 1999, \apj, 520, 124
Guainazzi, M., Matt, G., & Perola, G.C. 2005, \aap, 444, 119
Hasinger, G., Miyaji, T., & Schmidt, M. 2005, \aap, 441, 417
Krivonos, R., et al. 2007, \aap, submitted (arXiv:astro-ph/0701836)
La Franca, F., Fiore, F., Comastri, A., et al. 2005, \apj, 635, 864
Lehmann, I., et al. 2001, \aap, 371, 833
Marconi, A., Risaliti, G., Gilli, R., et al. 2004, \mnras, 351, 169
Markwardt, C.B., et al. 2005, \apj, 633, L77
Merloni, A., Rudnick, G., & Di Matteo, T. 2004, \mnras, 354, L37
Risaliti, G., Maiolino, R., & Salvati, M. 1999, \apj, 522, 157
Revnivtsev, M., et al. 2003, \apj, 522, 157
Sazonov, S., et al. 2007, \aap, submitted (arXiv:0708.3215)
Tozzi, P., et al. 2006, \aap, 451, 457
Ueda, Y., et al. 2007, \apj, 664, L79
Worsley, M.A., et al. 2004, \mnras, 352, L28
STIS Longslit Spectroscopy of the Narrow Line Region of NGC 4151.
I. Kinematics and Emission Line Ratios [1]
C. H. Nelson [2], D. Weistrop [2], J. B. Hutchings [3], D. M. Crenshaw [4],
T. R. Gull [5], M. E. Kaiser [6], S. B. Kraemer [4], D. Lindler [5]
[1] Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the
Space Telescope Science Institute, which is operated by AURA, Inc., under NASA
contract NAS5-26555.
[2] Physics Dept., University of Nevada, Las Vegas, Box 4002, 4505 Maryland Pkwy.,
Las Vegas, NV 89154; [email protected], [email protected]
[3] Dominion Astrophysical Observatory, National Research Council of Canada,
5071 W. Saanich Rd., Victoria, B.C. V8X 4M6, Canada
[4] Catholic University of America, NASA/Goddard Space Flight Center,
Code 681, Greenbelt, MD 20771
[5] NASA/Goddard Space Flight Center, Code 681, Greenbelt, MD 20771
[6] Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218
Abstract
Longslit spectra of the Seyfert galaxy NGC 4151 from the UV to near
infrared have been obtained with STIS to study the kinematics and
physical conditions in the NLR. The kinematics show evidence for
three components: a low velocity system in normal disk rotation, a
high velocity system in radial outflow at a few hundred km s${}^{-1}$ relative
to the systemic velocity, and an additional high velocity system also
in outflow with velocities up to 1400 km s${}^{-1}$, in agreement with results
from STIS slitless spectroscopy (Hutchings et al. 1998; Kaiser
et al. 1999; Hutchings et al. 1999). We have explored two
simple kinematic models and suggest that radial outflow in the form of
a wind is the most likely explanation. We also present evidence indicating
that the wind may be decelerating with distance from the nucleus.
We find that the emission line ratios along our slits are all entirely
consistent with photoionization from the nuclear continuum source. A
decrease in the [OIII] $\lambda$5007 / H$\beta$ and [OIII] $\lambda$5007 / [OII] $\lambda$3727 ratios suggests that the
density decreases with distance from the nucleus. This trend is borne
out by the [SII] ratios as well. We find no strong evidence for
interaction between the radio jet and the NLR gas in either the
kinematics or the emission line ratios in agreement with the results
of Kaiser et al. (1999) who find no spatial coincidence of NLR
clouds and knots in the radio jet. These results are in contrast to
other recent studies of nearby AGN which find evidence for significant
interaction between the radio source and the NLR gas.
Subject headings: galaxies: Seyfert; galaxies: individual (NGC 4151);
galaxies: kinematics and dynamics; line: formation
1 Introduction
Since the launch of the Hubble Space Telescope (HST) many
imaging studies of the Narrow Line Regions (NLR) of active galactic
nuclei (AGN) have been carried out. These studies have shown that the
emission line gas often has a complex morphology, frequently taking the
form of a bicone centered on the galaxy nucleus (e.g. NGC 4151:
Evans et al. 1993, Boksenberg et al. 1995; NGC 1068:
Evans et al. 1991; see also the archival study by Schmitt &
Kinney 1996). In the standard model for an AGN, a dense molecular
torus with a radius of a few parsecs surrounds the nucleus and
collimates the radiation field (e.g. Antonucci 1993).
According to the model, differences in the continuum and emission line
spectra, which form the basis for classification of Seyferts and
other types of AGN, can be explained largely by differences in the
orientation of the torus to our line of sight. For example, in type 1
Seyfert galaxies our viewing angle is close to the symmetry axis of
the torus allowing a direct view of the Broad Line Region (BLR) and
the nuclear continuum source, while in type 2 Seyfert galaxies our
vantage point lies closer to the plane of the torus which then blocks
a direct view of the inner regions.
In many instances the NLR morphology and kinematics appear
closely linked to the radio structure, particularly in Seyfert
galaxies with linear or jet-like radio sources. In these objects the
line emitting gas is often found to be cospatial with the radio jets
and there is also kinematic evidence for physical interaction between
the jets and the NLR gas (Capetti et al. 1999, Whittle et
al., 1988). The suggestion has been made that expansion of the radio
plasma into the host galaxy’s interstellar medium produces fast shock
waves which emit a hard continuum and ultimately provide the dominant
source of ionizing photons (Taylor, Dyson, & Axon, 1992, Sutherland,
Bicknell & Dopita, 1993).
The degree to which photoionization by a nuclear continuum source or
by autoionizing shocks contributes to the overall energetics of the
NLR has been the subject of some debate. In principle one can
distinguish between them spectroscopically by studying the spatially
resolved kinematics and the physical conditions of the gas as revealed
by the relative intensities of specific emission lines. The Space
Telescope Imaging Spectrograph (STIS) is ideally suited for this type
of study. We have therefore undertaken a detailed investigation of the
kinematics and physical conditions across the NLR of NGC 4151, one of
the nearest Seyfert galaxies.
Evidence for outflow and photo-ionization cones in the NLR of
NGC 4151 was presented by Schulz (1988, 1990) based on ground-based
longslit spectroscopy. Peculiar flat-topped and double-peaked emission line
profiles were observed to the SE and NW between 2${}^{\prime\prime}$ and 6${}^{\prime\prime}$ from the nucleus and are most consistent with outflow models. Schulz (1990)
suggests that the outflow is driven either by a wind related to the
active nucleus or by an expanding radio plasmon.
The NLR kinematics in NGC 4151 have been studied in detail using
slitless spectroscopy from STIS (Hutchings et al., 1998, Kaiser
et al., 1999, and Hutchings et al., 1999). These
observations reveal three distinct kinematic components: one
consisting of low velocity clouds ($|V-V_{sys}|\sim 100$ km s${}^{-1}$ ),
primarily in the outer NLR following the rotation of the host galaxy
disk, a second consisting of moderately high velocity clouds
($|V-V_{sys}|\geq 400$ km s${}^{-1}$ ) most likely associated with radial
outflow within the biconical morphology and a third component of
fainter but much higher velocity clouds ($|V-V_{sys}|\sim 1400$ km s${}^{-1}$ )
which is also outflowing but not restricted to the biconical flow of
the intermediate velocity component. No evidence for higher
velocities in the vicinity of the radio knots was found suggesting
that the radio jet has minimal influence on the NLR kinematics.
A somewhat different conclusion was drawn by Winge et al. (1999)
primarily using longslit spectroscopy with HST’s Faint Object
Camera. They claim evidence for strong interaction between the radio
jet and the NLR gas. Furthermore, after subtracting the influence of
the radio jet and galaxy rotation on the kinematics, they suggest that
the residual motion is the rotation of a thin disk of gas on nearly
Keplerian orbits beyond 0${}^{\prime\prime}\!\!$.5 (60 pc using their linear scale)
around an enclosed mass of $\rm 10^{9}M_{\odot}$. Interior to 60 pc the
velocities turn over suggesting that the mass is extended, and, if
their interpretation is correct, they are able to place upper limits
on the mass of a nuclear black hole of $\sim 5\times 10^{7}\rm M_{\odot}$.
In this paper we present the initial results from our low resolution,
longslit spectroscopy. A second paper presents a detailed
photoionization model using the emission line ratios presented here
(Kraemer et al. 1999, Paper II). Section 2 presents the
observations and describes the data reduction procedures including
correction for scattered light from the Seyfert nucleus. Section
3 describes the results of the kinematic and preliminary line
ratio analyses. In section 4 we discuss the results in terms
of different NLR models. We summarize our results and conclusions in
section 5.
2 Observations and Data Reduction
Longslit spectroscopy of NGC 4151 was obtained with STIS on board HST.
Four low dispersion gratings, G140L, G230LB, G430L and G750L, were
used producing spectra ranging from the UV at 1150 Å to the
near-infrared at 10,270 Å. Note that the G230LB mode, which uses the
CCD detector, was used instead of the G230L, due to the bright object
protection limits imposed on use of the MAMA detectors. Two slit
alignments were chosen to cover regions of specific interest and as
many of the bright emission line clouds as possible. The first
position was chosen to pass through the nucleus at position angle
221${}^{\circ}$, while the second was offset from the nucleus by 0${}^{\prime\prime}\!\!$.1 to
the south at position angle 70${}^{\circ}$. Figure 1 shows the slit
apertures drawn on the WFPC-2 narrow band image of the [OIII] $\lambda$5007 emission
line structure obtained from the HST archives (proposal ID 5124,
principal investigator H. Ford). The 0${}^{\prime\prime}\!\!$.1 slit was used to
preserve spectral resolution, given here for each of the four gratings
assuming an extended source (the emission line clouds are generally
resolved along the slit): 2.4 Å for G140L, 2.7 Å for G230LB,
5.5 Å for G430L, and 9.8 Å for G750L (Woodgate et al.
1998, Kimble et al. 1998, Baum et al. 1998). A log of the
observations is presented in Table 1. One set of observations failed
and as a result no G140L spectrum was available for P.A. 70${}^{\circ}$.
The spectra were reduced using the IDL software developed at NASA’s
Goddard Space Flight Center for the Instrument Definition Team
(Lindler et al. 1998). Cosmic ray hits were identified and
removed from observations using the CCD detector (G230LB, G430L, and
G750L) by combining the multiple images obtained at each visit in each
spectroscopic mode. Hot or warm pixels (identified in STIS dark
images) were replaced by interpolation in the dispersion
direction. Wavelength calibration exposures obtained after each
science observation were used to correct the wavelength scale for
zero-point shifts. The spectra were also geometrically rectified and
flux-calibrated to produce a constant wavelength along each column
(the spatial direction) and fluxes in units of ergs s${}^{-1}$ cm${}^{-2}$
Å${}^{-1}$ per cross-dispersion pixel. Spectra obtained at the same
position angle and spectroscopic mode were combined to increase the
signal-to-noise ratios.
The bright, unresolved Seyfert nucleus of NGC 4151 creates a number of
difficulties when trying to examine emission lines from the NLR close
in. Scattered light, largely from Airy rings imaged on the slit,
causes features of the nuclear point source spectrum to be
superimposed on fainter NLR features. These follow linear tracks
running nearly parallel to the dispersion, diverging slightly with
wavelength and can be detected as much as 20$-$30 pixels from the
nucleus (Bowers & Baum, 1998). This is a particularly difficult
problem for measuring the Balmer lines in the NLR since the BLR lines
are strong and often have peculiar shapes which can influence the
continuum placement if not subtracted properly. In addition, the
extended halo of the PSF must be modeled and subtracted. Furthermore,
reflection of the bright nucleus in the CCD modes appears as a ghost
spectrum, which is displaced from the nucleus in both the dispersion
and spatial directions. Several techniques were used to remove these
effects.
Corrections for scattered light in the spectra were applied in the
following order: 1) removal of the reflection spectrum (in the CCD
spectra), 2) correction for the halo, and 3) removal of the remaining
PSF, including the diffraction-ring tracks. The reflection spectrum is
not only shifted in both directions, it is broadened in the spatial
direction, compressed in the dispersion direction, and altered in
intensity as a function of wavelength (it tends to be redder than the
nuclear spectrum). The reflection in each original spectral image was
isolated by subtracting the scattered-light at the same spatial
distances on the other side of the nuclear spectrum. Then the nuclear
spectrum was shifted along the slit and compressed in the dispersion
direction until the strong emission features matched those in the
reflection. It was then divided into the observed reflection to obtain
the large-scale intensity variations in both dispersion and spatial
directions. These variations were fitted in wavelength regions that do
not contain extended emission, in both directions with low-order
splines. The fits were then multiplied by the altered nuclear spectrum
to produce a model of the reflection which was subtracted from the
original spectral image. A circularly-symmetric halo was adopted from
previous work on the STIS detectors (Lindler 1999), and collapsed to
match the observed PSF in the spatial direction (obtained by adding
regions along the dispersion direction that do not contain extended
emission). The halo function was adjusted at various radial positions
until a reasonable match was obtained with the broad-scale profile of
the PSF (i.e., ignoring diffraction tracks, etc.). The halo was then
deconvolved from the original image using an iterative technique that
removes flux from the halo and places it in the core.
To remove the remaining scattered light, a scattering template was
constructed using archival observations of stars observed with the
same grating and slit width. First, the template spectrum was
normalized in the dispersion direction by dividing through by the
spectrum summed along the slit. Next, the template was smoothed in the
dispersion direction, using a median filter with a 50 pixel wide
window. The nuclear spectrum of NGC 4151 was then multiplied into the
template to simulate the scattered light spectrum. The scattering
subtracted spectra are clean of broad line emission as close as 4
pixels from the nucleus. Because the nuclear H$\alpha$ line in the
G750L spectrum at P.A. 221${}^{\circ}$ is saturated, the true line profile is
distorted and complicates construction of the scattering template. A
substitute for the saturated profile was obtained from the G750M short
exposure in our slitless spectroscopy with good results.
An alternative approach was also applied which used the structure
along the slit in a continuum region of the NGC 4151 spectrum itself
to form the model template. First, the entire image was normalized by
dividing each row (which lies along the dispersion direction) by the
summed nuclear spectrum from the central four rows. A spline
(typically of order 11) was then fitted along each row in regions that
do not contain emission lines. Thus the fit is a model of the
scattering as a function of wavelength and position along the slit for
a point source spectrum of constant flux per unit wavelength. This
procedure was effective in modeling the diffraction tracks as well as
the overall PSF. The spline fits were then multiplied by the nuclear
spectrum at each spatial position, and subtracted from the reflection-
and halo-corrected image to produce a final corrected image, which was
used for subsequent analysis.
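The row-by-row model just described can be sketched in a few lines of numpy. This is a minimal illustration under our own naming, with a low-order polynomial standing in for the order-11 splines of the actual reduction:

```python
import numpy as np

def scattered_light_model(image, nuclear, line_free_cols, deg=5):
    """Model the scattered nuclear light in a longslit spectral image.

    image          : 2-D array, rows = spatial axis, cols = dispersion axis
    nuclear        : 1-D summed nuclear spectrum (one value per column)
    line_free_cols : boolean mask of columns free of extended emission
    """
    norm = image / nuclear            # scattering per unit nuclear flux
    x = np.arange(image.shape[1])
    model = np.empty_like(image)
    for i, row in enumerate(norm):
        # fit each spatial row only where it is pure scattered light
        coef = np.polyfit(x[line_free_cols], row[line_free_cols], deg)
        model[i] = np.polyval(coef, x)
    return model * nuclear            # back to flux units
```

Subtracting the returned model from the image leaves only the extended line emission.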
The resulting spectra are shown in Figure 2 and Figure
3. The corrections bring out the structure in the bright
lines, and allow us to see fainter lines that are not evident in the
original spectra. The correction process was not perfect, as evidenced
by the faint structure seen in the regions above and below the strong
nuclear lines, particularly H$\alpha$ $\lambda 6563$. However, these
problems are minor, and the contaminating effects of nuclear
absorption and emission were removed well enough for accurate
measurement of the extended emission, even very close to the nucleus.
Although our primary interest is the NLR, the data set also
contains high quality nuclear spectra of NGC 4151 at two epochs.
Monitoring campaigns have shown pronounced variability in both the
nuclear continuum and BLR emission (Robinson et al., 1994,
Crenshaw et al., 1996, Warwick et al., 1996, Kaspi et
al., 1996, Ulrich et al., 1997, Weymann et al., 1997,
Peterson et al. 1998). Over a time interval of 33 days
(Jan. 8, 1998 and Feb. 10, 1998; see Table 1) the nuclear continuum
dropped by 17% at 3050 Å and 10% at 6924 Å decreasing
monotonically between these two wavelengths. This degree of variation
is consistent with that reported in short timescale variability
studies (Kaspi et al. 1996). The variation of the BLR emission lines is
less pronounced than that of the continuum, with H$\gamma$ and H$\beta$ showing a decrease in flux, while the change in the H$\alpha$ + [N II] line
profile is more difficult to evaluate. The absorption lines in our
far-UV spectrum are similar to those in the FOS spectra published by
Weymann et al. (1997), but our spectrum is at too low a resolution for
comparison to the high resolution GHRS spectrum.
3 Analysis
3.1 Measurement of Line Fluxes and Component Deblending
Emission line fluxes and their errors were measured along the slit in
each spectral range for a total of 45 emission lines. Individual
spectra were extracted from the longslit spectra by summing along the
slit. The size of the extraction bins was dictated by the need for
reasonably accurate fluxes for the He II $\lambda$1640 and
$\lambda$4686 lines, which were used for the reddening corrections in
Paper II. Experimentation revealed that bin lengths of 0${}^{\prime\prime}\!\!$.2 (4
CCD pixels, 8 MAMA pixels) within the inner $\pm$1${}^{\prime\prime}$ and 0${}^{\prime\prime}\!\!$.4
outside this region would provide reasonable signal-to-noise ratios
for these lines, and still isolate the emission-line clouds that we
identified in our earlier papers. In some cases slightly different bin
sizes were used to isolate individual clouds or to increase the
signal-to-noise ratios.
To measure the line fluxes, first a linear fit to the continuum
adjacent to each line was subtracted. Typically the continuum was
very close to zero following removal of the scattered light, but
continuum subtraction was helpful in regions of residual
structure. Next, the extreme ends of the red and blue wings of the
line were marked and the total flux and centroid were computed between
these two points. The uncertainties in the line fluxes were estimated
using the error arrays for each spectrum produced by CALSTIS and a
propagation of errors analysis (Bevington, 1969). For the blended
lines of H$\alpha$ and [N$\,$II]$\lambda\lambda$6548, 6584, and [S$\,$II] $\lambda\lambda$ 6717, 6731, we used the [O$\,$III]$\lambda$5007 line as a
template to deblend the lines (see Crenshaw & Peterson 1986). This
was superior to Gaussian fitting since the emission line profiles are
often complex. The results of the emission line flux measurements are
presented in Table 2 where the flux values are listed relative to H$\beta$ and the H$\beta$ flux is given at the bottom in units of $10^{-15}$ ergs
cm${}^{-2}$ s${}^{-1}$ Å${}^{-1}$. The errors obtained for each flux are
given in parentheses.
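The flux measurement just described can be sketched as follows. This is our simplified paraphrase, not the CALSTIS/IDL code actually used: fit a linear continuum on windows flanking the line, integrate the net flux and centroid between the marked wing limits, and propagate the per-pixel errors in quadrature.

```python
import numpy as np

def line_flux(wave, flux, err, blue_cont, red_cont, wings):
    """Integrate one emission line between marked wing limits.

    blue_cont, red_cont : (lo, hi) continuum windows in wavelength
    wings               : (lo, hi) wavelengths bounding the line
    Returns (total flux, centroid, 1-sigma flux error).
    """
    in_cont = ((wave >= blue_cont[0]) & (wave <= blue_cont[1])) | \
              ((wave >= red_cont[0]) & (wave <= red_cont[1]))
    slope, intercept = np.polyfit(wave[in_cont], flux[in_cont], 1)
    net = flux - (slope * wave + intercept)   # continuum-subtracted
    in_line = (wave >= wings[0]) & (wave <= wings[1])
    dw = np.gradient(wave)                    # per-pixel bin widths
    total = np.sum(net[in_line] * dw[in_line])
    centroid = np.sum(wave[in_line] * net[in_line] * dw[in_line]) / total
    # quadrature propagation of the per-pixel error array
    sigma = np.sqrt(np.sum((err[in_line] * dw[in_line]) ** 2))
    return total, centroid, sigma
```

For blended lines the text uses template deblending with [O III] $\lambda$5007 instead; this sketch covers only the isolated-line case.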
Because of the failure of the far-UV observation at P.A. 70${}^{\circ}$, no
G140L spectrum was obtained, and so a reddening correction using the He
II lines was not possible. Although dereddening using the Balmer
decrement is certainly valid, a reliable extrapolation from the red to
the blue and near-UV lines is uncertain. We prefer, therefore, to
continue the analysis without the corrected line ratios, taking care
that any possible effects of reddening are accounted for by other
means. Correction of the line ratios for reddening is an important
step for a detailed photoionization model and is therefore presented
in Paper II for the data at P.A. 221${}^{\circ}$.
To extract information on the multiple kinematic components the [OIII] $\lambda$5007 and [OIII] $\lambda$4959 lines were fitted independently with one to three
Gaussians. Many slit extractions showed two components although in
only a few cases was there compelling evidence for a third. Only
velocity components measured independently at both [OIII] $\lambda$4959 and
[OIII] $\lambda$5007 are included. To test that each component represented the true
kinematics of the gas, we compared the velocities obtained at both
[OIII] $\lambda$5007 and [OIII] $\lambda$4959. Only those components with velocity
difference in the two lines less than or equal to twice the mean
difference for all points were retained. The procedure was then
repeated. The first iteration removed velocity components that were
wildly discordant, and therefore unlikely to be real, while the second
gave us confidence that the remaining components are physically
significant. From the difference in velocity between components
extracted at [OIII] $\lambda$4959 and [OIII] $\lambda$5007, we estimate the standard
deviation of the velocities to be 30 km s${}^{-1}$ . The results for each slit
position are given in Tables 3a and 3b, where the Gaussian components
are listed in order of increasing velocity. Negative slit
positions correspond to the SW region and positive slit positions
correspond to the NE.
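The two-pass consistency cut described above can be written compactly; the sketch below is our paraphrase of the procedure, with illustrative velocities only in the test values.

```python
import numpy as np

def consistent_components(v4959, v5007, passes=2):
    """Keep velocity components whose [O III] 4959 and 5007 velocities
    differ by at most twice the mean absolute difference of the
    currently retained sample; iterate (two passes in the text)."""
    v4959 = np.asarray(v4959, dtype=float)
    v5007 = np.asarray(v5007, dtype=float)
    keep = np.ones(v4959.size, dtype=bool)
    for _ in range(passes):
        diff = np.abs(v4959 - v5007)
        cut = 2.0 * diff[keep].mean()   # threshold from retained points
        keep &= diff <= cut
    return keep
```

A wildly discordant component is dropped in the first pass; the second pass, with the tighter threshold of the cleaned sample, confirms the remainder.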
3.2 Kinematics
Figure 4 shows portions of the longslit spectra centered on
the [OIII] $\lambda$5007 , [OIII] $\lambda$4959 and H$\beta$ emission lines for both slit positions, with the NE
end of the slit at the top. The complex velocity structure that has been
noted in both ground-based and HST studies (e.g. Schulz 1990,
Kaiser et al. 1999, Hutchings et al., 1999) is seen
including line splitting at several positions along the slit. Note
that as a result of our scattering correction and PSF subtraction we
are able to probe the emission line kinematics to within 0${}^{\prime\prime}\!\!$.2 of
the nucleus.
Included within our slits are four of the high velocity regions
(absolute value of projected velocity greater than 400 km s${}^{-1}$ ) reported
in Hutchings et al. (1999) and 20 of the clouds identified in
Kaiser et al. (1999) (Tables 4a, 4b). Our agreement with
Hutchings’ velocities is reasonable, ranging from a difference of 6
km s${}^{-1}$ for region N detected at slit P.A. 221${}^{\circ}$ , to 160 km s${}^{-1}$ for
region D. While some of the difference is undoubtedly due to
measurement uncertainties, there may be real differences due to the
portion of the high velocity regions which fall within our slit.
There may also be some uncertainty due to confusion of spectral and
spatial information in the slitless data. We see components of high
velocity gas not specifically identified by Hutchings et al.
(1999) on both sides of the bicone at both slit positions (Tables 3a,
3b). This gas corresponds to high velocity gas imaged by Hutchings
et al. (1999), but for which velocities were not previously
measured. The high velocity components generally account for a small
fraction of the total flux in the [OIII] emission lines, again in
agreement with the findings of Hutchings et al. (1999). We find
more high velocity components in slit position P.A. 70${}^{\circ}$, which is
close to the radio ridge line, than we do in the P.A. 221${}^{\circ}$ slit.
However, there is some high velocity gas not associated with the radio
emission.
Comparison of our velocities with those reported by Kaiser et
al. (1999) is more difficult, since in several cases they reported
single velocities for clouds where we find multiple velocity
components. Furthermore, there are instances of extended clouds for
which our slit does not sample the entire cloud. If we compare only
velocities for clouds for which we find a single velocity component
and average our velocities for clouds occurring in more than one
extraction bin, we find the average difference in velocities is $-18$
km s${}^{-1}$ $\pm$ 94 km s${}^{-1}$ (in the sense V(this paper) $-$ V(Kaiser et al.)).
This difference and range is comparable to what was found for
the high velocity clouds, and can be attributed to the same causes.
Figure 5 shows the velocities of the individual [OIII]
components from the Gaussian deblending. Points along P.A. 221${}^{\circ}$ are marked as solid points and along P.A. 70${}^{\circ}$ as open symbols. The
horizontal bars indicate the size of the extracted spectrum used for
the measurement along the slit. Vertical error bars are omitted since
the velocity uncertainties are comparable to the size of the points on
the diagram (see section 3). A systemic velocity of 997 km s${}^{-1}$ has been subtracted from the data. The solid and dashed lines show
results expected for our simple models described below. The results
follow the velocity distribution determined from the slitless
spectroscopy of Kaiser et al. (1999) and the plot is similar to
their Figure 8, though without the extreme high velocities. The
velocities at large distances from the nucleus are consistent with the
rotation of the galactic disk, while closer in the velocities are
strongly blue shifted SW of the nucleus and strongly redshifted to the
NE.
To better understand the kinematics we consider two possibilities for
the general form of the velocity field: radial outflow from the
nucleus and expansion directed away from the radio axis. We adopt the
basic conical geometry of the NLR of NGC 4151 as modeled by Pedlar
et al. (1993), with the radio jet pointing 40${}^{\circ}$ from the line
of sight and projected onto the plane of the sky at a P.A. of 77${}^{\circ}$.
After consideration of the well-known geometry of the host galaxy
(Simkin, 1975, Bosma, Ekers, & Lequeux, 1977) we require that the
cone opening angle be wide enough to include our line of sight to the
nucleus and also to intersect the disk of the host galaxy, since the
Extended Narrow Line Region (ENLR) kinematics follow the rotation of
the disk. Pedlar et al. (1993) estimate the opening angle to be
130${}^{\circ}$. However, Boksenberg et al. (1995) argue that the NLR
is density bounded and the ionized gas only partially fills the cone.
Therefore we choose a narrow vertex angle of 70${}^{\circ}$ which is a better
match to the observed NLR structure. The models are drawn
schematically in Figure 6. These models are used to estimate
the radial velocity as a function of projected distance from the
nucleus for each slit position angle. Our purpose is not to
produce a detailed match to the observed velocities of each individual
cloud, but to test two ideas about the general form of the NLR kinematics.
Therefore, we assume that the interior of the cone is uniformly filled and
note that the observed velocity distribution is not expected to be as
smooth or complete as the model, reflecting the way in which the
emission line clouds fill the cone.
For both the radial outflow model and the jet expansion model we
consider two cases, one in which the flow has a constant velocity and
one in which the flow decelerates as it moves outward. We model this
decelerating flow as a $R^{-1/2}$ dependence where $R$ in the radial
flow model is distance from the nucleus and $R$ in the jet expansion
model is distance from the radio axis. This particular form of
deceleration is chosen since it seems to represent the data best and
is meant only to illustrate the effect. The results are plotted in
Figure 7 for all four models in the form of a model longslit
spectrum of a single emission line comparable to Figure
4. The slit was chosen to lie along P.A. 70${}^{\circ}$ and to
have a slit width of 0${}^{\prime\prime}\!\!$.1 as in our STIS observations. These
simulated spectra were then deblended using two Gaussians at each slit
position in the same manner as the real data. The velocities for the
decelerating models are shown in Figure 5 as the dashed
lines (one for each Gaussian component) for the case of jet expansion
and the solid lines for the case of radial flow.
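The velocity law of the decelerating radial-outflow model can be sketched in a few lines. This is a minimal illustration of the $R^{-1/2}$ dependence and the line-of-sight projection only; the normalization $v_0$ and $r_0$ are assumed values, not fitted parameters, and the full model described above additionally fills a cone of 70${}^{\circ}$ vertex angle and is sampled through the slit.

```python
# Minimal sketch of the decelerating radial-outflow model: the flow speed
# falls as R^(-1/2) with distance R from the nucleus, and the observed
# radial velocity is the line-of-sight projection.  v0 and r0 are assumed
# illustrative values, not fitted parameters.
import math

def outflow_speed(r, v0=1000.0, r0=1.0):
    """Flow speed [km/s] at distance r from the nucleus: v(R) = v0*(R/r0)**(-1/2)."""
    return v0 * (r / r0) ** -0.5

def radial_velocity(r, angle_to_los_deg, v0=1000.0, r0=1.0):
    """Line-of-sight velocity of a cloud moving radially outward at the given
    angle (degrees) to the line of sight."""
    return outflow_speed(r, v0, r0) * math.cos(math.radians(angle_to_los_deg))

# A cloud twice as far out is slower by a factor of sqrt(2); gas moving in
# the plane of the sky (90 degrees to the line of sight) shows ~zero
# radial velocity, as for the far side of the SW cone.
print(radial_velocity(1.0, 40.0), radial_velocity(2.0, 40.0), radial_velocity(1.0, 90.0))
```

This captures why a decelerating radial flow produces an envelope of extreme velocities that shrinks with projected distance from the nucleus, as in Figure 5.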
For the case of expansion away from the radio axis we expect both
large positive and large negative velocities relative to the systemic
velocity at any given position along the slit. In the case of radial
outflow, however, large positive velocities and velocities much closer
to the systemic velocity will be observed on one side of the slit
while on the other side, large negative velocities and velocities near
the systemic velocity are expected. Since for NGC 4151 the far side
of the SW cone lies close to the plane of the sky, the bulk of the
flow is transverse to the line of sight, yielding radial velocities
close to the systemic value, while the near side is much closer to the
line of sight yielding large approaching radial velocities. Similar
considerations hold for the NE cone except that the near side of the
cone lies in the plane of the sky and the far side yields the large
receding velocities.
We conclude that the radial outflow case gives a better match to the
observed velocity distribution than the case of expansion away from
the jet. In the case of a radially decelerating flow the overall
envelope of the highest velocities decreases as one moves away from
the nucleus much as seen in Figure 5. Although the match
is not perfect it seems to follow the trend of less extreme velocities
as one moves along the slit. From these simple models we cannot
exclude the possibility of some motion perpendicular to the radio
jet. However, it does seem likely that the flow is dominated by a
radial outflow from the nucleus which slows with distance and that any
contribution from expansion away from the jet is less significant.
3.3 Line Ratios and Projected Distance from the Nucleus
An understanding of the physical conditions in the NLR can be
obtained by considering how various line strengths change as a
function of distance from the nucleus and with respect to each other.
In Paper II (Kraemer et al. 1999) a detailed photoionization model
is developed using the emission line fluxes presented here. In the current
paper we present a simpler analysis.
The ratio of [OIII] $\lambda$5007 to H$\beta$ is well known to be sensitive to the
ionization parameter $U=Q/4\pi r^{2}n_{e}c$, where $Q$ is the rate
at which ionizing photons are emitted, $r$ is the distance to the
nucleus, $n_{e}$ is the electron density, and $c$ is the speed of light.
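As a numerical illustration of this formula, the following sketch evaluates $U=Q/4\pi r^{2}n_{e}c$; the values of $Q$, $r$, and $n_{e}$ below are assumed round numbers for illustration only, not measurements from this paper.

```python
# Illustrative evaluation of the ionization parameter U = Q / (4 pi r^2 n_e c).
# The input values are assumed for illustration; they are not fitted to NGC 4151.
import math

C_CM_S = 2.998e10    # speed of light [cm/s]
PC_CM = 3.086e18     # one parsec [cm]

def ionization_parameter(q_ion, r_pc, n_e):
    """U from ionizing photon rate Q [s^-1], distance r [pc], density n_e [cm^-3]."""
    r_cm = r_pc * PC_CM
    return q_ion / (4.0 * math.pi * r_cm**2 * n_e * C_CM_S)

# Assumed values: Q ~ 1e54 photons/s, r = 50 pc, n_e = 2000 cm^-3
print(f"U = {ionization_parameter(1e54, 50.0, 2000.0):.2e}")
```

Note that $U\propto 1/(r^{2}n_{e})$, so if the density declines less steeply than $r^{-2}$, as argued in Paper II, $U$ decreases with distance even for a fixed source.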
Figure 8 shows the [OIII] $\lambda$5007 to H$\beta$ ratio as a function of
distance along the slit for both position angles. We use ratios that
have not been corrected for extinction since the lines are close in
wavelength and are therefore rather insensitive to reddening. We see
from the diagram that the line ratio decreases with distance in the
inner 2${}^{\prime\prime}$ and recovers somewhat at larger radii on the NE side
(positive X-axis). This trend was shown by Kaiser et al. (1999)
from the slitless spectroscopy; they suggest that this apparent change
in the ionization parameter with distance reflects a decrease in
density.
Apart from the increase in [OIII] $\lambda$5007 / H$\beta$ on the extreme NE side of
P.A. 221${}^{\circ}$ , there is no significant difference in the ratio between
the two position angles, suggesting that while the ionization state of
the gas may change moving away from the nucleus, it generally does not
change laterally, i.e. with distance from the radio axis.
A similar trend is seen in the ratio of [OIII] $\lambda$5007 to [OII] $\lambda$3727 , which is also
sensitive to the ionization parameter. Figure 9 plots the
ratio versus distance for both slit positions. Again, the line fluxes
have not been corrected for extinction but the dust is most likely
patchy (see Paper II) and so is not likely to influence the overall
trend and merely adds scatter. Support for this comes from the fact
that the trend is largely symmetric about the nucleus indicating that
no large scale dust lanes pass through our aperture. Furthermore, for
the slit extractions where both He II lines used for the extinction
corrections are present (along P.A. 221${}^{\circ}$ ), the largest change in
the [OIII]/[OII] ratio from dereddening was a decrease of $\sim 30$%
(see Paper II). Therefore, to the extent that the distribution of
dust is comparable along each slit position, the conclusion of a
decreasing [OIII]/[OII] ratio with distance is robust.
The safest conclusion to draw from these diagrams is that the density
falls off with distance as suggested by Kaiser et al. (1999) and
confirmed in Paper II. In fact, in the inner clouds the high
[OIII]/[OII] most likely results from collisional de-excitation of the O${}^{+}$
ions. Although these trends could naively be considered an indication
of decreasing ionization parameter with distance from the nucleus,
the more detailed investigation in Paper II suggests a more constant
ionization parameter and a density which declines less rapidly than
$r^{-2}$.
In Figure 10 the density sensitive ratio of [SII] $\lambda$6717 to [SII] $\lambda$6731 is plotted as a function of distance, again for both slit positions.
Judging by the size of the error bars, much of the scatter in the
diagram is real suggesting that the gas is rather clumpy, with regions
of higher and lower density at various points along the slit. There
is also an interesting drop in the ratio very close to the nucleus
particularly in the data from the P.A. 70${}^{\circ}$ slit position,
suggesting an increase in density there. Generally, the [SII] ratios
appear to be larger farther out particularly along P.A. 221${}^{\circ}$ (solid dots), indicating a decrease in density with radius, at least
in the partially ionized zone. Using the five-level atom program
developed by Shaw & Dufour (1995) and assuming a temperature of 15000
${}^{\circ}$ K (see below) we find that the density of the inner NLR is
roughly $2000\rm~{}cm^{-3}$ while in the outer NLR and ENLR the
density has dropped to $\sim 300\rm~{}cm^{-3}$. This agrees with the
results of Robinson et al. (1994) who found density decreasing
with distance from the nucleus in NGC 4151, with an overall NLR
density of $1600~{}\rm cm^{-3}$, and a density in the ENLR of $250~{}\rm cm^{-3}$. This is also in agreement with the interpretation of the
decline in [OIII] $\lambda$5007 /H$\beta$ as the result of a decrease in density.
The [OIII]$\lambda 5007$/[OIII]$\lambda 4363$ ratio is well-known to
be sensitive to the temperature of the gas. Figure 11
shows the [OIII] ratio as a function of distance along the slit. The
use of this ratio to calculate the temperatures is only valid for
densities up to $n_{e}\simeq 10^{5}$ cm${}^{-3}$ at which point collisional
de-excitation begins to have an influence on the line strengths
(Osterbrock, 1974). Furthermore the [SII] densities cannot be used
since they reflect densities in the partially ionized zone. Thus we
use results from Paper II for the gas densities which indicate that in
the O${}^{++}$ zone the densities are below $\rm 10^{5}cm^{-3}$. The
results from the five-level atom program give temperatures in the
range of 12000${}^{\circ}$ K to 17000${}^{\circ}$ K. Based on Figure
11 there appears to be a slight trend for a decreasing
ratio (increasing temperature) with distance from the nucleus. This is
difficult to confirm, however, since reddening may play a role,
tending to increase the observed ratio. Paper II gives a more detailed
analysis of the physical conditions along the slit.
3.4 Line Ratio Diagrams and Photoionization
Diagrams plotting one line ratio against another can be used to
investigate the origin of the photoionizing continuum. By choosing
line ratios which consist of lines which are close in wavelength we
can significantly reduce the effects of reddening (see e.g.
Veilleux and Osterbrock, 1987). In Figure 12 a, b, and
c, we present the optical emission line ratios [S II] $\rm\lambda\lambda 6717,6731/H\alpha$, [N II] $\rm\lambda 6584/H\alpha$, [O I]
$\rm\lambda 6300/H\alpha$, respectively, plotted against [OIII] $\lambda$5007 /H$\beta$ .
In each diagram the solid line separates star-forming regions from AGN
and is taken from Veilleux and Osterbrock (1987). The dashed line is
the power-law photoionization model for solar abundance taken from
Ferland and Netzer (1983). The ionization parameter varies from
$10^{-4}$ to $10^{-1.5}$ from lower right to upper left.
We find that the NGC 4151 NLR clouds occupy compact regions on these
diagrams indicating that the source of the ionizing continuum is the
same for all of the points sampled along the slit. Thus none of the
clouds observed shows evidence for star-formation or
LINER-like excitation. While this result is not unexpected it is worth
commenting that the NLR gas all seems to have the same source of
excitation.
Other line ratio diagrams including UV lines are also interesting
since they allow us to investigate the possibility of alternate
ionization mechanisms for the NLR clouds (Allen et al. 1998).
In Figure 13a, b and c we plot the ratios of CIV
$\lambda 1549$ to He II $\lambda 1640$, CIV $\lambda 1549$ to CIII]
$\lambda 1909$, [Ne V] $\lambda 3426$ to [Ne III] $\lambda 3869$,
respectively against [OIII] $\lambda$5007 to H$\beta$ (only the P.A. 221${}^{\circ}$ data is
shown for Figures 13a and b since the far-UV observation
at P.A. 70${}^{\circ}$ was unsuccessful). The lines show model grids calculated
using the MAPPINGS II code (Sutherland and Dopita, 1993) by Allen et al. (1999) for shock ionization (bottom), shock plus ionized
precursor gas (middle) and for power-law photoionization (top). For
the shock plus precursor models, the shock velocity increases from 200
km s${}^{-1}$ to 500 km s${}^{-1}$ moving from low to high [OIII] $\lambda$5007 /H$\beta$ ratios. Notice that
for the highest velocity shocks the models coincide with power-law
photoionization models.
Again the NGC 4151 NLR occupies very limited regions in these diagrams
corresponding to photoionization by a power law at high ionization
parameter or by shock plus precursor models with very high velocity
($V_{\rm shock}\simeq 500$km s${}^{-1}$ ). These results strongly suggest that
low velocity shocks play an insignificant role in accounting for the
ionization state of the NLR in NGC 4151 but we cannot rule out the
possibility of ionization by radiation from fast shocks.
4 Discussion
The results of the kinematic and emission line ratio analysis can be
combined to create a coherent picture of the NLR in NGC 4151. We have
seen that the kinematics bear the signature of radial outflow from the
nucleus and are distinctly different from an expansion away from the
radio jet axis. This is an interesting result since many recent
studies have reported kinematic evidence that the radio jet can have a
significant influence on the motion of NLR gas (e.g. Bicknell
et al. 1998, NGC 4151 Winge et al 1998, Mrk 3 Capetti et al. 1998). In these studies the NLR gas is found immediately
surrounding and expanding away from knots of radio emission as in Mrk
3 or forms a bow shock structure around the working surface of the
head of the jet as in Mrk 573 (Capetti et al. 1996, Falcke,
Wilson, & Simpson 1998). This seems not to be the case for NGC 4151.
In conflict with this statement, the study of NGC 4151 by Winge et al. (1998) reports that high velocity clouds are seen around the
edges of the radio knots. This is not confirmed by Kaiser et al.
(1999) who conclude that there is no direct association between
non-virial gas kinematics, as determined by high velocity and high
velocity dispersion, and proximity to the radio knots. Our results
concur with those of Kaiser et al. (1999). While we do find high
velocity clouds in our aperture there is no distinct preference for
them to be found along P.A. 70${}^{\circ}$, which is more closely aligned with
the radio axis (P.A. 77${}^{\circ}$).
Further support for radial outflow comes from the emission line ratios
as a function of position. For example there is no significant
difference in the [OIII]/[OII] or [OIII] $\lambda$5007 /H$\beta$ ratios between the two
slit positions even though the spectra at P.A. 70${}^{\circ}$ are much more
closely aligned with the radio axis than the clouds at
P.A. 221${}^{\circ}$. This is in contrast to the case of NGC 1068 where the
[OIII]/[OII] ratio increases dramatically in regions that coincide
with the radio jet (Axon et al. 1998). WFPC2 images of NGC 1068
presented by Capetti, Axon, & Macchetto (1997) may also indicate
higher density and ionization state along the radio jet in this
object. Furthermore, these authors suggest that an additional source
of local ionizing continuum is required to explain the
observations. While these results certainly raise an interesting
possibility for NGC 1068, our results for NGC 4151 show no such
association between the radio morphology and the emission line ratios.
Thus the radio jet in NGC 4151 seems to have little influence on the
ionization state of the gas. Similar results are seen for the [SII]
ratio and the [OIII] $\lambda 5007/\lambda 4363$ ratio suggesting no
strong changes in the physical condition of the gas with proximity to
the radio emission.
Because the line ratio diagrams show no evidence for shock or shock
plus precursor ionization models at least for low velocity shocks,
they support the arguments for radial outflow. If the gas were
expanding away from the radio axis one would expect to see large
amounts of shocked material particularly at the interface of the flow
with the ambient interstellar medium of the host galaxy disk. In the
case of radial outflow, we would expect to see little shocked gas
since the motion is not directed into the disk and the relative
velocities of gas within the flow should be small.
Perhaps an important consideration is that the radio morphology of NGC
4151 is rather different from that of Mrk 3 for example. Pedlar et al. (1993) have compared the radio structure of NGC 4151 to that of an
FR I type radio galaxy, with much of the radio emission coming from a
diffuse component, although on much smaller scales. The radio emission
in Mrk 3, by contrast, is more jet-like being unresolved with MERLIN
perpendicular to the radio source axis (Kukula et al.
1993). Thus we might consider that the radio emission in NGC 4151 is
not a well collimated jet, but rather a broad spray of plasma. Gas
clouds in the vicinity of the radio flow would thus be more naturally
accelerated in directions roughly aligned with the radio axis than
perpendicular to it.
One possible scenario is that the core of the radio jet in NGC 4151
has cleared a channel in the line emitting gas and has blown out of
the disk of the galaxy as suggested by Schulz (1988). Thus there may
have been a bow shock associated with the radio lobes in the past but
the jet has passed on to a lower density region in the outer bulge and
galaxy halo. The line emitting gas is now free to flow out along the
radio axis but only weakly interacts with the jet itself and the host
galaxy ISM.
NGC 4151 is also known to have a system of nuclear absorption lines,
particularly CIV $\lambda$1549, which are blueshifted with respect to
the systemic velocity by values ranging from 0 to 1600 km s${}^{-1}$ (e.g. Weymann et al. 1997). It is tempting to link the outflow
seen in our study with that for the absorption line system. However,
these flows are observed on vastly different scales and thus a true
connection has not been established. Models invoking winds from the
nucleus to explain the NLR kinematics and other properties of Seyfert
galaxies have been proposed (e.g. Krolik & Vrtilek, 1984,
Schiano, 1986, Smith, 1993). One suggestion is that X-ray heating of
the molecular torus is the source of the wind (Krolik & Begelman,
1986). The base of the wind forms the electron scattering region which
serves as the “mirror” allowing a view of the BLR in polarized light
in some Seyfert 2 galaxies. At larger radii one might expect that the
steep potential of the galaxy bulge tends to decelerate the wind. We
conclude that the kinematics in NGC 4151 seem to be consistent with
wind models for the NLR.
5 Summary
The results presented in this paper provide an interesting contrast to
the recent work on the NLR of Seyfert galaxies. Our analysis of the
longslit spectra of NGC 4151 has revealed a rather different picture
of the NLR in the sense that the prominent radio jet has very little
influence on the kinematics and physical conditions. We find that the
kinematics are best characterized by a decelerating radial outflow
from the nucleus in the form of a wind. The lack of evidence for
strong shocks near the radio axis and the uniformity of the line
ratios across the NLR supports this picture. Thus it appears that
while interaction between the radio jet and the NLR gas may be a common
occurrence it is by no means ubiquitous and does not apply in the case
of NGC 4151.
We would like to thank Diane Eggers for her assistance in the data
analysis. We would also like to thank Mark Allen for providing the
model grids for the UV line ratio diagrams. This research has been
supported in part by NASA under contract NAS5-31231.
References
()
Allen, M., Dopita, M. A., & Tsvetanov, Z. I. 1998, ApJ, 493, 571
()
Antonucci, R. 1993, ARA&A, 31, 473
()
Baum, S., Bohlin, R., Christensen, J., Debes. J.,
Downes, R., Ferguson, H., Gonnella, A., Goudfrooij, P., Hayes, J.,
Hodge, P., Hulbert, S., Katsanis, R., Keys, T., Lanning, H., McGrath, M.,
Sahu, K., Shaw, R., Smith, E., Walborn, N., Wilson, J. & Bowers, C.
1998 STIS Instrument Handbook, Version 2.0 (Baltimore: STScI)
()
Bevington, P. R. 1968, Data Reduction and Error Analysis for the Physical
Sciences (New York: McGraw-Hill)
()
Bicknell, G. V., Dopita, M. A., Tsvetanov, Z. I., & Sutherland, R. S.
1998, ApJ, 495, 680
()
Boksenberg, A., Catchpole, R. M., Macchetto, F., Albrecht, R.,
Barbieri, C., Blades, J. C., Crane, P., Deharveng, J. M., Disney,
M. J., Jakobsen, P., Kampermann, T. M., King, I. R., Mackay, C. D.,
Paresce, F., Weigelt, G., Baxter, D., Greenfield, P., Jedrzejewski,
R., Nota, A., & Sparks, W. B. 1995, ApJ, 440, 151
()
Bosma, A. Ekers, R. D., & Lequeux, J. 1977, A&A,
57, 97
()
Bowers, C. & Baum, S. 1998 STIS ISR 98-24 “Spectroscopic Mode Peculiarities”
()
Capetti, A., Axon, D. J., Macchetto, F. D., Marconi, A., Winge, C. 1999,
ApJ, 516, 184
()
Capetti, A., Axon, D. J., & Macchetto, F. D. 1997 ApJ, 487, 560
()
Capetti, A., Axon, D. J., Macchetto, F., Sparks, W. B., & Boksenberg, A. 1996,
ApJ, 469, 554
(Crenshaw et al. 1996)
Crenshaw, D.M.,
Rodriguez-Pascual, P.M., Penton, S.V., Edelson, R.A., Alloin D., et
al., 1996, ApJ, 470, 332
()
Crenshaw, D.M., & Peterson, B.M. 1986, PASP, 98, 185
()
Evans, I. N., Ford, H. C., Kinney, A. L., Antonucci, R. R. J., Armus, L.,
& Caganoff, S. 1991, ApJL, 369, L27
()
Evans, I. N., Tsvetanov, Z., Kriss, G. A., Ford, H. C., Caganoff, S., &
Koratkar, A. P. 1993, ApJ, 417, 82
()
Falcke, H., Wilson, A. S., & Simpson, C. 1998, ApJ, 502, 199
()
Ferland, G. & Netzer, H. 1983, ApJ, 264, 105
(Hutchings et al. 1998)
Hutchings, J.B.,
Crenshaw, D.M., Kaiser, M.E., Kraemer, S.B., Weistrop, D., Baum, S.,
Bowers, C.B., Feinberg, L. D., Green, R.F., Gull, T.R., Hartig, G.F.,
Hill, G., Lindler, D.J., 1998, ApJ, 492, L115
(Hutchings et al. 1999)
Hutchings, J.B.,
Crenshaw, D.M., Danks, A. C., Gull, T.R., Kraemer, S.B., Nelson, C. H.,
Weistrop, D., Kaiser, M.E., Joseph, C. L. 1999, AJ, submitted
()
Kaiser M.E., Bradley II, L. D., Hutchings J.B., Crenshaw, D. M., Gull, T. R.,
Kraemer, S. B., Nelson, C. H., Ruiz, J., & Weistrop, D. 1999, ApJ, in press
(Kaspi et al. 1996)
Kaspi, S., Maoz, D., Netzer,
H., Peterson, B.M., Alexander, T., Barth, A.J., et al., 1996, ApJ,
470, 336
(Kimble et al. 1998)
Kimble, R.A., Woodgate,
B.E., Bowers, C.W., Kraemer, S.B., Kaiser, M.E., et al. 1998, ApJ,
492, L83
()
Kraemer, S. B., Crenshaw, D. M., Hutchings, J. B., Gull, T. R., Kaiser, M. E.
Nelson, C. H., & Weistrop, D. M. 1999, ApJ, submitted (Paper II)
()
Krolik, J. H. & Vrtilek, J. M. 1984, ApJ, 279, 521
()
Krolik, J. H., & Begelman, M. C. 1986, ApJ, 308, L55
()
Kukula, M. J., Ghosh, T., Pedlar, A., Schilizzi, R. T.,
Miley, G. K., de Bruyn, A. G., & Saika, D. J. 1993, MNRAS, 264, 893
()
Lindler, D. 1998, CALSTIS Reference Guide (CALSTIS Version 5.1)
()
Lindler, D. 1998, Private communication
()
Osterbrock, D. E. 1974, Astrophysics of Gaseous
Nebulae (San Fransisco: W. H. Freeman)
()
Pedlar A., Kukula, M.J., Longley, D.P.T., Muxlow, T.W.B., Axon, D.J.,
Baum, S., O’Dea, C., & Unger, S.W. 1993, MNRAS, 263, 471
(Penston et al. 1990)
Penston, M.V., Robinson,
A., et al., 1990, AA, 236, 53
(Peterson et al. 1998)
Peterson, B.M., Wanders,
I., Bertram, R., Hunley, J.F., Pogge, R.W., Wagner, R.M., 1998, ApJ,
501, 82
(Robinson et al. 1994)
Robinson, A.,
Vila-Vilaro, B., Axon, D.J., Perez, E., Wagner, S.J., et al., 1994,
AA, 291, 351
()
Schiano, A. V. R. 1986, ApJ, 302, 95
()
Schmitt, H. R., & Kinney, A. L. 1996, ApJ, 463, 498
()
Schulz, H. R. 1988, A&A, 203, 233
()
Schulz, H. R. 1990, AJ, 99,1442
()
Shaw, R.A., & Dufour, R. J. 1995, PASP, 107, 895
()
Simkin, S. 1975, ApJ, 200, 567
()
Smith, S. J. 1993, ApJ, 411, 1993
()
Sutherland, R. S., Bicknell, G. V., & Dopita, M. A. 1993, ApJ 414, 510
()
Taylor, D., Dyson, J. E. & Axon D. E. 1992, MNRAS, 255, 351
()
Veilleux, S. & Osterbrock, D. E. 1987, ApJS, 63, 295
(Ulrich et al. 1997)
Ulrich, M.H., Maraschi, L.,
Urry, C.M., 1997, ARAA, 35, 445
(Warwick et al. 1996)
Warwick, R.S., Smith, D.A.,
Yaqoob, T., Edelson, R., Johnson, W.N., Reichert, G.A., Clavel, J.,
Magdziarz, P., Peterson, B.M., Zdziarski, A.A., 1996, ApJ, 470, 349
(Weymann 1997)
Weymann, R.J., Morris, S.L., Gray,
M.E., Hutchings, J.B., 1997, ApJ, 483, 717
()
Winge C., Axon D.J., Macchetto F.D., Capetti A., &
Marconi A. 1999, ApJ, 519, 134
(Woodgate et al. 1998)
Woodgate, B.E., Kimble,
R.A., Bowers, C.W., Kraemer, S., Kaiser, M.E., et al. 1998, PASP, 110, 1183 |
Asymptotic Filtered Colimits
Logan Higginbotham
Campbell University, Buies Creek, NC, USA
[email protected]
and
Kevin Sinclair
Shenandoah University, Winchester, VA, USA
[email protected]
(Date: December 1, 2020)
Abstract.
If one has a collection of large scale spaces $\{(X_{s},\mathcal{LSS}_{s})\}_{s\in S}$ satisfying certain compatibility conditions, one may define a large scale structure on $X=\bigcup\limits_{s\in S}X_{s}$ in such a way that a function on $X$ is large scale continuous if and only if its restriction to every $X_{s}$ is large scale continuous. This large scale structure is called the asymptotic filtered colimit of $\{(X_{s},\mathcal{LSS}_{s})\}_{s\in S}$. In this paper, we explore a wide variety of coarse invariants that are preserved between $\{(X_{s},\mathcal{LSS}_{s})\}_{s\in S}$ and the asymptotic filtered colimit $(X,\mathcal{LSS})$. These invariants include finite asymptotic dimension, exactness, property A, and coarse embeddability into a separable Hilbert space. We also pose some questions and present examples of filtered colimits that give insight into how such colimits can be constructed and into what may fail to be preserved.
1. Introduction
The main focus of this paper is to introduce the notion of asymptotic filtered colimits. We do this by deriving the definition of the asymptotic filtered colimit construction and then look at some examples of asymptotic filtered colimits. We then show multiple coarse invariants that asymptotic filtered colimits preserve while stating some questions along the way. Finally, we end this paper with something that asymptotic filtered colimits do not preserve. We will start by introducing definitions associated with families of subsets of a set $X$ in order to define large scale structures:
Definition 1.1.
Let $\mathcal{U}$ be a family of subsets of a set $X$ and let $V$ be a subset of $X$. The star of $V$ against $\mathcal{U}$, denoted $\operatorname*{st}(V,\mathcal{U})$, is the set
$$\bigcup\limits_{\begin{subarray}{c}U\in\mathcal{U}\\
{U\cap V\neq\varnothing}\end{subarray}}U$$
If $\mathcal{V}$ is another family of subsets of $X$, then the family of subsets of $X$ $\left\{\operatorname*{st}(V,\mathcal{U})|V\in\mathcal{V}\right\}$ is denoted $\operatorname*{st}(\mathcal{V},\mathcal{U})$ for convenience.
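For concreteness, the star operation of Definition 1.1 can be checked computationally on small finite families. The following sketch models sets as Python sets; it is an illustration only, not part of the paper.

```python
# Computational check of Definition 1.1: st(V, U) is the union of all
# members of the family U that meet the set V.

def star(v, family):
    """Return st(v, family): the union of all members of `family` meeting `v`."""
    result = set()
    for u in family:
        if u & v:          # u intersects v
            result |= u
    return result

U = [{1, 2}, {2, 3}, {5, 6}]
print(star({2}, U))        # {1,2} and {2,3} meet {2}; {5,6} does not
```

The family version $\operatorname{st}(\mathcal{V},\mathcal{U})$ is then just `[star(v, U) for v in V]`.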
Definition 1.2.
Let $\mathcal{U},\mathcal{V}$ be families of subsets of a set $X$. We say $\mathcal{U}$ is a refinement of $\mathcal{V}$ provided for every $U\in\mathcal{U}$ there is a $V\in\mathcal{V}$ so that $U\subseteq V$. In this same situation, we also say that $\mathcal{V}$ coarsens $\mathcal{U}$. Refinement is denoted $\mathcal{U}\prec\mathcal{V}$.
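The refinement relation of Definition 1.2 is likewise easy to test on finite families; the sketch below is a toy illustration with families modeled as lists of Python sets.

```python
# Direct check of Definition 1.2: U refines V iff every member of U is
# contained in some member of V.

def refines(u_family, v_family):
    """True iff u_family is a refinement of v_family."""
    return all(any(u <= v for v in v_family) for u in u_family)

U = [{1}, {2, 3}]
V = [{1, 2, 3}, {4}]
print(refines(U, V), refines(V, U))   # U refines V, but not conversely
```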
We sometimes need to consider covers of $X$ instead of arbitrary collections of subsets of $X$. To distinguish families of subsets of $X$ from covers of $X$, we call covers of $X$ scales:
Definition 1.3.
Given a set $X$, we say $\mathcal{U}$ is a scale of $X$ if $\mathcal{U}$ is a family of subsets of $X$ that covers $X$. If $\mathcal{U}$ is a collection of subsets of $X$, we can make $\mathcal{U}$ into a cover via constructing $\mathcal{U}^{\prime}=\mathcal{U}\cup\left\{\{x\}\right\}_{x\in X}$. This extension is often called the trivial extension of $\mathcal{U}$.
The definition of large scale structures was given by Dydak in [2]. This interpretation of coarse structures gives coarse geometry a more topological flavor.
Definition 1.4.
[2] Let $X$ be a set. A large scale structure on $X$ is a non-empty set of families of subsets of $X$ $\mathcal{LSS}$ so that the following conditions are satisfied:
(1)
If $\mathcal{U},\mathcal{V}$ are families of subsets of $X$ with $\mathcal{V}\in\mathcal{LSS}$ and each element $U$ of $\mathcal{U}$ consisting of more than one point is contained in some $V$ of $\mathcal{V}$, then $\mathcal{U}\in\mathcal{LSS}$.
(2)
If $\mathcal{U},\mathcal{V}\in\mathcal{LSS},$ then $\operatorname*{st}(\mathcal{U},\mathcal{V})\in\mathcal{LSS}$.
Elements $\mathcal{U}$ of $\mathcal{LSS}$ are called uniformly bounded families or uniformly bounded scales.
We note here that the first condition above implies closure under refinements. The advantage of this formulation over plain closure under refinements is that a large scale structure as defined "disregards" one point sets. That is, one point sets do not "change" the large scale structure. Also, the first item in the definition gives us that the cover $\{\{x\}\}_{x\in X}$ is uniformly bounded for any large scale structure. We will now remind the reader of some preliminary definitions about maps from one large scale space to another.
Definition 1.5.
Let $(X,\mathcal{LSS}_{X})$ and $(Y,\mathcal{LSS}_{Y})$ be large scale spaces and let $f:X\to Y$. We say $f$ is large scale continuous or bornologous if for every
$\mathcal{U}\in\mathcal{LSS}_{X},~{}f(\mathcal{U})\in\mathcal{LSS}_{Y}$, where $f(\mathcal{U})=\{f(U)|~{}U\in\mathcal{U}\}$.
Definition 1.6.
Let $(X,\mathcal{LSS}_{X})$ and $(Y,\mathcal{LSS}_{Y})$ be large scale spaces and let $f,g:X\to Y$. We say $f$ and $g$ are close provided there is a $\mathcal{V}\in\mathcal{LSS}_{Y}$ so that for any $x\in X,~{}f(x),g(x)\in V$ for some $V\in\mathcal{V}$.
Definition 1.7.
Let $(X,\mathcal{LSS}_{X})$ and $(Y,\mathcal{LSS}_{Y})$ be large scale spaces and let $f:X\to Y$ be large scale continuous. We say $f$ is a coarse equivalence if there exists a large scale continuous map $g:Y\to X$ so that $f\circ g$ is close to $id_{Y}$ and $g\circ f$ is close to $id_{X}$.
Definition 1.8.
A property $P$ of large scale spaces is a coarse invariant if whenever $(X,\mathcal{LSS}_{X})$ and $(Y,\mathcal{LSS}_{Y})$ are coarsely equivalent and $(Y,\mathcal{LSS}_{Y})$ has property $P$, then $(X,\mathcal{LSS}_{X})$ has property $P$ as well.
Coarse invariants include, but are not limited to: Metrizability, finite asymptotic dimension, asymptotic property C, property A, exactness, coarse amenability, and coarse embeddability into a separable Hilbert space. We shall explore some of these coarse invariants and show that they are preserved by the asymptotic filtered colimit construction. But we first must define it.
2. Asymptotic Filtered Colimits
We begin this chapter by introducing the notion of an asymptotic filtered colimit.
Definition 2.1.
Suppose $X$ is a set with $\left\{(X_{s},\mathcal{LSS}_{s})\right\}_{s\in S}$ subsets of $X$ and for each $s\in S$, $X_{s}$ has the large scale structure $\mathcal{LSS}_{s}$.
Further, assume $\bigcup\limits_{s\in S}X_{s}=X$ and for every $r,s\in S$ we have that the restrictions of the large scale structures $\mathcal{LSS}_{r}$ and $\mathcal{LSS}_{s}$ to the set $X_{r}\cap X_{s}$ coincide.
Also, $\forall r,s\in S~{}\exists t\in S$ such that $X_{r}\cup X_{s}\subseteq X_{t}$.
Then the asymptotic filtered colimit of $\left\{(X_{s},\mathcal{LSS}_{s})\right\}_{s\in S}$ of $X$ is the following large scale structure:
$\mathcal{U}$ is uniformly bounded if and only if $\exists s\in S$ and $\mathcal{V}\in\mathcal{LSS}_{s}$ so that for any $U\in\mathcal{U}$ with $|U|>1~{}\exists V\in\mathcal{V}$ so that $U\subseteq V$ (and consequently $U\subseteq X_{s}$). The process of creating $(X,\mathcal{LSS})$ from $\{(X_{s},\mathcal{LSS}_{s})\}$ is also called the asymptotic filtered colimit construction.
We note here that another way to think of the uniformly bounded families in the asymptotic filtered colimit $\mathcal{LSS}$ of $\left\{(X_{s},\mathcal{LSS}_{s})\right\}_{s\in S}$ is the following: For any $\mathcal{U}\in\mathcal{LSS}$ there is an $s\in S$ so that $\mathcal{U}^{*}\in\mathcal{LSS}_{s}$, where $\mathcal{U}^{*}$ is $\mathcal{U}$ with all one-point sets outside of $X_{s}$ removed. As a consequence, for every $s\in S$, $\mathcal{LSS}_{s}\subseteq\mathcal{LSS}$. We will make use of these remarks moving forward.
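On finite toy examples the membership test of Definition 2.1 can be carried out directly. In the hedged sketch below, each $\mathcal{LSS}_{s}$ is modeled (an assumption that only makes sense for finite examples) by an explicit list of its uniformly bounded families, with a family given as a list of sets.

```python
# Sketch of the membership test in Definition 2.1: a family U is uniformly
# bounded in the colimit iff for some index s there is a uniformly bounded
# family V in LSS_s coarsening every member of U with more than one point
# (singletons are disregarded, matching condition (1) of Definition 1.4).

def bounded_in_colimit(u_family, structures):
    """structures: dict mapping s to a list of families; each family is a
    list of sets representing a uniformly bounded family of LSS_s."""
    multi = [u for u in u_family if len(u) > 1]
    for families in structures.values():
        for v_family in families:
            if all(any(u <= v for v in v_family) for u in multi):
                return True
    return False

# X_1 = {0,1,2} and X_2 = {2,3,4}; one-point sets outside X_s are ignored.
lss = {1: [[{0, 1}, {1, 2}]],
       2: [[{2, 3, 4}]]}
print(bounded_in_colimit([{0, 1}, {3}], lss))     # {0,1} fits in LSS_1
print(bounded_in_colimit([{0, 1}, {3, 4}], lss))  # multi-point sets span both pieces
```

The second call returns False because no single $X_{s}$ in this toy example contains both multi-point sets, illustrating the role of the condition $X_{r}\cup X_{s}\subseteq X_{t}$ in Definition 2.1.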
Definition 2.2.
Suppose $X$ is a set and $\mathcal{LSS}$ is the asymptotic filtered colimit of subsets $\left\{(X_{s},\mathcal{LSS}_{s})\right\}_{s\in S}$ of $X$, and let $\mathcal{U}\in\mathcal{LSS}$. Define $\mathcal{U^{*}}$ to be $\mathcal{U}$ with all one-point sets outside of $X_{s}$ removed, where $X_{s}$ is the subset from $\{X_{s}\}_{s\in S}$ that, by the definition of asymptotic filtered colimit, contains all elements of $\mathcal{U}$ of cardinality greater than one.
Proposition 2.3.
The asymptotic filtered colimit of $\left\{(X_{s},\mathcal{LSS}_{s})\right\}_{s\in S}$ of $X$ (denoted $\mathcal{LSS}$) is indeed a large scale structure.
Proof.
Let $\mathcal{U}\in\mathcal{LSS}$ and suppose $\mathcal{W}$ is a family of subsets of $X$ such that every $W\in\mathcal{W}$ with $|W|>1$ satisfies $W\subseteq U$ for some $U\in\mathcal{U}$.
Since $\mathcal{U}\in\mathcal{LSS}$, $\exists s\in S$ and $\mathcal{V}\in\mathcal{LSS}_{s}$ so that $|U|>1$ implies there is a $V\in\mathcal{V}$ such that $U\subseteq V$.
If $|W|>1$ and $W\subseteq U$ along with $U\subseteq V$, then we have that $W\subseteq V$. Then by definition and choice of $s\in S$, we have $\mathcal{W}\in\mathcal{LSS}$.
Now suppose $\mathcal{U},\mathcal{V}\in\mathcal{LSS}$. Then $\exists r\in S$ and $\mathcal{F}\in\mathcal{LSS}_{r}$ so that for any $U\in\mathcal{U}$ with $|U|>1$, we have $\exists F\in\mathcal{F}$ such that $U\subseteq F$.
Also, $\exists s\in S$ and $\mathcal{G}\in\mathcal{LSS}_{s}$ so that for any $V\in\mathcal{V}$ with $|V|>1$, we have $\exists G\in\mathcal{G}$ such that $V\subseteq G$.
Select $t\in S$ such that $X_{r}\cup X_{s}\subseteq X_{t}$. We show that $\operatorname*{st}(\mathcal{U},\mathcal{V})\in\mathcal{LSS}$.
Define $\mathcal{U^{*}}=\mathcal{U}\setminus\left\{U\in\mathcal{U}~|~U=\{x\},~x\in X\setminus X_{r}\right\}$. Likewise, define $\mathcal{V^{*}}=\mathcal{V}\setminus\left\{V\in\mathcal{V}~|~V=\{x\},~x\in X\setminus X_{s}\right\}$.
Notice that $\mathcal{U^{*}}\in\mathcal{LSS}_{r}$ and $\mathcal{V^{*}}\in\mathcal{LSS}_{s}$. Since $X_{r}\subseteq X_{t}$ and $X_{s}\subseteq X_{t}$, and the restrictions of the large scale structures $\mathcal{LSS}_{r}$ and $\mathcal{LSS}_{t}$ (respectively $\mathcal{LSS}_{s}$ and $\mathcal{LSS}_{t}$) to the intersection $X_{r}\cap X_{t}=X_{r}$ (respectively $X_{s}\cap X_{t}=X_{s}$) coincide, we argue that $\mathcal{U^{*}}\in\mathcal{LSS}_{t}$ and $\mathcal{V^{*}}\in\mathcal{LSS}_{t}$, as follows.
Since the restriction of $\mathcal{LSS}_{t}$ to $X_{r}$ coincides with $\mathcal{LSS}_{r}$, and $\mathcal{U^{*}}\in\mathcal{LSS}_{r}$, there is a uniformly bounded family $\mathcal{U^{\prime}}\in\mathcal{LSS}_{t}$ such that $\mathcal{U^{\prime}}|_{X_{r}}=\mathcal{U^{*}}$, where $\mathcal{U^{\prime}}|_{X_{r}}:=\left\{U^{\prime}\cap X_{r}~|~U^{\prime}\in\mathcal{U^{\prime}}\right\}$. But this means that for every $U\in\mathcal{U^{*}}$ with $|U|>1$, there is a $U^{\prime}\in\mathcal{U^{\prime}}$ with $U\subseteq U^{\prime}$. Thus, $\mathcal{U^{*}}\in\mathcal{LSS}_{t}$, and similarly $\mathcal{V^{*}}\in\mathcal{LSS}_{t}$.
Since $\mathcal{U^{*}},\mathcal{V^{*}}\in\mathcal{LSS}_{t}$, we have that $\operatorname*{st}(\mathcal{U^{*}},\mathcal{V^{*}})\in\mathcal{LSS}_{t}$. Let $\mathcal{W}=\operatorname*{st}(\mathcal{U^{*}},\mathcal{V^{*}})\cup\left\{V\in\mathcal{V}~|~V=\{x\},~x\in X\right\}$. Then $\mathcal{W}\in\mathcal{LSS}$.
We show that for any $U\in\mathcal{U}$ with $|\operatorname*{st}(U,\mathcal{V})|>1$, there exists $W\in\mathcal{W}$ such that $\operatorname*{st}(U,\mathcal{V})\subseteq W$. This shows that $\operatorname*{st}(\mathcal{U},\mathcal{V})\in\mathcal{LSS}$.
If $|U|>1$, then $U\in\mathcal{U^{*}}$; since the elements of $\mathcal{V}\setminus\mathcal{V^{*}}$ are singletons, any of them meeting $U$ is contained in $U$, so $\operatorname*{st}(U,\mathcal{V})=\operatorname*{st}(U,\mathcal{V^{*}})\in\operatorname*{st}(\mathcal{U^{*}},\mathcal{V^{*}})\subseteq\mathcal{W}$.
If $|U|=1$, then since $|\operatorname*{st}(U,\mathcal{V})|>1$, there is a $V\in\mathcal{V}$ such that $|V|>1$ and $U\subseteq V$. This gives us that $\operatorname*{st}(U,\mathcal{V})\subseteq W$ for some $W\in\mathcal{W}$.
∎
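The star operation used in this proof can be experimented with on finite families. The following Python sketch is our own illustration (the names `star_set` and `star` are not from the paper); it assumes the usual convention $\operatorname*{st}(U,\mathcal{V})=U\cup\bigcup\{V\in\mathcal{V}~|~V\cap U\neq\varnothing\}$, applied memberwise to a family.

```python
def star_set(U, V_fam):
    """st(U, V): U together with every member of V_fam that meets U."""
    U = frozenset(U)
    out = set(U)
    for V in V_fam:
        if U & frozenset(V):  # V meets U
            out |= set(V)
    return frozenset(out)

def star(U_fam, V_fam):
    """st(U, V) as a family: one starred set for each U in U_fam."""
    return {star_set(U, V_fam) for U in U_fam}
```

For instance, `star([{1, 2}, {5, 6}], [{2, 3}, {6, 7}, {9}])` returns `{frozenset({1, 2, 3}), frozenset({5, 6, 7})}`: each member of the first family absorbs exactly the members of the second that it meets.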
Now that we have established that asymptotic filtered colimits are large scale structures, we provide a couple of examples:
Example 2.4.
Let $X$ be the group of all sequences with integer entries that converge to zero (equivalently, that are eventually zero); the operation is componentwise addition. One might make $X$ a metric space by using the metric $d((x_{1},x_{2},...),(y_{1},y_{2},...))=\sum\limits_{i=1}^{\infty}|x_{i}-y_{i}|$. One then obtains a large scale structure $\mathcal{LSS}$ for $X$ induced from the metric; by that we mean $\mathcal{U}\in\mathcal{LSS}$ if and only if $\sup\limits_{U\in\mathcal{U}}\mathrm{diam}(U)<\infty$. This large scale structure is an asymptotic filtered colimit in the following way: let $X_{s}=\mathbb{Z}^{s}\times\{0\}\times\{0\}\times\cdots$ and let $\mathcal{LSS}_{s}$ be induced from the metric $d((x_{1},...,x_{s},0,0,...),(y_{1},...,y_{s},0,0,...))=\sum\limits_{i=1}^{s}|x_{i}-y_{i}|$. Then $(X,\mathcal{LSS})$ is the asymptotic filtered colimit of $\{(X_{s},\mathcal{LSS}_{s})\}_{s\in\mathbb{N}}$.
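The metric of Example 2.4 is easy to compute on finite truncations. A minimal sketch (the helper names `d` and `diam` are ours; eventually-zero sequences are represented as finite tuples, implicitly padded with zeros):

```python
def d(x, y):
    """l^1 distance between two eventually-zero integer sequences,
    given as finite tuples implicitly padded with zeros."""
    n = max(len(x), len(y))
    x = x + (0,) * (n - len(x))
    y = y + (0,) * (n - len(y))
    return sum(abs(a - b) for a, b in zip(x, y))

def diam(U):
    """Diameter of a finite set U of such sequences."""
    return max((d(x, y) for x in U for y in U), default=0)
```

For example, `d((1, 2), (0, 0, 3))` is `6`, and a family is uniformly bounded exactly when `diam` is bounded over its members.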
Example 2.5.
Let $\{(M_{s},d^{M}_{s})\}_{s\in S}$ be a collection of metric spaces with $M_{s}\cap M_{t}=\varnothing$ for $s\neq t$. Then one may define an $\infty$-metric on $X=\bigcup\limits_{s\in S}M_{s}$ in the following way: $d(x,y)=d^{M}_{s}(x,y)$ if $x,y\in M_{s}$, and $d(x,y)=\infty$ if $x\in M_{s}$ and $y\in M_{t}$ with $s\neq t$. Let $\mathcal{LSS}$ be the large scale structure induced from the $\infty$-metric $d$. Then $(X,\mathcal{LSS})$ is an asymptotic filtered colimit of $\{(X_{F},\mathcal{LSS}_{F})\}_{F\in\mathcal{P}_{fin}(S)}$, where $\mathcal{P}_{fin}(S)$ is the collection of all finite subsets of $S$, $X_{F}=\bigcup\limits_{s\in F}M_{s}$, and $\mathcal{LSS}_{F}$ is the large scale structure induced from the $\infty$-metric $d_{F}(x,y)=d^{M}_{s}(x,y)$ if $x,y\in M_{s}$, $d_{F}(x,y)=\infty$ if $x\in M_{s}$, $y\in M_{t}$, $s\neq t$.
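The $\infty$-metric of Example 2.5 can be sketched directly; the following is our own illustration (the function `infinity_metric` and the two-copies-of-$\mathbb{Z}$ example are not from the paper):

```python
import math

def infinity_metric(component_of, metrics):
    """The infinity metric on a disjoint union of metric spaces:
    d restricts to each component's metric and is infinite across
    components.  component_of maps a point to its component label,
    and metrics maps a label to that component's metric."""
    def d(x, y):
        s, t = component_of(x), component_of(y)
        if s == t:
            return metrics[s](x, y)
        return math.inf
    return d

# Two disjoint copies of the integers, with points tagged by a label 0 or 1.
d = infinity_metric(lambda p: p[0],
                    {0: lambda x, y: abs(x[1] - y[1]),
                     1: lambda x, y: abs(x[1] - y[1])})
```

Here `d((0, 3), (0, 10))` is `7`, while `d((0, 3), (1, 3))` is `math.inf`: a family of finite-diameter sets is uniformly bounded only if each non-singleton member sits inside a single component.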
The second example is a useful one to keep in mind when dealing with the asymptotic filtered colimit $\mathcal{LSS}$ of $\{(X_{s},\mathcal{LSS}_{s})\}_{s\in S}$: points within the same $X_{s}$ behave according to $\mathcal{LSS}_{s}$, while two points, one in $X_{s}$ and one outside of $X_{s}$, may in certain circumstances be regarded as very far away with respect to $\mathcal{LSS}$. Asymptotic filtered colimits can also be formed by "building up" to $X$ from smaller $X_{s}$'s, as in Example 2.4.
The following proposition shows that large scale continuous functions of the asymptotic filtered colimit of $\{(X_{s},\mathcal{LSS}_{s})\}$ are precisely functions that are large scale continuous on every restriction to $(X_{s},\mathcal{LSS}_{s})$.
Proposition 2.6.
Suppose $X$ is a set, $\mathcal{LSS}_{X}$ is the asymptotic filtered colimit of subsets $\left\{(X_{s},\mathcal{LSS}_{s})\right\}_{s\in S}$ of $X$, and $f:X\to Y$ is a function to a large scale space $Y$. Then $f$ is bornologous if and only if $f|_{X_{s}}$ is bornologous for each $s\in S$.
Proof.
$\left(\Rightarrow\right):$ Let $s\in S$ and $\mathcal{U}_{s}\in\mathcal{LSS}_{s}$. Then notice that $\mathcal{U}_{s}\in\mathcal{LSS}_{X}$ which implies that $f(\mathcal{U}_{s})\in\mathcal{LSS}_{Y}$. Since $U_{s}\in\mathcal{U}_{s}$ gives us $U_{s}\subseteq X_{s}$, we have $f(\mathcal{U}_{s})=f|_{X_{s}}(\mathcal{U}_{s})$.
$\left(\Leftarrow\right):$ Let $\mathcal{U}\in\mathcal{LSS}_{X}$. Then there is an $s\in S$ and a $\mathcal{V}\in\mathcal{LSS}_{s}$ such that for any $U\in\mathcal{U}$ with $|U|>1$, there is a $V\in\mathcal{V}$ such that $U\subseteq V$.
Define $\mathcal{U^{*}}=\mathcal{U}\setminus\left\{U\in\mathcal{U}~|~U=\{x\},~x\in X\setminus X_{s}\right\}$. Then $\mathcal{U^{*}}\in\mathcal{LSS}_{s}$ and $f(\mathcal{U^{*}})=f|_{X_{s}}(\mathcal{U^{*}})$. So $f(\mathcal{U^{*}})\in\mathcal{LSS}_{Y}$.
We show that if $f(U)\in f(\mathcal{U})$ with $|f(U)|>1$, then $f(U)\in f(\mathcal{U^{*}})$. Indeed, $|f(U)|>1$ implies $|U|>1$ and hence $U\in\mathcal{U^{*}}$ which implies $f(U)\in f(\mathcal{U^{*}})$. So $f(\mathcal{U})\in\mathcal{LSS}_{Y}$.
∎
It turns out that slowly oscillating functions behave similarly to large scale continuous functions with respect to asymptotic filtered colimits. The following definitions are slight generalizations of the ones found in [3].
Definition 2.7.
Let $\left(X,\mathcal{LSS}\right)$ be given and let $\mathcal{U}\in\mathcal{LSS}$.
We say a $\mathcal{U}$-chain component of $X$ is an equivalence class of the following equivalence relation: $x\sim y$ if and only if there is a finite sequence $\left\{U_{i}\right\}_{i=1}^{n}\subseteq\mathcal{U}$ such that $U_{i}\cap U_{i+1}\neq\varnothing$ for every $i$, with $x\in U_{1}$ and $y\in U_{n}$.
A coarse chain component of $x\in X$ is the union of its $\mathcal{U}$-chain components, where $\mathcal{U}$ ranges over every uniformly bounded family of $\mathcal{LSS}$.
A subset $B\subseteq X$ is called weakly bounded if its intersection with each coarse chain component is contained in some $U\in\mathcal{U}$ for some $\mathcal{U}\in\mathcal{LSS}$.
Definition 2.8.
Let $f:X\to Y$, where $\left(X,\mathcal{LSS}\right)$ is a large scale space and $Y$ is a metric space. Then $f$ is slowly oscillating if $\forall\mathcal{U}\in\mathcal{LSS}$ and $\epsilon>0~\exists B\subseteq X$ weakly bounded such that for any $U\in\mathcal{U}$ with $U\not\subseteq B$ we have $\mathrm{diam}(f(U))<\epsilon$.
Proposition 2.9.
Let $X$ be a set and let $\mathcal{LSS}$ be the asymptotic filtered colimit of $\left\{\left(X_{s},\mathcal{LSS}_{s}\right)\right\}_{s\in S}$. Let $Y$ be a metric space and let $f:X\to Y$. Then $f$ is slowly oscillating if and only if $f|_{X_{s}}$ is slowly oscillating for all $s\in S$.
Proof.
$\left(\Rightarrow\right):$ Let $\mathcal{U}_{s}\in\mathcal{LSS}_{s}$ and $\epsilon>0$. Then there is a $B\subseteq X$ weakly bounded such that for any $U_{s}\in\mathcal{U}_{s}$ with $U_{s}\not\subseteq B$ we have $\mathrm{diam}(f(U_{s}))<\epsilon$.
But $U_{s}\subseteq X_{s}$ implies $f(U_{s})=f|_{X_{s}}(U_{s})$ and we are done with choice of weakly bounded subset $B\cap X_{s}$.
Indeed, suppose $U_{s}\in\mathcal{U}_{s}$ and $U_{s}\not\subseteq\left(B\cap X_{s}\right)$. Then since $U_{s}\subseteq X_{s}$, we have that $U_{s}\not\subseteq B$, which implies $\mathrm{diam}(f|_{X_{s}}(U_{s}))=\mathrm{diam}(f(U_{s}))<\epsilon$.
$\left(\Leftarrow\right):$ Let $\mathcal{U}\in\mathcal{LSS}$ and $\epsilon>0$. Then there is an $s\in S$ and $\mathcal{V}\in\mathcal{LSS}_{s}$ such that for any $U\in\mathcal{U}$ with $|U|>1$ we have $U\subseteq V$ for some $V\in\mathcal{V}$.
Define $\mathcal{U^{*}}$ to be $\mathcal{U}$ with all one-point sets outside of $X_{s}$ removed. Then $\mathcal{U^{*}}\in\mathcal{LSS}_{s}$, which implies there is a weakly bounded $B\subseteq X_{s}\subseteq X$ such that for any $U\in\mathcal{U^{*}}$ with $U\not\subseteq B$ we have $\mathrm{diam}(f(U))<\epsilon$.
Notice that for any $U\in\mathcal{U}\setminus\mathcal{U}^{*}$ with $U\not\subseteq B$, we have $\mathrm{diam}(f(U))=0<\epsilon$. Therefore, $B$ is a weakly bounded set such that for any $U\in\mathcal{U}$ with $U\not\subseteq B$ we have $\mathrm{diam}(f(U))<\epsilon$. So $f$ is slowly oscillating.
∎
We now showcase various coarse properties that are preserved by the asymptotic filtered colimit construction, beginning with metrizability of coarse spaces. For completeness, we remind the reader of the following from [2]; this statement is a combination of Proposition 1.6 and Theorem 1.8 of that paper:
Proposition 2.10.
Let $\mathcal{LSS}$ be a large scale structure on a set $X$ and suppose there exists a set $\mathcal{LSS}^{\prime}$ of families of subsets of $X$ such that for any $\mathcal{B}_{1},\mathcal{B}_{2}\in\mathcal{LSS}^{\prime}$ there exists $\mathcal{B}_{3}\in\mathcal{LSS}^{\prime}$ such that $\mathcal{B}_{1}\cup\mathcal{B}_{2}\cup\operatorname*{st}\left(\mathcal{B}_{1},\mathcal{B}_{2}\right)$ refines $\mathcal{B}_{3}$.
If the cardinality of $\mathcal{LSS}^{\prime}$ is countable, then $\mathcal{LSS}$ is metrizable as a coarse space.
Proof.
See [2].
∎
Proposition 2.11.
Let $\left(X,\mathcal{LSS}\right)$ be an asymptotic filtered colimit of $\left\{\left(X_{s},\mathcal{LSS}_{s}\right)\right\}_{s\in S}$ and suppose that for every $s\in S$, $X_{s}$ is metrizable as a coarse space. If $S$ is countable, then $X$ is metrizable as a coarse space.
Proof.
By Proposition 2.10, for every $s\in S$ there is an $\mathcal{LSS^{\prime}}_{s}$ such that $|\mathcal{LSS^{\prime}}_{s}|$ is countable and $\forall\mathcal{B}_{1}^{s},\mathcal{B}_{2}^{s}\in\mathcal{LSS^{\prime}}_{s}~\exists\mathcal{B}_{3}^{s}\in\mathcal{LSS^{\prime}}_{s}$ such that $\mathcal{B}_{1}^{s}\cup\mathcal{B}_{2}^{s}\cup\operatorname*{st}(\mathcal{B}_{1}^{s},\mathcal{B}_{2}^{s})$ is a refinement of $\mathcal{B}_{3}^{s}$.
Let $\mathcal{LSS}^{\prime}=\bigcup\limits_{s\in S}\mathcal{LSS^{\prime}}_{s}$. Then $|\mathcal{LSS}^{\prime}|$ is countable, since a countable union of countable sets is countable.
Let $\mathcal{A^{\prime}}_{s},\mathcal{B^{\prime}}_{r}\in\mathcal{LSS^{\prime}}$. Then note that there is a $t\in S$ such that $X_{r}\cup X_{s}\subseteq X_{t}$ and $\mathcal{A^{\prime}}_{s},\mathcal{B^{\prime}}_{r}\in\mathcal{LSS^{\prime}}_{t}$, which implies there is a $\mathcal{W^{\prime}}_{t}\in\mathcal{LSS^{\prime}}_{t}$ such that $\mathcal{A^{\prime}}_{s}\cup\mathcal{B^{\prime}}_{r}\cup\operatorname*{st}(\mathcal{A^{\prime}}_{s},\mathcal{B^{\prime}}_{r})$ refines $\mathcal{W^{\prime}}_{t}$.
Since $\mathcal{W^{\prime}}_{t}\in\mathcal{LSS^{\prime}}$, we have by Proposition 2.10 that $X$ is metrizable as a coarse space.
∎
We use the following definition of asymptotic dimension from [2]:
Definition 2.12.
Let $\left(X,\mathcal{LSS}\right)$ be a large scale structure. We say $\left(X,\mathcal{LSS}\right)$ has asymptotic dimension at most $n$ if for every uniformly bounded family $\mathcal{U}$ in $X$ there is a uniformly bounded coarsening $\mathcal{V}$ such that the multiplicity of $\mathcal{V}$ is at most $n+1$ (i.e., each point $x\in X$ is contained in at most $n+1$ elements of $\mathcal{V}$).
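The two ingredients of Definition 2.12, multiplicity and coarsening, are easily checked on finite families. A minimal sketch (the helper names `multiplicity` and `is_coarsening` are ours, not from the paper):

```python
def multiplicity(V):
    """The largest number of members of the family V containing a single point."""
    points = set().union(*(set(A) for A in V))
    return max((sum(1 for A in V if x in A) for x in points), default=0)

def is_coarsening(V, U):
    """True if every member of U is contained in some member of V."""
    return all(any(set(U0) <= set(V0) for V0 in V) for U0 in U)
```

For example, `multiplicity([{1, 2}, {2, 3}, {4}])` is `2` (the point `2` lies in two members), and `[{1, 2, 3}, {4, 5}]` is a coarsening of `[{1, 2}, {3}, {4}]`.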
Proposition 2.13.
Suppose $X$ is a set and $\mathcal{LSS}$ is the asymptotic filtered colimit of subsets $\left\{(X_{s},\mathcal{LSS}_{s})\right\}_{s\in S}$ of $X$. The asymptotic dimension of $X$ is at most $n$ if and only if the asymptotic dimension of every $(X_{s},\mathcal{LSS}_{s})$ is at most $n$.
Proof.
$\left(\Rightarrow\right):$ Let $\mathcal{U}_{s}\in\mathcal{LSS}_{s}$.
Then we have that $\mathcal{U}_{s}\in\mathcal{LSS}$, and hence $\mathcal{U}_{s}$ has a coarsening $\mathcal{V}$ with multiplicity at most $n+1$. The desired coarsening is $\mathcal{V^{\prime}}=\left\{V\cap X_{s}~|~V\in\mathcal{V}\right\}$.
$\left(\Leftarrow\right):$ Let $\mathcal{U}\in\mathcal{LSS}$.
Then there is an $s\in S$ and a $\mathcal{V}\in\mathcal{LSS}_{s}$ such that for any $U\in\mathcal{U}$ with $|U|>1$, there is a $V\in\mathcal{V}$ such that $U\subseteq V$. Define $\mathcal{U^{*}}$ as before.
Then $\mathcal{U^{*}}\in\mathcal{LSS}_{s}$ and hence there is a coarsening $\mathcal{W}\in\mathcal{LSS}_{s}$ with multiplicity at most $n+1$.
Then the family $\mathcal{W}\cup\left\{U\in\mathcal{U}~|~U=\{x\},~x\in X\setminus X_{s}\right\}$ is the desired coarsening of $\mathcal{U}$ with multiplicity at most $n+1$.
∎
Given how nicely asymptotic filtered colimits preserve finite asymptotic dimension, one might wonder if the asymptotic filtered colimit construction preserves asymptotic property C. It is not currently known if this is the case. Below is a definition of asymptotic property C that agrees with the more commonly seen definition for metric spaces. We note here that this generalized definition is preserved under subspaces and is also a coarse invariant:
Definition 2.14.
Let $(X,\mathcal{LSS})$ be given. We say that $(X,\mathcal{LSS})$ has asymptotic property C, or APC, if for any sequence of uniformly bounded families $\mathcal{U}_{1}\prec\mathcal{U}_{2}\prec...$ there is a natural number $n$ and $\mathcal{V}_{1},...,\mathcal{V}_{n}\in\mathcal{LSS}$ such that $\bigcup\limits_{i=1}^{n}\mathcal{V}_{i}$ covers $X$ and, for all $j$ with $1\leq j\leq n$ and all $V,V^{\prime}\in\mathcal{V}_{j}$ with $V\neq V^{\prime}$, we have $\operatorname*{st}(V,\mathcal{U}_{j})\cap V^{\prime}=\varnothing$.
From the remarks, we get a simple corollary.
Corollary 2.15.
Let $X$ be a set and let $\mathcal{LSS}$ be the asymptotic filtered colimit of $\left\{\left(X_{s},\mathcal{LSS}_{s}\right)\right\}_{s\in S}$. If $\mathcal{LSS}$ has APC, then $\left(X_{s},\mathcal{LSS}_{s}\right)$ has APC for any $s\in S$.
Question 2.16.
Let $X$ be a set and let $\mathcal{LSS}$ be the asymptotic filtered colimit of $\left\{\left(X_{s},\mathcal{LSS}_{s}\right)\right\}_{s\in S}$. If for all $s\in S,~{}(X_{s},\mathcal{LSS}_{s})$ has APC, then does $(X,\mathcal{LSS})$ have APC?
Question 2.17.
Suppose $X$ is a set and $\mathcal{LSS}$ is the asymptotic filtered colimit of subsets $\left\{(X_{s},\mathcal{LSS}_{s})\right\}_{s\in S}$ of $X$ and that the asymptotic dimension of each $(X_{s},\mathcal{LSS}_{s})$ is finite. Does $(X,\mathcal{LSS})$ have asymptotic property C?
We’ll now show that exactness is preserved by the asymptotic filtered colimit construction. We remind the reader of the following definitions. The following is adapted from [4]:
Definition 2.18.
Let $X$ be a set. We say $\left(f_{i}\right)_{i\in I}$ is a partition of unity of $X$ if $f_{i}:X\to[0,\infty)$ for all $i$ and for all $x\in X$, $\sum\limits_{i\in I}f_{i}(x)=1$.
The following definition is adapted from [4]:
Definition 2.19.
Let $(X,\mathcal{LSS})$ be a large scale structure. $(X,\mathcal{LSS})$ is exact if for every $\mathcal{U}\in\mathcal{LSS}$ and $\epsilon>0$ there exists a partition of unity $(f_{i})_{i\in I}$ of $X$ so that
the cover of $X$, $\mathcal{V}=\left\{\mathrm{support}(f_{i})~|~i\in I\right\}$, is uniformly bounded and that if (for $U\in\mathcal{U}$) $x,y\in U$, then $\sum\limits_{i\in I}|f_{i}(x)-f_{i}(y)|<\epsilon$.
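On $\mathbb{Z}$, a standard family of overlapping triangle bumps at a scale $R$ gives a partition of unity of the kind Definition 2.19 asks for: the variation $\sum_{i}|f_{i}(x)-f_{i}(y)|$ over a set of diameter $r$ is of order $r/R$. The following sketch is our own illustration, not a construction from the paper:

```python
def tent(i, R):
    """Triangle bump of height 1 centered at i*R, supported on (i*R - R, i*R + R)."""
    return lambda x: max(0.0, 1.0 - abs(x - i * R) / R)

def pou_sum(x, R, window=5):
    """sum_i f_i(x) over the finitely many indices i whose bumps can reach x."""
    return sum(tent(i, R)(x) for i in range(x // R - window, x // R + window + 1))

def variation(x, y, R, window=5):
    """sum_i |f_i(x) - f_i(y)|, the quantity bounded in Definition 2.19."""
    lo, hi = min(x, y) // R - window, max(x, y) // R + window
    return sum(abs(tent(i, R)(x) - tent(i, R)(y)) for i in range(lo, hi + 1))
```

Here `pou_sum(x, R)` is always `1.0`, so the tents form a partition of unity, and enlarging `R` shrinks the variation for points at a fixed distance, which is how one meets a given $\epsilon$ for a given uniformly bounded family.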
Proposition 2.20.
Suppose $X$ is a set and $\mathcal{LSS}$ is the asymptotic filtered colimit of subsets $\left\{(X_{s},\mathcal{LSS}_{s})\right\}_{s\in S}$ of $X$. Then $(X,\mathcal{LSS})$ is exact if and only if for each $s\in S$, $(X_{s},\mathcal{LSS}_{s})$ is exact.
Proof.
$\left(\Rightarrow\right):$ Let $\mathcal{U}_{s}\in\mathcal{LSS}_{s}$ and $\epsilon>0$. Note that for any $s\in S$ $\mathcal{LSS}_{s}\subseteq\mathcal{LSS}$. Then we have $\mathcal{U}_{s}\in\mathcal{LSS}$; since $(X,\mathcal{LSS})$ is exact, we can find the desired partition of unity of $X$. Restrict this partition of unity of $X$ to a partition of unity of $X_{s}$. This shows that $(X_{s},\mathcal{LSS}_{s})$ is exact.
$\left(\Leftarrow\right):$ Let $\mathcal{U}\in\mathcal{LSS}$ and $\epsilon>0$. Then there exist $s\in S$ and $\mathcal{V}\in\mathcal{LSS}_{s}$ such that for every $U\in\mathcal{U}$ with $|U|>1$ there exists a $V\in\mathcal{V}$ with $U\subseteq V$.
Let $\mathcal{U^{*}}$ be as in the previous proofs. Then $\mathcal{U^{*}}\in\mathcal{LSS}_{s}$, which means there is a partition of unity $\left(f_{i}\right)_{i\in I}$ of $X_{s}$ such that the family $\left\{\mathrm{support}(f_{i})~|~i\in I\right\}$ is uniformly bounded and if $U\in\mathcal{U^{*}}$ and $x,y\in U$, then $\sum\limits_{i\in I}|f_{i}(x)-f_{i}(y)|<\epsilon$.
For any point $j\in X\setminus X_{s}$, define $f_{j}:X\to[0,\infty)$ by $f_{j}(j)=1$ and $f_{j}=0$ elsewhere. Also, for any $i\in I$, extend $f_{i}:X_{s}\to[0,\infty)$ to $X$ by setting $f_{i}(j)=0$ for any $j\in X\setminus X_{s}$. Let the set $J$ index the various $f_{j}$'s and let $K=I\cup J$.
We claim that $\left(f_{k}\right)_{k\in K}$ is the desired partition of unity of $X$. Indeed, notice that aside from a collection of one-point sets (namely $\mathrm{support}(f_{j})$ for $j\in J$), the family $\left\{\mathrm{support}(f_{k})~|~k\in K\right\}$ coincides with $\left\{\mathrm{support}(f_{i})~|~i\in I\right\}\in\mathcal{LSS}_{s}\subseteq\mathcal{LSS}$.
Now let $U\in\mathcal{U}$. If $|U|=1$, then $x,y\in U$ implies $x=y$, and thus $\sum\limits_{k\in K}|f_{k}(x)-f_{k}(y)|=0<\epsilon$. If $|U|>1$, then $U\subseteq X_{s}$, and since $\left(f_{i}\right)_{i\in I}$ is a partition of unity for $X_{s}$ and $f_{j}|_{U}\equiv 0$ for every $j\in J$, we have that $x,y\in U$ implies $\sum\limits_{k\in K}|f_{k}(x)-f_{k}(y)|=\sum\limits_{i\in I}|f_{i}(x)-f_{i}(y)|<\epsilon$. Finally, we show that for every $x\in X$, $\sum\limits_{k\in K}f_{k}(x)=1$. Suppose $x\in X_{s}$. Then for any $j\in J$, $f_{j}(x)=0$, and since $\left(f_{i}\right)_{i\in I}$ forms a partition of unity for $X_{s}$, we have $\sum\limits_{k\in K}f_{k}(x)=\sum\limits_{i\in I}f_{i}(x)=1$. If $x\in X\setminus X_{s}$, then for any $i\in I$, $f_{i}(x)=0$, and there is a unique $j\in J$ with $f_{j}(x)=1$, so $\sum\limits_{k\in K}f_{k}(x)=1$.
∎
We will now show that coarse embeddability into separable Hilbert spaces is preserved by the asymptotic filtered colimit construction. The notion of coarse embeddability was introduced in [9]. Recall that for any two separable Hilbert spaces $G$ and $H$, there is an isometric isomorphism between the two. We will also use some pinch space theory. The following definition and theorem are adapted from [5]:
Definition 2.21.
Let $(X,\mathcal{LSS})$ be a large scale space, $K$ a metric space, and $c>0$. We say $(X,\mathcal{LSS})$ $c$-pinch-spaces to $K$ if for every $\mathcal{U}\in\mathcal{LSS}$ and $\epsilon>0$ there are a $\mathcal{V}\in\mathcal{LSS}$ and a function $f:X\to K$ such that $\sup\limits_{U\in\mathcal{U}}\mathrm{diam}(f(U))<\epsilon$ and, for every $x,y\in X$ with $\{x,y\}\not\subseteq V$ for every $V\in\mathcal{V}$, we have $d_{K}(f(x),f(y))\geq c$.
Theorem 2.22.
If $X$ is a metric space, then $X$ coarsely embeds into a Hilbert space if and only if $X$ $c$-pinch-spaces to a Hilbert space for some $c>0$.
Theorem 2.23.
Let $S$ be a countable index set and let $H$ be a fixed separable Hilbert space. Let $\left(X,\mathcal{LSS}\right)$ be the asymptotic filtered colimit of $\left\{\left(X_{s},\mathcal{LSS}_{s}\right)\right\}_{s\in S}$ with every $X_{s}$ countable. Then $\left(X,\mathcal{LSS}\right)$ coarsely embeds into $H$ if and only if $\left(X_{s},\mathcal{LSS}_{s}\right)$ coarsely embeds into $H$ for all $s\in S$.
Proof.
$\left(\Rightarrow\right):$ This follows via restriction of the embedding function $f:X\to H$ to any $X_{s}$.
$\left(\Leftarrow\right):$ Note that $\bigoplus\limits_{s\in S}H\cong H$ since $S$ is countable. Likewise, $H\oplus H\cong H$. We show $\left(X,\mathcal{LSS}\right)$ 1-pinch-spaces to $H$.
Let $\mathcal{U}\in\mathcal{LSS}$ and $\epsilon>0$. Then by definition of $\mathcal{LSS}$ there is an $s\in S$ such that $\mathcal{U}^{*}\in\mathcal{LSS}_{s}$, where $\mathcal{U}^{*}$ is $\mathcal{U}$ with all one-point sets outside of $X_{s}$ removed.
Since $\left(X_{s},\mathcal{LSS}_{s}\right)$ 1-pinch-spaces to $H$ (by Theorem 2.22 it $c$-pinch-spaces to $H$ for some $c>0$, and rescaling the function gives $c=1$), there exist $f_{\epsilon,s}^{\mathcal{U}^{*}}:X_{s}\to H$ and $\mathcal{W}_{s}\in\mathcal{LSS}_{s}$ such that $\sup\limits_{U\in\mathcal{U}^{*}}\mathrm{diam}({f_{\epsilon,s}^{\mathcal{U}^{*}}(U)})<\epsilon$ and, for any $x,y\in X_{s}$ with $\{x,y\}\not\subseteq W$ for every $W\in\mathcal{W}_{s}$, we have $\|f_{\epsilon,s}^{\mathcal{U}^{*}}(x)-f_{\epsilon,s}^{\mathcal{U}^{*}}(y)\|\geq 1$ (the norm is in $H$).
Now, since $X=\bigcup\limits_{s\in S}X_{s}$ and $X_{s}$ is countable for every $s$, we may index an orthonormal basis of $H$ via $\left\{e_{x}\right\}_{x\in X}$.
Furthermore, define $f_{\epsilon}^{\mathcal{U}}:X\to H\oplus H$ via $f_{\epsilon}^{\mathcal{U}}(x)=\left(f_{\epsilon,s}^{\mathcal{U}^{*}}(x),0\right)$ for any $x\in X_{s}$ and $f_{\epsilon}^{\mathcal{U}}(x)=\left(0,e_{x}\right)$ for any $x$ not in $X_{s}$.
Define $\mathcal{V}\in\mathcal{LSS}$ as $\mathcal{V}=\mathcal{W}_{s}\cup\left\{\{x\}\right\}_{x\in X}$. We will show that $\left(f_{\epsilon}^{\mathcal{U}},\mathcal{V}\right)$ satisfies the 1-pinch-space conditions.
Note that $\sup\limits_{U\in\mathcal{U}}\mathrm{diam}(f_{\epsilon}^{\mathcal{U}}(U))=\sup\limits_{U\in\mathcal{U}^{*}}\mathrm{diam}(f_{\epsilon,s}^{\mathcal{U}^{*}}(U))<\epsilon$: if $|U|>1$, then $U\subseteq X_{s}$ and $U\in\mathcal{U}^{*}$, which implies $\mathrm{diam}(f_{\epsilon}^{\mathcal{U}}(U))=\mathrm{diam}(f_{\epsilon,s}^{\mathcal{U}^{*}}(U))<\epsilon$; if $|U|=1$, then $\mathrm{diam}(f_{\epsilon}^{\mathcal{U}}(U))=0<\epsilon$. Hence, $\sup\limits_{U\in\mathcal{U}}\mathrm{diam}(f_{\epsilon}^{\mathcal{U}}(U))<\epsilon$.
Now, let $x,y\in X$ so that $\left\{x,y\right\}\not\subseteq V$ for every $V\in\mathcal{V}$. We have three cases:
Suppose $\left\{x,y\right\}\subseteq X\setminus X_{s}$. Then $f_{\epsilon}^{\mathcal{U}}(x)=(0,e_{x})$ and $f_{\epsilon}^{\mathcal{U}}(y)=(0,e_{y})$. Then we have that $\|(0,e_{x})-(0,e_{y})\|_{H\oplus H}=\sqrt{\|0\|_{H}^{2}+\|e_{x}-e_{y}\|_{H}^{2}}=\sqrt{2}>1$.
Suppose $\left\{x,y\right\}\subseteq X_{s}$. Then $\left\{x,y\right\}\not\subseteq V$ for every $V\in\mathcal{V}$ implies that $\left\{x,y\right\}\not\subseteq W$ for every $W\in\mathcal{W}_{s}$.
Then we have that $\|f_{\epsilon}^{\mathcal{U}}(x)-f_{\epsilon}^{\mathcal{U}}(y)\|_{H\oplus H}=\sqrt{\|f_{\epsilon,s}^{\mathcal{U}^{*}}(x)-f_{\epsilon,s}^{\mathcal{U}^{*}}(y)\|_{H}^{2}+\|0\|_{H}^{2}}\geq 1$ by the assumption that $\left(X_{s},\mathcal{LSS}_{s}\right)$ 1-pinch-spaces to $H$.
Suppose that $x\in X_{s}$ and $y\in X\setminus X_{s}$. Then $f_{\epsilon}^{\mathcal{U}}(x)=(f_{\epsilon,s}^{\mathcal{U}^{*}}(x),0)$ and $f_{\epsilon}^{\mathcal{U}}(y)=(0,e_{y})$.
Then we have that $\|f_{\epsilon}^{\mathcal{U}}(x)-f_{\epsilon}^{\mathcal{U}}(y)\|_{H\oplus H}=\|(f_{\epsilon,s}^{\mathcal{U}^{*}}(x),0)-(0,e_{y})\|_{H\oplus H}=\sqrt{\|f_{\epsilon,s}^{\mathcal{U}^{*}}(x)\|_{H}^{2}+\|e_{y}\|_{H}^{2}}\geq 1$, since $\|e_{y}\|_{H}=1$.
So in all cases, $\|f_{\epsilon}^{\mathcal{U}}(x)-f_{\epsilon}^{\mathcal{U}}(y)\|_{H\oplus H}\geq 1$. Defining $h:X\to H$ to be the composition of $f_{\epsilon}^{\mathcal{U}}$ with the isometric isomorphism from $H\oplus H$ to $H$, we see that $\left(h,\mathcal{V}\right)$ witnesses that $X$ 1-pinch-spaces to $H$, and hence $X$ coarsely embeds into $H$.
∎
We will now show that coarse amenability is preserved through the asymptotic filtered colimit construction. This definition of coarse amenability is given in [1].
Definition 2.24.
Let $X$ be a set, $A\subseteq X$, and $\mathcal{U}$ a family of subsets of $X$. Then the horizon of $A$ against $\mathcal{U}$, denoted $hor(A,\mathcal{U})$, is the set $\left\{U\in\mathcal{U}~|~A\cap U\neq\varnothing\right\}$.
Here are some useful properties of the horizon that we will use:
Lemma 2.25.
Let $X$ be a set, $A,B\subseteq X$, and $\mathcal{U},\mathcal{V}$ be families of subsets of $X$. Then:
(1)
$A\subseteq B\Rightarrow hor(A,\mathcal{U})\subseteq hor(B,\mathcal{U})$
(2)
$\mathcal{U}\prec\mathcal{V}\Rightarrow hor(A,\mathcal{U})\subseteq hor(A,\mathcal{V})$
(3)
$A\subseteq B$ and $\mathcal{U}\prec\mathcal{V}\Rightarrow hor(A,\mathcal{U})\subseteq hor(B,\mathcal{V})$.
Proof.
Let $U\in hor(A,\mathcal{U})$. Then $\varnothing\neq U\cap A\subseteq U\cap B$ which implies that $B\cap U\neq\varnothing$. So $U\in hor(B,\mathcal{U})$.
For the second item, let $U\in hor(A,\mathcal{U})$. Then $\varnothing\neq U\cap A$. But $U\in\mathcal{U}\prec\mathcal{V}$ implies $U\in hor(A,\mathcal{V})$.
The last statement is a combination of the first two.
∎
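The horizon is directly computable for finite families, which makes Lemma 2.25 easy to spot-check. A minimal sketch (the name `hor` mirrors the paper's notation; the examples are ours):

```python
def hor(A, fam):
    """Horizon of A against fam: the members of fam that meet A."""
    A = set(A)
    return [F for F in fam if A & set(F)]
```

For instance, with `fam = [{1, 2}, {5, 6}, {7}]`, `hor({1}, fam)` is `[{1, 2}]` while `hor({1, 5}, fam)` is `[{1, 2}, {5, 6}]`, illustrating the monotonicity in item (1).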
Definition 2.26.
Let $\left(X,\mathcal{LSS}\right)$ be a large scale structure. Then $\left(X,\mathcal{LSS}\right)$ is coarsely amenable if for every $\mathcal{U}\in\mathcal{LSS}$ and $\epsilon>0$, there exists $\mathcal{V}\in\mathcal{LSS}$ so that for any $x\in\bigcup\limits_{U\in\mathcal{U}}U$, $|hor(\operatorname*{st}(\{x\},\mathcal{U}),\mathcal{V})|<\infty$ and
$$\frac{|hor(\{x\},\mathcal{V})|}{|hor(\operatorname*{st}(\{x\},\mathcal{U}),\mathcal{V})|}>1-\epsilon\,.$$
For simplicity, we denote $hor(\{x\},\mathcal{V})$ as $hor(x,\mathcal{V})$ and $hor(\operatorname*{st}(\{x\},\mathcal{U}),\mathcal{V})$ as $hor(\operatorname*{st}(x,\mathcal{U}),\mathcal{V})$.
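The ratio in Definition 2.26 can be computed on finite data. The following sketch is our own illustration (the names `star_point` and `amenability_ratio` are not from the paper; the horizon helper is redefined locally so the snippet is self-contained):

```python
def hor(A, fam):
    """Members of fam that meet the set A."""
    A = set(A)
    return [F for F in fam if A & set(F)]

def star_point(x, fam):
    """st({x}, fam): x together with every member of fam containing x."""
    out = {x}
    for F in fam:
        if x in F:
            out |= set(F)
    return out

def amenability_ratio(x, U_fam, V_fam):
    """|hor({x}, V)| / |hor(st({x}, U), V)| as in Definition 2.26."""
    return len(hor({x}, V_fam)) / len(hor(star_point(x, U_fam), V_fam))
```

Since $\{x\}\subseteq\operatorname*{st}(\{x\},\mathcal{U})$, the ratio never exceeds $1$; coarse amenability asks that a suitable $\mathcal{V}$ push it above $1-\epsilon$ at every relevant point. For example, `amenability_ratio(1, [{1, 2}], [{1}, {2}, {3}])` is `0.5`, because the star $\{1,2\}$ meets two members of the second family while the point alone meets one.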
Theorem 2.27.
Suppose $S$ is an index set and $\left(X,\mathcal{LSS}\right)$ is the asymptotic filtered colimit of $\left\{\left(X_{s},\mathcal{LSS}_{s}\right)\right\}_{s\in S}$. Then $\left(X,\mathcal{LSS}\right)$ is coarsely amenable if and only if $\left(X_{s},\mathcal{LSS}_{s}\right)$ is coarsely amenable for every $s\in S$.
Proof.
$\left(\Rightarrow\right):$ It is shown in [1] that coarse amenability is preserved by taking subspaces.
$\left(\Leftarrow\right):$ Let $\mathcal{U}\in\mathcal{LSS}$ and $\epsilon>0$. Then for some $s\in S$, $\mathcal{U}^{*}\in\mathcal{LSS}_{s}$, where $\mathcal{U}^{*}$ is $\mathcal{U}$ with all one-point sets outside of $X_{s}$ removed.
As $\mathcal{LSS}_{s}$ is coarsely amenable, there is a $\mathcal{V}^{*}\in\mathcal{LSS}_{s}$ so that for any $x\in\bigcup\limits_{U\in\mathcal{U}^{*}}U$, $|hor(\operatorname*{st}(x,\mathcal{U}^{*}),\mathcal{V}^{*})|<\infty$ and $\frac{|hor(x,\mathcal{V}^{*})|}{|hor(\operatorname*{st}(x,\mathcal{U}^{*}),\mathcal{V}^{*})|}>1-\epsilon$.
Define $\mathcal{V}=\mathcal{V}^{*}\cup\left(\mathcal{U}\setminus\mathcal{U}^{*}\right)$. Then $\mathcal{V}\in\mathcal{LSS}$. Note that by construction, $\mathcal{V}\setminus\mathcal{V}^{*}=\mathcal{U}\setminus\mathcal{U}^{*}$. We now show that for any $x\in\bigcup\limits_{U\in\mathcal{U}}U$,
$hor(\operatorname*{st}(x,\mathcal{U}),\mathcal{V})=hor(\operatorname*{st}(x,\mathcal{U}^{*}),\mathcal{V}^{*})\cup hor(\operatorname*{st}(x,\mathcal{U}\setminus\mathcal{U}^{*}),\mathcal{V}\setminus\mathcal{V}^{*})$. Furthermore, we will show that $hor(\operatorname*{st}(x,\mathcal{U}^{*}),\mathcal{V}^{*})\cap hor(\operatorname*{st}(x,\mathcal{U}\setminus\mathcal{U}^{*}),\mathcal{V}\setminus\mathcal{V}^{*})=\varnothing$.
$\left(\subseteq\right):$ Let $V\in hor(\operatorname*{st}(x,\mathcal{U}),\mathcal{V})$. Either $V\in\mathcal{V}^{*}$ or $V\in\mathcal{V}\setminus\mathcal{V}^{*}$. Suppose $V\in\mathcal{V}^{*}$. Then there is a $U\in\mathcal{U}$ with $x\in U$ and $U\cap V\neq\varnothing$. We will show that $U\in\mathcal{U}^{*}$.
Suppose not, for contradiction. Then $U\subseteq\left(X\setminus X_{s}\right)$ and $|U|=1$. Hence $U=\left\{x\right\}$ and, as $U\cap V\neq\varnothing$, $U\subseteq V$. Thus $x\in V$, so $V\not\subseteq X_{s}$, which implies $V\not\in\mathcal{V}^{*}$, a contradiction. So we must have $U\in\mathcal{U}^{*}$, hence $V\in hor(\operatorname*{st}(x,\mathcal{U}^{*}),\mathcal{V}^{*})$.
Now, if $V\not\in\mathcal{V}^{*}$, then there is a $U\in\mathcal{U}$ with $x\in U$ and $U\cap V\neq\varnothing$. As $V\not\in\mathcal{V}^{*}$, we have $|V|=1$, which means $V\subseteq U$.
As $V\not\subseteq X_{s}$, we have $U\not\subseteq X_{s}$, which implies (by definition of $\mathcal{LSS}$) $|U|=1$. So $U=V=\left\{x\right\}$ and $U\in\mathcal{U}\setminus\mathcal{U}^{*}$.
Therefore, $x\in U$ implies $U\subseteq\operatorname*{st}(x,\mathcal{U}\setminus\mathcal{U}^{*})$, which implies $V\in hor(\operatorname*{st}(x,\mathcal{U}\setminus\mathcal{U}^{*}),\mathcal{V}\setminus\mathcal{V}^{*})$.
$\left(\supseteq\right)$: This follows via two applications of the previous lemma.
We now show that $hor(\operatorname*{st}(x,\mathcal{U}^{*}),\mathcal{V}^{*})\cap hor(\operatorname*{st}(x,\mathcal{U}\setminus\mathcal{U}^{*}),\mathcal{V}\setminus\mathcal{V}^{*})=\varnothing$.
Note that $hor(\operatorname*{st}(x,\mathcal{U}\setminus\mathcal{U}^{*}),\mathcal{V}\setminus\mathcal{V}^{*})=\{\{x\}\}$ or is the empty set, since $\operatorname*{st}(x,\mathcal{U}\setminus\mathcal{U}^{*})=\{x\}$ or is the empty set. If it is the singleton $\{\{x\}\}$, then $x\not\in X_{s}$, which implies $\operatorname*{st}(x,\mathcal{U}^{*})=\varnothing$, which means $hor(\operatorname*{st}(x,\mathcal{U}^{*}),\mathcal{V}^{*})\cap hor(\operatorname*{st}(x,\mathcal{U}\setminus\mathcal{U}^{*}),\mathcal{V}\setminus\mathcal{V}^{*})=\varnothing$, as desired.
Since $hor(x,\mathcal{V})=hor(x,\mathcal{V}^{*})\cup hor(x,\mathcal{V}\setminus\mathcal{V}^{*})$ (and the union is disjoint) and, by the previous lemma, $hor(x,\mathcal{V}\setminus\mathcal{V}^{*})=hor(\operatorname*{st}(x,\mathcal{U}\setminus\mathcal{U}^{*}),\mathcal{V}\setminus\mathcal{V}^{*})$, we therefore have that:
$$\frac{|hor(x,\mathcal{V})|}{|hor(\operatorname*{st}(x,\mathcal{U}),\mathcal{V})|}=\frac{|hor(x,\mathcal{V}^{*})|+|hor(x,\mathcal{V}\setminus\mathcal{V}^{*})|}{|hor(\operatorname*{st}(x,\mathcal{U}^{*}),\mathcal{V}^{*})|+|hor(\operatorname*{st}(x,\mathcal{U}\setminus\mathcal{U}^{*}),\mathcal{V}\setminus\mathcal{V}^{*})|}=\frac{|hor(x,\mathcal{V}^{*})|+|hor(x,\mathcal{V}\setminus\mathcal{V}^{*})|}{|hor(\operatorname*{st}(x,\mathcal{U}^{*}),\mathcal{V}^{*})|+|hor(x,\mathcal{V}\setminus\mathcal{V}^{*})|}.$$
If we can show the fraction above is greater than $1-\epsilon$ for any $x\in\bigcup\limits_{U\in\mathcal{U}}U$, then we’re done. Let $x\in\bigcup\limits_{U\in\mathcal{U}}U$. Then $x\in\bigcup\limits_{U\in\mathcal{U}^{*}}U$ or $x\in\bigcup\limits_{U\in\mathcal{U}\setminus\mathcal{U}^{*}}U$.
If $x\in\bigcup\limits_{U\in\mathcal{U}\setminus\mathcal{U}^{*}}U$, then $x\in X\setminus X_{s}$ and, for some $U\in\mathcal{U}$, $U=\{x\}$. Thus, $|hor(x,\mathcal{V}\setminus\mathcal{V}^{*})|=1$ and $|hor(\operatorname*{st}(x,\mathcal{U}^{*}),\mathcal{V}^{*})|=0$ (as $x\not\in X_{s}$), so $|hor(\operatorname*{st}(x,\mathcal{U}),\mathcal{V})|=1<\infty$ and, for any $\epsilon\in(0,1)$, $\frac{|hor(x,\mathcal{V})|}{|hor(\operatorname*{st}(x,\mathcal{U}),\mathcal{V})|}=1>1-\epsilon$.
If $x\in\bigcup\limits_{U\in\mathcal{U}^{*}}U$, then we have that $x\in X_{s}$, which implies that $|hor(x,\mathcal{V}\setminus\mathcal{V}^{*})|=0$ and hence $\frac{|hor(x,\mathcal{V})|}{|hor(\operatorname*{st}(x,\mathcal{U}),\mathcal{V})|}=\frac{|hor(x,\mathcal{V}^{*})|}{|hor(\operatorname*{st}({x},\mathcal{U}^{*}),\mathcal{V}^{*})|}>1-\epsilon$. So $\left(X,\mathcal{LSS}\right)$ is coarsely amenable.
∎
We will show that property A is preserved by the asymptotic filtered colimit construction. The following definitions are from [8]; they generalize the usual definition of property A (stated only for metric spaces with bounded geometry) to large scale spaces with bounded geometry:
Definition 2.28.
$(X,\mathcal{LSS})$ is a bounded geometry coarse space if for any $\mathcal{U}\in\mathcal{LSS},~{}\sup\limits_{U\in\mathcal{U}}|U|~{}<\infty$.
Definition 2.29.
Let $(X,\mathcal{LSS})$ be a bounded geometry coarse space. We say that $(X,\mathcal{LSS})$ has property A if for any $\epsilon>0$ and $\mathcal{U}\in\mathcal{LSS}$ there is a $\mathcal{V}\in\mathcal{LSS}$ and a family of subsets of $X\times\mathbb{N},~{}\left\{A_{x}\right\}_{x\in X}~{},$ so that for each $x\in X$:
$|A_{x}|<\infty$, $(x,1)\in A_{x}$, $A_{x}\subseteq\operatorname*{st}(x,\mathcal{V})\times\mathbb{N}$, and for any $y\in\operatorname*{st}(x,\mathcal{U})$ we have $\frac{|A_{x}\Delta A_{y}|}{|A_{x}\cap A_{y}|}<\epsilon$, where $A_{x}\Delta A_{y}$ is the symmetric difference of $A_{x}$ and $A_{y}$.
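The symmetric-difference condition in Definition 2.29 can be experimented with on a finite toy example. The following minimal sketch is illustrative only (the family $\{A_x\}$ and all names are hypothetical, not from [8]): on $X=\{0,\dots,49\}\subseteq\mathbb{Z}$ we take $A_x=\{(z,1):|z-x|\le R\}$, mimicking $A_x\subseteq\operatorname*{st}(x,\mathcal{V})\times\mathbb{N}$, and check the ratio for neighbouring points, which play the role of $y\in\operatorname*{st}(x,\mathcal{U})$.

```python
# Illustrative sketch (hypothetical toy family, not from the paper):
# checking |A_x Δ A_y| / |A_x ∩ A_y| < ε for a family {A_x} on
# X = {0, ..., 49}, with A_x = {(z, 1) : z in X, |z - x| <= R}.

def property_A_ratio(A_x, A_y):
    """|A_x Δ A_y| / |A_x ∩ A_y| for two finite sets."""
    inter = A_x & A_y
    return float("inf") if not inter else len(A_x ^ A_y) / len(inter)

def make_family(X, R):
    """A_x = {(z, 1) : z in X, |z - x| <= R}."""
    return {x: {(z, 1) for z in X if abs(z - x) <= R} for x in X}

X = range(50)
A = make_family(X, R=10)     # enlarging R shrinks the ratios
# points at distance <= 1 play the role of y in st(x, U)
ratios = [property_A_ratio(A[x], A[x + 1]) for x in range(49)]
all_small = all(r < 0.2 for r in ratios)   # True for this choice of R
```

Enlarging $R$ (i.e., choosing $\mathcal{V}$ much coarser than $\mathcal{U}$) makes the ratios as small as desired, which is the mechanism behind the definition.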
Proposition 2.30.
Let $(X,\mathcal{LSS})$ be an asymptotic filtered colimit of $\{(X_{s},\mathcal{LSS}_{s})\}_{s\in S}$. If $(X_{s},\mathcal{LSS}_{s})$ is a bounded geometry coarse space with property A for every $s\in S$, then $(X,\mathcal{LSS})$ is a bounded geometry coarse space with property A.
Proof.
Note that $(X,\mathcal{LSS})$ is a bounded geometry coarse space since for any $\mathcal{U}\in\mathcal{LSS}$, we have that $\mathcal{U}^{*}\in\mathcal{LSS}_{s}$ for some $s\in S$ and that $(X_{s},\mathcal{LSS}_{s})$ is a bounded geometry coarse space.
We now show that $(X,\mathcal{LSS})$ has property A. Let $\mathcal{U}\in\mathcal{LSS}$ and $\epsilon>0$. Then we have that for some $s\in S,~{}\mathcal{U}^{*}\in\mathcal{LSS}_{s}$. Since $(X_{s},\mathcal{LSS}_{s})$ has property A, we have that there is a $\mathcal{V}_{s}\in\mathcal{LSS}_{s}$ and a collection of subsets of $X_{s}\times\mathbb{N}$, $\{A_{x}\}_{x\in X_{s}}$, so that the requirements of property A are satisfied in $(X_{s},\mathcal{LSS}_{s})$.
Note that $\mathcal{V}_{s}\in\mathcal{LSS}$ and define $\mathcal{V}\in\mathcal{LSS}$ via $\mathcal{V}=\mathcal{V}_{s}~{}\cup\{\{x\}|x\in X\setminus X_{s}\}$. Define $\{B_{x}\}_{x\in X}$ via $B_{x}=A_{x}$ if $x\in X_{s}$ and $B_{x}=\{(x,1)\}$ otherwise. We show that $\mathcal{V}$ and $\{B_{x}\}_{x\in X}$ satisfy the requirements in the definition of property A.
Let $x\in X$. Then $|B_{x}|<\infty$ and $(x,1)\in B_{x}$ are obvious. $B_{x}\subseteq\operatorname*{st}(x,\mathcal{V})\times\mathbb{N}$ since $x\in X_{s}$ implies that $B_{x}=A_{x}\subseteq\operatorname*{st}(x,\mathcal{V}_{s})\times\mathbb{N}=\operatorname*{st}(x,\mathcal{V})\times\mathbb{N}$. Otherwise, $B_{x}=\{(x,1)\}\subseteq\operatorname*{st}(x,\mathcal{V})\times\mathbb{N}=\{x\}\times\mathbb{N}$.
Lastly, let $y\in\operatorname*{st}(x,\mathcal{U})$. If $x\in X_{s}$, then we have that $\operatorname*{st}(x,\mathcal{U})=\operatorname*{st}(x,\mathcal{U}^{*})$ hence $y\in\operatorname*{st}(x,\mathcal{U}^{*})$ (i.e. $y\in X_{s}$) and $B_{x}=A_{x}$ and $B_{y}=A_{y}$. So $\frac{|A_{x}\Delta A_{y}|}{|A_{x}\cap A_{y}|}<\epsilon$ since $(X_{s},\mathcal{LSS}_{s})$ has property A. If $x\in X\setminus X_{s}$, then $y\in\operatorname*{st}(x,\mathcal{U})$ implies that $y=x$. Hence $|B_{x}\Delta B_{y}|=0$ and $\frac{|B_{x}\Delta B_{y}|}{|B_{x}\cap B_{y}|}=0<\epsilon$. So $(X,\mathcal{LSS})$ has property A.
∎
We expect the converse of this proposition to hold; one would need to show that property A is preserved by passing to subspaces. It was shown in [6] that this is true in the case of uniformly discrete metric spaces.
We have presented multiple properties that are preserved through asymptotic filtered colimits. It turns out that closeness of functions is not preserved through asymptotic filtered colimits. The following is such an example:
Example 2.31.
Let $X=\left(0,1\right]$ and let $X_{n}=\left[\frac{1}{n+1},1\right]$ for $n\in\left\{1,2,...\right\}$. Let $X_{n}$ have the large scale structure induced by the metric of absolute value. Then we have that $\bigcup\limits_{n=1}^{\infty}X_{n}=X$ and that $X_{n}\subseteq X_{n+1}$ for every $n$.
Let $\mathcal{LSS}$ be the asymptotic filtered colimit of $\left\{(X_{n},\mathcal{LSS}_{n})\right\}_{n\in\mathbb{N}}$ on $X$. Define $f:X\to\mathbb{R}$ via $f(x)=\frac{1}{x}$ and $g:X\to\mathbb{R}$ via $g(x)=1$, and give $\mathbb{R}$ the large scale structure induced by the metric of absolute value.
For any $n$, we have that $X_{n}$ is a compact set. Since the function $|f-g|$ is continuous on $X_{n}$, it is bounded there, so $f|_{X_{n}}$ is close to $g|_{X_{n}}$ for all $n$.
However, $f$ is not close to $g$. Indeed, suppose for contradiction that $f$ is close to $g$. Then there is a uniformly bounded family $\mathcal{V}$ of $\mathbb{R}$ so that for any $x\in X$, $\left\{f(x),g(x)\right\}\subseteq V$ for some $V\in\mathcal{V}\cup\left\{\left\{y\right\}~{}|~{}y\in\mathbb{R}\right\}$. By definition of the large scale structure of $\mathbb{R}$, there exists an $M>0$ so that for any $V\in\mathcal{V},~{}\mathrm{diam}(V)<M$.
This implies that for any $x\in X,~{}|f(x)-g(x)|<M$ i.e. for any $x\in\left(0,1\right]$, $\frac{1-x}{x}<M$. This is a contradiction. Indeed, choose $x=\frac{1}{M+2}$.
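A numerical companion to Example 2.31 (illustrative only): on each $X_{n}=[\frac{1}{n+1},1]$ the supremum of $|f-g|=|\frac{1}{x}-1|$ is finite, namely $n$, attained at the left endpoint, but these bounds are unbounded in $n$, which is exactly why no uniformly bounded family can witness closeness on all of $(0,1]$.

```python
# Sketch: sup |f - g| on X_n = [1/(n+1), 1] is finite (equal to n, at
# the left endpoint), yet unbounded as n grows — matching the
# contradiction x = 1/(M+2) in the text.  Sampling grid is arbitrary.

f = lambda x: 1.0 / x
g = lambda x: 1.0

def sup_distance_on_X_n(n, samples=10_000):
    lo = 1.0 / (n + 1)
    pts = [lo + k * (1.0 - lo) / samples for k in range(samples + 1)]
    return max(abs(f(x) - g(x)) for x in pts)

bounds = [sup_distance_on_X_n(n) for n in (1, 10, 100)]
# bounds ≈ [1, 10, 100]: finite on each X_n, unbounded over all n
```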
References
[1]
M. Cencelj, J. Dydak, and A. Vavpetic, Coarse Amenability vs Paracompactness, Journal of Topology and Analysis 6 (2014), no. 1, 125–152.
[2]
J. Dydak and C.S. Hoffland, An Alternative Definition of Coarse Structures, Topology and its Applications 155 (2008), no. 9, 1013–1021.
[3]
J. Dydak and T. Weighill, Extension Theorems for Large Scale Spaces via Coarse Neighbourhoods, Mediterranean Journal of Mathematics, 2018, 15:59.
[4]
E. Guentner, Permanence in Coarse Geometry, in: Recent Progress in General Topology III, Atlantis Press, 2013, 507–533.
[5]
M. Holloway, Duality of scales, PhD Thesis, The University of Tennessee, 2016.
[6]
P. Nowak and G. Yu, Large Scale Geometry, European Mathematical Society Publishing House, 2012.
[7]
J. Roe, Lectures on Coarse Geometry, University Lecture Series 31, American Mathematical Society, Providence, RI, 2003.
[8]
K. Sinclair, Generalizations of Coarse Properties in Large Scale Spaces, PhD Thesis, The University of Tennessee, 2017.
[9]
G. Yu, The coarse Baum-Connes conjecture for spaces which admit a uniform embedding into Hilbert space, Inventiones Mathematicae 139 (2000), no. 1, 201–240.
Heisenberg varieties and the existence of de Rham lifts
Zhongyipan Lin
Abstract.
Let $F$ be a $p$-adic field.
For certain non-abelian nilpotent algebraic groups $U$
over $\bar{\mathbb{Z}}_{p}$ equipped with $\operatorname{Gal}_{F}$-action,
we study
the associated
Heisenberg varieties
which model the non-abelian cohomology set “$H^{1}(\operatorname{Gal}_{F},U)$”.
The construction of Heisenberg varieties involves the Herr complexes
including their cup product structure.
Write $U_{n}$ for a quasi-split unitary group
and assume $p\neq 2$.
We classify mod $p$ Langlands parameters
for $U_{n}$ (quasi-split), $\operatorname{SO}_{2n+1}$, $\operatorname{SO}_{2n}$, $\operatorname{Sp}_{2n}$, ${\operatorname{GSpin}}_{2m}$
and ${\operatorname{GSpin}}_{2m+1}$ (split)
over $F$,
and show they are successive Heisenberg-type extensions
of elliptic Langlands parameters.
We employ the Heisenberg variety to
study the obstructions
for lifting a non-abelian cocycle along the map
$H^{1}(\operatorname{Gal}_{F},U(\bar{\mathbb{Z}}_{p}))\to H^{1}(\operatorname{Gal}_{F},U(\bar{\mathbb{F}}_{p}))$.
We present a precise theorem that reduces the task of finding de Rham lifts of mod $p$ Langlands parameters for unitary, symplectic, orthogonal, and spin similitude groups to the dimension analysis of specific closed substacks of the reduced Emerton-Gee stacks for the corresponding group.
Finally, we carry out
the dimension analysis for the unitary Emerton-Gee stacks
using the geometry of Grassmannian varieties.
The paper culminates in the proof of the existence of
potentially crystalline lifts of regular Hodge type for
all mod $p$ Langlands parameters for $p$-adic (possibly ramified) unitary groups $U_{n}$.
It is the first general existence result for de Rham lifts for
non-split (ramified) groups,
and provides evidence for the topological Breuil-Mézard conjecture
for more general groups.
Contents
1 Introduction
2 Heisenberg equations
3 Extensions of $(\varphi,\Gamma)$-modules and non-abelian $(\varphi,\Gamma)$-cohomology
4 Cohomologically Heisenberg lifting problems
5 Applications to Galois cohomology
6 Interaction of cup products with $\mathbb{Z}/2$-action
7 Example A: unitary groups
8 Example B: symplectic groups
9 Example C: odd and even orthogonal groups
10 The Emerton-Gee stacks for unitary groups
1. Introduction
Let $G$ be a reductive group over a $p$-adic field $F$
which splits over a tame extension $K/F$, and
let ${{}^{L}\!G}=\widehat{G}\rtimes\operatorname{Gal}(K/F)$
be the Langlands dual group of $G$.
In our previous work [Lin23],
we classified elliptic mod $p$ Langlands parameters for $G$
and constructed their de Rham lifts.
In this paper, we shift our attention to parabolic mod $p$ Langlands parameters.
Let $P\subset\widehat{G}$ be a $\operatorname{Gal}(K/F)$-stable parabolic.
Write ${{}^{L}\!P}$ for $P\rtimes\operatorname{Gal}(K/F)$.
A mod $p$ Langlands parameter is either elliptic,
or factors through some ${{}^{L}\!P}$.
Let $\bar{\rho}:\operatorname{Gal}_{F}\to{{}^{L}\!P}(\bar{\mathbb{F}}_{p})$ be a parabolic mod $p$
Langlands parameter.
We are interested in the following question:
Question: Does there exist
a de Rham lift $\rho:\operatorname{Gal}_{F}\to{{}^{L}\!P}(\bar{\mathbb{Z}}_{p})$
of regular Hodge type?
This question is addressed for $G=\operatorname{GL}_{n}$ in the book
by Emerton and Gee,
and has important applications to the geometric Breuil-Mézard conjecture (see [EG23] and [Le+23]).
We briefly describe the general strategy employed in [EG23]
and explain how the proof breaks for groups which are not $\operatorname{GL}_{n}$.
Let ${{}^{L}\!M}$ denote a $\operatorname{Gal}(K/F)$-stable Levi subgroup of
${{}^{L}\!P}$ and write $U$ for the unipotent radical of $P$.
Write $\bar{\rho}_{M}:\operatorname{Gal}_{F}\xrightarrow{\bar{\rho}}{{}^{L}\!P}(\bar{\mathbb{F}}_{p})\to{{}^{L}\!M}(\bar{\mathbb{F}}_{p})$
for the ${{}^{L}\!M}$-semisimplification of $\bar{\rho}$.
The construction of $\rho$ follows a $2$-step process:
•
Step 1: Carefully choose a lift $\rho_{M}:\operatorname{Gal}_{F}\to{{}^{L}\!M}(\bar{\mathbb{Z}}_{p})$ of $\bar{\rho}_{M}$.
•
Step 2:
Endow $U(\bar{\mathbb{Z}}_{p})$ with the $\operatorname{Gal}_{F}$-action induced by $\rho_{M}$.
Show that the image of
$$H^{1}(\operatorname{Gal}_{F},U(\bar{\mathbb{Z}}_{p}))\to H^{1}(\operatorname{Gal}_{F},U(\bar{\mathbb{F}}_{p}))$$
contains the cocycle corresponding to $\bar{\rho}$.
1.1. Partial lifts after abelianization and the work of Emerton-Gee
Let $[U,U]$ denote the derived subgroup of $U$
and write $U^{\operatorname{ab}}:=U/[U,U]$
for the abelianization of $U$.
The first approximation of a de Rham lift of $\bar{\rho}$
is a continuous group homomorphism
$$\operatorname{Gal}_{F}\to\frac{{{}^{L}\!P}}{[U,U]}(\bar{\mathbb{Z}}_{p})$$
lifting $\bar{\rho}$ modulo $[U,U]$.
The groundbreaking idea presented in [EG23] is that we can achieve the following:
•
Step 1: Choose a lift $\rho_{M}:\operatorname{Gal}_{F}\to{{}^{L}\!M}(\bar{\mathbb{Z}}_{p})$ of $\bar{\rho}_{M}$.
•
Step 2: Guarantee the image of
$$H^{1}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(\bar{\mathbb{Z}}_{p}))\to H^{1}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(\bar{\mathbb{F}}_{p}))$$
contains the cocycle corresponding to $\bar{\rho}\mod[U,U]$,
as long as we can estimate the dimension of certain substacks
of the reduced Emerton-Gee stacks for $M$.
We formalize the output of their geometric argument as follows:
Property EPL.
(Existence of partial lifts)
Let $\operatorname{Spec}R$ be a non-empty potentially crystalline
deformation ring of $\bar{\rho}_{M}$
such that for some $x\in\operatorname{Spec}R(\bar{\mathbb{Q}}_{p})$,
$H^{1}_{{\operatorname{crys}}}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(\bar{\mathbb{Q}}_{p}))=H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(\bar{\mathbb{Q}}_{p}))$.
Then there exists a point $y\in\operatorname{Spec}R(\bar{\mathbb{Z}}_{p})$
such that the image of
$H^{1}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(\bar{\mathbb{Z}}_{p}))\to H^{1}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(\bar{\mathbb{F}}_{p}))$
contains the cocycle corresponding to $\bar{\rho}\mod[U,U]$.
The geometric input is as follows:
Property SSD.
(Sufficiently small dimension)
Write $\mathcal{X}_{F,{{}^{L}\!M},{\operatorname{red}}}$ for the reduced Emerton-Gee
stacks for $M$ and
write $X_{s}\subset\mathcal{X}_{F,{{}^{L}\!M},{\operatorname{red}}}$
for the (scheme-theoretic closure of the) locus
$$\{x|\dim_{\bar{\mathbb{F}}_{p}}H^{2}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(\bar{\mathbb{F}}_{p}))\geq s\}.$$
Then
$$\dim X_{s}+s\leq[F:\mathbb{Q}_{p}]\dim M/B_{M}$$
where $B_{M}$ is a Borel of $M$.
Here we define the dimension of an empty set to be $-\infty$
to avoid confusion.
Theorem 1.
Property SSD implies Property EPL.
Proof.
The proof of [EG23, Theorem 6.1.1, Theorem 6.3.2]
works verbatim.
See also [Lin21, Theorem 5.1.2].
The proof needs the algebraicity of the reduced Emerton-Gee stacks,
which is established in [Lin23b],
as well as the basic properties of the potentially crystalline
deformation rings, which are the main results of [BG19].
∎
For $G=\operatorname{GL}_{n}$, we have $F=K$ and
we can choose $P$ such that $U=U^{\operatorname{ab}}$.
However,
for general groups $G$,
we immediately run into the problem that
$U\neq U^{\operatorname{ab}}$.
1.2. Heisenberg-type extensions
The good news is that for classical groups,
we can always choose $P$ such that
$U$ is the next best thing after abelian groups,
namely, unipotent algebraic groups of nilpotency class $2$.
Theorem 2.
(Lemma 7.2, Lemma 8.1, Lemma 9.1)
Let ${{}^{L}\!G}$ be any of ${{}^{L}\!U}_{n},\operatorname{GSp}_{2n}$ or ${\operatorname{GSO}}_{n}$.
Then each mod $p$ Langlands parameter
$\bar{\rho}:\operatorname{Gal}_{F}\to{{}^{L}\!G}(\bar{\mathbb{F}}_{p})$
is either elliptic,
or factors through a maximal proper parabolic
${{}^{L}\!P}$
such that $\bar{\rho}$
is a Heisenberg-type extension (see Definition 5.1)
of some $\bar{\rho}_{M}:\operatorname{Gal}_{F}\to{{}^{L}\!M}(\bar{\mathbb{F}}_{p})$
where ${{}^{L}\!M}$ is the Levi factor of ${{}^{L}\!P}$.
A Heisenberg-type extension
is, roughly speaking,
an extension which has the least amount of “non-linearity”.
More precisely, if $\bar{\rho}_{P}:\operatorname{Gal}_{F}\to{{}^{L}\!P}(\bar{\mathbb{F}}_{p})$
is a Heisenberg-type extension of $\bar{\rho}_{M}$, then
$[U,U]$ is an abelian group, and
$$\dim_{\bar{\mathbb{F}}_{p}}H^{2}(\operatorname{Gal}_{K},[U,U](\bar{\mathbb{F}}_{p}))\leq 1.$$
The key technical result of this paper
is Theorem 5.5.
Roughly speaking,
for Heisenberg-type extensions,
the non-linear part of the obstruction for lifting
is so mild that it can be killed through manipulating
cup products.
To make this idea work, we need
a resolution of Galois cohomology
supported on degrees $[0,2]$,
which is compatible with cup products
on the cochain level.
In this paper, the resolution used
is the Herr complexes.
Although Herr complexes are infinite-dimensional
resolutions,
we can truncate them to a finite system
while still retaining the structure of cup products.
The Heisenberg equations are defined through
cup products on the truncated Herr cochain groups.
The main technical work is done in Sections 1-5.
In Sections 6-9, we study unitary groups,
symplectic groups and orthogonal groups
on a case-by-case basis and prove the following.
Theorem 3.
Let ${{}^{L}\!G}_{n}$ be any of ${{}^{L}\!U}_{n},\operatorname{Sp}_{2n},\operatorname{SO}_{n},\operatorname{GSp}_{2n}$ or ${\operatorname{GSO}}_{n}$,
and assume $p\neq 2$.
Assume for each $n$ and each maximal proper Levi
${{}^{L}\!M}$ of ${{}^{L}\!U}_{n}$,
Property SSD holds for ${{}^{L}\!M}$.
Then all
$L$-parameters $\operatorname{Gal}_{F}\to{{}^{L}\!G}_{n}(\bar{\mathbb{F}}_{p})$
admit a potentially crystalline lift of regular Hodge type.
We remark that
$\operatorname{GSp}_{2n}$ and ${\operatorname{GSO}}_{2n}$
are the Langlands dual groups
of the spin similitude groups,
while
$\operatorname{Sp}_{2n}$, $\operatorname{SO}_{2n}$ and $\operatorname{SO}_{2n+1}$
are the Langlands dual groups
of $\operatorname{SO}_{2n+1}$, $\operatorname{SO}_{2n}$ and $\operatorname{Sp}_{2n}$,
respectively.
1.3. The Emerton-Gee stacks for unitary groups
We prove the following theorem.
Theorem 4.
(Theorem 10.1)
Let $\bar{\alpha}:\operatorname{Gal}_{K}\to\operatorname{GL}_{a}(\bar{\mathbb{F}}_{p})$
be an irreducible Galois representation.
The locus of $\bar{x}\in\mathcal{X}_{F,{{}^{L}\!U}_{n},{\operatorname{red}}}$
such that
$\dim_{\bar{\mathbb{F}}_{p}}\operatorname{Hom}_{\operatorname{Gal}_{K}}(\bar{\alpha},\bar{x}|_{\operatorname{Gal}_{K}})\geq r$
is of dimension at most
$[F:\mathbb{Q}_{p}]\frac{n(n-1)}{2}-r^{2}+\frac{r}{2}$.
Since Theorem 4 is stronger than Property SSD, we have established the existence of de Rham lifts for $U_{n}$.
Theorem 5.
If $p\neq 2$, all $L$-parameters
$\operatorname{Gal}_{F}\to{{}^{L}\!U}_{n}(\bar{\mathbb{F}}_{p})$
admit a potentially crystalline lift
of regular Hodge type.
Proof.
Note that
$H^{2}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(\bar{\mathbb{F}}_{p}))\cong H^{2}(\operatorname{Gal}_{K},\bar{\alpha}\otimes\bar{x}|_{\operatorname{Gal}_{K}}^{\vee})$
and $\lfloor-r^{2}+r/2\rfloor\leq-r$ for all $r\geq 1$.
∎
The proof of Theorem 4
is much more involved compared to its $\operatorname{GL}_{n}$-analogue
worked out in [EG23].
The $\operatorname{GL}_{n}$-case only requires the computation
of the rank of certain vector bundles,
and is a completely linear problem.
For $U_{n}$,
we need to compute the relative dimension
of certain quadratic cones.
To prove the required bound for $U_{n}$, we need a very precise
control of the rank of cup products
for extensions of the form
$\begin{bmatrix}\bar{\alpha}(1)^{\oplus r}&*&*\\
&\bar{\tau}&*\\
&&\bar{\alpha}^{\oplus r}\end{bmatrix}$,
and in order to do that,
we need to relate the rank of cup products
to the dimension of Grassmannian manifolds
(with the key lemma being 10.10).
The inequality is extremely tight at many steps.
After we have obtained an estimate for the rank of cup products,
we still need to divide the question into multiple cases
and perform min-max optimization on multi-variable polynomial functions in each of these cases.
The method of proof we presented in this paper also applies to
orthogonal/symplectic/spin similitude groups.
However, we do not treat these groups in this paper
due to the complexity of the analysis involved.
1.4. Final remarks
In the literature,
the geometric Breuil-Mézard conjecture
is often proved for sufficiently generic tame inertial types
(for example, see [Le+23]).
For $\operatorname{GL}_{d}$, if we only care about the generic situation,
then the existence of de Rham lifts is straightforward.
Let $\bar{r}=\begin{bmatrix}\bar{r}_{1}&*&\dots&*\\
&\bar{r}_{2}&\dots&*\\
&&\dots&*\\
&&&\bar{r}_{m}\end{bmatrix}$
be a Galois representation
which is maximally non-split,
meaning it factors through a unique minimal parabolic.
If $\bar{r}_{i}(1)\neq\bar{r}_{i+1}$
for each $i$,
then $\bar{r}$ admits a crystalline lift for trivial reasons.
Indeed,
put
$\bar{r}^{i}:=\begin{bmatrix}\bar{r}_{i}&*&\dots&*\\
&\bar{r}_{i+1}&\dots&*\\
&&\dots&*\\
&&&\bar{r}_{m}\end{bmatrix}$;
once an arbitrary lift $r^{i}$ of $\bar{r}^{i}$ is chosen,
we can construct a lift of $r^{i-1}$
as an extension of $r^{i}$ and a lift of $\bar{r}_{i}$.
However, for general groups such as the unitary groups,
regardless of how generic the situation is,
we do not have an easy way of constructing de Rham lifts.
The reason is that “maximal non-splitness” is not
very useful for general groups.
Although it remains a strong constraint,
it is not easy to be directly utilized.
Consider the symplectic similitude group situation
for the sake of reusing notations.
The argument in the previous paragraph breaks
unless $\bar{r}_{i}(1)\neq\bar{r}_{j}$
for all $i,j$.
Put
$\bar{\tau}:=\begin{bmatrix}\bar{r}_{2}&\dots&*\\
&\dots&*\\
&&\bar{r}_{m-1}\end{bmatrix}$
and thus
$\bar{\rho}=\begin{bmatrix}\bar{r}_{1}&\bar{c}_{1}&\bar{c}_{3}\\
&\bar{\tau}&\bar{c}_{2}\\
&&\bar{r}_{m}\end{bmatrix}$.
Suppose we have chosen a lift
$(r_{1},\tau,r_{m})$
of $(\bar{r}_{1},\bar{\tau},\bar{r}_{m})$.
A lift $c_{1}$ of $\bar{c}_{1}$ uniquely determines
a lift $c_{2}$ of $\bar{c}_{2}$
and vice versa.
Let’s choose a $c_{1}$ and, thus, a $c_{2}$.
Now we run into the problem that a lift $c_{3}$ of $\bar{c}_{3}$
does not exist
for all choices of $c_{1}$.
For a lift $c_{3}$ to exist, we must have $c_{1}\cup c_{2}=0$,
which is a non-linear condition.
Even if $c_{1}\cup c_{2}=0$, we can only ensure
there exists a $c_{3}$ which makes
$\begin{bmatrix}r_{1}&c_{1}&c_{3}\\
&\tau&c_{2}\\
&&r_{m}\end{bmatrix}$
a group homomorphism;
there is no guarantee that
$c_{3}$ lifts $\bar{c}_{3}$!
The obstruction disappears if $\bar{r}_{1}(1)\neq\bar{r}_{m}$; but
such restrictions
will force the Serre weights to lie within very narrow strips
of a chosen alcove.
From this perspective, Theorem 5 is necessary even if we only aim to prove the Breuil-Mézard conjecture in the generic situation.
2. Heisenberg equations
Let $r,s,t\in\mathbb{Z}_{+}$ be positive integers.
Let $\Lambda$ be a DVR with uniformizer $\varpi$.
Let $d\in\operatorname{Mat}_{s\times t}(\Lambda)$
and $\Sigma_{1},\dots,\Sigma_{s}\in\operatorname{Mat}_{r\times r}(\Lambda)$
be constant matrices.
For ease of notation, for $x\in\operatorname{Mat}_{r\times 1}(\Lambda)$, write
$$x^{t}\Sigma x:=\begin{bmatrix}x^{t}\Sigma_{1}x\\
\dots\\
x^{t}\Sigma_{s}x\end{bmatrix}\in\operatorname{Mat}_{s\times 1}(\Lambda).$$
Here $x^{t}$ denotes the transpose of $x$.
We are interested in solving systems of equations in $(r+t)$ variables of the form
$$x^{t}\Sigma x+dy=0\tag{$\dagger$}$$
where $x\in\Lambda^{\oplus r}$ and
$y\in\Lambda^{\oplus t}$
are the $(r+t)$ variables.
We will call ($\dagger$)
the quadratic equation with coefficient matrix $(\Sigma,d)$.
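For concreteness, here is a minimal pure-Python sketch of a system of the form ($\dagger$), with hypothetical toy coefficients ($\Lambda$ replaced by $\mathbb{Z}$ for illustration, $r=s=2$, $t=1$); nothing here is taken from the paper beyond the shape of the equation.

```python
# Toy instance of (†): x^t Σ x + d y = 0, with Λ replaced by Z.
Sigma = [[[1, 0], [0, -1]],      # Σ_1
         [[0, 1], [1, 0]]]       # Σ_2
d = [[0], [1]]                   # d ∈ Mat_{s×t} = Mat_{2×1}

def dagger(x, y):
    """Entries of the column vector x^t Σ x + d y."""
    out = []
    for i, S in enumerate(Sigma):
        quad = sum(x[a] * S[a][b] * x[b] for a in range(2) for b in range(2))
        lin = sum(d[i][j] * y[j] for j in range(1))
        out.append(quad + lin)
    return out

x, y = [1, 1], [-2]
# dagger(x, y) == [0, 0]: (x, y) solves (†) for this choice of (Σ, d)
```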
2.1. Lemma
Let $\Lambda$ be a DVR with uniformizer $\varpi$.
Let $M$ be a finite flat $\Lambda$-module.
If $N\subset M$ is a submodule such that $M/N\cong\Lambda/\varpi^{n}$ ($n>0$),
then there exists a $\Lambda$-basis $\{x_{1},x_{2},\dots,x_{s}\}$
of $M$ such that
$N=\operatorname{span}(\varpi^{n}x_{1},x_{2},\dots,x_{s})$.
Proof.
Let $\{e_{1},\dots,e_{s}\}$ be a $\Lambda$-basis of $M$
and let $\{f_{1},\dots,f_{s}\}$ be a $\Lambda$-basis of $N$.
There exists a matrix $X\in\operatorname{GL}_{s}(\Lambda[1/\varpi])$
such that $(f_{1}|\dots|f_{s})=X(e_{1}|\dots|e_{s})$.
By the theory of Smith normal form,
$X=SDT$ where $S,T\in\operatorname{GL}_{s}(\Lambda)$
and $D$ is a diagonal matrix.
We have $D={\operatorname{Diag}}(\varpi^{n},1,\dots,1)$.
Set $(x_{1}|\dots|x_{s}):=T(e_{1}|\dots|e_{s})$,
and we are done.
∎
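The change of basis in Lemma 2.1 is exactly a Smith normal form computation. A sketch over $\mathbb{Z}$ (standing in for the DVR $\Lambda$, with a prime as uniformizer), using sympy; the matrix $X$ below, expressing a basis of $N$ in a basis of $M$ with $M/N\cong\mathbb{Z}/8$, is a hypothetical example.

```python
# Lemma 2.1 via Smith normal form over Z (illustrative example).
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

X = Matrix([[3, 1],
            [8, 0]])                  # det = -8, gcd of entries = 1
D = smith_normal_form(X, domain=ZZ)   # invariant factors (1, 8) up to sign
# After the change of basis as in the lemma, N = span(x_1, 8·x_2),
# i.e. the diagonal shape Diag(ϖ^n, 1, ..., 1) up to reordering.
```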
2.2. Definition
A quadratic equation with
coefficient matrix $(\Sigma,d)$
is said to be Heisenberg
if
(H1)
$\operatorname{coker}d\cong\Lambda$ or $\Lambda/\varpi^{n}$; and
(H2)
there exists $f\in\Lambda^{\oplus r}$
such that $f^{t}\Sigma f\neq 0$ mod $(\varpi,\operatorname{Im}(d))$.
2.3. Theorem
Let $(\Sigma,d)$ be a Heisenberg equation over $\Lambda$.
If there exists a mod $\varpi$ solution $(\bar{x},\bar{y})\in(\Lambda/\varpi)^{\oplus r+t}$
to $(\Sigma,d)$ (that is, $\bar{x}^{t}\Sigma\bar{x}+d\bar{y}\in\varpi\Lambda^{\oplus s}$),
then there exists an extension of DVR $\Lambda\subset\Lambda^{\prime}$
such that there exists a solution $(x,y)\in\Lambda^{\prime\oplus r+t}$
of $(\Sigma,d)$ lifting $(\bar{x},\bar{y})$.
Proof.
Write $\{e_{1},\dots,e_{s}\}$ for the standard basis for $\Lambda^{\oplus s}$.
By Lemma 2.1,
we can assume $\operatorname{Im}(d)=\operatorname{span}(\varpi^{n}e_{1},e_{2},\dots,e_{s})$
or $\operatorname{span}(e_{2},\dots,e_{s})$.
Write
$d=\begin{bmatrix}d_{1}\\
\dots\\
d_{s}\end{bmatrix}$.
By Definition 2.2,
there exists an element $f\in\Lambda^{\oplus r}$
such that $f^{t}\Sigma f\neq 0$ mod $(\varpi,\operatorname{Im}(d))$;
equivalently, $f^{t}\Sigma_{1}f\neq 0$ mod $\varpi$.
Let $(x,y)$ be an arbitrary lift of $(\bar{x},\bar{y})$.
Let $\Lambda^{\prime}$ be the ring of integers of the algebraic closure of
$\Lambda[1/\varpi]$.
Let $\lambda\in\Lambda^{\prime}$.
Consider
$$\displaystyle(x+\lambda f)^{t}\Sigma_{1}(x+\lambda f)+d_{1}y$$
$$\displaystyle=(f^{t}\Sigma_{1}f)\lambda^{2}+(x^{t}\Sigma_{1}f+f^{t}\Sigma_{1}x)\lambda+(x^{t}\Sigma_{1}x+d_{1}y);$$
note that the $\varpi$-adic valuation of the coefficient of the quadratic term is $0$,
while the $\varpi$-adic valuation of the constant term is positive.
By inspecting the Newton polygon, the quadratic equation above admits a solution
$\lambda\in\Lambda^{\prime}$ of positive $\varpi$-adic valuation.
By replacing $x$ by $x+\lambda f$,
we can assume
$$x^{t}\Sigma_{1}x+d_{1}y=0.$$
Equivalently,
$x^{t}\Sigma x+dy\in\operatorname{span}(e_{2},\dots,e_{s})\subset\operatorname{Im}(d)$.
By replacing $\Lambda$ by $\Lambda[\lambda]$,
we may assume $(x,y)\in\Lambda^{\oplus r+t}$.
In particular, there exists an element $z\in\Lambda^{\oplus t}$
such that
$$x^{t}\Sigma x+dy=dz,$$
and it remains to show we can ensure $z=0$ mod $\varpi$.
We do know $dz=0$ mod $\varpi$.
So $dz\in\operatorname{span}(\varpi e_{2},\dots,\varpi e_{s})$.
Say $dz=\varpi u$; then we have
$u\in\operatorname{span}(e_{2},\dots,e_{s})\subset\operatorname{Im}(d)$.
Say $u=dv$.
So $dz=\varpi dv=d\varpi v$.
By replacing $z$ by $\varpi v$,
we have
$$x^{t}\Sigma x+dy=d\varpi v.$$
Finally, replacing $y$ by $(y-\varpi v)$,
we are done.
∎
We will call affine varieties defined by Heisenberg equations Heisenberg varieties.
Theorem 2.3 says
all $\Lambda/\varpi$-points of a Heisenberg variety
admit a $\varpi$-adic thickening.
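The Newton-polygon step in the proof of Theorem 2.3 can be illustrated on a hypothetical toy quadratic over $\mathbb{Z}_{p}$ (playing the role of $\Lambda'$): $f(\lambda)=a\lambda^{2}+b\lambda+c$ with $v(a)=0$ and $v(c)>0$ has a root of positive valuation. In the example below $v(b)=0$ as well, so the root is simple mod $p$ and ordinary Hensel lifting produces it; the specific numbers are not from the paper.

```python
# Hensel lifting of the positive-valuation root of a λ² + b λ + c
# over Z_5, where v(a) = v(b) = 0 and v(c) = 1 (toy example).
p = 5
a, b, c = 1, 2, 5            # Newton polygon: one root of valuation 1

f = lambda t: a * t * t + b * t + c
df = lambda t: 2 * a * t + b

lam, mod = 0, p              # f(0) = c ≡ 0 (mod p), f'(0) = b is a unit
for _ in range(5):           # lift the root modulo p**6
    mod *= p
    lam = (lam - f(lam) * pow(df(lam), -1, mod)) % mod
# lam ≡ 0 (mod p): the root has positive p-adic valuation
```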
3. Extensions of $(\varphi,\Gamma)$-modules and non-abelian $(\varphi,\Gamma)$-cohomology
Fix a pinned split reductive group $(\widehat{G},\widehat{B},\widehat{T},\{Y_{\alpha}\})$
over $\mathbb{Z}$.
Fix a parabolic $P\subset\widehat{G}$ containing $\widehat{B}$
with Levi subgroup $\widehat{M}$
and unipotent radical $U$.
Let $K$ be a $p$-adic field.
Let $A$ be a $\mathbb{Z}_{p}$-algebra.
The ring $\mathbb{A}_{K,A}$ is defined as in [Lin23b, Definition 4.2.8].
See [Lin23b, Section 4.2] for the definition of the procyclic group
$H_{K}$.
Fix a topological generator $\gamma$ of $H_{K}$.
Note that $\mathbb{A}_{K,A}$ admits a Frobenius action $\varphi$
which commutes with $\gamma$.
3.1. Framed parabolic $(\varphi,\Gamma)$-modules
A framed $(\varphi,\gamma)$-module with $P$-structure
and $A$-coefficients is
a pair of matrices $[\phi],[\gamma]\in P(\mathbb{A}_{K,A})$,
satisfying
$[\phi]\varphi([\gamma])=[\gamma]\gamma([\phi])$.
A framed $(\varphi,\Gamma)$-module with $P$-structure
and $A$-coefficients
is a framed $(\varphi,\gamma)$-module $([\phi],[\gamma])$
with $P$-structure
and $A$-coefficients
such that there exists a closed algebraic group embedding
$P\hookrightarrow\operatorname{GL}_{d}=\operatorname{GL}(V)$ and
$1-[\gamma]$ induces a topologically nilpotent
$\gamma$-semilinear endomorphism of $V(\mathbb{A}_{K,A})$.
To make this concrete, if $\{e_{1},\dots,e_{d}\}$ is
the standard basis of $V$, then
$[\gamma]$ sends $\alpha e_{i}$ to $\gamma(\alpha)[\gamma]e_{i}$.
3.2. Levi factor of $(\varphi,\gamma)$-modules
Let $([\varphi],[\gamma])$ be a framed $(\varphi,\gamma)$-module
with $P$-structure and $A$-coefficients.
Write $([\varphi]_{\widehat{M}},[\gamma]_{\widehat{M}})$
for its image under the projection $P\to\widehat{M}$.
Note that $([\varphi]_{\widehat{M}},[\gamma]_{\widehat{M}})$
is a framed $(\varphi,\gamma)$-module with $\widehat{M}$-structure.
3.3. Lemma
Let $([\varphi],[\gamma])$ be a framed $(\varphi,\gamma)$-module
with $P$-structure and $A$-coefficients.
Then $([\varphi],[\gamma])$ is a framed $(\varphi,\Gamma)$-module
with $P$-structure and $A$-coefficients
if and only if
$([\varphi]_{\widehat{M}},[\gamma]_{\widehat{M}})$ is a framed $(\varphi,\Gamma)$-module
with $\widehat{M}$-structure and $A$-coefficients.
Proof.
Note that $P=U\rtimes\widehat{M}$ and $\widehat{M}$ is a subgroup of $P$.
We will regard both $P$ and $\widehat{M}$ as a subgroup of $\operatorname{GL}_{d}\subset\operatorname{Mat}_{d\times d}$
by fixing an embedding $P\hookrightarrow\operatorname{GL}_{d}$.
Write $[u]:=[\gamma]-[\gamma]_{\widehat{M}}$.
Note that the Jordan decomposition of $[\gamma]$ and $[\gamma]_{\widehat{M}}$
has the same semisimple part;
write $[\gamma]=g_{s}g_{u}$
and $[\gamma]_{\widehat{M}}=g_{s}g_{u}^{\prime}$
for the Jordan decomposition, where both $g_{u}$ and $g_{u}^{\prime}$
lie in the unipotent radical of a Borel of $P$
(if we replace $\widehat{M}$ by one of its conjugates in $P$, then $g_{u}$ and $g_{u}^{\prime}$
lie in the unipotent radical of the same Borel of $P$).
By the Lie-Kolchin theorem, $(g_{u}-g_{u}^{\prime})$ is nilpotent.
Since $g_{s}$ commutes with $g_{u}$ and $g_{u}^{\prime}$, $[u]$ is nilpotent.
We have $(1-[\gamma]_{\widehat{M}})=(1-[\gamma]+[u])$.
Since $[u]$ is nilpotent, $(1-[\gamma]_{\widehat{M}})$
is topologically nilpotent if and only if
$(1-[\gamma])$ is topologically nilpotent.
∎
3.4. Extensions of $(\varphi,\Gamma)$-modules
In this paragraph, we classify
all framed $(\varphi,\Gamma)$-modules
with $P$-structure and $A$-coefficients
whose Levi factor
is equal to a fixed $(\varphi,\Gamma)$-module with $\widehat{M}$-structure
$([\phi]_{\widehat{M}},[\gamma]_{\widehat{M}})$.
For ease of notation, write $f=[\phi]_{\widehat{M}}$
and $g=[\gamma]_{\widehat{M}}$.
We denote by
$$H^{1}_{{\operatorname{Herr}}}(f,g)$$
the set of equivalence classes of all extensions
of $(f,g)$ to a framed $(\varphi,\Gamma)$-module
with $P$-structure.
Let $u_{f},u_{g}\in U(\mathbb{A}_{K,A})$.
Set
$[\phi]=u_{f}f$ and $[\gamma]=u_{g}g$.
Note that $([\phi],[\gamma])$
is a $(\varphi,\Gamma)$-module
if and only if
$$u_{f}{\operatorname{Int}}_{g}(\gamma(u_{f}^{-1}))=u_{g}{\operatorname{Int}}_{f}(\varphi(u_{g}^{-1}))$$
by Lemma 3.3.
Here ${\operatorname{Int}}_{?}(*)=?*?^{-1}$.
3.5. Assumption
We assume $U$ is a unipotent algebraic group
of nilpotency class $2$ and $p\neq 2$.
Assume there exists an embedding
$\iota:U\hookrightarrow\operatorname{GL}_{N}$ such that
$(\iota(x)-1)^{2}=0$ for all $x\in U$.
In particular,
there is a well-defined truncated log map
$$\log:U\to\operatorname{Lie}U,\qquad u\mapsto(u-1)-\frac{(u-1)^{2}}{2},$$
whose inverse is the truncated exponential map
$$\exp:\operatorname{Lie}U\to U.$$
Here we embed $U$ into $\operatorname{Mat}_{d\times d}$ in order to define
addition and subtraction
(the choice of embedding is not important).
Write $x=\log(u_{f})$ and $y=\log(u_{g})$.
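The truncated series above can be sanity-checked numerically. The following sketch is an illustration only (not part of the argument): it uses a unipotent upper-triangular $3\times 3$ matrix, which has nilpotency class $2$, so all cubes vanish and the two-term truncations of $\log$ and $\exp$ are mutually inverse.

```python
import numpy as np

# Toy check: for u unipotent upper-triangular 3x3 (so (u-1)^3 = 0),
# the truncated series
#   log(u) = (u-1) - (u-1)^2/2   and   exp(x) = 1 + x + x^2/2
# are mutually inverse, as asserted for the truncated log/exp maps.
I = np.eye(3)
N = np.array([[0., 2., 5.], [0., 0., 3.], [0., 0., 0.]])  # N^3 = 0
u = I + N

x = (u - I) - (u - I) @ (u - I) / 2       # truncated log
u_back = I + x + x @ x / 2                # truncated exp

assert np.array_equal(np.linalg.matrix_power(N, 3), np.zeros((3, 3)))
assert np.array_equal(u_back, u)
```

All entries involved are small integers and exact halves, so the floating-point check is exact.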
3.6. Lemma
$([\phi],[\gamma])$ is a $(\varphi,\Gamma)$-module
with $P$-structure
extending $([\phi]_{\widehat{M}},[\gamma]_{\widehat{M}})$
if and only if
$$(1-{\operatorname{Int}}_{g}\circ\gamma)(x)-\frac{1}{2}[x,{\operatorname{Int}}_{g}\gamma(x)]=(1-{\operatorname{Int}}_{f}\circ\varphi)(y)-\frac{1}{2}[y,{\operatorname{Int}}_{f}\varphi(y)].$$
Proof.
It follows from the Baker-Campbell-Hausdorff formula.
∎
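The class-$2$ Baker-Campbell-Hausdorff identity $\exp(x)\exp(y)=\exp(x+y+\frac{1}{2}[x,y])$ underlying this proof can be verified on a toy example. The sketch below uses strictly upper-triangular $3\times 3$ matrices (a nilpotency-class-$2$ Lie algebra), where the truncated exponential is exact because all cubes vanish.

```python
import numpy as np

# Toy check of the class-2 BCH identity: for strictly upper-triangular
# 3x3 matrices x, y we have exp(x) exp(y) = exp(x + y + [x, y]/2),
# where exp(z) = 1 + z + z^2/2 is exact since z^3 = 0.
I = np.eye(3)

def texp(z):
    return I + z + z @ z / 2

x = np.array([[0., 1., 4.], [0., 0., 0.], [0., 0., 0.]])
y = np.array([[0., 0., 2.], [0., 0., 3.], [0., 0., 0.]])
bracket = x @ y - y @ x

lhs = texp(x) @ texp(y)
rhs = texp(x + y + bracket / 2)
assert np.array_equal(lhs, rhs)
```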
Recall that a nilpotent Lie algebra of nilpotency class $2$
is isomorphic to its associated graded Lie algebra
(with respect to either the lower or the upper central filtration).
We fix such an isomorphism $\operatorname{Lie}U\cong\operatorname{gr}^{\bullet}\operatorname{Lie}U=\operatorname{gr}^{1}\operatorname{Lie}U\oplus\operatorname{gr}^{0}\operatorname{Lie}U$
where $\operatorname{gr}^{0}\operatorname{Lie}U$ is the derived subalgebra of $\operatorname{Lie}U$.
Note that $\operatorname{gr}^{0}\operatorname{Lie}U$ is contained in the center of $\operatorname{Lie}U$.
In particular, if $x\in\operatorname{Lie}U$,
we can write $x=x_{0}+x_{1}$ where $x_{i}\in\operatorname{gr}^{i}\operatorname{Lie}U$.
3.7. Lemma
$([\phi],[\gamma])$ is a $(\varphi,\Gamma)$-module
extending $([\phi]_{\widehat{M}},[\gamma]_{\widehat{M}})$
if and only if
$$\begin{cases}(1-{\operatorname{Int}}_{g}\circ\gamma)(x_{1})-(1-{\operatorname{Int}}_{f}\circ\varphi)(y_{1})=0\\
(1-{\operatorname{Int}}_{g}\circ\gamma)(x_{0})-(1-{\operatorname{Int}}_{f}\circ\varphi)(y_{0})=\frac{1}{2}[x_{1},{\operatorname{Int}}_{g}\gamma(x_{1})]-\frac{1}{2}[y_{1},{\operatorname{Int}}_{f}\varphi(y_{1})].\end{cases}$$
Proof.
Arrange terms according to their degree
in the graded Lie algebra.
∎
3.8. Herr complexes
Let $V$ be a vector bundle over $\operatorname{Spec}A$,
and let $(s,t)$ be a framed $(\varphi,\Gamma)$-module
with $\operatorname{GL}(V)$-structure and $A$-coefficients.
Then the Herr complex associated to $(s,t)$ is by definition
the following
$$C^{\bullet}_{{\operatorname{Herr}}}(s,t):=[V(\mathbb{A}_{K,A})\xrightarrow{(s\circ\varphi-1,t\circ\gamma-1)}V(\mathbb{A}_{K,A})\oplus V(\mathbb{A}_{K,A})\xrightarrow{(t\circ\gamma-1,1-s\circ\varphi)^{t}}V(\mathbb{A}_{K,A})]$$
Write $Z^{\bullet}_{{\operatorname{Herr}}}(s,t)$,
$B^{\bullet}_{{\operatorname{Herr}}}(s,t)$
and $H^{\bullet}_{{\operatorname{Herr}}}(s,t)$
for the cocycle group, the coboundary group
and the cohomology group of $C^{\bullet}_{{\operatorname{Herr}}}(s,t)$.
The reader can easily check that our definition is consistent with
that of [EG23, Section 5.1].
We will denote by $d$ the differentials in
$C^{\bullet}_{{\operatorname{Herr}}}(f,g)$.
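As a sanity check on the shape of the complex, the following toy model verifies that the composite of the two differentials vanishes. It is linear rather than semilinear: $F$ stands in for $s\circ\varphi$ and $G$ for $t\circ\gamma$, chosen here as commuting matrices (powers of one matrix), which is the toy analogue of the $(\varphi,\Gamma)$-module relation.

```python
import numpy as np

# Toy Herr-type complex of a commuting pair (F, G) on V:
#   V --(F-1, G-1)--> V + V --((a,b) -> (G-1)a - (F-1)b)--> V.
# The composite is [(G-1)(F-1) - (F-1)(G-1)] v = (GF - FG) v = 0.
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 4))
F = A @ A              # F and G commute: both are powers of A
G = A @ A @ A
I = np.eye(4, dtype=int)

def d0(v):
    return ((F - I) @ v, (G - I) @ v)

def d1(a, b):
    return (G - I) @ a - (F - I) @ b

v = rng.integers(-3, 4, size=4)
assert np.array_equal(d1(*d0(v)), np.zeros(4, dtype=int))
```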
Note that $\widehat{M}$ acts on $\operatorname{gr}^{1}\operatorname{Lie}(U)$
and $\operatorname{gr}^{0}\operatorname{Lie}(U)$ by conjugation.
Write
$$\displaystyle{\operatorname{Int}}^{0}:\widehat{M}\to\operatorname{GL}(\operatorname{gr}^{0}\operatorname{Lie}(U)),$$
$$\displaystyle{\operatorname{Int}}^{1}:\widehat{M}\to\operatorname{GL}(\operatorname{gr}^{1}\operatorname{Lie}(U))$$
for the conjugation actions.
3.9. Cup products
Define a map
$$Q:C^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{1}(f,g))\to C^{2}_{{\operatorname{Herr}}}({\operatorname{Int}}^{0}(f,g)),\qquad(x_{1},y_{1})\mapsto\frac{1}{2}[x_{1},{\operatorname{Int}}_{g}^{1}\gamma(x_{1})]-\frac{1}{2}[y_{1},{\operatorname{Int}}_{f}^{1}\varphi(y_{1})],$$
and a symmetric bilinear pairing
$$\cup:C^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{1}(f,g))\times C^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{1}(f,g))\to C^{2}_{{\operatorname{Herr}}}({\operatorname{Int}}^{0}(f,g)),$$
$$((x_{1},y_{1}),(x_{1}^{\prime},y_{1}^{\prime}))\mapsto\frac{1}{2}\bigl(Q(x_{1}+x_{1}^{\prime},y_{1}+y_{1}^{\prime})-Q(x_{1},y_{1})-Q(x_{1}^{\prime},y_{1}^{\prime})\bigr).$$
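The polarization pattern here, a quadratic map $Q$ built from a bracket and then symmetrized into a pairing $\cup$, can be illustrated with a small matrix model. The operator $T$ below is a generic linear stand-in (not the actual ${\operatorname{Int}}_{g}\circ\gamma$), and the check only confirms that the polarized pairing is symmetric and bilinear.

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.integers(-2, 3, size=(3, 3)).astype(float)

def Q(v):
    # quadratic map modelled on Q(x1, y1): a bracket of v against a
    # linear image of v (T is a generic stand-in, NOT Int_g composed with gamma)
    Tv = S @ v @ S.T
    return (v @ Tv - Tv @ v) / 2

def cup(v, w):
    # polarization of Q, as in the definition of the cup pairing
    return (Q(v + w) - Q(v) - Q(w)) / 2

v = rng.integers(-2, 3, size=(3, 3)).astype(float)
w = rng.integers(-2, 3, size=(3, 3)).astype(float)

sym = np.allclose(cup(v, w), cup(w, v))          # cup is symmetric
lin = np.allclose(cup(2 * v, w), 2 * cup(v, w))  # and bilinear
assert sym and lin
```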
3.10. Proposition
Define
$$\displaystyle Z_{{\operatorname{Herr}}}^{1}(f,g):=\{(x_{0}+x_{1},y_{0}+y_{1})\in$$
$$\displaystyle C^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{0}(f,g)\oplus{\operatorname{Int}}^{1}(f,g))|$$
$$\displaystyle\begin{cases}(x_{0},y_{0})\in C^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{0}(f,g))\\
(x_{1},y_{1})\in C^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{1}(f,g))\\
d(x_{1},y_{1})=0\\
d(x_{0},y_{0})+(x_{1},y_{1})\cup(x_{1},y_{1})=0\end{cases}\}.$$
There exists a surjective map
$$Z_{{\operatorname{Herr}}}^{1}(f,g)\to H^{1}_{{\operatorname{Herr}}}(f,g),\qquad(x,y)\mapsto(\exp(x)f,\exp(y)g).$$
Proof.
It is a reformulation of Lemma 3.7.
∎
3.11. Lemma
The cup product induces a well-defined symmetric bilinear pairing
$$\cup_{H}:H^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{1}(f,g))\times H^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{1}(f,g))\to H^{2}_{{\operatorname{Herr}}}({\operatorname{Int}}^{0}(f,g)).$$
Proof.
The proof is formally similar to [Lin21, Lemma 2.3.3.2].
∎
3.12. Non-split groups
We remark that all results in this section
hold for non-split groups.
More precisely,
let $F\subset K$ be a $p$-adic field
and fix an action of $\Delta:=\operatorname{Gal}(K/F)$
on the pinned group $(\widehat{G},\widehat{B},\widehat{T},\{Y_{\alpha}\})$
and assume both $\widehat{M}$ and $P$ are $\Delta$-stable.
Set ${{}^{L}\!G}:=\widehat{G}\rtimes\Delta$,
${{}^{L}\!P}:=P\rtimes\Delta$,
and ${{}^{L}\!M}:=\widehat{M}\rtimes\Delta$.
Denote by $\operatorname{GL}^{!}(\operatorname{Lie}U)$
the (parabolic) subgroup of the general linear
group $\operatorname{GL}(\operatorname{Lie}U)$ that preserves the lower central filtration
of $\operatorname{Lie}U$.
Using the truncated log/exp map,
we have a group scheme homomorphism
${{}^{L}\!M}\to\operatorname{GL}^{!}(\operatorname{Lie}U)$,
which extends to a group scheme homomorphism
$${{}^{L}\!P}={{}^{L}\!M}\rtimes U\to\operatorname{GL}^{!}(\operatorname{Lie}U)\rtimes U.$$
Write $\widetilde{P}:=\operatorname{GL}^{!}(\operatorname{Lie}U)\rtimes U$,
and denote the homomorphism
${{}^{L}\!P}\to\widetilde{P}$
by $\Xi$.
3.13. Definition
In the non-split setting,
a framed $(\varphi,\Gamma)$-module
with ${{}^{L}\!P}$-structure
is a $(\varphi,\Gamma)$-module $(F,\phi_{F},\gamma_{F})$ with ${{}^{L}\!P}$-structure,
and a framed $(\varphi,\Gamma)$-module
$([\phi],[\gamma])$ with $\widetilde{P}$-structure,
together with an identification
$\Xi_{*}(F,\phi_{F},\gamma_{F})\cong([\phi],[\gamma])$.
The reason for the definition above is that
$(\varphi,\Gamma)$-modules with $H$-structure
are not represented by a pair of matrices
if $H$ is a disconnected group.
By choosing the map ${{}^{L}\!P}\to\widetilde{P}$,
we are able to work with the connected group $\widetilde{P}$.
Since the whole purpose of this section is to
understand extensions of $(\varphi,\Gamma)$-modules,
and we fix the ${{}^{L}\!M}$-semisimplification
of the framed $(\varphi,\Gamma)$-module
with ${{}^{L}\!P}$-structure,
the reader can easily see that all results carry over to the non-split case
by using $\widetilde{P}$ in place of $P$.
4. Cohomologically Heisenberg lifting problems
We keep the notation from the previous section.
Let $\Lambda\subset\bar{\mathbb{Z}}_{p}$ be a DVR.
Let $([\bar{\phi}],[\bar{\gamma}])$
be a framed $(\varphi,\Gamma)$-module
with $P$-structure and $\bar{\mathbb{F}}_{p}$-coefficients.
Write $(\bar{f},\bar{g})$ for $([\bar{\phi}]_{\widehat{M}},[\bar{\gamma}]_{\widehat{M}})$.
Fix a framed $(\varphi,\Gamma)$-module
$(f,g)$ with $\widehat{M}$-structure and $\Lambda$-coefficients
lifting $(\bar{f},\bar{g})$.
By Proposition 3.10,
there exists an element
$(\bar{x},\bar{y})\in Z^{1}_{{\operatorname{Herr}}}(\bar{f},\bar{g})$
representing $([\bar{\phi}],[\bar{\gamma}])$.
We can write $\bar{x}=\bar{x}_{0}+\bar{x}_{1}$
and $\bar{y}=\bar{y}_{0}+\bar{y}_{1}$
such that
$(\bar{x}_{i},\bar{y}_{i})\in C^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{i}(\bar{f},\bar{g}))$.
4.1. Definition
A cohomologically Heisenberg lifting problem
is a tuple
$(f,g,\bar{x},\bar{y},H)$
consisting of
•
a framed $(\varphi,\Gamma)$-module with $\widehat{M}$-structure and $\Lambda$-coefficients $(f,g)$;
•
an element $(\bar{x},\bar{y})\in Z^{1}_{{\operatorname{Herr}}}(\bar{f},\bar{g})$, and
•
a $\Lambda$-submodule $H\subset H^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{1}(f,g))$,
such that
(HL1)
the class of $(\bar{x}_{1},\bar{y}_{1})$ lies in the image of
$H$ in $H^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{1}(\bar{f},\bar{g}))$,
(HL2)
the pairing $\cup_{H}|_{H}:H\times H\to H^{2}_{{\operatorname{Herr}}}({\operatorname{Int}}^{0}(f,g))$ is surjective,
(HL3)
$H^{2}_{{\operatorname{Herr}}}({\operatorname{Int}}^{0}(\bar{f},\bar{g}))\cong\Lambda/\varpi$.
A solution to the lifting problem
$(f,g,\bar{x},\bar{y},H)$
is an element $(x,y)\in Z^{1}_{{\operatorname{Herr}}}(f,g)$
lifting $(\bar{x},\bar{y})$
such that
the image of $(x_{1},y_{1})$ in
$H^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{1}(f,g))$
is contained in $H$.
At first sight, a cohomologically Heisenberg lifting problem
defines an infinite system of quadratic polynomial equations.
In the following theorem, we show that we may truncate
the infinite system to a finite system and solve cohomologically
Heisenberg lifting problems.
4.2. Theorem
Each cohomologically Heisenberg lifting problem is solvable
after replacing $\Lambda$ by a larger DVR $\Lambda^{\prime}\subset\bar{\mathbb{Z}}_{p}$.
Proof.
Before we start, we remark that
$C^{\bullet}_{{\operatorname{Herr}}}({\operatorname{Int}}^{i}(f,g))\otimes_{\Lambda}\Lambda/\varpi=C^{\bullet}_{{\operatorname{Herr}}}({\operatorname{Int}}^{i}(\bar{f},\bar{g}))$
while it is not generally true that
$H^{\bullet}_{{\operatorname{Herr}}}({\operatorname{Int}}^{i}(f,g))\otimes_{\Lambda}\Lambda/\varpi=H^{\bullet}_{{\operatorname{Herr}}}({\operatorname{Int}}^{i}(\bar{f},\bar{g}))$.
Write $Z_{H}$ for the preimage of $H$ in $Z^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{1}(f,g))$.
Since $Z^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{1}(f,g))$ is $\Lambda$-torsion-free,
so is $Z_{H}$.
Let $X\subset Z_{H}$ be a finite $\Lambda$-submodule
which maps surjectively onto $H$.
Such an $X$ exists because $H^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{1}(f,g))$
is a finite $\Lambda$-module ([EG23, Theorem 5.1.22]).
Since $\Lambda$ is a DVR, $X$ is finite free over $\Lambda$.
Let $W\subset C^{2}_{{\operatorname{Herr}}}({\operatorname{Int}}^{0}(f,g))$
be a finite free $\Lambda$-submodule
containing $X\cup X$.
By (HL2), $W\to H^{2}_{{\operatorname{Herr}}}({\operatorname{Int}}^{0}(f,g))$
is surjective.
Set $B_{W}:=B^{2}_{{\operatorname{Herr}}}({\operatorname{Int}}^{0}(f,g))\cap W$;
then $H^{2}_{{\operatorname{Herr}}}({\operatorname{Int}}^{0}(f,g))=W/B_{W}$.
Again, $B_{W}$ is a finite free $\Lambda$-module.
Finally, let $Y\subset C^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{0}(f,g))$
be a finite free $\Lambda$-submodule
which maps surjectively onto $B_{W}$
and contains at least one lift of $(\bar{x}_{0},\bar{y}_{0})$.
Now consider the system of equations
$$\mathfrak{x}\cup\mathfrak{x}+d\mathfrak{y}=0\in W\qquad(\dagger)$$
where $\mathfrak{x}\in X$ and $\mathfrak{y}\in Y$
are the variables and $W$ is the value space.
We check that ($\dagger$) is a Heisenberg equation in the sense of
Definition 2.2.
(H1) follows from (HL3) and the Nakayama lemma,
while (H2) follows from (HL2).
The equation ($\dagger$) admits a mod $\varpi$ solution
$\bar{\mathfrak{x}},\bar{\mathfrak{y}}$
defined by $(\bar{x},\bar{y})\in Z^{1}_{{\operatorname{Herr}}}(\bar{f},\bar{g})$.
By Theorem 2.3,
($\dagger$) admits a solution lifting
$\bar{\mathfrak{x}},\bar{\mathfrak{y}}$
after extending the coefficient ring $\Lambda$.
The solution to the equation ($\dagger$)
is also a solution to the lifting problem.
∎
5. Applications to Galois cohomology
Let $F/\mathbb{Q}_{p}$ be a $p$-adic field,
and let $G$ be a tamely ramified quasi-split reductive group over $F$
which splits over $K$.
Write $\Delta:=\operatorname{Gal}(K/F)$.
Fix a $\Delta$-stable pinning $(G,B,T,\{X_{\alpha}\})$
of $G$,
and let $(\widehat{G},\widehat{B},\widehat{T},\{Y_{\alpha}\})$
be the dual pinned group.
Let $P\subset\widehat{G}$ be a $\Delta$-stable parabolic of $\widehat{G}$
with $\Delta$-stable Levi subgroup $\widehat{M}$
and unipotent radical $U$.
Denote by ${{}^{L}\!P}$ the semi-direct product $U\rtimes{{}^{L}\!M}$
where ${{}^{L}\!M}=\widehat{M}\rtimes\Delta$.
In the terminology of [Lin21], ${{}^{L}\!P}$
is a big pseudo-parabolic of ${{}^{L}\!G}=\widehat{G}\rtimes\Delta$
and all big pseudo-parabolics of ${{}^{L}\!G}$
are of the form ${{}^{L}\!P}$
(see [Lin21, Section 3]).
We enforce Assumption 3.5 throughout this section.
Note that ${{}^{L}\!M}$ acts on $\operatorname{Lie}U=\operatorname{gr}^{0}\operatorname{Lie}U\oplus\operatorname{gr}^{1}\operatorname{Lie}U$
by the adjoint action, and we denote the adjoint actions by
${\operatorname{Int}}^{i}$ as in Paragraph 3.8.
5.1. Definition
Let $\bar{\rho}_{P}:\operatorname{Gal}_{F}\to{{}^{L}\!P}(\bar{\mathbb{F}}_{p})$
be a Langlands parameter with Levi factor
$\bar{\rho}_{M}:\operatorname{Gal}_{F}\to{{}^{L}\!M}(\bar{\mathbb{F}}_{p})$.
We say $\bar{\rho}_{P}$ is a Heisenberg-type extension of $\bar{\rho}_{M}$
if
$$\dim_{\bar{\mathbb{F}}_{p}}H^{2}(\operatorname{Gal}_{K},\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))\leq 1.$$
Here the $\operatorname{Gal}_{F}$-action on $\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p})$
is obtained from composing $\bar{\rho}_{M}$
and ${\operatorname{Int}}^{0}:{{}^{L}\!M}\to\operatorname{GL}(\operatorname{gr}^{0}\operatorname{Lie}U)$.
5.2. Cup products on Galois cohomology
Let $A$ be either $\bar{\mathbb{F}}_{p}$ or $\bar{\mathbb{Z}}_{p}$.
If $\rho_{M}:\operatorname{Gal}_{F}\to{{}^{L}\!M}(A)$ is an $L$-parameter,
we can equip $\operatorname{Lie}U(A)$ with $\operatorname{Gal}_{F}$-action
via $\rho_{M}$.
Note that there exists a symmetric bilinear pairing
$$H^{1}(\operatorname{Gal}_{F},\operatorname{gr}^{1}\operatorname{Lie}U(A))\times H^{1}(\operatorname{Gal}_{F},\operatorname{gr}^{1}\operatorname{Lie}U(A))\to H^{2}(\operatorname{Gal}_{F},\operatorname{gr}^{0}\operatorname{Lie}U(A)),$$
which is defined in [Lin21, Section 3.2].
Alternatively, we can transport the symmetric cup product
on $(\varphi,\Gamma)$-cohomology defined in Definition 3.9
and Lemma 3.11,
and later generalized in 3.12
to Galois cohomology.
5.3. Partial extensions and partial lifts
A partial extension of $\rho_{M}$
is a continuous group homomorphism
$\rho^{\prime}:\operatorname{Gal}_{F}\to\frac{{{}^{L}\!P}}{[U,U]}(\bar{\mathbb{Z}}_{p})$
extending $\rho_{M}:\operatorname{Gal}_{F}\to{{}^{L}\!M}(\bar{\mathbb{Z}}_{p})=\frac{{{}^{L}\!P}}{U}(\bar{\mathbb{Z}}_{p})$.
Here $[U,U]$ is the derived subgroup of $U$.
The set of equivalence classes of partial extensions of
$\rho_{M}$ is in natural bijection with
$H^{1}(\operatorname{Gal}_{F},\frac{U}{[U,U]}(\bar{\mathbb{Z}}_{p}))=H^{1}(\operatorname{Gal}_{F},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))$.
Let $\bar{\rho}_{P}:\operatorname{Gal}_{F}\to{{}^{L}\!P}(\bar{\mathbb{F}}_{p})$ be an $L$-parameter.
A partial lift of $\bar{\rho}_{P}$
is a group homomorphism $\rho^{\prime}:\operatorname{Gal}_{F}\to\frac{{{}^{L}\!P}}{[U,U]}(\bar{\mathbb{Z}}_{p})$
which lifts $\bar{\rho}_{P}$ mod $[U,U]$.
5.4. Lemma
A partial extension $\rho^{\prime}:\operatorname{Gal}_{F}\to\frac{{{}^{L}\!P}}{[U,U]}(A)$
of $\rho_{M,A}$ extends to a full extension
$\rho:\operatorname{Gal}_{F}\to{{}^{L}\!P}(A)$
if and only if $c\cup c=0$
where $c\in H^{1}(\operatorname{Gal}_{F},\operatorname{gr}^{1}\operatorname{Lie}U(A))$
is the cohomology class corresponding to $\rho^{\prime}$.
Proof.
It follows immediately from Proposition 3.10.
∎
5.5. Theorem
Assume $p\neq 2$.
Let $\bar{\rho}_{P}:\operatorname{Gal}_{F}\to{{}^{L}\!P}(\bar{\mathbb{F}}_{p})$
be an extension of $\bar{\rho}_{M}$.
Assume $\bar{\rho}_{M}$ admits a lift $\rho_{M}:\operatorname{Gal}_{F}\to{{}^{L}\!M}(\bar{\mathbb{Z}}_{p})$
such that
(i)
$\bar{\rho}_{P}$
is a Heisenberg-type extension of $\bar{\rho}_{M}$,
(ii)
$\bar{\rho}_{P}|_{\operatorname{Gal}_{K}}$ admits a partial lift
which is a partial extension of
$\rho_{M}|_{\operatorname{Gal}_{K}}$, and
(iii)
the pairing
$$H^{1}(\operatorname{Gal}_{F},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\times H^{1}(\operatorname{Gal}_{F},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\to H^{2}(\operatorname{Gal}_{F},\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))$$
is non-trivial
unless $H^{2}(\operatorname{Gal}_{F},\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))=0$,
then $\bar{\rho}_{P}$
admits a lift $\rho_{P}:\operatorname{Gal}_{F}\to{{}^{L}\!P}(\bar{\mathbb{Z}}_{p})$
with Levi factor $\rho_{M}$.
Proof.
Let $A$ be either $\bar{\mathbb{F}}_{p}$ or $\bar{\mathbb{Z}}_{p}$,
and let $\rho_{M,A}$ be $\bar{\rho}_{M}$ or $\rho_{M}$,
respectively.
The set of equivalence classes
of $L$-parameters $\operatorname{Gal}_{F}\to{{}^{L}\!P}/[U,U](A)$
extending $\rho_{M,A}$
is in natural bijection with
the $A$-module
$H^{1}(\operatorname{Gal}_{F},\operatorname{gr}^{1}\operatorname{Lie}U(A))$.
Since $K/F$ is assumed to have prime-to-$p$ degree,
we have
$$H^{1}(\operatorname{Gal}_{F},\operatorname{gr}^{1}\operatorname{Lie}U(A))=H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(A))^{\Delta}$$
by [Koc02, Theorem 3.15].
The $L$-parameter $\bar{\rho}_{P}$ defines
an element $\bar{c}\in H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))^{\Delta}$.
By item (ii), there exists a lift
$c^{\prime}\in H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))$
lifting $\bar{c}$.
Define $c:=\frac{1}{[K:F]}\sum_{\gamma\in\Delta}\gamma c^{\prime}\in H^{1}(\operatorname{Gal}_{F},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))$.
It is clear that $c$
lifts $\bar{c}$.
There are two possibilities: either
$$\dim_{\bar{\mathbb{F}}_{p}}H^{2}(\operatorname{Gal}_{F},\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))=0,$$
or
$$\dim_{\bar{\mathbb{F}}_{p}}H^{2}(\operatorname{Gal}_{F},\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))=1.$$
In the former case, there is no obstruction to extension and lifting
and the theorem follows from [Lin21, Proposition 5.3.1].
Now we consider the latter case.
By Fontaine’s theory of $(\varphi,\Gamma)$-modules,
$\rho_{M}$ corresponds to a framed $(\varphi,\Gamma)$-module
$(f,g)$ with ${{}^{L}\!M}$-structure (or rather $\operatorname{GL}^{!}(\operatorname{Lie}U)$-structure by Paragraph 3.12), and
$\bar{\rho}_{P}$ corresponds to an element
$(\bar{x},\bar{y})\in Z^{1}_{{\operatorname{Herr}}}(\bar{f},\bar{g})$.
Here $(\bar{f},\bar{g})$ is the reduction of $(f,g)$.
Since Galois cohomology is naturally isomorphic to the cohomology
of Herr complexes ([EG23, Theorem 5.1.29]),
we can identify $H^{1}_{{\operatorname{Herr}}}({\operatorname{Int}}^{i}(f,g))$
with $H^{1}(\operatorname{Gal}_{F},\operatorname{gr}^{i}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))$.
Consider the tuple $(f,g,\bar{x},\bar{y},H^{1}(\operatorname{Gal}_{F},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p})))$.
We want to check that this tuple is a cohomologically Heisenberg lifting problem in the sense of Definition 4.1.
(HL3) follows from assumption (i),
(HL2) follows from assumption (iii) and (HL3),
and (HL1) follows from assumption (ii) and the discussion in the second paragraph of this proof.
We finish the proof by invoking Theorem 4.2.
∎
6. Interaction of cup products with $\mathbb{Z}/2$-action
Let $K$ be a $p$-adic field.
Let $a$, $b$, and $c$ be positive integers with $a=c$.
Fix a Galois representation
$$\bar{\tau}=\begin{bmatrix}\bar{\tau}_{a}&&\\
&\bar{\tau}_{b}&\\
&&\bar{\tau}_{c}\end{bmatrix}:\operatorname{Gal}_{K}\to\begin{bmatrix}\operatorname{GL}_{a}&&\\
&\operatorname{GL}_{b}&\\
&&\operatorname{GL}_{c}\end{bmatrix}(\bar{\mathbb{F}}_{p}),$$
as well as
a lift
$$\tau=\begin{bmatrix}\tau_{a}&&\\
&\tau_{b}&\\
&&\tau_{c}\end{bmatrix}:\operatorname{Gal}_{K}\to\begin{bmatrix}\operatorname{GL}_{a}&&\\
&\operatorname{GL}_{b}&\\
&&\operatorname{GL}_{c}\end{bmatrix}(\bar{\mathbb{Z}}_{p})$$
of $\bar{\tau}$.
Write $\operatorname{gr}^{0}\operatorname{Lie}U:=\operatorname{Mat}_{a\times c}$,
and $\operatorname{gr}^{1}\operatorname{Lie}U:=\operatorname{Mat}_{a\times b}\oplus\operatorname{Mat}_{b\times c}$.
Recall that we have defined a (symmetrized) cup product
$$H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(A))\times H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(A))\to H^{2}(\operatorname{Gal}_{K},\operatorname{gr}^{0}\operatorname{Lie}U(A))$$
for $A=\bar{\mathbb{F}}_{p},\bar{\mathbb{Z}}_{p}$.
If $c\in H^{i}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(A))$,
write $c=(c_{1},c_{2})$
where $c_{1}\in H^{i}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(A))$,
and $c_{2}\in H^{i}(\operatorname{Gal}_{K},\operatorname{Mat}_{b\times c}(A))$.
6.1. Lemma
For the cup product
$$H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(A))\times H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(A))\to H^{2}(\operatorname{Gal}_{K},\operatorname{gr}^{0}\operatorname{Lie}U(A)),$$
we have
$$(c_{1},0)\cup(c_{1},0)=0,\qquad(0,c_{2})\cup(0,c_{2})=0,\qquad(c_{1},0)\cup(0,c_{2})=\frac{1}{2}(c_{1},c_{2})\cup(c_{1},c_{2}),$$
$$(c_{1},0)\cup(c_{1}^{\prime},0)=0,\qquad(0,c_{2})\cup(0,c_{2}^{\prime})=0,$$
for any $c_{1},c_{2},c_{1}^{\prime},c_{2}^{\prime}$.
Proof.
The first two identities follow from Lemma 5.4.
The last three identities follow from the first two.
∎
Write $\Delta$ for the finite group $\{1,\j\}$ with two elements.
While $\Delta$ denoted the Galois group $\operatorname{Gal}(K/F)$ in the previous sections,
in this section $\Delta$ is merely an abstract group.
6.2. Definition
An action of $\Delta$ on each of
$H^{i}(\operatorname{Gal}_{K},\operatorname{gr}^{j}\operatorname{Lie}U(A))$
is said to be classical
if
•
$\j H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(A))\subset H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{b\times c}(A))$,
•
$\j H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{b\times c}(A))\subset H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(A))$,
•
the $\Delta$-action is compatible with the cup products.
We also call such a $\Delta$-action a
classical $\Delta$-structure.
6.3. Proposition
Fix a classical $\Delta$-structure.
If
$$H^{2}(\operatorname{Gal}_{K},\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))=H^{2}(\operatorname{Gal}_{K},\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))^{\Delta}=\bar{\mathbb{F}}_{p},$$
then the cup product
$$H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))^{\Delta}\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\times H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))^{\Delta}\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\to H^{2}(\operatorname{Gal}_{K},\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))^{\Delta}$$
is non-trivial if and only if
$$H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\times H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\to H^{2}(\operatorname{Gal}_{K},\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))$$
is non-trivial.
Proof.
Since $\cup$ is a symmetric pairing,
non-triviality of $\cup$
on $H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))$
implies $(c_{1},c_{2})\cup(c_{1},c_{2})\neq 0$
for some $(c_{1},c_{2})\in H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}$.
By Lemma 6.1,
$(c_{1},0)\cup(0,c_{2})\neq 0$.
Since
$$H^{2}(\operatorname{Gal}_{K},\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))^{\Delta}=H^{2}(\operatorname{Gal}_{K},\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p})),$$
we conclude that $\Delta$ acts trivially on $H^{2}(\operatorname{Gal}_{K},\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))$.
We argue by contradiction and assume
$\cup$ is trivial on $H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))^{\Delta}\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}$.
We claim $x\cup y=0$
for each $x\in H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))^{\Delta}\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}$ and
$y\in H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}$.
Indeed,
$$2(x\cup y)=x\cup y+\j(x\cup y)=x\cup y+(\j x)\cup(\j y)=x\cup y+x\cup\j y=x\cup(y+\j y)=0$$
because both $x$ and $y+\j y$ lie in
$H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))^{\Delta}\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}$.
Since $(c_{1},0)+\j(c_{1},0)\in H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))^{\Delta}\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}$,
we have
$((c_{1},0)+\j(c_{1},0))\cup(0,c_{2})=0$.
However, since $(c_{1},0)\cup(0,c_{2})\neq 0$,
we must have $\j(c_{1},0)\cup(0,c_{2})\neq 0$.
By the classicality of the $\Delta$-structure,
$\j(c_{1},0)=(0,c_{2}^{\prime})$ for some $c_{2}^{\prime}$ and thus by Lemma 6.1,
$\j(c_{1},0)\cup(0,c_{2})=0$ and we get a contradiction.
∎
Next, we establish a general non-triviality result for cup products;
before that, we need a non-degeneracy result.
6.4. Lemma
Assume $\bar{\tau}_{a}$ and $\bar{\tau}_{c}$ are irreducible.
Then the cup product
$$H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_{p}))\times H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{b\times c}(\bar{\mathbb{F}}_{p}))\to H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times c}(\bar{\mathbb{F}}_{p}))$$
is non-degenerate.
Proof.
Fix a non-zero element $x\in H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{b\times c}(\bar{\mathbb{F}}_{p}))$.
Such an extension class $x$ corresponds to a non-split extension
$\bar{\tau}_{d}=\begin{bmatrix}\bar{\tau}_{b}&*\\
&\bar{\tau}_{c}\end{bmatrix}$.
In particular, the map
$$H^{0}(\operatorname{Gal}_{K},\bar{\tau}_{a}^{\vee}\otimes\bar{\tau}_{d}(1))\to H^{0}(\operatorname{Gal}_{K},\bar{\tau}_{a}^{\vee}\otimes\bar{\tau}_{c}(1))$$
is the zero map (otherwise the socle of $\bar{\tau}_{d}$ would be strictly larger than the socle of $\bar{\tau}_{b}$, and $\bar{\tau}_{d}$ would be a split extension).
By local Tate duality,
the map
$$H^{2}(\operatorname{Gal}_{K},\bar{\tau}_{c}^{\vee}\otimes\bar{\tau}_{a})\to H^{2}(\operatorname{Gal}_{K},\bar{\tau}_{d}^{\vee}\otimes\bar{\tau}_{a})$$
is also the zero map.
The short exact sequence
$$0\to\bar{\tau}_{b}\to\bar{\tau}_{d}\to\bar{\tau}_{c}\to 0$$
induces
the long exact sequence
$$H^{1}(\operatorname{Gal}_{K},\bar{\tau}_{c}^{\vee}\otimes\bar{\tau}_{a})\to H^{1}(\operatorname{Gal}_{K},\bar{\tau}_{d}^{\vee}\otimes\bar{\tau}_{a})\to H^{1}(\operatorname{Gal}_{K},\bar{\tau}_{b}^{\vee}\otimes\bar{\tau}_{a})\to H^{2}(\operatorname{Gal}_{K},\bar{\tau}_{c}^{\vee}\otimes\bar{\tau}_{a})\xrightarrow{0}H^{2}(\operatorname{Gal}_{K},\bar{\tau}_{d}^{\vee}\otimes\bar{\tau}_{a}).$$
Since
$H^{2}(\operatorname{Gal}_{K},\bar{\tau}_{c}^{\vee}\otimes\bar{\tau}_{a})\neq 0$,
there exists an element
$y\in H^{1}(\operatorname{Gal}_{K},\bar{\tau}_{b}^{\vee}\otimes\bar{\tau}_{a})$
which maps to a non-zero element of $H^{2}(\operatorname{Gal}_{K},\bar{\tau}_{c}^{\vee}\otimes\bar{\tau}_{a})$,
and $y$ does not lift to a class in
$H^{1}(\operatorname{Gal}_{K},\bar{\tau}_{d}^{\vee}\otimes\bar{\tau}_{a})$.
By Lemma 5.4,
$(x,y)\cup(x,y)\neq 0$,
and thus by Lemma 6.1,
$x\cup y=\frac{1}{2}((x,y)\cup(x,y))\neq 0$.
∎
6.5. Lemma
Let $X,Y$ be vector spaces over a field $\kappa$.
Let
$$\cup:X\times Y\to\kappa$$
be a non-degenerate
bilinear pairing.
Let $H_{X}\subset X$ and $H_{Y}\subset Y$
be subspaces such that
$x\cup y=0$ for all $x\in H_{X}$ and $y\in H_{Y}$.
Then either $\dim X\geq 2\dim H_{X}$
or $\dim Y\geq 2\dim H_{Y}$.
Proof.
It suffices to show
$\dim X+\dim Y\geq 2(\dim H_{X}+\dim H_{Y})$.
We define a symmetric bilinear form on
$X\oplus Y$ by setting
$(x,y)\cdot(x^{\prime},y^{\prime})=x\cup y^{\prime}+x^{\prime}\cup y$.
This form is non-degenerate since $\cup$ is,
and $H_{X}\oplus H_{Y}$ is a totally isotropic subspace for it;
since a totally isotropic subspace of a non-degenerate symmetric form
has dimension at most half the total dimension, the lemma follows.
∎
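A minimal instance of the lemma (illustration only): take $X=Y=\mathbb{Q}^{3}$ with the standard dot product, $H_{X}=\operatorname{span}(e_{1},e_{2})$ and $H_{Y}=\operatorname{span}(e_{3})$. The hypotheses hold, and the dichotomy is realized by $Y$.

```python
import numpy as np

X_dim = Y_dim = 3
e = np.eye(3)
H_X = [e[0], e[1]]          # H_X = span(e1, e2)
H_Y = [e[2]]                # H_Y = span(e3)

# every pairing between H_X and H_Y vanishes
assert all(float(x @ y) == 0.0 for x in H_X for y in H_Y)
# the dichotomy of the lemma: here dim Y = 3 >= 2 * dim H_Y = 2
assert X_dim >= 2 * len(H_X) or Y_dim >= 2 * len(H_Y)
```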
6.6. Lemma
Assume both $\bar{\tau}_{a}$ and $\bar{\tau}_{c}$ are irreducible.
The cup product
$$H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\times H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{b\times c}(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\to H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times c}(\bar{\mathbb{F}}_{p}))$$
is non-trivial unless
all of the following hold:
•
$a=1$,
•
$K=\mathbb{Q}_{p}$,
•
either $\bar{\tau}_{b}=\bar{\tau}_{a}(-1)^{\oplus b}$
and $\bar{\tau}_{c}=\bar{\tau}_{a}(-1)$; or
$\bar{\tau}_{b}=\bar{\tau}_{a}^{\oplus b}$
and $\bar{\tau}_{c}=\bar{\tau}_{a}(-1)$.
Proof.
By Lemma 6.5
and Lemma 6.4,
the lemma holds if
($*$)
$$\dim_{\bar{\mathbb{F}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}>\frac{1}{2}\dim_{\bar{\mathbb{F}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_{p}))$$
and
($**$)
$$\dim_{\bar{\mathbb{F}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{b\times c}(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}>\frac{1}{2}\dim_{\bar{\mathbb{F}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{b\times c}(\bar{\mathbb{F}}_{p})).$$
We argue by contradiction
and assume that either ($*$) or ($**$) fails.
Since ($*$) and ($**$) are completely symmetric,
we may assume that ($*$) fails,
i.e., that
(1)
$$\dim_{\bar{\mathbb{F}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\leq\frac{1}{2}\dim_{\bar{\mathbb{F}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_{p})).$$
By the universal coefficient theorem, we have
the short exact sequence
$$0\to H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\to H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_{p}))\to{\operatorname{Tor}}^{\bar{\mathbb{Z}}_{p}}_{1}(H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p})),\bar{\mathbb{F}}_{p})\to 0.$$
Therefore the assumption (1) is equivalent to
(2)
$$\dim_{\bar{\mathbb{F}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\leq\dim_{\bar{\mathbb{F}}_{p}}{\operatorname{Tor}}^{\bar{\mathbb{Z}}_{p}}_{1}(H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p})),\bar{\mathbb{F}}_{p}).$$
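Spelling out why (1) and (2) are equivalent (a routine dimension count; the abbreviations $x$ and $t$ are introduced only for this remark): set
$$x=\dim_{\bar{\mathbb{F}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\,,\qquad t=\dim_{\bar{\mathbb{F}}_{p}}{\operatorname{Tor}}^{\bar{\mathbb{Z}}_{p}}_{1}(H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p})),\bar{\mathbb{F}}_{p})\,.$$
The short exact sequence gives $\dim_{\bar{\mathbb{F}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_{p}))=x+t$, so (1) reads $x\leq\frac{1}{2}(x+t)$, which is equivalent to $x\leq t$, that is, to (2).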
Note that
$$\dim_{\bar{\mathbb{F}}_{p}}{\operatorname{Tor}}^{\bar{\mathbb{Z}}_{p}}_{1}(H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p})),\bar{\mathbb{F}}_{p})\leq\dim_{\bar{\mathbb{F}}_{p}}H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_{p})),$$
since $H^{2}$ commutes with base change;
also see [Wei94, Example 3.1.7] for the computation of ${\operatorname{Tor}}$.
Also note that
$$\dim_{\bar{\mathbb{F}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p}))\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\geq\operatorname{rank}_{\bar{\mathbb{Z}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p}))_{\text{torsion-free}},$$
and
$$\operatorname{rank}_{\bar{\mathbb{Z}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p}))_{\text{torsion-free}}=\dim_{\bar{\mathbb{Q}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Q}}_{p})).$$
By the local Euler characteristic formula, we have
$$\begin{aligned}\dim_{\bar{\mathbb{Q}}_{p}}H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Q}}_{p}))&=\dim_{\bar{\mathbb{Q}}_{p}}H^{0}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Q}}_{p}))+\dim_{\bar{\mathbb{Q}}_{p}}H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Q}}_{p}))+[K:\mathbb{Q}_{p}]ab\\
&\geq[K:\mathbb{Q}_{p}]ab.\end{aligned}$$
On the other hand, by local Tate duality
$$\dim_{\bar{\mathbb{F}}_{p}}H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_{p}))=\dim_{\bar{\mathbb{F}}_{p}}H^{0}(\operatorname{Gal}_{K},\tau_{a}^{\vee}\otimes\tau_{b}(1))\leq ab.$$
Combining all of the above, (2) becomes
$$[K:\mathbb{Q}_{p}]ab\leq ab.$$
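To summarize, under the assumption (2) the estimates above chain together as follows (abbreviating $H^{i}(A)=H^{i}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(A))$ for this display only):
$$[K:\mathbb{Q}_{p}]ab\leq\dim_{\bar{\mathbb{F}}_{p}}H^{1}(\bar{\mathbb{Z}}_{p})\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\leq\dim_{\bar{\mathbb{F}}_{p}}{\operatorname{Tor}}^{\bar{\mathbb{Z}}_{p}}_{1}(H^{2}(\bar{\mathbb{Z}}_{p}),\bar{\mathbb{F}}_{p})\leq\dim_{\bar{\mathbb{F}}_{p}}H^{2}(\bar{\mathbb{F}}_{p})\leq ab\,.$$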
So, all inequalities above must be equalities and we are forced to have
(i)
$K=\mathbb{Q}_{p}$,
(ii)
$H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{Z}}_{p}))$ is torsion-free,
and
(iii)
$\dim_{\bar{\mathbb{F}}_{p}}H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_{p}))=ab$.
Item (iii) further forces $a=1$ because $\bar{\tau}_{a}$ is assumed to be irreducible.
∎
6.7. Corollary
Fix a classical $\Delta$-structure.
Assume both $\bar{\tau}_{a}$ and $\bar{\tau}_{c}$ are irreducible.
The cup product
$$H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))^{\Delta}\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\times H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))^{\Delta}\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\to H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times c}(\bar{\mathbb{F}}_{p}))^{\Delta}$$
is non-trivial unless all of the following hold:
•
$a=1$,
•
$K=\mathbb{Q}_{p}$,
•
either $\bar{\tau}_{b}=\bar{\tau}_{a}(-1)^{\oplus b}$
and $\bar{\tau}_{c}=\bar{\tau}_{a}(-1)$; or
$\bar{\tau}_{b}=\bar{\tau}_{a}^{\oplus b}$
and $\bar{\tau}_{c}=\bar{\tau}_{a}(-1)$.
Proof.
Combine Lemma 6.6
and Proposition 6.3.
∎
7. Example A: unitary groups
Now assume $G=U_{n}$ is a quasi-split tamely ramified unitary group over $F$
which splits over the quadratic extension $K/F$
(thus we have implicitly assumed $p\neq 2$).
The Dynkin diagram of $G$ is a chain of $n-1$ vertices
(\dynkinA),
and $\Delta=\operatorname{Gal}(K/F)$ acts on ${\operatorname{Dyn}}(G)$ by reflection.
The maximal proper $\Delta$-stable subsets of ${\operatorname{Dyn}}(G)$
are given by removing either two symmetric vertices, or the middle vertex.
Therefore, the Levi subgroups of maximal proper $F$-parabolics
of $G$ are of the form
$$M_{k}:=\operatorname{Res}_{K/F}\operatorname{GL}_{k}\times U_{n-2k}.$$
If ${{}^{L}\!P}$ is a maximal proper parabolic of ${{}^{L}\!G}$,
then the Levi of ${{}^{L}\!P}$ is
of the form ${{}^{L}\!M}_{k}$;
we will write ${{}^{L}\!P}_{k}$ for ${{}^{L}\!P}$ to emphasize its type.
7.1. Proposition
Let $\bar{\rho}:\operatorname{Gal}_{F}\to{{}^{L}\!G}(\bar{\mathbb{F}}_{p})$ be an $L$-parameter.
Then either $\bar{\rho}$ is elliptic,
or $\bar{\rho}$ factors through
${{}^{L}\!P}_{k}(\bar{\mathbb{F}}_{p})$ for some $k$
such that the composite
$\bar{r}:\operatorname{Gal}_{F}\xrightarrow{\bar{\rho}^{\operatorname{ss}}}{{}^{L}\!M}_{k}(\bar{\mathbb{F}}_{p})\to{{}^{L}\!\operatorname{Res}}_{K/F}\operatorname{GL}_{k}(\bar{\mathbb{F}}_{p})$
is elliptic.
Proof.
By [Lin23, Theorem B],
$\bar{\rho}$ is either elliptic, or factors through
some ${{}^{L}\!P}_{k}(\bar{\mathbb{F}}_{p})$.
By the non-abelian Shapiro’s lemma,
$L$-parameters $\operatorname{Gal}_{F}\to{{}^{L}\!\operatorname{Res}}_{K/F}\operatorname{GL}_{k}(\bar{\mathbb{F}}_{p})$
are in natural bijection with $L$-parameters
$\operatorname{Gal}_{K}\to\operatorname{GL}_{k}(\bar{\mathbb{F}}_{p})$,
and this bijection clearly preserves ellipticity.
Suppose $\bar{r}$ is not elliptic. Then
$\bar{r}$, when regarded as a Galois representation
$\operatorname{Gal}_{K}\to\operatorname{GL}_{k}(\bar{\mathbb{F}}_{p})$,
contains a proper irreducible subrepresentation
$\bar{r}_{0}:\operatorname{Gal}_{K}\to\operatorname{GL}_{s}(\bar{\mathbb{F}}_{p})$.
It is easy to see that $\bar{\rho}$
also factors through ${{}^{L}\!P}_{s}(\bar{\mathbb{F}}_{p})$. So we are done.
∎
We take a closer look at ${{}^{L}\!P}_{k}$:
$${{}^{L}\!P}_{k}=\begin{bmatrix}\operatorname{GL}_{k}&\operatorname{Mat}_{k\times(n-2k)}&\operatorname{Mat}_{k\times k}\\
&\operatorname{GL}_{n-2k}&\operatorname{Mat}_{(n-2k)\times k}\\
&&\operatorname{GL}_{k}\end{bmatrix}\rtimes\Delta$$
7.2. Lemma
If $\bar{\rho}:\operatorname{Gal}_{F}\to{{}^{L}\!G}(\bar{\mathbb{F}}_{p})$
is not elliptic, then there exists a parabolic
${{}^{L}\!P}_{k}$ through which $\bar{\rho}$ factors
and $\bar{\rho}$ is a Heisenberg-type extension of
some $\bar{\rho}_{M}:\operatorname{Gal}_{F}\to{{}^{L}\!M}_{k}(\bar{\mathbb{F}}_{p})$.
Proof.
By Proposition 7.1,
there exists a parabolic ${{}^{L}\!P}_{k}$ such that
$\bar{r}:\operatorname{Gal}_{F}\xrightarrow{\bar{\rho}^{\operatorname{ss}}}{{}^{L}\!M}_{k}(\bar{\mathbb{F}}_{p})\to{{}^{L}\!\operatorname{Res}}_{K/F}\operatorname{GL}_{k}(\bar{\mathbb{F}}_{p})$
is elliptic.
Write
$$\bar{r}|_{\operatorname{Gal}_{K}}=\begin{bmatrix}\bar{r}_{1}&&\\
&1_{n-2k}&\\
&&\bar{r}_{2}\end{bmatrix},$$
where $\bar{r}_{1},\bar{r}_{2}:\operatorname{Gal}_{K}\to\operatorname{GL}_{k}(\bar{\mathbb{F}}_{p})$.
By the non-abelian Shapiro’s lemma (see [GHS18, Subsection 9.4] for details),
$\bar{r}$ can be fully reconstructed from $\bar{r}_{1}$,
and $\bar{r}_{2}$ is completely determined by $\bar{r}_{1}$;
in particular, both $\bar{r}_{1}$ and $\bar{r}_{2}$
are irreducible Galois representations.
We
have
$$H^{2}(\operatorname{Gal}_{K},\operatorname{gr}^{0}\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))=H^{2}(\operatorname{Gal}_{K},\operatorname{Hom}(\bar{r}_{2},\bar{r}_{1})).$$
Since both $\bar{r}_{1}$ and $\bar{r}_{2}$
are irreducible, by local Tate duality,
we have $\dim H^{2}(\operatorname{Gal}_{K},\operatorname{Hom}(\bar{r}_{2},\bar{r}_{1}))=\dim H^{0}(\operatorname{Gal}_{K},\operatorname{Hom}(\bar{r}_{1},\bar{r}_{2}(1)))\leq 1$.
∎
Next, we study cup products.
Now fix the parabolic type ${{}^{L}\!P}_{k}$.
We have
$$\operatorname{gr}^{1}\operatorname{Lie}U=\operatorname{Mat}_{k\times(n-2k)}\oplus\operatorname{Mat}_{(n-2k)\times k}$$
and
$$\operatorname{gr}^{0}\operatorname{Lie}U=\operatorname{Mat}_{k\times k}.$$
We will use all the notation introduced in Section 6.
By [Koc02, Theorem 3.15],
we have
$$H^{i}(\operatorname{Gal}_{F},\operatorname{gr}^{j}\operatorname{Lie}U(A))=H^{i}(\operatorname{Gal}_{K},\operatorname{gr}^{j}\operatorname{Lie}U(A))^{\operatorname{Gal}(K/F)}=H^{i}(\operatorname{Gal}_{K},\operatorname{gr}^{j}\operatorname{Lie}U(A))^{\Delta},$$
for all $i$ and $j$.
7.3. Lemma
The $\Delta=\operatorname{Gal}(K/F)$-action on $H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(A))$
satisfies
$$\j(c_{1},0)=(0,*)\,,\qquad\j(0,c_{2})=(*,0)\,,$$
for any $c_{1},c_{2}$.
Proof.
Write
$$w=\begin{bmatrix}0&0&J_{1}\\
0&J_{2}&0\\
J_{3}&0&0\end{bmatrix}$$
for (a representative of) the longest Weyl group element.
Let
$$\rho=\begin{bmatrix}A&B&*\\
&D&E\\
&&F\end{bmatrix}:\operatorname{Gal}_{K}\to P(A)$$
be a group homomorphism.
Note that each of $A,B,D,E,F$
is a matrix-valued function on $\operatorname{Gal}_{K}$.
Write $A^{\prime}$ for $\gamma\mapsto A(\j^{-1}\gamma\j)$
and similarly define $B^{\prime},D^{\prime},E^{\prime},F^{\prime}$.
We have
$$\j\rho(\j^{-1}-\j)\j^{-1}=w\rho(\j^{-1}-\j)^{-t}w^{-1}=\begin{bmatrix}J_{1}F^{\prime-t}J_{1}^{-1}&-J_{1}F^{\prime-t}E^{\prime t}D^{\prime-t}J_{2}^{-1}&*\\
&J_{2}B^{\prime-t}J_{2}^{-1}&-J_{2}D^{\prime-t}B^{\prime t}A^{\prime-t}J_{3}^{-1}\\
&&J_{3}A^{\prime-t}A^{-1}J_{3}^{-1}\end{bmatrix}.$$
In particular, we see that the $\widehat{j}$-involution on
$H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(A))$
permutes the two direct summands.
∎
The lemma above immediately implies the following.
7.4. Corollary
The Galois action $\operatorname{Gal}(K/F)$ on
$H^{i}(\operatorname{Gal}_{K},\operatorname{gr}^{j}\operatorname{Lie}U(A))$
is a classical $\Delta$-system in the sense of
Definition 6.2.
7.5. Theorem
Theorem 3 holds for ${{}^{L}\!G}_{n}={{}^{L}\!U}_{n}$.
Moreover, for unramified unitary groups, $\bar{\rho}$
admits a crystalline lift.
Proof.
If $\bar{\rho}$ is elliptic, then it is [Lin23, Theorem C].
Suppose $\bar{\rho}$ is not elliptic,
then by Lemma 7.2,
there exists a parabolic ${{}^{L}\!P}_{k}$
through which $\bar{\rho}$ factors,
and $\bar{\rho}$ is a Heisenberg-type extension
of some $\bar{\rho}_{M}:\operatorname{Gal}_{F}\to{{}^{L}\!M}_{k}(\bar{\mathbb{F}}_{p})$.
We have
$${{}^{L}\!M}_{k}={{}^{L}\!(}\operatorname{Res}_{K/F}\operatorname{GL}_{k}\times U_{n-2k})=\begin{bmatrix}\operatorname{GL}_{k}&&\\
&\operatorname{GL}_{n-2k}&\\
&&\operatorname{GL}_{k}\end{bmatrix}\rtimes\{1,\j\}$$
where
$$\begin{bmatrix}\operatorname{GL}_{k}&&\\
&I_{n-2k}&\\
&&\operatorname{GL}_{k}\end{bmatrix}\rtimes\{1,\j\}\cong{{}^{L}\!\operatorname{Res}}_{K/F}\operatorname{GL}_{k},~{}{\text{and}}~{}\begin{bmatrix}I_{k}&&\\
&\operatorname{GL}_{n-2k}&\\
&&I_{k}\end{bmatrix}\rtimes\{1,\j\}\cong{{}^{L}\!U}_{n-2k}.$$
Write
$$\bar{\rho}_{M}=\begin{bmatrix}\bar{\rho}_{a}&&\\
&\bar{\rho}_{b}&\\
&&\bar{\rho}_{c}\end{bmatrix}\rtimes*$$
By induction on the semisimple rank of $G_{n}$,
we assume $\bar{\rho}_{b}$
admits a lift $\rho_{b}:\operatorname{Gal}_{F}\to{{}^{L}\!U}_{n-2k}(\bar{\mathbb{Z}}_{p})$.
Write
$w=\begin{bmatrix}J_{1}&&\\
&J_{2}&\\
&&J_{3}\end{bmatrix}$ for a longest Weyl group element.
Let $(\rho_{a},\rho_{c}):\operatorname{Gal}_{F}\to{{}^{L}\!\operatorname{Res}}_{K/F}\operatorname{GL}_{k}(\bar{\mathbb{Z}}_{p})$
be a potentially crystalline lift of $(\bar{\rho}_{a},\bar{\rho}_{c})$.
We have
$$\rho_{c}(-)=J_{3}\rho_{a}(\widehat{j}-\widehat{j}^{-1})^{-t}J_{3}^{-1}.$$
In particular, if $\lambda$ is a potentially crystalline character
with trivial mod $p$ reduction,
then
$(\lambda\rho_{a},\lambda^{-1}\rho_{c})$
is another potentially crystalline lift of $(\bar{\rho}_{a},\bar{\rho}_{c})$.
By choosing $\lambda:=\bar{\mathbb{Z}}_{p}(m)$ with trivial reduction
for $m$ sufficiently large,
we may assume
the Hodge-Tate weights of
$\operatorname{Hom}(\rho_{b},\rho_{a})$, $\operatorname{Hom}(\rho_{c},\rho_{a})$
and $\operatorname{Hom}(\rho_{c},\rho_{b})$
are all positive integers $\geq 2$;
in particular
$H^{1}(\operatorname{Gal}_{K},\operatorname{Lie}U(\bar{\mathbb{Q}}_{p}))=H^{1}_{{\operatorname{crys}}}(\operatorname{Gal}_{K},\operatorname{Lie}U(\bar{\mathbb{Q}}_{p}))$.
By Theorem 1,
we can modify $\rho_{b}$ without changing its Hodge-Tate weights
and reduction mod $p$ such that
$\bar{\rho}|_{\operatorname{Gal}_{K}}$
admits a partial lift which is a partial extension of
$(\rho_{a}|_{\operatorname{Gal}_{K}},\rho_{b}|_{\operatorname{Gal}_{K}},\rho_{c}|_{\operatorname{Gal}_{K}})$.
Since $K$ is a quadratic extension of $F$, we have $K\neq\mathbb{Q}_{p}$,
so the non-triviality of the cup products follows from
Lemma 6.6 and Proposition 6.3.
Now the theorem follows from Theorem 5.5 and the main theorem of [Lin23a].
For the moreover part, note that we can choose $\rho_{a}$, $\rho_{b}$, and $\rho_{c}$
so that they are crystalline
after restricting to $\operatorname{Gal}_{K}$; if $G$ is unramified,
this means $\rho_{a}$, $\rho_{b}$, and $\rho_{c}$ are already crystalline.
∎
8. Example B: symplectic groups
Since $G=\operatorname{GSp}_{2n}$ is a split group, we have $K=F$.
The Dynkin diagram for $G$
is \dynkinB.
Thus the maximal proper Levi subgroups of $G$
are of the form
$$\operatorname{GL}_{k}\times\operatorname{GSp}_{2(n-k)},~{}\text{or}~{}\operatorname{GL}_{n}.$$
Set
$$M_{k}:=\begin{cases}\operatorname{GL}_{k}\times\operatorname{GSp}_{2(n-k)}&k<n\\
\operatorname{GL}_{n}&k=n\end{cases}$$
and write $P_{k}$ for the corresponding parabolic subgroup.
Set
$\Omega_{k}=\begin{bmatrix}&I_{n-k}\\
-I_{n-k}&\end{bmatrix}$.
We use the following presentation of $\operatorname{GSp}_{2n}$:
$$\operatorname{GSp}_{2n}=\{X\in\operatorname{GL}_{2n}|X^{t}\begin{bmatrix}&&I_{k}\\
&\Omega_{k}&\\
-I_{k}&&\end{bmatrix}X=\lambda\begin{bmatrix}&&I_{k}\\
&\Omega_{k}&\\
-I_{k}&&\end{bmatrix}\}$$
We have
$$P_{k}=\operatorname{GSp}_{2n}\cap\begin{bmatrix}\operatorname{GL}_{k}&\operatorname{Mat}_{k\times(2n-2k)}&\operatorname{Mat}_{k\times k}\\
&\operatorname{GL}_{2n-2k}&\operatorname{Mat}_{(2n-2k)\times k}\\
&&\operatorname{GL}_{k}\end{bmatrix}=:\operatorname{GSp}_{2n}\cap Q_{k}$$
where
$Q_{k}$ is the corresponding parabolic of $\operatorname{GL}_{2n}$.
8.1. Lemma
If $\bar{\rho}:\operatorname{Gal}_{F}\to\operatorname{GSp}_{2n}(\bar{\mathbb{F}}_{p})$
is not elliptic, then there exists a parabolic
$P_{k}$ through which $\bar{\rho}$ factors
and $\bar{\rho}$ is a Heisenberg-type extension of
some $\bar{\rho}_{M}:\operatorname{Gal}_{F}\to M_{k}(\bar{\mathbb{F}}_{p})$.
Proof.
The proof is similar to that of Lemma 7.2.
Since $\bar{\rho}$ is not elliptic, it factors through a parabolic $P_{k}$
for some $k$.
Write
$$\bar{\rho}=\begin{bmatrix}\bar{r}_{1}&*&*\\
&\bar{r}_{2}&*\\
&&\bar{r}_{3}\end{bmatrix}$$
If $\bar{r}_{1}$ or $\bar{r}_{3}$ is not irreducible,
then $\bar{\rho}$ also factors through $P_{s}$
for some $s$ strictly less than $k$.
So we can assume both $\bar{r}_{1}$ and $\bar{r}_{3}$ are irreducible.
Finally, local Tate duality ensures $\bar{\rho}$ is a Heisenberg-type extension.
∎
Write $U$ and $V$ for
the unipotent radical of $Q_{k}$ and $P_{k}$,
respectively.
We have
$$\operatorname{gr}^{0}\operatorname{Lie}U=\operatorname{Mat}_{k\times k}$$
and
$$\operatorname{gr}^{1}\operatorname{Lie}U=\operatorname{Mat}_{k\times 2(n-k)}\times\operatorname{Mat}_{2(n-k)\times k}$$
Define a $\Delta:=\{1,\j\}$-action on $\operatorname{Lie}U$ by
(3)
$$\j(x,y):=(y^{t}\Omega_{k},\Omega_{k}x^{t})\,,\qquad(x,y)\in\operatorname{Mat}_{k\times 2(n-k)}\times\operatorname{Mat}_{2(n-k)\times k},$$
(4)
$$\j z:=z^{t}\,,\qquad z\in\operatorname{Mat}_{k\times k}.$$
8.2. Lemma
We have
$\operatorname{Lie}V=(\operatorname{Lie}U)^{\Delta}$.
Proof.
Clear.
∎
8.3. Lemma
The $\Delta$-action on $\operatorname{Lie}U$
induces a classical $\Delta$-action on
$H^{i}(\operatorname{Gal}_{K},\operatorname{gr}^{j}\operatorname{Lie}U(A))$
for each $i$, $j$, and $A=\bar{\mathbb{F}}_{p},~{}\bar{\mathbb{Z}}_{p}$.
Proof.
Clear by Equation (3).
∎
8.4. Corollary
Consider a Galois representation
$\rho_{M}=\begin{bmatrix}\tau_{a}&&\\
&\tau_{b}&\\
&&\tau_{c}\end{bmatrix}:\operatorname{Gal}_{K}\to M_{k}(\bar{\mathbb{Z}}_{p})$.
The cup product
$$H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))^{\Delta}\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\times H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))^{\Delta}\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\to H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times c}(\bar{\mathbb{F}}_{p}))^{\Delta}$$
is non-trivial.
Proof.
By Corollary 6.7
and Lemma 8.3,
the cup product is non-trivial unless
$K=\mathbb{Q}_{p}$,
$k=1$,
and either
$$\bar{\tau}_{b}=\bar{\tau}_{a}(-1)^{\oplus 2n-2},~{}\text{and}~{}\bar{\tau}_{c}=\bar{\tau}_{a}(-1)$$
or
$$\bar{\tau}_{b}=\bar{\tau}_{a}^{\oplus 2n-2},~{}\text{and}~{}\bar{\tau}_{c}=\bar{\tau}_{a}(-1).$$
The symplecticity of $\rho_{M}$ implies
$$\bar{\tau}_{a}\bar{\tau}_{c}=\lambda$$
and
$$\bar{\tau}_{b}^{t}\Omega_{k}\bar{\tau}_{b}=\lambda\Omega_{k}$$
where $\lambda$ is the similitude character.
Since $\bar{\tau}_{b}=\bar{\tau}_{a}(m)I_{2n-2}$ ($m=0,~{}-1$) is forced to be a scalar matrix,
we have
$$\bar{\tau}_{a}(m)^{2}=\lambda.$$
Thus
$$\bar{\tau}_{a}^{2}(2m)=\bar{\tau}_{a}\bar{\tau}_{c}=\bar{\tau}_{a}^{2}(-1),$$
which implies
$\bar{\mathbb{F}}_{p}(2m+1)=\bar{\mathbb{F}}_{p}$ with $2m+1=\pm 1$, i.e., $\bar{\mathbb{F}}_{p}(1)=\bar{\mathbb{F}}_{p}$, which contradicts the fact that $K=\mathbb{Q}_{p}$.
∎
8.5. Theorem
Theorem 3 holds for ${{}^{L}\!G}_{n}=\operatorname{GSp}_{2n}$ and
$\operatorname{Sp}_{2n}$.
Proof.
We will only treat $\operatorname{GSp}_{2n}$;
for $\operatorname{Sp}_{2n}$, the reader can check that in the proof it is possible
to ensure the similitude character is always $1$.
If $\bar{\rho}$ is elliptic, then it is [Lin23, Theorem C].
Suppose $\bar{\rho}$ is not elliptic,
then by Lemma 8.1,
there exists a parabolic $P_{k}$
through which $\bar{\rho}$ factors,
and $\bar{\rho}$ is a Heisenberg-type extension
of some $\bar{\rho}_{M}:\operatorname{Gal}_{F}\to M_{k}(\bar{\mathbb{F}}_{p})$.
Write
$$\bar{\rho}_{M}=\begin{bmatrix}\bar{\rho}_{a}&&\\
&\bar{\rho}_{b}&\\
&&\bar{\rho}_{c}\end{bmatrix}$$
By induction on the semisimple rank of $G_{n}$,
we assume $\bar{\rho}_{b}$
admits a lift $\rho_{b}:\operatorname{Gal}_{F}\to\operatorname{GSp}_{2(n-k)}(\bar{\mathbb{Z}}_{p})$
with similitude character $\mu$.
Let $\rho_{a}:\operatorname{Gal}_{F}\to\operatorname{GL}_{k}(\bar{\mathbb{Z}}_{p})$
be a crystalline lift of $\bar{\rho}_{a}$.
Set $\rho_{c}:=\mu\rho_{a}^{-t}$.
If $\lambda$ is a potentially crystalline character
with trivial mod $p$ reduction,
then
$(\lambda\rho_{a},\rho_{b},\lambda^{-1}\rho_{c})$
is a crystalline lift of $\bar{\rho}_{M}$.
By choosing $\lambda=\bar{\mathbb{Z}}_{p}(m)$ with trivial reduction for $m$ sufficiently large,
we may assume
the Hodge-Tate weights of
$\operatorname{Hom}(\rho_{b},\rho_{a})$, $\operatorname{Hom}(\rho_{c},\rho_{a})$
and $\operatorname{Hom}(\rho_{c},\rho_{b})$
are all positive integers $\geq 2$;
in particular
$H^{1}(\operatorname{Gal}_{K},\operatorname{Lie}U(\bar{\mathbb{Q}}_{p}))=H^{1}_{{\operatorname{crys}}}(\operatorname{Gal}_{K},\operatorname{Lie}U(\bar{\mathbb{Q}}_{p}))$.
By Theorem 1,
we can modify $\rho_{b}$ without changing its Hodge-Tate weights
and reduction mod $p$ such that
$\bar{\rho}$
admits a partial lift which is a partial extension of
$(\rho_{a},\rho_{b},\rho_{c})$.
By Corollary 8.4,
the theorem follows from Theorem 5.5 and the main theorem of [Lin23a].
∎
9. Example C: odd and even orthogonal groups
Let $G={\operatorname{GSO}}_{n}$ be the split form of the orthogonal similitude group.
We have $K=F$.
The Dynkin diagram for $G$
is \dynkinC or \dynkinD.
Thus the maximal proper Levi subgroups of $G$
are of the form
$$\operatorname{GL}_{k}\times{\operatorname{GSO}}_{n-2k},~{}\text{or}~{}\operatorname{GL}_{n}.$$
Set
$$M_{k}:=\begin{cases}\operatorname{GL}_{k}\times{\operatorname{GSO}}_{n-2k}&2k<n\\
\operatorname{GL}_{n}&2k=n\end{cases}$$
and write $P_{k}$ for the corresponding parabolic subgroup.
We use the following presentation of ${\operatorname{GSO}}_{n}$:
$${\operatorname{GSO}}_{n}=\{X\in\operatorname{GL}_{n}|X^{t}\begin{bmatrix}&&I_{k}\\
&I_{n-2k}&\\
I_{k}&&\end{bmatrix}X=\lambda\begin{bmatrix}&&I_{k}\\
&I_{n-2k}&\\
I_{k}&&\end{bmatrix}\}$$
We have
$$P_{k}={\operatorname{GSO}}_{n}\cap\begin{bmatrix}\operatorname{GL}_{k}&\operatorname{Mat}_{k\times(n-2k)}&\operatorname{Mat}_{k\times k}\\
&\operatorname{GL}_{n-2k}&\operatorname{Mat}_{(n-2k)\times k}\\
&&\operatorname{GL}_{k}\end{bmatrix}=:{\operatorname{GSO}}_{n}\cap Q_{k}$$
where
$Q_{k}$ is the corresponding parabolic of $\operatorname{GL}_{n}$.
9.1. Lemma
If $\bar{\rho}:\operatorname{Gal}_{F}\to{\operatorname{GSO}}_{n}(\bar{\mathbb{F}}_{p})$
is not elliptic, then there exists a parabolic
$P_{k}$ through which $\bar{\rho}$ factors
and $\bar{\rho}$ is a Heisenberg-type extension of
some $\bar{\rho}_{M}:\operatorname{Gal}_{F}\to M_{k}(\bar{\mathbb{F}}_{p})$.
Proof.
The proof is completely similar to that of Lemma 8.1.
∎
Write $U$ and $V$ for
the unipotent radical of $Q_{k}$ and $P_{k}$,
respectively.
We have
$$\operatorname{gr}^{0}\operatorname{Lie}U=\operatorname{Mat}_{k\times k}$$
and
$$\operatorname{gr}^{1}\operatorname{Lie}U=\operatorname{Mat}_{k\times(n-2k)}\times\operatorname{Mat}_{(n-2k)\times k}$$
Define a $\Delta:=\{1,\j\}$-action on $\operatorname{Lie}U$ by
(5)
$$\j(x,y):=(-y^{t},-x^{t})\,,\qquad(x,y)\in\operatorname{Mat}_{k\times(n-2k)}\times\operatorname{Mat}_{(n-2k)\times k},$$
(6)
$$\j z:=-z^{t}\,,\qquad z\in\operatorname{Mat}_{k\times k}.$$
9.2. Lemma
We have
$\operatorname{Lie}V=(\operatorname{Lie}U)^{\Delta}$.
Proof.
Clear.
∎
9.3. Lemma
The $\Delta$-action on $\operatorname{Lie}U$
induces a classical $\Delta$-action on
$H^{i}(\operatorname{Gal}_{K},\operatorname{gr}^{j}\operatorname{Lie}U(A))$
for each $i$, $j$, and $A=\bar{\mathbb{F}}_{p},~{}\bar{\mathbb{Z}}_{p}$.
Proof.
Clear by Equation (5).
∎
9.4. Corollary
Consider a Galois representation
$\rho_{M}=\begin{bmatrix}\tau_{a}&&\\
&\tau_{b}&\\
&&\tau_{c}\end{bmatrix}:\operatorname{Gal}_{K}\to M_{k}(\bar{\mathbb{Z}}_{p})$.
The cup product
$$H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))^{\Delta}\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\times H^{1}(\operatorname{Gal}_{K},\operatorname{gr}^{1}\operatorname{Lie}U(\bar{\mathbb{Z}}_{p}))^{\Delta}\underset{\bar{\mathbb{Z}}_{p}}{\otimes}\bar{\mathbb{F}}_{p}\to H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{a\times c}(\bar{\mathbb{F}}_{p}))^{\Delta}$$
is non-trivial.
Proof.
By Corollary 6.7
and Lemma 9.3,
the cup product is non-trivial unless
$K=\mathbb{Q}_{p}$,
$k=1$,
and either
$$\bar{\tau}_{b}=\bar{\tau}_{a}(-1)^{\oplus n-2},~{}\text{and}~{}\bar{\tau}_{c}=\bar{\tau}_{a}(-1)$$
or
$$\bar{\tau}_{b}=\bar{\tau}_{a}^{\oplus n-2},~{}\text{and}~{}\bar{\tau}_{c}=\bar{\tau}_{a}(-1).$$
The orthogonality of $\rho_{M}$ implies
$$\bar{\tau}_{a}\bar{\tau}_{c}=\lambda$$
and
$$\bar{\tau}_{b}^{t}\bar{\tau}_{b}=\lambda I_{n-2}$$
where $\lambda$ is the similitude character.
Since $\bar{\tau}_{b}=\bar{\tau}_{a}(m)I_{n-2}$ ($m=0,~{}-1$) is forced to be a scalar matrix,
we have
$$\bar{\tau}_{a}(m)^{2}=\lambda.$$
Thus
$$\bar{\tau}_{a}^{2}(2m)=\bar{\tau}_{a}\bar{\tau}_{c}=\bar{\tau}_{a}^{2}(-1),$$
which implies
$\bar{\mathbb{F}}_{p}(2m+1)=\bar{\mathbb{F}}_{p}$ with $2m+1=\pm 1$, i.e., $\bar{\mathbb{F}}_{p}(1)=\bar{\mathbb{F}}_{p}$, which contradicts the fact that $K=\mathbb{Q}_{p}$.
∎
9.5. Theorem
Theorem 3 holds for ${{}^{L}\!G}_{n}={\operatorname{GSO}}_{n}$ and $\operatorname{SO}_{n}$.
Proof.
We will only treat ${\operatorname{GSO}}_{n}$;
the reader can verify that in the proof
it is possible to ensure the similitude character is always $1$.
If $\bar{\rho}$ is elliptic, then it is [Lin23, Theorem C].
Suppose $\bar{\rho}$ is not elliptic,
then by Lemma 9.1,
there exists a parabolic $P_{k}$
through which $\bar{\rho}$ factors,
and $\bar{\rho}$ is a Heisenberg-type extension
of some $\bar{\rho}_{M}:\operatorname{Gal}_{F}\to M_{k}(\bar{\mathbb{F}}_{p})$.
Write
$$\bar{\rho}_{M}=\begin{bmatrix}\bar{\rho}_{a}&&\\
&\bar{\rho}_{b}&\\
&&\bar{\rho}_{c}\end{bmatrix}$$
By induction on the semisimple rank of $G_{n}$,
we assume $\bar{\rho}_{b}$
admits a lift $\rho_{b}:\operatorname{Gal}_{F}\to{\operatorname{GSO}}_{n-2k}(\bar{\mathbb{Z}}_{p})$
with similitude character $\mu$.
Let $\rho_{a}:\operatorname{Gal}_{F}\to\operatorname{GL}_{k}(\bar{\mathbb{Z}}_{p})$
be a crystalline lift of $\bar{\rho}_{a}$.
Set $\rho_{c}:=\mu\rho_{a}^{-t}$.
If $\lambda$ is a potentially crystalline character
with trivial mod $p$ reduction,
then
$(\lambda\rho_{a},\rho_{b},\lambda^{-1}\rho_{c})$
is a crystalline lift of $\bar{\rho}_{M}$.
By choosing $\lambda=\bar{\mathbb{Z}}_{p}(m)$ with trivial reduction for $m$ sufficiently large,
we may assume
the Hodge-Tate weights of
$\operatorname{Hom}(\rho_{b},\rho_{a})$, $\operatorname{Hom}(\rho_{c},\rho_{a})$
and $\operatorname{Hom}(\rho_{c},\rho_{b})$
are all positive integers $\geq 2$;
in particular
$H^{1}(\operatorname{Gal}_{K},\operatorname{Lie}U(\bar{\mathbb{Q}}_{p}))=H^{1}_{{\operatorname{crys}}}(\operatorname{Gal}_{K},\operatorname{Lie}U(\bar{\mathbb{Q}}_{p}))$.
By Theorem 1,
we can modify $\rho_{b}$ without changing its Hodge-Tate weights
and reduction mod $p$ such that
$\bar{\rho}$
admits a partial lift which is a partial extension of
$(\rho_{a},\rho_{b},\rho_{c})$.
By Corollary 9.4,
the theorem follows from Theorem 5.5 and the main theorem of [Lin23a].
∎
10. The Emerton-Gee stacks for unitary groups
When we speak of
“the locus of … in the moduli stack of something”,
we mean
“the scheme-theoretic closure of the scheme-theoretic image
of all families of something whose $\bar{\mathbb{F}}_{p}$-points are of the form … in the moduli stack of something”.
So a “locus” is technically always a closed substack.
However, since we are interested in dimension analysis only,
it is almost always harmless to replace a “locus”
by its dense open substacks.
When we speak of “the moduli of $L$-parameters”,
we always mean “the moduli of $(\varphi,\Gamma)$-modules”
in the sense of [Lin23b].
When we write $H^{\bullet}(\operatorname{Gal}_{F},-)$,
we always mean $(\varphi,\Gamma)$-cohomology
(or the cohomology of the corresponding Herr complex).
10.1. Theorem
Let $\bar{\alpha}:\operatorname{Gal}_{K}\to\operatorname{GL}_{a}(\bar{\mathbb{F}}_{p})$
be an irreducible Galois representation.
The locus of $\bar{x}\in\mathcal{X}_{F,{{}^{L}\!U}_{n},{\operatorname{red}}}$
such that
$\dim_{\bar{\mathbb{F}}_{p}}\operatorname{Hom}_{\operatorname{Gal}_{K}}(\bar{\alpha},\bar{x}|_{\operatorname{Gal}_{K}})\geq r$
is of dimension at most
$$d_{n,r}:=[F:\mathbb{Q}_{p}]\frac{n(n-1)}{2}-r^{2}+\frac{r}{2}.$$
The whole section is devoted to the proof of Theorem 10.1.
We denote by $\mathcal{X}^{\bar{\alpha}^{\oplus r}}_{n}$
the locus considered in Theorem 10.1.
We will prove Theorem 10.1
by induction on $n$, and assume it holds for
$n^{\prime}<n$ throughout the section.
10.2. Involution of Galois representations
Write $\Delta=\operatorname{Gal}(K/F)=\{1,\j\}$.
For each irreducible representation $\beta:\operatorname{Gal}_{K}\to\operatorname{GL}_{a}(\bar{\mathbb{F}}_{p})$,
set $\theta(\beta):=\beta(\j^{-1}\circ-\circ\j)^{-t}$, i.e., $\theta(\beta)(\gamma)=\beta(\j^{-1}\gamma\j)^{-t}$.
Suppose
$$\begin{bmatrix}\bar{\alpha}&&\\
&\bar{\tau}&\\
&&\bar{\beta}\end{bmatrix}\rtimes*$$
is an $L$-parameter $\operatorname{Gal}_{F}\to{{}^{L}\!M}_{a}$.
If $\j$ acts on $\operatorname{GL}_{n}$ via
$x\mapsto wx^{-t}w^{-1}$,
then direct computation shows
$$\bar{\beta}=bw\theta(\bar{\alpha})w^{-1}b^{-1}$$
where $w$ is the longest Weyl group element
and $b=\bar{\beta}(\j)\j^{-1}\in\operatorname{GL}_{a}(\bar{\mathbb{F}}_{p})$.
In particular, $\bar{\beta}$ is isomorphic to $\theta(\bar{\alpha})$
as a $\operatorname{Gal}_{K}$-representation.
10.3. Base case of the induction
We first consider the elliptic locus of
$\mathcal{X}_{n}^{\bar{\alpha}^{\oplus r}}$
(that is, the locus consisting of elliptic $L$-parameters).
We only need to understand the case where $n=ra$.
Let $x:\operatorname{Gal}_{F}\to{{}^{L}\!U}_{n}(\bar{\mathbb{F}}_{p})$ be an $L$-parameter in the elliptic locus of
$\mathcal{X}_{ra}^{\bar{\alpha}^{\oplus r}}$.
Since $x|_{\operatorname{Gal}_{K}}=\begin{bmatrix}\bar{\alpha}(-1)&&\\
&\dots&\\
&&\bar{\alpha}(-1)\end{bmatrix}$,
it is immediate that we must have $\bar{\alpha}(-1)\cong\theta(\bar{\alpha}(-1))$.
Moreover, $x(\j)$ permutes all diagonal blocks of $x|_{\operatorname{Gal}_{K}}$,
and since all orbits of this permutation have size at most $2$,
$x(\j)w^{-1}\j^{-1}$ is a diagonal matrix.
We have the following:
10.4. Lemma
If $\bar{\alpha}(-1)\not\cong\theta(\bar{\alpha}(-1))$,
the elliptic locus of $\mathcal{X}_{ra}^{\bar{\alpha}^{\oplus r}}$
is empty.
If $\bar{\alpha}(-1)\cong\theta(\bar{\alpha}(-1))$,
the elliptic locus of $\mathcal{X}_{ra}^{\bar{\alpha}^{\oplus r}}$
has dimension at most $ra-r^{2}$.
Moreover, if $r=1$, then the elliptic locus has
dimension $-1$.
Proof.
We can take $x(\j)w^{-1}\j^{-1}$ to be any diagonal matrix
(which completely determines $x$),
and we can quotient by the block diagonal matrices with scalar block entries.
So there exists a surjective map from
$[\operatorname{G}_{m}^{\times n}/\operatorname{GL}_{r}]$ to the elliptic locus of $\mathcal{X}_{ra}^{\bar{\alpha}^{\oplus r}}$
(here $\operatorname{G}_{m}^{\times n}$ parameterizes all diagonal matrices
and $\operatorname{GL}_{r}$ parameterizes block matrices with $r\times r$ scalar matrix blocks);
in particular, the elliptic locus has dimension at most $\dim[\operatorname{G}_{m}^{\times n}/\operatorname{GL}_{r}]=n-r^{2}=ra-r^{2}$.
For the moreover part, note that if $r=1$,
then by Schur’s lemma, $x(\j)w^{-1}\j^{-1}$ is a scalar matrix.
Since $x(\j)^{2}=x(\j^{2})=\bar{\alpha}(-1)(\j^{2})$ is fixed, $x(\j)$ has at most $2$ choices,
while the automorphisms of $x$ include all scalar matrices, so the automorphism group of $x$ is at least one-dimensional.
Finally, we want to show $d_{ra,r}\geq ra-r^{2}$ if $r>1$.
We can rewrite it as
$\frac{a[F:\mathbb{Q}_{p}]}{2}(ra-1)+1/2\geq a$.
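This rewriting is a direct computation from the definition of $d_{n,r}$:
$$d_{ra,r}-(ra-r^{2})=[F:\mathbb{Q}_{p}]\frac{ra(ra-1)}{2}-r^{2}+\frac{r}{2}-ra+r^{2}=r\Big(\frac{a[F:\mathbb{Q}_{p}]}{2}(ra-1)+\frac{1}{2}-a\Big)\,,$$
and $r>0$, so $d_{ra,r}\geq ra-r^{2}$ holds if and only if the displayed inequality does.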
If $a>1$, then $\frac{a[F:\mathbb{Q}_{p}]}{2}(ra-1)\geq\frac{3}{2}a>a$.
If $a=1$, then
$\frac{a[F:\mathbb{Q}_{p}]}{2}(ra-1)+1/2\geq 1/2+1/2=1$.
∎
10.5. The shape of $x\in\mathcal{X}_{n}^{\bar{\alpha}^{\oplus r}}$
If $\bar{\alpha}\not\cong\theta(\bar{\alpha})$,
then such $x$ is of the form
($\dagger$)
$$\begin{bmatrix}\bar{\alpha}(-1)^{\oplus r}&*&*\\
&\bar{\tau}&*\\
&&\theta(\bar{\alpha}(-1))^{\oplus r}\end{bmatrix}\rtimes*$$
where $\bar{\tau}\rtimes*$ is an $L$-parameter for $U_{n-2ar}$.
If $\bar{\alpha}\cong\theta(\bar{\alpha})$,
then $x$ can also be of the form
($\ddagger$)
$$\begin{bmatrix}\bar{\alpha}(-1)^{\oplus k_{1}}&0&0&0&0&0&0\\
&\bar{\alpha}(-1)^{\oplus k_{2}}&*&0&*&*&0\\
&&\bar{\tau}_{1}&0&*&*&0\\
&&&\bar{\tau}&0&0&0\\
&&&&\bar{\tau}_{2}&*&0\\
&&&&&\bar{\alpha}(-1)^{\oplus k_{2}}&0\\
&&&&&&\bar{\alpha}(-1)^{\oplus k_{1}}\end{bmatrix}\rtimes*$$
where $\bar{\tau}$ is an elliptic $L$-parameter
for $U_{ak_{3}}$
and $2k_{1}+k_{2}+k_{3}=r$.
Intuitively, the dimension of the locus of ($\dagger$)
should be larger than the dimension of the locus of ($\ddagger$)
because there are more zero entries in ($\ddagger$);
the next lemma partially confirms our intuition
and allows us to consider only ($\dagger$).
10.6. Lemma
(1)
Either $\dim\mathcal{X}^{\bar{\alpha}^{\oplus r}}_{n}\leq d_{n,r}$, or
the dimension of
$\mathcal{X}^{\bar{\alpha}^{\oplus r}}_{n}$
is equal to the dimension of the locus of
$$\begin{bmatrix}\bar{\alpha}(-1)^{\oplus r}&*&*\\
&\bar{\tau}&*\\
&&\theta(\bar{\alpha}(-1))^{\oplus r}\end{bmatrix}\rtimes*,$$
where
$~{}\bar{\tau}\rtimes*$ is an $L$-parameter for $U_{n-2ar}$.
(2)
The semisimple locus of $\mathcal{X}_{n}^{\bar{\alpha}^{\oplus r}}$
has dimension at most $d_{n,r}$.
Proof.
(1)
It is clear if $\theta(\bar{\alpha}(-1))\not\cong\bar{\alpha}(-1)$.
So, suppose $\theta(\bar{\alpha}(-1))\cong\bar{\alpha}(-1)$.
Consider ($\ddagger$).
There are two cases: $k_{3}=0$ and $k_{3}\neq 0$.
We first assume $k_{3}=0$. Set $k=k_{1}$ and thus $k_{2}=r-2k$.
Since
$$\operatorname{Aut}_{\operatorname{Gal}_{K}}(\bar{\alpha}^{\oplus k})=\operatorname{Aut}_{\operatorname{Gal}_{F}}(\begin{bmatrix}\bar{\alpha}^{\oplus k}&\\
&\theta(\bar{\alpha})^{\oplus k}\end{bmatrix}\rtimes*)$$
has dimension $k^{2}$,
it remains to show
(6)
$$-k^{2}+d_{n-2ka,r-2k}\leq d_{n,r},$$
which is equivalent to
(7)
$$[F:\mathbb{Q}_{p}]a(-2ak+2n-1)-(4r-5k-1)\geq 0.$$
If $a=1$, then the derivative of the LHS of (7)
with respect to $k$ is positive.
So we only need to consider the $k=k_{\min}=0$ case, which is clear.
If $a>1$, then the derivative with respect to $k$ is negative,
and we only need to consider the case $k=k_{\max}=r$:
$$[F:\mathbb{Q}_{p}]a(-2ar+2n-1)\geq-r-1,$$
whose LHS is $>0$ and RHS is $<0$.
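For the reader's convenience, the substitution behind this case can be spelled out (a direct computation from (7), using that the block structure forces $2ar\leq n$):

```latex
% Setting k = r in (7):
[F:\mathbb{Q}_{p}]\,a(-2ak+2n-1)-(4r-5k-1)\Big|_{k=r}
  \;=\; [F:\mathbb{Q}_{p}]\,a(2n-2ar-1)+r+1,
```

and indeed $2n-2ar-1\geq n-1>0$, so the first summand is positive while $r+1>0$.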
Next assume $k_{3}\neq 0$.
Then
$$\begin{bmatrix}\bar{\alpha}(-1)^{\oplus k_{2}}&*&*&*\\
&\bar{\tau}_{1}&*&*\\
&&\bar{\tau}_{2}&*\\
&&&\bar{\alpha}(-1)^{\oplus k_{2}}\end{bmatrix}\rtimes*$$
is an $L$-parameter for $U_{n-2k_{1}a-k_{3}a}$.
By reusing the inequality (6),
it suffices to show
(8)
$$d_{n-2k_{1}a-k_{3}a,k_{2}}+d_{k_{3}a,k_{3}}\leq d_{n-2k_{1}a,r-2k_{1}}.$$
We can set
$$n^{\prime}=n-2k_{1}a,\qquad r^{\prime}=r-2k_{1},\qquad k^{\prime}=k_{2},$$
and rename $n^{\prime},r^{\prime},k^{\prime}$ to $n,r,k$.
By Lemma 10.4,
(8) becomes
(9)
$$d_{n-(r-k)a,k}+(r-k)a-(r-k)^{2}\leq d_{n,r}$$
Set $g(n,r,k)=d_{n,r}-(d_{n-(r-k)a,k}+(r-k)a-(r-k)^{2})$.
We have
$\frac{\partial^{2}g}{\partial r^{2}}=-[F:\mathbb{Q}_{p}]a^{2}/2<0$.
Thus $g$ achieves minimum at the boundary points of
the range for $r$.
Since $k\leq r\leq\frac{n}{a}-k$ and $g|_{r=k}=0$,
it suffices to show $f(n,k)=g(n,n/a-k,k)\geq 0$.
We have
$\frac{\partial^{2}f}{\partial k^{2}}=2(2-[F:\mathbb{Q}_{p}]a^{2})$.
When $2\leq[F:\mathbb{Q}_{p}]a^{2}$,
$f$ achieves minimum at the boundary points of the range for $k$.
Since $0\leq k\leq n/a$, $f|_{k=0}=\frac{n}{2a}+\frac{1}{2}n([F:\mathbb{Q}_{p}]n-[F:\mathbb{Q}_{p}]-2)>0$ (because $n\geq 3$),
and $f|_{k=n/a}=0$, (9) holds.
When $2>[F:\mathbb{Q}_{p}]a^{2}$, we must have $a=1$ and $F=\mathbb{Q}_{p}$;
now since $\frac{\partial f}{\partial k}=4k-2n+2<0$,
$f_{\min}=f|_{k=n/a}=0$.
So we are done.
(2)
This was implicitly proved in the course of proving part (1).
∎
To analyze the locus of
$$\begin{bmatrix}\bar{\alpha}(-1)^{\oplus r}&*&*\\
&\bar{\tau}&*\\
&&\theta(\bar{\alpha}(-1))^{\oplus r}\end{bmatrix}\rtimes*,$$
we need to consider parabolic Emerton-Gee stacks.
10.7. Parabolic Emerton-Gee stacks
Let $A$ be a reduced finite type $\bar{\mathbb{F}}_{p}$-algebra.
For any morphism $\operatorname{Spec}A\to\mathcal{X}_{{{}^{L}\!M}_{ra},{\operatorname{red}}}$,
there is always a scheme-theoretically surjective morphism
$\operatorname{Spec}B\to\operatorname{Spec}A$
such that $\operatorname{Spec}B\to\mathcal{X}_{{{}^{L}\!M}_{ra},{\operatorname{red}}}$
is a basic morphism
(see [Lin23b, Lemma 10.1.1, Definition 10.1.2]).
Here $B$ is also a reduced finite type $\bar{\mathbb{F}}_{p}$-algebra.
By replacing $A$ by $B$, we assume
$\operatorname{Spec}A\to\mathcal{X}_{{{}^{L}\!M}_{ra},{\operatorname{red}}}$
is a basic morphism.
Then
$$\operatorname{Spec}A\times_{\mathcal{X}_{{{}^{L}\!M}_{ra},{\operatorname{red}}}}\mathcal{X}_{{{}^{L}\!P}_{ra}}$$
is an algebraic stack ([Lin23b, Proposition 10.1.8]).
Write $[U,U]$ for the derived subgroup of $U$,
where $U$ is the unipotent radical of ${{}^{L}\!P}_{ra}$.
Note that $[U,U]\cong\operatorname{Mat}_{ra\times ra}$ and
$U^{\operatorname{ab}}:=U/[U,U]\cong\operatorname{Mat}_{ra\times(n-2ra)}\oplus\operatorname{Mat}_{(n-2ra)\times ra}$.
For ease of notation, set
$$X_{A}:=\operatorname{Spec}A,$$
$$Y_{A}:=\operatorname{Spec}A\times_{\mathcal{X}_{{{}^{L}\!M}_{ra},{\operatorname{red}}}}\mathcal{X}_{{{}^{L}\!P}_{ra}/[U,U]},$$
and
$$Z_{A}:=\operatorname{Spec}A\times_{\mathcal{X}_{{{}^{L}\!M}_{ra},{\operatorname{red}}}}\mathcal{X}_{{{}^{L}\!P}_{ra}}.$$
We can regard $Y_{A}$ as a sheaf of groupoids
over $X_{A}$.
Denote by $Y_{A}^{C}$ the coarse moduli sheaf (of sets)
of $Y_{A}$ over $X_{A}$.
Then
$Y_{A}^{C}$ is
representable by a scheme, and is
a vector bundle over $X_{A}$
of rank $\operatorname{rank}H^{1}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(A))$,
and
$$Y_{A}=[Y_{A}^{C}/H^{0}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(A))]$$
(see [Lin23b, Corollary 10.1.7]).
In like manner, denote by $Z_{A}^{C}$ the coarse moduli sheaf
of $Z_{A}$ over $X_{A}$.
Then $Z_{A}^{C}$ is an affine bundle over its image in $Y_{A}^{C}$
of rank $\operatorname{rank}H^{1}(\operatorname{Gal}_{F},[U,U](A))$;
and
$$Z_{A}=[Z_{A}^{C}/H^{0}(\operatorname{Gal}_{F},[U,U](A))\rtimes H^{0}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(A))].$$
Write $W_{A}$
for the scheme-theoretic image of $Z_{A}^{C}$
in $Y_{A}^{C}$.
Indeed, the image of $Z_{A}^{C}$ in $Y_{A}^{C}$ is already closed
and (the underlying set of) $W_{A}$ is the set-theoretic image.
Assume the codimension of $W_{A}$ in $Y_{A}^{C}$
is $c$.
We have
(10)
$$\dim Z_{A}^{C}=\dim X_{A}-c+\operatorname{rank}H^{1}(\operatorname{Gal}_{F},\operatorname{Lie}U(A))$$
by the discussion above.
10.8. Lemma
Let $\operatorname{Spec}A$ be an irreducible, finite type $\bar{\mathbb{F}}_{p}$-variety.
Let $\operatorname{Spec}A\to\mathcal{X}_{F,{{}^{L}\!M}_{ra},{\operatorname{red}}}$
be a basic morphism of finite type
such that each $x\in\operatorname{Spec}A(\bar{\mathbb{F}}_{p})$
corresponds to an $L$-parameter of the form
$$\begin{bmatrix}\bar{\alpha}(-1)^{\oplus r}&&\\
&\bar{\tau}&\\
&&\theta(\bar{\alpha}(-1))^{\oplus r}\end{bmatrix}\rtimes*$$
such that $\bar{\tau}\in\mathcal{X}_{n-2ra}^{\bar{\alpha}(-1)^{\oplus s}}\backslash\mathcal{X}_{n-2ra}^{\bar{\alpha}(-1)^{\oplus(s+1)}}$.
Assume the scheme-theoretic image of $f:X_{A}\to\mathcal{X}_{F,{{}^{L}\!M}_{ra},{\operatorname{red}}}$
has dimension $d_{X}$,
and the scheme-theoretic image of $g:Z_{A}\to\mathcal{X}_{F,{{}^{L}\!U}_{n},{\operatorname{red}}}$
has dimension $d_{Z}$.
Then
$$d_{Z}\leq d_{X}+rs-c+[F:\mathbb{Q}_{p}](2nra-3r^{2}a^{2})+\begin{cases}r^{2}&\theta(\bar{\alpha}(-1))=\bar{\alpha}(-2)\\
0&\theta(\bar{\alpha}(-1))\neq\bar{\alpha}(-2).\end{cases}$$
We also have
$d_{X}\leq d_{n-2ra,s}-r^{2}$.
Proof.
Note that
•
$\operatorname{rank}H^{2}(\operatorname{Gal}_{F},[U,U](A))\leq\begin{cases}r^{2}&\theta(\bar{\alpha}(-1))=\bar{\alpha}(-2)\\
0&\theta(\bar{\alpha}(-1))\neq\bar{\alpha}(-2).\end{cases}$,
•
$\operatorname{rank}H^{2}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(A))\leq rs$ (see the sublemma below), and
•
$\dim U=2ra(n-2ra)+r^{2}a^{2}=2nra-3r^{2}a^{2}$.
Sublemma
$\operatorname{rank}H^{2}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(A))\leq rs$.
Proof.
It is clear that
$\operatorname{rank}H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{ra\times(n-2ra)}(A))=\operatorname{rank}H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{(n-2ra)\times ra}(A))\leq rs$.
Note that
$$H^{2}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(A))=H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{ra\times(n-2ra)}(A))\oplus H^{2}(\operatorname{Gal}_{K},\operatorname{Mat}_{(n-2ra)\times ra}(A)),$$
and the $\operatorname{Gal}(K/F)$-action swaps the two direct summands
(Lemma 7.3).
In particular,
$\operatorname{rank}H^{2}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(A))^{\operatorname{Gal}(K/F)}\leq rs.$
∎
By the local Euler characteristic
$$\operatorname{rank}H^{0}(\operatorname{Gal}_{F},\operatorname{Lie}U(A))-\operatorname{rank}H^{1}(\operatorname{Gal}_{F},\operatorname{Lie}U(A))+\operatorname{rank}H^{2}(\operatorname{Gal}_{F},\operatorname{Lie}U(A))=-[F:\mathbb{Q}_{p}]\dim U$$
we have
$$\displaystyle\operatorname{rank}H^{1}(\operatorname{Gal}_{F},\operatorname{Lie}U(A))$$
$$\displaystyle\leq[F:\mathbb{Q}_{p}](2nra-3r^{2}a^{2})+\begin{cases}r^{2}&\theta(\bar{\alpha}(-1))=\bar{\alpha}(-2)\\
0&\theta(\bar{\alpha}(-1))\neq\bar{\alpha}(-2)\end{cases}$$
$$\displaystyle\hskip 14.22636pt+rs+\operatorname{rank}H^{0}(\operatorname{Gal}_{F},\operatorname{Lie}U(A)).$$
By Equation (10),
it suffices to show
(11)
$$d_{Z}-d_{X}\leq\dim Z_{A}^{C}-\dim X_{A}-\operatorname{rank}H^{0}(\operatorname{Gal}_{F},\operatorname{Lie}U(A)).$$
Let $W_{A}^{\prime}\subset W_{A}$
be an irreducible component of largest dimension
(see [aut, 0DR4] if the reader is not familiar with irreducible components of algebraic stacks).
Set $(Z_{A}^{C})^{\prime}:=Z_{A}^{C}\times_{W_{A}}W_{A}^{\prime}$.
We have $\dim Z_{A}^{C}=\dim(Z_{A}^{C})^{\prime}$
since $Z_{A}^{C}\to W_{A}$ is an affine bundle.
Moreover, since $W_{A}$ has only finitely many irreducible components,
we can assume $d_{Z}$ is the dimension of
the scheme-theoretic image of $(Z_{A}^{C})^{\prime}$ in $\mathcal{X}_{F,{{}^{L}\!U}_{n},{\operatorname{red}}}$.
So, after suitable replacements,
it is harmless to assume $W_{A},X_{A}$ and $Z_{A}^{C}$ are irreducible varieties
when proving (11).
Now we can invoke [aut, Lemma Tag 0DS4]:
after replacing $X_{A}$ by a dense open (which does not change any quantity in (11) by the irreducibility of $X_{A}$),
we can assume
$\dim_{t}(X_{A})_{f(t)}=\dim X_{A}-d_{X}$
for all $t\in W_{A}(\bar{\mathbb{F}}_{p})$;
similarly, after replacing $Z_{A}^{C}$ by a dense open,
we can assume
$\dim_{x}(Z_{A}^{C})_{g(x)}=\dim Z_{A}^{C}-d_{Z}$
for all $x\in Z_{A}^{C}(\bar{\mathbb{F}}_{p})$.
Label the projection $Z_{A}^{C}\to W_{A}$ by $\pi$.
Now (11) becomes
$$\dim_{\pi(x)}(X_{A})_{f(\pi(x))}\leq\dim_{x}(Z_{A}^{C})_{x}-\operatorname{rank}H^{0}(\operatorname{Gal}_{F},\operatorname{Lie}U(A)).$$
Denote by $G_{\pi(x)}\subset(\widehat{M_{ra}})_{\bar{\mathbb{F}}_{p}}$ and
$G_{x}\subset(\widehat{U_{n}})_{\bar{\mathbb{F}}_{p}}$
the automorphism group
of the $L$-parameters corresponding to $\pi(x)$
and $x$, respectively.
Note that the immersion
$[\operatorname{Spec}\bar{\mathbb{F}}_{p}/G_{\pi(x)}]\hookrightarrow\mathcal{X}_{F,{{}^{L}\!M}_{ra},{\operatorname{red}}}$
induces an immersion
$[(X_{A})_{f(\pi(x))}/G_{\pi(x)}]\hookrightarrow X_{A}$.
Similarly, we have an immersion
$[(Z_{A}^{C})_{x}/G_{x}]\hookrightarrow Z_{A}^{C}$.
Note that the image of the composite
$[(Z_{A}^{C})_{x}/G_{x}]\hookrightarrow Z_{A}^{C}\to X_{A}$
contains the image of $[(X_{A})_{f(\pi(x))}/G_{\pi(x)}]\hookrightarrow X_{A}$
(by, for example, the moduli interpretation),
and therefore
$\dim[(Z_{A}^{C})_{x}/G_{x}]\geq\dim[(X_{A})_{f(\pi(x))}/G_{\pi(x)}]$.
Since
$$\displaystyle\dim[(Z_{A}^{C})_{x}/G_{x}]$$
$$\displaystyle=\dim(Z_{A}^{C})_{x}-\dim G_{x}$$
$$\displaystyle\dim[(X_{A})_{f(\pi(x))}/G_{\pi(x)}]$$
$$\displaystyle=\dim(X_{A})_{f(\pi(x))}-\dim G_{\pi(x)},$$
it remains to show
$$\dim G_{\pi(x)}\leq\dim G_{x}-\operatorname{rank}H^{0}(\operatorname{Gal}_{F},\operatorname{Lie}U(A)),$$
which is clear since
$G_{\pi(x)}\rtimes(H^{0}(\operatorname{Gal}_{F},[U,U])\rtimes H^{0}(\operatorname{Gal}_{F},U^{\operatorname{ab}}))\subset G_{x}$.
Finally, $d_{X}\leq d_{n-2ra,s}-r^{2}$
is clear since
$$\operatorname{Aut}_{\operatorname{Gal}_{K}}(\bar{\alpha}^{\oplus r})=\operatorname{Aut}_{\operatorname{Gal}_{F}}(\begin{bmatrix}\bar{\alpha}^{\oplus r}&\\
&\theta(\bar{\alpha})^{\oplus r}\end{bmatrix}\rtimes*)$$
has dimension $r^{2}$.
∎
10.9. Initial estimates
To prove Theorem 10.1,
we want to show
$d_{Z}\leq d_{n,r}$
for all $0\leq s\leq\frac{n-2r}{2a}$.
By Lemma 10.8,
it suffices to prove
(12)
$$(d_{n-2ra,s}-r^{2})+r^{2}+rs-c+[F:\mathbb{Q}_{p}](2nra-3r^{2}a^{2})\leq d_{n,r}.$$
for all $0\leq s\leq\frac{n-2r}{2a}$.
Expanding (12),
we get
(13)
$$r^{2}+rs-s^{2}+\frac{s-r}{2}\leq[F:\mathbb{Q}_{p}](a^{2}r^{2}-ar)+c.$$
Regarding the LHS of (13) as a quadratic polynomial in $s$,
it achieves its maximum at $s=r/2+1/4$; since $s$ only takes integral values,
we only need to prove (13) for $s=r/2$:
(14)
$$\frac{5}{4}r^{2}-\frac{1}{4}r\leq[F:\mathbb{Q}_{p}](a^{2}r^{2}-ar)+c$$
Using the trivial estimate $c\geq 0$,
we only need to prove
(15)
$$\frac{5}{4}r-\frac{1}{4}\leq[F:\mathbb{Q}_{p}](a^{2}r-a)$$
which clearly holds when $a\geq 2$.
10.10. Lemma
(The codimension lemma)
Write $h$ for $\operatorname{rank}H^{1}(\operatorname{Gal}_{K},\bar{\tau}\otimes\theta(\bar{\alpha}(-1))^{\vee})$.
Assume
•
$H^{2}(\operatorname{Gal}_{K},[U,U](A))\neq 0$ and
•
$a=1$.
Then
$$c\geq c(r,h):=\min_{1\leq k\leq r}\frac{1}{2}(k^{2}+hr-hk)=\begin{cases}r^{2}/2&h>2r\\
(hr-h^{2}/4)/2&h\leq 2r.\end{cases}$$
Note that the minimum in $C=\min_{1\leq k\leq r}\frac{1}{2}(k^{2}+hr-hk)$ is achieved at
either $k=r$ or $k=h/2$.
When $k=r$, $C=r^{2}/2$.
When $k=h/2$,
$C=\frac{1}{2}(hr-h^{2}/4)\leq r^{2}/2$.
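To spell out the case analysis: writing $q(k)=k^{2}+hr-hk$, the real minimum of $q$ is at $k=h/2$, so on the range $1\leq k\leq r$ (assuming $h\geq 2$, so that $k=h/2$ is admissible; for smaller $h$ the displayed value is still a lower bound):

```latex
\min_{1\leq k\leq r}\tfrac{1}{2}q(k)
=\begin{cases}
 \tfrac{1}{2}q(r)=\tfrac{1}{2}\bigl(r^{2}+hr-hr\bigr)=r^{2}/2, & h\geq 2r,\\[3pt]
 \tfrac{1}{2}q(h/2)=\tfrac{1}{2}\bigl(\tfrac{h^{2}}{4}+hr-\tfrac{h^{2}}{2}\bigr)
   =\tfrac{1}{2}\bigl(hr-\tfrac{h^{2}}{4}\bigr), & h\leq 2r,
\end{cases}
```

and the two branches agree at $h=2r$, where both equal $r^{2}/2$.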
We will postpone the proof of the codimension lemma to the end
of this section.
Next, we prove that the codimension lemma
implies Theorem 10.1.
Proof of Theorem 10.1.
We’ve already settled the $a>1$ case.
So, assume $a=1$.
If $h>2r$,
after plugging the codimension lemma 10.10
into (14),
we only need to prove
(16)
$$\frac{5}{4}r^{2}-\lceil\frac{1}{2}r^{2}\rceil-\frac{1}{4}r\leq[F:\mathbb{Q}_{p}](a^{2}r^{2}-ar)$$
which holds for all integers $r$ and $a$.
We remark that the inequality (16) is tight,
and it achieves equality when $r=1$, $a=1$, and $[F:\mathbb{Q}_{p}]$ arbitrary.
Suppose $h\leq 2r$.
If $n\leq 3r$, then the LHS of (13)
achieves its maximum at $s=\frac{n-2r}{2}$.
So we need to prove
(17)
$$\lfloor r^{2}+r\frac{n-2r}{2}-(\frac{n-2r}{2})^{2}+\frac{n-4r}{4}-\frac{hr-h^{2}/4}{2}\rfloor\leq[F:\mathbb{Q}_{p}](r^{2}-r)$$
because the dimension only takes integral values.
By the local Euler characteristic,
$2r\geq h\geq n-2r+1$;
and thus the LHS of (17) achieves its
maximum at $h=n-2r+1$.
So we get
(18)
$$\lfloor-n^{2}/8+nr/2+r^{2}/2+n/2-2r+1/8\rfloor\leq[F:\mathbb{Q}_{p}](r^{2}-r),$$
whose LHS achieves maximum at the critical point $n=2r+2$:
(19)
$$\lfloor r^{2}-r+5/8\rfloor\leq[F:\mathbb{Q}_{p}](r^{2}-r)$$
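Explicitly, substituting $n=2r+2$ into the LHS of (18):

```latex
-\tfrac{(2r+2)^{2}}{8}+\tfrac{(2r+2)r}{2}+\tfrac{r^{2}}{2}+\tfrac{2r+2}{2}-2r+\tfrac{1}{8}
=\Bigl(-\tfrac{r^{2}}{2}-r-\tfrac{1}{2}\Bigr)+(r^{2}+r)+\tfrac{r^{2}}{2}+(r+1)-2r+\tfrac{1}{8}
=r^{2}-r+\tfrac{5}{8},
```

whose floor is $r^{2}-r$, so (19) reduces to $r^{2}-r\leq[F:\mathbb{Q}_{p}](r^{2}-r)$.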
which clearly holds.
If $n>3r$, then the LHS of (13)
achieves its maximum at $s=r/2$.
So we need to prove
(20)
$$\lfloor\frac{5}{4}r^{2}-\frac{1}{4}r-\frac{hr-h^{2}/4}{2}\rfloor\leq[F:\mathbb{Q}_{p}](r^{2}-r).$$
By the local Euler characteristic,
$2r\geq h\geq n-2r+1$;
and thus the LHS of (20) achieves its
maximum at $h=n-2r+1$.
So we get
(21)
$$\lfloor n^{2}/8-nr+11r^{2}/4+n/4-5r/4+1/8\rfloor\leq[F:\mathbb{Q}_{p}](r^{2}-r),$$
whose LHS achieves maximum at the boundary point
$n=3r+1$:
(22)
$$\lfloor\frac{7}{8}r^{2}-\frac{3}{4}r+1/2\rfloor\leq[F:\mathbb{Q}_{p}](r^{2}-r),$$
which clearly holds.
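As a sanity check, the first few values of the LHS of (22) are:

```latex
r=1:\ \bigl\lfloor\tfrac{7}{8}-\tfrac{3}{4}+\tfrac{1}{2}\bigr\rfloor=0\leq 0,\qquad
r=2:\ \bigl\lfloor\tfrac{7}{2}-\tfrac{3}{2}+\tfrac{1}{2}\bigr\rfloor=2\leq 2,\qquad
r=3:\ \bigl\lfloor\tfrac{63}{8}-\tfrac{9}{4}+\tfrac{1}{2}\bigr\rfloor=6\leq 6,
```

with equality when $F=\mathbb{Q}_{p}$; in general $\tfrac{7}{8}r^{2}-\tfrac{3}{4}r+\tfrac{1}{2}<r^{2}-r+1$ because the difference is $\frac{(r-1)^{2}+3}{8}>0$, which gives (22).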
∎
Finally, we turn to the codimension lemma.
10.11. Lemma
Let $\kappa$ be a field.
Let $f_{1}(\mathbf{x}),\dots,f_{m}(\mathbf{x})\in\kappa[\mathbf{x}]:=\kappa[x_{1},\dots,x_{n}]$
be homogeneous polynomials in $n$ variables.
Define
$$X=\operatorname{Spec}\kappa[\mathbf{x}]/(f_{1}(\mathbf{x}),\dots,f_{m}(\mathbf{x}))$$
and
$$Y=\operatorname{Spec}\kappa[\mathbf{x},\mathbf{y}]/(f_{1}(\mathbf{x})-f_{1}(\mathbf{y}),\dots,f_{m}(\mathbf{x})-f_{m}(\mathbf{y})).$$
Then
$\dim X\leq\frac{1}{2}\dim Y$.
Equivalently, the codimension of $X$ in $\operatorname{Spec}\kappa[\mathbf{x}]$
is at least half of the codimension of $Y$ in $\operatorname{Spec}\kappa[\mathbf{x},\mathbf{y}]$.
Proof.
There is a closed embedding
$X\times X\hookrightarrow Y$,
$(\mathbf{x},\mathbf{x}^{\prime})\mapsto(\mathbf{x},\mathbf{y})=(\mathbf{x},\mathbf{x}^{\prime})$;
it is well defined because $f_{i}(\mathbf{x})=f_{i}(\mathbf{x}^{\prime})=0$ on $X\times X$,
so $f_{i}(\mathbf{x})-f_{i}(\mathbf{y})=0$. Hence $2\dim X\leq\dim Y$.
∎
We remark that the inequality in Lemma 10.11 is sharp.
If $X=\operatorname{Spec}\mathbb{F}[x_{1},x_{2},x_{3}]/(x_{1}x_{2},x_{1}x_{3})$
and $Y=\operatorname{Spec}\mathbb{F}[x_{1},x_{2},x_{3},y_{1},y_{2},y_{3}]/(x_{1}x_{2}-y_{1}y_{2},x_{1}x_{3}-y_{1}y_{3})$,
then $\dim X=2$ and $\dim Y=4$.
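The dimension counts in this example can be checked directly (our verification):

```latex
% X is the union of a plane and a line:
X=\{x_{1}=0\}\cup\{x_{2}=x_{3}=0\},\qquad \dim X=2.
% Y: where y_1 \neq 0, the equations solve for y_2 and y_3 in terms of
% x_1, x_2, x_3, y_1, giving a 4-dimensional locus; where x_1 = y_1 = 0,
% both equations vanish identically, leaving the 4-plane in x_2, x_3, y_2, y_3:
\dim Y=4=2\dim X.
```

So the bound $\dim X\leq\frac{1}{2}\dim Y$ is attained with equality here.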
10.12. Lemma
Let $Y\to X$ be a morphism of finite type schemes over a field.
Then there exists a point $x:\operatorname{Spec}\kappa\to X$
such that $\dim Y-\dim X\leq\dim Y\times_{X,x}\operatorname{Spec}\kappa$.
In particular, if $Y^{\prime}\subset Y$ is a closed subscheme,
then the codimension of $Y^{\prime}$ in $Y$
is at least as large as the largest codimension
of $Y^{\prime}\times_{X,x}\operatorname{Spec}\kappa$ in $Y\times_{X,x}\operatorname{Spec}\kappa$
for some point $x$.
Proof.
It is a standard fact.
See, for example, [aut, Tag 0DS4].
∎
Recall that
$$H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(A))=H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{ra\times(n-2ra)}(A))\oplus H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{(n-2ra)\times ra}(A)),$$
and the $\operatorname{Gal}(K/F)=\{1,\j\}$-action swaps the two direct summands.
We can also decompose $H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(A))$ according to the
eigenvalues of $\j$:
$$H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(A))=H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(A))^{+}\oplus H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(A))^{-}$$
where
$$\displaystyle H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(A))^{+}$$
$$\displaystyle=\{x^{+}:=(x,\j x)\}=H^{1}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(A))$$
$$\displaystyle H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(A))^{-}$$
$$\displaystyle=\{x^{-}:=(x,-\j x)\}$$
for $x\in H^{1}(\operatorname{Gal}_{K},\operatorname{Mat}_{ra\times(n-2ra)}(A))$.
There exists a bijection
$$H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(A))^{+}\xrightarrow{x^{+}\mapsto x^{-}}H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(A))^{-}.$$
Note that
$$\displaystyle x^{+}\cup x^{+}$$
$$\displaystyle=2(x,0)\cup(0,\j x)$$
$$\displaystyle x^{-}\cup x^{-}$$
$$\displaystyle=-2(x,0)\cup(0,\j x)$$
and thus $x^{+}\cup x^{+}=-x^{-}\cup x^{-}$.
The upshot is there exists an isomorphism
respecting cup products
$$H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(A))\to H^{1}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(A))\oplus H^{1}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(A));$$
here we define cup products on the RHS by
$(a,b)\cup(c,d)=a\cup c-b\cup d$.
Proof of the codimension lemma 10.10.
By Lemma 10.11, Lemma 10.12
and the discussion above,
it suffices to show
for each $\bar{\mathbb{F}}_{p}$-point of $A$,
the codimension
of the locus
$W^{C}:=\{x\in H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(\bar{\mathbb{F}}_{p}))|x\cup x=0\}$
in $H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(\bar{\mathbb{F}}_{p}))$ (when regarded as a vector bundle over $\operatorname{Spec}\bar{\mathbb{F}}_{p}$)
is at least $2c(r,h)$;
as it forces the codimension
of $\{x\in H^{1}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(\bar{\mathbb{F}}_{p}))|x\cup x=0\}$
in $H^{1}(\operatorname{Gal}_{F},U^{\operatorname{ab}}(\bar{\mathbb{F}}_{p}))$
to be at least $c(r,h)$.
Consider all extensions of $\operatorname{Gal}_{K}$-modules
$$\begin{bmatrix}\bar{\alpha}(-1)^{\oplus r}&*&*&*&*\\
&\bar{\alpha}(-2)^{\oplus s}&?&?&*\\
&&?&?&*\\
&&&\bar{\alpha}(-1)^{\oplus s}&*\\
&&&&\bar{\alpha}(-2)^{\oplus r}\end{bmatrix}=:\begin{bmatrix}\bar{\alpha}(-1)^{\oplus r}&*&*\\
&\bar{\tau}&*\\
&&\bar{\alpha}(-2)^{\oplus r}\end{bmatrix}=:\begin{bmatrix}\bar{\alpha}(-1)^{\oplus r}&*\\
&\bar{\eta}\end{bmatrix}=:\bar{\rho}$$
where $?$ means fixed and $*$ means undetermined.
The coarse moduli space $Y^{C}$
of all extensions modulo $[U,U]$
is the vector space $H^{1}(\operatorname{Gal}_{K},U^{\operatorname{ab}}(\bar{\mathbb{F}}_{p}))$.
There is another way to think about extensions.
We can first extend $\bar{\alpha}(-1)^{\oplus r}\oplus\bar{\tau}\oplus\bar{\alpha}(-2)^{\oplus r}$ to $\bar{\alpha}(-1)^{\oplus r}\oplus\bar{\eta}$,
and then extend $\bar{\alpha}(-1)^{\oplus r}\oplus\bar{\eta}$ to $\bar{\rho}$.
The coarse moduli space $T^{C}$ of all extensions $\bar{\eta}$
is the vector space $H^{1}(\operatorname{Gal}_{K},\bar{\tau}\otimes\bar{\alpha}(-2)^{\oplus r\vee})$.
Denote by $T^{C}_{k}\subset T^{C}$ the subvariety
consisting of $\bar{\eta}$ such that
$$\dim H^{2}(\operatorname{Gal}_{K},\bar{\alpha}(-1)\otimes\bar{\eta}^{\vee})-\dim H^{2}(\operatorname{Gal}_{K},\bar{\alpha}(-1)\otimes\bar{\tau}^{\vee})=r-k.$$
Write $Z^{C}$ for the coarse moduli space of all extensions $\bar{\rho}$.
Set $Z^{C}_{k}:=Z^{C}\times_{T^{C}}T^{C}_{k}$.
If
$\bar{\eta}=\begin{bmatrix}\bar{\tau}&*\\
&\bar{\alpha}(-2)^{\oplus r}\end{bmatrix}$
lies in $T^{C}_{k}$,
then the column space
of $*$ is $k$-dimensional.
To specify a point of $T^{C}_{k}$
is the same as specifying a point
of the Grassmannian ${\operatorname{Gr}}(k,r)$
and a point of $H^{1}(\operatorname{Gal}_{K},\bar{\tau}\otimes\bar{\alpha}(-2)^{\oplus k\vee})$:
$$\displaystyle\dim T^{C}_{k}$$
$$\displaystyle\leq\dim{\operatorname{Gr}}(k,r)+kh$$
$$\displaystyle=k(r-k)+kh$$
$$\displaystyle=\dim H^{1}(\operatorname{Gal}_{K},\bar{\tau}\otimes\bar{\alpha}(-2)^{\oplus r\vee})-rh+k(r-k)+kh.$$
Note that
there exists a stratification of locally closed subvarieties
of $T^{C}_{k}$
such that
$H^{\bullet}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\eta}^{\vee})$
has constant dimension over each stratum.
After replacing $T^{C}_{k}$
by the disjoint union of its strata,
$Z^{C}_{k}$ is a vector bundle over $T^{C}_{k}$ of rank
$$\displaystyle\dim H^{1}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\eta}^{\vee})$$
$$\displaystyle=[K:\mathbb{Q}_{p}]r(n-r)+\dim H^{0}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\eta}^{\vee})$$
$$\displaystyle\hskip 28.45274pt+\dim H^{2}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\eta}^{\vee})$$
$$\displaystyle\leq\dim H^{1}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\tau}^{\vee})+\dim H^{1}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\alpha}(-2)^{\oplus r\vee})$$
$$\displaystyle\hskip 28.45274pt+\dim H^{2}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\eta}^{\vee})-\dim H^{2}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\tau}^{\vee})$$
$$\displaystyle\hskip 28.45274pt-\dim H^{2}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\alpha}(-2)^{\oplus r\vee})$$
$$\displaystyle=\dim H^{1}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\tau}^{\vee})+\dim H^{1}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\alpha}(-2)^{\oplus r\vee})$$
$$\displaystyle\hskip 28.45274pt+r(r-k)-r^{2}$$
$$\displaystyle=\dim H^{1}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\tau}^{\vee})+\dim H^{1}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\alpha}(-2)^{\oplus r\vee})-kr.$$
Therefore
$$\displaystyle\dim Z^{C}_{k}$$
$$\displaystyle=\dim T^{C}_{k}+\dim H^{1}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\eta}^{\vee})$$
$$\displaystyle\leq\dim H^{1}(\operatorname{Gal}_{K},\bar{\tau}\otimes\bar{\alpha}(-2)^{\oplus r\vee})-rh+k(r-k)+kh$$
$$\displaystyle\hskip 28.45274pt+\dim H^{1}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\tau}^{\vee})+\dim H^{1}(\operatorname{Gal}_{K},\bar{\alpha}(-1)^{\oplus r}\otimes\bar{\alpha}(-2)^{\oplus r\vee})-kr$$
$$\displaystyle=\dim H^{1}(\operatorname{Gal}_{K},\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))-rh+k(r-k)+kh-kr$$
$$\displaystyle=\dim H^{1}(\operatorname{Gal}_{K},\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))-rh-k^{2}+kh.$$
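The correction terms in the last two steps collect as follows:

```latex
-rh+k(r-k)+kh-kr
 \;=\; -rh+(kr-k^{2})+kh-kr
 \;=\; -rh-k^{2}+kh
 \;=\; -\bigl(rh+k^{2}-kh\bigr),
```

which is the quantity minimized over $k$ below.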
Since $Z^{C}$ is the union of $Z^{C}_{k}$, we have
$$\displaystyle\dim Z^{C}$$
$$\displaystyle\leq\dim H^{1}(\operatorname{Gal}_{K},\operatorname{Lie}U(\bar{\mathbb{F}}_{p}))-\min_{k}(rh+k^{2}-kh)$$
$$\displaystyle=\dim Y^{C}+\dim H^{1}(\operatorname{Gal}_{K},[U,U](\bar{\mathbb{F}}_{p}))-\min_{k}(rh+k^{2}-kh).$$
On the other hand, $Z^{C}$ is an affine bundle over
$W^{C}$ of rank $\dim H^{1}(\operatorname{Gal}_{K},[U,U](\bar{\mathbb{F}}_{p}))$.
Thus
$$\dim Z^{C}\geq\dim W^{C}+\dim H^{1}(\operatorname{Gal}_{K},[U,U](\bar{\mathbb{F}}_{p})).$$
Finally,
$$\displaystyle\dim Y^{C}-\dim W^{C}$$
$$\displaystyle\geq\dim Y^{C}-(\dim Z^{C}-\dim H^{1}(\operatorname{Gal}_{K},[U,U](\bar{\mathbb{F}}_{p})))$$
$$\displaystyle\geq\dim Y^{C}-(\dim Y^{C}+\dim H^{1}(\operatorname{Gal}_{K},[U,U](\bar{\mathbb{F}}_{p}))-\min_{k}(rh+k^{2}-kh)$$
$$\displaystyle\hskip 28.45274pt-\dim H^{1}(\operatorname{Gal}_{K},[U,U](\bar{\mathbb{F}}_{p})))$$
$$\displaystyle=\min_{k}(rh+k^{2}-kh)\qed$$
References
[aut]
The Stacks project authors
“The Stacks project”, 2017
[BG19]
Rebecca Bellovin and Toby Gee
“G-valued local deformation rings and global lifts”
In Algebra and Number Theory 13, 2019, pp. 333–378
[EG21]
Matthew Emerton and Toby Gee
““Scheme-theoretic images” of morphisms of stacks”
In Algebraic Geometry, 2021
[EG23]
Matthew Emerton and Toby Gee
“Moduli stacks of étale $(\varphi,\Gamma)$-modules and the
existence of crystalline lifts”, 2023
[GHS18]
Toby Gee, Florian Herzig and David Savitt
“General Serre weight conjectures”
In J. Eur. Math. Soc. (JEMS) 20.12, 2018, pp. 2859–2949
DOI: 10.4171/JEMS/826
[Koc02]
Helmut Koch
“Galois theory of $p$-extensions” With a foreword by I. R. Shafarevich, Translated from the 1970
German original by Franz Lemmermeyer, With a postscript by the author and
Lemmermeyer, Springer Monographs in Mathematics
Springer-Verlag, Berlin, 2002, pp. xiv+190
DOI: 10.1007/978-3-662-04967-9
[Le+23]
Daniel Le, Bao V. Le Hung, Brandon Levin and Stefano Morra
“Local models for Galois deformation rings and applications”
In Invent. Math. 231.3, 2023, pp. 1277–1488
DOI: 10.1007/s00222-022-01163-4
[Lin21]
Zhongyipan Lin
“Lyndon-Demuškin method and crystalline lifts of
$G_{2}$-valued Galois representations”, 2021
URL: https://sharkoko.space/pdf/lyndon-g2.pdf
[Lin23]
Zhongyipan Lin
“A Deligne-Lusztig type correspondence for tame $p$-adic
groups”, 2023
arXiv:2306.02093 [math.NT]
[Lin23a]
Zhongyipan Lin
“Extension of Crystalline representations valued in general
reductive groups”, 2023
URL: https://sharkoko.space/pdf/unobs.pdf
[Lin23b]
Zhongyipan Lin
“The Emerton-Gee stacks for tame groups, I”, 2023
arXiv:2304.05317 [math.NT]
[Lin23c]
Zhongyipan Lin
“The Emerton-Gee stacks for tame groups, II”, 2023
[Sti10]
Jakob Stix
“Trading degree for dimension in the section conjecture: the
non-abelian Shapiro lemma”
In Math. J. Okayama Univ. 52, 2010, pp. 29–43
[Wei94]
Charles A. Weibel
“An introduction to homological algebra” 38, Cambridge Studies in Advanced Mathematics
Cambridge University Press, Cambridge, 1994, pp. xiv+450
DOI: 10.1017/CBO9781139644136
Who Does What? Work Division and Allocation Strategies of Computer Science Student Teams
Anna van der Meulen
Leiden Institute of Advanced Computer Science
University of Leiden, The Netherlands
[email protected]
Efthimia Aivaloglou
Leiden Institute of Advanced Computer Science
University of Leiden, The Netherlands
Open University of The Netherlands
[email protected]
Abstract
Collaboration skills are important for future software engineers. In computer science education, these skills are often practiced through group assignments, where students develop software collaboratively. The approach that students take in these assignments varies widely, but often involves a division of labour. It is then debatable whether collaboration still takes place. The discipline of computing education is especially interesting in this context, because some of its specific features (such as the variation in entry skill level and the use of source code repositories as collaboration platforms) are likely to influence the approach taken within groupwork. The aim of this research is to gain insight into the work division and allocation strategies applied by computer science students during group assignments. To this end, we interviewed twenty students of four universities. The thematic analysis shows that students tend to divide up the workload to enable working independently, with pair programming and code reviews often being employed. Motivated primarily by grade and efficiency factors, students choose and allocate tasks based on their prior expertise and preferences. Based on our findings, we argue that the setup of group assignments can limit students’ motivation for practicing new software engineering skills, and that interventions are needed towards encouraging experimentation and learning.
Index Terms:
Computing education, group projects, teamwork, programming
I Introduction
Courses in computer science curricula often involve software engineering projects that are assigned to groups of students. Group assignments are commonly the first experiences that computer science students gain in developing software collaboratively. Through group assignments, students get the opportunity to work on software projects that are of larger scale than individual course projects can be. At the same time, group projects enable students to practice their collaboration skills [1], which are important in the software development industry [2, 3, 4], with current industrial trends promoting cooperative working techniques such as shared code ownership and pair programming [5]. However, even though collaboration and teamwork skills are important for the next generation of software engineers, it has been found that communication and teamwork skills are areas where graduating computer science students frequently fall short of the expectations and work requirements of industry [6, 7].
Research in the area of group assignments has highlighted their advantages and disadvantages related to labour market preparation [8, 9, 10, 11, 12]. Researched topics include the formation and set up of groups, differences in contributions between team members, and grading [9, 10, 11, 12]. Further, group assignments are also used as an instructional strategy in the form of team-based learning [13] and collaborative learning [14, 15]. Their documented benefits concern the learning process itself, with actively working together towards a mutual learning goal [16] having been found to be more effective, compared to individual approaches, in certain types of learning [17, 15]. This can be understood from the cognitive load theory, which describes the limitations of the individual working memory during complex tasks and the benefits of sharing this task in a group [17]. Together, these different research lines form the basis of understanding the set up, experiences, and outcomes of students’ group assignments.
Groupwork in the field of computing education has particular characteristics that can influence how students approach and experience their group assignments. These characteristics include the fact that students enter computer science courses with varying levels of prior programming experience [18, 19], that they could be given the opportunity to apply practices like pair programming in their assignments and, finally, that within software projects source code repositories can be used as collaboration platforms, making process data [20] available. The way in which students approach their group assignment and the workload, which often involves some division of labour [21, 22, 23], might be affected by these particular aspects of computing education.
The aim of this study is to examine the following research question: What are the work division and allocation strategies that university-level computer science students employ during their group assignments? This is approached from the students’ perspective of these strategies, in order to gain an in-depth understanding of their experiences and motivations. To answer our research question, we interviewed 20 final-year Bachelor’s and Master’s students from four research-intensive universities in the Netherlands about their experiences and perceptions on group programming assignments throughout their studies.
Our thematic analysis revealed that students tend to initially divide up the workload so that they can work independently, while commonly employing practices such as code reviews and pair programming. Their motivation while dividing and distributing the workload is most often the grade outcome and efficiency, and rarely the potential learning benefits. The allocation of tasks to group members is commonly guided by the preferences, skills, and expertise that the members already possess, with students often identifying specific tasks (for example, front-end development) that they prefer to take on.
II Background and Related Work
II-A Collaboration Strategies
The overall approach of students to groupwork in educational settings can vary significantly [21]. Division of labour has been found to be a common starting point [21, 22, 23]. This entails that, at the start of an assignment, students divide up the work into separate tasks, and assign a task and related responsibility to each group member who works on and completes this task individually [21, 23]. Different ways of working, some of which entail division of labour, have also been identified [21, 22]. These include pair collaborations (where two members of the group work together on an activity), group collaborations (where multiple members of the group work together on an activity), and delegation (involving one individual taking sole responsibility, for instance, for an overall check of the end product) [22]. Studies identifying these approaches come from various fields and topics, including writing assignments [22] and engineering projects [23].
From the students’ perspective, an important consideration in deciding which approach to take, and favoring the division of labour, is ‘efficiency of work progress’ [22]. The reasoning behind dividing up the work is that it allows students to focus on or specialize in their specific tasks and, combined with delegation, to choose which students are most suited for which aspect. Students can, however, also take into account ‘quality of the work process’, which can result in them choosing strategies of pair or group collaboration [22]. This difference in prioritizing work efficiency or process is interesting from the perspective of educational aims within groupwork. Previously, the distinction between ‘cooperation’ and ‘collaboration’ has been made [21]. Cooperation specifically refers to the strategy of dividing up a group task and having the resulting parts completed entirely individually by the group members [24]. Collaboration, in turn, explicitly involves a continued process to construct and maintain a shared concept of a problem [25]. Consequently, which approaches are taken by students, and whether a certain approach should be encouraged, are important from the perspective of the aim of a group assignment as well. Furthermore, the starting point of division of labour is likely to be highly prevalent in computing education, because of the different levels of prior knowledge and expertise that computing students can have [18, 19].
II-B Computer Science Work Division and Allocation Strategies
Within the approach to divide up an overall assignment into subtasks and responsibilities, an important step consists of the allocation of these different parts of the work to the group members. Insights on the allocation of work mainly come from research on professional software development teams. The issue of task allocation has received special attention within the context of distributed software development [26], where it is recognized as a major challenge due to an insufficient understanding of the criteria that influence task allocation decisions [27].
When introducing self-organizing teams in agile software development, the most important barrier has been found to be the developers’ highly specialized skills and the corresponding division of work [28]. At the same time, expertise coordination has been found to be crucial among software development teams, since it strongly affects team performance [29]. The performance of software teams was also found to be positively impacted by knowledge diversity and a proper level of task conflict, indicating disagreement among team members regarding the content of the tasks being performed [30]. Lin et al. quantitatively analyzed the effects of team members’ competence and task difficulty on their workload variation [31], while Amrit examined the effect of social network structures on task distribution [32]. Specific team roles in software teams (for example, team leader, systems analyst, programmer) have been linked to personality traits [33, 34]. Overall, it appears that, in the professional field of software development, specialization of skills and diversity of knowledge are important factors in teamwork, which can be both challenging and potentially beneficial.
Examining groupwork in computing education, Lingard and Berry analyzed data from 39 teams working on software projects and found a significant correlation between team project success, team synergy and the degree to which work is equitably shared among team members [35] without, however, examining specific work distribution strategies. Process data can give insight into work distribution. Automated tools have been proposed for monitoring student collaboration and contributions, including tools utilizing students’ wikis and software version control system (svn) repositories [36], analyzing online team discussion transcripts to visualize team mood, role distribution and emotional climate [37], and calculating productivity metrics describing student contributions in their git repositories [38].
III Methodology
The goal of this study is to explore the strategies that university-level computer science students employ to approach the workload in group programming assignments. Our focus is on their work division and task allocation strategies. We aim to gain insights into the extent to which collaboration occurs and the extent to which new knowledge is gained and practiced. To this end, 20 semi-structured interviews were conducted with students from computer science departments of four universities in the Netherlands. The following paragraphs describe the participants and the interview and data analysis process.
III-A Participants
Twenty computer science students participated in the research. Participants were invited during their classes and lab sessions, as well as through Slack and other student communication channels. Students who were at a later stage of their study, i.e., final-year Bachelor’s (twelve participants) or Master’s (eight participants), were recruited in order to capture a wide range of experiences with group programming assignments. Further, students were recruited from four different public research-intensive universities in the Netherlands, including a university of technology and a university that offers distance learning opportunities. After 20 interviews, it was determined that no new information was being gathered and thus saturation was reached.
At least four students from each university were included. Four of the Master’s students had completed their Bachelor’s degrees at other universities, three of which were in other countries. The participants indicated that they originate from (alphabetically): Bangladesh, India, the Netherlands, Switzerland, Ukraine and the U.S. Gender was reported by all participants (5 female and 15 male), age by most (the known age range was between 20 and 33), and some participants reported having autism or a speech disorder. The self-reported expected grade of the participants for the degree that they were pursuing varied from 6.5 to 9, with a mean value of 7.5 out of 10.
III-B Interview Process
The current research question on students’ collaboration and work division strategies was part of a broader project on students’ experiences with group programming assignments. The interview protocol consisted of 11 questions in five topics: background, experiences with group programming assignments, perceptions on assignment setup, experiences with grading, and perceptions on grading. In order to answer the current research question, information from the background and the first two topics was used. The interview questions for these topics can be found in Table 1.
After the first two interviews, with one male and one female student, the scope and questions of the interview protocol were reconsidered, after which they remained unchanged for the other interviews. Ten of the interviews were conducted face-to-face and the other ten through Skype. Of all sessions voice recordings were made, which were transcribed with automatic transcription software. A manual check and correction of the transcriptions was performed after this. The length of the interviews ranged from 12 to 58 minutes, with an average length of 23 minutes.
III-C Data Processing
The methodological framework followed in processing the data was a thematic analysis approach [39]. In this approach, patterns (referred to as themes) are identified in qualitative data in order to organize and describe the data and, further, to interpret them in the context of the research topic. Thematic analysis is flexible in that the determination of themes can be both theory- and data-driven [40, 39]. For the current research, the main themes were theory-driven, while subthemes (referred to below as labels) within these main themes were data-driven, generated from the interview data. This application of a thematic analysis approach is fitting since we build upon existing knowledge on collaboration strategies, yet openly examine how this takes form in the specific population of computer science students. The process is described in detail below.
First, seven themes were determined based on the research questions of the project, closely in line with the topics of the interview protocol: student profile, experience with group assignments, work division and collaboration strategy and motivation, task allocation strategy, organisation of the group work, experience with grading, and perception on grading. Next, labels were developed within these themes. Initially, the two researchers independently labelled two interviews. For example, within the theme “work division and collaboration strategy and motivation” the label “how to divide/approach work division” was generated. The labels were compared and, together with the themes, discussed by the researchers, after which the theme “experience with group assignments” was added. Both researchers continued to independently label four additional interviews to continue generating appropriate labels. After discussing together again, the researchers determined fixed labels within each theme.
Second, all interviews were coded using these labels. For the current study, only the themes student profile, experience with group assignments, work division and collaboration strategy and motivation, and task allocation strategy were further processed. The interviews were divided between the researchers. Doubts as to which label was appropriate for a given excerpt were discussed.
Third, for each theme the information was integrated across participants, as well as when applicable integrated across themes. For example, information on the profile of the students (institution and current degree) was included in the description of the other themes.
IV Results
The participants have been assigned random numbers and are referred to as S1 to S20. The quotes that are included in the description are verbatim. To protect the anonymity of the participants, identifiable information has been suppressed and personal pronouns have been converted to the male form. The results are analyzed under two main parts in relation to the themes: (1) work division strategy, and (2) task allocation strategy.
In terms of experience with group programming assignments, almost all students (18) responded that they have participated in several group assignments during their studies as part of several courses. Group size varies; students refer both to groups of two people and, commonly, of larger size. Two undergraduate students from the same university reported having none to very limited (only one group assignment with one other person) experience.
IV-A Work Division Strategy
All students who have participated in group assignments described work division strategies. Table 2 presents an overview of the work division strategies grouped under three sub-themes: project startup, collaboration after initial division, and motivation factors. Even though this study is not a quantitative study, Table 2 includes the frequency of repetition of each label, to allow identification of the most commonly mentioned labels.
IV-A1 Project startup
All students who have participated in group assignments describe the process of getting started with the assignments. The same main strategy was described by all but one of these students: dividing up the overall assignment into sub-components or tasks that everyone can continue to work on by themselves. The student who does not refer to this strategy describes the process of working together in a pair.
Surrounding this main process of dividing up the overall work and having group members continue on their own, different additional strategies were described. Explaining how they start up the project, eight students talk about looking at and agreeing about the content of the assignment together, by going through the work, asking each other questions about it, making an overview, agreeing on the scope or end product, or, as one participant describes, even starting with creating the basics together. For instance:
•
“What we do is first thing we just looked through it, we take like one day maybe to all read the assignment and ask questions if something is not clear. But when everybody reads the assignment and everything is clear, we, the first thing we do usually, is just try to split it into parts” (S8), and
•
“So we’ve all made a list of topics and then discuss which ones everybody thought for themselves. And then we pick, we had discussed them and picked one and that one we worked on” (S1).
Five other students talk about getting started by looking at the qualities and experiences of all group members, and four participants only mention the start of the process as directly dividing up the work without additional discussions or considerations. One student describes the experience of different ways to get started with the group assignment, either explicitly defining roles and tasks or “it just all clicked” (S4) and everyone started working.
IV-A2 Collaboration after initial division
After the start of the project, there are a variety of ways (mentioned by 12 students) through which group members keep in touch, re-group, and in general collaborate once everyone starts working on their individual parts. There is quite some difference in how extensive this contact appears to be, ranging from regularly discussing and keeping each other updated to only asking questions when needed. Three approaches emerge: (1) a regular check-in, for instance in weekly meetings, to discuss and possibly set new tasks, and/or being in touch when needed; (2) being in touch only in case of a question or problem (mentioned by three students); and (3) putting things together only at the end (mentioned by one student).
Ten students also mention the aspect of reviewing each other’s code. One student indicates that they rarely do any code reviews, and two participants say they do it to a limited extent, checking the code briefly or only when there is a problem. The majority, seven students, talk about more extensive peer review, with two even indicating that they set rules or make agreements about checking each other’s code, for instance always having someone double check another person’s code:
•
“And our rule is that when someone has dragged it to the check column, someone else has to check on Github whether the code is actually correct and they drag it to the done column. So if everything goes well, everything will be checked before it’s actually considered done” (S17).
One student indicates that the importance of code review is stressed by the teacher, whereas three students say that they themselves want to review the code to make sure that they understand everything, and because (according to one) this improves the quality or (according to another) it is necessary to make sure the code can be connected. As one student describes:
•
“So that’s how we felt like to, you know, all have a mutual understanding of what the code of the project is, that we should review each other’s code and also of course for quality” (S5).
Some students also reflect on checking each other’s work and code. Two students indicate, within their view on code reviews, that they do trust the other team members. Two other students describe the process of distinguishing between giving suggestions and pointing out actual errors, which they do, and really perfecting the algorithm or implementing changes in others’ work, which they don’t. As one of these students indicates:
•
“So in the end, it’s also you, you look at each other’s work and you give suggestions. You can propose your ideas, you can propose some changes, but you never really make those changes yourself, even though you know, you can have a better, sometimes I know I have a better idea” (S8).
Two other, specific ways of working are also brought forward. The first is pair programming, mentioned by 12 students. Six of these students indicate that they do pair programming within group assignments, two of whom only occasionally, which depends on whether there are exercises in the assignment for which this is a good fit, whether others also prefer this way of working, and whether it works out practically. Two students explicitly describe their reasons for preferring pair programming: you can ask questions, share your ideas, check with each other, and it is more creative. As one student says:
•
“I prefer pair programming, so you can like share your ideas and like the logic and someone can check it also like whether you are doing it correctly or not” (S6).
Five other students indicate that they do not engage much, or at all, in pair programming, either because it is not common in their university, it is not common within group assignments, time constraints make it difficult, or because they do not prefer it. One student only engages specifically in pair debugging. One other student indicates that there are advantages in terms of learning in pair programming only for the member of the pair who has a lower skill level:
•
“Even though you might learn a lot […] or you can give someone a lot. But for someone that knows a lot, uh, it’s a time waste because how does he benefit from it?” (S18).
The second specific way of working is described by only one student, and concerns an approach within group assignments where both group members work separately on the same parts, to later compare and then take the one that is best.
IV-A3 Motivation for work division and collaboration strategy
Several motives are described for the selection of work division and collaboration strategies. Most students explain that they engage in the approach of splitting up the work because it helps them get a good or sufficient grade. For instance:
•
“We both wanted to have a high grade and use the techniques that were discussed in the literature, but we didn’t know exactly what the capabilities of the other were. So, someone can say, I can do this, but you don’t know whether it is according your own standards. So that’s why we had to find out, what’s your level of experience? What’s the way you code? […] So you get to know each other and, and then make a decision how to make the best of it” (S16), and
•
“The end goal is usually to get the highest grade possible. So that’s different from learning the most possible” (S17).
Work division is also preferred because it involves less work for everyone and is necessary in terms of time management, it has learning benefits, it is a fair approach where it is clear what everyone has to do, or because it is not possible/useful to both work on the same code. Reasons to get started together include that it is important to make sure everyone follows the same process, to understand and sketch out the assignment together, and to see what everyone’s talents are. Two students mention that how to divide up the work depends on the specific assignment.
IV-B Task Allocation Strategy
Most (17) students who have participated in group assignments described task allocation strategies. Table 3 presents an overview, grouped under two sub-themes: the assignment of specific tasks, and the assignment of team roles.
IV-B1 Assignment of specific tasks
Continuing on the process of dividing up a group assignment into sub-components or tasks, almost all (17) students discuss the assignment of specific tasks to group members. Often mentioned is to consider the skills, experience, or interests that group members have, and to divide up and assign tasks accordingly. Specifically, eight students indicate that parts are assigned based on the team members’ skills, experiences, what they are good at, or, as one student indicates, what is familiar to them. One student describes that this is difficult to determine at first, and that it is necessary to adjust along the way:
•
“The sort of like the amount of progress any single person can make. Um, and that’s just sort of like, well, how did you split it up? Is this person better at this kind of problem or this kind of problem? Or maybe this person is just much more productive or just much better at programming or much smarter” (S7).
Further, nine students in total mention considering the preferences of each team member:
•
“It was basically just you do what you feel, um, you should do” (S1), and
•
“Since I have some prior expertise or like I feel like taking this part” (S10).
Interestingly, three students describe how the skills developed through their studies affected the tasks they would take on. Two of these students indicate that, at the beginning of their studies, they were less experienced and had to take on the easier parts, whereas, as one of them specifically describes, later on they could take on more:
•
“Um, in the earlier projects you, they’re quite simple and uh, there’s always someone who is a lot better than the rest. So you can give those people who are fairly new to programming the more learning-based tasks” (S4), and
•
“Well as I experienced it, like in the beginning, uh, the first year it was like I wasn’t very good at programming. So, uh, it was very hard with, uh, I did like a small percentage of the tasks but, as I got better, I did higher percentages and like, uh, did more work for the programming assignments” (S13).
The other student refers more to a general development of people becoming experts and specialization becoming possible:
•
“So in the early group projects, um, so early in the Bachelor’s you have to do certain things, like everybody has to program this or that. Um, but as you go later in the Bachelor, you get more freedom. So the more freedom we got, the more we decided to really let the person do what they’re very good at to, to work as efficiently as possible” (S5).
Apart from mentioning expertise or preference at a general level, several students also describe other or more detailed motivations to choose or to be assigned specific sub-components of the assignment. Seven students refer to the content of particular sub-components. The students talk about specific types of activities (often mentioning both what they do like and what they don’t like), including user interface and visual aspects versus algorithmic problems, back-end programming, background logistics, web development, trying out different algorithms, designing the structure versus programming, or more abstract aspects, such as work on the overview and the logistics behind the project. For example,
•
“I usually work with friends who love these algorithmic problems and then they take them and then I’ll take, you know, the user interface, the visual things, graphical work, mathematical work” (S3), and
•
“I usually leave the heavy back-end programming for the people I know that are better at that then than I am. So I was very soon in, like early in my bachelor’s, I recognized that that is not my best part in programming. Um, so I always try to leave that to people who are better at it than I and take the parts that I felt I’m good at and that I can work with” (S5).
One student has experienced that certain parts are expected of her because of her being social or being the only girl in the group:
•
“I always ended up doing the presentation, either because I look relatively social or because I was the girl in the group or whatever” (S2).
Finally, three students do not specify concrete factors for their choices, either specifying they really don’t have any true interest, that they just pick what needs to be done first, or that it depends on the project.
IV-B2 Assignment of roles
Eight students talk about the roles that are assigned within group projects, referring not to tasks or activities but to the place members have within the group. All but one student talk specifically about one person taking on a leadership role:
•
“Within all the other groups, it was always someone who took the lead and that doesn’t necessarily mean uh, content wise, but maybe just communicating with the teacher, making sure everybody’s there on time” (S1), and
•
“And you can all be in a group, you can all be equals, but at some point you, you’ll find someone who will take on some kind of leader form. But if you don’t have that person, the group can become kind of unguided” (S2).
One student who does not refer to someone taking on a leadership role only mentions himself preferring to stay in the background and having others make the decisions. Of the other seven, all but one indicate that they have taken on this leadership role themselves. The remaining student has not experienced anyone taking this role, but thinks it is something that should happen and that should be implemented as part of the assignment:
•
“So either there should be a system like that where you need to pick who is responsible for the whole group, who is responsible for particular parts of a group or the teacher could assign specific, like who could break down the assignments and confirm who is going to do which part” (S9).
For four students, taking on the role of leader is a common and deliberate choice:
•
“I like to have a general overview of the project. Right. So usually connecting everything and uh, UI related things” (S5), and
•
“So, most of the time I start the project, um, and lead and we will just decide who is best in what part and we’ll really split it up. And if someone is having a hard time, we will meet up and sit together. That’s what I like to do best” (S19).
Two other students have only taken on the leadership role occasionally, which seems to depend on the specific group.
V Discussion
The research question of this study was: what are the work division and allocation strategies that university-level computer science students employ during their group assignments? Overall, our findings indicate that students tend to divide up the work, and choose and assign tasks primarily based on their preferences and prior expertise. At the same time, several joint practices are mentioned, such as brainstorming sessions at the start, regularly checking in, and adjusting tasks and responsibilities along the way. The motivations of the students were found to mainly include grading and efficiency but also, less commonly, wanting to learn and having a fair approach for all group members.
V-A Dividing and Allocating Based on Preferences and Prior Expertise
The computer science students participating in this research appear to be well aware of the approach they take in their group assignments and what they take into consideration. Clearly, division of labour is central in their approach, in line with previous insights from education in different disciplines [21, 22, 23] as well as from professional software development teams [29, 30]. The way in which the students describe their approaches also illustrates that this division of labour can take diverse forms, often involving several joint practices. These primarily take place at the start, when students discuss the scope of the assignment or get started with the basics. In addition, however, some practices continue throughout the working process, such as checking each other’s code or reconvening to discuss the progress and next steps.
Our findings indicate that prior programming knowledge and expertise are among the deciding factors for task allocation in student teams. This is in line with findings on the work distribution of professional software development teams, where expertise, along with the availability of people, are the most important criteria for task allocation [27], as well as with findings from engineering students [23]. The students in this research often indicate that they are motivated by specific aspects of their studies (for example, focusing on user interface and visual aspects, or focusing on algorithmic programming tasks), also suggesting the presence of early specialization. Some students also describe how their relatively low programming skills at the start of their studies impacted their role in groupwork. At the same time, however, previous research on groupwork as an instructional strategy has looked at the effects of asymmetry in knowledge or expertise between members on effective collaboration [15], substantiating that different levels of expertise and, likely related, of preference are factors that commonly shape groupwork approaches.
V-B Presence of Collaborative Practices
The identification of collaboration and work division strategies that students engage in, especially in the context of the specific field, is valuable in itself. However, it also gives insight into the extent to which these practices do in fact reflect collaboration. Setting a group of students together with an assignment does not automatically entail that collaboration takes place [21]. Especially concerning the approach of dividing up the work, it has been questioned whether collaboration, referring to a joint thinking process [25], occurs [21]. However, although the students in this study clearly favour a setup where a significant portion of the groupwork is done by them individually, the mix of practices they use, as well as their descriptions of their experiences and motivations, also suggests that elements of collaboration are present overall. For instance, brainstorming together at the start and regularly checking in does appear to be in line with a joint thinking process. There are also several practices that reflect that students take collaboration into account. As described quite insightfully by two students, they consider what the correct practice is when checking other group members’ code. They will check and point out errors, but not actually do the other person’s work or implement their own ideas. This could be seen as sophisticated collaborative behavior. It has also previously been argued that collaboration and cooperation are ultimately not as distinct as they are often presented [41]. Our findings show that a hybrid approach is often applied, where parts of the groupwork are completed entirely by individual members of the team, while at the same time a joint thinking process occurs to a certain extent.
Further, the question arises of whether this approach is in line with the purpose of the students’ assignments and education. It remains questionable whether the overall approach of labour division serves the learning goals of computer science study programs. If team members repeatedly opt to work on the tasks that are most familiar to them throughout the group projects within the computer science curriculum, this could indicate that they are led to premature specialization. This was evident in the responses of some students who revealed commonly being assigned tasks of a specific nature, for example front-end development, because they were the best in their teams at these specific tasks. Mitigating this tendency to specialize in specific software development tasks might not be trivial, as long as team efficiency and performance are what motivates task allocation decisions. An interesting consideration here is whether such specialization is to a certain extent desired, both as a natural part of their studies and for their future professional software engineering career. Ultimately, the fact that specialization of skills and diversity of knowledge are seen both as a challenge and an advantage in professional software development teams [29, 30] reflects that this is part of a larger discussion on the optimization of teamwork in software development.
V-C Limitations
Our research was based on interviews conducted with a small number of students from four universities in one country. The experiences and perceptions of this student population may differ from those of other students in the same or other institutions, countries and cultures. Additionally, students who consent to participate in interviews about group work and to have their answers used in research projects may not reflect the general student population. Moreover, the reason behind the choice of recruiting final-year Bachelor’s students and Master’s students was to include participants with sufficient experience to describe and to inform their perceptions. This, however, left inexperienced students out of our sample. In the interviews the students often described experiences from the early years of their studies and their perceptions at the time, but their reflections might have been influenced by the experience that they have gained since then.
Regarding the internal validity of our study, a threat is the social pressure that the respondents might have felt when disclosing their perceptions about group work and about the policies they have encountered during their studies. Overall, all students appeared comfortable during their interviews and did not seem hesitant to give their honest opinions. Still, they might have answered differently with another interviewer or in a more anonymous data collection setting. The interviews differed considerably in total duration, in whether they were conducted in person or online, and in whether they were conducted in the native language of the students. All students did, however, seem at ease and fluent in English, and variation in interview duration is not uncommon in a semi-structured approach. Concerning data processing, in a thematic analysis of the type of data included in the current research, decisions on the approach are guided by the underlying aim of the study [39]. In our case the aim was to explore the perceptions of students; therefore, within our pre-specified topics, the approach was data-driven. The different experiences and ideas of the students were integrated and described extensively, giving context and providing quotes to illustrate and substantiate our interpretation.
VI Concluding Remarks
The aim of this study was to explore the experiences and perceptions of computer science students regarding their work division and allocation strategies within group assignments. The use of semi-structured interviews proved valuable, since it revealed the students’ underlying motivations and reasoning within the overall favored approach to division of labour. The hybrid approach that computer science students appear to take, in which mostly individual completion of tasks is combined with several practices that suggest a joint thinking process (including brainstorming sessions at the start, regularly checking in, and adjusting tasks and responsibilities along the way), provides an understanding of the way in which group work is approached in the specific field of computer science. Moreover, these findings show that it is important to consider what the educational aims of group assignments are in the first place, and how these aims can best be fostered in the setup and instruction of the assignments.
Our results suggest several possible directions for future work. The effect of varying skills and prior programming experience, as well as of other possible characteristics such as gender, could be studied in depth through both qualitative and quantitative studies. A larger-scale quantitative study could also assess whether and how the factors identified through our interviews are interrelated. It is important that future research on group work in programming assignments takes into account the aspects in which this type of group work is similar to, or deviates from, group work in other disciplines and topics. Further, research on how the educational aims of group assignments in computing education can be fostered should include measures and interventions for encouraging students to practice new software engineering skills and take on tasks that they are not already familiar with. Finally, the relation to future work roles and expectations within software engineering teams could be researched further.
Data availability
In order to protect the privacy of the participants in our study and due to potentially identifiable information in the interview transcripts, the data collected in this research are not made publicly available.
Acknowledgments
The authors would like to thank Marina Milo (Vrije Universiteit Amsterdam) for helping with setting up the interviews and processing the interview transcripts. We would also like to thank the students who were involved in this research; we are grateful for their time and for sharing their experiences and insights with us.
References
[1]
E. Pfaff and P. Huddleston, “Does it matter if I hate teamwork? What impacts
student attitudes toward teamwork,” Journal of Marketing Education,
vol. 25, no. 1, pp. 37–45, 2003.
[2]
M. Hewner and M. Guzdial, “What game developers look for in a new graduate:
Interviews and surveys at one game company,” in Proceedings of the
41st ACM Technical Symposium on Computer Science Education, ser. SIGCSE
’10. New York, NY, USA: Association
for Computing Machinery, 2010, p. 275–279. [Online]. Available:
https://doi.org/10.1145/1734263.1734359
[3]
P. L. Li, A. J. Ko, and J. Zhu, “What makes a great software engineer?”
in 2015 IEEE/ACM 37th IEEE International Conference on Software
Engineering, vol. 1, May 2015, pp. 700–710.
[4]
C. Scaffidi, “Employers’ needs for computer science, information technology
and software engineering skills among new graduates,” International
Journal of Computer Science, Engineering and Information Technology, vol. 8,
pp. 1–12, 2018.
[5]
V. Tirronen and V. Isomöttönen, “Making teaching of programming
learning-oriented and learner-directed,” in Proceedings of the 11th
Koli Calling International Conference on Computing Education Research, ser.
Koli Calling ’11. New York, NY, USA:
Association for Computing Machinery, 2011, p. 60–65. [Online]. Available:
https://doi.org/10.1145/2094131.2094143
[6]
A. Radermacher and G. Walia, “Gaps between industry expectations and the
abilities of graduates,” in Proceeding of the 44th ACM Technical
Symposium on Computer Science Education, ser. SIGCSE ’13. New York, NY, USA: Association for Computing
Machinery, 2013, p. 525–530. [Online]. Available:
https://doi.org/10.1145/2445196.2445351
[7]
M. Craig, P. Conrad, D. Lynch, N. Lee, and L. Anthony, “Listening to early
career software developers,” J. Comput. Sci. Coll., vol. 33, no. 4,
p. 138–149, Apr. 2018.
[8]
S. B. Feichtner and E. A. Davis, “Why some groups fail: a survey of students’
experiences with learning groups,” Organizational Behavior Teaching
Review, vol. 9, no. 4, pp. 58–73, 1984. [Online]. Available:
https://doi.org/10.1177/105256298400900409
[9]
J. P. LaBeouf, J. C. Griffith, and M. C. Schultz, “The value of academic group
work: An examination of faculty and student perceptions,” The Business
Review, Cambridge, vol. 22, no. 1, pp. 32–39, 2014.
[10]
J. P. LaBeouf, J. C. Griffith, and D. L. Roberts, “Faculty and student issues
with group work: What is problematic with college group assignments and
why?” Journal of Education and Human Development, vol. 5, no. 1,
p. 13, 2016.
[11]
Y. Bentley and S. Warwick, “Students’ experience and perceptions of group
assignments,” Journal of Pedagogic Development, vol. 3, no. 3, pp.
11–19, 2013.
[12]
N. van Hattum-Janssen, “Student perceptions of group work,” University
of Minho, Research Centre in Education, 4710, vol. 57, 2013.
[13]
L. K. Michaelsen and M. Sweet, “The essential elements of team-based
learning,” New Directions for Teaching and Learning, vol. 2008, no.
116, pp. 7–27, 2008. [Online]. Available:
https://onlinelibrary.wiley.com/doi/abs/10.1002/tl.330
[14]
R. E. Slavin, “When does cooperative learning increase student achievement?”
Psychological bulletin, vol. 94, no. 3, p. 429, 1983.
[15]
P. A. Kirschner, J. Sweller, F. Kirschner, and J. Zambrano, “From cognitive
load theory to collaborative cognitive load theory,” International
Journal of Computer-Supported Collaborative Learning, vol. 13, no. 2, pp.
213–233, 2018.
[16]
S. D. Teasley and J. Roschelle, “Constructing a joint problem space: The
computer as a tool for sharing knowledge,” Computers as cognitive
tools, pp. 229–258, 1993.
[17]
F. Kirschner, F. Paas, and P. A. Kirschner, “Individual and group-based
learning from complex cognitive tasks: Effects on retention and transfer
efficiency,” Computers in Human Behavior, vol. 25, no. 2, pp.
306–314, 2009.
[18]
C. Wilcox and A. Lionelle, “Quantifying the benefits of prior programming
experience in an introductory computer science course,” in Proceedings
of the 49th ACM Technical Symposium on Computer Science Education, ser.
SIGCSE ’18. New York, NY, USA: ACM,
2018, pp. 80–85.
[19]
C. Alvarado, G. Umbelino, and M. Minnes, “The persistent effect of pre-college
computing experience on college cs course grades,” in Proceedings of
the 49th ACM Technical Symposium on Computer Science Education, ser. SIGCSE
’18. New York, NY, USA: ACM, 2018, pp.
876–881.
[20]
G. Gousios, E. Kalliamvakou, and D. Spinellis, “Measuring developer
contribution from software repository data,” in Proceedings of the
2008 International Working Conference on Mining Software Repositories, ser.
MSR ’08. New York, NY, USA:
Association for Computing Machinery, 2008, p. 129–132. [Online]. Available:
https://doi.org/10.1145/1370750.1370781
[21]
T. M. Paulus, “Collaboration or cooperation? analyzing small group
interactions in educational environments,” in Computer-supported
collaborative learning in higher education. IGI Global, 2005, pp. 100–124.
[22]
E. Sormunen, M. Tanni, T. Alamettälä, and J. Heinström, “Students’
group work strategies in source-based writing assignments,” Journal of
the Association for Information Science and Technology, vol. 65, no. 6, pp.
1217–1231, 2014.
[23]
N. Saleh and A. Large, “Collaborative information behaviour in undergraduate
group projects: A study of engineering students,” Proceedings of the
American Society for Information Science and Technology, vol. 48, no. 1, pp.
1–10, 2011.
[24]
F. Henri and C. R. Rigault, “Collaborative distance learning and computer
conferencing,” in Advanced educational technology: Research issues and
future potential. Springer, 1996, pp.
45–76.
[25]
J. Roschelle and S. D. Teasley, “The construction of shared knowledge in
collaborative problem solving,” in Computer supported collaborative
learning. Springer, 1995, pp. 69–97.
[26]
M. Simão Filho, P. Pinheiro, and A. Albuquerque, “Task allocation approaches
in distributed agile software development: A systematic review,”
Advances in Intelligent Systems and Computing, vol. 349, p. 252, 04
2015.
[27]
A. Lamersdorf, J. Munch, and D. Rombach, “A survey on the state of the
practice in distributed software development: Criteria for task allocation,”
in 2009 Fourth IEEE International Conference on Global Software
Engineering, 2009, pp. 41–50.
[28]
N. B. Moe, T. Dingsøyr, and T. Dybå, “Understanding self-organizing
teams in agile software development,” in 19th Australian Conference on
Software Engineering (aswec 2008), 2008, pp. 76–85.
[29]
S. Faraj and L. Sproull, “Coordinating expertise in software development
teams,” Management Science, vol. 46, pp. 1554–1568, 12 2000.
[30]
T.-P. Liang, C.-C. Liu, T.-M. Lin, and B. Lin, “Effect of team diversity on
software project performance,” Industrial Management and Data
Systems, vol. 107, pp. 636–653, 05 2007.
[31]
J. Lin, H. Yu, Z. Shen, and C. Miao, “Studying task allocation decisions of
novice agile teams with data from agile project management tools,” in
Proceedings of the 29th ACM/IEEE International Conference on Automated
Software Engineering, ser. ASE ’14. New York, NY, USA: Association for Computing Machinery, 2014, p. 689–694.
[Online]. Available: https://doi.org/10.1145/2642937.2642959
[32]
C. Amrit, “Coordination in software development: The problem of task
allocation,” SIGSOFT Softw. Eng. Notes, vol. 30, no. 4, p. 1–7, May
2005. [Online]. Available: https://doi.org/10.1145/1082983.1083107
[33]
N. Gorla and Y. W. Lam, “Who should work with whom? building effective
software project teams,” Commun. ACM, vol. 47, no. 6, p. 79–82,
Jun. 2004. [Online]. Available: https://doi.org/10.1145/990680.990684
[34]
N. N. V. Ferreira and J. J. Langerman, “The correlation between
personality type and individual performance on an ict project,” in
2014 9th International Conference on Computer Science Education, 2014,
pp. 425–430.
[35]
R. Lingard and E. Berry, “Teaching teamwork skills in software engineering
based on an understanding of factors affecting group performance,” in
32nd Annual Frontiers in Education, vol. 3, 2002, pp. S3G–S3G.
[36]
J. Kim, E. Shaw, H. Xu, and A. G. V, “Assisting instructional assessment of
undergraduate collaborative wiki and svn activities,” in International
Conference on Educational Data Mining, 2012.
[37]
H. Tarmazdi, R. Vivian, C. Szabo, K. Falkner, and N. Falkner, “Using learning
analytics to visualise computer science teamwork,” in Proceedings of
the 2015 ACM Conference on Innovation and Technology in Computer Science
Education, ser. ITiCSE ’15. New
York, NY, USA: Association for Computing Machinery, 2015, p. 165–170.
[Online]. Available: https://doi.org/10.1145/2729094.2742613
[38]
J. J. Sandee and E. Aivaloglou, “Gitcanary: A tool for analyzing student
contributions in group programming assignments,” in Koli Calling ’20:
Proceedings of the 20th Koli Calling International Conference on Computing
Education Research. New York, NY,
USA: Association for Computing Machinery, 2020. [Online]. Available:
https://doi.org/10.1145/3428029.3428563
[39]
V. Braun and V. Clarke, “Using thematic analysis in psychology,”
Qualitative research in psychology, vol. 3, no. 2, pp. 77–101, 2006.
[40]
E. Blair, “A reflexive exploration of two qualitative data coding
techniques,” Journal of Methods and Measurement in the Social
Sciences, vol. 6, no. 1, pp. 14–29, 2015.
[41]
J.-W. Strijbos, R. L. Martens, W. M. Jochems, and N. J. Broers, “The effect of
functional roles on perceived group efficiency during computer-supported
collaborative learning: a matter of triangulation,” Computers in Human
Behavior, vol. 23, no. 1, pp. 353–380, 2007. |
\noblackbox\lref\GiveonZN
A. Giveon and D. Kutasov,
“Seiberg Duality in Chern-Simons Theory,”
Nucl. Phys. B 812, 1 (2009).
[arXiv:0808.0360 [hep-th]].
\lref\bult
F. van de Bult,
“Hyperbolic Hypergeometric Functions,”
http://dare.uva.nl/document/97725.
\lref\BeniniMF
F. Benini, C. Closset and S. Cremonesi,
“Comments on 3d Seiberg-like dualities,”
JHEP 1110, 075 (2011).
[arXiv:1108.5373 [hep-th]].
\lref\WillettGP
B. Willett and I. Yaakov,
“N=2 Dualities and Z Extremization in Three Dimensions,”
[arXiv:1104.0487 [hep-th]].
\lref\Rainslimits
E. M. Rains,
“Limits of elliptic hypergeometric integrals,”
[arXiv:math.CA/0607093].
\lref\Rainstrans
E. M. Rains,
“Transformations of elliptic hypergeometric integrals,”
Ann. Math. 171, 1 (2010).
\lref\DolanQI
F. A. Dolan and H. Osborn,
“Applications of the Superconformal Index for Protected Operators and q-Hypergeometric Identities to N=1 Dual Theories,”
Nucl. Phys. B 818, 137 (2009).
[arXiv:0801.4947 [hep-th]].
\lref\SpiridonovZA
V. P. Spiridonov and G. S. Vartanov,
“Elliptic Hypergeometry of Supersymmetric Dualities,”
Commun. Math. Phys. 304, 797 (2011).
[arXiv:0910.5944 [hep-th]].
\lref\MoritaCS
T. Morita and V. Niarchos,
“F-theorem, duality and SUSY breaking in one-adjoint Chern-Simons-Matter theories,”
Nucl. Phys. B 858, 84 (2012).
[arXiv:1108.4963 [hep-th]].
\lref\SeibergNZ
N. Seiberg and E. Witten,
“Gauge dynamics and compactification to three-dimensions,”
In *Saclay 1996, The mathematical beauty of physics* 333-366.
[hep-th/9607163].
\lref\DolanRP
F. A. H. Dolan, V. P. Spiridonov and G. S. Vartanov,
“From 4d superconformal indices to 3d partition functions,”
Phys. Lett. B 704, 234 (2011).
[arXiv:1104.1787 [hep-th]].
\lref\NiarchosJB
V. Niarchos,
“Seiberg Duality in Chern-Simons Theories with Fundamental and Adjoint Matter,”
JHEP 0811, 001 (2008).
[arXiv:0808.2771 [hep-th]].
\lref\NiarchosAA
V. Niarchos,
“R-charges, Chiral Rings and RG Flows in Supersymmetric Chern-Simons-Matter Theories,”
JHEP 0905, 054 (2009).
[arXiv:0903.0435 [hep-th]].
\lref\RomelsbergerEC
C. Romelsberger,
“Calculating the Superconformal Index and Seiberg Duality,”
[arXiv:0707.3702 [hep-th]].
\lref\GaddeIA
A. Gadde and W. Yan,
“Reducing the 4d Index to the $S^{3}$ Partition Function,”
[arXiv:1104.2592 [hep-th]].
\lref\ImamuraUW
Y. Imamura,
“Relation between the 4d superconformal index and the $S^{3}$ partition function,”
JHEP 1109, 133 (2011).
[arXiv:1104.4482 [hep-th]].
\lref\ImamuraWG
Y. Imamura and D. Yokoyama,
“N=2 supersymmetric theories on squashed three-sphere,”
Phys. Rev. D 85, 025015 (2012).
[arXiv:1109.4734 [hep-th]].
\lref\HamaAV
N. Hama, K. Hosomichi and S. Lee,
“Notes on SUSY Gauge Theories on Three-Sphere,”
JHEP 1103, 127 (2011).
[arXiv:1012.3512 [hep-th]].
\lref\HamaEA
N. Hama, K. Hosomichi and S. Lee,
“SUSY Gauge Theories on Squashed Three-Spheres,”
JHEP 1105, 014 (2011).
[arXiv:1102.4716 [hep-th]].
\lref\JafferisUN
D. L. Jafferis,
“The Exact Superconformal R-Symmetry Extremizes Z,”
[arXiv:1012.3210 [hep-th]].
\lref\KapustinKZ
A. Kapustin, B. Willett and I. Yaakov,
“Exact Results for Wilson Loops in Superconformal Chern-Simons Theories with Matter,”
JHEP 1003, 089 (2010).
[arXiv:0909.4559 [hep-th]].
\lref\IntriligatorNE
K. A. Intriligator and P. Pouliot,
“Exact superpotentials, quantum vacua and duality in supersymmetric SP(N(c)) gauge theories,”
Phys. Lett. B 353, 471 (1995).
[hep-th/9505006].
\lref\SeibergPQ
N. Seiberg,
“Electric - magnetic duality in supersymmetric nonAbelian gauge theories,”
Nucl. Phys. B 435, 129 (1995).
[hep-th/9411149].
\lref\AharonyGP
O. Aharony,
“IR duality in d = 3 N=2 supersymmetric USp(2N(c)) and U(N(c)) gauge theories,”
Phys. Lett. B 404, 71 (1997).
[hep-th/9703215].
\lref\SpiridonovQV
V. P. Spiridonov and G. S. Vartanov,
“Superconformal indices of ${{\cal N}}=4$ SYM field theories,”
Lett. Math. Phys. 100, 97 (2012).
[arXiv:1005.4196 [hep-th]].
\lref\SpiridonovEM
V. P. Spiridonov,
“Elliptic beta integrals and solvable models of statistical mechanics,”
[arXiv:1011.3798 [hep-th]].
\lref\KapustinXQ
A. Kapustin, B. Willett and I. Yaakov,
“Nonperturbative Tests of Three-Dimensional Dualities,”
JHEP 1010, 013 (2010).
[arXiv:1003.5694 [hep-th]].
\lref\KapustinMH
A. Kapustin, B. Willett and I. Yaakov,
“Tests of Seiberg-like Duality in Three Dimensions,”
[arXiv:1012.4021 [hep-th]].
\lref\KutasovVE
D. Kutasov,
“A Comment on duality in N=1 supersymmetric nonAbelian gauge theories,”
Phys. Lett. B 351, 230 (1995).
[hep-th/9503086].
\lref\BrodieVX
J. H. Brodie,
“Duality in supersymmetric SU(N(c)) gauge theory with two adjoint chiral superfields,”
Nucl. Phys. B 478, 123 (1996).
[hep-th/9605232].
\lref\Spiridonov
V. P. Spiridonov,
“Essays on the theory of elliptic hypergeometric functions,”
Russ. Math. Surveys 63, no. 3, 405-472 (2008).
[arXiv:0805.3135 [math.CA]].
\lref\Diejen
J. F. van Diejen and V. P. Spiridonov,
“Unit circle elliptic beta integrals,”
Ramanujan Journal 10, no 2, 187-204 (2005)
[arXiv:math/0309279].
\lref\Spiridonova
V. P. Spiridonov,
“On the elliptic beta function,”
Uspekhi Mat. Nauk 56 (1) 181-182 (2001),
(Russian Math. Surveys 56 (1) 185-186 (2001)).
\lref\Spiridonovb
V. P. Spiridonov,
“Theta hypergeometric integrals,”
Algebra i Analiz 15 (6) 161-215 (2003),
(St.Petersburg Math. J. 15 (6) (2004), 929–967)
[arXiv:math/0303205].
CCTP-2012-07
\Title
Seiberg dualities and the 3d/4d connection
Vasilis Niarchos
Crete Center for Theoretical Physics
Department of Physics, University of Crete, 71303, Greece
[email protected]
Abstract
We discuss the degeneration limits of $d=4$ superconformal indices that relate
Seiberg duality for the $d=4$ ${\cal N}=1$ SQCD theory to Aharony and Giveon-Kutasov dualities
for $d=3$ ${\cal N}=2$ SQCD theories. On a mathematical level we argue that this 3d/4d
connection entails a new set of non-standard degeneration identities between
hyperbolic hypergeometric integrals. On a physical level we propose that such
degeneration formulae provide a new route to the still elusive Seiberg
dualities for $d=3$ ${\cal N}=2$ SQCD theories with $SU(N)$ gauge group.
\Date{}

1. Degeneration schemes of partition functions and the 3d/4d connection
Quantum field theories (QFTs) related by reduction on a spatial $S^{1}$ frequently exhibit similar
properties, $e.g.$ similarities in duality and spontaneous supersymmetry breaking patterns.
One can try to trace the quantum dynamics of the compactified theory as a function of the
compactification radius \SeibergNZ, but this is typically hard.
The superconformal indices (SCIs) of $d=4$ QFTs provide an interesting new perspective on such
relations. Under an $S^{1}$ reduction the SCI, which is a partition function on $S^{3}\times S^{1}$,
reduces to the $S^{3}$ partition function of a three dimensional QFT
\refs\DolanRP\GaddeIA\ImamuraUW-\ImamuraWG. For generic gauge theories
with a known UV Lagrangian description there is a standard prescription for the computation of
$d=4$ SCIs and $S^{3}$ partition functions. The $d=4$ SCIs are expressed in terms of elliptic
hypergeometric integrals \DolanQI and the $S^{3}$ partition functions in terms of hyperbolic
hypergeometric integrals \refs\KapustinKZ\JafferisUN\HamaAV-\HamaEA. Original work
on the mathematics of the elliptic hypergeometric integrals was performed in
\refs\Spiridonova,\Spiridonovb (see \Spiridonov for a review). A lengthy treatise on
hyperbolic hypergeometric integrals, whose notation we will follow closely, is \bult. The
first paper to describe the reduction from elliptic to hyperbolic hypergeometric integrals was \Diejen.
In this framework, a field theory duality in four dimensions translates to a corresponding
duality transformation property of elliptic hypergeometric integrals. The subsequent reduction of
this transformation to hyperbolic hypergeometric integrals implies a corresponding field theory
duality in three dimensions. It is believed that every duality in four dimensions \SpiridonovZA
descends in this manner to a duality in three dimensions \DolanRP.
In practice, the descent between a four dimensional and a three dimensional duality identity
is not just a single $S^{1}$ reduction of the four dimensional SCI but a sequence of
reductions whose purpose is to remove constraining conditions on external parameters, $e.g.$ real
masses, and/or add extra parameters like Fayet-Iliopoulos (FI) terms and Chern-Simons (CS)
interactions. The latter steps are crucial at the end of the process when we read off the specifics of the
three dimensional duality from the corresponding form of the duality transformation properties of
hyperbolic hypergeometric integrals. Examples of such reductions in a field theory context have been
provided in \refs\SpiridonovQV,\DolanRP.
The mathematical implementation of these steps relies on specific degeneration schemes between
elliptic and/or hyperbolic hypergeometric integrals. Such schemes have been studied in the
mathematics literature in \refs\Rainslimits,\bult and have been implemented in
\refs\SpiridonovQV,\DolanRP,\KapustinXQ\KapustinMH\WillettGP-\BeniniMF to demonstrate
certain $d=3$ dualities on the level of $S^{3}$ partition functions. In this note we will argue that the
generic reduction between a $d=4$ and a $d=3$ duality involves more general degeneration
schemes with qualitatively new features whose study is both physically and mathematically
interesting.
For concreteness, in this paper we will focus on the example of $d=4$ Seiberg duality \SeibergPQ
and its reduction to $d=3$ Aharony \AharonyGP and Giveon-Kutasov dualities \GiveonZN.
The known route to the integral identities implied by the matching of $S^{3}$ partition functions in
Aharony/Giveon-Kutasov dualities proceeds along the lines of the following degeneration scheme.
The starting point is Seiberg duality for the $d=4$ ${\cal N}=1$ SQCD theory with gauge group
$Sp(2N)$ (also known as the Intriligator-Pouliot duality \IntriligatorNE), and the corresponding
transformation properties of SCIs, which were proven in \Rainstrans, are of the BC type.
The $S^{1}$ degeneration of these identities
becomes a duality transformation property of the so-called $I_{BC}$ top level integral.
A subsequent degeneration scheme \bult that reduces the $I_{BC}$ top level integral to the $S^{3}$
partition functions of ${\cal N}=2$ SQCD and Chern-Simons SQCD theories with gauge group $U(N)$
allows the derivation of the transformation properties implied by Aharony/Giveon-Kutasov dualities.
All the reductions involved in this particular degeneration scheme share the following
(technically convenient) features: $(i)$ they keep the number of integration variables invariant,
and $(ii)$ they can be derived by exchanging the integral with the degeneration limits.
In what follows we will call such reductions ‘standard’. We will argue that there are also more
involved reductions that do not obey $(i)$ and $(ii)$. We will call the latter ‘non-standard reductions’.
We notice that the starting point of the above scheme is not Seiberg duality for the $d=4$ ${\cal N}=1$
SQCD theory with $SU(N)$ gauge group. Since there is a direct $S^{1}$ reduction between
the $d=4$ ${\cal N}=1$ and $d=3$ ${\cal N}=2$ SQCD theories with unitary gauge group it is physically
more interesting to find a degeneration scheme between the partition functions of these theories.
This entails a more direct connection between the duality transformation properties of
the $d=4$ ${\cal N}=1$ $SU(N)$ SQCD theory \refs\Rainstrans,\DolanQI,
and the transformation properties of hyperbolic hypergeometric integrals required by
Aharony/Giveon-Kutasov dualities \bult.
Our main goal will be to discuss explicitly how this connection is implemented and what
mathematical properties it requires. We will find that
by gauging the baryon symmetry of the $d=4$ SQCD theory
we recover the Aharony/Giveon-Kutasov dualities for $U(N)$ ${\cal N}=2$ SQCD theories.
Without gauging the baryon symmetry of the four dimensional theory we obtain a mathematically
concrete route towards a long-suspected 3d Seiberg duality for $SU(N)$ ${\cal N}=2$ SQCD theories.
The latter cannot be derived using the degeneration scheme that starts with the $d=4$
$Sp(2N)$ Intriligator-Pouliot duality.
Analogous reduction schemes can be implemented for more general $d=4$ Seiberg dualities
\SpiridonovZA. The descent between Kutasov \KutasovVE and Brodie \BrodieVX dualities with
adjoint and fundamental matter to their $d=3$ descendants \refs\NiarchosJB,\NiarchosAA is an
example. We will not discuss this possibility explicitly in this paper.
2. From $d=4$ Seiberg duality to $d=3$ Aharony/Giveon-Kutasov duality

2.1. The superconformal index of $d=4$ ${\cal N}=1$ SQCD
The SCI of the $d=4$ ${\cal N}=1$ SQCD theory with $N_{f}$ pairs of quark supermultiplets
in the (anti)fundamental representation of the gauge group $SU(N_{c})$ is
\refs\RomelsbergerEC,\DolanQI
(for the precise conventions used here see also eqs. (4.6), (4.7) of the review \SpiridonovZA):
\writedef(\secsym0.0pt)\leftbracket(\secsym0.0pt)
$$\eqalign{&I_{E}^{(SU)}(N_{c},N_{f};s;t)={(p;p)_{\infty}^{N_{c}-1}(q;q)_{\infty%
}^{N_{c}-1}\over N_{c}!}\cr&\int_{{\rm I\kern-4.4ptT}^{N_{c}-1}}\prod_{j=1}^{N%
_{c}-1}{dz_{j}\over 2\pi{\bf i}z_{j}}\,{\displaystyle\prod_{a=1}^{N_{f}}\prod_%
{j=1}^{N_{c}}\Gamma_{e}(s_{a}z_{j},t_{a}^{-1}z_{j}^{-1};p,q)\over\displaystyle%
\prod_{1\leq i<j\leq N_{c}}\Gamma_{e}(z_{i}z_{j}^{-1},z_{i}^{-1}z_{j};p,q)}%
\Bigg{|}_{\prod_{j=1}^{N_{c}}z_{j}=1}}$$
(\secsym0.0pt)\eqlabeL(\secsym0.0pt)\secsym0.0𝑝𝑡\eqlabeL\secsym0.0𝑝𝑡( 0.0 italic_p italic_t ) ( 0.0 italic_p italic_t )
for the electric description, and
\writedef(\secsym1.0pt)\leftbracket(\secsym1.0pt)
$$\eqalign{&I_{M}^{(SU)}(\tilde{N}_{c},N_{f};s;t)={(p;p)_{\infty}^{\tilde{N}_{c}%
-1}(q;q)^{\tilde{N}_{c}-1}_{\infty}\over\tilde{N}_{c}!}\prod_{a,b=1}^{N_{f}}%
\Gamma_{e}(s_{a}t_{b}^{-1};p,q)\cr&\int_{{\rm I\kern-4.4ptT}^{\tilde{N}_{c}-1}%
}\prod_{j=1}^{\tilde{N}_{c}-1}{dz_{j}\over 2\pi{\bf i}z_{j}}\,{\displaystyle%
\prod_{a=1}^{N_{f}}\prod_{j=1}^{\tilde{N}_{c}}\Gamma_{e}(S^{{1\over\tilde{N}_{%
c}}}s_{a}^{-1}z_{j},T^{-{1\over\tilde{N}_{c}}}t_{a}z_{j}^{-1};p,q)\over%
\displaystyle\prod_{1\leq i<j\leq\tilde{N}_{c}}\Gamma_{e}(z_{i}z_{j}^{-1},z_{i%
}^{-1}z_{j};p,q)}\Bigg{|}_{\prod_{j=1}^{\tilde{N}_{c}}z_{j}=1}}$$
(\secsym1.0pt)\eqlabeL(\secsym1.0pt)\secsym1.0𝑝𝑡\eqlabeL\secsym1.0𝑝𝑡( 1.0 italic_p italic_t ) ( 1.0 italic_p italic_t )
for the magnetic description.
We make a short parenthesis to explain the notation. The rank of the dual gauge group will be
denoted as
$$\tilde{N}_{c}=N_{f}-N_{c}~{}.$$
$(z,p)_{\infty}$ is the $q$-Pochhammer symbol (thus $(p,p)_{\infty}$ is equivalent to the
Euler function $\phi(p)$) and $\Gamma_{e}(z;p,q)$ the elliptic $\Gamma_{e}$-function (we refer the
reader to \refs\SpiridonovZA,\bult for precise definitions and references to the original literature).
We use the common convention $\Gamma_{e}(z_{1},z_{2};p,q)=\Gamma_{e}(z_{1};p,q)\Gamma_{e}(z_{2};p,q)$.
In the expressions (\secsym0.0pt), (\secsym1.0pt) the external vector
parameters $s=(s_{1},\ldots,s_{N_{f}})$ and $t=(t_{1},\ldots,t_{N_{f}})$ denote (renormalized)
fugacities of the global flavor group, whereas $p,q$ are fugacities related to the U(1)
$R$-symmetry of the theory (further details are available in the review \SpiridonovZA).
The fugacities obey the balancing conditions
\writedef(\secsym2.0pt)\leftbracket(\secsym2.0pt)
$$S:=\prod_{a=1}^{N_{f}}s_{a}=(pq)^{N_{f}r_{Q}}~{},~{}~{}T:=\prod_{a=1}^{N_{f}}t%
_{a}=(pq)^{-N_{f}r_{\tilde{Q}}}~{}$$
(\secsym2.0pt)\eqlabeL(\secsym2.0pt)\secsym2.0𝑝𝑡\eqlabeL\secsym2.0𝑝𝑡( 2.0 italic_p italic_t ) ( 2.0 italic_p italic_t )
where
\writedef(\secsym3.0pt)\leftbracket(\secsym3.0pt)
$$r_{Q}={\tilde{N}_{c}\over 2N_{f}}+x~{},~{}~{}r_{\tilde{Q}}={{\tilde{N}_{c}}%
\over 2N_{f}}-x$$
(\secsym3.0pt)\eqlabeL(\secsym3.0pt)\secsym3.0𝑝𝑡\eqlabeL\secsym3.0𝑝𝑡( 3.0 italic_p italic_t ) ( 3.0 italic_p italic_t )
are the $R$-charges of the fundamental and antifundamental multiplets $Q$, $\tilde{Q}$. $x$ captures
the effects of a baryon $U(1)_{B}$ fugacity.
Seiberg duality implies the following mathematical identity
\writedef(\secsym4.0pt)\leftbracket(\secsym4.0pt)
$$I_{E}^{(SU)}(N_{c},N_{f};s;t)=I_{M}^{(SU)}(\tilde{N}_{c},N_{f};s;t)~{}.$$
(\secsym4.0pt)\eqlabeL(\secsym4.0pt)\secsym4.0𝑝𝑡\eqlabeL\secsym4.0𝑝𝑡( 4.0 italic_p italic_t ) ( 4.0 italic_p italic_t )
It was shown by Dolan and Osborn \DolanQI that this identity coincides with the
$A_{n}\leftrightarrow A_{m}$ root systems symmetry transformation established by Rains in
\Rainstrans.
By gauging the baryon symmetry $U(1)_{B}$ it is not difficult to derive the $U(N)$ version of
the identity (\secsym4.0pt) \writedef(\secsym5.0pt)\leftbracket(\secsym5.0pt)
$$I_{E}^{(U)}(N_{c},N_{f};s;t)=I_{M}^{(U)}(\tilde{N}_{c},N_{f};s,t)$$
(\secsym5.0pt)\eqlabeL(\secsym5.0pt)\secsym5.0𝑝𝑡\eqlabeL\secsym5.0𝑝𝑡( 5.0 italic_p italic_t ) ( 5.0 italic_p italic_t )
where
\writedef(\secsym6.0pt)\leftbracket(\secsym6.0pt)
$$\eqalign{I_{E}^{(U)}(N_{c},N_{f};s;t)&={(p;p)_{\infty}^{N_{c}-1}(q;q)^{N_{c}-1%
}_{\infty}\over N_{c}!}\int_{{\rm I\kern-4.4ptT}^{N_{c}}}\prod_{j=1}^{N_{c}}{%
dz_{j}\over 2\pi{\bf i}z_{j}}{\displaystyle\prod_{a=1}^{N_{f}}\prod_{j=1}^{N_{%
c}}\Gamma_{e}(s_{a}z_{j},t_{a}^{-1}z_{j}^{-1};p,q)\over\displaystyle\prod_{1%
\leq i<j\leq N_{c}}\Gamma_{e}(z_{i}z_{j}^{-1},z_{i}^{-1}z_{j};p,q)}\cr&=\int_{%
S^{1}}{dx\over 2\pi{\bf i}x}I_{E}^{(SU)}(N_{c},N_{f};x^{-1}s;x^{-1}t)~{},}$$
(\secsym6.0pt)\eqlabeL(\secsym6.0pt)\secsym6.0𝑝𝑡\eqlabeL\secsym6.0𝑝𝑡( 6.0 italic_p italic_t ) ( 6.0 italic_p italic_t )
and
\writedef(\secsym7.0pt)\leftbracket(\secsym7.0pt)
$$\eqalign{&I_{M}^{(U)}(\tilde{N}_{c},N_{f};s;t)={(p;p)_{\infty}^{\tilde{N}_{c}-1}(q;q)_{\infty}^{\tilde{N}_{c}-1}\over\tilde{N}_{c}!}\prod_{a,b=1}^{N_{f}}\Gamma_{e}(s_{a}t_{b}^{-1};p,q)\cr&\int_{\mathbb{T}^{\tilde{N}_{c}}}\prod_{j=1}^{\tilde{N}_{c}}{dz_{j}\over 2\pi{\bf i}z_{j}}\,{\displaystyle\prod_{a=1}^{N_{f}}\prod_{j=1}^{\tilde{N}_{c}}\Gamma_{e}(S^{{1\over\tilde{N}_{c}}}s_{a}^{-1}z_{j},T^{-{1\over\tilde{N}_{c}}}t_{a}z_{j}^{-1};p,q)\over\displaystyle\prod_{1\leq i<j\leq\tilde{N}_{c}}\Gamma_{e}(z_{i}z_{j}^{-1},z_{i}^{-1}z_{j};p,q)}=\int_{S^{1}}{dx\over 2\pi{\bf i}x}\,I_{M}^{(SU)}(\tilde{N}_{c},N_{f};x^{-1}s;x^{-1}t)~{}.}$$
(7)
In what follows we consider a degeneration scheme based on the $U(N)$ identity (5). We will
return to the reduction of the $SU(N)$ identity (4) in section 4.
2.2. First degeneration: the $S^{1}$ reduction
Following standard procedure we set
$$p=e^{2\pi{\bf i}v\omega_{1}}~{},~{}~{}q=e^{2\pi{\bf i}v\omega_{2}}~{},~{}~{}s_{a}=e^{2\pi{\bf i}v\mu_{a}}~{},~{}~{}t_{a}=e^{-2\pi{\bf i}v\nu_{a}}~{},~{}~{}z_{j}=e^{2\pi{\bf i}vu_{j}}~{},$$
(8)
where $j=1,\ldots,N_{c}$, $a=1,\ldots,N_{f}$, and take the degeneration limit $v\to 0$.
In this limit the elliptic $\Gamma_{e}$-functions reduce to hyperbolic $\Gamma_{h}$-functions
and by exchanging limit and integral we obtain the degeneration formulae
$$\lim_{v\to 0}I_{E}^{(U)}(N_{c},N_{f};s,t)=\sqrt{-v^{2}\omega_{1}\omega_{2}}\,e^{{\pi{\bf i}\omega(N_{c}^{2}+1)\over 6v\omega_{1}\omega_{2}}}\,J_{N_{c},(N_{f},N_{f}),0}(\mu;\nu;0)~{},$$
(9)
$$\eqalign{\lim_{v\to 0}I_{M}^{(U)}(\tilde{N}_{c},N_{f};s,t)=&\sqrt{-v^{2}\omega_{1}\omega_{2}}\,e^{{\pi{\bf i}\omega(N_{c}^{2}+1)\over 6v\omega_{1}\omega_{2}}}\prod_{a,b=1}^{N_{f}}\Gamma_{h}(\mu_{a}+\nu_{b};\omega_{1},\omega_{2})\cr&J_{\tilde{N}_{c},(N_{f},N_{f}),0}(\omega-\nu;\omega-\mu;0)~{},}$$
(10)
where $J_{n,(s_{1},s_{2}),t}$ is the function
$$\eqalign{&J_{n,(s_{1},s_{2}),t}(\mu;\nu;2\lambda)={1\over n!}\int\prod_{j=1}^{n}\left({du_{j}\over\sqrt{-\omega_{1}\omega_{2}}}\,e^{{2\pi{\bf i}\lambda u_{j}\over\omega_{1}\omega_{2}}}e^{{\pi{\bf i}tu_{j}^{2}\over 2\omega_{1}\omega_{2}}}\right)\cr&{\displaystyle\prod_{j=1}^{n}\prod_{a=1}^{s_{1}}\Gamma_{h}(\mu_{a}-u_{j};\omega_{1},\omega_{2})\prod_{b=1}^{s_{2}}\Gamma_{h}(\nu_{b}+u_{j};\omega_{1},\omega_{2})\over\prod_{1\leq i<j\leq n}\Gamma_{h}(u_{i}-u_{j};\omega_{1},\omega_{2})\Gamma_{h}(u_{j}-u_{i};\omega_{1},\omega_{2})}~{}.}$$
(11)
The contour of the integral, which goes from ${\rm Re}\,u=-\infty$ to ${\rm Re}\,u=+\infty$,
is chosen appropriately to avoid the poles of the $\Gamma_{h}$-functions
(see \bult for further details).
The function $J_{N_{c},(N_{f},N_{f}),0}(\mu;\nu;0)$ that appears in the first degeneration formula
(9) expresses the partition function of the $d=3$ ${\cal N}=2$ SQCD theory with gauge group
$U(N_{c})$. When the parameters $\omega_{1},\omega_{2}$ are chosen to have the form
$$\omega_{1}={\bf i}b~{},~{}~{}\omega_{2}={\bf i}b^{-1}~{},~{}~{}b\in\mathbb{R}_{+}$$
(12)
this is a partition function on the squashed $S^{3}$ with squashing parameter $b$
\refs\HamaEA,\ImamuraWG. The parameters $\mu_{a}$, $\nu_{a}$ are related to the real masses
$m_{a}$, $\tilde{m}_{a}$ and $m_{A}$ (understood as background values of scalars for
external vector multiplets of $SU(N_{f})_{L}$, $SU(N_{f})_{R}$, and $U(1)_{A}$ respectively) by the
following relations
$$\mu_{a}=\tilde{m}_{a}+m_{A}+\omega R_{Q}~{},~{}~{}\nu_{a}=-m_{a}+m_{A}+\omega R_{Q}~{},~{}~{}\sum_{a=1}^{N_{f}}m_{a}=\sum_{a=1}^{N_{f}}\tilde{m}_{a}=0~{}.$$
(13)
$R_{Q}$ is the $U(1)_{R}$ charge of the $d=3$ theory and
$$\omega:={\omega_{1}+\omega_{2}\over 2}~{}.$$
With these conventions the balancing conditions (2) reduce to (note that the relations (13) set $x=0$)
$$\sum_{a=1}^{N_{f}}\mu_{a}=\sum_{a=1}^{N_{f}}\nu_{a}=N_{f}(m_{A}+\omega R_{Q})=\tilde{N}_{c}\omega~{}.$$
(14)
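The algebra behind the balancing conditions above is elementary but worth checking: summing the mass relations (13) over $a$, the tracelessness conditions eliminate $m_{a}$ and $\tilde{m}_{a}$. A minimal sympy sketch (the symbol names and the explicit choice $N_{f}=4$ are ours, for illustration):

```python
import sympy as sp

Nf = 4  # explicit small example; any N_f works the same way
mA, w, RQ = sp.symbols('m_A omega R_Q')
m = sp.symbols('m1:%d' % (Nf + 1))
mt = sp.symbols('mt1:%d' % (Nf + 1))

# Relations (13): mu_a = mt_a + m_A + omega*R_Q, nu_a = -m_a + m_A + omega*R_Q
mu = [mt[a] + mA + w * RQ for a in range(Nf)]
nu = [-m[a] + mA + w * RQ for a in range(Nf)]

# Tracelessness sum m_a = sum mt_a = 0, imposed by eliminating the last entry
traceless = {m[-1]: -sum(m[:-1]), mt[-1]: -sum(mt[:-1])}

s_mu = sp.expand(sum(mu).subs(traceless))
s_nu = sp.expand(sum(nu).subs(traceless))
print(sp.expand(s_mu - Nf * (mA + w * RQ)))  # 0
print(sp.expand(s_nu - Nf * (mA + w * RQ)))  # 0
```

Both sums collapse to $N_{f}(m_{A}+\omega R_{Q})$, which is the content of the first two equalities in (14).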
Similarly, the second degeneration formula (10) expresses the (squashed) $S^{3}$ partition
function of a $d=3$ $U(N_{c})$ SYM theory. If this limit correctly captures the reduction to Aharony
duality, then the partition function on the rhs of eq. (10) ought to be the partition function of the
magnetic dual of the $d=3$ ${\cal N}=2$ $U(N_{c})$ SQCD theory. This is indeed the case. Combining the
$d=4$ duality transformation property (5) with the degeneration formulae (9), (10) we obtain
the identity
$$J_{N_{c},(N_{f},N_{f}),0}(\mu;\nu;0)=\prod_{a,b=1}^{N_{f}}\Gamma_{h}(\mu_{a}+\nu_{b};\omega_{1},\omega_{2})\ J_{\tilde{N}_{c},(N_{f},N_{f}),0}(\omega-\nu;\omega-\mu;0)~{},$$
(15)
which is a special case of eq. (5.5.21) in Theorem 5.5.11 of Ref. \bult that expresses Aharony
duality. To the best of our knowledge this particular derivation of the identity (15) has not appeared
in the literature before.
In the next subsection we will see that the balancing condition (14), which was inherited from four
dimensions, trivializes the contribution of the gauge-singlet chiral superfields $V_{\pm}$ that are part of
the magnetic description of the $d=3$ ${\cal N}=2$ SQCD theory and makes them invisible in the
degeneration formula (10). Hence, without relaxing the conditions (14), it is impossible to read off
the complete matter content of the magnetic theory. The general form of the identities implied by
Aharony duality can be obtained by further degeneration limits that remove the conditions (14).
2.3. Second degeneration: removal of the balancing conditions and FI terms
The second degeneration step removes two pairs of quark supermultiplets by sending two pairs
of real masses with opposite signs to infinity. In order to obtain a final theory with $N_{f}$
quark supermultiplets we start from the identity (15), renaming
$$N_{f}\to N_{f}+2~{},~{}~{}\tilde{N}_{c}\to\tilde{N}_{c}+2~{}.$$
(16)
We keep the definition $\tilde{N}_{c}=N_{f}-N_{c}$ unchanged. In the resulting expression we set
$$\mu_{N_{f}+1}=\xi_{1}+\alpha S~{},~{}~{}\nu_{N_{f}+1}=\zeta_{1}-\alpha S~{},$$
(17)
$$\mu_{N_{f}+2}=\xi_{2}-\alpha S~{},~{}~{}\nu_{N_{f}+2}=\zeta_{2}+\alpha S$$
(18)
and eventually take the limit $S\to+\infty$. The parameter $\alpha$ is a pure phase, chosen so
that the ensuing standard reductions can be performed by exchanging limits and integrals
\bult.
With this ansatz the balancing conditions (14) become
$$\sum_{a=1}^{N_{f}}\mu_{a}+\xi_{1}+\xi_{2}=\sum_{a=1}^{N_{f}}\nu_{a}+\zeta_{1}+\zeta_{2}=N_{f}(m_{A}+\omega R_{Q})=\tilde{N}_{c}\omega$$
(19)
freeing the parameters $\mu_{a},\nu_{a}$ $(a=1,\ldots,N_{f})$ from any constraints.
It will be convenient to define an additional parameter
$$\lambda:=-\xi_{1}-\zeta_{1}+\tilde{N}_{c}\omega-{1\over 2}\sum_{a=1}^{N_{f}}(\mu_{a}+\nu_{a})$$
(20)
in terms of which we obtain the expressions
$$\eqalign{\xi_{1}+\zeta_{1}=-\lambda+\tilde{N}_{c}\omega-{1\over 2}\sum_{a=1}^{N_{f}}(\mu_{a}+\nu_{a})~{},~{}~{}\xi_{2}+\zeta_{2}=\lambda+\tilde{N}_{c}\omega-{1\over 2}\sum_{a=1}^{N_{f}}(\mu_{a}+\nu_{a})~{}.}$$
(21)
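The passage from the balancing conditions and the definition of $\lambda$ to the two expressions just displayed is a two-line computation; as a sanity check, here is a sympy sketch (symbol names are ours: `Ntw` stands for $\tilde{N}_{c}\omega$, `Smu`, `Snu` for $\sum_{a}\mu_{a}$, $\sum_{a}\nu_{a}$):

```python
import sympy as sp

xi1, zeta1, xi2, zeta2, lam, Ntw, Smu, Snu = sp.symbols(
    'xi1 zeta1 xi2 zeta2 lambda Ntw Smu Snu')

# Balancing conditions (19) together with the definition (20) of lambda
eqs = [
    sp.Eq(Smu + xi1 + xi2, Ntw),
    sp.Eq(Snu + zeta1 + zeta2, Ntw),
    sp.Eq(lam, -xi1 - zeta1 + Ntw - sp.Rational(1, 2) * (Smu + Snu)),
]
sol = sp.solve(eqs, [xi1, xi2, zeta2], dict=True)[0]

# The two relations (21)
rhs1 = -lam + Ntw - sp.Rational(1, 2) * (Smu + Snu)
rhs2 = lam + Ntw - sp.Rational(1, 2) * (Smu + Snu)
print(sp.simplify((xi1 + zeta1).subs(sol) - rhs1))  # 0
print(sp.simplify((xi2 + zeta2).subs(sol) - rhs2))  # 0
```

Both differences simplify to zero, so (21) is an exact consequence of (19) and (20).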
Applying the standard reduction identity
$$\eqalign{\lim_{S\to\infty}&J_{n,(s_{1}+2,s_{2}+2),t}(\mu,\xi_{1}+\alpha S,\xi_{2}-\alpha S;\nu,\zeta_{1}-\alpha S,\zeta_{2}+\alpha S;2\lambda+\xi_{1}+\zeta_{1}-\xi_{2}-\zeta_{2})\cr&e^{{\pi{\bf i}n\over 2\omega_{1}\omega_{2}}((\zeta_{1}-\alpha S-\omega)^{2}+(\xi_{2}-\alpha S-\omega)^{2}-(\xi_{1}+\alpha S-\omega)^{2}-(\zeta_{2}+\alpha S-\omega)^{2})}\cr=&J_{n,(s_{1},s_{2}),t}(\mu;\nu;2\lambda)}$$
(22)
to the case at hand
$$t=0~{},~{}~{}n=N_{c}~{},~{}~{}s_{1}=s_{2}=N_{f}$$
(23)
we deduce the limit
$$\eqalign{\lim_{S\to\infty}&J_{N_{c},(N_{f}+2,N_{f}+2),0}(\mu,\xi_{1}+\alpha S,\xi_{2}-\alpha S;\nu,\zeta_{1}-\alpha S,\zeta_{2}+\alpha S;0)\cr&e^{{\pi{\bf i}N_{c}\over 2\omega_{1}\omega_{2}}((\zeta_{1}-\alpha S-\omega)^{2}+(\xi_{2}-\alpha S-\omega)^{2}-(\xi_{1}+\alpha S-\omega)^{2}-(\zeta_{2}+\alpha S-\omega)^{2})}\cr=&J_{N_{c},(N_{f},N_{f}),0}(\mu;\nu;2\lambda)~{}.}$$
(24)
The rhs of this equation expresses the (squashed) $S^{3}$ partition function of the electric
description of the $d=3$ ${\cal N}=2$ $U(N_{c})$ SQCD theory with FI term $\lambda$ and
no restrictions on the real mass parameters $\mu_{a},\nu_{a}$. This explains how the degeneration
limit $S\to\infty$ acts on the lhs of the duality relation (15).
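Note that the vanishing of the third argument of $J$ on the lhs of the limit above is forced by the relations for $\xi_{i}+\zeta_{i}$: the combination $2\lambda+\xi_{1}+\zeta_{1}-\xi_{2}-\zeta_{2}$ appearing in the generic reduction identity collapses to zero. A one-line sympy check (symbol names ours):

```python
import sympy as sp

# Ntw = N_c-tilde * omega, Sig = sum_a (mu_a + nu_a)
lam, Ntw, Sig = sp.symbols('lambda Ntw Sig')

# The combinations fixed by (21)
xi1_plus_zeta1 = -lam + Ntw - Sig / 2
xi2_plus_zeta2 = lam + Ntw - Sig / 2

# Third argument of J on the lhs of the generic reduction identity (22)
print(sp.simplify(2 * lam + xi1_plus_zeta1 - xi2_plus_zeta2))  # 0
```

The $\lambda$-dependence cancels, leaving $J(\ldots;0)$ on the lhs and $2\lambda$ on the rhs, exactly as in (24).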
The effect of the limit on the magnetic side of the duality follows by inserting the transformation
(15) into the lhs of the reduction formula (24)
$$\eqalign{&\lim_{S\to\infty}\Bigg{(}\prod_{a,b=1}^{N_{f}+2}\Gamma_{h}(\mu_{a}+\nu_{b};\omega_{1},\omega_{2})\cr&J_{\tilde{N}_{c}+2,(N_{f}+2,N_{f}+2),0}(\omega-\nu,\omega-\zeta_{1}+\alpha S,\omega-\zeta_{2}-\alpha S;\omega-\mu,\omega-\xi_{1}-\alpha S,\omega-\xi_{2}+\alpha S;0)\cr&e^{{\pi{\bf i}N_{c}\over 2\omega_{1}\omega_{2}}((\zeta_{1}-\alpha S-\omega)^{2}+(\xi_{2}-\alpha S-\omega)^{2}-(\xi_{1}+\alpha S-\omega)^{2}-(\zeta_{2}+\alpha S-\omega)^{2})}\Bigg{)}\cr&=Z_{M}(\tilde{N}_{c},N_{f};\mu;\nu;\lambda)e^{{\pi{\bf i}\lambda\over\omega_{1}\omega_{2}}\sum_{a=1}^{N_{f}}(\mu_{a}-\nu_{a})}~{}.}$$
(25)
We have denoted the result of this limit by using a function $Z_{M}$. It is clear that $Z_{M}$ cannot
be obtained with the application of the standard reduction formula (22). That formula would
reduce to the function $J_{\tilde{N}_{c}+2,(N_{f},N_{f}),0}$ keeping the number of integration variables
invariant. This is in direct contradiction with the basic duality formula $\tilde{N}_{c}=N_{f}-N_{c}$, which
is already apparent from the duality identity (15). We conclude that the degeneration limit (25) is
mathematically more involved than the standard one in (22) and cannot be obtained by
exchanging limits and integrals. We will call such degeneration limits `non-standard' to distinguish
them from the standard ones that play a prominent role in the degeneration schemes of Ref. \bult.
Unfortunately, we are not aware of an efficient computational method for such limits, but
we will have more to say about them in the next section.
It is mathematically interesting that the alternate degeneration scheme of Ref. \bult allows us to
bypass this complicated reduction formula and derive the function $Z_{M}$ by using a significantly
different scheme based only on standard reductions.\foot{There is no a priori reason to anticipate
that this alternate route will be a generic possibility. We expect non-standard reductions, like the
one above, to be one of the main steps in general reductions of $d=4$ dualities to $d=3$ dualities.
The example of $SU(N)$ dualities in section 4 appears to be an illustration of this statement.}
The result, which follows from eq. (5.5.21) in Theorem 5.5.11 of \bult, determines $Z_{M}$ as the dual
of the rhs of eq. (24)
$$\eqalign{Z_{M}(\tilde{N}_{c},N_{f};\mu;\nu;\lambda)=&\Gamma_{h}\left((\tilde{N}_{c}+1)\omega-{1\over 2}\sum_{a=1}^{N_{f}}(\mu_{a}+\nu_{a})\pm\lambda\right)\prod_{a,b=1}^{N_{f}}\Gamma_{h}(\mu_{a}+\nu_{b})\cr&J_{\tilde{N}_{c},(N_{f},N_{f}),0}(\omega-\nu;\omega-\mu;-2\lambda)~{}.}$$
(26)
The first $\Gamma_{h}$ factor on the rhs of this equation captures the contribution of the
gauge-singlet multiplets $V_{\pm}$. This factor is invisible in the special case of the balancing
condition (14) since
$$\Gamma_{h}^{2}\left((\tilde{N}_{c}+1)\omega-{1\over 2}\sum_{a=1}^{N_{f}}(\mu_{a}+\nu_{a})\right)=\Gamma_{h}^{2}(\omega)=1~{}.$$
(27)
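The identity $\Gamma_{h}^{2}(\omega)=1$ is the reflection property $\Gamma_{h}(z)\Gamma_{h}(2\omega-z)=1$ evaluated at $z=\omega$. This can be checked numerically from an integral representation of $\log\Gamma_{h}$ whose integrand is manifestly odd under $z\to 2\omega-z$; the sketch below assumes real $\omega_{1},\omega_{2}>0$ and $0<z<2\omega$, and its overall normalization is illustrative (conventions for $\Gamma_{h}$ vary between references):

```python
import numpy as np
from scipy.integrate import quad

W1, W2 = 1.0, 1.3            # sample real omega_1, omega_2 > 0 (our choice)
OMEGA = (W1 + W2) / 2

def log_gamma_h(z, w1=W1, w2=W2):
    """log Gamma_h(z; w1, w2) from an integral representation whose
    integrand is odd under z -> 2*omega - z (i.e. c -> -c below);
    assumes real w1, w2 > 0 and 0 < z < w1 + w2."""
    c = 2.0 * z - w1 - w2
    def integrand(t):
        if t < 1e-6:   # regular small-t limit of the subtracted integrand
            return c * (c * c - w1 * w1 - w2 * w2) / (12.0 * w1 * w2)
        if t > 200.0:  # the sinh ratio is negligible here; avoid overflow
            return -c / (2.0 * w1 * w2 * t * t)
        return (np.sinh(c * t) / (2.0 * np.sinh(w1 * t) * np.sinh(w2 * t))
                - c / (2.0 * w1 * w2 * t)) / t
    val, _ = quad(integrand, 0.0, np.inf, limit=400)
    return val

# Reflection property: Gamma_h(z) * Gamma_h(2*omega - z) = 1
z = 0.4
print(log_gamma_h(z) + log_gamma_h(2 * OMEGA - z))  # ~ 0
# In particular Gamma_h(omega)^2 = 1, which is the identity above
print(log_gamma_h(OMEGA))                           # ~ 0
```

For purely imaginary $\omega_{1},\omega_{2}$ as in (12) the same reflection property holds by analytic continuation.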
The second factor, the product of $\Gamma_{h}$-functions, captures the contribution of the
$N_{f}^{2}$ gauge-singlet meson superfields of the dual description. The contribution of the dual
gauge fields and quarks is contained in the last factor $J_{\tilde{N}_{c},(N_{f},N_{f}),0}$.
2.4. Third degeneration: Chern-Simons interactions
There is a standard third reduction which corresponds to integrating out real masses
with the same sign. This operation introduces the Chern-Simons interaction. The resulting
Chern-Simons-matter theories exhibit the Giveon-Kutasov duality \GiveonZN. Since this is a well
known standard step we will not discuss it explicitly here. For completeness and later convenience
we list the (squashed) $S^{3}$ partition functions for the electric and magnetic descriptions of the
$d=3$ ${\cal N}=2$ $U(N_{c})$ Chern-Simons theory at level $k$ coupled to $N_{f}$ pairs of
(anti)fundamental supermultiplets, and the duality transformation property that relates them. Without
loss of generality we assume that the level $k$ is positive.
The electric and magnetic partition functions have respectively the following forms
$$Z_{E}(N_{c},N_{f},k;\mu;\nu;\lambda)=J_{N_{c},(N_{f},N_{f}),2k}(\mu;\nu;2\lambda)~{},$$
(28)
$$Z_{M}(\tilde{N}_{c},N_{f},k;\mu;\nu;\lambda)=\prod_{a,b=1}^{N_{f}}\Gamma_{h}(\mu_{a}+\nu_{b})\,J_{\tilde{N}_{c},(N_{f},N_{f}),-2k}(\omega-\nu;\omega-\mu;-2\lambda)~{}.$$
(29)
Again $\lambda$ denotes an FI term and $\mu,\nu$ are vectors of real mass parameters.
The Giveon-Kutasov duality requires the transformation property
$$Z_{E}(N_{c},N_{f},k;\mu;\nu;\lambda)=e^{{\bf i}\vartheta(N_{c},N_{f},k;\mu;\nu;2\lambda)}Z_{M}(\tilde{N}_{c},N_{f},k;\mu;\nu;\lambda)$$
(30)
where
$$\eqalign{&e^{{\bf i}\vartheta(N_{c},N_{f},k;\mu;\nu;\lambda)}:=e^{{\pi{\bf i}(\omega_{1}^{2}+\omega_{2}^{2})(k^{2}+2)\over 24\omega_{1}\omega_{2}}}e^{-{\pi{\bf i}\over 4\omega_{1}\omega_{2}}(\lambda^{2}+2k\omega^{2}(\tilde{N}_{c}-N_{c}))}\cr&e^{{\pi{\bf i}\over 4\omega_{1}\omega_{2}}(2k\sum_{a}(\omega-\mu_{a})^{2}+2k\sum_{a}(\omega-\nu_{a})^{2}-(2(N_{c}-N_{f})\omega+\sum_{a}\mu_{a}+\sum_{a}\nu_{a})^{2})}\cr&e^{-{\pi{\bf i}\over 2\omega_{1}\omega_{2}}(\lambda(\sum_{a}\nu_{a}-\sum_{a}\mu_{a})+2k\omega(2N_{f}\omega-\sum_{a}\mu_{a}-\sum_{a}\nu_{a}))}~{}.}$$
(31)
Eq. (30) is the last identity of Theorem 5.5.11 in \bult, as was already noticed in \WillettGP.
3. A lesson from Giveon-Kutasov duality tests
In the original work on Seiberg duality in ${\cal N}=2$ Chern-Simons-matter theories \GiveonZN
several checks of the duality were performed using a D-brane setup of D3, D5, NS5 and $(1,k)$
fivebrane bound states. One of these checks aimed to verify that the duality is consistent with the limit
where equal masses with opposite signs for a quark pair $(Q^{1},\tilde{Q}_{1})$ are sent to infinity,
removing the respective supermultiplets. Since this limit is a clean example of the non-standard
reduction that we discussed in the previous section, it will be instructive to consider it here in more
detail from the partition function point of view.
An interesting feature of this reduction, which was also noted in \GiveonZN, is that it involves
two separate supersymmetric vacua. In other words, the reduction can be performed in two
different ways. In vacuum 1, $N_{c}\to N_{c}$, $N_{f}\to N_{f}-1$ on the electric side; on the magnetic
side $\tilde{N}_{c}\to\tilde{N}_{c}-1$, $N_{f}\to N_{f}-1$. In vacuum 2, $N_{c}\to N_{c}-1$, $N_{f}\to N_{f}-1$
on the electric side, and $\tilde{N}_{c}\to\tilde{N}_{c}$, $N_{f}\to N_{f}-1$ on the magnetic side.
We notice that there is a possibility of two different types of reductions:
a standard one that keeps the rank of the gauge group (equivalently, the number of
integration variables in the $S^{3}$ partition function) invariant, and a non-standard one that
changes the rank of the gauge group.
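The rank bookkeeping in the two vacua follows from the duality rule $\tilde{N}_{c}=N_{f}-N_{c}$ alone. A trivial sketch (the sample values are ours):

```python
def dual_rank(Nc, Nf):
    # Seiberg/Aharony-type rule for the U(N) theories: Nc~ = Nf - Nc
    return Nf - Nc

Nc, Nf = 3, 7
Ntc = dual_rank(Nc, Nf)              # 4
# Vacuum 1: (Nc, Nf) -> (Nc, Nf - 1); the dual rank drops by one.
assert dual_rank(Nc, Nf - 1) == Ntc - 1
# Vacuum 2: (Nc, Nf) -> (Nc - 1, Nf - 1); the dual rank is unchanged.
assert dual_rank(Nc - 1, Nf - 1) == Ntc
```

Whichever side of the duality reduces in a standard way, the other side is forced into a rank-changing, non-standard reduction.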
In the D-brane interpretation of the duality \GiveonZN, the following branes in $\mathbb{R}^{1,9}$
participate
$$\vbox{\offinterlineskip\halign{#\hfil\quad&&\hfil#\quad\cr
&0&1&2&3&4&5&6&7&8&9\cr
\noalign{\vskip 2pt}
1 NS5~{}:&$\bullet$&$\bullet$&$\bullet$&$\bullet$&$\bullet$&$\bullet$&&&&\cr
1 $(1,k)$~{}:&$\bullet$&$\bullet$&$\bullet$&$\circ$&&&&$\circ$&$\bullet$&$\bullet$\cr
$N_{c}$ D3~{}:&$\bullet$&$\bullet$&$\bullet$&&&&$\bullet$&&&\cr
$N_{f}$ D5~{}:&$\bullet$&$\bullet$&$\bullet$&&&&&$\bullet$&$\bullet$&$\bullet$\cr}}$$
(32)
The circles $\circ$ indicate that the brane is oriented along a line in the $(37)$ plane.
Giving equal and opposite real masses to a quark pair corresponds to moving the corresponding
D5 brane away from the D3 branes in the 3-direction. In vacuum 1 the D3 branes continue to
stretch between the NS5 brane and the $(1,k)$ fivebrane. In vacuum 2 the D3 branes break
on the displaced D5 brane.
On the level of the $S^{3}$ partition functions (28), (29) we set
$$\mu_{1}=\xi+\alpha S~{},~{}~{}\nu_{1}=\zeta-\alpha S$$
(33)
and eventually take the limit $S\to+\infty$. This is a slightly simpler version of the reductions
(17), (18) in subsection 2.3. In the absence of balancing conditions it is now possible to
consider the limit on a single quark-antiquark supermultiplet pair.
3.1. Vacuum 1
We perform the standard reduction on the electric side
$$\eqalign{&\lim_{S\to\infty}J_{N_{c},(N_{f},N_{f}),2k}(\mu,\xi+\alpha S;\nu,\zeta-\alpha S;2\lambda+\xi+\zeta-2\omega)e^{{\pi{\bf i}N_{c}\over 2\omega_{1}\omega_{2}}((\zeta-\alpha S-\omega)^{2}-(\xi+\alpha S-\omega)^{2})}\cr&=J_{N_{c},(N_{f}-1,N_{f}-1),2k}(\mu;\nu;2\lambda)~{}.}$$
(34)
This degeneration formula is the second formula in Proposition 5.3.24 of Ref. \bult for
$\tau=\omega$. It holds with certain assumptions on the external parameters $(\mu,\nu)$,
$(\xi,\zeta)$, and $\varphi={\rm arg}(\alpha)$, which are listed in \bult. The assumption we
want to single out is the one on $\varphi$
$$\varphi\in\left(\varphi_{+}-\pi~{},{\varphi_{-}+\varphi_{+}-\pi\over 2}\right)\cap\left(\varphi_{\omega}-\pi,\varphi_{\omega}\right)~{},$$
(35)
where
$$\varphi_{\omega}=\arg(\omega)~{},~{}~{}\varphi_{+}=\max(\arg(\omega_{1}),\arg(\omega_{2}))~{},~{}~{}\varphi_{-}=\min(\arg(\omega_{1}),\arg(\omega_{2}))~{}.$$
(36)
In the case of physical interest (12)
$$\varphi_{-}=\varphi_{+}=\varphi_{\omega}={\pi\over 2}\quad{\rm and}\quad\varphi\in\left(-{\pi\over 2},0\right)~{}.$$
(37)
The constraint (35) restricts the direction along which we take the limit and is instrumental
when we exchange the limit and integral to derive the degeneration formula (34).
The corresponding action on the magnetic side follows from (34) with the use of the
transformation identity (30) on both sides of the equation
$$\eqalign{&\lim_{S\to\infty}\Bigg{[}e^{{\bf i}\vartheta(N_{c},N_{f},k;\mu,\xi+\alpha S;\nu,\zeta-\alpha S;2\lambda+\xi+\zeta-2\omega)}e^{{\pi{\bf i}N_{c}\over 2\omega_{1}\omega_{2}}((\zeta-\alpha S-\omega)^{2}-(\xi+\alpha S-\omega)^{2})}\cr&Z_{M}\left(\tilde{N}_{c},N_{f},k;\mu,\xi+\alpha S;\nu,\zeta-\alpha S;\lambda+{1\over 2}(\xi+\zeta)-\omega\right)\Bigg{]}\cr&=e^{{\bf i}\vartheta(N_{c},N_{f}-1,k;\mu;\nu;2\lambda)}~{}Z_{M}(\tilde{N}_{c}-1,N_{f}-1,k;\mu;\nu;\lambda)}$$
(38)
giving the expected reduction of the rank of the dual gauge group $\tilde{N}_{c}\to\tilde{N}_{c}-1$.
Formula (38) is a clean example of what we call a non-standard reduction. Notice that as soon as
we assume the validity of (35) in order to implement (34) on the electric side, we no longer have
the option of a standard reduction, where we exchange limit and integral, on the magnetic side.
Indeed, that option on the magnetic side (38) would require taking in addition
$$\varphi\in\left({\varphi_{-}+\varphi_{+}-\pi\over 2},\varphi_{-}\right)\cap\left(\varphi_{\omega}-\pi,\varphi_{\omega}\right)$$
(39)
which has zero intersection with (35). Hence, we are forced uniquely by duality to reduce on the
magnetic side along the lines of eq. (38).
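For the physical choice (12) the incompatibility of the two constraints on $\varphi$ can be made completely explicit. The sketch below computes the two admissible wedges for $\varphi$ and their intersection (the sample value of $b$ and the interval conventions are ours):

```python
import cmath
import numpy as np

b = 0.7  # any b > 0; sample value
w1, w2 = 1j * b, 1j / b  # the physical choice (12)
pw = cmath.phase((w1 + w2) / 2)
pp = max(cmath.phase(w1), cmath.phase(w2))
pm = min(cmath.phase(w1), cmath.phase(w2))
# all three angles equal pi/2, as in (37)
print(pm, pp, pw)

def intersect(x, y):
    """Intersection of two open intervals, or None if empty."""
    lo, hi = max(x[0], y[0]), min(x[1], y[1])
    return (lo, hi) if lo < hi else None

# Constraint (35), needed for the standard reduction on the electric side
I35 = intersect((pp - np.pi, (pm + pp - np.pi) / 2), (pw - np.pi, pw))
# Constraint (39), which a standard reduction on the magnetic side would need
I39 = intersect(((pm + pp - np.pi) / 2, pm), (pw - np.pi, pw))
print(I35, I39)             # (-pi/2, 0) and (0, pi/2)
print(intersect(I35, I39))  # None: the two regimes are incompatible
```

The two open wedges share only the boundary point $\varphi=0$, so no common direction for the limit exists.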
3.2. Vacuum 2
In this case we make a different choice. We adopt (39) and perform a standard reduction on
the magnetic side
$$\eqalign{\lim_{S\to\infty}\Bigg{[}&e^{{\pi{\bf i}\tilde{N}_{c}\over 2\omega_{1}\omega_{2}}((\xi+\alpha S)^{2}-(\zeta-\alpha S)^{2})}\prod_{a=2}^{N_{f}}e^{-{\pi{\bf i}\over 2\omega_{1}\omega_{2}}((\xi+\alpha S+\nu_{a})^{2}-(\zeta-\alpha S+\mu_{a})^{2})}\cr&Z_{M}\left(\tilde{N}_{c},N_{f},k;\mu,\xi+\alpha S;\nu,\zeta-\alpha S;\lambda+{1\over 2}(\xi+\zeta)\right)\Bigg{]}\cr&=\Gamma_{h}(\xi+\zeta)Z_{M}(\tilde{N}_{c},N_{f}-1,k;\mu,\nu;\lambda)~{}.}$$
(40)
Then, combining this formula with the duality relation we obtain a new non-standard
reduction on the electric side
$$\eqalign{&\lim_{S\to\infty}\Bigg{[}e^{{\pi{\bf i}\tilde{N}_{c}\over 2\omega_{1}\omega_{2}}((\xi+\alpha S)^{2}-(\zeta-\alpha S)^{2})}\prod_{a=2}^{N_{f}}e^{-{\pi{\bf i}\over 2\omega_{1}\omega_{2}}((\xi+\alpha S+\nu_{a})^{2}-(\zeta-\alpha S+\mu_{a})^{2})}\cr&e^{-{\bf i}\vartheta(N_{c},N_{f},k;\mu,\xi+\alpha S;\nu,\zeta-\alpha S;2\lambda+\xi+\zeta)}Z_{E}\left(N_{c},N_{f},k;\mu,\xi+\alpha S;\nu,\zeta-\alpha S;\lambda+{1\over 2}(\xi+\zeta)\right)\Bigg{]}\cr&=e^{-{\bf i}\vartheta(N_{c}-1,N_{f}-1,k;\mu;\nu;2\lambda)}\Gamma_{h}(\xi+\zeta)Z_{E}(N_{c}-1,N_{f}-1,k;\mu;\nu;\lambda)~{}.}$$
(41)
The factor $\Gamma_{h}(\xi+\zeta)$ also comes out in accordance with the D-brane picture.
As we mentioned above, in vacuum 2 the D3 branes break on the displaced D5 brane. Hence,
as the D5 moves away along $x^{3}$, half of the broken D3 stretches between the NS5 and the D5,
while the other half stretches between the D5 and the $(1,k)$ fivebrane. The latter half gives rise to
a meson supermultiplet degree of freedom that decouples from the rest of the theory. The extra factor
$\Gamma_{h}(\xi+\zeta)$ on the rhs of eqs. (40)-(41) accounts correctly for this additional
decoupled degree of freedom.
4. Towards 3d Seiberg duality with $SU(N)$ gauge group
By gauging the baryon symmetry $U(1)_{B}$ of the $d=4$ ${\cal N}=1$ SQCD theory and then reducing
its SCI we recovered the well known partition function identities required by the Aharony and
Giveon-Kutasov dualities for the $d=3$ ${\cal N}=2$ SQCD theories with gauge group $U(N_{c})$.
Similar dualities for the $d=3$ ${\cal N}=2$ SQCD theory with gauge group $SU(N_{c})$
(in the presence or absence of Chern-Simons interactions)
have long been suspected to exist, but a viable proposal has not been put forward so far.
The general philosophy of this work suggests the following approach to this problem.
Reducing the SCI (0) of the four dimensional SQCD theory without gauging the baryon symmetry
we obtain the (squashed) $S^{3}$ partition function of the $d=3$ ${\cal N}=2$ SQCD theory
with gauge group $SU(N_{c})$ (see also Theorem 4.6 of \Rainslimits)
$$\lim_{v\to 0}I_{E}^{(SU)}(N_{c},N_{f};s;t;p,q)=e^{{\pi{\bf i}\omega(N_{c}^{2}+1)\over 6v\omega_{1}\omega_{2}}}\tilde{J}_{N_{c},(N_{f},N_{f}),0}(\mu;\nu)$$
(42)
where we have defined
$$\eqalign{&\tilde{J}_{N_{c},(N_{f},N_{f}),0}(\mu;\nu)=\cr&{1\over N_{c}!}\int\prod_{j=1}^{N_{c}-1}{du_{j}\over\sqrt{-\omega_{1}\omega_{2}}}{\displaystyle\prod_{a=1}^{N_{f}}\prod_{j=1}^{N_{c}}\Gamma_{h}(\mu_{a}-u_{j};\omega_{1},\omega_{2})\Gamma_{h}(\nu_{a}+u_{j};\omega_{1},\omega_{2})\over\displaystyle\prod_{1\leq i<j\leq N_{c}}\Gamma_{h}(u_{i}-u_{j};\omega_{1},\omega_{2})\Gamma_{h}(u_{j}-u_{i};\omega_{1},\omega_{2})}\Bigg{|}_{\sum_{j=1}^{N_{c}}u_{j}=0}~{}.}$$
(43)
A similar reduction on the magnetic description of the four dimensional theory gives
$$\lim_{v\to 0}I_{M}^{(SU)}(\tilde{N}_{c},N_{f};s;t;p,q)=e^{{\pi{\bf i}\omega(N_{c}^{2}+1)\over 6v\omega_{1}\omega_{2}}}\prod_{a,b=1}^{N_{f}}\Gamma_{h}(\mu_{a}+\nu_{b};\omega_{1},\omega_{2})\tilde{J}_{\tilde{N}_{c},(N_{f},N_{f}),0}(\omega-\mu;\omega-\nu)~{}.$$
(44)
Consequently, the duality identity (4) implies
$$\tilde{J}_{N_{c},(N_{f},N_{f}),0}(\mu;\nu)=\prod_{a,b=1}^{N_{f}}\Gamma_{h}(\mu_{a}+\nu_{b};\omega_{1},\omega_{2})\tilde{J}_{\tilde{N}_{c},(N_{f},N_{f}),0}(\omega-\mu;\omega-\nu)~{}.$$
(45)
The balancing conditions (2) reduce to
$$ST^{-1}=(pq)^{\tilde{N}_{c}}~{}~{}\Rightarrow~{}~{}\sum_{a=1}^{N_{f}}(\mu_{a}+\nu_{a})=2\tilde{N}_{c}\omega~{}.$$
(46)
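The implication just displayed is the parametrization (8) applied at the level of exponents: with $S=\prod_{a}s_{a}$ and $T=\prod_{a}t_{a}$ one has $\log(ST^{-1})=2\pi{\bf i}v\sum_{a}(\mu_{a}+\nu_{a})$ and $\log(pq)=4\pi{\bf i}v\omega$. A sympy sketch (explicit $N_{f}=3$; symbol names ours):

```python
import sympy as sp

v, w1, w2, Nct = sp.symbols('v omega1 omega2 Nct')  # Nct = N_c tilde
Nf = 3
mu = sp.symbols('mu1:%d' % (Nf + 1))
nu = sp.symbols('nu1:%d' % (Nf + 1))

c = 2 * sp.pi * sp.I * v
log_S = c * sum(mu)     # log prod_a s_a,      s_a = e^{2 pi i v mu_a}
log_Tinv = c * sum(nu)  # log prod_a t_a^{-1}, t_a = e^{-2 pi i v nu_a}
log_pq = c * (w1 + w2)  # p q = e^{2 pi i v (omega1 + omega2)}
omega = (w1 + w2) / 2

# S T^{-1} = (pq)^{Nct} at the level of exponents, versus
# sum_a (mu_a + nu_a) = 2 Nct omega:
diff = (log_S + log_Tinv - Nct * log_pq) / c - (sum(mu) + sum(nu) - 2 * Nct * omega)
print(sp.simplify(diff))  # 0
```

The two conditions are identical, so the hyperbolic balancing constraint is the direct image of the elliptic one.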
This condition does not allow a straightforward field theory interpretation of the duality relation (45).
A second degeneration step based on the ansatz (17), (18) can be applied on both
sides to remove the balancing condition and lead to the $SU(N_{c})$ version of Aharony duality.
On the electric side this is a straightforward standard reduction. On the magnetic side this is a
non-standard reduction. Unfortunately, the absence of an efficient computational method for such
reductions hinders the completion of this exercise. The result would allow us to determine the
(squashed) $S^{3}$ partition function of the magnetic theory from which its full matter content can be
determined. A third standard reduction that sends equal real masses of the same sign to infinity
can then be used to determine the $SU(N_{c})$ version of the Giveon-Kutasov duality.
5. Other features of degeneration schemes
In this paper we discussed explicitly on the level of SCIs and $S^{3}$ partition functions
how the standard Seiberg duality for the four dimensional ${\cal N}=1$ SQCD theory reduces in three
dimensions to Aharony and Giveon-Kutasov dualities for ${\cal N}=2$ SQCD theories. On a
mathematical level we argued that this reduction entails a set of non-standard degeneration
identities which cannot be determined by exchanging limits and integrals. An efficient
method for the computation of such identities remains an open problem. On a physical level
we proposed that such degenerations can be used to determine the precise properties of the
still elusive Aharony and Giveon-Kutasov dualities for $d=3$ ${\cal N}=2$ SQCD theories with
$SU(N)$ gauge group. Analogous reduction schemes can be envisioned for generic
$d=4$ Seiberg dualities [SpiridonovZA].
In a general 3d/4d connection the degeneration formula
$$\lim_{v\to 0}I=Z$$
(47)
between a four dimensional SCI $I$ and a three dimensional partition function $Z$
can be useful in relating also other properties of the four and three dimensional
theories. An example that deserves further study has to do with spontaneous supersymmetry
breaking.
If $I$ is zero in a certain regime (independent of fugacities, but dependent on parameters like
$N_{c},N_{f}$, etc.), then according to (47) $Z$ will also be zero in that regime.
In [MoritaCS] we conjectured that zeros of $Z$ are related to spontaneous supersymmetry
breaking in the three dimensional theory. The converse may not hold: it is not
a priori obvious from eq. (47) whether $Z=0$ implies $I=0$.
In the specific SQCD example of this note the following spontaneous supersymmetry breaking
patterns occur. The four dimensional ${\cal N}=1$ SQCD theory exhibits spontaneous supersymmetry
breaking for $N_{f}<N_{c}$. One can readily check (using properties of the elliptic
$\Gamma_{e}$-functions) that the SCI vanishes when $N_{f}<N_{c}$ and is non-zero otherwise.
By reduction the same property carries over to the $S^{3}$ partition function of the ${\cal N}=2$ SQCD
theory without CS interactions (for $N_{f}<N_{c}$) and the $S^{3}$ partition function of the ${\cal N}=2$
SQCD theory with CS interactions (for $N_{f}+k<N_{c}$ and CS level $k$). Without CS interactions it is
indeed known that a dynamically generated superpotential lifts the space of supersymmetric vacua
for $N_{f}<N_{c}-1$. The case $N_{f}=N_{c}-1$ is a bit more tricky, as was already pointed out in [BeniniMF].
In that case there is a smooth moduli space of supersymmetric vacua. The vanishing of the
hyperbolic hypergeometric integral expression based on the standard UV description of the theory
does not reflect this fact, presumably because of accidental symmetries. This subtlety, however, does
not appear when we make further reductions to obtain Chern-Simons-matter theories. In that case
spontaneous supersymmetry breaking occurs precisely when the UV description-based $S^{3}$
partition function vanishes.
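The vanishing argument above relies on properties of the elliptic $\Gamma_{e}$-function; one basic such property is the reflection identity $\Gamma_{e}(z;p,q)\,\Gamma_{e}(pq/z;p,q)=1$. The following is a minimal numerical sketch, not taken from the paper: the truncated product definition, parameter values, and cutoff are illustrative assumptions.

```python
import cmath

def gamma_e(z, p, q, cutoff=60):
    # truncated product definition of the elliptic Gamma function:
    #   Gamma_e(z; p, q) = prod_{j,k >= 0} (1 - p^{j+1} q^{k+1} / z) / (1 - p^j q^k z)
    # valid for |p|, |q| < 1; terms with large j, k tend to 1, so a finite
    # cutoff gives a good approximation
    val = 1.0 + 0j
    for j in range(cutoff):
        for k in range(cutoff):
            val *= (1 - p**(j + 1) * q**(k + 1) / z) / (1 - p**j * q**k * z)
    return val

p, q, z = 0.3, 0.2, 0.5 + 0.1j   # illustrative fugacity values (assumptions)
# reflection identity: the (j,k)-th factors of the two products cancel pairwise
assert abs(gamma_e(z, p, q) * gamma_e(p * q / z, p, q) - 1) < 1e-10
```

The cancellation is exact term by term, since the $(j,k)$-th factor of $\Gamma_{e}(pq/z)$ is the reciprocal of the $(j,k)$-th factor of $\Gamma_{e}(z)$.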
We believe that such similarities in spontaneous supersymmetry breaking patterns between
four and three dimensional theories related by $S^{1}$ reductions are naturally explained
by degeneration formulae of the type (47) in the manner outlined above. Note that the precise
mechanism of spontaneous breaking of supersymmetry in general depends on the number of
spacetime dimensions. The details of this connection deserve further study.
Acknowledgements
I would like to thank Grigory Vartanov for useful correspondence and discussions. The work of VN
was partially supported by the European grants FP7-REGPOT-2008-1: CreteHEPCosmo-228644,
PERG07-GA-2010-268246, and the EU program “Thalis” ESF/NSRF 2007-2013.
Uniqueness
of limit flow for a class
of quasi-linear parabolic equations
Marco Squassina
and
Tatsuya Watanabe
Dipartimento di Informatica
Università degli Studi di Verona
Verona, Italy
[email protected]
Department of Mathematics
Faculty of Science, Kyoto Sangyo University
Motoyama, Kamigamo, Kita-ku, Kyoto-City, Japan
[email protected]
Abstract.
We investigate the issue of uniqueness of the limit flow for a relevant class of quasi-linear parabolic equations defined on the whole space.
More precisely, we shall investigate conditions which guarantee that the global solutions decay at infinity uniformly in time and their entire trajectory approaches a single steady state as time goes to infinity.
Finally, we obtain a characterization of solutions which blow up, vanish, or converge to a stationary state for initial data of the form $\lambda\varphi_{0}$ as $\lambda>0$ crosses a bifurcation value $\lambda_{0}$.
Key words and phrases: Quasilinear parabolic equation, asymptotic behavior, $\omega$-limit set, blow-up
2010 Mathematics Subject Classification: Primary 35K59, 35B40; Secondary 35B44
This paper was carried out while the second author was staying at
University Bordeaux I. The author is very grateful to all the staff of
University Bordeaux I for their kind hospitality.
The first author is supported by Gruppo Nazionale per l’Analisi Matematica,
la Probabilità e le loro Applicazioni (INdAM).
The second author is supported by
JSPS Grant-in-Aid for Scientific Research (C) (No. 15K04970).
Contents
1 Introduction and main results
1.1 Overview
1.2 Main results
2 Preparatory results
2.1 Local existence and basic properties
2.2 Energy stabilization
2.3 Decay estimates
2.4 Some technical results
3 Proof of the main results
3.1 Proof of Theorem 1.1
3.2 Proof of Theorem 1.2
3.3 Proof of Theorem 1.3
1. Introduction and main results
1.1. Overview
In the last decades, a considerable attention has been devoted to the study of solutions
to the quasi-linear Schrödinger equation
(1.1)
$${\rm i}u_{t}+\Delta u+u\Delta u^{2}-u+|u|^{p-1}u=0\quad\text{in $\mathbb{R}^{N%
}\times(0,\infty)$}.$$
In fact, this equation arises in the theory of superfluid films in plasma physics, see [5, 6],
and it provides a more accurate model for many physical phenomena than
the classical semi-linear Schrödinger equation ${\rm i}u_{t}+\Delta u-u+|u|^{p-1}u=0$.
In particular, local well-posedness, regularity, existence and properties of ground states
as well as stability of standing wave solutions were investigated,
see e.g. [8] and the references therein.
The problem has also attracted attention in the framework of non-smooth critical point theory,
since the functional associated with the standing wave solutions of (1.1)
$$u\mapsto\frac{1}{2}\int_{{\mathbb{R}^{N}}}(1+2u^{2})|\nabla u|^{2}dx+\frac{1}{%
2}\int_{{\mathbb{R}^{N}}}u^{2}dx-\frac{1}{p+1}\int_{{\mathbb{R}^{N}}}|u|^{p+1}%
\,dx,$$
is merely lower semi-continuous on the Sobolev space $H^{1}(\mathbb{R}^{N})$
and turns out to be differentiable only along bounded directions.
Hence the existence of critical points in $H^{1}(\mathbb{R}^{N})$ required the development of new tools
and ideas, see e.g. [22, 30] and [19].
In this paper, motivated by the results obtained in [9, 10]
for a class of semi-linear parabolic equations,
we aim to investigate the asymptotic behavior for the quasi-linear parabolic problem
(1.2)
$$\displaystyle u_{t}-\Delta u-u\Delta u^{2}+u=|u|^{p-1}u$$
$$\displaystyle\text{in $\mathbb{R}^{N}\times(0,\infty)$},$$
(1.3)
$$\displaystyle u(x,0)=u_{0}(x)$$
$$\displaystyle\text{in $\mathbb{R}^{N}$},$$
whose corresponding stationary problem is
(1.4)
$$\displaystyle-\Delta u-u\Delta u^{2}+u$$
$$\displaystyle=|u|^{p-1}u\quad\text{in $\mathbb{R}^{N}$},$$
$$\displaystyle u(x)\to 0$$
as $$|x|\to\infty.$$
More precisely, we deal with the problem of uniqueness of the limit
of bounded trajectories of (1.2)-(1.3).
Since the problem is invariant under translations,
even the knowledge that (1.4) admits a unique solution up to translations
does not, in general, prevent the existence of two positively diverging sequences
$\{t_{n}\}_{n\in\mathbb{N}}$, $\{\tau_{n}\}_{n\in\mathbb{N}}$ such that
$\{u(\cdot,t_{n})\}_{n\in\mathbb{N}}$ and $\{u(\cdot,\tau_{n})\}_{n\in\mathbb{N}}$
converge to different solutions of (1.4).
As proved by L. Simon in a celebrated paper [29] (see also [16]),
for variational parabolic problems such as $u_{t}+{\mathcal{E}}^{\prime}(u,\nabla u)=0$,
where the associated Lagrangian ${\mathcal{E}}(s,\xi)$ depends analytically
on its variables $(s,\xi)$,
the full flow $u(t)$ always converges to a stationary solution
of ${\mathcal{E}}^{\prime}(u,\nabla u)=0$, and oscillatory behavior is thus ruled out.
The argument is essentially based upon the Łojasiewicz inequality [20]
and a series of additional estimates.
On the other hand for (1.2), the assumptions of [29] are not fulfilled
due to the presence of the non-analytical nonlinearity $u\to|u|^{p-1}u$,
unless $p$ is an odd integer.
In general, without the analyticity assumption,
the $\omega$-limit set corresponding to a suitable sub-manifold of initial data
is a continuum in $H^{1}$ homeomorphic to a sphere, see [24, 25].
However, equation (1.4) has been the object of various investigations
concerning uniqueness and non-degeneracy of solutions.
By working on the linearized operator ${\mathcal{L}}$ around a stationary solution $w$, namely
(1.5)
$${\mathcal{L}}\phi=-(1+2w^{2})\Delta\phi-4w\nabla w\cdot\nabla\phi-(4w\Delta w+%
2|\nabla w|^{2})\phi+\phi-p|w|^{p-1}\phi,$$
and by exploiting the non-degeneracy [2, 28] of the positive radial solutions
to (1.4), i.e.
$${\rm Ker}({\mathcal{L}})={\rm span}\Big{\{}\frac{\partial w}{\partial x_{1}},%
\ldots,\frac{\partial w}{\partial x_{N}}\Big{\}},$$
inspired by the ideas of [9], where the semi-linear case is considered,
we will be able to prove that the flow of (1.2)-(1.3) does in fact enjoy
uniqueness of its limit.
As to similar results for semi-linear parabolic problems,
see [7, 10, 11] and references therein.
Throughout the paper we shall assume that
$$3\leq\,p<\frac{3N+2}{N-2}\quad\text{if $N\geq 3$},\qquad 3\leq p<\infty\quad%
\text{if $N=1,2$}.$$
We will deal with classical solutions $u\in C([0,T_{0}),C^{2}(\mathbb{R}^{N}))\cap C^{1}((0,T_{0}),C(\mathbb{R}^{N}))$
to (1.2)-(1.3), whose local existence and additional properties will be established in Section 2.
The uniqueness of positive solutions of (1.4) has been investigated in [1, 15],
while the non-degeneracy of the unique positive solution has been also obtained
in [2, 3, 28].
We also note that the unique positive solution $w$ of (1.4) is radially symmetric
with respect to a point $x_{0}\in{\mathbb{R}^{N}}$ and decays exponentially at infinity.
For a good source of references for the issue of long term behavior of semi-linear parabolic equations, we refer the reader to [12].
1.2. Main results
The following are the main results of the paper.
Theorem 1.1 (Decaying solutions).
Let $N\geq 2$ and let $u_{0}\in C_{0}^{\infty}(\mathbb{R}^{N})$ be non-negative and radially non-increasing.
Let $u$ be the corresponding solution to (1.2)-(1.3) and
assume that it is globally defined.
Then $u$ is positive, bounded, radially decreasing and
(1.6)
$$\lim_{|x|\to\infty}\sup_{t>0}u(x,t)=0.$$
Theorem 1.2 (Uniqueness of limit).
Let $N\geq 1$ and let $u$ be a non-negative, bounded, globally defined
solution to (1.2)-(1.3) which satisfies (1.6).
Then either $u(x,t)\to 0$ uniformly in ${\mathbb{R}^{N}}$ as $t\to\infty$ or
there is a positive solution $w$ of (1.4) such that $u(x,t)\to w(x)$ uniformly in ${\mathbb{R}^{N}}.$
In addition,
(1.7)
$$\lim_{t\to\infty}\int_{0}^{K}\|u(\cdot,t+s)-w(\cdot)\|_{H^{1}({\mathbb{R}^{N}}%
)}^{2}\,ds=0,$$
for every $K>0$.
Theorem 1.3 (Bifurcation).
Let $N\geq 2$ and let $\varphi_{0}\in C_{0}^{\infty}(\mathbb{R}^{N})$ be
non-negative, radially non-increasing and not identically equal to zero.
If $p=3$, assume furthermore that
$$\int_{{\mathbb{R}^{N}}}\Big{(}\varphi_{0}^{2}|\nabla\varphi_{0}|^{2}-\frac{1}{%
4}|\varphi_{0}|^{4}\Big{)}\,dx<0.$$
Then there exists $\lambda_{0}>0$ such that the solution $u$
to (1.2)-(1.3) with $u_{0}=\lambda\varphi_{0}$ satisfies
(i)
If $\lambda<\lambda_{0}$,
then $u(x,t)$ goes to zero as $t\to\infty$ uniformly in ${\mathbb{R}^{N}}$.
(ii)
If $\lambda=\lambda_{0}$,
then $u(x,t)$ converges to a positive solution $w$ of (1.4) uniformly in ${\mathbb{R}^{N}}$.
(iii)
If $\lambda>\lambda_{0}$, then $u(x,t)$ blows up in finite time.
Remark. Here we collect some remarks on the main results.
(i) In Theorem 1.2, we do not need any symmetry assumptions on
the solution. However, by the result in [23], we can show that our global solution is
asymptotically symmetric, that is, the elements of its $\omega$-limit set share a
common center of symmetry.
See [7, 21] for related results.
(ii) To prove the uniform decay condition (1.6) in Theorem 1.1,
we have to assume that $u_{0}$ is radially non-increasing.
This assumption is used to obtain a universal bound near infinity, see Remark 2.14.
We believe that it is merely technical, but at present we do not know how to remove it.
(iii) By a recent result in [3], the non-degeneracy of the positive radial solution to
(1.4) holds even if $1<p<3$. On the other hand,
the condition $p\geq 3$ appears in various situations,
especially in the proof of Theorem 1.1.
Although the nonlinear term $|u|^{p-1}u$ is superlinear even when $1<p<3$,
problem (1.2) has a sublinear structure due to the term $u\Delta u^{2}$,
which causes our arguments to fail completely in that range.
(iv) In the proof of Theorem 1.1, we also require that $N\geq 2$.
This is to construct a suitable supersolution, see Remark 2.16.
As we will see in Section 2, our problem is uniformly parabolic, so that basic tools
(energy estimates, Schauder estimates, the Comparison Principle, etc.) are available.
In particular, some proofs proceed in the spirit of those of [9]
for semi-linear problems.
However, the semi-linear techniques quite often fail to work,
especially in the construction of suitable sub-solutions (see e.g. Lemma 2.13).
To compare the dynamical behavior of solutions for our quasi-linear parabolic problem
with that for the corresponding semi-linear one, for $\kappa>0$, we consider the problem
(1.8)
$$\left\{\begin{array}[]{rl}u_{t}-\Delta u-\kappa u\Delta u^{2}+u=|u|^{p-1}u&\ %
\hbox{in}\ {\mathbb{R}^{N}}\times(0,\infty),\\
u(x,0)=\lambda\varphi_{0}(x)&\ \hbox{in}\ {\mathbb{R}^{N}}\end{array}\right.$$
and the corresponding semi-linear parabolic problem:
(1.9)
$$\left\{\begin{array}[]{rl}u_{t}-\Delta u+u=|u|^{p-1}u&\ \hbox{in}\ {\mathbb{R}%
^{N}}\times(0,\infty),\\
u(x,0)=\lambda\varphi_{0}(x)&\ \hbox{in}\ {\mathbb{R}^{N}}.\end{array}\right.$$
The stationary problem associated with (1.9) is given by
(1.10)
$$-\Delta w+w=|w|^{p-1}w\ \hbox{in}\ {\mathbb{R}^{N}},\quad w(x)\to 0\ \hbox{as}%
\ |x|\to\infty.$$
It is well-known that problem (1.10) has a unique positive solution
for $1<p<(N+2)/(N-2)$ if $N\geq 3$ and $1<p<\infty$ if $N=1,2$.
Now let $\lambda_{0}(\kappa)>0$ be a constant obtained by applying
Theorem 1.3 to (1.8).
When $3\leq p<(N+2)/(N-2)$, both $\lambda_{0}(\kappa)$ and $\lambda_{0}(0)$ are defined and
(1.11)
$$\lambda_{0}(0)<\lambda_{0}(\kappa)\quad\hbox{for all}\ \kappa>0.$$
In fact we claim that $\lambda_{0}(\kappa_{0})<\lambda_{0}(\kappa_{1})$ for all $\kappa_{0}<\kappa_{1}$.
Defining $I_{\kappa}$ by
$$I_{\kappa}(u)=\frac{1}{2}\int_{{\mathbb{R}^{N}}}\big{(}(1+2\kappa u^{2})|%
\nabla u|^{2}+u^{2}\big{)}\,dx-\frac{1}{p+1}\int_{{\mathbb{R}^{N}}}|u|^{p+1}\,dx,$$
it follows that if $I_{\kappa_{0}}(u)\geq 0$ for all $u\in C_{0}^{\infty}({\mathbb{R}^{N}})$, then
$I_{\kappa_{1}}(u)\geq 0$ for all $u\in C_{0}^{\infty}({\mathbb{R}^{N}})$.
Thus by Lemmas 2.17-2.18 and by the definition of $\lambda_{0}(\kappa)$,
the claim follows.
Inequality (1.11) shows that there exist initial values $u_{0}$ such that
the corresponding solution to (1.8) is globally defined,
but that of the semi-linear problem (1.9) blows up in finite time.
In other words, the quasi-linear term $u\Delta u^{2}$ prevents blow-up of solutions.
This kind of stabilizing effects has
been observed for the quasi-linear Schrödinger equation (1.1), see e.g. [6, 8].
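The monotonicity of $\lambda_{0}(\kappa)$ rests on the elementary fact that $\kappa\mapsto I_{\kappa}(u)$ is non-decreasing for each fixed $u$, since $I_{\kappa_{1}}(u)-I_{\kappa_{0}}(u)=(\kappa_{1}-\kappa_{0})\int u^{2}|\nabla u|^{2}\,dx\geq 0$. A minimal numerical sketch of this fact, where one space dimension, $p=3$, and a Gaussian test profile are our illustrative assumptions:

```python
import numpy as np

# evaluate I_kappa(u) = 1/2 int (1+2*kappa*u^2)|u'|^2 + u^2 dx
#                       - 1/(p+1) int |u|^{p+1} dx   on a 1D grid
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
p = 3.0
u = 1.5 * np.exp(-x**2)          # hypothetical test profile (N = 1)
ux = np.gradient(u, dx)          # central-difference derivative

def I(kappa):
    integrand = (1 + 2 * kappa * u**2) * ux**2 + u**2
    return 0.5 * np.sum(integrand) * dx - np.sum(np.abs(u)**(p + 1)) * dx / (p + 1)

# I_kappa(u) is non-decreasing in kappa: the kappa-dependent term
# kappa * int 2 u^2 |u'|^2 dx is non-negative
vals = [I(k) for k in (0.0, 0.5, 1.0, 2.0)]
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
```

This pointwise monotonicity of the energy in $\kappa$ is exactly what feeds the claim $\lambda_{0}(\kappa_{0})<\lambda_{0}(\kappa_{1})$ for $\kappa_{0}<\kappa_{1}$.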
Plan of the paper.
In Section 2, we state several preparatory results.
In Subsection 2.1, we establish the local existence of classical solutions of (1.2)
and give qualitative properties of classical solutions.
Subsection 2.2 concerns stability estimates for global solutions.
We prove uniform estimates for global solutions in Subsection 2.3.
We state technical results about uniqueness of the limit in Subsection 2.4.
In Section 3, we will prove the main results of the paper.
Notations. For any $p\in[1,\infty)$ and a domain $U\subset{\mathbb{R}^{N}}$,
the space $L^{p}(U)$ is endowed with the norm
$$\|u\|_{L^{p}(U)}=\Big{(}\int_{U}|u|^{p}\,dx\Big{)}^{1/p}\!\!\!.$$
$(\cdot,\cdot)_{L^{2}(U)}$ denotes the standard inner product in $L^{2}(U)$.
The Sobolev space $H^{1}(U)$ is endowed with the standard norm
$$\|u\|_{H^{1}(U)}=\Big{(}\int_{U}\big{(}|\nabla u|^{2}+|u|^{2}\big{)}\,dx\Big{)%
}^{1/2}\!\!.$$
The higher order spaces $H^{m}(U)$ are endowed with the standard norm.
The space $C^{k}\big{(}(0,T),H^{m}(U)\big{)}$ denotes the space of functions with $k$ continuous time derivatives
taking values in $H^{m}(U)$. When $U={\mathbb{R}^{N}}$, we may write $\|\cdot\|_{H^{m}({\mathbb{R}^{N}})}=\|\cdot\|_{H^{m}}$.
The symbols ${\partial u}/{\partial x_{i}}$, $\partial^{2}u/{\partial x_{i}\partial x_{j}}$
and $u_{t}$ denote, respectively, the first and second order space derivatives and
the time derivative of $u$.
For a non-negative integer $m$, $D^{m}u$ denotes the set of all partial derivatives of order $m$.
$C^{\infty}_{0}(\mathbb{R}^{N})$ denotes the space of compactly supported smooth functions.
The notation ${\rm span}\{w_{1},\ldots,w_{k}\}$ denotes the vector space generated by the vectors $\{w_{1},\ldots,w_{k}\}$.
We denote by $\Omega(u)$ the $\omega$-limit set of $u$, namely the set
$$\Omega(u):=\big{\{}w\in H^{1}(\mathbb{R}^{N}):\,u(\cdot,t_{n})\to w\ \hbox{%
uniformly in ${\mathbb{R}^{N}}$ as $n\to\infty$, for some}\ t_{n}\to\infty\big%
{\}}.$$
The symbol $B(x_{0},R)$ denotes a ball in $\mathbb{R}^{N}$ of center $x_{0}$ and with radius $R$.
The complement of a measurable set $E\subset\mathbb{R}^{N}$ will be denoted by $E^{c}$.
2. Preparatory results
2.1. Local existence and basic properties
In this subsection, we prove the local existence of classical solutions of
(1.2)-(1.3) and provide also some qualitative properties.
First we observe that (1.2) can be written as $L(u)=0$ where
$$L(u)=(1+2u^{2})\Delta u+2u|\nabla u|^{2}-u+|u|^{p-1}u-u_{t}=:F\left(u,\frac{%
\partial u}{\partial x_{i}},\frac{\partial^{2}u}{\partial x_{i}\partial x_{j}}%
\right)-u_{t},$$
where we have set
$$F(u,p_{i},r_{ij})=\sum_{i,j=1}^{N}(1+2u^{2})\delta_{ij}r_{ij}+2u\sum_{i=1}^{N}%
p_{i}^{2}-u+|u|^{p-1}u.$$
Then one has $\frac{\partial F}{\partial r_{ij}}=(1+2u^{2})\delta_{ij}$ and hence
$$\sum_{i,j=1}^{N}\frac{\partial F}{\partial r_{ij}}\left(u,\frac{\partial u}{%
\partial x_{i}},\frac{\partial^{2}u}{\partial x_{i}\partial x_{j}}\right)\xi_{%
i}\xi_{j}=(1+2u^{2})|\xi|^{2}\geq|\xi|^{2},$$
for all $\xi\in{\mathbb{R}^{N}}\setminus\{0\}$ and $u\in\mathbb{R}$.
This implies that $F$ is uniformly elliptic and the nonlinear operator $L$ is
(strongly) parabolic with respect to any $u$.
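The uniform ellipticity computation above can be double-checked symbolically. In the following sketch, the dimension $N=3$ is an illustrative assumption, and the zeroth-order nonlinearity $|u|^{p-1}u$ is omitted since it does not involve $r_{ij}$; the sketch verifies that $\sum_{i,j}\frac{\partial F}{\partial r_{ij}}\xi_{i}\xi_{j}=(1+2u^{2})|\xi|^{2}$.

```python
import sympy as sp

N = 3  # illustrative dimension (an assumption for this sketch)
u = sp.Symbol('u', real=True)
pvars = sp.symbols(f'p1:{N + 1}', real=True)
r = sp.Matrix(N, N, lambda i, j: sp.Symbol(f'r{i}{j}', real=True))
xi = sp.Matrix(sp.symbols(f'xi1:{N + 1}', real=True))

# F(u, p_i, r_ij) as in the text, without the |u|^{p-1}u term
F = (sum((1 + 2 * u**2) * sp.KroneckerDelta(i, j) * r[i, j]
         for i in range(N) for j in range(N))
     + 2 * u * sum(pi**2 for pi in pvars) - u)

# quadratic form sum_{i,j} dF/dr_ij * xi_i * xi_j
Q = sum(sp.diff(F, r[i, j]) * xi[i] * xi[j]
        for i in range(N) for j in range(N))

# it equals (1 + 2u^2)|xi|^2, confirming uniform ellipticity
assert sp.simplify(Q - (1 + 2 * u**2) * sum(c**2 for c in xi)) == 0
```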
We also note that $L$ can be written in divergence form
$$L(u)={\rm div}A(u,\nabla u)+B(u,\nabla u)-u_{t},$$
(2.1)
$$A(u,{\bf p})=(1+2u^{2}){\bf p},\quad B(u,{\bf p})=-(1+2|{\bf p}|^{2})u+|u|^{p-%
1}u.$$
Then we have the following result on the local existence of classical solutions
whose proof is based on a modified Galerkin method as in [31].
Lemma 2.1 (Local existence).
Let $u_{0}\in C_{0}^{\infty}({\mathbb{R}^{N}})$.
Then there exist $T_{0}=T_{0}(u_{0})\in(0,\infty]$ and a unique classical solution
$u(x,t)$ of (1.2)-(1.3) satisfying
(2.2)
$$\displaystyle\sup_{t\in(0,T_{0})}\|D^{k}u(\cdot,t)\|_{L^{\infty}({\mathbb{R}^{%
N}})}<\infty\ \hbox{for}\ |k|\leq 2,$$
(2.3)
$$\displaystyle u(x,t)\to 0\ \hbox{as}\ |x|\to\infty\ \hbox{for each}\ t\in(0,T_%
{0}).$$
Proof.
Since the operator $L$ is strongly parabolic, for any $u_{0}\in C_{0}^{\infty}({\mathbb{R}^{N}})$,
there exist a (small) positive number $T_{0}=T_{0}(u_{0})$ and a unique solution
$u(x,t)$ of (1.2)-(1.3) satisfying
$$u\in C\left([0,T_{0}),H^{m}({\mathbb{R}^{N}})\right)\cap C^{1}\left((0,T_{0}),%
H^{m-2}({\mathbb{R}^{N}})\right)\ \hbox{for any}\ m\in\mathbb{N}\ \hbox{with}%
\ m>\frac{N}{2}+2$$
by using a suitable approximation and applying the energy estimate,
see [31, Proposition 7.5].
Then by the Sobolev embedding $H^{m}({\mathbb{R}^{N}})\hookrightarrow C^{2}({\mathbb{R}^{N}})$
and $H^{m-2}({\mathbb{R}^{N}})\hookrightarrow C({\mathbb{R}^{N}})$ for $m>\frac{N}{2}+2$,
$u$ is a classical solution of (1.2)-(1.3).
Moreover by the Sobolev and Morrey inequalities, (2.2) and (2.3) also hold.
∎
From (2.1), we can also obtain the local existence of classical solutions
by applying the Schauder estimate, see [17, Theorem 8.1, p.495].
We note that $T_{0}$ need not be the maximal lifespan of the solution:
the local solution $u(x,t)$ can be extended beyond $T_{0}$
as long as $\sup\|u(\cdot,t)\|_{C^{2}({\mathbb{R}^{N}})}$ remains bounded.
Next we prepare the following Comparison Principle for later use.
For this statement, we refer the reader to [26, Section 7, Theorem 12, p.187].
Lemma 2.2 (Comparison principle).
Let $U$ be a bounded domain in ${\mathbb{R}^{N}}$ and $T>0$.
Suppose that $u$ is a solution of $L(u)=f(x,t)$ in $U\times(0,T]$
satisfying the initial boundary conditions:
$$\displaystyle u(x,t)=g_{1}(x,t)$$
$$\displaystyle\hbox{on}\ \partial U\times(0,T),$$
$$\displaystyle u(x,0)=g_{2}(x)$$
$$\displaystyle\hbox{in}\ U.$$
Assume that $z(x,t)$ and $Z(x,t)$ satisfy the inequalities:
$$\displaystyle L(Z)\leq f(x,t)\leq L(z)$$
$$\displaystyle\ \hbox{in}\ U\times(0,T],$$
$$\displaystyle z(x,t)\leq g_{1}(x,t)\leq Z(x,t)$$
$$\displaystyle\ \hbox{on}\ \partial U\times(0,T),$$
$$\displaystyle z(x,0)\leq g_{2}(x)\leq Z(x,0)$$
$$\displaystyle\ \hbox{in}\ U.$$
If $L$ is parabolic with respect to the functions
$\theta u+(1-\theta)z$ and $\theta u+(1-\theta)Z$ for any $\theta\in[0,1]$,
then it follows that
$$z(x,t)\leq u(x,t)\leq Z(x,t)\ \hbox{in}\ U\times(0,T].$$
We recall that $z$ and $Z$ are called a subsolution
and a supersolution of $L(u)=f$ respectively.
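To illustrate the sandwich $z\leq u\leq Z$, note that $z\equiv 0$ is a subsolution of $L(u)=0$ and, for $p=3$, any constant $Z\equiv C\in(0,1)$ is a supersolution, since $L(C)=C^{3}-C\leq 0$. A minimal one-dimensional finite-difference sketch follows; the explicit scheme, grid, and initial datum are illustrative assumptions, not from the paper.

```python
import numpy as np

# 1D version of L(u) = 0 with p = 3:
#   u_t = (1 + 2u^2) u_xx + 2u (u_x)^2 - u + u^3
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]
dt = 2e-4                        # small enough for stability of the scheme
C = 0.8                          # supersolution Z = C, since C^3 - C <= 0
u = C * np.exp(-x**2)            # hypothetical initial datum, 0 <= u0 <= C

for _ in range(2000):
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    ux = np.gradient(u, dx)
    u = u + dt * ((1 + 2 * u**2) * uxx + 2 * u * ux**2 - u + u**3)

# the ordering z = 0 <= u <= Z = C is preserved along the discrete flow
assert u.min() >= -1e-10 and u.max() <= C + 1e-10
```

The monotone explicit scheme preserves the ordering provided the time step satisfies the parabolic stability restriction, mirroring the continuum Comparison Principle.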
By applying Lemma 2.2, we provide some qualitative properties
for solutions of (1.2)-(1.3).
Lemma 2.3 (Radially decreasing flows).
Suppose that $u_{0}\in C_{0}^{\infty}({\mathbb{R}^{N}})$ is non-negative and not identically zero.
Then the corresponding solution $u$ is positive for
all $x\in{\mathbb{R}^{N}}$ and $t\in(0,T_{0})$.
Moreover if $u_{0}(x)=u_{0}(|x|)$ and $u_{0}^{\prime}(r)\leq 0$ for all $r\geq 0$,
then $u(x,t)$ is also radial and
$u_{r}(r,t)<0$ for all $r\geq 0$ and $t\in(0,T_{0})$.
Proof.
First since $z\equiv 0$ is a subsolution of $L(u)=0$,
it follows by Lemma 2.2 that $u\geq 0$.
Moreover from (2.1), we can see that the structural assumptions
for quasi-linear parabolic equations in [32] are fulfilled. Then we can use the time-dependent Harnack inequality for $L(u)=0$,
see [32, Theorem 1.1]. Thus we have $u>0$. Next we suppose that $u_{0}$ is radial.
Then by the local uniqueness and the rotation invariance of problem (1.2),
it follows that $u$ is radially symmetric.
Let us assume that $u_{0}^{\prime}(r)\leq 0$ for all $r\geq 0$.
We show that $u_{r}\leq 0$. To this end, we follow an idea in [27, Section 52.5].
Now we differentiate (1.2) with respect to $r$
and write $u^{\prime}=u_{r}$ for simplicity. Then by a direct calculation, one has
$$\displaystyle u_{t}^{\prime}$$
$$\displaystyle-(1+2u^{2})u^{\prime\prime\prime}-\big{(}8uu^{\prime}+\frac{N-1}{%
r}(1+2u^{2})\Big{)}u^{\prime\prime}$$
$$\displaystyle-\Big{(}\frac{4(N-1)}{r}uu^{\prime}-\frac{N-1}{r^{2}}(1+2u^{2})+2%
(u^{\prime})^{2}+pu^{p-1}-1\Big{)}u^{\prime}=0.$$
We put $\phi(r,t)=u_{r}(r,t)e^{-Kt}$ for $K>0$.
Then $\phi$ satisfies the following parabolic problem:
(2.4)
$$\displaystyle\tilde{L}(\phi)$$
$$\displaystyle:=(1+2u^{2})\phi^{\prime\prime}+a\phi^{\prime}+b\phi-\phi_{t}=0,$$
$$\displaystyle a(r,t)$$
$$\displaystyle=8uu^{\prime}+\frac{N-1}{r}(1+2u^{2}),$$
$$\displaystyle b(r,t)$$
$$\displaystyle=\frac{4(N-1)}{r}uu^{\prime}-\frac{N-1}{r^{2}}(1+2u^{2})+2(u^{%
\prime})^{2}+pu^{p-1}-1-K.$$
Moreover, choosing $K>0$ sufficiently large, we may assume that
$b(r,t)\leq 0$ in $(0,\infty)\times(0,T_{0})$.
Hereafter we write $Q=(0,\infty)\times(0,T_{0})$ for simplicity.
Next we suppose that
$$\displaystyle\sup_{Q}\phi(r,t)>0.$$
Then we can take
$$M:=\frac{1}{2}\displaystyle\sup_{Q}\phi(r,t)>0$$
and put $\Phi(r,t)=\phi(r,t)-M$.
We observe that $\Phi(0,t)=u_{r}(0,t)e^{-Kt}-M=-M$
for every $t\in(0,T_{0})$.
Moreover $\Phi(r,t)\to-M$ as $r\to\infty$ for each
$t\in(0,T_{0})$.
In fact since $\|\nabla u(\cdot,t)\|_{H^{m-1}({\mathbb{R}^{N}})}<\infty$ for any $m>\frac{N}{2}+2$,
it follows by the Morrey embedding theorem that
$|\nabla u(x,t)|\to 0$ as $|x|\to\infty$ and hence
$$\lim_{r\to\infty}\Phi(r,t)=\lim_{r\to\infty}u_{r}(r,t)e^{-Kt}-M=-M.$$
Now since
$$\displaystyle\sup_{t\in(0,T_{0})}\Phi(0,t)=\lim_{r\to\infty}\sup_{t\in(0,T_{0}%
)}\Phi(r,t)=-M,$$
there exists an interval $(r_{0},r_{1})\subset(0,\infty)$ such that
$\Phi(r,t)\leq 0$ for $r\in(0,r_{0})\cup(r_{1},\infty)$ and $t\in(0,T_{0})$,
(2.5)
$$\Phi(r_{0},t)=\Phi(r_{1},t)=0\ \hbox{for}\ t\in(0,T_{0}).$$
Moreover by the definition of $\Phi$ and from $u_{0}^{\prime}(r)\leq 0$,
it follows that
(2.6)
$$\Phi(r,0)=\phi(r,0)-M=u_{r}(r,0)-M=u_{0}^{\prime}(r)-M<0\ \hbox{for}\ r\in(r_{%
0},r_{1}).$$
Finally from (2.4), $\Phi=\phi-M$ and $b\leq 0$, we also have
(2.7)
$$\tilde{L}(\Phi)=\tilde{L}(\phi)-bM=-bM\geq 0\ \hbox{in}\ (r_{0},r_{1})\times(0%
,T_{0}).$$
Since the operator $\tilde{L}$ is parabolic, we can apply the Comparison Principle.
Thus from (2.5)-(2.7), it follows that $\Phi$ is a subsolution of $\tilde{L}(u)=0$
and hence $\Phi\leq 0$ in $(0,\infty)\times(0,T_{0})$.
On the other hand by the definition of $M$, one has
$$\displaystyle\sup_{Q}\Phi(r,t)=\sup_{Q}\phi(r,t)-M=M>0.$$
This is a contradiction. Thus $\sup_{Q}\phi\leq 0$ and hence
$u_{r}(r,t)=e^{Kt}\phi(r,t)\leq 0$ for all $r\geq 0$ and $t\in(0,T_{0})$.
This proves that $u$ is radially non-increasing, as required.
Finally, the strict radial decrease of $u$ follows by the Hopf lemma,
see [26, Theorem 6, p. 174].
∎
2.2. Energy stabilization
In this subsection, we prove several stability estimates.
Let $u(x,t)$ be a non-negative bounded, globally defined solution
of (1.2)-(1.3) satisfying (1.6) and denote by $\Omega(u)$ the
$\omega$-limit set of $u$.
We also suppose that $\|\nabla u(\cdot,t)\|_{L^{\infty}({\mathbb{R}^{N}})}$ is uniformly bounded.
We define the functional
$$I(u):=\frac{1}{2}\int_{{\mathbb{R}^{N}}}\big{(}(1+2u^{2})|\nabla u|^{2}+u^{2}%
\big{)}\,dx-\frac{1}{p+1}\int_{{\mathbb{R}^{N}}}|u|^{p+1}\,dx.$$
Notice that $I$ is well defined on the set of functions $u\in H^{1}(\mathbb{R}^{N})$
such that $u^{2}\in H^{1}(\mathbb{R}^{N})$, since $3\leq p<\frac{3N+2}{N-2}$,
via the Sobolev embedding (cf. [8]). Moreover we have the following.
Lemma 2.4 (Energy identity).
There holds
$$\frac{d}{dt}I\big{(}u(\cdot,t)\big{)}=-\int_{{\mathbb{R}^{N}}}u_{t}(x,t)^{2}\,dx.$$
Proof.
It is possible to prove that $I$ is differentiable along smooth bounded directions.
By the proof of Lemma 2.1, we know that $u\in C^{1}\big{(}(0,T_{0}),H^{m-2}({\mathbb{R}^{N}})\big{)}$
for any $m>N/2+2$.
Since $H^{m-2}({\mathbb{R}^{N}})\hookrightarrow L^{\infty}({\mathbb{R}^{N}})$ for $m>N/2+2$,
it follows that $u\in C^{1}\big{(}(0,T_{0}),L^{\infty}({\mathbb{R}^{N}})\big{)}$
and hence $I$ is differentiable with respect to $t$ at $u$ along the smooth direction $u_{t}$.
By a direct computation and from (1.2), we have
$$\frac{d}{dt}I\big{(}u(\cdot,t)\big{)}=I^{\prime}(u(\cdot,t))(u_{t}(\cdot,t))=-%
\int_{{\mathbb{R}^{N}}}u_{t}^{2}(x,t)\,dx.$$
This completes the proof.
∎
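The energy identity of Lemma 2.4 can also be checked numerically. The following sketch compares the discrete increment of $I$ over one time step with $-\int u_{t}^{2}\,dx$; one space dimension, $p=3$, an explicit Euler step, and a hypothetical Gaussian datum are illustrative assumptions.

```python
import numpy as np

# explicit finite-difference sketch of (1.2) in 1D with p = 3
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]
dt = 2e-4                                  # small step for accuracy/stability
u = 0.5 * np.exp(-x**2)                    # hypothetical initial datum

def rhs(v):
    # u_t = (1 + 2u^2) u_xx + 2u (u_x)^2 - u + u^3
    vx = np.gradient(v, dx)
    vxx = np.zeros_like(v)
    vxx[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    return (1 + 2 * v**2) * vxx + 2 * v * vx**2 - v + v**3

def I(v):
    # I(u) = 1/2 int (1+2u^2)|u'|^2 + u^2 dx - 1/4 int u^4 dx   (p = 3)
    vx = np.gradient(v, dx)
    return (0.5 * np.sum((1 + 2 * v**2) * vx**2 + v**2) * dx
            - 0.25 * np.sum(v**4) * dx)

ut = rhs(u)
u_new = u + dt * ut
lhs = (I(u_new) - I(u)) / dt               # discrete dI/dt
target = -np.sum(ut**2) * dx               # -int u_t^2 dx
assert abs(lhs - target) < 0.05 * abs(target)
```

The agreement reflects the fact that the right-hand side of (1.2) is minus the $L^{2}$-gradient of $I$, which is the content of Lemma 2.4.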
Lemma 2.4 implies that $I$ is decreasing in $t$
and hence $I$ is a Lyapunov function associated with the problem (1.2)-(1.3).
Lemma 2.5 (Flow stabilization).
For every $K>0$ we have
$$\displaystyle\lim_{t\to\infty}\sup_{\tau\in[0,K]}\|u(\cdot,t+\tau)-u(\cdot,t)%
\|_{L^{2}({\mathbb{R}^{N}})}=0,$$
$$\displaystyle\lim_{t\to\infty}\sup_{\tau\in[0,K]}\|u(\cdot,t+\tau)-u(\cdot,t)\|_{C^{1}({\mathbb{R}^{N}})}=0.$$
In particular if $u(\cdot,t_{n})\to w$ uniformly in ${\mathbb{R}^{N}}$,
then $u(\cdot,t_{n}+\rho_{n})\to w$ in $C^{1}(\mathbb{R}^{N})$
for any bounded sequence $\{\rho_{n}\}_{n\in\mathbb{N}}\subset\mathbb{R}^{+}$.
Proof.
We fix $\tau\in[0,K]$. For every $t>0$ we have
$$\displaystyle\int_{\mathbb{R}^{N}}|u(\cdot,t+\tau)-u(\cdot,t)|^{2}\,dx$$
$$\displaystyle=\int_{\mathbb{R}^{N}}\left|\int_{t}^{t+\tau}u_{t}(\cdot,s)\,ds%
\right|^{2}\,dx\leq\tau\int_{\mathbb{R}^{N}}\int_{t}^{t+\tau}|u_{t}(\cdot,s)|^%
{2}\,ds\,dx$$
$$\displaystyle=\tau\int_{t}^{t+\tau}\int_{\mathbb{R}^{N}}|u_{t}(\cdot,s)|^{2}\,%
dx\,ds=\tau\big{(}I(t)-I(t+\tau)\big{)}.$$
Since $I(t)$ is non-increasing and bounded from below,
it has a finite limit as $t\to\infty$, which yields the assertion.
For the second claim, since $\{u(\cdot,t),\ t>1\}$ is relatively compact in $C^{1}({\mathbb{R}^{N}})$
(see the argument in the proof of Lemma 2.20),
one can argue as in [21, Lemma 3.1].
∎
By Lemma 2.5, we have the following basic result.
Lemma 2.6 ($\omega$ limit structure).
$\Omega(u)$ is either $\{0\}$ or consists of positive solutions of (1.4).
Proof.
For every $\varphi\in C_{0}^{\infty}({\mathbb{R}^{N}})$ and $\tau>0$, we have
$$\displaystyle\int_{t_{n}}^{t_{n}+\tau}\int_{\mathbb{R}^{N}}u_{t}\varphi\,dx\,ds$$
$$\displaystyle+\int_{t_{n}}^{t_{n}+\tau}\int_{\mathbb{R}^{N}}\nabla u\cdot%
\nabla\varphi\,dx\,ds+2\int_{t_{n}}^{t_{n}+\tau}\int_{\mathbb{R}^{N}}u\varphi|%
\nabla u|^{2}\,dx\,ds$$
$$\displaystyle+2\int_{t_{n}}^{t_{n}+\tau}\int_{\mathbb{R}^{N}}u^{2}\nabla u%
\cdot\nabla\varphi\,dx\,ds+\int_{t_{n}}^{t_{n}+\tau}\int_{\mathbb{R}^{N}}u%
\varphi\,dx\,ds=\int_{t_{n}}^{t_{n}+\tau}\int_{\mathbb{R}^{N}}u^{p}\varphi\,dx%
\,ds.$$
By the mean value theorem, this yields, for some $\xi_{n}\in[t_{n},t_{n}+\tau]$,
$$\displaystyle\int_{\mathbb{R}^{N}}\big{(}u(x,t_{n}+\tau)-u(x,t_{n})\big{)}%
\varphi(x)\,dx+\int_{\mathbb{R}^{N}}\nabla u(x,\xi_{n})\cdot\nabla\varphi(x)\,dx$$
$$\displaystyle+2\int_{\mathbb{R}^{N}}u(x,\xi_{n})\varphi(x)|\nabla u(x,\xi_{n})%
|^{2}\,dx+2\int_{\mathbb{R}^{N}}u^{2}(x,\xi_{n})\nabla u(x,\xi_{n})\cdot\nabla%
\varphi(x)\,dx$$
$$\displaystyle+\int_{\mathbb{R}^{N}}u(x,\xi_{n})\varphi(x)\,dx=\int_{\mathbb{R}%
^{N}}u^{p}(x,\xi_{n})\varphi(x)\,dx.$$
Since $u(\cdot,t_{n})\to w$ uniformly in ${\mathbb{R}^{N}}$, by virtue of Lemma 2.5
it follows that $u(\cdot,t_{n}+\tau)\to w$ in $L^{2}(\mathbb{R}^{N})$ and $u(\cdot,\xi_{n})\to w$ in $C^{1}(\mathbb{R}^{N})$, which yields
$$\displaystyle\int_{\mathbb{R}^{N}}\nabla w\cdot\nabla\varphi\,dx+2\int_{%
\mathbb{R}^{N}}w\varphi|\nabla w|^{2}\,dx+2\int_{\mathbb{R}^{N}}w^{2}\nabla w%
\cdot\nabla\varphi\,dx+\int_{\mathbb{R}^{N}}w\varphi\,dx=\int_{\mathbb{R}^{N}}%
w^{p}\varphi\,dx,$$
for every $\varphi\in C^{\infty}_{0}(\mathbb{R}^{N})$, namely $w$ is a non-negative solution of (1.4).
∎
Lemma 2.7 (Energy bounds).
The following properties hold.
(i)
There exists $C>0$ such that
$$\sup_{t>0}\int_{{\mathbb{R}^{N}}}\big{(}(1+2|u(x,t)|^{2})|\nabla u(x,t)|^{2}+|%
u(x,t)|^{2}\big{)}\,dx\leq C.$$
(ii)
If $w\in\Omega(u)$, then $w\in H^{1}({\mathbb{R}^{N}})$.
Moreover
$$\sup_{w\in\Omega(u)}\big{(}\|w\|_{H^{1}({\mathbb{R}^{N}})}+\|w\|_{L^{\infty}({%
\mathbb{R}^{N}})}\big{)}<\infty.$$
Proof.
We prove (i). Since $I$ is decreasing in $t$, it follows that for $t>0$
$$\frac{1}{2}\int_{{\mathbb{R}^{N}}}\Big{(}(1+2u^{2})|\nabla u|^{2}+u^{2}\Big{)}%
\,dx-\frac{1}{p+1}\int_{{\mathbb{R}^{N}}}|u|^{p+1}\,dx\leq I(u_{0}).$$
Moreover for every $R>0$, one has
$$\frac{1}{p+1}\int_{{\mathbb{R}^{N}}}|u|^{p+1}\,dx\leq\frac{1}{p+1}\Big{(}\sup_%
{|x|\geq R,\,t>0}|u(x,t)|\Big{)}^{p-1}\int_{B^{c}(0,R)}|u|^{2}\,dx+\int_{B(0,R%
)}|u|^{p+1}\,dx.$$
Finally from (1.6), by taking $R$ large enough, we have
$$\frac{1}{p+1}\Big{(}\sup_{|x|\geq R,\,t>0}|u(x,t)|\Big{)}^{p-1}\leq\frac{1}{4},$$
which yields
$$\sup_{t>0}\int_{{\mathbb{R}^{N}}}\Big{(}(1+2|u(x,t)|^{2})|\nabla u(x,t)|^{2}+|%
u(x,t)|^{2}\Big{)}\,dx\leq 4M|B(0,R)|+4I(u_{0}),$$
where we have set $M=\|u\|_{L^{\infty}({\mathbb{R}^{N}}\times[0,\infty))}^{p+1}$.
(ii) Suppose that
$$\|u(\cdot,t_{n})-w(\cdot)\|_{L^{\infty}({\mathbb{R}^{N}})}\to 0\quad\text{for %
some $t_{n}\to\infty$}.$$
Then from (i), it follows that $\{u(\cdot,t_{n})\}$ is bounded in $H^{1}({\mathbb{R}^{N}})$.
Thus, up to a subsequence, we have $u(\cdot,t_{n})\rightharpoonup\tilde{w}$ in $H^{1}({\mathbb{R}^{N}})$
and $u(\cdot,t_{n})\to\tilde{w}$ a.e. in ${\mathbb{R}^{N}}$ for some $\tilde{w}\in H^{1}({\mathbb{R}^{N}})$.
Since $u(\cdot,t_{n})$ converges to $w$ uniformly, it follows that
$w\equiv\tilde{w}$, which implies that $w\in H^{1}({\mathbb{R}^{N}})$. By the boundedness of $u(x,t)$ in $H^{1}({\mathbb{R}^{N}})$ and $L^{\infty}({\mathbb{R}^{N}})$,
the last assertion of (ii) follows.
∎
Lemma 2.8 (Lipschitzianity controls).
Let $0<t_{1}<t_{2}$. Then there exists $C>0$ independent of $t_{1}$ and $t_{2}$ such that
the following properties hold.
(i)
$\displaystyle\|u(\cdot,t_{2})-w(\cdot)\|_{L^{2}({\mathbb{R}^{N}})}\leq e^{C(t_%
{2}-t_{1})}\|u(\cdot,t_{1})-w(\cdot)\|_{L^{2}({\mathbb{R}^{N}})}.$
(ii)
$\displaystyle\int_{t_{1}}^{t_{2}}\|u(\cdot,s)-w(\cdot)\|_{H^{1}({\mathbb{R}^{N}})}^{2}\,ds\leq Ce^{C(t_{2}-t_{1})}\|u(\cdot,t_{1})-w(\cdot)\|_{L^{2}({\mathbb{R}^{N}})}^{2}.$
Proof.
(i) The proof is based on a standard energy estimate.
We put $\phi(x,t)=u(x,t)-w(x)$.
From (1.2), (1.4) and by the mean value theorem, one has
(2.8)
$$\phi_{t}-(1+2u^{2})\Delta\phi-2w\nabla(u+w)\cdot\nabla\phi-2(u+w)\Delta w\phi-%
2|\nabla u|^{2}\phi+\phi-p\big{(}\kappa u+(1-\kappa)w\big{)}^{p-1}\phi=0,$$
for some $\kappa\in(0,1)$.
Multiplying (2.8) by $\phi$ and integrating it over ${\mathbb{R}^{N}}$, we get
$$\displaystyle\frac{1}{2}\frac{\partial}{\partial t}\|\phi\|_{L^{2}}^{2}$$
$$\displaystyle-\int_{{\mathbb{R}^{N}}}\Big{(}(1+2u^{2})\phi\Delta\phi+2w\phi%
\nabla(u+w)\cdot\nabla\phi+2(u+w)\phi^{2}\Delta w+2\phi^{2}|\nabla u|^{2}\Big{%
)}\,dx$$
$$\displaystyle+\int_{{\mathbb{R}^{N}}}\phi^{2}\,dx-p\int_{{\mathbb{R}^{N}}}\big%
{(}\kappa u+(1-\kappa)w\big{)}^{p-1}\phi^{2}\,dx=0.$$
Integrating by parts, we have
$$\displaystyle-\int_{{\mathbb{R}^{N}}}(1+2u^{2})\phi\Delta\phi\,dx$$
$$\displaystyle=\int_{{\mathbb{R}^{N}}}(1+2u^{2})|\nabla\phi|^{2}+4u\phi\nabla u%
\cdot\nabla\phi\,dx,$$
$$\displaystyle-\int_{{\mathbb{R}^{N}}}2(u+w)\phi^{2}\Delta w\,dx$$
$$\displaystyle=\int_{{\mathbb{R}^{N}}}2\phi^{2}\nabla w\cdot\nabla(u+w)+4(u+w)%
\phi\nabla w\cdot\nabla\phi\,dx.$$
Thus one has
$$\displaystyle\frac{1}{2}\frac{\partial}{\partial t}\|\phi\|_{L^{2}}^{2}$$
$$\displaystyle+\int_{{\mathbb{R}^{N}}}\Big{\{}(1+2u^{2})|\nabla\phi|^{2}+4u\phi%
\nabla u\cdot\nabla\phi-2w\phi\nabla(u+w)\cdot\nabla\phi$$
$$\displaystyle +2\phi^{2}\nabla w\cdot\nabla(u+w)+4(u+w)\phi\nabla w%
\cdot\nabla\phi-2\phi^{2}|\nabla u|^{2}\Big{\}}\,dx$$
$$\displaystyle+\int_{{\mathbb{R}^{N}}}\phi^{2}\,dx-p\int_{{\mathbb{R}^{N}}}\big%
{(}\kappa u+(1-\kappa)w\big{)}^{p-1}\phi^{2}\,dx=0.$$
Since $u$, $w$, $\nabla u$ and $\nabla w$ are bounded, we obtain
$$\frac{1}{2}\frac{\partial}{\partial t}\|\phi\|_{L^{2}}^{2}+\|\phi\|_{H^{1}}^{2%
}\leq C\|\phi\|_{L^{2}}\|\nabla\phi\|_{L^{2}}+C\|\phi\|_{L^{2}}^{2}.$$
Thus by the Young inequality, it follows that
(2.9)
$$\frac{\partial}{\partial t}\|\phi(\cdot,t)\|_{L^{2}}^{2}+\|\phi(\cdot,t)\|_{H^%
{1}}^{2}\leq C\|\phi(\cdot,t)\|_{L^{2}}^{2}.$$
Now let $\zeta(t):=\|\phi(\cdot,t)\|_{L^{2}}^{2}$.
Then one has $\zeta^{\prime}(t)\leq C\zeta(t)$.
By the Gronwall inequality, it follows that
$\zeta(t_{2})\leq e^{C(t_{2}-t_{1})}\zeta(t_{1})$ and hence the claim holds.
(ii) Integrating (2.9) over $[t_{1},t_{2}]$, one has
$$\displaystyle\int_{t_{1}}^{t_{2}}\|\phi(\cdot,s)\|_{H^{1}}^{2}\,ds\leq\|\phi(%
\cdot,t_{1})\|_{L^{2}}^{2}+C\int_{t_{1}}^{t_{2}}\|\phi(\cdot,s)\|_{L^{2}}^{2}%
\,ds.$$
Thus from (i), we get
$$\int_{t_{1}}^{t_{2}}\|\phi(\cdot,s)\|_{H^{1}}^{2}\,ds\leq\Big{(}1+\frac{1}{2C}\Big{)}e^{2C(t_{2}-t_{1})}\|\phi(\cdot,t_{1})\|_{L^{2}}^{2}.$$
This completes the proof.
∎
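As a purely numerical illustration of the Gronwall step $\zeta^{\prime}(t)\leq C\zeta(t)\Rightarrow\zeta(t_{2})\leq e^{C(t_{2}-t_{1})}\zeta(t_{1})$ used in the proof above, the following Python sketch integrates a differential inequality of this form with forward Euler and compares the outcome with the exponential bound; the constant $C$, the initial datum, and the nonnegative slack term are hypothetical choices, not quantities from the paper.

```python
import math

def gronwall_check(C, zeta1, t1, t2, slack, steps=100_000):
    """Integrate zeta'(t) = (C - slack(t)) * zeta(t), so that
    zeta' <= C * zeta pointwise whenever slack >= 0, and return the
    terminal value together with the Gronwall bound
    exp(C*(t2 - t1)) * zeta(t1)."""
    dt = (t2 - t1) / steps
    zeta, t = zeta1, t1
    for _ in range(steps):
        zeta += dt * (C - slack(t)) * zeta  # forward Euler step
        t += dt
    return zeta, math.exp(C * (t2 - t1)) * zeta1

# Example: a nonnegative slack term only improves the bound.
zeta_end, bound = gronwall_check(1.0, 1.0, 0.0, 2.0,
                                 lambda t: 0.5 * (1 + math.sin(t)))
```

As expected, the integrated value stays below the exponential bound, and it would saturate the bound only when the slack term vanishes identically.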
Lemma 2.9 (Further stability estimates).
Let $K>1$ be arbitrarily given and $\{t_{n}\}_{n\in\mathbb{N}}$ be a sequence such that
$t_{n}\to\infty$ as $n\to\infty$.
If $\|u(\cdot,t_{n})-w(\cdot)\|_{L^{\infty}({\mathbb{R}^{N}})}\to 0$
as $n\to\infty$, then the following properties hold.
(i)
$\displaystyle\lim_{n\to\infty}\int_{0}^{K}\|u(\cdot,s+t_{n})-w(\cdot)\|_{H^{1}%
({\mathbb{R}^{N}})}^{2}\,ds=0$.
(ii)
$\displaystyle\lim_{n\to\infty}\|u(\cdot,t+t_{n})-w(\cdot)\|_{L^{\infty}({%
\mathbb{R}^{N}}\times[0,K])}=0.$
Proof.
Arguing as in the proof of Lemma 2.7 (ii),
we may assume that $u(\cdot,t_{n})\rightharpoonup w$ in $H^{1}({\mathbb{R}^{N}})$.
Moreover by the uniform decay condition (1.6), one can show that
$\sup_{n\geq 1}u(x,t_{n})$ decays exponentially at infinity, see Lemma 2.15.
Thus by the exponential decay of $w$ and the compact embedding
$H^{1}({\mathbb{R}^{N}})\hookrightarrow L^{2}_{{\rm loc}}({\mathbb{R}^{N}})$, it follows that
(2.10)
$$\lim_{n\to\infty}\|u(\cdot,t_{n})-w(\cdot)\|_{L^{2}({\mathbb{R}^{N}})}=0.$$
Next applying Lemma 2.8 (ii) with $t_{1}=t_{n}$ and $t_{2}=t_{n}+K$, one has
$$\int_{t_{n}}^{t_{n}+K}\|u(\cdot,s)-w(\cdot)\|_{H^{1}}^{2}\,ds\leq Ce^{CK}\|u(%
\cdot,t_{n})-w(\cdot)\|_{L^{2}}^{2}.$$
Thus from (2.10), the claim holds.
(ii)
We argue as in [17, Theorem 2.5, p. 18].
Let $\phi_{n}(x,t)=u(x,t+t_{n})-w(x)$ and define
$$\displaystyle\tilde{L}(\phi_{n})$$
$$\displaystyle=(1+2u^{2})\Delta\phi_{n}+2w\nabla(u+w)\cdot\nabla\phi_{n}+a(x)%
\phi_{n}-(\phi_{n})_{t},$$
$$\displaystyle a(x)$$
$$\displaystyle=2(u+w)\Delta w+2|\nabla u|^{2}-1+p\big{(}\kappa u+(1-\kappa)w%
\big{)}^{p-1}.$$
Then from (2.8), it follows that $\tilde{L}(\phi_{n})=0$.
We put
(2.11)
$$\|u\|_{L^{\infty}\big{(}{\mathbb{R}^{N}}\times[0,K]\big{)}}+\|\nabla u\|_{L^{%
\infty}\big{(}{\mathbb{R}^{N}}\times[0,K]\big{)}}=M,\ \|a\|_{L^{\infty}({%
\mathbb{R}^{N}})}=A,\ \|\phi_{n}(\cdot,0)\|_{L^{\infty}({\mathbb{R}^{N}})}=B.$$
For $\varepsilon>0$, $R>0$ and $c>0$, we define
$$Z(x,t)=\phi_{n}(x,t)e^{-(A+\varepsilon)t}-B-\frac{M}{R^{2}}(x^{2}+ct),\ |x|%
\leq R,\ t\in[0,K].$$
Then by a direct calculation, using $\Delta(x^{2})=2N$ and $\nabla(x^{2})=2x$, one has
$$\displaystyle(\tilde{L}-A-\varepsilon)Z$$
$$\displaystyle=B\big{(}A+\varepsilon-a(x)\big{)}$$
$$\displaystyle\quad+\frac{M}{R^{2}}\Big{(}c+\big{(}A+\varepsilon-a(x)\big{)}(x^{2}+ct)-2N(1+2u^{2})-4w\nabla(u+w)\cdot x\Big{)}$$
$$\displaystyle=:F(x,t).$$
From (2.11) and the boundedness of $u$, $w$, $\nabla u$, $\nabla w$,
we can choose $c$ large, independently of $\varepsilon$, $R$ and $n\in\mathbb{N}$, so that $F\geq 0$.
Moreover from (2.11), we also have
$$\displaystyle Z(x,t)$$
$$\displaystyle=\phi_{n}e^{-(A+\varepsilon)t}-B-M-\frac{M}{R^{2}}ct\leq 0\quad%
\hbox{on}\ |x|=R,\ t\in[0,K],$$
$$\displaystyle Z(x,0)$$
$$\displaystyle=\phi_{n}(x,0)-B-\frac{M}{R^{2}}x^{2}\leq 0\quad\hbox{for}\ |x|%
\leq R.$$
Thus by applying the Comparison Principle to $\tilde{L}-A-\varepsilon$,
we obtain $Z\leq 0$ for $|x|\leq R$ and $t\in[0,K]$.
Defining
$$z(x,t)=\phi_{n}(x,t)e^{-(A+\varepsilon)t}+B+\frac{M}{R^{2}}(x^{2}+ct)$$
for the same $c>0$, one can see that
$$(\tilde{L}-A-\varepsilon)z=-F\leq 0,\quad z\geq 0\ \hbox{on}\ |x|=R,\ t\in[0,K],\quad z(x,0)\geq 0\ \hbox{for}\ |x|\leq R.$$
Thus by the Comparison Principle, we get $z\geq 0$ and hence
$$|\phi_{n}(x,t)|\leq e^{(A+\varepsilon)t}\left(B+\frac{M}{R^{2}}(x^{2}+ct)%
\right)\quad\hbox{for}\ |x|\leq R,\ t\in[0,K].$$
Since $c$ is independent of $\varepsilon$ and $R$,
we can take $R\to\infty$, $\varepsilon\to 0$ to obtain
$$\|\phi_{n}\|_{L^{\infty}\big{(}{\mathbb{R}^{N}}\times[0,K]\big{)}}\leq Be^{AK}=e^{AK}\|\phi_{n}(\cdot,0)\|_{L^{\infty}({\mathbb{R}^{N}})}.$$
Then by the assumption $\|\phi_{n}(\cdot,0)\|_{L^{\infty}}=\|u(\cdot,t_{n})-w(\cdot)\|_{L^{\infty}}\to
0$, it follows that
$$\|\phi_{n}\|_{L^{\infty}\big{(}{\mathbb{R}^{N}}\times[0,K]\big{)}}\to 0\quad\hbox{as}\ n\to\infty.$$
This completes the proof.
∎
Lemma 2.10 (Further stability estimates).
Let $K>1$. Then there exists $C=C(K)>0$ with
$$\int_{0}^{K}\|u(\cdot,s+\tau+t)-w(\cdot)\|_{H^{1}({\mathbb{R}^{N}})}^{2}\,ds%
\leq C\int_{0}^{K}\|u(\cdot,s+t)-w(\cdot)\|_{H^{1}({\mathbb{R}^{N}})}^{2}\,ds$$
for any $t>0$ and $\tau\in[0,K]$.
Proof.
Although the proof proceeds as in [10, Proposition 4.2],
we will sketch it for the sake of completeness. By the mean value theorem and Schwarz inequality,
there is $s_{0}\in[0,K]$ with
$$\displaystyle\|u(\cdot,t+s_{0})-w(\cdot)\|_{L^{2}}$$
$$\displaystyle\leq\|u(\cdot,t+s_{0})-w(\cdot)\|_{H^{1}}$$
$$\displaystyle\leq C\int_{t}^{t+K}\|u(\cdot,s)-w(\cdot)\|_{H^{1}}\,ds$$
$$\displaystyle\leq C\Big{(}\int_{t}^{t+K}\|u(\cdot,s)-w(\cdot)\|_{H^{1}}^{2}\,%
ds\Big{)}^{1\over 2}.$$
Thus by applying Lemma 2.8 (ii) with $t_{1}=t+s_{0}$ and
$t_{2}=t+s_{0}+2K$, it follows that
$$\displaystyle\int_{t+s_{0}}^{t+s_{0}+2K}\|u(\cdot,s)-w(\cdot)\|_{H^{1}}^{2}\,ds$$
$$\displaystyle\leq Ce^{CK}\|u(\cdot,t+s_{0})-w(\cdot)\|_{L^{2}}^{2}$$
$$\displaystyle\leq Ce^{CK}\int_{t}^{t+K}\|u(\cdot,s)-w(\cdot)\|_{H^{1}}^{2}\,ds.$$
Let $\tau\in[0,K]$. Since $[t+\tau,t+\tau+K]\subset[t,t+2K]\subset[t,t+K]\cup[t+s_{0},t+s_{0}+2K]$, we obtain
$$\displaystyle\int_{t+\tau}^{t+\tau+K}\|u(\cdot,s)-w(\cdot)\|_{H^{1}}^{2}\,ds%
\leq(1+Ce^{CK})\int_{t}^{t+K}\|u(\cdot,s)-w(\cdot)\|_{H^{1}}^{2}\,ds.$$
This completes the proof.
∎
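The covering $[t+\tau,t+\tau+K]\subset[t,t+K]\cup[t+s_{0},t+s_{0}+2K]$ used at the end of the proof, valid for any $\tau,s_{0}\in[0,K]$, can be verified mechanically. The sketch below samples the interval on a grid; the grid size and the parameter values in the example are illustrative only.

```python
def covered(t, tau, s0, K, samples=1000):
    """Check pointwise on a grid that [t+tau, t+tau+K] is contained in
    [t, t+K] union [t+s0, t+s0+2K], as used in the proof of Lemma 2.10."""
    for i in range(samples + 1):
        s = t + tau + K * i / samples
        if not (t <= s <= t + K or t + s0 <= s <= t + s0 + 2 * K):
            return False
    return True
```

Since $s_{0}\leq K\leq\tau+K$ and $\tau+2K\leq s_{0}+2K+K$, every sampled point indeed falls into one of the two intervals.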
Lemma 2.11.
Let $\{z_{n}\}_{n\in\mathbb{N}}\subset{\mathbb{R}^{N}}$ be a sequence with $|z_{n}|\leq 1$ for all $n\in\mathbb{N}$. Then there exists $C>0$ independent of $n\in\mathbb{N}$ such that
the following properties hold.
(i)
$\|w(\cdot+z_{n})-w(\cdot)-\nabla w(\cdot)\cdot z_{n}\|_{H^{1}({\mathbb{R}^{N}}%
)}\leq C|z_{n}|^{2}$.
(ii)
$\displaystyle\|w(\cdot+z_{n})-w(\cdot)\|_{H^{1}({\mathbb{R}^{N}})}\leq C|z_{n}|$.
Proof.
(i) By the Taylor expansion, one has
$$|w(x+z_{n})-w(x)-\nabla w(x)\cdot z_{n}|\leq C|z_{n}|^{2}\sum_{i,j=1}^{N}\Big{%
|}\frac{\partial^{2}w}{\partial x_{i}\partial x_{j}}(x+\kappa_{n}z_{n})\Big{|},$$
for some $\kappa_{n}\in(0,1)$.
From (1.4) and by the exponential decay of $w$, we can show that
$\frac{\partial^{3}w}{\partial x_{i}\partial x_{j}\partial x_{k}}$
also decays exponentially at infinity for all $i,j,k=1,\cdots,N$. Thus, we get
$$\displaystyle\|w(\cdot+z_{n})-w(\cdot)-\nabla w(\cdot)\cdot z_{n}\|_{H^{1}}%
\leq C|z_{n}|^{2}\sum_{i,j=1}^{N}\Big{\|}\frac{\partial^{2}w}{\partial x_{i}%
\partial x_{j}}(\cdot+\kappa_{n}z_{n})\Big{\|}_{H^{1}}\leq C|z_{n}|^{2}.$$
(ii) We differentiate (1.4) with respect to $x_{i}$.
Multiplying by $\frac{\partial w}{\partial x_{i}}$ and integrating over ${\mathbb{R}^{N}}$ yields
$$\displaystyle\int_{{\mathbb{R}^{N}}}\Big{\{}(1+2w^{2})\left|\nabla\frac{%
\partial w}{\partial x_{i}}\right|^{2}+2\left(\frac{\partial w}{\partial x_{i}%
}\right)^{2}|\nabla w|^{2}$$
$$\displaystyle\qquad\quad+8w\frac{\partial w}{\partial x_{i}}\nabla w\cdot%
\nabla\frac{\partial w}{\partial x_{i}}+\left(\frac{\partial w}{\partial x_{i}%
}\right)^{2}-pw^{p-1}\left(\frac{\partial w}{\partial x_{i}}\right)^{2}\Big{\}%
}\,dx=0.$$
Then by the Schwarz inequality, Young inequality and from the boundedness of $w$, $\nabla w$,
we get
(2.12)
$$\int_{{\mathbb{R}^{N}}}\left|\nabla\frac{\partial w}{\partial x_{i}}\right|^{2%
}\,dx\leq C\int_{{\mathbb{R}^{N}}}\left|\frac{\partial w}{\partial x_{i}}%
\right|^{2}\,dx\leq C\|w\|_{H^{1}}^{2}.$$
Thus from (i), (2.12) and $|z_{n}|\leq 1$, it follows that
$$\displaystyle\|w(\cdot+z_{n})-w(\cdot)\|_{H^{1}}$$
$$\displaystyle\leq\|w(\cdot+z_{n})-w(\cdot)-\nabla w(\cdot)\cdot z_{n}\|_{H^{1}%
}+\|\nabla w(\cdot)\cdot z_{n}\|_{H^{1}}$$
$$\displaystyle\leq C|z_{n}|^{2}+C|z_{n}|\leq C|z_{n}|.$$
This completes the proof.
∎
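The quadratic rate in part (i) is easy to observe numerically. In the sketch below the ground state $w$, which has no closed form, is replaced by the hypothetical model profile $e^{-x^{2}}$ in one dimension; halving $|z|$ should divide the discrete $L^{2}$ Taylor remainder by roughly four.

```python
import math

def taylor_l2_error(z, xmin=-10.0, xmax=10.0, n=4001):
    """Discrete L^2 norm of w(x+z) - w(x) - w'(x)*z for the model
    profile w(x) = exp(-x^2), a stand-in for the actual ground state."""
    h = (xmax - xmin) / (n - 1)
    total = 0.0
    for i in range(n):
        x = xmin + i * h
        w = math.exp(-x * x)
        dw = -2.0 * x * w          # w'(x)
        ws = math.exp(-(x + z) ** 2)
        total += (ws - w - dw * z) ** 2 * h
    return math.sqrt(total)

ratio = taylor_l2_error(0.02) / taylor_l2_error(0.01)  # expect about 4
```

The observed ratio is close to $4=2^{2}$, consistent with the $O(|z|^{2})$ bound of Lemma 2.11 (i).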
2.3. Decay estimates
In this subsection, we show uniform estimates for global solutions of (1.2)-(1.3).
The goal of this subsection is to prove the following proposition.
Proposition 2.12 (Uniform decay).
Let $u(x,t)$ be a non-negative, radially non-increasing and globally defined solution of (1.2)-(1.3).
Then the following properties hold.
(i)
$\displaystyle\sup_{t>0}\|u(\cdot,t)\|_{L^{\infty}({\mathbb{R}^{N}})}<\infty$.
(ii)
$\displaystyle\lim_{|x|\to\infty}\sup_{t>0}u(x,t)=0$.
The proof of Proposition 2.12 consists of several lemmas.
First, we prove that $u$ is uniformly bounded near infinity.
Lemma 2.13 (Universal bound near infinity).
Let $u(x,t)$ be a non-negative globally defined solution of (1.2)-(1.3) and assume that $u$ is radially non-increasing with respect to the origin. Then for any
$$K>\Big{(}\frac{p+1}{2}\Big{)}^{1\over p-1},$$
there exists $R_{K}>0$ such that
(2.13)
$$u(x,t)\leq K\quad\hbox{for all}\ |x|\geq R_{K}\ \hbox{and}\ t>0.$$
Proof.
Suppose by contradiction that the claim fails.
Then we find $K_{0}>((p+1)/2)^{1/(p-1)}$ such that for all $R>0$,
$u(x_{R},t_{R})>K_{0}$ for some $|x_{R}|\geq R$ and $t_{R}>0$.
For simplicity, we write $|x_{R}|=\tilde{R}$.
Since $u$ is radially non-increasing, it follows that
(2.14)
$$u(x,t_{R})>K_{0}\quad\hbox{for all $x\in B(0,\tilde{R})$}.$$
We claim that $u(x,t)$ must blow up in finite time. We define a functional $I_{R}$ by
(2.15)
$$I_{R}(u):=\frac{1}{2}\int_{B(0,R)}\big{(}(1+2u^{2})|\nabla u|^{2}+u^{2}\big{)}%
\,dx-\frac{1}{p+1}\int_{B(0,R)}|u|^{p+1}\,dx.$$
First, for sufficiently large $R>1$,
we show that there exists a function $v_{R}\in C_{0}^{\infty}({\mathbb{R}^{N}})$ such that
(2.16)
$$I_{R}(v_{R})<0,\quad v_{R}\leq K_{0}\quad\hbox{in}\ B(0,R),\quad v_{R}=0\quad%
\hbox{on}\ \partial B(0,R).$$
To this aim, let
(2.17)
$$\Big{(}\frac{p+1}{2}\Big{)}^{1\over p-1}<\zeta<K_{0}$$
be arbitrarily given and choose $v_{R}\in C_{0}^{\infty}({\mathbb{R}^{N}})$ so that
$0\leq v_{R}(x)\leq\zeta$ for all $x\in{\mathbb{R}^{N}}$,
$$v_{R}(x)=\zeta\quad\hbox{for}\ |x|\leq R-1,\quad v_{R}(x)=0\quad\hbox{for}\ |x%
|\geq R,\quad\ |\nabla v_{R}(x)|\leq C\zeta.$$
Then we have
$$\displaystyle I_{R}(v_{R})$$
$$\displaystyle\leq\int_{\{R-1\leq|x|\leq R\}}\Big{(}\frac{C^{2}\zeta^{2}}{2}(1+%
2v_{R}^{2})+\frac{v_{R}^{2}}{2}-\frac{v_{R}^{p+1}}{p+1}\Big{)}\,dx+|B(0,R-1)|%
\Big{(}\frac{\zeta^{2}}{2}-\frac{\zeta^{p+1}}{p+1}\Big{)}$$
$$\displaystyle\leq\big{(}R^{N}-(R-1)^{N}\big{)}C^{2}|B(0,1)|\zeta^{2}(1+\zeta^{%
2})+(R-1)^{N}|B(0,1)|\Big{(}\frac{\zeta^{2}}{2}-\frac{\zeta^{p+1}}{p+1}\Big{)}.$$
By (2.17), it follows that $\zeta^{2}/2-\zeta^{p+1}/(p+1)<0$ and hence
$I_{R}(v_{R})\to-\infty$, as $R\to\infty$.
Thus, by taking large $R>1$, we obtain $I_{R}(v_{R})<0$.
Moreover, by the construction, we also have $v_{R}\leq K_{0}$ in $B(0,R)$ and
$v_{R}=0$ on $\partial B(0,R)$. Finally since $\tilde{R}\geq R$,
we can replace $R$ by $\tilde{R}$.
Next we consider the following auxiliary problem:
(2.18)
$$\displaystyle v_{t}=(1+2v^{2})\Delta v+2v|\nabla v|^{2}-v+v^{p}$$
$$\displaystyle\quad\,\,\hbox{in}\ B(0,\tilde{R})\times[t_{R},\infty),$$
$$\displaystyle v=0$$
$$\displaystyle\quad\,\,\hbox{on}\ \partial B(0,\tilde{R})\times[t_{R},\infty),$$
$$\displaystyle v(x,t_{R})=v_{\tilde{R}}(x)$$
$$\displaystyle\quad\,\,\hbox{in}\ B(0,\tilde{R}).$$
We claim that $v(x,t)$ blows up in finite time, by an argument similar to [14].
Indeed by a direct calculation,
$\frac{d}{dt}I_{\tilde{R}}\big{(}v(\cdot,t)\big{)}\leq 0$ for $t\geq t_{R}$.
Next from (2.18), we obtain
$$\displaystyle\frac{\partial}{\partial t}\Bigl{(}\frac{1}{2}\int_{B(0,\tilde{R}%
)}v^{2}(x,t)\,dx\Bigr{)}$$
$$\displaystyle=-\int_{B(0,\tilde{R})}\Big{(}(1+4v^{2})|\nabla v|^{2}+v^{2}-v^{p%
+1}\Big{)}\,dx$$
$$\displaystyle\geq-4I_{\tilde{R}}\big{(}v(\cdot,t)\big{)}+\Big{(}1-\frac{4}{p+1}\Big{)}\int_{B(0,\tilde{R})}v^{p+1}\,dx$$
$$\displaystyle\geq-4I_{\tilde{R}}(v_{\tilde{R}})+\frac{p-3}{p+1}\int_{B(0,%
\tilde{R})}v^{p+1}\,dx.$$
By the Hölder inequality, we also have
$$|B(0,\tilde{R})|^{-\frac{p-1}{2}}\Big{(}\int_{B(0,\tilde{R})}v^{2}\,dx\Big{)}^%
{p+1\over 2}\leq\int_{B(0,\tilde{R})}v^{p+1}\,dx.$$
Thus from $I_{\tilde{R}}(v_{\tilde{R}})<0$ and $p\geq 3$, we obtain
$$\frac{\partial}{\partial t}\Big{(}\int_{B(0,\tilde{R})}v^{2}(x,t)\,dx\Big{)}%
\geq C\Big{(}\int_{B(0,\tilde{R})}v^{2}(x,t)\,dx\Big{)}^{p+1\over 2}\quad\hbox%
{for all}\ t\geq t_{R}.$$
This implies that $v(x,t)$ blows up in finite time.
Now from (2.14) and (2.16), one has
$$L(v)\geq 0\quad\hbox{in}\ B(0,\tilde{R})\times[t_{R},\infty),\quad v\leq u%
\quad\hbox{on}\ \partial B(0,\tilde{R})\times[t_{R},\infty),\quad v(\cdot,t_{R%
})\leq u(\cdot,t_{R})\ \hbox{in}\ B(0,\tilde{R}).$$
Then by Lemma 2.2, it follows that
$u(x,t)\geq v(x,t)$ for all $x\in B(0,\tilde{R})$ and $t\geq t_{R}$.
Thus $u(x,t)$ must blow up in finite time,
contradicting the assumption that $u(x,t)$ is globally defined.
∎
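The proof ends with the differential inequality $\frac{\partial}{\partial t}\int v^{2}\geq C\big(\int v^{2}\big)^{(p+1)/2}$, and any positive solution of $y^{\prime}=Cy^{q}$ with $q>1$ blows up in finite time. A minimal sketch of this elementary fact; the values of $y_{0}$, $C$ and $q$ below are illustrative sample choices.

```python
def blowup_time(y0, C, q):
    """Exact blow-up time of y' = C*y**q, y(0) = y0 > 0, for q > 1:
    separation of variables gives
    y(t) = (y0**(1-q) - C*(q-1)*t)**(-1/(q-1)),
    which diverges as t -> T* = y0**(1-q) / (C*(q-1))."""
    assert q > 1
    return y0 ** (1 - q) / (C * (q - 1))

def y_exact(t, y0, C, q):
    """Explicit solution of y' = C*y**q on [0, T*)."""
    return (y0 ** (1 - q) - C * (q - 1) * t) ** (-1.0 / (q - 1))

T = blowup_time(1.0, 1.0, 2.0)  # here y(t) = 1/(1-t), so T = 1
```

A finite-difference check confirms that the explicit formula indeed solves $y^{\prime}=Cy^{q}$.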
Remark 2.14.
Lemma 2.13 is the only part where the assumption that $u(x,t)$ is radially non-increasing is needed.
We can remove this assumption if we could show that
$$\max_{x\in\partial B(0,R+2)}u(x,t)\leq\inf_{x\in B(0,R)}u(x,t)\quad\hbox{for}%
\ t\in[0,T]\ \hbox{and large}\ R>0.$$
This type of estimate was obtained for porous medium equations; see [4, Proposition 2.1].
However, we do not know whether it holds true for our quasi-linear parabolic problem.
Once we have the uniform boundedness near infinity,
we can get the decay estimate at infinity.
Lemma 2.15 (Exponential decay).
Suppose $N\geq 2$ and let $u(x,t)$ be a non-negative global solution of (1.2)-(1.3)
which satisfies the uniform boundedness property (2.13).
Then there exist $\delta>0$, $C>0$ and $R_{0}>0$ such that
$$\sup_{t>0}|D^{k}u(x,t)|\leq Ce^{-\delta|x|}\quad\hbox{for all}\ |x|\geq R_{0}%
\ \hbox{and}\ |k|\leq 2.$$
Proof.
By standard linear parabolic estimates, it suffices to consider the case $k=0$.
Let $w$ be a positive solution of (1.4). Then $w$ is radially decreasing
and decays exponentially at infinity.
Moreover we claim that $w(0)>((p+1)/2)^{1/(p-1)}$.
Indeed, $w$ satisfies the Pohozaev identity
$$0\leq\frac{N-2}{2N}\int_{{\mathbb{R}^{N}}}(1+2w^{2})|\nabla w|^{2}\,dx=\int_{{%
\mathbb{R}^{N}}}\Big{(}\frac{w^{p+1}}{p+1}-\frac{w^{2}}{2}\Big{)}\,dx.$$
For the proof, see [8, Lemma 3.1].
If the claim fails,
then $w(x)<((p+1)/2)^{1/(p-1)}$ for all $x\in{\mathbb{R}^{N}}\setminus\{0\}$ by the monotonicity of $w$,
which implies
$$\int_{{\mathbb{R}^{N}}}\left(\frac{w^{p+1}}{p+1}-\frac{w^{2}}{2}\right)\,dx<0,$$
which is impossible. Now applying Lemma 2.13 with
$$\Big{(}\frac{p+1}{2}\Big{)}^{1\over p-1}<K<w(0),$$
there exists $R_{0}=R(K)$ such that
(2.19)
$$u(x,t)\leq K\ \hbox{for}\ |x|\geq R_{0}\ \hbox{and}\ t>0.$$
Moreover choosing $R_{0}$ larger if necessary, we may assume
${\rm supp}(u_{0})\subset B(0,R_{0})$. Next, we put
$$Z(x,t):=Z(x)=w(|x|-R_{0}),\quad\text{for $|x|\geq R_{0}$ and $t>0$}.$$
Then there exists $\varepsilon_{0}>0$ such that $Z(x)\geq K$ for $R_{0}\leq|x|\leq R_{0}+\varepsilon_{0}$. From (2.19), we get
$$\displaystyle L(Z)\leq 0$$
$$\displaystyle\quad\hbox{for}\ \ R_{0}\leq|x|\leq R_{0}+\varepsilon_{0}\ \hbox{%
and}\ t>0,$$
$$\displaystyle Z\geq u$$
$$\displaystyle\quad\hbox{for}\ \ |x|=R_{0},R_{0}+\varepsilon_{0}\ \hbox{and}\ t%
>0,$$
$$\displaystyle Z(\cdot,0)\geq u_{0}$$
$$\displaystyle\quad\hbox{for}\ \ R_{0}\leq|x|\leq R_{0}+\varepsilon_{0}.$$
Thus by Lemma 2.2,
we obtain $u(x,t)\leq Z(x)$ for $R_{0}\leq|x|\leq R_{0}+\varepsilon_{0}$ and $t>0$.
Applying the Comparison Principle again, we have
$u(x,t)\leq Z(x)$ for all $|x|\geq R_{0}$ and $t>0$. This completes the proof.
∎
Remark 2.16.
In the proof of Lemma 2.15, our construction of a supersolution $Z$ fails when $N=1$.
In fact in this case, we claim that
$$w(0)=\Big{(}\frac{p+1}{2}\Big{)}^{1\over p-1}.$$
To see this, we multiply the one-dimensional version of equation (1.4) by $w^{\prime}$:
$$(1+2w^{2})w^{\prime\prime}+2w(w^{\prime})^{2}-w+w^{p}=0.$$
Integrating it over $[0,r]$, since $w^{\prime}(0)=0$, we have
$$\frac{1}{2}\big{(}1+2w^{2}(r)\big{)}\big{(}w^{\prime}(r)\big{)}^{2}+\frac{w^{p%
+1}(r)}{p+1}-\frac{w^{2}(r)}{2}=\frac{w^{p+1}(0)}{p+1}-\frac{w^{2}(0)}{2}\ %
\hbox{for}\ r>0.$$
Passing to the limit $r\to\infty$, the claim is proved.
Since there is no gap between $w(0)$ and $((p+1)/2)^{1/(p-1)}$,
we cannot apply Lemma 2.13 for $N=1$.
But if we could replace $((p+1)/2)^{1/(p-1)}$ by 1 in Lemma 2.13,
we could construct a decaying supersolution $Z$ in the same way.
More precisely, instead of (2.13), let us assume that for any $K>1$,
there exists $R_{K}>0$ such that
$$u(x,t)\leq K,\quad\hbox{for all}\ |x|\geq R_{K}\ \hbox{and}\ t>0.$$
Then the same conclusion as Lemma 2.15 holds.
On the other hand, replacing $((p+1)/2)^{1/(p-1)}$ by 1,
our construction of a blow-up subsolution $v$ in the proof of Lemma 2.13 fails.
Thus we need another argument when $N=1$.
We also remark that a construction of blowing up subsolutions for semi-linear problems as in [9] does not work for our problem.
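The first integral used in Remark 2.16 can be verified numerically: integrating the one-dimensional equation $(1+2w^{2})w^{\prime\prime}+2w(w^{\prime})^{2}-w+w^{p}=0$ from $w(0)=((p+1)/2)^{1/(p-1)}$, $w^{\prime}(0)=0$ by a Runge–Kutta scheme, the quantity $\frac{1}{2}(1+2w^{2})(w^{\prime})^{2}+\frac{w^{p+1}}{p+1}-\frac{w^{2}}{2}$ stays at its initial value $0$ along the trajectory. The step size and integration horizon below are illustrative choices.

```python
def energy(w, wp, p):
    """Conserved quantity of (1+2w^2)w'' + 2w(w')^2 - w + w^p = 0."""
    return 0.5 * (1 + 2 * w * w) * wp * wp + w ** (p + 1) / (p + 1) - w * w / 2

def rk4_integrate(p, r_end=1.0, h=1e-3):
    """RK4 integration of the 1-D ODE from w(0) = ((p+1)/2)^(1/(p-1))."""
    w = ((p + 1) / 2) ** (1 / (p - 1))
    wp = 0.0
    def f(w, wp):
        # w'' solved from the ODE
        return wp, (w - w ** p - 2 * w * wp * wp) / (1 + 2 * w * w)
    for _ in range(int(round(r_end / h))):
        k1 = f(w, wp)
        k2 = f(w + h / 2 * k1[0], wp + h / 2 * k1[1])
        k3 = f(w + h / 2 * k2[0], wp + h / 2 * k2[1])
        k4 = f(w + h * k3[0], wp + h * k3[1])
        w += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        wp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return w, wp

w1, wp1 = rk4_integrate(3.0)  # p = 3, so w(0) = sqrt(2) and energy(0) = 0
```

For $p=3$ the initial value is $w(0)=\sqrt{2}$, the energy is exactly $0$ at $r=0$, and it remains numerically zero while $w$ decreases, as claimed.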
Lemma 2.17 (Global existence and energy sign I).
Let $u$ be a global solution of (1.2)-(1.3).
Then $I\big{(}u(\cdot,t)\big{)}\geq 0$ for every $t>0$.
Proof.
We use the concavity method as in [18].
It suffices to show that if $I\big{(}u(\cdot,t_{0})\big{)}<0$ for some $t_{0}>0$,
then $u(x,t)$ must blow up in finite time.
To this end, suppose by contradiction that $I\big{(}u(\cdot,t_{0})\big{)}<0$ but
$u$ is globally defined.
First multiplying (1.2) by $u$ and integrating it over ${\mathbb{R}^{N}}$, one has
$$\int_{{\mathbb{R}^{N}}}uu_{t}\,dx+\int_{{\mathbb{R}^{N}}}\big{(}(1+4u^{2})|%
\nabla u|^{2}+u^{2}\big{)}dx-\int_{{\mathbb{R}^{N}}}|u|^{p+1}\,dx=0.$$
Thus by the definition of $I(u)$, it follows that
$$\frac{p-1}{2}\int_{{\mathbb{R}^{N}}}\big{(}|\nabla u|^{2}+u^{2}\big{)}dx+(p-3)%
\int_{{\mathbb{R}^{N}}}u^{2}|\nabla u|^{2}\,dx=(p+1)I(u)+\int_{{\mathbb{R}^{N}%
}}uu_{t}\,dx.$$
We put
$$M(t):=\frac{1}{2}\int_{t_{0}}^{t}\|u(\cdot,s)\|_{L^{2}({\mathbb{R}^{N}})}^{2}%
\,ds.$$
Then one has $M^{\prime}(t)=\frac{1}{2}\|u(\cdot,t)\|_{L^{2}}^{2}$.
Moreover by Lemma 2.4 and from $p\geq 3$, we also have
$$\displaystyle M^{\prime\prime}(t)$$
$$\displaystyle=\int_{{\mathbb{R}^{N}}}uu_{t}\,dx$$
$$\displaystyle=-(p+1)I\big{(}u(\cdot,t)\big{)}+\frac{p-1}{2}\int_{{\mathbb{R}^{N}}}\big{(}|\nabla u|^{2}+u^{2}\big{)}\,dx+(p-3)\int_{{\mathbb{R}^{N}}}u^{2}|\nabla u|^{2}\,dx$$
$$\displaystyle\geq-(p+1)I\big{(}u(\cdot,t_{0})\big{)}>0\quad\hbox{for}\ t\geq t%
_{0}.$$
This implies that $M^{\prime}(t)\to\infty$ and $M(t)\to\infty$ as $t\to\infty$.
Next by Lemma 2.4, it follows that
$$\int_{t_{0}}^{t}\|u_{t}(\cdot,s)\|_{L^{2}}^{2}\,ds=I\big{(}u(\cdot,t_{0})\big{%
)}-I\big{(}u(\cdot,t)\big{)}<-I\big{(}u(\cdot,t)\big{)},$$
which implies that
$$M^{\prime\prime}(t)\geq-(p+1)I\big{(}u(\cdot,t)\big{)}>(p+1)\int_{t_{0}}^{t}\|%
u_{t}(\cdot,s)\|_{L^{2}}^{2}\,ds.$$
Thus we get
$$\displaystyle M(t)M^{\prime\prime}(t)$$
$$\displaystyle\geq\frac{p+1}{2}\left(\int_{t_{0}}^{t}\|u(\cdot,s)\|_{L^{2}}^{2}%
\,ds\right)\left(\int_{t_{0}}^{t}\|u_{t}(\cdot,s)\|_{L^{2}}^{2}\,ds\right)$$
$$\displaystyle\geq\frac{p+1}{2}\left(\int_{t_{0}}^{t}\int_{{\mathbb{R}^{N}}}u(x,s)u_{t}(x,s)\,dx\,ds\right)^{2}$$
$$\displaystyle=\frac{p+1}{2}\big{(}M^{\prime}(t)-M^{\prime}(t_{0})\big{)}^{2}.$$
Since $M^{\prime}(t)\to\infty$ as $t\to\infty$, there exist $\alpha>0$ and $t_{1}\geq t_{0}$
such that
$$M(t)M^{\prime\prime}(t)\geq(1+\alpha)M^{\prime}(t)^{2}\quad\hbox{for}\ t\geq t_{1}.$$
This shows that $M^{-\alpha}(t)$ is concave on $[t_{1},\infty)$,
contradicting the fact that $M^{-\alpha}(t)\to 0$ as $t\to\infty$.
Thus, the assertion holds.
∎
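For the reader's convenience, the concavity of $M^{-\alpha}$ follows from $M(t)M^{\prime\prime}(t)\geq(1+\alpha)M^{\prime}(t)^{2}$ by the direct computation (recall that $M>0$ and $\alpha>0$):

```latex
\big(M^{-\alpha}\big)''
  = \alpha(\alpha+1)M^{-\alpha-2}(M')^{2}-\alpha M^{-\alpha-1}M''
  = -\alpha M^{-\alpha-2}\big(MM''-(1+\alpha)(M')^{2}\big)\le 0 .
```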
Finally we show the following lemma.
Lemma 2.18 (Global existence and energy sign II).
Let $0<L\leq\infty$.
Let $u$ be a non-negative solution of (1.2)-(1.3)
and assume that $I\big{(}u(\cdot,t)\big{)}\geq 0$ for all $t\in(0,L)$.
Then there exists $C>0$ such that
$$\sup_{t\in(0,L)}\|u(\cdot,t)\|_{L^{\infty}({\mathbb{R}^{N}})}\leq C.$$
Proof.
Suppose that there exists a sequence $\{t_{n}\}_{n\in\mathbb{N}}\subset(0,L)$
converging to $L$ such that
$$M_{n}:=\|u(\cdot,t_{n})\|_{L^{\infty}(\mathbb{R}^{N})}\to\infty.$$
We derive a contradiction by using a blow-up type argument.
Let $\{x_{n}\}_{n\in\mathbb{N}}\subset{\mathbb{R}^{N}}$ be such that
$$\frac{M_{n}}{2}\leq u(x_{n},t_{n})\leq M_{n},$$
and consider the sequence
$$v_{n}(y,\tau):=\frac{1}{M_{n}}u\left(x_{n}+\frac{y}{M_{n}^{p-3\over 2}},t_{n}+%
\frac{\tau}{M_{n}^{p-1}}\right).$$
Then by a direct calculation, one has
$$\frac{\partial v_{n}}{\partial\tau}=\frac{1}{M_{n}^{2}}\Delta v_{n}+v_{n}%
\Delta v_{n}^{2}-\frac{1}{M_{n}^{p-1}}v_{n}+v_{n}^{p}\quad\hbox{in}\ {\mathbb{%
R}^{N}}\times(-M_{n}^{p-1}t_{n},0].$$
Passing to a subsequence and using a diagonal argument as in [13], we have
$$v_{n}\to v\quad\hbox{in}\ C^{2,1}_{{\rm loc}}({\mathbb{R}^{N}}\times(-\infty,0%
]),$$
where $v$ is a non-negative solution of the following parabolic problem
$$\frac{\partial v}{\partial\tau}=v\Delta v^{2}+v^{p}\quad\hbox{in}\ {\mathbb{R}%
^{N}}\times(-\infty,0].$$
Now we claim that $v_{\tau}\equiv 0$.
To this end, we observe by Lemma 2.4 that
$$\int_{0}^{t_{0}}\int_{{\mathbb{R}^{N}}}|u_{t}(x,t)|^{2}\,dx\,dt\leq I\big{(}u(%
\cdot,0)\big{)}-I\big{(}u(\cdot,t_{0})\big{)}\quad\hbox{for any}\ t_{0}>0.$$
Since $I\big{(}u(\cdot,t)\big{)}\geq 0$ for all $t>0$, we have
$$\displaystyle\int_{-\infty}^{0}\int_{\mathbb{R}^{N}}\left|\frac{\partial}{%
\partial\tau}v_{n}(y,\tau)\right|^{2}\,dy\,d\tau$$
$$\displaystyle=\frac{1}{M_{n}^{2p}}\int_{-\infty}^{0}\int_{\mathbb{R}^{N}}\left%
|u_{t}\left(x_{n}+\frac{y}{M_{n}^{p-3\over 2}},t_{n}+\frac{\tau}{M_{n}^{p-1}}%
\right)\right|^{2}\,dy\,d\tau$$
$$\displaystyle\leq M_{n}^{\frac{(N-2)p-3N-2}{2}}\int_{0}^{\infty}\int_{{\mathbb%
{R}^{N}}}|u_{t}(x,t)|^{2}\,dx\,dt$$
$$\displaystyle\leq M_{n}^{\frac{(N-2)p-3N-2}{2}}I\big{(}u(\cdot,0)\big{)}.$$
Since $p<(3N+2)/(N-2)$ and $M_{n}\to\infty$, it follows that $v_{\tau}\equiv 0$.
Now since $v_{\tau}\equiv 0$,
$v$ is a nontrivial, non-negative bounded solution of the following nonlinear elliptic problem
(2.20)
$$-v\Delta v^{2}=v^{p}\quad\hbox{in}\ {\mathbb{R}^{N}}.$$
If $3<p<(3N+2)/(N-2)$, it follows that $v\equiv 0$ by applying the Liouville theorem to $v^{2}$.
This contradicts the fact that $v(0)\geq 1/2$.
On the other hand if $p=3$, it follows that $v^{2}$ is a nontrivial bounded eigenfunction of
$-\Delta$ in ${\mathbb{R}^{N}}$ associated with the eigenvalue $1$. But this is impossible.
Thus in both cases, we obtain a contradiction and hence the proof is complete.
∎
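The exponent bookkeeping in the rescaled energy estimate can be recorded once and for all: the power of $M_{n}$ is $\frac{(N-2)p-3N-2}{2}$, which is negative exactly in the range $p<(3N+2)/(N-2)$ for $N>2$. A quick check; the sample values of $N$ and $p$ are illustrative.

```python
def scaling_exponent(N, p):
    """Exponent of M_n in the rescaled energy estimate of Lemma 2.18:
    ((N-2)*p - 3*N - 2) / 2, negative iff p < (3N+2)/(N-2) for N > 2."""
    return ((N - 2) * p - 3 * N - 2) / 2

def critical_exponent(N):
    """p = (3N+2)/(N-2), where the scaling exponent vanishes (N > 2)."""
    return (3 * N + 2) / (N - 2)
```

For $N=3$ the critical value is $p=11$, and the exponent is negative throughout the admissible range $3\leq p<11$.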
Remark 2.19.
We note that in the proof of Lemma 2.18,
we need the assumption $3\leq p$ to obtain the non-existence of nontrivial, non-negative bounded solutions of (2.20).
We also observe that if we adopt the scaling
$$v_{n}(y,\tau):=\frac{1}{M_{n}}u\Big{(}x_{n}+\frac{y}{M_{n}^{p-1\over 2}},t_{n}%
+\frac{\tau}{M_{n}^{p-1}}\Big{)}$$
as in [9], then we obtain the following rescaled problem
$$\frac{\partial v_{n}}{\partial\tau}=\Delta v_{n}+M_{n}^{2}v_{n}\Delta v_{n}^{2%
}-\frac{1}{M_{n}^{p-1}}v_{n}+v_{n}^{p}\ \hbox{in}\ {\mathbb{R}^{N}}\times(-M_{%
n}^{p-1}t_{n},0].$$
Hence this scaling does not work in our case due to the term $M_{n}^{2}v_{n}\Delta v_{n}^{2}$.
Now we can see that Proposition 2.12 follows from Lemmas 2.3, 2.15, 2.17 and 2.18.
2.4. Some technical results
In this subsection, we prepare some technical lemmas to prove Theorem 1.2.
First we shall need the following result.
Lemma 2.20 ($\omega$-limit).
Let $u$ be a non-negative, bounded and globally defined
solution of (1.2)-(1.3) satisfying the uniform decay condition (1.6).
Then $\{u(\cdot,t_{n})\}$ has a uniformly convergent subsequence in ${\mathbb{R}^{N}}$
for any sequence $\{t_{n}\}_{n\in\mathbb{N}}$ with $t_{n}\to\infty$.
In particular, the $\omega$-limit set $\Omega(u)$ is well-defined.
Furthermore, the set $\{u(\cdot,t_{n})\}$ is relatively compact in $C^{1}({\mathbb{R}^{N}})$.
Proof.
We know that $\|u(\cdot,t)\|_{L^{\infty}({\mathbb{R}^{N}})}$ is uniformly bounded.
Moreover by assumption (1.6), we can show that the function $\sup_{t>0}|D^{k}u(x,t)|$
decays exponentially for $|k|\leq 2$ (cf. the proof of Lemma 2.15).
Applying the Schauder estimate, we also have the uniform boundedness of
$\|\nabla u(\cdot,t)\|_{L^{\infty}({\mathbb{R}^{N}})}$.
Let $\{t_{n}\}_{n\in\mathbb{N}}$ be a sequence such that $t_{n}\to\infty$.
Then by (i) of Lemma 2.7, it follows that $\|u(\cdot,t_{n})\|_{H^{1}}\leq C$.
Passing to a subsequence, we may assume that $u(\cdot,t_{n})\rightharpoonup w$ in $H^{1}({\mathbb{R}^{N}})$ for some $w\in H^{1}({\mathbb{R}^{N}})$.
Then arguing as in Lemma 2.6, one can see that
either $w=0$ or $w$ is a positive solution of (1.4).
In particular, $w$ decays exponentially at infinity.
Arguing as for the proof of (i) of Lemma 2.9, we have
$\|u(\cdot,t_{n})-w(\cdot)\|_{L^{2}}\to 0$.
Let $U$ be any bounded domain.
Then applying higher order regularity theory, we get
$$\|u(\cdot,t_{n})-w(\cdot)\|_{H^{m}(V)}\leq C\|u(\cdot,t_{n})-w(\cdot)\|_{L^{2}%
(U)}$$
for any $V\subset\subset U$ and $m\geq\frac{N}{2}+1$.
By the Sobolev embedding $H^{m}(V)\hookrightarrow C^{0}(V)$,
passing to a subsequence, we have that $u(\cdot,t_{n})\to w$ uniformly on $V$.
Since $U$ is arbitrary and $u(x,t_{n})$ decays uniformly at infinity,
it follows that $u(\cdot,t_{n})\to w(\cdot)$ uniformly in ${\mathbb{R}^{N}}$.
Finally since $m\geq\frac{N}{2}+1$, we have the continuous embedding
$H^{m}_{loc}({\mathbb{R}^{N}})\hookrightarrow C^{1}_{loc}({\mathbb{R}^{N}})$.
Together with the uniform exponential decay of $|D^{k}u(\cdot,t_{n})|$ for $|k|\leq 2$,
passing to a subsequence if necessary,
it follows that $u(\cdot,t_{n})\to w$ in $C^{1}({\mathbb{R}^{N}})$. This completes the proof.
∎
To finish the proof of Theorem 1.2, we have to prove (1.7)
and show that the limit $w\in\Omega(u)$ is independent of the choice of the sequence $\{t_{n}\}$.
To this end, we put
$$\eta(y,t):=\left(\int_{0}^{T}\|u(\cdot,s+t)-w(\cdot+y)\|_{H^{1}({\mathbb{R}^{N%
}})}^{2}\,ds\right)^{1\over 2},$$
where $w$ is a fixed element in the $\omega$-limit set $\Omega(u)$.
First we state the following proposition whose proof will be given later.
Proposition 2.21.
There exist $M>0$ and $T>1$ such that the following properties hold.
For every sequence $\{(y_{n},t_{n})\}_{n\in\mathbb{N}}\subset\mathbb{R}^{N}\times\mathbb{R}^{+}$ satisfying
$|y_{n}|\leq 1$, $t_{n}\to\infty$ and
$$\eta(y_{n},t_{n})=\left(\int_{0}^{T}\|u(\cdot,s+t_{n})-w(\cdot+y_{n})\|_{H^{1}%
}^{2}\,ds\right)^{1\over 2}\to 0,$$
there exist a subsequence $\{(y_{n_{j}},t_{n_{j}})\}$ and
$\{z_{j}\}\subset{\mathbb{R}^{N}}$ such that $|z_{j}|\leq M\eta(y_{n_{j}},t_{n_{j}})$ and
$$\eta\left(z_{j}+y_{n_{j}},t_{n_{j}}+T\right)\leq\frac{1}{2}\eta(y_{n_{j}},t_{n_{j}}).$$
By Proposition 2.21, we obtain the following corollary.
Corollary 2.22 (Uniform stability).
There exist $M>0$, $T>1$, $t_{0}>0$ and $\eta_{0}>0$ such that the following property holds.
For every $(y,t)\in{\mathbb{R}^{N}}\times\mathbb{R}_{+}$ satisfying
$|y|\leq 1$, $t\geq t_{0}$ and $\eta(y,t)\leq\eta_{0}$,
there exists $z\in{\mathbb{R}^{N}}$ such that $|z|\leq M\eta(y,t)$ and
$$\eta(z+y,t+T)\leq\frac{1}{2}\eta(y,t).$$
Proof.
Let $M>0$, $T>1$ be the constants provided in Proposition 2.21.
We assume by contradiction that Corollary 2.22 does not hold.
Then there exists $\{(y_{n},t_{n})\}_{n\in\mathbb{N}}\subset{\mathbb{R}^{N}}\times\mathbb{R}_{+}$ such that
$\eta(y_{n},t_{n})\to 0$, $|y_{n}|\leq 1$, $t_{n}\to\infty$ and
$$\eta(z+y_{n},t_{n}+T)>\frac{1}{2}\eta(y_{n},t_{n})$$
for all $z\in{\mathbb{R}^{N}}$ with $|z|\leq M\eta(y_{n},t_{n})$.
This contradicts Proposition 2.21.
∎
Lemma 2.23.
Let $M>0$, $T>1$ and $t_{0}>0$ be constants provided by Corollary 2.22.
Then there exists $\bar{\eta}>0$ such that the following property holds.
For every $k\in\mathbb{N}$ and $t^{*}>t_{0}$ with $\eta(0,t^{*})\leq\bar{\eta}$,
there exists $\{x_{i}\}_{i=1}^{k}\subset{\mathbb{R}^{N}}$ such that
$|x_{i}|\leq M\eta\big{(}x_{1}+\cdots+x_{i-1},t^{*}+(i-1)T\big{)}$ and
$$\eta\big{(}x_{1}+\cdots+x_{k},t^{*}+kT\big{)}\leq\frac{1}{2}\eta\big{(}x_{1}+%
\cdots+x_{k-1},t^{*}+(k-1)T\big{)}.$$
Here we put $x_{0}=0$.
Proof.
Let $M>0$, $T>1$, $t_{0}>0$ and $\eta_{0}>0$ denote the constants provided by Corollary 2.22. We define
$$\bar{\eta}:=\frac{1}{2}\min\left\{\eta_{0},\frac{1}{2M}\right\}$$
and claim that for each $k\in\mathbb{N}$, there exists $x_{k}\in{\mathbb{R}^{N}}$ such that
(2.21)
$$\left\{\begin{array}[]{l}\displaystyle\eta\big{(}x_{1}+\cdots+x_{k},t^{*}+kT%
\big{)}\leq\frac{1}{2}\eta\big{(}x_{1}+\cdots+x_{k-1},t^{*}+(k-1)T\big{)},\\
|x_{k}|\leq M\eta\big{(}x_{1}+\cdots+x_{k-1},t^{*}+(k-1)T\big{)},\\
|x_{1}+\cdots+x_{k}|\leq 1,\\
\eta\big{(}x_{1}+\cdots+x_{k},t^{*}+kT\big{)}<\bar{\eta}.\end{array}\right.$$
We prove this by induction on $k\in\mathbb{N}$.
Suppose that $\eta(0,t^{*})\leq\bar{\eta}$.
Then by applying Corollary 2.22 with $y:=0$ and $t:=t^{*}$,
there exists $z=:x_{1}\in{\mathbb{R}^{N}}$ such that
$$|x_{1}|\leq M\eta(0,t^{*})\leq M\bar{\eta}\leq\frac{1}{4}\quad\hbox{and}\quad%
\eta(x_{1},t^{*}+T)\leq\frac{1}{2}\eta(0,t^{*})<\bar{\eta}.$$
This implies that (2.21) holds for $k=1$.
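For the record, the bound $M\bar{\eta}\leq\frac{1}{4}$ used above is immediate from the definition of $\bar{\eta}$:
$$M\bar{\eta}=\frac{M}{2}\min\Big\{\eta_{0},\frac{1}{2M}\Big\}\leq\frac{M}{2}\cdot\frac{1}{2M}=\frac{1}{4}.$$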
Next we assume that (2.21) holds for $k\in\mathbb{N}$.
Using Corollary 2.22 with $y:=x_{1}+\cdots+x_{k}$ and $t:=t^{*}+kT$, there exists
$x_{k+1}\in{\mathbb{R}^{N}}$ such that $|x_{k+1}|\leq M\eta(x_{1}+\cdots+x_{k},t^{*}+kT)$ and
(2.22)
$$\eta\big{(}x_{1}+\cdots+x_{k}+x_{k+1},t^{*}+(k+1)T\big{)}\leq\frac{1}{2}\eta%
\big{(}x_{1}+\cdots+x_{k},t^{*}+kT\big{)}.$$
To finish the inductive step, it suffices to show that
$$|x_{1}+\cdots+x_{k}+x_{k+1}|\leq 1\quad\hbox{and}\quad\eta\big{(}x_{1}+\cdots+%
x_{k}+x_{k+1},t^{*}+(k+1)T\big{)}<\bar{\eta}.$$
Now by the induction hypothesis, it follows that
$$\sum_{i=2}^{k+1}|x_{i}|\leq M\sum_{i=2}^{k+1}\eta\big{(}x_{1}+\cdots+x_{i-1},t%
^{*}+(i-1)T\big{)},$$
(2.23)
$$\eta\big{(}x_{1}+\cdots+x_{i-1},t^{*}+(i-1)T\big{)}\leq\frac{1}{2}\eta\big{(}x%
_{1}+\cdots+x_{i-2},t^{*}+(i-2)T\big{)}\leq\frac{1}{2^{i-1}}\eta(0,t^{*}),$$
for every $2\leq i\leq k+1$. Thus one has
$$|x_{1}+\cdots+x_{k}+x_{k+1}|\leq 2M\eta(0,t^{*})\leq 2M\bar{\eta}\leq\frac{1}{%
2}.$$
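Spelling out this step: $|x_{1}|\leq M\eta(0,t^{*})$, and summing the geometric series coming from (2.23) gives
$$|x_{1}+\cdots+x_{k+1}|\leq\sum_{i=1}^{k+1}|x_{i}|\leq M\eta(0,t^{*})+M\sum_{i=2}^{k+1}\frac{1}{2^{i-1}}\,\eta(0,t^{*})\leq 2M\eta(0,t^{*}).$$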
Finally from (2.21) and (2.22), we also have
$$\eta\big{(}x_{1}+\cdots+x_{k}+x_{k+1},t^{*}+(k+1)T\big{)}\leq\frac{1}{2}\bar{%
\eta}.$$
Thus by induction, Lemma 2.23 holds.
∎
Lemma 2.24.
Let $T>1$, $t_{0}>0$ and $\bar{\eta}>0$ be the constants provided by Corollary 2.22 and Lemma 2.23.
Then there exists $\tilde{C}>0$ such that the following property holds.
For every $k\in\mathbb{N}$ and $t^{*}>t_{0}$ with $\eta(0,t^{*})\leq\bar{\eta}$, it follows that
$$\eta(0,t^{*}+kT)\leq\tilde{C}\eta(0,t^{*}).$$
Proof.
Let $M>0$, $T>1$, $t_{0}>0$, $\bar{\eta}>0$ and
$\{x_{i}\}_{i=1}^{k}\subset{\mathbb{R}^{N}}$ be as in Lemma 2.23.
Then by the triangle inequality and (ii) of Lemma 2.11, one has
$$\displaystyle\big{|}\eta(x_{1}+\cdots+x_{k},t^{*}+kT)-\eta(0,t^{*}+kT)\big{|}$$
$$\displaystyle\leq\Biggl{|}\left(\int_{0}^{T}\|u(\cdot,s+t^{*}+kT)-w(\cdot+x_{1%
}+\cdots+x_{k})\|_{H^{1}}^{2}\,ds\right)^{1\over 2}$$
$$\displaystyle\qquad-\left(\int_{0}^{T}\|u(\cdot,s+t^{*}+kT)-w(\cdot)\|_{H^{1}}%
^{2}\,ds\right)^{1\over 2}\Biggr{|}$$
$$\displaystyle\leq\left(\int_{0}^{T}\|w(\cdot+x_{1}+\cdots+x_{k})-w(\cdot)\|_{H%
^{1}}^{2}\,ds\right)^{1\over 2}$$
$$\displaystyle\leq CT^{1\over 2}\sum_{i=1}^{k}|x_{i}|\leq CT^{1\over 2}M\sum_{i%
=1}^{k}\eta\big{(}x_{1}+\cdots+x_{i-1},t^{*}+(i-1)T\big{)}.$$
Thus from (2.23), we obtain
$$\displaystyle\eta(0,t^{*}+kT)$$
$$\displaystyle\leq(1+CT^{1\over 2}M)\sum_{i=1}^{k}\eta\big{(}x_{1}+\cdots+x_{i-%
1},t^{*}+(i-1)T\big{)}$$
$$\displaystyle\leq 2(1+CT^{1\over 2}M)\eta(0,t^{*}).$$
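The last inequality uses the halving property (2.23) and the geometric series:
$$\sum_{i=1}^{k}\eta\big(x_{1}+\cdots+x_{i-1},\,t^{*}+(i-1)T\big)\leq\sum_{i=1}^{k}\frac{1}{2^{i-1}}\,\eta(0,t^{*})\leq 2\,\eta(0,t^{*}).$$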
Taking $\tilde{C}=2(1+CT^{1\over 2}M)$, the claim holds.
∎
We shall now prove Proposition 2.21.
Let $T>1$ be a constant which will be chosen later and suppose that a sequence $\{(y_{n},t_{n})\}_{n\in\mathbb{N}}\subset{\mathbb{R}^{N}}\times\mathbb{R}_{+}$ satisfies $|y_{n}|\leq 1$, $t_{n}\to\infty$ and
$$\eta(y_{n},t_{n})=\left(\int_{0}^{T}\|u(\cdot,s+t_{n})-w(\cdot+y_{n})\|_{H^{1}%
({\mathbb{R}^{N}})}^{2}\,ds\right)^{1\over 2}\to 0.$$
Then passing to a subsequence, we may assume that $y_{n}\to y_{0}$ as $n\to\infty$.
Moreover since $t_{n}\to\infty$ as $n\to\infty$,
we may also assume that $u(\cdot,t_{n})\to\tilde{w}$ uniformly for some $\tilde{w}\in\Omega(u)$.
Thus for any $K>1$, we have by (ii) of Lemma 2.9 that
$$\lim_{n\to\infty}\|u(\cdot,t+t_{n})-\tilde{w}(\cdot)\|_{L^{\infty}({\mathbb{R}%
^{N}}\times[0,K])}=0.$$
On the other hand, it follows that $w(\cdot+y_{n})\to w(\cdot+y_{0})$ in $H^{1}({\mathbb{R}^{N}})$.
Thus from $\eta(y_{n},t_{n})\to 0$, we obtain
(2.24)
$$\lim_{n\to\infty}\int_{0}^{T}\|u(\cdot,s+t_{n})-w(\cdot+y_{0})\|_{H^{1}({%
\mathbb{R}^{N}})}^{2}\,ds=0.$$
This implies that $\tilde{w}(\cdot)=w(\cdot+y_{0})$ and hence
(2.25)
$$\lim_{n\to\infty}\|u(\cdot,t+t_{n})-w(\cdot+y_{0})\|_{L^{\infty}({\mathbb{R}^{%
N}}\times[0,K])}=0.$$
Moreover by Lemma 2.20,
we know that $\{u(\cdot,t+t_{n})\}$ is relatively compact in $C^{1}({\mathbb{R}^{N}})$.
Thus by the uniform convergence of $u(\cdot,t+t_{n})\to w(\cdot+y_{0})$, one also has
(2.26)
$$\lim_{n\to\infty}\|\nabla u(\cdot,t+t_{n})-\nabla w(\cdot+y_{0})\|_{L^{\infty}%
({\mathbb{R}^{N}}\times[0,K])}=0.$$
Hereafter, we write for simplicity
$$\eta_{n}:=\eta(y_{n},t_{n}),\quad u_{n}(x,t):=u(x,t+t_{n}),\quad w_{n}(x):=w(x%
+y_{n}),\quad w_{0}(x):=w(x+y_{0}).$$
We also note that up to translation, $w_{0}$ is radially symmetric with respect to $y_{0}$.
Now we set
$$\phi_{n}(x,t):=\frac{u_{n}(x,t)-w_{n}(x)}{\eta_{n}}.$$
Since
$$\int_{1\over 2}^{1}\|\phi_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds\leq\int_{0}^{T}\|\phi_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds=1,$$
we have $\|\phi_{n}(\cdot,\tau_{n})\|_{H^{1}}\leq 2$
for some $\{\tau_{n}\}\subset[\frac{1}{2},1]$ by the mean value theorem.
Thus, passing to a subsequence, we may assume that
(2.27)
$$\tau_{n}\to\tau_{0}\in\Big{[}\frac{1}{2},1\Big{]}\ \hbox{and}\ \phi_{n}(\cdot,%
\tau_{n})\rightharpoonup\phi_{0}\ \hbox{in}\ H^{1}({\mathbb{R}^{N}}),$$
for some $\phi_{0}\in H^{1}({\mathbb{R}^{N}})$.
Moreover by the compact embedding $H^{1}_{{\rm loc}}({\mathbb{R}^{N}})\hookrightarrow L^{2}_{{\rm loc}}({\mathbb{%
R}^{N}})$,
we have
$$\phi_{n}(\cdot,\tau_{n})\to\phi_{0}\quad\hbox{in}\ L_{{\rm loc}}^{2}({\mathbb{%
R}^{N}})\quad\hbox{and}\quad\|\phi_{0}\|_{L^{2}({\mathbb{R}^{N}})}\leq 2.$$
Lemma 2.25.
Let $K>1$ be given. Then there exists $C>0$ such that
$$\sup_{n\in\mathbb{N}}\int_{0}^{K}\|\phi_{n}(\cdot,s)\|_{H^{1}({\mathbb{R}^{N}}%
)}^{2}\,ds\leq C.$$
Proof.
By applying (ii) of Lemma 2.8 with $t_{1}=\tau_{n}+t_{n}$ and $t_{2}=K+t_{n}$, one has
$$\int_{\tau_{n}}^{K}\|\phi_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds\leq Ce^{2C(K-\tau_{n}%
)}\|\phi_{n}(\cdot,\tau_{n})\|_{L^{2}}^{2}.$$
Since $\tau_{n}\in[\frac{1}{2},1]$ and $\|\phi_{n}(\cdot,\tau_{n})\|_{L^{2}({\mathbb{R}^{N}})}\leq 2$, we get
$$\int_{\tau_{n}}^{K}\|\phi_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds\leq C,$$
where $C>0$ is independent of $n\in\mathbb{N}$.
Since $\int_{0}^{T}\|\phi_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds=1$, we also have
$$\int_{0}^{\tau_{n}}\|\phi_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds\leq\int_{0}^{T}\|\phi_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds=1.$$
Thus the claim holds.
∎
Lemma 2.26 (Convergence to the linearized problem).
Let $K>1$ be arbitrarily given.
Then there exists a subsequence of $\{\phi_{n}\}$, still denoted by $\{\phi_{n}\}$,
such that $\phi_{n}\rightharpoonup\phi$ in $L^{2}([0,K),H^{1}({\mathbb{R}^{N}}))$.
Moreover $\phi\in C((0,\infty),L^{2}({\mathbb{R}^{N}}))$ and satisfies the following linear parabolic problem
(2.28)
$$\left\{\begin{array}[]{ll}\phi_{t}+{\mathcal{L}}_{0}\phi=0&\hbox{in}\ {\mathbb%
{R}^{N}}\times(0,\infty),\\
\phi(x,\tau_{0})=\phi_{0}(x)&\hbox{in}\ {\mathbb{R}^{N}}.\end{array}\right.$$
Here ${\mathcal{L}}_{0}$ is the linearized operator around $w_{0}$, which is defined by
$${\mathcal{L}}_{0}\phi:=-(1+2w_{0}^{2})\Delta\phi-4w_{0}\nabla w_{0}\cdot\nabla%
\phi-(4w_{0}\Delta w_{0}+2|\nabla w_{0}|^{2})\phi+\phi-pw_{0}^{p-1}\phi.$$
Proof.
The weak convergence of $\phi_{n}$ follows by Lemma 2.25.
We show that the weak limit $\phi$ satisfies (2.28).
Now from (1.2) and (1.4) and by the definition of $\phi_{n}$, one has
(2.29)
$$(\phi_{n})_{t}-\Delta\phi_{n}+\phi_{n}-\eta_{n}^{-1}\big{(}u_{n}\Delta u_{n}^{%
2}-w_{n}\Delta w_{n}^{2}\big{)}-\eta_{n}^{-1}(u_{n}^{p}-w_{n}^{p})=0.$$
Fix $\varphi\in C_{0}^{\infty}\big{(}{\mathbb{R}^{N}}\times[0,K]\big{)}$.
Multiplying (2.29) by $\varphi$ and integrating over $[\tau_{n},K]\times{\mathbb{R}^{N}}$, we get
$$\displaystyle\int_{\tau_{n}}^{K}\int_{{\mathbb{R}^{N}}}\Big{(}-\phi_{n}\varphi%
_{t}+\nabla\phi_{n}\cdot\nabla\varphi+\phi_{n}\varphi-\frac{1}{\eta_{n}}\big{(%
}u_{n}\Delta u_{n}^{2}-w_{n}\Delta w_{n}^{2}\big{)}\varphi-\frac{1}{\eta_{n}}(%
u_{n}^{p}-w_{n}^{p})\varphi\Big{)}\,dx\,ds$$
$$\displaystyle=\int_{{\mathbb{R}^{N}}}\phi_{n}(x,\tau_{n})\varphi(x,\tau_{n})\,dx.$$
Integrating by parts, it follows that
$$\displaystyle-\frac{1}{\eta_{n}}\int_{{\mathbb{R}^{N}}}\big{(}u_{n}\Delta u_{n%
}^{2}-w_{n}\Delta w_{n}^{2}\big{)}\varphi\,dx$$
$$\displaystyle=2\int_{{\mathbb{R}^{N}}}(u_{n}\nabla(u_{n}+w_{n})\cdot\nabla\phi%
_{n}+|\nabla w_{n}|^{2}\phi_{n})\varphi\,dx$$
(2.30)
$$\displaystyle+2\int_{{\mathbb{R}^{N}}}\big{(}(u_{n}+w_{n})\phi_{n}\nabla u_{n}%
+w_{n}^{2}\nabla\phi_{n}\big{)}\cdot\nabla\varphi\,dx.$$
Moreover by the mean value theorem, we also have
(2.31)
$$-\frac{1}{\eta_{n}}\int_{{\mathbb{R}^{N}}}(u_{n}^{p}-w_{n}^{p})\varphi\,dx=-p%
\int_{{\mathbb{R}^{N}}}(\kappa_{n}u_{n}+(1-\kappa_{n})w_{n})^{p-1}\phi_{n}%
\varphi\,dx$$
for some $\kappa_{n}\in(0,1)$.
Thus, we obtain
(2.32)
$$\int_{\tau_{0}}^{K}\int_{{\mathbb{R}^{N}}}(-\phi_{n}\varphi_{t}+{\mathcal{L}}_%
{0}\phi_{n}\cdot\varphi)\,dx\,ds-\int_{{\mathbb{R}^{N}}}\phi_{0}(x)\varphi(x,%
\tau_{0})\,dx=\sum_{i=1}^{4}\mathbb{I}_{i}^{n},$$
where we have set
$$\displaystyle\mathbb{I}^{n}_{1}$$
$$\displaystyle:=\int_{\tau_{0}}^{\tau_{n}}\int_{{\mathbb{R}^{N}}}\Big{\{}-\phi_%
{n}\varphi_{t}+\nabla\phi_{n}\cdot\nabla\varphi+\phi_{n}\varphi+2\big{(}(u_{n}%
+w_{n})\phi_{n}\nabla u_{n}+w_{n}^{2}\nabla\phi_{n}\big{)}\cdot\nabla\varphi$$
$$\displaystyle\quad+2(u_{n}\nabla(u_{n}+w_{n})\cdot\nabla\phi_{n}+|\nabla w_{n}%
|^{2}\phi_{n})\varphi-p(\kappa_{n}u_{n}+(1-\kappa_{n})w_{n})^{p-1}\phi_{n}%
\varphi\Big{\}}\,dx\,ds,$$
$$\displaystyle\mathbb{I}^{n}_{2}$$
$$\displaystyle:=\int_{{\mathbb{R}^{N}}}\big{(}\phi_{n}(x,\tau_{n})\varphi(x,%
\tau_{n})-\phi_{0}(x)\varphi(x,\tau_{0})\big{)}\,dx,$$
$$\displaystyle\mathbb{I}^{n}_{3}$$
$$\displaystyle:=\int_{\tau_{0}}^{K}\int_{{\mathbb{R}^{N}}}p\big{(}(\kappa_{n}u_{n}+(1-\kappa_{n})w_{n})^{p-1}-w_{0}^{p-1}\big{)}\phi_{n}\varphi\,dx\,ds,$$
$$\displaystyle\mathbb{I}^{n}_{4}$$
$$\displaystyle:=-\int_{\tau_{0}}^{K}\int_{{\mathbb{R}^{N}}}\Big{\{}2\phi_{n}%
\big{(}(u_{n}+w_{n})\nabla u_{n}-2w_{0}\nabla w_{0}\big{)}\cdot\nabla\varphi+2%
(w_{n}^{2}-w_{0}^{2})\nabla\phi_{n}\cdot\nabla\varphi$$
$$\displaystyle\quad+2\varphi\big{(}u_{n}\nabla(u_{n}+w_{n})-2w_{0}\nabla w_{0}%
\big{)}\cdot\nabla\phi_{n}+2(|\nabla w_{n}|^{2}-|\nabla w_{0}|^{2})\phi_{n}%
\varphi\Big{\}}\,dx\,ds.$$
Now by virtue of Lemma 2.25, as $n\to\infty$ one has
$$\displaystyle|\mathbb{I}_{1}^{n}|$$
$$\displaystyle\leq C\int_{\tau_{0}}^{\tau_{n}}\int_{{\mathbb{R}^{N}}}\Big{(}|%
\phi_{n}|\big{(}|\varphi_{t}|+|\varphi|+|\nabla\varphi|\big{)}+|\nabla\phi_{n}%
|\big{(}|\nabla\varphi|+|\varphi|\big{)}\Big{)}\,dx\,ds$$
$$\displaystyle\leq C\Big{(}\int_{\tau_{0}}^{\tau_{n}}\int_{{\mathbb{R}^{N}}}%
\big{(}|\varphi_{t}|^{2}+|\varphi|^{2}+|\nabla\varphi|^{2}\big{)}dx\,ds\Big{)}%
^{1\over 2}\Big{(}\int_{\tau_{0}}^{\tau_{n}}\int_{{\mathbb{R}^{N}}}\big{(}|%
\nabla\phi_{n}|^{2}+|\phi_{n}|^{2}\big{)}\,dx\,ds\Big{)}^{1\over 2}$$
$$\displaystyle\leq C|\tau_{n}-\tau_{0}|^{1\over 2}\Big{(}\int_{0}^{K}\|\phi_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds\Big{)}^{1\over 2}\leq C|\tau_{n}-\tau_{0}|^{1\over 2}\to 0.$$
Moreover since $\phi_{n}(\cdot,\tau_{n})\to\phi_{0}$ in $L^{2}_{{\rm loc}}({\mathbb{R}^{N}})$,
we also have $|\mathbb{I}_{2}^{n}|\to 0$ as $n\to\infty$.
Next from (2.25) and by the uniform convergence of $w_{n}$ to $w_{0}$, it follows that
$$\lim_{n\to\infty}\Big{(}\sup_{{\mathbb{R}^{N}}\times[0,K]}\big{|}w_{0}^{p-1}-(%
\kappa_{n}u_{n}+(1-\kappa_{n})w_{n})^{p-1}\big{|}\Big{)}=0.$$
Thus by Lemma 2.25, one has $|\mathbb{I}_{3}^{n}|\to 0$.
Similarly, by the uniform convergence of $\nabla u_{n}\to\nabla w_{0}$ and
$\nabla w_{n}\to\nabla w_{0}$ and from (2.24), we also have $|\mathbb{I}_{4}^{n}|\to 0$.
Letting $n\to\infty$ in (2.32), we obtain
$$\int_{\tau_{0}}^{K}\int_{{\mathbb{R}^{N}}}(-\phi\varphi_{t}+{\mathcal{L}}_{0}%
\phi\cdot\varphi)\,dx\,ds-\int_{{\mathbb{R}^{N}}}\phi_{0}(x)\varphi(x,\tau_{0}%
)\,dx=0.$$
This implies that $\phi$ is a weak solution of (2.28).
Then by the linear parabolic theory, it follows that $\phi$ is a classical solution and $\phi\in C\big{(}(0,\infty),L^{2}({\mathbb{R}^{N}})\big{)}$.
∎
Lemma 2.27.
Let $\theta_{n}(x,t):=\phi_{n}(x,t)-\phi(x,t).$ Then the following facts hold.
(i)
Let $K>1$.
Then there exist $n_{1}=n_{1}(K)\in\mathbb{N}$ and a positive constant $\hat{C}$ independent of $n\in\mathbb{N}$ and $K$ such that
$$\sup_{n\geq n_{1}}\int_{1}^{K}\|\theta_{n}(\cdot,s)\|_{H^{1}({\mathbb{R}^{N}})%
}^{2}\,ds\leq\hat{C}.$$
(ii)
For any $\varepsilon>0$, there exist $T_{\varepsilon}>1$ and
$n_{2}=n_{2}(T_{\varepsilon})\in\mathbb{N}$ such that
$$\sup_{n\geq n_{2}}\int_{T_{\varepsilon}}^{2T_{\varepsilon}}\|\theta_{n}(\cdot,%
s)\|_{H^{1}({\mathbb{R}^{N}})}^{2}\,ds<\varepsilon.$$
Proof.
(i) Let $R>0$ be given. First, we claim that
(2.33)
$$\lim_{n\to\infty}\sup_{t\in[\tau_{n},K]}\|\theta_{n}(\cdot,t)\|_{L^{2}\big{(}B%
(0,R)\big{)}}=0.$$
Now from (2.28) and (2.29), one has
$$\displaystyle(\theta_{n})_{t}-\Delta\theta_{n}+\theta_{n}-\frac{1}{\eta_{n}}%
\big{(}u_{n}\Delta u_{n}^{2}-w_{n}\Delta w_{n}^{2}\big{)}-\frac{1}{\eta_{n}}(u%
_{n}^{p}-w_{n}^{p})$$
(2.34)
$$\displaystyle\quad+2w_{0}^{2}\Delta\phi+4w_{0}\nabla w_{0}\cdot\nabla\phi+4w_{%
0}\Delta w_{0}\phi+2|\nabla w_{0}|^{2}\phi+pw_{0}^{p-1}\phi=0.$$
Let $\xi\in C_{0}^{\infty}({\mathbb{R}^{N}})$ be a cut-off function satisfying $\xi\equiv 1$ on $B(0,R)$.
We multiply (2.34) by $\xi^{2}\theta_{n}$ and integrate it over ${\mathbb{R}^{N}}$.
Then, from (2.30), (2.31) and integration by parts, we get
(2.35)
$$\int_{{\mathbb{R}^{N}}}\Big{(}\frac{1}{2}\frac{\partial}{\partial t}(\xi^{2}%
\theta_{n}^{2})+(1+2w_{0}^{2})|\nabla\theta_{n}|^{2}\xi^{2}+\theta_{n}^{2}\xi^%
{2}+2\xi\theta_{n}\nabla\theta_{n}\cdot\nabla\xi\Big{)}\,dx=-\sum_{i=1}^{5}%
\mathbb{I}_{i}^{n},$$
where we have set
$$\displaystyle\mathbb{I}^{n}_{1}$$
$$\displaystyle:=\int_{{\mathbb{R}^{N}}}\Big{(}2u_{n}\nabla(u_{n}+w_{n})\cdot%
\nabla\phi_{n}-4w_{0}\nabla w_{0}\cdot\nabla\phi\Big{)}\xi^{2}\theta_{n}\,dx,$$
$$\displaystyle\mathbb{I}^{n}_{2}$$
$$\displaystyle:=\int_{{\mathbb{R}^{N}}}\big{(}2|\nabla w_{n}|^{2}\phi_{n}-2|%
\nabla w_{0}|^{2}\phi\Big{)}\xi^{2}\theta_{n}\,dx,$$
$$\displaystyle\mathbb{I}^{n}_{3}$$
$$\displaystyle:=\int_{{\mathbb{R}^{N}}}\Big{(}2(u_{n}+w_{n})\phi_{n}\nabla u_{n%
}-4w_{0}\phi\nabla w_{0}\Big{)}\cdot\nabla(\xi^{2}\theta_{n})\,dx,$$
$$\displaystyle\mathbb{I}^{n}_{4}$$
$$\displaystyle:=\int_{{\mathbb{R}^{N}}}\Big{(}2(w_{n}^{2}-w_{0}^{2})\xi^{2}%
\nabla\phi_{n}\cdot\nabla\theta_{n}+2(w_{n}^{2}\nabla\phi_{n}-w_{0}^{2}\nabla%
\phi)\cdot\nabla(\xi^{2})\theta_{n}\Big{)}\,dx,$$
$$\displaystyle\mathbb{I}^{n}_{5}$$
$$\displaystyle:=\int_{{\mathbb{R}^{N}}}p\Big{(}w_{0}^{p-1}\phi-(\kappa_{n}u_{n}%
+(1-\kappa_{n})w_{n})^{p-1}\phi_{n}\Big{)}\xi^{2}\,\theta_{n}\,dx.$$
Now by the Schwarz and the Young inequalities, it follows that
$$2\int_{{\mathbb{R}^{N}}}\xi\theta_{n}\nabla\theta_{n}\cdot\nabla\xi\,dx\geq-%
\frac{1}{2}\int_{{\mathbb{R}^{N}}}|\nabla\theta_{n}|^{2}|\xi|^{2}\,dx-2\int_{{%
\mathbb{R}^{N}}}|\theta_{n}|^{2}|\nabla\xi|^{2}\,dx.$$
Next by the Schwarz inequality, one has
$$\displaystyle|\mathbb{I}_{1}^{n}|$$
$$\displaystyle\leq 2\|u_{n}\nabla(u_{n}+w_{n})\|_{L^{\infty}({\mathbb{R}^{N}}%
\times[0,K])}\|\xi\nabla\theta_{n}\|_{L^{2}}\|\xi\theta_{n}\|_{L^{2}}$$
$$\displaystyle\quad+2\|u_{n}\nabla(u_{n}+w_{n})-2w_{0}\nabla w_{0}\|_{L^{\infty%
}({\mathbb{R}^{N}}\times[0,K])}\|\xi\nabla\phi\|_{L^{2}}\|\xi\theta_{n}\|_{L^{%
2}}.$$
Similarly we have
$$\displaystyle|\mathbb{I}_{2}^{n}|$$
$$\displaystyle\leq 2\left\||\nabla w_{n}|^{2}-|\nabla w_{0}|^{2}\right\|_{L^{%
\infty}}\|\xi\theta_{n}\|_{L^{2}}\|\phi_{n}\|_{L^{2}}+2\|\nabla w_{0}\|_{L^{%
\infty}}^{2}\|\xi\theta_{n}\|_{L^{2}}^{2},$$
$$\displaystyle|\mathbb{I}_{3}^{n}|$$
$$\displaystyle\leq 2\|(u_{n}+w_{n})\nabla u_{n}-2w_{0}\nabla w_{0}\|_{L^{\infty%
}}\|\phi_{n}\|_{L^{2}}\|\xi\nabla\theta_{n}\|_{L^{2}}$$
$$\displaystyle\quad+4\|(u_{n}+w_{n})\nabla u_{n}-2w_{0}\nabla w_{0}\|_{L^{%
\infty}}\|\xi\theta_{n}\|_{L^{2}}\|\phi_{n}\nabla\xi\|_{L^{2}},$$
$$\displaystyle\quad+4\|w_{0}\nabla w_{0}\|_{L^{\infty}}\|\xi\theta_{n}\|_{L^{2}%
}\|\xi\nabla\theta_{n}\|_{L^{2}}+8\|w_{0}\nabla w_{0}\|_{L^{\infty}}\|\xi%
\theta_{n}\|_{L^{2}}\|\theta_{n}\nabla\xi\|_{L^{2}}$$
$$\displaystyle|\mathbb{I}_{4}^{n}|$$
$$\displaystyle\leq 2\|w_{n}^{2}-w_{0}^{2}\|_{L^{\infty}}\|\xi\nabla\phi\|_{L^{2%
}}\|\xi\nabla\theta_{n}\|_{L^{2}}+2\|w_{n}^{2}-w_{0}^{2}\|_{L^{\infty}}\|\xi%
\nabla\theta_{n}\|_{L^{2}}^{2}$$
$$\displaystyle\quad+4\|w_{n}^{2}-w_{0}^{2}\|_{L^{\infty}}\|\xi\nabla\phi\|_{L^{%
2}}\|\theta_{n}\nabla\xi\|_{L^{2}}+4\|w_{n}\|_{L^{\infty}}^{2}\|\xi\nabla%
\theta_{n}\|_{L^{2}}\|\theta_{n}\nabla\xi\|_{L^{2}},$$
$$\displaystyle|\mathbb{I}_{5}^{n}|$$
$$\displaystyle\leq p\|w_{0}^{p-1}-(\kappa_{n}u_{n}+(1-\kappa_{n})w_{n})^{p-1}\|%
_{L^{\infty}}\|\phi_{n}\|_{L^{2}}\|\xi\theta_{n}\|_{L^{2}}+p\|w_{0}\|_{L^{%
\infty}}^{p-1}\|\xi\theta_{n}\|_{L^{2}}^{2}.$$
Next applying (i) of Lemma 2.8 with $t_{1}=\tau_{n}+t_{n}$ and $t_{2}=t+t_{n}$, we get
$$\|\phi_{n}(\cdot,t)\|_{L^{2}}\leq e^{C(t-\tau_{n})}\|\phi_{n}(\cdot,\tau_{n})%
\|_{L^{2}}\leq Ce^{CK}\quad\hbox{for}\ t\in[\tau_{n},K]$$
and hence $\|\theta_{n}(\cdot,t)\|_{L^{2}}\leq C$.
Thus from (2.35), the uniform decay of $u_{n}$, $w_{n}$, $\nabla u_{n}$, $\nabla w_{n}$ and by the Young inequality, we obtain
$$\displaystyle\frac{\partial}{\partial t}\|\xi\theta_{n}\|_{L^{2}}^{2}+\int_{{%
\mathbb{R}^{N}}}(|\nabla\theta_{n}|^{2}+|\theta_{n}|^{2})|\xi|^{2}\,dx$$
$$\displaystyle\leq C\|\xi\theta_{n}\|_{L^{2}}^{2}+C\|\theta_{n}\nabla\xi\|_{L^{%
2}}^{2}$$
$$\displaystyle\quad+2\|w_{n}^{2}-w_{0}^{2}\|_{L^{\infty}}\|\xi\nabla\theta_{n}%
\|_{L^{2}}^{2}+C\|\theta_{n}\nabla\xi\|_{L^{2}}+h_{n},$$
where $C$ and $h_{n}$ are positive constants with $h_{n}\to 0$.
Now let $\varepsilon>0$.
We choose $\xi$ so that
$$C\|\theta_{n}\nabla\xi\|_{L^{2}}^{2}+C\|\theta_{n}\nabla\xi\|_{L^{2}}\leq C%
\sup_{{\mathbb{R}^{N}}}|\nabla\xi|(1+|\nabla\xi|)<\frac{\varepsilon}{2}.$$
Next we take large $n_{0}\in\mathbb{N}$ so that
$$2\|w_{n}^{2}-w_{0}^{2}\|_{L^{\infty}}\leq\frac{1}{2}\ \ \hbox{and}\ \ h_{n}<%
\frac{\varepsilon}{2}\ \hbox{for}\ n\geq n_{0}.$$
Then we obtain
(2.36)
$$\frac{\partial}{\partial t}\|\xi\theta_{n}(\cdot,t)\|_{L^{2}}^{2}\leq C\|\xi%
\theta_{n}(\cdot,t)\|_{L^{2}}^{2}+\varepsilon.$$
Let $\zeta_{n}(t):=\|\xi\theta_{n}(\cdot,t)\|_{L^{2}}^{2}$.
From (2.36), it follows that $\zeta_{n}^{\prime}\leq C\zeta_{n}+\varepsilon$.
Thus by the Gronwall inequality, one has
$$\zeta_{n}(t)\leq e^{C(t-\tau_{n})}\zeta_{n}(\tau_{n})+e^{Ct}\int_{\tau_{n}}^{t%
}\varepsilon e^{-Cs}\,ds\leq e^{CK}\left(\zeta_{n}(\tau_{n})+\frac{\varepsilon%
}{C}\right)\ \hbox{for}\ t\in[\tau_{n},K].$$
Since $\phi_{n}(\cdot,\tau_{n})\to\phi(\cdot,\tau_{0})$ in $L^{2}_{{\rm loc}}({\mathbb{R}^{N}})$,
we have $\zeta_{n}(\tau_{n})=\|\xi\theta_{n}(\cdot,\tau_{n})\|_{L^{2}}^{2}\to 0$. Thus,
$$\limsup_{n\to\infty}\Big{(}\sup_{t\in[\tau_{n},K]}\|\theta_{n}(\cdot,t)\|_{L^{2}\big{(}B(0,R)\big{)}}\Big{)}\leq\frac{\varepsilon}{C}e^{CK}.$$
Since $\varepsilon>0$ is arbitrary, (2.33) holds.
Next we show that $\int_{1}^{K}\|\theta_{n}(\cdot,s)\|_{H^{1}({\mathbb{R}^{N}})}^{2}\,ds\leq\hat{C}$.
To this aim, we multiply (2.34) by $\theta_{n}$ and integrate it over ${\mathbb{R}^{N}}$.
Then arguing as above, one has
$$\int_{{\mathbb{R}^{N}}}\Big{(}\frac{1}{2}\frac{\partial}{\partial t}(\theta_{n%
})^{2}+(1+2w_{0}^{2})|\nabla\theta_{n}|^{2}+|\theta_{n}|^{2}\Big{)}\,dx=-\sum_%
{i=1}^{4}\mathbb{J}^{n}_{i},$$
where we have set
$$\displaystyle\mathbb{J}^{n}_{1}$$
$$\displaystyle:=\int_{{\mathbb{R}^{N}}}\Big{(}2u_{n}\nabla(u_{n}+w_{n})\cdot%
\nabla\phi_{n}-4w_{0}\nabla w_{0}\cdot\nabla\phi\Big{)}\theta_{n}\,dx,$$
$$\displaystyle\mathbb{J}^{n}_{2}$$
$$\displaystyle:=\int_{{\mathbb{R}^{N}}}\Big{(}2(u_{n}+w_{n})\phi_{n}\nabla u_{n%
}-4w_{0}\phi\nabla w_{0}\Big{)}\cdot\nabla\theta_{n}\,dx,$$
$$\displaystyle\mathbb{J}^{n}_{3}$$
$$\displaystyle:=\int_{{\mathbb{R}^{N}}}(2|\nabla w_{n}|^{2}\phi_{n}-2|\nabla w_%
{0}|^{2}\phi)\theta_{n}+2(w_{n}^{2}-w_{0}^{2})\nabla\phi_{n}\cdot\nabla\theta_%
{n}\,dx,$$
$$\displaystyle\mathbb{J}^{n}_{4}$$
$$\displaystyle:=\int_{{\mathbb{R}^{N}}}p\Big{(}w_{0}^{p-1}\phi-(\kappa_{n}u_{n}%
+(1-\kappa_{n})w_{n})^{p-1}\phi_{n}\Big{)}\theta_{n}\,dx.$$
Now, we fix $\delta>0$ arbitrarily.
By the Young inequality, it follows that
$$\displaystyle|\mathbb{J}_{1}^{n}|$$
$$\displaystyle\leq 2\int_{{\mathbb{R}^{N}}}|u_{n}\nabla(u_{n}+w_{n})||\theta_{n%
}||\nabla\theta_{n}|+|u_{n}\nabla(u_{n}+w_{n})-2w_{0}\nabla w_{0}||\nabla\phi|%
|\theta_{n}|\,dx$$
$$\displaystyle\leq\frac{1}{8}\|\nabla\theta_{n}\|_{L^{2}}^{2}+C\int_{{\mathbb{R%
}^{N}}}|u_{n}\nabla(u_{n}+w_{n})|^{2}|\theta_{n}|^{2}\,dx$$
$$\displaystyle\quad+\delta\|\theta_{n}\|_{L^{2}}^{2}+C_{\delta}\|u_{n}\nabla(u_%
{n}+w_{n})-2w_{0}\nabla w_{0}\|_{L^{\infty}}^{2}\|\nabla\phi\|_{L^{2}}^{2}$$
$$\displaystyle\leq\frac{1}{8}\|\nabla\theta_{n}\|_{L^{2}}^{2}+C\sup_{|x|\geq R}%
|u_{n}(x,t)|\int_{B^{c}(0,R)}|\theta_{n}|^{2}\,dx+C\int_{B(0,R)}|\theta_{n}|^{%
2}\,dx$$
$$\displaystyle\quad+\delta\|\theta_{n}\|_{L^{2}}^{2}+C_{\delta}\|u_{n}\nabla(u_%
{n}+w_{n})-2w_{0}\nabla w_{0}\|_{L^{\infty}}^{2}\|\nabla\phi\|_{L^{2}}^{2}.$$
From (1.6), there exists $R_{\delta}>0$ such that
$$C\sup_{|x|\geq R_{\delta}}|u_{n}(x,t)|<\delta,$$
for all $n\in\mathbb{N}$ and $t\in[1,K]$.
Thus we obtain
$$|\mathbb{J}_{1}^{n}|\leq\frac{1}{8}\|\nabla\theta_{n}\|_{L^{2}}^{2}+2\delta\|%
\theta_{n}\|_{L^{2}}^{2}+C_{\delta}\int_{B(0,R_{\delta})}|\theta_{n}|^{2}\,dx+%
C_{\delta}\hat{h}_{n},$$
where $C_{\delta}$ is a positive constant independent of $n\in\mathbb{N}$ and $K$,
and $\hat{h}_{n}$ is a positive constant satisfying $\hat{h}_{n}\to 0$ as $n\to\infty$.
Estimating $\mathbb{J}_{2}^{n},\mathbb{J}_{3}^{n},\mathbb{J}_{4}^{n}$ similarly, we have
$$\displaystyle\frac{\partial}{\partial t}\|\theta_{n}\|_{L^{2}}^{2}+\|\theta_{n%
}\|_{H^{1}}^{2}$$
$$\displaystyle\leq\left(\frac{1}{2}+\|w_{n}^{2}-w_{0}^{2}\|_{L^{\infty}}\right)%
\|\nabla\theta_{n}\|_{L^{2}}^{2}$$
$$\displaystyle\quad+5\delta\|\theta_{n}\|_{L^{2}}^{2}+C_{\delta}\int_{B(0,R_{%
\delta})}|\theta_{n}|^{2}\,dx+C_{\delta}\hat{h}_{n}.$$
Now we choose $\delta=1/10$.
Taking $n\in\mathbb{N}$ larger if necessary,
we have $\|w_{n}^{2}-w_{0}^{2}\|_{L^{\infty}}\leq 1/4$.
Then we obtain
(2.37)
$$\frac{\partial}{\partial t}\|\theta_{n}(\cdot,t)\|_{L^{2}}^{2}+\|\theta_{n}(%
\cdot,t)\|_{H^{1}}^{2}\leq C_{\delta}\int_{B(0,R_{\delta})}|\theta_{n}(x,t)|^{%
2}\,dx+C_{\delta}\hat{h}_{n}.$$
Integrating (2.37) over $[\tau_{n},K]$, we get
$$\displaystyle\int_{\tau_{n}}^{K}\|\theta_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds\leq\|%
\theta_{n}(\cdot,\tau_{n})\|_{L^{2}}^{2}+C_{\delta}\int_{\tau_{n}}^{K}\int_{B(%
0,R_{\delta})}|\theta_{n}(x,s)|^{2}\,dx\,ds+C_{\delta}\hat{h}_{n}(K-\tau_{n}).$$
From (2.33) and $\hat{h}_{n}\to 0$, there exists $n_{1}=n_{1}(K)\in\mathbb{N}$ such that
$$C_{\delta}\int_{\tau_{n}}^{K}\int_{B(0,R_{\delta})}|\theta_{n}(x,s)|^{2}\,dx\,%
ds\leq 2\quad\hbox{and}\quad C_{\delta}\hat{h}_{n}(K-\tau_{n})\leq 2\quad\hbox%
{for}\ n\geq n_{1}.$$
Moreover from $\|\phi_{n}(\cdot,\tau_{n})\|_{L^{2}}\leq 2$,
$\|\phi(\cdot,\tau_{0})\|_{L^{2}}\leq 2$ and by the continuity of $\phi$, we also have
$$\displaystyle\sup_{n\geq n_{1}}\|\theta_{n}(\cdot,\tau_{n})\|_{L^{2}}^{2}\leq%
\big{(}\|\phi_{n}(\cdot,\tau_{n})\|_{L^{2}}+\|\phi(\cdot,\tau_{n})-\phi(\cdot,%
\tau_{0})\|_{L^{2}}+\|\phi(\cdot,\tau_{0})\|_{L^{2}}\big{)}^{2}\leq 36.$$
Since $\tau_{n}\leq 1$, we obtain
$$\sup_{n\geq n_{1}}\int_{1}^{K}\|\theta_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds\leq 40.$$
This completes the proof of (i).
(ii) We fix $\varepsilon>0$ arbitrarily and let $T>1$.
First we observe from (i) that
$$\int_{1}^{T}\|\theta_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds\leq\hat{C}\quad\hbox{for}%
\ n\geq n_{1}(T).$$
Thus by the mean value theorem, there exists $s_{n}\in[1,T]$ such that
$\|\theta_{n}(\cdot,s_{n})\|_{L^{2}}^{2}\leq\frac{\hat{C}}{T-1}$.
Next we integrate (2.37) over $[s_{n},2T]$.
Then from $\tau_{n}\leq 1\leq s_{n}\leq T$, it follows that
$$\displaystyle\int_{T}^{2T}\|\theta_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds$$
$$\displaystyle\leq\int_{s_{n}}^{2T}\|\theta_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds$$
$$\displaystyle\leq\|\theta_{n}(\cdot,s_{n})\|_{L^{2}}^{2}+C\int_{s_{n}}^{2T}%
\int_{B(0,R_{\delta})}|\theta_{n}|^{2}\,dx\,ds$$
$$\displaystyle\leq\frac{\hat{C}}{T-1}+C\int_{\tau_{n}}^{2T}\int_{B(0,R_{\delta}%
)}|\theta_{n}|^{2}\,dx\,ds+C\hat{h}_{n}(2T-s_{n}).$$
Now we choose $T_{\varepsilon}>1$ so that
$\frac{\hat{C}}{T_{\varepsilon}-1}<\frac{\varepsilon}{3}$.
Next from formula (2.33) and $\hat{h}_{n}\to 0$,
we can take large $n_{2}=n_{2}(T_{\varepsilon})\in\mathbb{N}$ so that
$$C\int_{\tau_{n}}^{2T_{\varepsilon}}\int_{B(0,R_{\delta})}|\theta_{n}|^{2}\,dx%
\,ds<\frac{\varepsilon}{3}\quad\hbox{and}\quad C\hat{h}_{n}(2T_{\varepsilon}-s%
_{n})<\frac{\varepsilon}{3}\quad\hbox{for}\ n\geq n_{2}.$$
Then it follows that
$$\sup_{n\geq n_{2}}\int_{T_{\varepsilon}}^{2T_{\varepsilon}}\|\theta_{n}(\cdot,%
s)\|_{H^{1}}^{2}\,ds<\varepsilon$$
and hence the proof is complete.
∎
Now we consider the following eigenvalue problem
$${\mathcal{L}}_{0}\psi=\mu\psi,\quad\psi\in L^{2}({\mathbb{R}^{N}})\quad\hbox{%
and}\quad\psi(x)\to 0\ \hbox{as}\ |x|\to\infty.$$
Then the first eigenvalue $\mu_{1}$ is negative.
We denote by $\psi_{1}$ the associated eigenfunction with $\|\psi_{1}\|_{L^{2}({\mathbb{R}^{N}})}=1$.
Moreover we know that the second eigenvalue $\mu_{2}$ is zero and the corresponding eigenspace is spanned by $\{\frac{\partial w_{0}}{\partial x_{i}}\}_{i=1}^{N}$ (see [3, Remark 4.10]).
Let $\tau_{0}\in[\frac{1}{2},1]$ be as in (2.27) and decompose
(2.38)
$$\phi_{0}(x)=\phi(x,\tau_{0})=C_{0}e^{-\mu_{1}\tau_{0}}\psi_{1}(x)+\sum_{i=1}^{%
N}C_{i}\frac{\partial w_{0}}{\partial x_{i}}(x)+\tilde{\theta}(x,\tau_{0}),$$
where $C_{0},C_{i}\in\mathbb{R}$ and $\psi_{1}$, $\frac{\partial w_{0}}{\partial x_{i}}$, $\tilde{\theta}$ are mutually orthogonal in $L^{2}({\mathbb{R}^{N}})$.
Finally we set
(2.39)
$$\tilde{\theta}(x,t):=\phi(x,t)-C_{0}e^{-\mu_{1}t}\psi_{1}(x)-\sum_{i=1}^{N}C_{%
i}\frac{\partial w_{0}}{\partial x_{i}}(x).$$
Then by direct calculations, one can see that $\tilde{\theta}$ satisfies
(2.40)
$$\tilde{\theta}_{t}+{\mathcal{L}}_{0}\tilde{\theta}=0\quad\hbox{in}\ {\mathbb{R%
}^{N}}\times(0,\infty).$$
Moreover by the definition of $\tilde{\theta}$, we also have
(2.41)
$$\int_{{\mathbb{R}^{N}}}\tilde{\theta}(\cdot,\tau_{0})\psi_{1}\,dx=\int_{{%
\mathbb{R}^{N}}}\tilde{\theta}(\cdot,\tau_{0})\frac{\partial w_{0}}{\partial x%
_{i}}\,dx=0\quad\hbox{for}\ i=1,\cdots,N.$$
In the next result, we use in a crucial way the non-degeneracy of stationary solutions.
Lemma 2.28 (Non-degeneracy and stability).
There exist $\alpha>0$ and $\tilde{T}>1$ such that
$$\int_{T}^{2T}\|\tilde{\theta}(\cdot,s)\|_{H^{1}({\mathbb{R}^{N}})}^{2}\,ds\leq e^{-\alpha T}\quad\hbox{for all}\ \ T\geq\tilde{T}.$$
Proof.
First we claim that
(2.42)
$$\int_{{\mathbb{R}^{N}}}\tilde{\theta}(\cdot,t)\psi_{1}\,dx=\int_{{\mathbb{R}^{%
N}}}\tilde{\theta}(\cdot,t)\frac{\partial w_{0}}{\partial x_{i}}\,dx=0\quad%
\hbox{for}\ i=1,\cdots,N\ \hbox{and}\ t\geq\tau_{0}.$$
To this aim, we put $\rho(t):=\int_{{\mathbb{R}^{N}}}\tilde{\theta}(\cdot,t)\psi_{1}\,dx$.
Then from (2.40), one has
$$\rho^{\prime}(t)=(\tilde{\theta}_{t},\psi_{1})_{L^{2}}=-({\mathcal{L}}_{0}\tilde{\theta},\psi_{1})_{L^{2}}=-(\tilde{\theta},{\mathcal{L}}_{0}\psi_{1})_{L^{2}}=-\mu_{1}(\tilde{\theta},\psi_{1})_{L^{2}}=-\mu_{1}\rho.$$
Thus from (2.41), it follows that $\rho(t)=\rho(\tau_{0})e^{-\mu_{1}(t-\tau_{0})}=0$ for all $t\geq\tau_{0}$.
We can prove the second equality in a similar way.
Next we define
$$\bar{\mu}:=\inf\Big{\{}({\mathcal{L}}_{0}\psi,\psi)_{L^{2}}:\ \psi\in H^{1}({\mathbb{R}^{N}}),\ \|\psi\|_{L^{2}}=1,\ (\psi,\psi_{1})_{L^{2}}=\big{(}\psi,\frac{\partial w_{0}}{\partial x_{i}}\big{)}_{L^{2}}=0\ \hbox{for}\ i=1,\cdots,N\Big{\}}$$
and assume, for the moment, that $\bar{\mu}>0$; this will be verified at the end of the proof.
Then from (2.40) and (2.42), we get
(2.43)
$$\frac{d}{dt}\int_{{\mathbb{R}^{N}}}\tilde{\theta}^{2}(\cdot,t)\,dx=-2({%
\mathcal{L}}_{0}\tilde{\theta},\tilde{\theta})_{L^{2}}\leq-2\bar{\mu}\int_{{%
\mathbb{R}^{N}}}\tilde{\theta}^{2}(\cdot,t)\,dx$$
and hence
(2.44)
$$\int_{{\mathbb{R}^{N}}}\tilde{\theta}^{2}(\cdot,t)\,dx\leq e^{-2\bar{\mu}(t-%
\tau_{0})}\int_{{\mathbb{R}^{N}}}\tilde{\theta}^{2}(\cdot,\tau_{0})\,dx\quad%
\hbox{for}\ t\geq\tau_{0}.$$
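Indeed, (2.44) follows from (2.43) by an integrating factor: the differential inequality shows that
$$\frac{d}{dt}\Big(e^{2\bar{\mu}t}\int_{{\mathbb{R}^{N}}}\tilde{\theta}^{2}(\cdot,t)\,dx\Big)=e^{2\bar{\mu}t}\Big(\frac{d}{dt}\int_{{\mathbb{R}^{N}}}\tilde{\theta}^{2}\,dx+2\bar{\mu}\int_{{\mathbb{R}^{N}}}\tilde{\theta}^{2}\,dx\Big)\leq 0,$$
so $t\mapsto e^{2\bar{\mu}t}\int_{{\mathbb{R}^{N}}}\tilde{\theta}^{2}(\cdot,t)\,dx$ is nonincreasing on $[\tau_{0},\infty)$.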
Now let $\tilde{T}>1$ be a constant which will be chosen later
and take $T\geq\tilde{T}$ arbitrarily.
Integrating (2.43) over $[T,2T]$, one has
$$\int_{{\mathbb{R}^{N}}}\tilde{\theta}^{2}(\cdot,2T)\,dx+2\int_{T}^{2T}({%
\mathcal{L}}_{0}\tilde{\theta},\tilde{\theta})_{L^{2}}\,ds=\int_{{\mathbb{R}^{%
N}}}\tilde{\theta}^{2}(\cdot,T)\,dx.$$
Moreover by the Young inequality, we also have
$$\displaystyle({\mathcal{L}}_{0}\tilde{\theta},\tilde{\theta})_{L^{2}}$$
$$\displaystyle=\int_{{\mathbb{R}^{N}}}\Bigl{(}(1+2w_{0}^{2})|\nabla\tilde{%
\theta}|^{2}+8w_{0}\tilde{\theta}\nabla w_{0}\cdot\nabla\tilde{\theta}+2\tilde%
{\theta}^{2}|\nabla w_{0}|^{2}+\tilde{\theta}^{2}-pw_{0}^{p-1}\tilde{\theta}^{%
2}\Bigr{)}\,dx$$
$$\displaystyle\geq\int_{{\mathbb{R}^{N}}}\Big{(}|\nabla\tilde{\theta}|^{2}+%
\tilde{\theta}^{2}-8w_{0}|\tilde{\theta}||\nabla w_{0}||\nabla\tilde{\theta}|-%
pw_{0}^{p-1}\tilde{\theta}^{2}\Big{)}\,dx$$
(2.45)
$$\displaystyle\geq\frac{1}{2}\int_{{\mathbb{R}^{N}}}|\nabla\tilde{\theta}|^{2}+%
\tilde{\theta}^{2}\,dx-\int_{{\mathbb{R}^{N}}}\big{(}pw_{0}^{p-1}+32w_{0}^{2}|%
\nabla w_{0}|^{2}\big{)}\tilde{\theta}^{2}\,dx$$
$$\displaystyle\geq\frac{1}{2}\|\tilde{\theta}(\cdot,t)\|_{H^{1}}^{2}-C\|\tilde{%
\theta}(\cdot,t)\|_{L^{2}}^{2}.$$
Thus from (2.44), we obtain
$$\displaystyle\int_{T}^{2T}\|\tilde{\theta}(\cdot,s)\|_{H^{1}}^{2}\,ds$$
$$\displaystyle\leq C\int_{T}^{2T}\|\tilde{\theta}(\cdot,s)\|_{L^{2}}^{2}\,ds+\|%
\tilde{\theta}(\cdot,T)\|_{L^{2}}^{2}$$
$$\displaystyle\leq Ce^{2\bar{\mu}\tau_{0}}\|\tilde{\theta}(\cdot,\tau_{0})\|_{L%
^{2}}^{2}\int_{T}^{2T}e^{-2\bar{\mu}s}\,ds+e^{2\bar{\mu}\tau_{0}}\|\tilde{%
\theta}(\cdot,\tau_{0})\|_{L^{2}}^{2}e^{-2\bar{\mu}T}$$
$$\displaystyle\leq\frac{C}{\bar{\mu}}e^{2\bar{\mu}\tau_{0}}\|\tilde{\theta}(%
\cdot,\tau_{0})\|_{L^{2}}^{2}e^{-2\bar{\mu}T}+e^{2\bar{\mu}\tau_{0}}\|\tilde{%
\theta}(\cdot,\tau_{0})\|_{L^{2}}^{2}e^{-2\bar{\mu}T}$$
$$\displaystyle\leq\bar{C}e^{-2\bar{\mu}T}\quad\hbox{for all}\ \ T\geq\tilde{T},$$
where $\bar{C}>0$ is independent of $T$ and $\tilde{T}$.
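In the chain above, the time integral was estimated by the elementary bound
$$\int_{T}^{2T}e^{-2\bar{\mu}s}\,ds=\frac{e^{-2\bar{\mu}T}-e^{-4\bar{\mu}T}}{2\bar{\mu}}\leq\frac{1}{2\bar{\mu}}\,e^{-2\bar{\mu}T},$$
with the factor $\frac{1}{2}$ absorbed into the constant.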
Putting $\alpha:=\bar{\mu}>0$ and taking $\tilde{T}>1$ larger so that
$\bar{C}e^{-\alpha\tilde{T}}\leq 1$, the claim holds.
We now show that $\bar{\mu}>0$.
By the definition of $\bar{\mu}$ and $\mu_{2}=0$, it follows that $\bar{\mu}\geq 0$.
Suppose by contradiction that $\bar{\mu}=0$.
Then there exists $\{\psi_{n}\}\subset H^{1}({\mathbb{R}^{N}})$ such that $\|\psi_{n}\|_{L^{2}}=1$,
$(\psi_{n},\psi_{1})_{L^{2}}=(\psi_{n},\frac{\partial w_{0}}{\partial x_{i}})_{%
L^{2}}=0$
for $i=1,\cdots,N$ and $({\mathcal{L}}_{0}\psi_{n},\psi_{n})_{L^{2}}\to 0$ as $n\to\infty$.
Since $({\mathcal{L}}_{0}\psi_{n},\psi_{n})_{L^{2}}\to 0$ and $\|\psi_{n}\|_{L^{2}}=1$,
one can show that $\|\psi_{n}\|_{H^{1}}$ is bounded.
Thus passing to a subsequence, we may assume that
$\psi_{n}\rightharpoonup\bar{\psi}$ in $H^{1}({\mathbb{R}^{N}})$ and
$\psi_{n}\to\bar{\psi}$ in $L_{{\rm loc}}^{2}({\mathbb{R}^{N}})$ for some $\bar{\psi}\in H^{1}({\mathbb{R}^{N}})$.
Moreover, arguing as in (2.45), we have
(2.46)
$$\frac{1}{2}\|\psi_{n}\|_{H^{1}}^{2}\leq({\mathcal{L}}_{0}\psi_{n},\psi_{n})_{L%
^{2}}+\int_{{\mathbb{R}^{N}}}\big{(}pw_{0}^{p-1}+32w_{0}^{2}|\nabla w_{0}|^{2}%
\big{)}\psi_{n}^{2}\,dx.$$
Since $w_{0}$ decays exponentially at infinity and $\|\psi_{n}\|_{L^{2}}=1$,
there exists $R>0$ such that
$$\int_{B^{c}(0,R)}\big{(}pw_{0}^{p-1}+32w_{0}^{2}|\nabla w_{0}|^{2}\big{)}\psi_%
{n}^{2}\,dx\leq\frac{1}{4}.$$
Thus from (2.46), we get
$$\frac{1}{4}\leq C\int_{B(0,R)}\psi_{n}^{2}\,dx+o_{n}(1).$$
Since $\psi_{n}\to\bar{\psi}$ in $L^{2}(B(0,R))$, this implies $\int_{B(0,R)}\bar{\psi}^{2}\,dx\geq\frac{1}{4C}>0$, and hence $\bar{\psi}\not\equiv 0$.
Moreover, by Fatou's lemma, the weak convergence $\psi_{n}\rightharpoonup\bar{\psi}$,
the strong convergence in $L^{2}_{{\rm loc}}({\mathbb{R}^{N}})$,
and the exponential decay of $w_{0}$, one can show that
$$({\mathcal{L}}_{0}\bar{\psi},\bar{\psi})_{L^{2}}\leq 0,\quad(\bar{\psi},\psi_{%
1})_{L^{2}}=\Big{(}\bar{\psi},\frac{\partial w_{0}}{\partial x_{i}}\Big{)}_{L^%
{2}}=0\quad\hbox{for}\ i=1,\cdots,N.$$
Since $\bar{\mu}=0$,
it follows by the definition of $\bar{\mu}$ that $({\mathcal{L}}_{0}\bar{\psi},\bar{\psi})_{L^{2}}=0$.
By the Lagrange multiplier rule,
using $\bar{\psi},\psi_{1}$ and $\partial w_{0}/\partial x_{i}$ as test functions,
one can prove that ${\mathcal{L}}_{0}\bar{\psi}=0$, which contradicts
${\rm Ker}({\mathcal{L}}_{0})={\rm span}\{\frac{\partial w_{0}}{\partial x_{i}}\}$.
Thus $\bar{\mu}>0$ and the proof is complete.
∎
Lemma 2.29.
It holds that $C_{0}=0$, and hence
$$\phi(x,t)=\sum_{i=1}^{N}C_{i}\frac{\partial w_{0}}{\partial x_{i}}(x)+\tilde{%
\theta}(x,t).$$
Proof.
First we observe by Lemma 2.4 that
(2.47)
$$I(u)(t+t_{n})-I(u)(t+\tau)\leq 0,\quad\hbox{for any}\ 0<\tau\leq t_{n}.$$
Let $t_{0}>1$ be given. From (2.25), (2.26),
and the uniform exponential decay of $\sup_{t>0}|D^{k}u(\cdot,t)|$ for $|k|\leq 1$, one has
$$I(u)(t+t_{n})\to I(w_{0})\quad\hbox{as}\ n\to\infty\ \ \hbox{on}\ [1,t_{0}].$$
Thus integrating (2.47) over $[1,t_{0}]$ and passing to the limit $n\to\infty$, we get
(2.48)
$$\int_{1}^{t_{0}}\big{(}I(u)(s+\tau)-I(w_{0})\big{)}\,ds\geq 0\quad\hbox{for %
any}\ t_{0}>1.$$
Next since $u_{n}=w_{n}+\eta_{n}\phi_{n}$,
$I^{\prime}(w_{n})=0$ and $I(w_{0})=I(w_{n})$, by Taylor expansion, we have
$$\displaystyle\int_{1}^{t_{0}}\big{(}I(u)(s+t_{n})-I(w_{0})\big{)}\,ds$$
$$\displaystyle=\int_{1}^{t_{0}}\big{(}I(w_{n}+\eta_{n}\phi_{n})-I(w_{n})\big{)}%
\,ds$$
$$\displaystyle=\frac{\eta_{n}^{2}}{2}\int_{1}^{t_{0}}\big{\langle}I^{\prime%
\prime}(w_{n}+\kappa_{n}\eta_{n}\phi_{n})\phi_{n},\phi_{n}\big{\rangle}\,ds$$
$$\displaystyle=\frac{\eta_{n}^{2}}{2}\int_{1}^{t_{0}}\big{\langle}I^{\prime%
\prime}(w_{0})\phi_{n},\phi_{n}\big{\rangle}\,ds+\frac{\eta_{n}^{2}}{2}\int_{1%
}^{t_{0}}\big{\langle}\big{(}I^{\prime\prime}(w_{n})-I^{\prime\prime}(w_{0})%
\big{)}\phi_{n},\phi_{n}\big{\rangle}\,ds$$
(2.49)
$$\displaystyle\quad+\frac{\eta_{n}^{2}}{2}\int_{1}^{t_{0}}\big{\langle}\big{(}I%
^{\prime\prime}(w_{n}+\kappa_{n}\eta_{n}\phi_{n})-I^{\prime\prime}(w_{n})\big{%
)}\phi_{n},\phi_{n}\big{\rangle}\,ds,$$
for some $\kappa_{n}\in(0,1)$.
Now from $w_{n}+\kappa_{n}\eta_{n}\phi_{n}=\kappa_{n}u_{n}+(1-\kappa_{n})w_{n}$, one has
$$\displaystyle\Big{\langle}\big{(}I^{\prime\prime}(w_{n}+\kappa_{n}\eta_{n}\phi%
_{n})-I^{\prime\prime}(w_{n})\big{)}\phi_{n},\phi_{n}\Big{\rangle}$$
$$\displaystyle=\int_{{\mathbb{R}^{N}}}\Big{\{}2\phi_{n}^{2}\Big{(}|\nabla\big{(%
}\kappa_{n}u_{n}+(1-\kappa_{n})w_{n}\big{)}|^{2}-|\nabla w_{n}|^{2}\Big{)}+2|%
\nabla\phi_{n}|^{2}\Big{(}\big{(}\kappa_{n}u_{n}+(1-\kappa_{n})w_{n}\big{)}^{2%
}-w_{n}^{2}\Big{)}$$
$$\displaystyle +8\phi_{n}\nabla\phi_{n}\cdot\Big{(}\big{(}\kappa_{n}%
u_{n}+(1-\kappa_{n})w_{n}\big{)}\nabla\big{(}\kappa_{n}u_{n}+(1-\kappa_{n})w_{%
n}\big{)}-w_{n}\nabla w_{n}\Big{)}$$
$$\displaystyle -p\phi_{n}^{2}\Big{(}\big{|}\kappa_{n}u_{n}+(1-\kappa%
_{n})w_{n}\big{|}^{p-1}-|w_{n}|^{p-1}\Big{)}\Big{\}}\,dx.$$
Since $u_{n}$ and $w_{n}$ converge to $w_{0}$ in $L^{\infty}\big{(}{\mathbb{R}^{N}}\times[1,t_{0}]\big{)}$
by (2.25) and (2.26), it follows that
$$\Big{\langle}\big{(}I^{\prime\prime}(w_{n}+\kappa_{n}\eta_{n}\phi_{n})-I^{%
\prime\prime}(w_{n})\big{)}\phi_{n},\phi_{n}\Big{\rangle}\leq o(1)\|\phi_{n}(%
\cdot,t)\|_{H^{1}({\mathbb{R}^{N}})}^{2}.$$
Thus by Lemma 2.25, there exists $n_{0}=n_{0}(t_{0})\in\mathbb{N}$ such that for $n\geq n_{0}$,
(2.50)
$$\int_{1}^{t_{0}}\big{\langle}\big{(}I^{\prime\prime}(w_{n}+\kappa_{n}\eta_{n}%
\phi_{n})-I^{\prime\prime}(w_{n})\big{)}\phi_{n},\phi_{n}\big{\rangle}\,ds\leq
1.$$
Similarly, enlarging $n_{0}$ if necessary, one gets for $n\geq n_{0}$,
(2.51)
$$\int_{1}^{t_{0}}\big{\langle}\big{(}I^{\prime\prime}(w_{n})-I^{\prime\prime}(w%
_{0})\big{)}\phi_{n},\phi_{n}\big{\rangle}\,ds\leq 1.$$
Next since $\theta_{n}=\phi_{n}-\phi$, it follows that
$$\displaystyle\int_{1}^{t_{0}}\big{\langle}I^{\prime\prime}(w_{0})\phi_{n},\phi%
_{n}\big{\rangle}\,ds=\int_{1}^{t_{0}}\Big{(}\big{\langle}I^{\prime\prime}(w_{%
0})\phi,\phi\big{\rangle}+2\big{\langle}I^{\prime\prime}(w_{0})\phi,\theta_{n}%
\big{\rangle}+\big{\langle}I^{\prime\prime}(w_{0})\theta_{n},\theta_{n}\big{%
\rangle}\Big{)}\,ds.$$
By (i) of Lemma 2.27, there exists $n_{1}=n_{1}(t_{0})\in\mathbb{N}$ such that for $n\geq n_{1}$,
$$\displaystyle\int_{1}^{t_{0}}\big{\langle}I^{\prime\prime}(w_{0})\theta_{n},%
\theta_{n}\big{\rangle}\,ds$$
$$\displaystyle=\int_{1}^{t_{0}}\int_{{\mathbb{R}^{N}}}\Big{\{}(1+2w_{0}^{2})|%
\nabla\theta_{n}|^{2}+8w_{0}\theta_{n}\nabla w_{0}\cdot\nabla\theta_{n}$$
$$\displaystyle +2\theta_{n}^{2}|\nabla w_{0}|^{2}+\theta_{n}^{%
2}-pw_{0}^{p-1}\theta_{n}^{2}\Big{\}}\,dx\,ds$$
(2.52)
$$\displaystyle\leq C\int_{1}^{t_{0}}\|\theta_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds\leq%
\tilde{C},$$
$$\displaystyle\int_{1}^{t_{0}}\big{\langle}I^{\prime\prime}(w_{0})\phi,\theta_{%
n}\big{\rangle}\,ds$$
$$\displaystyle=\int_{1}^{t_{0}}\int_{{\mathbb{R}^{N}}}\Big{\{}(1+2w_{0}^{2})%
\nabla\phi\cdot\nabla\theta_{n}+4w_{0}\phi\nabla w_{0}\cdot\nabla\theta_{n}$$
$$\displaystyle +4w_{0}\theta_{n}\nabla w_{0}\cdot\nabla\phi+2\phi\theta_{n%
}|\nabla w_{0}|^{2}+\phi\theta_{n}-pw_{0}^{p-1}\phi\theta_{n}\Big{\}}\,dx\,ds$$
(2.53)
$$\displaystyle\leq C\left(\int_{1}^{t_{0}}\|\phi(\cdot,s)\|_{H^{1}}^{2}\,ds%
\right)^{1\over 2}\left(\int_{1}^{t_{0}}\|\theta_{n}(\cdot,s)\|_{H^{1}}^{2}\,%
ds\right)^{1\over 2}\leq\tilde{C},$$
where $\tilde{C}$ is independent of $n\in\mathbb{N}$ and $t_{0}$.
Finally from (2.39), it follows that
$$\displaystyle\int_{1}^{t_{0}}\big{\langle}I^{\prime\prime}(w_{0})\phi,\phi\big%
{\rangle}\,ds$$
$$\displaystyle=\big{\langle}I^{\prime\prime}(w_{0})\psi_{1},\psi_{1}\big{%
\rangle}\int_{1}^{t_{0}}(C_{0})^{2}e^{-2\mu_{1}s}\,ds$$
$$\displaystyle\quad+2\int_{1}^{t_{0}}C_{0}e^{-\mu_{1}s}\Big{\langle}I^{\prime%
\prime}(w_{0})\psi_{1},\sum_{i=1}^{N}C_{i}\frac{\partial w_{0}}{\partial x_{i}%
}+\tilde{\theta}\Big{\rangle}\,ds$$
$$\displaystyle\quad+\int_{1}^{t_{0}}\Big{\langle}I^{\prime\prime}(w_{0})\Big{(}%
\sum_{i=1}^{N}C_{i}\frac{\partial w_{0}}{\partial x_{i}}+\tilde{\theta}\Big{)}%
,\sum_{i=1}^{N}C_{i}\frac{\partial w_{0}}{\partial x_{i}}+\tilde{\theta}\Big{%
\rangle}\,ds.$$
Noticing that
$\big{\langle}I^{\prime\prime}(w_{0})\psi_{1},\,\cdot\,\big{\rangle}=({\mathcal%
{L}}_{0}\psi_{1},\,\cdot\,)_{L^{2}}=\mu_{1}(\psi_{1},\,\cdot\,)_{L^{2}}$,
$$\Big{\langle}I^{\prime\prime}(w_{0})\frac{\partial w_{0}}{\partial x_{i}},\,%
\cdot\,\Big{\rangle}=\Big{(}{\mathcal{L}}_{0}\Big{(}\frac{\partial w_{0}}{%
\partial x_{i}}\Big{)},\,\cdot\,\Big{)}_{L^{2}}=0,$$
and that $\psi_{1}$, $\frac{\partial w_{0}}{\partial x_{i}}$ are orthogonal in $L^{2}$, we have
$$\displaystyle\int_{1}^{t_{0}}\big{\langle}I^{\prime\prime}(w_{0})\phi,\phi\big%
{\rangle}\,ds$$
(2.54)
$$\displaystyle=-\frac{(C_{0})^{2}}{2}(e^{-2\mu_{1}t_{0}}-e^{-2\mu_{1}})+2\mu_{1%
}C_{0}\int_{1}^{t_{0}}e^{-\mu_{1}s}(\psi_{1},\tilde{\theta})_{L^{2}}\,ds+\int_%
{1}^{t_{0}}\big{\langle}I^{\prime\prime}(w_{0})\tilde{\theta},\tilde{\theta}%
\big{\rangle}\,ds.$$
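The first term in (2.54) is obtained from $\big\langle I^{\prime\prime}(w_{0})\psi_{1},\psi_{1}\big\rangle=\mu_{1}$ (assuming, as elsewhere in the proof, the normalization $\|\psi_{1}\|_{L^{2}}=1$) together with the elementary computation
$$\mu_{1}\int_{1}^{t_{0}}(C_{0})^{2}e^{-2\mu_{1}s}\,ds=\mu_{1}(C_{0})^{2}\,\frac{e^{-2\mu_{1}}-e^{-2\mu_{1}t_{0}}}{2\mu_{1}}=-\frac{(C_{0})^{2}}{2}\big(e^{-2\mu_{1}t_{0}}-e^{-2\mu_{1}}\big).$$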
Next by the Schwarz inequality, one has
$$\displaystyle 2\mu_{1}C_{0}\int_{1}^{t_{0}}e^{-\mu_{1}s}(\psi_{1},\tilde{%
\theta})_{L^{2}}\,ds$$
$$\displaystyle\leq 2|\mu_{1}||C_{0}|e^{-\mu_{1}t_{0}}\|\psi_{1}\|_{L^{2}}\int_{%
1}^{t_{0}}\|\tilde{\theta}(\cdot,s)\|_{L^{2}}\,ds$$
$$\displaystyle\leq 2|\mu_{1}||C_{0}|\sqrt{t_{0}}e^{-\mu_{1}t_{0}}\left(\int_{1}%
^{t_{0}}\|\tilde{\theta}(\cdot,s)\|_{H^{1}}^{2}\,ds\right)^{1\over 2}.$$
Let $\tilde{T}>0$ be as in Lemma 2.28 and suppose
$\ell\tilde{T}\leq t_{0}\leq(\ell+1)\tilde{T}$ for some $\ell\in\mathbb{N}\cup\{0\}$. Then
$$\displaystyle\int_{1}^{t_{0}}\|\tilde{\theta}(\cdot,s)\|_{H^{1}}^{2}\,ds$$
$$\displaystyle\leq\int_{1}^{\tilde{T}}\|\tilde{\theta}(\cdot,s)\|_{H^{1}}^{2}\,%
ds+\sum_{k=1}^{\ell}\int_{k\tilde{T}}^{(k+1)\tilde{T}}\|\tilde{\theta}(\cdot,s%
)\|_{H^{1}}^{2}\,ds$$
$$\displaystyle\leq\int_{1}^{\tilde{T}}\|\tilde{\theta}(\cdot,s)\|_{H^{1}}^{2}\,%
ds+\sum_{k=1}^{\ell}e^{-\alpha k\tilde{T}}$$
$$\displaystyle\leq\int_{1}^{\tilde{T}}\|\tilde{\theta}(\cdot,s)\|_{H^{1}}^{2}\,%
ds+\frac{e^{-\alpha\tilde{T}}}{1-e^{-\alpha\tilde{T}}}.$$
Thus there exists $L>0$ independent of $t_{0}$ such that
(2.55)
$$2\mu_{1}C_{0}\int_{1}^{t_{0}}e^{-\mu_{1}s}(\psi_{1},\tilde{\theta})_{L^{2}}\,%
ds\leq L|C_{0}|\sqrt{t_{0}}e^{-\mu_{1}t_{0}}.$$
Similarly one has
(2.56)
$$\int_{1}^{t_{0}}\big{\langle}I^{\prime\prime}(w_{0})\tilde{\theta},\tilde{%
\theta}\big{\rangle}\,ds\leq L.$$
From (2.49)-(2.56), we obtain
$$\displaystyle\int_{1}^{t_{0}}\big{(}I(u)(s+t_{n})-I(w_{0})\big{)}\,ds$$
$$\displaystyle\leq\frac{\eta_{n}^{2}}{2}\Big{(}2+2\tilde{C}+L+\frac{(C_{0})^{2}%
}{2}e^{-2\mu_{1}}+L|C_{0}|\sqrt{t_{0}}e^{-\mu_{1}t_{0}}-\frac{(C_{0})^{2}}{2}e%
^{-2\mu_{1}t_{0}}\Big{)}$$
for $n\geq\max\{n_{0},n_{1}\}$.
Now suppose by contradiction that $C_{0}\neq 0$.
Then since $\mu_{1}<0$, one has
$$2+2\tilde{C}+L+\frac{(C_{0})^{2}}{2}e^{-2\mu_{1}}+L|C_{0}|\sqrt{t_{0}}e^{-\mu_%
{1}t_{0}}-\frac{(C_{0})^{2}}{2}e^{-2\mu_{1}t_{0}}\to-\infty\quad\hbox{as}\ t_{%
0}\to\infty.$$
This contradicts (2.48).
Thus it follows that $C_{0}=0$ and hence the proof is complete.
∎
Proof of Proposition 2.21 concluded.
Let $C_{1},\cdots,C_{N}$ be as defined in (2.38)
and $\tilde{T}>0$, $\alpha>0$ be as in Lemma 2.28.
We put ${\bf C}=(C_{1},\cdots,C_{N})$ and $z_{n}:=\eta_{n}{\bf C}\in{\mathbb{R}^{N}}$.
Since $\eta_{n}\to 0$, we may assume $|z_{n}|\leq 1$.
By Lemma 2.29, the orthogonality of $\frac{\partial w_{0}}{\partial x_{i}}$ and $\tilde{\theta}(\cdot,\tau_{0})$ in $L^{2}({\mathbb{R}^{N}})$, and the bound $\|\phi_{0}\|_{L^{2}}\leq 2$, one has
$$4\geq\|\phi_{0}\|_{L^{2}}^{2}=\left\|\sum_{i=1}^{N}C_{i}\frac{\partial w_{0}}{%
\partial x_{i}}+\tilde{\theta}(\cdot,\tau_{0})\right\|_{L^{2}}^{2}=|{\bf C}|^{%
2}\|\nabla w_{0}\|_{L^{2}}^{2}+\|\tilde{\theta}(\cdot,\tau_{0})\|_{L^{2}}^{2}.$$
Since $\|\nabla w_{0}\|_{L^{2}}=\|\nabla w\|_{L^{2}}$, it follows that
$|{\bf C}|\leq\frac{2}{\|\nabla w\|_{L^{2}}}=:M$ and
hence $|z_{n}|\leq M\eta_{n}$.
Next by the definitions of $\phi_{n}$, $\theta_{n}$ and from Lemma 2.29,
we get
$$\displaystyle u(x,t+t_{n})-w(x+y_{n}+z_{n})$$
$$\displaystyle=\eta_{n}\phi_{n}(x,t)+w(x+y_{n})-w(x+y_{n}+z_{n})$$
$$\displaystyle=\eta_{n}\theta_{n}(x,t)+\eta_{n}\phi(x,t)+w(x+y_{n})-w(x+y_{n}+z%
_{n})$$
$$\displaystyle=\eta_{n}\theta_{n}(x,t)+\sum_{i=1}^{N}\eta_{n}C_{i}\frac{%
\partial w_{0}}{\partial x_{i}}(x)+\eta_{n}\tilde{\theta}(x,t)+w(x+y_{n})-w(x+%
y_{n}+z_{n})$$
$$\displaystyle=\eta_{n}\theta_{n}(x,t)+\eta_{n}\tilde{\theta}(x,t)+\big{(}\nabla w_{0}(x)\cdot z_{n}+w(x+y_{n})-w(x+y_{n}+z_{n})\big{)}$$
$$\displaystyle=\eta_{n}\theta_{n}(x,t)+\eta_{n}\tilde{\theta}(x,t)+\big{(}\nabla w(x+y_{n})\cdot z_{n}+w(x+y_{n})-w(x+y_{n}+z_{n})\big{)}$$
$$\displaystyle\quad+\big{(}\nabla w_{0}(x)-\nabla w(x+y_{n})\big{)}\cdot z_{n}.$$
By Lemma 2.11 (i), one has
$$\|w(\cdot+y_{n}+z_{n})-w(\cdot+y_{n})-\nabla w(\cdot+y_{n})\cdot z_{n}\|_{H^{1%
}}^{2}\leq C|z_{n}|^{4}.$$
Moreover since $w(\cdot+y_{n})\to w(\cdot+y_{0})=w_{0}(\cdot)$ in $C^{2}({\mathbb{R}^{N}})$,
we also have
$$\|\big{(}\nabla w_{0}(\cdot)-\nabla w(\cdot+y_{n})\big{)}\cdot z_{n}\|_{H^{1}}%
^{2}=o(1)|z_{n}|^{2}.$$
Thus by the triangle inequality, we obtain
$$\displaystyle\eta^{2}(y_{n}+z_{n},t_{n}+T)$$
$$\displaystyle=\int_{T}^{2T}\|u(\cdot,s+t_{n})-w(\cdot+y_{n}+z_{n})\|_{H^{1}}^{%
2}\,ds$$
$$\displaystyle\leq\eta_{n}^{2}\left(\int_{T}^{2T}\Big{(}16\|\theta_{n}(\cdot,s)%
\|_{H^{1}}^{2}+16\|\tilde{\theta}(\cdot,s)\|_{H^{1}}^{2}\Big{)}\,ds+CT\big{(}|%
z_{n}|^{2}+o(1)\big{)}\right).$$
By Lemma 2.27 (ii), there exist $T>1$ and $n_{2}=n_{2}(T)\in\mathbb{N}$ such that
$$\int_{T}^{2T}\|\theta_{n}(\cdot,s)\|_{H^{1}}^{2}\,ds\leq\frac{1}{256}\quad%
\hbox{and}\quad CT\big{(}|z_{n}|^{2}+o(1)\big{)}\leq\frac{1}{8}\quad\hbox{for}%
\ n\geq n_{2}.$$
Taking $T>1$ large if necessary, we may assume
$T>\tilde{T}$ and $e^{-\alpha T}\leq\frac{1}{256}$.
Then by Lemma 2.28,
$$\int_{T}^{2T}\|\tilde{\theta}(\cdot,s)\|_{H^{1}}^{2}\,ds\leq\frac{1}{256}.$$
Thus we obtain $\eta^{2}(y_{n}+z_{n},t_{n}+T)\leq\frac{1}{4}\eta^{2}(y_{n},t_{n})$.
This completes the proof. ∎
3. Proof of the main results
In this section, we will prove the main results of the paper.
3.1. Proof of Theorem 1.1
Let $u_{0}\in C_{0}^{\infty}({\mathbb{R}^{N}})$ be non-negative, radially non-increasing and not identically zero.
Then, by means of Lemma 2.3, we know that
$$u(x,t)>0,\quad u(x,t)=v(|x|,t),\quad v_{r}(|x|,t)<0\quad\text{for any $x\in%
\mathbb{R}^{N}$ and $t\in(0,T_{\rm max})$}.$$
If $u$ is globally defined, then $T_{\rm max}=\infty$.
In that case, Proposition 2.12 shows that $u$ is uniformly bounded in space and time,
and that it satisfies the decay condition (1.6). ∎
3.2. Proof of Theorem 1.2
Let $w\in\Omega(u)$.
Then there exists a diverging sequence $\{t_{n}\}_{n\in\mathbb{N}}$ such that
$u(\cdot,t_{n})\to w(\cdot)$ uniformly in ${\mathbb{R}^{N}}$ as $n\to\infty$.
Let $T>1$, $\eta_{0}>0,\ t_{0}>0$ be as in Lemma 2.24 and fix $\varepsilon>0$.
Then by (i) of Lemma 2.9, there exists $n_{0}\in\mathbb{N}$ such that $t_{n_{0}}\geq t_{0}$,
$$\eta(0,t_{n_{0}})=\left(\int_{0}^{T}\|u(\cdot,s+t_{n_{0}})-w(\cdot)\|_{H^{1}({%
\mathbb{R}^{N}})}^{2}\,ds\right)^{1\over 2}<\min\{\eta_{0},\varepsilon\}.$$
Thus from Lemma 2.24, one has
(3.1)
$$\eta(0,t_{n_{0}}+kT)\leq\tilde{C}\varepsilon\quad\hbox{for every}\ k\in\mathbb%
{N}.$$
Let $t\geq t_{n_{0}}$ be given. Then it follows that
$t_{n_{0}}+kT\leq t\leq t_{n_{0}}+(k+1)T$ for some $k\in\mathbb{N}$.
Thus we can write $t=t_{n_{0}}+kT+\tau$ with $\tau\in[0,T]$.
Then by Lemma 2.10 and from (3.1),
there exists $C>0$ independent of $t$ such that
(3.2)
$$\eta(0,t)=\eta(0,t_{n_{0}}+kT+\tau)\leq C\eta(0,t_{n_{0}}+kT)\leq C\varepsilon.$$
Now let $K>0$ be arbitrary.
Then $\ell T\leq K<(\ell+1)T$ for some $\ell\in\mathbb{N}$.
Then from (3.2),
$$\displaystyle\int_{0}^{K}\|u(\cdot,s+t)-w(\cdot)\|_{H^{1}}^{2}\,ds$$
$$\displaystyle\leq\int_{0}^{(\ell+1)T}\|u(\cdot,s+t)-w(\cdot)\|_{H^{1}}^{2}\,ds$$
$$\displaystyle=\sum_{j=0}^{\ell}\eta^{2}(0,t+jT)\leq(\ell+1)C^{2}\varepsilon^{2}.$$
This implies that (1.7) holds.
Finally we show that the limit $w\in\Omega(u)$ is independent of the
choice of the sequence $\{t_{n}\}_{n\in\mathbb{N}}$.
Indeed suppose that there exists another sequence $\{\tilde{t}_{n}\}_{n\in\mathbb{N}}$ such that $u(\cdot,\tilde{t}_{n})\to\tilde{w}$ uniformly for some $\tilde{w}\in\Omega(u)$.
Then by the previous argument, one has
$$\int_{0}^{K}\|u(\cdot,s+t)-\tilde{w}(\cdot)\|_{H^{1}}^{2}\,ds\leq(\ell+1)C^{2}%
\varepsilon^{2}.$$
This implies that $w\equiv\tilde{w}$ and hence the proof is complete.
∎
3.3. Proof of Theorem 1.3
Let $\varphi_{0}\in C_{0}^{\infty}({\mathbb{R}^{N}})$ be a function which is non-negative,
radially non-increasing and not identically zero.
For $\lambda>0$, we denote by $u_{\lambda}$ the solution of (1.2)-(1.3) with the initial condition $u_{0}=\lambda\varphi_{0}$.
Two cases may then occur: either $u_{\lambda}$ blows up in finite time or it is globally defined.
In the second case, $u_{\lambda}$ is positive, radially decreasing and satisfies the uniform decay condition (1.6) by Lemma 2.3 and Theorem 1.1.
Thus by Theorem 1.2,
$u_{\lambda}$ converges to 0 or a positive solution of (1.4) uniformly in ${\mathbb{R}^{N}}$.
Now we define
$$\displaystyle{\mathcal{A}}$$
$$\displaystyle:=\{\lambda\in(0,\infty):\ u_{\lambda}\ \hbox{blows up in finite %
time}\}.$$
$$\displaystyle{\mathcal{B}}$$
$$\displaystyle:=\{\lambda\in(0,\infty):\ u_{\lambda}\ \hbox{converges to a positive solution of (1.4)}\ \hbox{uniformly in ${\mathbb{R}^{N}}$}\}.$$
$$\displaystyle{\mathcal{C}}$$
$$\displaystyle:=\{\lambda\in(0,\infty):\ u_{\lambda}\ \hbox{converges to zero}%
\ \hbox{uniformly in ${\mathbb{R}^{N}}$}\}.$$
One can see that ${\mathcal{A}},{\mathcal{B}},{\mathcal{C}}$ are intervals and
${\mathcal{A}}\cup{\mathcal{B}}\cup{\mathcal{C}}=(0,\infty)$. The proof of Theorem 1.3 consists of four steps.
Step 1: ${\mathcal{A}}$ is open.
By using standard parabolic estimates, one can prove that, for fixed $t_{0}>0$, the mapping
$$\lambda\to I\big{(}u_{\lambda}(\cdot,t_{0})\big{)}$$
is continuous.
On the other hand, it follows from Lemmas 2.17 and 2.18 that
$u_{\lambda}$ blows up in finite time
if and only if there exists $t_{0}>0$ such that $I\big{(}u_{\lambda}(\cdot,t_{0})\big{)}<0$.
These facts imply that ${\mathcal{A}}$ is open.
$\Box$
Step 2: ${\mathcal{C}}$ is open and not empty.
We observe that any constant less than $1$ is a super-solution of (1.2).
Moreover as we have observed in the proof of Lemma 2.15,
any positive solution of (1.4) attains a maximum value strictly larger than 1.
Finally for fixed $t>0$,
$u_{\lambda}(\cdot,t)$ is continuous with respect to $\lambda$ uniformly in $x\in{\mathbb{R}^{N}}$.
From these facts, one can show that ${\mathcal{C}}$ is open and not empty.
$\Box$
Step 3: ${\mathcal{A}}$ is not empty.
We choose $R>0$ so that ${\rm supp}\ \varphi_{0}\subset B(0,R)$.
Then, taking into account Lemma 2.17,
it suffices to show that $I(\lambda\varphi_{0})<0$ for large $\lambda>0$.
It follows that
$$I(\lambda\varphi_{0})=\frac{\lambda^{2}}{2}\int_{B(0,R)}\big{(}|\nabla\varphi_%
{0}|^{2}+\varphi_{0}^{2}\big{)}\,dx+\lambda^{4}\int_{B(0,R)}\varphi_{0}^{2}|%
\nabla\varphi_{0}|^{2}\,dx-\frac{\lambda^{p+1}}{p+1}\int_{B(0,R)}|\varphi_{0}|%
^{p+1}\,dx.$$
If $p>3$, or $p=3$ and $\int_{{\mathbb{R}^{N}}}\varphi_{0}^{2}|\nabla\varphi_{0}|^{2}-\frac{1}{4}|%
\varphi_{0}|^{4}\,dx<0$,
then we have $I(\lambda\varphi_{0})\to-\infty$ as $\lambda\to\infty$.
Thus we have $I(\lambda\varphi_{0})<0$ for large $\lambda>0$ and ${\mathcal{A}}$ is not empty.
$\Box$
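As a purely numerical illustration (not part of the proof), the sign change of $I(\lambda\varphi_{0})$ can be checked on a radial grid. The test function $\varphi_{0}(x)=(1-|x|^{2})^{2}$ on $B(0,1)$, the dimension $N=3$, and the exponent $p=4>3$ below are hypothetical choices made only for this sketch.

```python
import numpy as np

# Hypothetical data: N = 3, p = 4 > 3, and the compactly supported,
# radially non-increasing test function phi0(r) = (1 - r^2)^2 on B(0,1).
N, p = 3, 4
r = np.linspace(1e-6, 1.0, 4000)
dr = r[1] - r[0]
phi = (1.0 - r**2) ** 2
dphi = -4.0 * r * (1.0 - r**2)      # radial derivative phi0'(r)
w = 4.0 * np.pi * r ** (N - 1)      # |S^{N-1}| r^{N-1}, surface measure in R^3

# The three integrals appearing in I(lambda * phi0).
A = np.sum(w * (dphi**2 + phi**2)) * dr   # int(|grad phi0|^2 + phi0^2)
B = np.sum(w * phi**2 * dphi**2) * dr     # int(phi0^2 |grad phi0|^2)
C = np.sum(w * phi ** (p + 1)) * dr       # int(phi0^{p+1})

def I(lam):
    return 0.5 * lam**2 * A + lam**4 * B - lam ** (p + 1) / (p + 1) * C

# For p > 3 the lambda^{p+1} term eventually dominates, so I turns negative.
print(I(1.0) > 0, I(50.0) < 0)
```

The Riemann sum is crude, but only the signs matter here: the focusing $\lambda^{p+1}$ term outgrows both quadratic and quartic terms whenever $p>3$.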
Now since $(0,\infty)$ is connected, it follows that ${\mathcal{B}}$ is not empty.
Step 4: ${\mathcal{B}}$ consists of a single point $\lambda_{0}$.
Suppose by contradiction that the set ${\mathcal{B}}$ has at least two elements $\lambda_{0}<\lambda_{1}$.
We claim that $(\lambda_{0},\lambda_{1})\subset{\mathcal{A}}$.
Now let $\lambda\in(\lambda_{0},\lambda_{1})$ be arbitrary.
First we show that
(3.3)
$$\lim_{t\to T_{\lambda}}I\big{(}u_{\lambda}(\cdot,t)\big{)}<0,$$
where $T_{\lambda}>0$ is the maximal existence time for $u_{\lambda}$.
To this end, we suppose by contradiction that
$I\big{(}u_{\lambda}(\cdot,t)\big{)}\geq 0$ for all $t\in(0,T_{\lambda}]$.
Then by Lemmas 2.17-2.18, $u_{\lambda}$ is globally defined.
Moreover since $u_{\lambda_{0}}(\cdot,0)<u_{\lambda}(\cdot,0)$,
we have $u_{\lambda_{0}}(x,t)\leq u_{\lambda}(x,t)$
for all $x\in{\mathbb{R}^{N}}$ and $t>0$ by the Comparison Principle.
Finally since $u_{\lambda_{0}}(x,t)\to w(x)$ as $t\to\infty$,
it follows that $\lambda\in{\mathcal{B}}$ and hence $u_{\lambda}(x,t)\to w(x)$ as $t\to\infty$
by the radial symmetry of $u_{\lambda}$
and the uniqueness of positive radial solution of (1.4).
Next we put $\phi=u_{\lambda}-u_{\lambda_{0}}$. Then from (1.2), one has
$$\displaystyle 0$$
$$\displaystyle=\phi_{t}+{\mathcal{L}}\phi+2(u_{\lambda}^{2}-w^{2})\Delta\phi+%
\big{(}4w\Delta w-2(u_{\lambda}+u_{\lambda_{0}})\Delta u_{\lambda_{0}}\big{)}\phi$$
(3.4)
$$\displaystyle\quad+2(|\nabla u_{\lambda}|^{2}-|\nabla w|^{2})\phi+\big{(}4w%
\nabla w-2u_{\lambda_{0}}\nabla(u_{\lambda}+u_{\lambda_{0}})\big{)}\cdot\nabla\phi$$
$$\displaystyle\quad+p\Big{(}w^{p-1}-\big{(}\kappa u_{\lambda}+(1-\kappa)u_{%
\lambda_{0}}\big{)}^{p-1}\Big{)}\phi,$$
where $\kappa\in(0,1)$ and ${\mathcal{L}}$ is the linearized operator
which is defined in (1.5).
Let $\mu_{1}<0$ be the first eigenvalue of ${\mathcal{L}}_{0}$
and $\psi_{1}$ be the corresponding eigenfunction.
Multiplying (3.4) by $\psi_{1}$ and integrating over ${\mathbb{R}^{N}}$,
one can obtain, as in the proof of Lemma 2.26, that
$$\frac{\partial}{\partial t}\int_{{\mathbb{R}^{N}}}\phi\psi_{1}\,dx-\big{(}\mu_%
{1}+\varepsilon(t)\big{)}\int_{{\mathbb{R}^{N}}}\phi\psi_{1}\,dx\geq 0\quad%
\hbox{for all}\ t>0,$$
where $\varepsilon(t)\to 0$ as $t\to\infty$.
Since $\mu_{1}<0$, it follows that
$\int_{{\mathbb{R}^{N}}}\phi(\cdot,t)\psi_{1}\,dx\to\infty$ as $t\to\infty$.
But this contradicts $\phi(\cdot,t)\to 0$ as $t\to\infty$.
Thus inequality (3.3) holds.
Now from (3.3) and by the continuity of $I\big{(}u_{\lambda}(\cdot,t)\big{)}$ with respect to $t$, we have $I\big{(}u_{\lambda}(\cdot,t)\big{)}<0$ for $t$ sufficiently close to $T_{\lambda}$.
Then one can show that $u_{\lambda}(x,t)$ blows up in finite time and hence $\lambda\in{\mathcal{A}}$.
Since $\lambda\in(\lambda_{0},\lambda_{1})$ is arbitrary,
we obtain $(\lambda_{0},\lambda_{1})\subset{\mathcal{A}}$ as claimed.
Next for $\lambda\in(\lambda_{0},\lambda_{1})$, we have $u_{\lambda}(x,t)\leq u_{\lambda_{1}}(x,t)$ for all $x\in{\mathbb{R}^{N}}$ and $t>0$ by the Comparison Principle.
Since $\lambda\in{\mathcal{A}}$ and $\lambda_{1}\in{\mathcal{B}}$, it follows that
$u_{\lambda_{1}}(\cdot,t)\to w$ as $t\to\infty$ but $u_{\lambda}$ blows up in finite time.
This is a contradiction and hence the set ${\mathcal{B}}$ consists of a single point $\lambda_{0}$.
Finally, by Steps 1-4, it follows that
${\mathcal{A}}=(\lambda_{0},\infty)$, ${\mathcal{B}}=\{\lambda_{0}\}$ and ${\mathcal{C}}=(0,\lambda_{0})$.
This completes the proof. ∎
References
[1]
S. Adachi, T. Watanabe,
Uniqueness of the ground state solutions of quasilinear
Schrödinger equations,
Nonlinear Anal. 75 (2012), 819–833.
[2]
S. Adachi, M. Shibata, T. Watanabe,
Global uniqueness results for ground states for a class of
quasilinear elliptic equations, preprint.
[3]
S. Adachi, M. Shibata, T. Watanabe,
A note on the uniqueness and the non-degeneracy of positive radial solutions for semilinear elliptic problems and its application, in preparation.
[4]
D. Aronson, L. Caffarelli,
The initial trace of the solution of the porous medium equation,
Trans. Amer. Math. Soc. 280 (1983), 351–366.
[5]
L. Brizhik, A. Eremko, B. Piette, W. J. Zakrzewski,
Electron self-trapping in a discrete two-dimensional lattice,
Physica D 159 (2001), 71–90.
[6]
L. Brizhik, A. Eremko, B. Piette, W. J. Zakrzewski,
Static solutions of a $D$-dimensional modified nonlinear Schrödinger equation,
Nonlinearity 16 (2003), 1481–1497.
[7]
J. Busca, M. A. Jendoubi, P. Poláčik,
Convergence to equilibrium for semilinear parabolic problems in ${\mathbb{R}^{N}}$,
Comm. Partial Differential Equations,
27 (2002), 1793–1814.
[8]
M. Colin, L. Jeanjean, M. Squassina,
Stability and instability results for standing waves of
quasi-linear Schrödinger equations,
Nonlinearity 23 (2010), 1353–1385.
[9]
C. Cortázar, M. del Pino, M. Elgueta,
The problem of uniqueness of the limit in a semilinear heat equation,
Comm. Partial Differential Equations 24 (1999), 2147–2172.
[10]
C. Cortázar, M. Garcia-Huidobro, P. Herreros,
On the uniqueness of the limit for an asymptotically autonomous
semilinear equation on ${\mathbb{R}^{N}}$,
Comm. Partial Differential Equations 40 (2015), 1218–1240.
[11]
E. Feireisl, H. Petzeltová,
Convergence to a ground state as a threshold phenomenon in
nonlinear parabolic equations,
Differential Integral Equations 10 (1997), 181–196.
[12]
F. Gazzola, T. Weth,
Finite time blow-up and global solutions
for semilinear parabolic equations with initial data at high energy level,
Differential Integral Equations 18 (2005), 961–990.
[13]
Y. Giga, R. Kohn,
Characterizing blow-up using similarity variables,
Indiana Univ. Math. J. 36 (1987), 1–40.
[14]
Y. Giga, R. Kohn,
Nondegeneracy of blowup for semilinear heat equations,
Comm. Pure Appl. Math. 42 (1989), 845–884.
[15]
F. Gladiali, M. Squassina,
Uniqueness of ground states for a class of quasi-linear elliptic equations,
Adv. Nonlinear Anal. 1 (2012), 159–179.
[16]
M. Jendoubi,
A simple unified approach to some convergence theorems of L. Simon,
J. Funct. Anal. 153 (1998), 187–202.
[17]
O. A. Ladyzhenskaya, V. A. Solonnikov, N. N. Ural’ceva,
Linear and quasilinear equations of parabolic type.
Transl. Math. Monographs 23 (1968),
Amer. Math. Soc. Providence R. I.
[18]
H. A. Levine,
Some nonexistence and instability theorem for solutions of formally parabolic equations
of the form $Pu_{t}=-Au+F(u)$,
Arch. Rational Mech. Anal. 51 (1973), 371–386.
[19]
J. Q. Liu, Y. Q. Wang, Z. Q. Wang,
Solutions for quasilinear Schrödinger equations via the Nehari method,
Comm. Partial Differential Equations 29 (2004), 879–901.
[20]
S. Lojasiewicz,
Une propriété topologique des sous-ensembles analytiques réels,
Colloques internationaux du C.N.R.S.
117 (1963), Les équations aux dérivées partielles.
[21]
L. Montoro, B. Sciunzi, M. Squassina,
Asymptotic symmetry for a class of quasi-linear parabolic problems,
Adv. Nonlinear Stud. 10 (2010), 789–818.
[22]
B. Pellacci, M. Squassina,
Unbounded critical points for a class of lower semicontinuous functionals,
J. Differential Equations 201 (2004), 25–62.
[23]
P. Poláčik,
Symmetry properties of positive solutions of parabolic equations on ${\mathbb{R}^{N}}$:
I. Asymptotic symmetry for the Cauchy problem,
Comm. Partial Differential Equations, 30 (2005), 1567–1593.
[24]
P. Poláčik, K. Rybakowski,
Nonconvergent bounded trajectories in semilinear heat equations,
J. Differential Equations 124 (1996), 472–494.
[25]
P. Poláčik, F. Simondon,
Nonconvergent bounded solutions of semilinear heat
equations on arbitrary domains,
J. Differential Equations 186 (2002), 586–610.
[26]
M. Protter, H. Weinberger,
Maximum Principles in Differential Equations,
Springer-Verlag, New York-Berlin (1984).
[27]
P. Quittner, P. Souplet,
Superlinear Parabolic Problems. Blow-up, Global Existence and Steady States,
Birkhäuser Advanced Texts, Basler Lehrbücher. Birkhäuser, Basel, (2007).
[28]
A. Selvitella,
Nondegeneracy of the ground state for quasilinear Schrödinger equations,
Calc. Var. Partial Differential Equations 53 (2015), 349–364.
[29]
L. Simon,
Asymptotics for a class of nonlinear evolution equations, with applications to geometric problems,
Ann. of Math. 118 (1983), 525–571.
[30]
V. Solferino, M. Squassina,
Diffeomorphism-invariant properties for quasi-linear elliptic operators,
J. Fixed Point Theory Appl. 11 (2012), 137–157.
[31]
M. Taylor,
Partial Differential Equations III: Nonlinear Equations,
second edition,
Applied Math. Sci. 117 (2010).
[32]
N. S. Trudinger,
Pointwise estimates and quasilinear parabolic equations,
Comm. Pure Appl. Math. 21 (1968), 205–226. |
Absolute parameters for the F-type eclipsing binary BW Aquarii
P. F. L. Maxted
Astrophysics Group, Keele University, Staffordshire, ST5 5BG, UK
[email protected]
binaries: eclipsing, stars: fundamental parameters, stars:
individual (BW Aquarii)
Software: ellc (Maxted, 2016),
emcee (Foreman-Mackey et al., 2013)
BW Aqr is a bright eclipsing binary star containing a pair of F7V stars. The
absolute parameters of this binary (masses, radii, etc.) are known to good
precision so they are often used to test stellar models, particularly in
studies of convective overshooting (e.g., Clausen, 1991; Claret & Torres, 2018).
Maxted & Hutcheon (2018) analysed the Kepler K2 data
for BW Aqr and noted that it shows variability
between the eclipses that may be caused by tidally induced pulsations. The
authors warn that they had “not attempted to characterise the level of
systematic error” in the parameters they derived.
Lester & Gies (2018) analysed the same data
together with new radial velocity (RV) measurements. The binary star model
they used gave a poor fit to the light curve through the secondary eclipse so
the radii they derive may be subject to significant systematic error. The
standard errors they quote on the stellar masses based on $N=14$ pairs of RV
measurements are clearly underestimated. The RMS (root-mean square) of the
residuals from their spectroscopic orbit fit to these RVs is $\sigma\approx 3.6$ km s${}^{-1}$ so the semi-amplitudes of the spectroscopic orbit are expected to
have errors $\sigma_{K}\approx\sigma/\sqrt{N/2}=1.4$ km s${}^{-1}$ (Montgomery & O’Donoghue, 1999), cf. the quoted errors of 0.27 km s${}^{-1}$.
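For reference, the error estimate quoted above can be reproduced in one line ($\sigma\approx 3.6$ km s${}^{-1}$ and $N=14$ are the values given in the text):

```python
import math

sigma = 3.6                      # RMS of the RV residuals, km/s
N = 14                           # number of pairs of RV measurements
sigma_K = sigma / math.sqrt(N / 2)
print(round(sigma_K, 2))         # ~1.36 km/s, i.e. ~1.4 km/s as quoted
```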
Table 1 shows the absolute parameters for BW Aqr derived from an improved
analysis of the Kepler K2 light curve plus the RV measurements from both
Imbert (1979) and Lester & Gies (2018). The light curve
data used are identical to those shown in Maxted & Hutcheon (2018). We used
ellc version 1.8.0 (pypi.org/project/ellc/) to model the light curve
using the power-2 limb darkening law, $I_{\lambda}(\mu)=1-c\left(1-\mu^{\alpha}\right)$, with the same parameters $c$ and $\alpha$ for both
stars and with Gaussian priors on the
parameters $h_{1}=1-c\left(1-2^{-\alpha}\right)=0.78\pm 0.02$ and $h_{2}=c2^{-\alpha}=0.44\pm 0.10$ (Maxted, 2018). The “default”
grid size in ellc was used so that numerical noise is less than 60 ppm.
Other details of the fit to the light curve are similar to those described in
Maxted & Hutcheon (2018).
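Assuming the standard power-2 parametrisation given above, the central prior values can be mapped back to the coefficients $c$ and $\alpha$ by inverting the two defining relations; a quick sketch:

```python
import math

h1, h2 = 0.78, 0.44        # central values of the Gaussian priors
# Invert h1 = 1 - c(1 - 2^{-alpha}) and h2 = c 2^{-alpha}:
c = 1.0 - h1 + h2          # c = 0.66
alpha = math.log2(c / h2)  # alpha = log2(c / h2) ~ 0.585
print(c, alpha)
```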
Each pair of consecutive primary and secondary eclipses in the K2 light curve
was analysed separately using the emcee algorithm to determine the
median of the posterior probability distributions (PPDs) for the model
parameters. The results shown in Table 1 are calculated from the mean and its
standard error from these 10 median values. The RV measurements were fit
simultaneously with the times of mid-eclipse from Lester & Gies (2018)
and Volkov & Chochol (2014) using the model omdot to account for
apsidal motion (Maxted et al., 2015).
The values in Table 1 with their robust error estimates from the standard
deviation of the mean are consistent with the values and errors from
Maxted & Hutcheon (2018) based on the PPD calculated using emcee for
a fit to the entire K2 light curve.
PM gratefully acknowledges support provided by the UK Science and Technology
Facilities Council through grant number ST/M001040/1.
Facilities: Kepler
References
Claret & Torres (2018)
Claret, A., & Torres, G. 2018, ArXiv e-prints.
https://arxiv.org/abs/1804.03148
Clausen (1991)
Clausen, J. V. 1991, A&A, 246, 397
Foreman-Mackey et al. (2013)
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP,
125, 306, doi: 10.1086/670067
Imbert (1979)
Imbert, M. 1979, A&AS, 36, 453
Lester & Gies (2018)
Lester, K. V., & Gies, D. R. 2018, ArXiv e-prints.
https://arxiv.org/abs/1805.02670
Maxted (2016)
Maxted, P. F. L. 2016, A&A, 591, A111, doi: 10.1051/0004-6361/201628579
Maxted (2018)
—. 2018, ArXiv e-prints.
https://arxiv.org/abs/1804.07943
Maxted & Hutcheon (2018)
Maxted, P. F. L., & Hutcheon, R. J. 2018, ArXiv e-prints.
https://arxiv.org/abs/1803.10522
Maxted et al. (2015)
Maxted, P. F. L., Hutcheon, R. J., Torres, G., et al. 2015, A&A, 578,
A25, doi: 10.1051/0004-6361/201525873
Montgomery & O’Donoghue (1999)
Montgomery, M. H., & O’Donoghue, D. 1999, Delta Scuti Star Newsletter, 13,
28
Volkov & Chochol (2014)
Volkov, I. M., & Chochol, D. 2014, CoSka, 43, 419 |
Editorial on Research Topic:
High-Tc Superconductivity in Electron-Doped Iron Selenide and Related Compounds
Jose P. Rodriguez ${}^{1,*}$, Dmytro S. Inosov ${}^{2}$ and Jun Zhao ${}^{3}$
Iron-selenide superconductors comprise a particularly interesting group of materials
inside the family of iron-based superconductors.
The simplest member of the group is bulk FeSe, which has a modest critical temperature of $T_{c}=9$ K.
Like iron-pnictide superconductors, bulk FeSe shows a structural transition at $T_{s}=90$ K from a
tetragonal to an orthorhombic phase driven by nematic ordering of the electronic degrees of freedom.
Angle-resolved photoemission spectroscopy (ARPES), for example,
reveals a small hole Fermi surface pocket at the
center of the Brillouin zone and two electron Fermi surface pockets at the corner of the Brillouin zone,
each with unequal $d_{xz}/d_{yz}$-orbital character.
Unlike in iron-pnictide superconductors, however, no magnetic order coexists with the nematic order
at temperatures below the structural transition.
Inelastic neutron scattering (INS) spectroscopy nevertheless finds a spin resonance inside the energy gap
of the superconducting phase of bulk FeSe, at wavevectors
corresponding to a stripe spin-density wave (SDW)[1].
This strongly suggests $s^{+-}$ superconductivity across the hole and electron Fermi surface pockets,
driven by the associated antiferromagnetic spin fluctuations.
INS also finds spin fluctuations at the Néel wavevector $(\pi,\pi)$
above the superconducting energy gap[2].
This suggests that
superconductivity, nematic order, stripe-SDW order,
and some type of Néel antiferromagnetic order compete at low temperature in bulk FeSe.
One of the editors of the research topic has proposed that the latter is hidden Néel order[3, 4].
The superconducting critical temperature increases dramatically to
$30$-$40$ K and above upon doping iron selenide with electrons.
The latter has been achieved in various ways;
for example, by alkali-metal intercalation, by placing a monolayer of FeSe
on a substrate, and by organic-molecule intercalation.
ARPES finds that the hole bands at the center of the Brillouin zone lie buried below the Fermi level.
INS finds a spin resonance inside the superconducting energy gap,
but it lies midway between the SDW and Néel wavevectors[5].
INS also finds peaks and rings of low-energy spin excitations above the energy gap
around the Néel wavevector[6, 7].
ARPES and scanning tunneling microscopy (STM) find a non-zero superconducting energy gap.
The situation with electron-doped FeSe is rather puzzling then,
with high-$T_{c}$ superconductivity existing over electron Fermi surface pockets alone!
This is not expected in iron-selenide superconductors,
where electron-electron repulsion is strong[8].
The latter requires that the sign of the pair wavefunction oscillates over the Brillouin zone[4].
It is our pleasure to introduce eight articles from the Research Topic that address many of
the unsolved problems that have emerged in the field of iron-selenide superconductors,
some of which we have listed above.
The contributions to the Research Topic include both theory and experiment,
with four papers reporting original research and four review papers.
Tong Chen, Min Yi and Pengcheng Dai review how nematicity in bulk iron selenide
can be scrutinized by exploiting detwinning techniques[9],
while Amalia Coldea reviews the series of nematic superconductors FeSe${}_{1-x}$S${}_{x}$[10].
Both articles tackle the interplay between nematicity and superconductivity that exists in bulk FeSe,
with or without chemical substitutions.
Maw-Kuen Wu and collaborators show that insulating Fe${}_{4}$Se${}_{5}$ becomes a superconductor with $T_{c}=8$ K
after proper annealing[11].
They thereby argue that Fe${}_{4}$Se${}_{5}$ is the insulating parent compound for iron-selenide superconductors.
It would clearly be useful to compare future studies of the low-energy spin excitations
in Fe${}_{4}$Se${}_{5}$ with those of its electron-doped counterpart
Rb${}_{2}$Fe${}_{4}$Se${}_{5}$ [5, 6].
Finally, Xiaoli Dong, Fang Zhou and Zhongxian Zhao review a new soft-chemical technique to
grow high-quality single crystals of organic-molecule intercalated FeSe[12].
Their samples have critical temperatures of $T_{c}=42$ K,
and they notably show record critical currents.
On the theory side, Rong Yu, Qimiao Si and collaborators
review $3d$-orbital-selective physics in iron superconductors[13].
They point out how the $d_{xy}$ orbital is
the one most susceptible to Mott localization in iron-selenide superconductors[8].
They also emphasize how
the relatively small energy splitting between the $d_{xz}/d_{yz}$ orbitals
that is seen by ARPES in the nematic phase of bulk FeSe,
$\Delta E_{\Gamma}$ and $\Delta E_{M}<50$ meV,
can be reconciled with the large orbitally-dependent
wavefunction renormalizations seen by STM in the same phase,
$Z(d_{yz})/Z(d_{xz})=4$.
Maxim Dzero and Maxim Khodas study the effect of point disorder on the stripe SDW state
by exploiting a quasi-classical Green’s function technique[14].
They find that the tetragonally symmetric stripe SDW state is more robust with respect to disorder
than the orthorhombically symmetric one.
This result could have bearing on the absence of magnetic order in the nematic phase of bulk FeSe, for example.
Last, Andrzej Ptok, Konrad Kapcia and Przemysław Piekarz
study a two-band model for iron superconductors
that includes intra-band and inter-band coupling between Cooper pairs[15].
They notably find Cooper pair states in relative orbitals of mixed symmetry.
Finally,
Rustem Khasanov and collaborators applied muon-spin rotation/relaxation ($\mu$SR)
on the iron-pnictide superconductor NdFeAsO${}_{0.65}$F${}_{0.35}$,
thereby obtaining London penetration lengths[16].
Interestingly,
a two-band analysis of their data yields only weak inter-band coupling of the Cooper pairs.
The brief survey above of the author contributions to the Research Topic conveys
the richness of the field of iron-selenide superconductivity and related materials.
We believe that you will enjoy reading the Research Topic.
Yours sincerely, Jose Rodriguez, Dmytro Inosov and Jun Zhao,
February 28, 2022.
References
[1]
Wang Q, Shen Y, Pan B, Hao Y, Ma M, Zhou F, Steffens P, Schmalzl K, Forrest TR,
Abdel-Hafiez M, Chareev DA, Vasiliev AN, Bourges P, Sidis Y, Cao H, and Zhao J,
“Interplay between stripe spin fluctuations, nematicity and superconductivity in FeSe”.
Nat Mater. (2016) 15:159-163. doi:10.1038/nmat4535
[2]
Wang Q, Shen Y, Pan B, Zhang X, Ikeuchi K, Iida K, Christianson AD, Walker HC,
Adroja DT, Abdel-Hafiez M, Chen X, Chareev DA, Vasiliev AN, and Zhao J,
“Magnetic ground state of FeSe”. Nat Commun. (2016) 7:1–7. doi: 10.1038/ncomms12182
[3]
Rodriguez JP, “Spin resonances in iron selenide high-Tc superconductors by proximity to
a hidden spin density wave”, Phys Rev B. (2020) 102: 024521. doi: 10.1103/PhysRevB.102.024521
[4]
Rodriguez JP, “Superconductivity by hidden spin fluctuations in electron-doped iron selenide”,
Phys Rev B. (2021) 103: 184513. doi: 10.1103/PhysRevB.103.184513
[5]
Park JT, Friemel G, Li Y, Kim JH, Tsurkan V, Deisenhofer J, Krug von Nidda HA,
Loidl A, Ivanov A, Keimer B, Inosov DS,
“Magnetic resonant mode in the low-energy spin excitation spectrum
of superconducting Rb${}_{2}$Fe${}_{4}$Se${}_{5}$ single crystals”,
Phys Rev Lett. (2011) 107: 177005. doi: 10.1103/PhysRevLett.107.177005
[6]
Friemel G, Park JT, Maier TA, Tsurkan V, Li Y, Deisenhofer J, Krug von Nidda HA,
Loidl A, Ivanov A, Keimer B, Inosov DS,
“Reciprocal-space structure and dispersion of the magnetic resonant mode in the superconducting
phase of Rb${}_{x}$Fe${}_{2-y}$Se${}_{2}$ single crystals”,
Phys Rev B. (2012) 85: 140511(R). doi: 10.1103/PhysRevB.85.140511
[7]
Pan B, Shen Y, Hu D, Feng Y, Park JT, Christianson AD,
Wang, Q, Hao Y, Wo H, Yin Z, Maier TA, and Zhao J,
“Structure of spin excitations in heavily electron-doped Li${}_{0.8}$Fe${}_{0.2}$ODFeSe superconductors”,
Nat Commun. (2017) 8: 123. doi: 10.1038/s41467-017-00162-x
[8]
Yi M, Liu ZK, Zhang Y, Zhu JX, Lee JJ, Moore RG, Schmitt FT, Li W, Riggs SC, Chu JH,
Lv B, Hu J, Hashimoto M, Mo SK, Hussain Z, Mao ZQ, Chu CW, Fisher IR, Si Q, Shen ZX, and Lu DH,
“Observation of universal strong orbital-dependent correlation effects in iron chalcogenides”,
Nat Commun. (2015) 6: 7777. doi: 10.1038/ncomms8777
[9]
Chen T, Yi M, and Dai P,
“Electronic and magnetic anisotropies in FeSe family of iron-based superconductors”,
Front Phys. 21 August 2020. doi: 10.3389/fphy.2020.00314
[10]
Coldea AI,
“Electronic nematic states tuned by isoelectronic substitutions in bulk FeSe${}_{1-x}$S${}_{x}$”,
Front Phys. 23 March 2021. doi: 10.3389/fphy.2020.594500
[11]
Yeh K, Chen Y, Lo T, Wu P, Wang M, Chang-Liao K, and Wu M,
“Fe-vacancy-ordered Fe${}_{4}$Se${}_{5}$: the insulating parent phase of FeSe superconductor”,
Front Phys. 13 November 2020. doi: 10.3389/fphy.2020.567054
[12]
Dong X, Zhou F, Zhao Z,
“Electronic and superconducting properties of some FeSe-based single crystals and films grown hydrothermally”,
Front Phys. 11 November 2020. doi: 10.3389/fphy.2020.586182
[13]
Yu R, Hu H, Nica EM, Zhu JX, and Si Q,
“Orbital selectivity in electron correlations and superconducting pairing of iron-based superconductors”,
Front Phys. 05 May 2021. doi: 10.3389/fphy.2021.578347
[14]
Dzero M and Khodas M,
“Quasiclassical theory of $C_{4}$-symmetric magnetic order in disordered multiband metals”,
Front Phys. 08 September 2020. doi: 10.3389/fphy.2020.00356
[15]
Ptok A, Kapcia KJ, and Piekarz P,
“Effects of pair-hopping coupling on properties of multi-band iron-based superconductors”,
Front Phys. 19 August 2020. doi: 10.3389/fphy.2020.00284
[16]
Gupta R, Maisuradze A, Zhigadlo ND, Luetkens H, Amato A, and Khasanov R,
“Self-consistent two-gap approach in studying multi-band superconductivity of NdFeAsO${}_{0.65}$F${}_{0.35}$”,
Front Phys. 30 January 2020. doi: 10.3389/fphy.2020.00002 |
The Noise Collector for sparse recovery in high dimensions
Miguel Moscoso (Department of Mathematics, Universidad Carlos III de Madrid, Leganes, Madrid 28911, Spain),
Alexei Novikov (Department of Mathematics, Pennsylvania State University, University Park, PA 16802),
George Papanicolaou (Department of Mathematics, Stanford University, Stanford, CA 94305),
Chrysoula Tsogka (Department of Applied Mathematics, University of California, Merced, CA 95343)
Abstract
The ability to detect sparse signals from noisy high-dimensional data is a top priority in modern science and engineering.
A sparse solution of the linear system ${\cal A}\mbox{\boldmath{$\rho$}}=\mbox{\boldmath{$b$}}_{0}$ can be found efficiently with an $\ell_{1}$-norm minimization approach if the data is noiseless.
Detection of the signal’s support from data corrupted by noise is still a challenging problem, especially if the level of noise must be estimated.
We propose a new efficient approach that does not require any parameter estimation. We introduce the Noise Collector (NC) matrix ${\cal C}$ and solve an augmented system
${\cal A}\mbox{\boldmath{$\rho$}}+{\cal C}\mbox{\boldmath{$\eta$}}=\mbox{\boldmath{$b$}}_{0}+\mbox{\boldmath{$e$}}$, where $\mbox{\boldmath{$e$}}$ is the noise. We show that the $\ell_{1}$-norm minimal solution of the augmented system has zero false discovery rate for any level of noise and with probability that tends to one as the dimension of $\mbox{\boldmath{$b$}}_{0}$ increases to infinity.
We also obtain exact support recovery if the noise is not too large, and develop a Fast Noise Collector Algorithm which makes the computational cost of solving the augmented system comparable to that of the original one. Finally, we demonstrate the effectiveness of the method in applications to passive array imaging.
We want to find sparse solutions $\mbox{\boldmath{$\rho$}}\in\mathbb{R}^{K}$ for
$${\cal A}\,\mbox{\boldmath{$\rho$}}=\mbox{\boldmath{$b$}},$$
(1)
from highly incomplete measurement data $\mbox{\boldmath{$b$}}=\mbox{\boldmath{$b$}}_{0}+\mbox{\boldmath{$e$}}\in\mathbb{R}^{N}$, corrupted by noise $\mbox{\boldmath{$e$}}$, where $1\ll N<K$. In the noiseless case, $\mbox{\boldmath{$\rho$}}$
can be found exactly by solving the optimization problem [9]
$$\mbox{\boldmath{$\rho$}}_{*}=\arg\min_{\mbox{\boldmath{$\rho$}}}\|\mbox{\boldmath{$\rho$}}\|_{\ell_{1}},\hbox{ subject to }{\cal A}\,\mbox{\boldmath{$\rho$}}=\mbox{\boldmath{$b$}},$$
(2)
provided the measurement matrix ${\cal A}\in\mathbb{R}^{N\times K}$ satisfies additional conditions, e.g., decoherence or restricted isometry properties [11, 4],
and the solution vector $\rho$ has a small number $M$ of nonzero components or degrees of freedom.
When the measurements are noisy, exact recovery is no longer possible. However, the exact support of $\mbox{\boldmath{$\rho$}}$ can still be determined if the noise is not too strong. The most commonly used approach is to solve the $\ell_{2}$-relaxed form of (2)
$$\mbox{\boldmath{$\rho$}}_{\lambda}=\arg\min_{\mbox{\boldmath{$\rho$}}}\left(\lambda\|\mbox{\boldmath{$\rho$}}\|_{\ell_{1}}+\|{\cal A}\mbox{\boldmath{$\rho$}}-\mbox{\boldmath{$b$}}\|^{2}_{\ell_{2}}\right),$$
(3)
known as Lasso in the statistics literature [26]. There are sufficient conditions for the support of $\mbox{\boldmath{$\rho$}}_{\lambda}$ to be contained within the true support, see e.g.
Fuchs [14], Tropp [27] and Wainwright [31]. These conditions depend
on the signal-to-noise ratio (SNR), which is not known and must be estimated, and on the regularization parameter $\lambda$, which must be carefully chosen and/or adaptively changed [32].
Although such an adaptive procedure improves the outcome, the resulting solutions tend to include a large number of “false positives” in practice [23].
Our contribution is a method for exact support recovery in the presence of additive noise. A key element of this method is that it has no tuning parameters. In particular,
it does not require any prior knowledge of the level of noise, which is often difficult to estimate.
Main Results. Suppose $\mbox{\boldmath{$\rho$}}$ is an $M$-sparse solution of the noiseless system in (1),
where the columns of ${\cal A}$ have unit length. Our main result ensures that we can
recover the support of $\mbox{\boldmath{$\rho$}}$ by looking at the support of $\mbox{\boldmath{$\rho$}}_{\tau}$, found as
$$\left(\mbox{\boldmath{$\rho$}}_{\tau},\mbox{\boldmath{$\eta$}}_{\tau}\right)=\arg\min_{\mbox{\boldmath{$\rho$}},\mbox{\boldmath{$\eta$}}}\left(\tau\|\mbox{\boldmath{$\rho$}}\|_{\ell_{1}}+\|\mbox{\boldmath{$\eta$}}\|_{\ell_{1}}\right),$$
(4)
$$\hbox{subject to }{\cal A}\mbox{\boldmath{$\rho$}}+{\cal C}\mbox{\boldmath{$\eta$}}=\mbox{\boldmath{$b$}}_{0}+\mbox{\boldmath{$e$}},$$
with an $O(\sqrt{\ln N})$ weight $\tau$ and an appropriately chosen Noise Collector matrix ${\cal C}\in\mathbb{R}^{N\times\Sigma}$, $\Sigma\gg K$.
The minimization problem (4) can be understood as a relaxation of (2).
It works by absorbing all the noise, and possibly some signal, in ${\cal C}\mbox{\boldmath{$\eta$}}_{\tau}$.
The following theorem shows that if the signal is pure noise, and
the columns of the Noise Collector are chosen uniformly and independently at random on the unit sphere $\mathbb{S}^{N-1}=\left\{x\in\mathbb{R}^{N},\|x\|_{\ell_{2}}=1\right\}$, then
${\cal C}\mbox{\boldmath{$\eta$}}_{\tau}=\mbox{\boldmath{$e$}}$ for any level of noise, with high probability.
Theorem 1 (No phantom signal): Suppose $\mbox{\boldmath{$b$}}_{0}=0$ and $\mbox{\boldmath{$e$}}/\|\mbox{\boldmath{$e$}}\|_{\ell_{2}}$ is uniformly distributed on
the unit sphere $\mathbb{S}^{N-1}$. Fix $\beta>1$ and draw $\Sigma=N^{\beta}$ columns for ${\cal C}$ independently from the uniform distribution on
$\mathbb{S}^{N-1}$.
For any $\kappa>0$ there are constants $c_{0}=c_{0}(\kappa,\beta)$ and $N_{0}=N_{0}(\kappa,\beta)$
such that, for $\tau=c_{0}\sqrt{\ln N}$ and all $N>N_{0}$, the solution $\mbox{\boldmath{$\rho$}}_{\tau}$ of (4) is zero with probability $1-1/N^{\kappa}$.
Theorem 1 guarantees a zero false discovery rate in the absence of
signals with meaningful information, with high probability.
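The no-phantom-signal statement lends itself to a quick numerical sanity check: problem (4) is a linear program once the variables are split into nonnegative parts, so it can be solved with an off-the-shelf LP solver. The sketch below is ours, with toy dimensions and the heuristic weight $\tau=0.8\sqrt{\ln N}$ that the paper later uses in its experiments; it feeds pure noise to the augmented system.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K, Sigma = 30, 50, 200          # toy sizes (our choice)
A = rng.standard_normal((N, K)); A /= np.linalg.norm(A, axis=0)
C = rng.standard_normal((N, Sigma)); C /= np.linalg.norm(C, axis=0)
b = rng.standard_normal(N)         # pure noise: b_0 = 0
tau = 0.8 * np.sqrt(np.log(N))

# min tau*||rho||_1 + ||eta||_1  subject to  A rho + C eta = b,
# with rho = rho_p - rho_m, eta = eta_p - eta_m, all parts >= 0
cost = np.concatenate([tau * np.ones(2 * K), np.ones(2 * Sigma)])
res = linprog(cost, A_eq=np.hstack([A, -A, C, -C]), b_eq=b,
              bounds=(0, None), method="highs")
rho = res.x[:K] - res.x[K:2 * K]
eta = res.x[2 * K:2 * K + Sigma] - res.x[2 * K + Sigma:]
residual = np.linalg.norm(A @ rho + C @ eta - b)
```

Per Theorem 1 one expects $\|\mbox{\boldmath{$\rho$}}_{\tau}\|_{\ell_{1}}$ to be (near) zero here, although at these small toy sizes the asymptotic statement is only approximate.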
In the next theorem we generalize this result to the case in which the recorded signals
carry useful information, and show that
the support of $\mbox{\boldmath{$\rho$}}_{\tau}$ is contained in the support of $\mbox{\boldmath{$\rho$}}$.
Theorem 2 (Zero false discoveries): Let $\mbox{\boldmath{$\rho$}}$ be an $M$-sparse solution of the noiseless system ${\cal A}\mbox{\boldmath{$\rho$}}=\mbox{\boldmath{$b$}}_{0}$. Assume $\kappa$, $\beta$, the Noise Collector, the noise, and $\mbox{\boldmath{$\rho$}}_{\tau}$ are the same as in Theorem 1.
In addition, assume that the columns of ${\cal A}$ are incoherent, in the sense that $|\langle\mbox{\boldmath{$a$}}_{i},\mbox{\boldmath{$a$}}_{j}\rangle|\leqslant\frac{1}{3M}$ for $i\neq j$.
Then, there are constants $c_{0}=c_{0}(\kappa,\beta)$ and $N_{0}=N_{0}(\kappa,\beta)$ such that, for $\tau=c_{0}\sqrt{\ln N}$ and all $N>N_{0}$,
$\mbox{supp}(\mbox{\boldmath{$\rho$}}_{\tau})\subseteq\mbox{supp}(\mbox{\boldmath{$\rho$}})$
with probability $1-1/N^{\kappa}$.
The incoherence conditions in Theorem 2 are needed to guarantee that the true signal does not create false positives elsewhere.
The next theorem shows that if the noise is not too large, then $\mbox{\boldmath{$\rho$}}_{\tau}$ and $\mbox{\boldmath{$\rho$}}$ have exactly the same support.
Theorem 3 (Exact support recovery): Keep the same assumptions as in Theorem 2, and suppose the magnitudes of the non-zero entries of $\mbox{\boldmath{$\rho$}}$ are bounded by $\gamma$.
If $\|\mbox{\boldmath{$e$}}\|_{\ell_{2}}/\|\mbox{\boldmath{$b$}}_{0}\|_{\ell_{2}}\leqslant c_{2}/\sqrt{\ln N}$, with $c_{2}=c_{2}(\kappa,\beta,\gamma,M)$, then $\mbox{\boldmath{$\rho$}}_{\tau}$ and $\mbox{\boldmath{$\rho$}}$ have the same support with probability
$1-1/N^{\kappa}$.
Motivation. We are interested in accurately imaging sparse scenes using limited and noisy data.
Such imaging problems arise in many areas such as medical imaging [29], structural biology [1], radar [2], and geophysics [24].
In imaging, the $\ell_{1}$-norm minimization method in (2) is often used, see e.g. [19, 22, 16, 28, 12, 6].
This method has the desirable property of super-resolution, that is, the enhancement of fine-scale details of the images using, in this case, prior information about their low-dimensional structure (sparsity).
This has been analyzed in different settings by Donoho and Elad [10], Candès and Fernandez-Granda [5],
Fannjiang and Liao [13], and Borcea and Kocyigit [3], among others. We want to retain this property in our method when the data is
corrupted by additive noise.
However, noise fundamentally limits the quality of the images formed with almost all computational imaging techniques.
Specifically, $\ell_{1}$-norm minimization produces images that are unstable at low SNR, due to the ill-conditioning
of super-resolution reconstruction schemes. The instability emerges as clutter noise in the images, or “grass”, that degrades the resolution.
Our initial motivation to introduce the Noise Collector matrix ${\cal C}$ was to regularize the matrix ${\cal A}$ and, thus, to suppress the clutter in the images.
We proposed in [20] to seek the minimal $\ell_{1}$-norm solution of the augmented linear system ${\cal A}\mbox{\boldmath{$\rho$}}+{\cal C}\mbox{\boldmath{$\eta$}}=\mbox{\boldmath{$b$}}$.
The idea was to choose the columns of ${\cal C}$ almost orthogonal to those of ${\cal A}$.
Indeed, the condition number of $[{\cal A}\,|\,{\cal C}]$
becomes $O(1)$ when $O(N)$ columns of ${\cal C}$ are taken at random. This essentially follows from the bounds on the largest and the smallest nonzero singular values of random matrices,
see e.g. Theorem 4.6.1 in [30].
The idea of creating a dictionary for noise is not new. For example, the work by Laska et al. [17] considers a
specific version of the measurement noise model, $\mbox{\boldmath{$b$}}={\cal A}\mbox{\boldmath{$\rho$}}+{\cal C}\mbox{\boldmath{$e$}}$, where ${\cal C}$ is a matrix with fewer (orthonormal)
columns than rows, and the noise vector $\mbox{\boldmath{$e$}}$ is sparse. There, ${\cal C}$ represents the basis in which the noise is sparse
and is assumed to be known. They then show that it is possible to recover sparse signals and sparse noise exactly using
$\ell_{1}$-norm minimization algorithms. We stress that we do not assume here that the noise is sparse. In our work,
the noise is large (the SNR can be small) and is evenly distributed across the data, so it cannot be sparsely accommodated.
To suppress the clutter, our theory in [20] required exponentially many columns, $\Sigma\lesssim e^{N}$.
This seemed to make the Noise Collector impractical, but numerical experiments suggested that $O(N)$ columns were enough to obtain excellent results.
We address this issue here and explain why the Noise Collector matrix ${\cal C}$ only needs algebraically many columns.
Moreover, to make the absorption of noise less expensive, and thus improve the algorithm in [20], we introduce the weight $\tau$ in (4).
Indeed, by weighting the columns of the Noise Collector matrix ${\cal C}$ with respect to those in the model matrix ${\cal A}$, the algorithm
now produces images with no clutter at all, no matter how much noise is added to the data.
Finally, we want the Noise Collector to be efficient, with almost no extra computational cost with respect to the Lasso problem in (3).
To this end, it is constructed using circulant matrices, which allow for efficient matrix-vector multiplications using FFTs.
The proofs of Theorems 1, 2, and 3 are given in Section Proofs. We now explain how the Noise Collector works.
The Noise Collector
The construction of the Noise Collector matrix ${\cal C}$ starts with the following three key properties. Firstly, its columns should be sufficiently orthogonal to the columns of ${\cal A}$,
so that it does not absorb signals with “meaningful” information. Secondly, the
columns of ${\cal C}$ should be uniformly distributed on the unit sphere $\mathbb{S}^{N-1}$, so that we can approximate well a typical noise vector. Thirdly,
the number of columns in ${\cal C}$ should grow slower than exponentially with $N$; otherwise the method is impractical.
One way to guarantee all three properties is to impose
$$|\langle\mbox{\boldmath{$a$}}_{i},\mbox{\boldmath{$c$}}_{j}\rangle|<\frac{\alpha}{\sqrt{N}}~{}\forall i,j\,,\hbox{ and }|\langle\mbox{\boldmath{$c$}}_{i},\mbox{\boldmath{$c$}}_{j}\rangle|<\frac{\alpha}{\sqrt{N}}~{}\forall i\neq j,$$
(5)
with $\alpha>1$, and fill out ${\cal C}$ by drawing $\mbox{\boldmath{$c$}}_{i}$ at random with rejections, until the rejection rate becomes too high. Then, by construction, the columns of ${\cal C}$ are almost orthogonal
to the columns of ${\cal A}$; once the rejection rate becomes too high, we cannot pack more $N$-dimensional unit vectors into ${\cal C}$, and thus we
can approximate well a typical noise vector.
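This rejection-sampling construction can be sketched directly: draw random unit vectors and keep only those that satisfy both decoherence constraints in (5). The sketch below is our illustration; the dimensions, the rejection budget, and the loose bound $\alpha=3$ (chosen so that the loop terminates quickly at this small $N$, whereas the text takes $\alpha>1$ close to one) are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, alpha = 64, 32, 3.0
A = rng.standard_normal((N, K))
A /= np.linalg.norm(A, axis=0)               # unit-length columns of A

cols, rejections, max_rejections = [], 0, 2000
while rejections < max_rejections:
    c = rng.standard_normal(N)
    c /= np.linalg.norm(c)
    # accept c only if it is incoherent with A and with the accepted columns
    ok = np.max(np.abs(A.T @ c)) < alpha / np.sqrt(N)
    if ok and cols:
        ok = np.max(np.abs(np.array(cols) @ c)) < alpha / np.sqrt(N)
    if ok:
        cols.append(c)
    else:
        rejections += 1                      # proxy for "rejection rate too high"
C = np.column_stack(cols)
```

Every accepted column of ${\cal C}$ is then $\alpha/\sqrt{N}$-incoherent with all columns of ${\cal A}$ and with the other columns of ${\cal C}$, which is exactly condition (5).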
Finally, the Kabatjanskii-Levenstein inequality (see discussion in [25])
implies that
the number $\Sigma$ of columns in ${\cal C}$ grows at most polynomially:
$\Sigma\leqslant N^{\alpha^{2}}$.
It is, however, more convenient for the proofs to use a probabilistic version of (5). Suppose that the columns of ${\cal C}$ are drawn independently at random.
Then, the dot product of any two random unit vectors is still typically of order $1/\sqrt{N}$, see e.g. [30]. If the number of columns grows polynomially, we only
have to sacrifice an asymptotically negligible event on which our Noise Collector does not
satisfy the three key properties, and the decoherence constraints in (5) are weakened by a logarithmic factor. The next lemma is proved in Section
Proofs.
Lemma 1: Suppose $\Sigma=N^{\beta}$, $\beta>1$, and the vectors $\mbox{\boldmath{$c$}}_{i}\in\mathbb{S}^{N-1}$ are drawn independently at random. Then, (i) for any $\kappa>0$ there are constants $c_{0}(\kappa,\beta)$ and $\alpha>1/2$ such that
$$|\langle\mbox{\boldmath{$a$}}_{i},\mbox{\boldmath{$c$}}_{j}\rangle|<c_{0}\sqrt{\ln N}/\sqrt{N}\quad\forall i,j,$$
(6)
and (ii) for any $\mbox{\boldmath{$e$}}\in\mathbb{S}^{N-1}$ there exists at least one $\mbox{\boldmath{$c$}}_{j}$ such that
$$|\langle\mbox{\boldmath{$e$}},\mbox{\boldmath{$c$}}_{j}\rangle|\geqslant\alpha/\sqrt{N}\,,$$
(7)
with probability $1-1/N^{\kappa}$.
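The $1/\sqrt{N}$ concentration behind Lemma 1 is easy to observe numerically: the dot product of two independent random unit vectors is typically of size $1/\sqrt{N}$, and the maximum over polynomially many columns gains only a logarithmic factor. The sizes below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, Sigma = 400, 4000                      # polynomially many columns, Sigma ~ N^1.4
C = rng.standard_normal((N, Sigma))
C /= np.linalg.norm(C, axis=0)            # columns uniform on the unit sphere
e = rng.standard_normal(N)
e /= np.linalg.norm(e)                    # a normalized "noise" direction

dots = np.abs(C.T @ e)
typical = np.median(dots) * np.sqrt(N)    # O(1): a typical dot product is ~ 1/sqrt(N)
worst = dots.max() * np.sqrt(N)           # grows only like sqrt(ln Sigma)
```

The rescaled median stays order one while the rescaled maximum exceeds it only by a logarithmic factor, matching (6) and (7).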
The estimate in (6) implies that any solution of
${\cal C}\mbox{\boldmath{$\eta$}}=\mbox{\boldmath{$a$}}_{i}$
satisfies, for any $i\leqslant N$,
$$\|\mbox{\boldmath{$\eta$}}\|_{\ell_{1}}\geqslant\frac{\sqrt{N}}{c_{0}\sqrt{\ln N}}\,,$$
(8)
with probability $1-1/N^{\kappa}$. This estimate measures how expensive it is to approximate columns of ${\cal A}$ with the Noise Collector.
In turn, the weight $\tau$ should be chosen so that it is expensive to approximate noise using columns of ${\cal A}$. It cannot be taken too large, though, because we may lose the signal.
In fact, one can prove that if $\tau\geqslant\sqrt{N}/\alpha$, then $\mbox{\boldmath{$\rho$}}_{\tau}\equiv 0$ for any $\mbox{\boldmath{$\rho$}}$ and any level of noise.
Intuitively, the weight $\tau$ characterizes the rate at which the signal is lost as the noise increases.
To explain the theoretical lower bound $\tau\geqslant c_{0}\sqrt{\ln N}$ we turn to the geometric interpretation of duality in linear programming.
Suppose $\tau=\infty$ and there is no signal: $\mbox{\boldmath{$b$}}_{0}=0$.
Then the solution of (4) satisfies $(\mbox{\boldmath{$\rho$}}_{\infty},\mbox{\boldmath{$\eta$}}_{\infty})=(\mbox{\boldmath{$0$}},\mbox{\boldmath{$\eta$}})$,
and there is a dual certificate $\mbox{\boldmath{$z$}}$ of optimality
of $(\mbox{\boldmath{$0$}},\mbox{\boldmath{$\eta$}})$ for $\tau=\infty$
that satisfies
$$\langle\mbox{\boldmath{$c$}}_{j},\mbox{\boldmath{$z$}}\rangle=\hbox{sign}(\eta_{j})\,\hbox{ if }\eta_{j}\neq 0,\hbox{ and }|\langle\mbox{\boldmath{$c$}}_{j},\mbox{\boldmath{$z$}}\rangle|\leqslant 1\,\hbox{ if }\eta_{j}=0.$$
Define a nonlinear map
$$\Phi_{\cal C}:\mbox{\boldmath{$e$}}\to\mbox{\boldmath{$z$}},$$
(9)
where $\mbox{\boldmath{$e$}}\in\mathbb{R}^{N}$ is the noise vector in (4), and $\mbox{\boldmath{$z$}}$ is the dual certificate of optimality of $(\mbox{\boldmath{$0$}},\mbox{\boldmath{$\eta$}})$ for $\tau=\infty$.
For example, if ${\cal C}$ is the identity matrix, then $\Phi_{\cal C}(\mbox{\boldmath{$e$}})=(\hbox{sign}(e_{1}),\dots,\hbox{sign}(e_{N}))$; see Figure 1-left.
If $\mbox{\boldmath{$z$}}=\Phi_{\cal C}(\mbox{\boldmath{$e$}})$ remains a dual certificate of optimality of $(\mbox{\boldmath{$0$}},\mbox{\boldmath{$\eta$}})$ for $\tau=c_{0}\sqrt{\ln N}$, then it implies
that $\mbox{supp}(\mbox{\boldmath{$\rho$}}_{\tau})\subseteq\mbox{supp}(\mbox{\boldmath{$\rho$}})$ for such $\tau$. Thus, Theorem 1 follows once we check that
$$|\langle\mbox{\boldmath{$a$}}_{j},\mbox{\boldmath{$z$}}\rangle|<\tau,\hbox{ for all }j\leqslant K,$$
(10)
holds with large probability. We therefore need to understand the statistics of $\mbox{\boldmath{$z$}}=\Phi_{\cal C}(\mbox{\boldmath{$e$}})$, given that $\mbox{\boldmath{$e$}}/\|\mbox{\boldmath{$e$}}\|$ is uniformly distributed on
$\mathbb{S}^{N-1}$. The columns of the Noise Collector are also uniformly distributed on $\mathbb{S}^{N-1}$, so the vector $\mbox{\boldmath{$n$}}=\mbox{\boldmath{$z$}}/\|\mbox{\boldmath{$z$}}\|_{\ell_{2}}$ is
uniformly distributed on $\mathbb{S}^{N-1}$ as well. The chance that (10) does not hold can be estimated by the area of the intersection of
the unit sphere $\mathbb{S}^{N-1}$ with the $\ell_{1}$-ball of radius $O(\sqrt{N})$ (see Figure 1-right), which can be shown to be small
by standard estimates from high-dimensional probability.
By construction, the columns of the combined matrix $[{\cal A}\,|\,{\cal C}]$ are incoherent. This is the key observation that allows us to prove Theorems 2 and 3 using
standard techniques, see e.g. [20].
In particular, we automatically have exact recovery, by the standard arguments [11] applied to $[{\cal A}\,|\,{\cal C}]$, if the data is noiseless.
Lemma 2 (Exact Recovery): Suppose $\mbox{\boldmath{$\rho$}}$ is an $M$-sparse solution of ${\cal A}\mbox{\boldmath{$\rho$}}=\mbox{\boldmath{$b$}}$, and there is no noise, $\mbox{\boldmath{$e$}}=0$.
In addition, assume that the columns of ${\cal A}$ are incoherent: $|\langle\mbox{\boldmath{$a$}}_{i},\mbox{\boldmath{$a$}}_{j}\rangle|\leqslant\frac{1}{3M}$ for $i\neq j$.
Then, the solution to (4) satisfies $\mbox{\boldmath{$\rho$}}_{\tau}=\mbox{\boldmath{$\rho$}}$ for all
$$M<\frac{\sqrt{N}}{c_{0}\sqrt{\ln N}\,\tau}\,,$$
(11)
with probability $1-1/N^{\kappa}$.
Fast Noise Collector Algorithm
To find the minimizer of (4), we consider a variational approach. We define the function
$$F(\mbox{\boldmath{$\rho$}},\mbox{\boldmath{$\eta$}},\mbox{\boldmath{$z$}})=\lambda\,(\tau\|\mbox{\boldmath{$\rho$}}\|_{\ell_{1}}+\|\mbox{\boldmath{$\eta$}}\|_{\ell_{1}})+\frac{1}{2}\|{\cal A}\mbox{\boldmath{$\rho$}}+{\cal C}\mbox{\boldmath{$\eta$}}-\mbox{\boldmath{$b$}}\|^{2}_{\ell_{2}}+\langle\mbox{\boldmath{$z$}},\mbox{\boldmath{$b$}}-{\cal A}\mbox{\boldmath{$\rho$}}-{\cal C}\mbox{\boldmath{$\eta$}}\rangle$$
(12)
for a weight $\tau=c_{0}\sqrt{\ln N}$, and determine the solution as
$$\max_{\mbox{\boldmath{$z$}}}\min_{\mbox{\boldmath{$\rho$}},\mbox{\boldmath{$\eta$}}}F(\mbox{\boldmath{$\rho$}},\mbox{\boldmath{$\eta$}},\mbox{\boldmath{$z$}}).$$
(13)
The key observation is that this variational principle finds the minimum in (4) exactly for all values of the regularization parameter $\lambda$.
Hence, the proposed method is fully automated, meaning that it has no tuning parameters. To determine the exact extremum in (13), we use the iterative soft thresholding algorithm GeLMA [21],
which works as follows.
For $\beta=1.5$ we use $\tau=0.8\sqrt{\ln N}$ in our numerical experiments. For optimal results,
one can calibrate $c_{0}$ to be the smallest constant such that Theorem 1 holds, that is, such that we see no phantom signals when the algorithm is fed with pure noise.
Pick a value for the regularization parameter $\lambda$, e.g. $\lambda=1$. Choose step sizes $\Delta t_{1}<2/\|[{\cal A}\,|\,{\cal C}]\|^{2}$ and
$\Delta t_{2}<\lambda/\|{\cal A}\|$ (choosing two step sizes, instead of using the smaller one $\Delta t_{1}$ for both updates, improves the convergence speed). Set $\mbox{\boldmath{$\rho$}}_{0}=\mbox{\boldmath{$0$}}$, $\mbox{\boldmath{$\eta$}}_{0}=\mbox{\boldmath{$0$}}$, $\mbox{\boldmath{$z$}}_{0}=\mbox{\boldmath{$0$}}$, and
iterate for $k\geqslant 0$:
$$\mbox{\boldmath{$r$}}=\mbox{\boldmath{$b$}}-{\cal A}\,\mbox{\boldmath{$\rho$}}_{k}-{\cal C}\,\mbox{\boldmath{$\eta$}}_{k}\,,$$
$$\mbox{\boldmath{$\rho$}}_{k+1}=\mathcal{S}_{\,\tau\lambda\Delta t_{1}}\left(\mbox{\boldmath{$\rho$}}_{k}+\Delta t_{1}\,{\cal A}^{*}(\mbox{\boldmath{$z$}}_{k}+\mbox{\boldmath{$r$}})\right)\,,$$
$$\mbox{\boldmath{$\eta$}}_{k+1}=\mathcal{S}_{\lambda\Delta t_{1}}\left(\mbox{\boldmath{$\eta$}}_{k}+\Delta t_{1}\,{\cal C}^{*}(\mbox{\boldmath{$z$}}_{k}+\mbox{\boldmath{$r$}})\right)\,,$$
$$\mbox{\boldmath{$z$}}_{k+1}=\mbox{\boldmath{$z$}}_{k}+\Delta t_{2}\,\mbox{\boldmath{$r$}}\,,$$
(14)
where $\mathcal{S}_{\lambda}(y_{i})=\text{sign}(y_{i})\max\{0,|y_{i}|-\lambda\}$.
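For concreteness, the iteration (14) can be transcribed into a few lines of NumPy. This is our own minimal sketch, with dense toy matrices and a fixed iteration count, not the authors' implementation; the helper `soft` implements $\mathcal{S}_{\lambda}$.

```python
import numpy as np

def soft(y, t):
    """Component-wise soft thresholding S_t(y)."""
    return np.sign(y) * np.maximum(0.0, np.abs(y) - t)

def gelma(A, C, b, tau, lam=1.0, n_iter=5000):
    """Iterative soft thresholding for the augmented system, following (14)."""
    AC = np.hstack([A, C])
    dt1 = 1.9 / np.linalg.norm(AC, 2) ** 2     # dt1 < 2 / ||[A|C]||^2
    dt2 = 0.9 * lam / np.linalg.norm(A, 2)     # dt2 < lam / ||A||
    rho = np.zeros(A.shape[1]); eta = np.zeros(C.shape[1]); z = np.zeros_like(b)
    for _ in range(n_iter):
        r = b - A @ rho - C @ eta
        rho = soft(rho + dt1 * A.T @ (z + r), tau * lam * dt1)
        eta = soft(eta + dt1 * C.T @ (z + r), lam * dt1)
        z = z + dt2 * r
    return rho, eta

# toy problem (our choice of sizes): a 2-sparse rho with additive noise
rng = np.random.default_rng(3)
N, K, Sigma = 50, 80, 250
A = rng.standard_normal((N, K)); A /= np.linalg.norm(A, axis=0)
C = rng.standard_normal((N, Sigma)); C /= np.linalg.norm(C, axis=0)
rho_true = np.zeros(K); rho_true[[5, 40]] = [1.0, -2.0]
b = A @ rho_true + 0.1 * rng.standard_normal(N)
rho, eta = gelma(A, C, b, tau=0.8 * np.sqrt(np.log(N)))
```

Note that the updates for $\mbox{\boldmath{$\rho$}}$ and $\mbox{\boldmath{$\eta$}}$ both use the residual $\mbox{\boldmath{$r$}}$ from the current iterate, and $\mbox{\boldmath{$z$}}$ is updated last, as in (14).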
The Noise Collector matrix ${\cal C}$ is computed by drawing $N^{\beta-1}$ normally distributed $N$-dimensional vectors, normalized to unit length. These are the generating vectors of the Noise Collector. From each of them a circulant $N\times N$ matrix ${\cal C}_{i}$, $i=1,\ldots,N^{\beta-1}$, is constructed. The Noise Collector matrix is obtained by concatenation,
so ${\cal C}=\left[{\cal C}_{1}\,|\,{\cal C}_{2}\,|\,\ldots\,|\,{\cal C}_{N^{\beta-1}}\right]$. Exploiting the circulant structure of the matrices ${\cal C}_{i}$, we perform the matrix-vector multiplications ${\cal C}\mbox{\boldmath{$\eta$}}_{k}$ and ${\cal C}^{*}(\mbox{\boldmath{$z$}}_{k}+\mbox{\boldmath{$r$}})$ in (14) using the FFT [15]. This makes the complexity associated to the Noise Collector $O(N^{\beta}\log(N))$. Note that only the $N^{\beta-1}$ generating vectors are stored, and not the entire $N\times N^{\beta}$ Noise Collector matrix. In practice, we use $\beta\approx 1.5$, which makes the cost of using the Noise Collector negligible, as typically $K\gg N^{\beta-1}$.
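The circulant trick can be checked directly: multiplication by a circulant matrix built from a generating vector is a circular convolution, which the discrete Fourier transform diagonalizes, and multiplication by its transpose is a circular correlation. A small self-contained check (ours; `scipy.linalg.circulant` is used only to build the dense reference block):

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(4)
N = 128
c = rng.standard_normal(N)                 # generating vector of one block C_i
eta = rng.standard_normal(N)

Ci = circulant(c)                          # dense N x N circulant block (reference)
fc = np.fft.fft(c)

# C_i @ eta is a circular convolution; C_i^T @ eta is a circular correlation
y  = np.fft.ifft(fc * np.fft.fft(eta)).real            # C_i @ eta in O(N log N)
yt = np.fft.ifft(np.conj(fc) * np.fft.fft(eta)).real   # C_i^T @ eta in O(N log N)
```

Only the generating vector (equivalently, its FFT) needs to be stored per block, which is what keeps the memory footprint at $N^{\beta-1}$ vectors.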
Application to imaging
We consider passive array imaging of point sources. The problem consists in determining the positions $\vec{\mbox{\boldmath{$z$}}}_{j}$ and the complex amplitudes $\alpha_{j}$,
$j=1,\dots,M$, of a few point sources from measurements of polychromatic signals on an array of receivers; see Figure 2. (We chose to work with real numbers in the previous sections for ease of presentation, but the results also hold with complex numbers.)
The imaging system is characterized by the array aperture $a$, the distance $L$ to the sources, the bandwidth $B$
and the central wavelength $\lambda_{0}$.
The sources are located inside an image window IW, which is discretized with a uniform grid of points $\vec{\mbox{\boldmath{$y$}}}_{k}$, $k=1,\ldots,K$.
The unknown is the source vector
$\mbox{\boldmath{$\rho$}}=[\rho_{1},\ldots,\rho_{K}]^{\intercal}\in\mathbb{C}^{K}$,
whose components $\rho_{k}$ correspond to the complex amplitudes of the $M$ sources at the grid points $\vec{\mbox{\boldmath{$y$}}}_{k}$, $k=1,\ldots,K$, with $K\gg M$. For the true source vector we have
$\rho_{k}=\alpha_{j}$ if $\vec{\mbox{\boldmath{$y$}}}_{k}=\vec{\mbox{\boldmath{$z$}}}_{j}$ for some $j=1,\ldots,M$, while $\rho_{k}=0$ otherwise.
Denoting by $G(\vec{\mbox{\boldmath{$x$}}},\vec{\mbox{\boldmath{$y$}}};\omega)$ the Green’s function for the propagation of a
signal of angular frequency $\omega$ from point $\vec{\mbox{\boldmath{$y$}}}$ to point $\vec{\mbox{\boldmath{$x$}}}$, we define the single-frequency
Green’s function vector that connects a point $\vec{\mbox{\boldmath{$y$}}}$ in the IW with all points $\vec{\mbox{\boldmath{$x$}}}_{r}$, $r=1,\ldots,N$, on the array as
$$\mbox{\boldmath{$g$}}(\vec{\mbox{\boldmath{$y$}}};\omega)=[G(\vec{\mbox{\boldmath{$x$}}}_{1},\vec{\mbox{\boldmath{$y$}}};\omega),G(\vec{\mbox{\boldmath{$x$}}}_{2},\vec{\mbox{\boldmath{$y$}}};\omega),\ldots,G(\vec{\mbox{\boldmath{$x$}}}_{N},\vec{\mbox{\boldmath{$y$}}};\omega)]^{\intercal}\in\mathbb{C}^{N}\,.$$
In a homogeneous medium in three dimensions,
$G(\vec{\mbox{\boldmath{$x$}}},\vec{\mbox{\boldmath{$y$}}};\omega)=\frac{\exp\{\mathrm{i}\omega|\vec{\mbox{\boldmath{$x$}}}-\vec{\mbox{\boldmath{$y$}}}|/c_{0}\}}{4\pi|\vec{\mbox{\boldmath{$x$}}}-\vec{\mbox{\boldmath{$y$}}}|}$.
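As a quick numerical companion, a sketch of this homogeneous 3D Green's function and of the vector $\mbox{\boldmath{$g$}}(\vec{\mbox{\boldmath{$y$}}};\omega)$ follows; the function names and the wave-speed value $c_{0}=3\times 10^{8}\,$m/s are our illustrative choices.

```python
import numpy as np

C0 = 3e8  # reference wave speed (m/s), illustrative

def green(x, y, omega, c0=C0):
    # G(x, y; omega) = exp(i * omega * |x - y| / c0) / (4 * pi * |x - y|)
    r = np.linalg.norm(np.asarray(x, float) - np.asarray(y, float))
    return np.exp(1j * omega * r / c0) / (4.0 * np.pi * r)

def green_vector(receivers, y, omega, c0=C0):
    # g(y; omega): Green's function from y to every receiver on the array
    return np.array([green(x_r, y, omega, c0) for x_r in receivers])
```

Note that $G$ is reciprocal, $G(\vec{\mbox{\boldmath{$x$}}},\vec{\mbox{\boldmath{$y$}}};\omega)=G(\vec{\mbox{\boldmath{$y$}}},\vec{\mbox{\boldmath{$x$}}};\omega)$, and its magnitude decays as $1/(4\pi r)$.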
The data for the imaging problem are the signals
$$b(\vec{\mbox{\boldmath{$x$}}}_{r},\omega_{l})=\sum_{j=1}^{M}\alpha_{j}G(\vec{\mbox{\boldmath{$x$}}}_{r},\vec{\mbox{\boldmath{$z$}}}_{j};\omega_{l})$$
(15)
recorded at receiver locations $\vec{\mbox{\boldmath{$x$}}}_{r}$, $r=1,\ldots,N$, at frequencies $\omega_{l}$, $l=1,\dots,S$.
These data are stacked in a column vector
$$\mbox{\boldmath{$b$}}=[\mbox{\boldmath{$b$}}(\omega_{1})^{\intercal},\mbox{\boldmath{$b$}}(\omega_{2})^{\intercal},\dots,\mbox{\boldmath{$b$}}(\omega_{S})^{\intercal}]^{\intercal}\in\mathbb{C}^{(N\cdot S)}\,,$$
(16)
with
$\mbox{\boldmath{$b$}}(\omega_{l})=[b(\vec{\mbox{\boldmath{$x$}}}_{1},\omega_{l}),b(\vec{\mbox{\boldmath{$x$}}}_{2},\omega_{l}),\dots,b(\vec{\mbox{\boldmath{$x$}}}_{N},\omega_{l})]^{\intercal}\in\mathbb{C}^{N}$. Then,
${\cal A}\,\mbox{\boldmath$\rho$}=\mbox{\boldmath$b$}$,
with ${\cal A}$ the $(N\cdot S)\times K$ measurement matrix whose columns $\mbox{\boldmath{$a$}}_{k}$ are the multiple-frequency Green’s function vectors
$$\mbox{\boldmath{$a$}}_{k}=[\mbox{\boldmath{$g$}}(\vec{\mbox{\boldmath{$y$}}}_{k};\omega_{1})^{\intercal},\mbox{\boldmath{$g$}}(\vec{\mbox{\boldmath{$y$}}}_{k};\omega_{2})^{\intercal},\dots,\mbox{\boldmath{$g$}}(\vec{\mbox{\boldmath{$y$}}}_{k};\omega_{S})^{\intercal}]^{\intercal}\in\mathbb{C}^{(N\cdot S)}\,,$$
(17)
normalized to have length one.
The system ${\cal A}\,\mbox{\boldmath$\rho$}=\mbox{\boldmath$b$}$ relates the unknown vector $\mbox{\boldmath$\rho$}\in\mathbb{C}^{K}$ to the data vector $\mbox{\boldmath$b$}\in\mathbb{C}^{(N\cdot S)}$.
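The system above can be assembled directly. The sketch below builds ${\cal A}$ column by column from the multiple-frequency Green's function vectors and forms noiseless data $\mbox{\boldmath{$b$}}={\cal A}\mbox{\boldmath{$\rho$}}$; the small dimensions and function names are ours, chosen for illustration.

```python
import numpy as np

def green_vec(receivers, y, omega, c0=3e8):
    # single-frequency Green's function vector g(y; omega) over the array
    d = np.linalg.norm(receivers - y, axis=1)
    return np.exp(1j * omega * d / c0) / (4.0 * np.pi * d)

def measurement_matrix(receivers, grid, omegas, c0=3e8):
    # A is (N*S) x K; column k stacks g(y_k; omega_l) over the S frequencies,
    # normalized to unit length as in (17)
    cols = []
    for y in grid:
        a = np.concatenate([green_vec(receivers, y, w, c0) for w in omegas])
        cols.append(a / np.linalg.norm(a))
    return np.column_stack(cols)
```

Given a sparse $\mbox{\boldmath{$\rho$}}$ supported on a few grid points, `A @ rho` reproduces the stacked data vector (16).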
Next, we illustrate the performance of the Noise Collector in this imaging setup. Its most important features are that (i) no calibration is necessary with respect to the level of noise, (ii) support recovery is exact for relatively large levels of noise (i.e., $\|\mbox{\boldmath{$e$}}\|_{l_{2}}/\|\mbox{\boldmath{$b$}}_{0}\|_{l_{2}}\leqslant c_{2}/\sqrt{\ln N}$), and (iii) the false discovery rate is zero for all levels of noise, with high probability.
We consider a high frequency microwave imaging regime with central frequency $f_{0}=60$GHz corresponding to $\lambda_{0}=5$mm. We make measurements for $S=25$ equally spaced frequencies spanning a bandwidth $B=20$GHz. The array has $N=25$ receivers and an aperture $a=50$cm. The distance from the array to the center of the imaging window is $L=50$cm. Then, the resolution is $\lambda_{0}L/a=5$mm in the cross-range (direction parallel to the array) and $c_{0}/B=15$mm in range (direction of propagation). These parameters are typical in microwave scanning technology [18].
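The quoted resolutions follow from the classical formulas $\lambda_{0}L/a$ (cross-range) and $c_{0}/B$ (range); a quick arithmetic check with these parameters:

```python
c0 = 3e8                  # wave speed in m/s
f0, B = 60e9, 20e9        # central frequency and bandwidth (Hz)
lam0 = c0 / f0            # central wavelength: 5 mm
a, L = 0.5, 0.5           # aperture and distance to the IW, in meters
cross_range_res = lam0 * L / a   # 5 mm
range_res = c0 / B               # 15 mm
```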
We seek to image a source vector with sparsity $M=12$; see the left plot in Fig. 3. The size of the imaging window is 20cm$\times$60cm and the pixel spacing is 5mm$\times$15mm. The number of unknowns is, therefore, $K=1681$ and the number of data is $NS=625$. The size of the Noise Collector is taken to be $\Sigma=10^{4}$, so $\beta\approx 1.5$. When the data is noiseless, we obtain exact recovery as expected; see the right plot in Fig. 3.
In Fig. 4, we display the imaging results, with and without a Noise Collector, when the data is corrupted by additive noise. The SNR $=1$, so the $\ell_{2}$-norms of the signals and the noise are equal. In the left plot, we show the recovered image using $\ell_{1}$-norm minimization without a Noise Collector. There is a lot of grass in this image, with many non-zero values outside the true support. When a Noise Collector is used, the level of the grass is reduced and the image improves; see the second plot from the left. Still, there are several false discoveries because we use $\tau=1$ in [14].
In the third column from the left of Fig. 4, we show the image obtained with a weight $\tau=0.8\sqrt{\ln 625}\approx 2$ in [14].
With this weight, there are no false discoveries and the recovered support is exact. This simplifies the imaging problem dramatically, as we can now
restrict the inverse problem to the true support just obtained, and then solve an overdetermined linear system using a classical $\ell_{2}$ approach. The results are
shown in the right column of Fig. 4. Note that this second step largely compensates for
the signal that was lost in the first step due to the high level of noise.
In Figure 5 we illustrate the performance of the Noise Collector for different sparsity levels $M$ and SNR values. Success in recovering the true support of the unknown corresponds to the value $1$ (yellow) and failure to $0$ (blue). The small phase transition zone (green) contains intermediate values. These results are obtained by averaging over 5 realizations of noise.
Remark 1: We considered passive array imaging for ease of presentation. The same results hold for active array imaging with or without multiple scattering;
see [7] for the detailed analytical setup.
Remark 2:
We have considered a microwave imaging regime. Similar results can be obtained in other regimes.
Proofs
Proof of Lemma 1: Denote the event
$$\Omega_{t}=\left\{\max_{i,j}|\langle\mbox{\boldmath{$a$}}_{i},\mbox{\boldmath{$c$}}_{j}\rangle|\geqslant t/\sqrt{N}\right\}.$$
By independence,
$\mathbb{P}\left(|\langle\mbox{\boldmath{$a$}}_{i},\mbox{\boldmath{$c$}}_{j}\rangle|\geqslant t/\sqrt{N}\right)\leqslant 2\exp(-t^{2}/2)$
for any $i$ and $j$. Thus,
$\mathbb{P}\left(\Omega_{t}\right)\leqslant 2N\Sigma\exp(-t^{2}/2)$.
Choosing $t=c_{0}\sqrt{\ln N}$ for sufficiently large $c_{0}$, we get
$$\mathbb{P}\left(\Omega_{t}\right)\leqslant CN^{\beta+1}N^{-c^{2}_{0}/2}\leqslant N^{-\kappa},$$
where $c_{0}^{2}>2(\beta+\kappa+1)$ and $N\geqslant N_{0}$. Hence, (6) holds with large probability $1-N^{-\kappa}$.
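The decoherence bound just used is easy to check numerically: for random unit vectors, the largest inner product between the columns of ${\cal A}$ and those of ${\cal C}$ concentrates at the scale $\sqrt{\ln(K\Sigma)}/\sqrt{N}$, far below $1$. A small Monte Carlo illustration, with dimensions much smaller than in the experiments (our choices, for speed):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, Sigma = 200, 500, 1000  # illustrative sizes

def random_unit_rows(k, N):
    # k independent N-dimensional Gaussian vectors, normalized to unit length
    v = rng.standard_normal((k, N))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

A_cols = random_unit_rows(K, N)      # columns of A, stored as rows
C_cols = random_unit_rows(Sigma, N)  # columns of the Noise Collector

max_coherence = np.abs(A_cols @ C_cols.T).max()
scale = np.sqrt(np.log(K * Sigma) / N)   # ~ sqrt(ln(K * Sigma)) / sqrt(N)
```

With these sizes, `max_coherence` stays a small multiple of `scale`, consistent with the union bound above.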
Next, we consider the probability that (7) does not hold. Suppose there is a direction $\mbox{\boldmath{$b$}}\in\mathbb{S}^{N-1}$ such that
$$|\langle\mbox{\boldmath{$b$}},\mbox{\boldmath{$c$}}_{j}\rangle|\leqslant\alpha/\sqrt{N}$$
(18)
holds for all $j$. Let
$V_{k}(\mbox{\boldmath{$c$}}_{i_{1}},\dots,\mbox{\boldmath{$c$}}_{i_{k}})$
be the $k$-dimensional volume of the parallelepiped spanned by $\mbox{\boldmath{$c$}}_{i_{1}}$, $\dots$, $\mbox{\boldmath{$c$}}_{i_{k}}$. Note that $V_{k}$ equals $V_{k-1}$ times the corresponding height. Then, if (18) holds,
$$\frac{V_{N}(\mbox{\boldmath{$c$}}_{i_{1}},\dots,\mbox{\boldmath{$c$}}_{i_{N-1}},\mbox{\boldmath{$c$}}_{i_{N}})}{V_{N-1}(\mbox{\boldmath{$c$}}_{i_{1}},\dots,\mbox{\boldmath{$c$}}_{i_{N-1}})}\leqslant\frac{2\alpha}{\sqrt{N}}$$
(19)
for any choice of $N$ columns $\mbox{\boldmath{$c$}}_{i_{j}}$ from the Noise Collector ${\cal C}$. If we fix the indices ${i_{1}}$, $\dots$, ${i_{N}}$ then, due to rotational invariance, the probability of the event (19) equals the probability of the event $|\langle\mbox{\boldmath{$c$}}_{1},\mbox{\boldmath{$e$}}_{1}\rangle|\leqslant 2\alpha/\sqrt{N}$.
Using
$$\mathbb{P}\left(|\langle\mbox{\boldmath{$c$}}_{1},\mbox{\boldmath{$e$}}_{1}\rangle|\leqslant\frac{2\alpha}{\sqrt{N}}\right)=\sqrt{\frac{N}{2\pi}}\int_{-2\alpha/\sqrt{N}}^{2\alpha/\sqrt{N}}e^{-x^{2}N/2}dx\leqslant\frac{4\alpha}{\sqrt{2\pi}},$$
and that we can find $N^{\beta-1}$ sets of distinct indices ${i_{1}}$, $\dots$, ${i_{N}}$, we conclude that
$$\mathbb{P}\left(\exists\,\mbox{\boldmath{$b$}}\in\mathbb{S}^{N-1}\hbox{ such that (18) holds}\right)\leqslant\left(4\alpha/\sqrt{2\pi}\right)^{N^{\beta-1}}.$$
Choosing $\alpha$ sufficiently small, i.e., $\alpha<\sqrt{2\pi}/4\approx 0.63$, and $N$ sufficiently large, we obtain the result.$\Box$
Proof of Theorem 1: In order to check (10), we assume that both $\mbox{\boldmath{$c$}}_{i}$ and $-\mbox{\boldmath{$c$}}_{i}$ are in ${\cal C}$, because it is more geometrically intuitive
to work with the convex hull
$$H=\left\{x\in\mathbb{R}^{N}\left|\,x=\sum_{i=1}^{\Sigma}\xi_{i}\mbox{\boldmath{$c$}}_{i},~{}\xi_{i}\geqslant 0,~{}\sum_{i=1}^{\Sigma}\xi_{i}\leqslant 1\right.\right\}.$$
(20)
This implies that we may also assume that $\mbox{\boldmath{$\eta$}}$ in (4) has non-negative coefficients, and $\|\mbox{\boldmath{$\eta$}}\|_{l_{1}}=\min\{\lambda>0:\mbox{\boldmath{$e$}}\in\lambda H\}$.
Thus,
$\|\mbox{\boldmath{$\eta$}}\|_{l_{1}}$ defines a norm of $\mbox{\boldmath{$e$}}$ with respect to ${\cal C}$, and we can set $\|\mbox{\boldmath{$e$}}\|_{{\cal C}}:=\|\mbox{\boldmath{$\eta$}}\|_{l_{1}}$. This norm is called atomic in [8].
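The collector norm $\|\mbox{\boldmath{$e$}}\|_{\cal C}=\min\{\|\mbox{\boldmath{$\eta$}}\|_{l_{1}}:{\cal C}\mbox{\boldmath{$\eta$}}=\mbox{\boldmath{$e$}}\}$ can be computed with the standard linear-programming split $\mbox{\boldmath{$\eta$}}=\mbox{\boldmath{$\eta$}}^{+}-\mbox{\boldmath{$\eta$}}^{-}$. A minimal sketch with illustrative sizes and a generic Gaussian dictionary (our choices, not the circulant construction):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, Sigma = 10, 40
C = rng.standard_normal((N, Sigma))
C /= np.linalg.norm(C, axis=0)     # unit-norm atoms
e = rng.standard_normal(N)
e /= np.linalg.norm(e)             # unit-norm "noise" vector

# min 1^T (eta_plus + eta_minus)  s.t.  C (eta_plus - eta_minus) = e
res = linprog(c=np.ones(2 * Sigma),
              A_eq=np.hstack([C, -C]), b_eq=e,
              bounds=[(0, None)] * (2 * Sigma))
collector_norm = res.fun
```

Since the atoms have unit $l_{2}$-norm, $\|\mbox{\boldmath{$e$}}\|_{\cal C}\geqslant\|\mbox{\boldmath{$e$}}\|_{l_{2}}$ always holds, which the sketch reproduces.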
Suppose $\Lambda$ is the support of $\mbox{\boldmath{$\eta$}}$; its typical size is $|\Lambda|=N$.
Then, the simplex
$$\left\{\mbox{\boldmath{$x$}}\in\mathbb{R}^{N}\left|\,\mbox{\boldmath{$x$}}=\sum_{i\in\Lambda}\alpha_{i}\mbox{\boldmath{$c$}}_{i},\ \sum_{i\in\Lambda}\alpha_{i}=1,\ \alpha_{i}\geqslant 0\right.\right\}$$
has a unique normal vector $\mbox{\boldmath{$n$}}$, which is collinear with our dual certificate $\mbox{\boldmath{$z$}}$, because
$$\langle\mbox{\boldmath{$z$}},\mbox{\boldmath{$c$}}_{i}\rangle=\langle\mbox{\boldmath{$z$}},\mbox{\boldmath{$e$}}\rangle/\|\mbox{\boldmath{$e$}}\|_{{\cal C}}=1\ \forall i\in\Lambda,\qquad\langle\mbox{\boldmath{$z$}},\mbox{\boldmath{$c$}}_{j}\rangle<1\ \forall j\not\in\Lambda.$$
(21)
The estimate (7) implies that the convex hull $H$ contains an $l_{2}$-ball of radius $\alpha/\sqrt{N}$. Therefore,
$\|\mbox{\boldmath{$z$}}\|_{l_{2}}\leqslant\sqrt{N}/\alpha$ with large probability.
By construction, the distribution of $\Phi_{\cal C}(\mbox{\boldmath{$e$}})$ is rotationally invariant with respect to the probability measure induced by all $\mbox{\boldmath{$c$}}_{i}$ and $\mbox{\boldmath{$e$}}$. Thus, $\mbox{\boldmath{$n$}}=\mbox{\boldmath{$z$}}/\|\mbox{\boldmath{$z$}}\|_{l_{2}}$ is also uniformly distributed on $\mathbb{S}^{N-1}$, and
$$\mathbb{P}\left(\left|\langle\mbox{\boldmath{$a$}}_{j},\mbox{\boldmath{$n$}}\rangle\right|\geqslant t/\sqrt{N}\right)\leqslant 2\exp\left(-t^{2}/2\right),$$
(22)
for all $j=1,\dots,K$; see, e.g., [30]. Therefore, we can bound the probability that (10) does not hold:
$$\mathbb{P}\left(\max_{j\leqslant K}|\langle\mbox{\boldmath{$a$}}_{j},\mbox{\boldmath{$z$}}\rangle|\geqslant\tau\right)\leqslant K\,\mathbb{P}\left(|\langle\mbox{\boldmath{$a$}}_{1},\mbox{\boldmath{$n$}}\rangle|\geqslant\alpha\tau/\sqrt{N}\right)$$
$$\leqslant 2K\exp\left(-\alpha^{2}\tau^{2}/2\right)\leqslant N^{\beta-\alpha^{2}c^{2}_{0}/2}\sim N^{-\kappa},$$
for large $N$ and appropriately chosen $c_{0}=\sqrt{2(\kappa+\beta)}/\alpha$.
Hence, (10) holds with large probability $1-N^{-\kappa}$. $\Box$
Proof of Theorem 2: If the columns of ${\cal A}$ are orthogonal, our previous arguments can be modified to verify Theorem 2. Indeed, suppose $V$ is the span of the column vectors $\mbox{\boldmath{$a$}}_{j}$, with $j$ in the support of $\mbox{\boldmath{$\rho$}}$; say, $V$ is spanned by $\mbox{\boldmath{$a$}}_{1}$, $\dots$, $\mbox{\boldmath{$a$}}_{M}$. Let $W=V^{\perp}$ be the orthogonal complement of $V$. Then, the orthogonal projection of the signal onto $W$ satisfies $\mbox{\boldmath{$\rho$}}^{w}=0$.
By concentration of measure (see, e.g., [30]), the projection $\mbox{\boldmath{$e$}}^{w}$ of the noise is uniformly distributed on the unit sphere $\mathbb{S}^{N-1-M}$ with large probability. Applying the previous arguments to $\mbox{\boldmath{$z$}}^{w}$, the projection of $\mbox{\boldmath{$z$}}$ on $W$, we conclude that the projection $\mbox{\boldmath{$\rho$}}^{w}_{\tau}=0$. Therefore, $\mbox{supp}(\mbox{\boldmath{$\rho$}}_{\tau})\subseteq\mbox{supp}(\mbox{\boldmath{$\rho$}})$ with large probability.
For general ${\cal A}$, consider the orthogonal decomposition $\mbox{\boldmath{$a$}}_{i}=\mbox{\boldmath{$a$}}_{i}^{v}+\mbox{\boldmath{$a$}}_{i}^{w}$ for all $i\geqslant M+1$.
As before, we can choose $\tau=c_{0}\sqrt{\ln N}$ so that
$|\langle\mbox{\boldmath{$a$}}_{i}^{w},\mbox{\boldmath{$z$}}\rangle|<\tau/2$ with large probability. It remains to demonstrate that $|\langle\mbox{\boldmath{$a$}}_{i}^{v},\mbox{\boldmath{$z$}}\rangle|\leqslant\tau/2$. Fix any $i\geqslant M+1$.
Suppose $\mbox{\boldmath{$a$}}_{i}^{v}=\sum_{k=1}^{M}\alpha_{k}\mbox{\boldmath{$a$}}_{k}$, and $|\alpha_{j}|=\max_{k\leqslant M}|\alpha_{k}|=\|\mbox{\boldmath{$\alpha$}}\|_{l_{\infty}}$.
Thus,
$$\frac{1}{3M}\geqslant|\langle\mbox{\boldmath{$a$}}_{j},\mbox{\boldmath{$a$}}_{i}^{v}\rangle|\geqslant|\langle\mbox{\boldmath{$a$}}_{j},\sum_{k=1}^{M}\alpha_{k}\mbox{\boldmath{$a$}}_{k}\rangle|\geqslant\|\mbox{\boldmath{$\alpha$}}\|_{l_{\infty}}\left(1-\frac{M-1}{3M}\right).$$
Then, $\|\mbox{\boldmath{$\alpha$}}\|_{l_{\infty}}\leqslant 1/(2M)$, so $\|\mbox{\boldmath{$\alpha$}}\|_{l_{1}}\leqslant M\|\mbox{\boldmath{$\alpha$}}\|_{l_{\infty}}\leqslant 1/2$. Hence,
$$|\langle\mbox{\boldmath{$a$}}_{i}^{v},\mbox{\boldmath{$z$}}\rangle|\leqslant\sum_{k=1}^{M}|\alpha_{k}|~{}|\langle\mbox{\boldmath{$a$}}_{k},\mbox{\boldmath{$z$}}\rangle|\leqslant\|\mbox{\boldmath{$\alpha$}}\|_{l_{1}}\tau\leqslant\tau/2.\qquad\Box$$
Proof of Theorem 3: It suffices to prove the result for a $1$-sparse $\mbox{\boldmath{$\rho$}}$, say, $\mbox{\boldmath{$\rho$}}=(1,0,\dots,0)$.
We will demonstrate that the solution to the minimization problem
$$\displaystyle\left(\mbox{\boldmath{$\eta$}}_{\tau},\rho_{1}\right)=\arg\min_{\mbox{\boldmath{$\rho$}},\mbox{\boldmath{$\eta$}}}\left(\|\mbox{\boldmath{$\eta$}}\|_{\ell_{1}}+\tau|\rho|\right),$$
(23)
$$\displaystyle\hbox{ subject to }{\cal C}\mbox{\boldmath{$\eta$}}=\mbox{\boldmath{$e$}}+\mbox{\boldmath{$a$}}_{1}(1-\rho_{1}),$$
with $\tau=c_{0}\sqrt{\ln N}$, satisfies $\rho_{1}>1/2$ if $\delta<c_{2}/\sqrt{\ln N}$ with $c_{2}=\alpha/(5c_{0})$. This implies $\mbox{supp}(\mbox{\boldmath$\rho$}_{\tau})=\mbox{supp}(\mbox{\boldmath$\rho$})$.
Suppose $\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$e$}}}$, $\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$a$}}_{1}}$ and $\mbox{\boldmath{$\eta$}}_{t}$ are the optimal solutions of
$$\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$b$}}}=\arg\min\left(\|\mbox{\boldmath{$\eta$}}\|_{\ell_{1}}\right),\hbox{ subject to }{\cal C}\mbox{\boldmath{$\eta$}}=\mbox{\boldmath{$b$}},$$
(24)
with right-hand sides $\mbox{\boldmath{$b$}}=\mbox{\boldmath{$e$}}$, $\mbox{\boldmath{$b$}}=\mbox{\boldmath{$a$}}_{1}$, and $\mbox{\boldmath{$b$}}=\mbox{\boldmath{$e$}}+t\mbox{\boldmath{$a$}}_{1}$, respectively.
Since ${\cal C}\left(\mbox{\boldmath{$\eta$}}_{t}-\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$e$}}}\right)=t\mbox{\boldmath{$a$}}_{1}$, we have
$$t\|\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$a$}}_{1}}\|_{\ell_{1}}\leqslant\|\mbox{\boldmath{$\eta$}}_{t}\|_{\ell_{1}}+\|\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$e$}}}\|_{\ell_{1}}\,$$
and, therefore,
$$\|\mbox{\boldmath{$\eta$}}_{t}\|_{\ell_{1}}\geqslant t\|\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$a$}}_{1}}\|_{\ell_{1}}-\|\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$e$}}}\|_{\ell_{1}}\,.$$
(25)
From (7) and (8), we have
$$\|\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$e$}}}\|_{\ell_{1}}\leqslant\delta\frac{\sqrt{N}}{\alpha}\quad\hbox{and}\quad\|\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$a$}}_{1}}\|_{\ell_{1}}\geqslant\frac{\sqrt{N}}{c_{0}\sqrt{\ln N}},$$
respectively.
Suppose $\delta\leqslant c_{2}/\sqrt{\ln N}$ with $c_{2}=\alpha/(5c_{0})$. Then, for any $t\geqslant 1/2$ and $N$ large enough,
$$t\|\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$a$}}_{1}}\|_{\ell_{1}}-\|\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$e$}}}\|_{\ell_{1}}>\|\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$e$}}}\|_{\ell_{1}}+\tau t.$$
Using (25) with $t=1-\rho_{1}$, we conclude that
$$\|\mbox{\boldmath{$\eta$}}_{1-\rho_{1}}\|_{\ell_{1}}+\tau\rho_{1}>\|\mbox{\boldmath{$\eta$}}_{\mbox{\boldmath{$e$}}}\|_{\ell_{1}}+\tau$$
for all $\rho_{1}\leqslant 1/2$. This implies (23).
$\Box$
Acknowledgements
The work of M. Moscoso was partially supported by Spanish grant MICINN FIS2016-77892-R. The work of A. Novikov was partially supported by NSF grants DMS-1515187 and DMS-1813943.
The work of G. Papanicolaou was partially supported by AFOSR FA9550-18-1-0519. The work of C. Tsogka was partially supported by AFOSR FA9550-17-1-0238 and FA9550-18-1-0519. We thank Marguerite Novikov for drawing Figure 1.
References
[1]
M. AlQuraishi and H. H. McAdams, Direct inference of protein–DNA interactions using compressed sensing methods, Proc. Natl. Acad. Sci. U.S.A. 108, 14819–14824 (2011).
[2]
R. Baraniuk and P. Steeghs, Compressive radar imaging, in 2007 IEEE Radar Conference, Apr. 2007, 128–133.
[3]
L. Borcea and I. Kocyigit, Resolution analysis of imaging with $\ell_{1}$ optimization, SIAM J. Imaging Sci. 8, 3015–3050 (2015).
[4]
E. J. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inf. Theory 51, 4203–4215 (2005).
[5]
E. J. Candès and C. Fernandez-Granda, Towards a mathematical theory of super-resolution, Comm. Pure Appl. Math. 67, 906–956 (2014).
[6]
A. Chai, M. Moscoso and G. Papanicolaou, Robust imaging of localized scatterers using the singular value decomposition and $\ell_{1}$ optimization, Inverse Problems 29, 025016 (2013).
[7]
A. Chai, M. Moscoso and G. Papanicolaou, Imaging Strong Localized Scatterers with Sparsity Promoting Optimization,
SIAM J. Imaging Sci. 7, 1358–1387 (2014).
[8]
V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky, The convex geometry of linear inverse problems, Found. Comput. Math. 12, 805–849 (2012).
[9]
S. S. Chen, D. L. Donoho, and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM Rev. 43, 129–159 (2001).
[10]
D. L. Donoho, Superresolution via sparsity constraints, SIAM J. Math. Anal. 23, 1303–1331 (1992).
[11]
D. L. Donoho and M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via $\ell_{1}$ minimization, Proc. Natl. Acad. Sci. U.S.A 100, 2197–2202 (2003).
[12]
A. C. Fannjiang, T. Strohmer, and P. Yan, Compressed remote sensing of sparse objects, SIAM J. Imag. Sci. 3, 595–618 (2010).
[13]
A. C. Fannjiang and W. Liao, Coherence pattern-guided compressive sensing with unresolved grids,
SIAM J. Imag. Sci. 5, 179–202 (2012).
[14]
J. J. Fuchs, Recovery of exact sparse representations in the presence
of bounded noise, IEEE Trans. Inf. Theory 51, 3601–3608 (2005).
[15]
R. M. Gray, Toeplitz and Circulant Matrices: A Review, Foundations and Trends in Communications and Information Theory 2, 155–239 (2006).
[16]
M. A. Herman and T. Strohmer, High-Resolution Radar via Compressed Sensing,
IEEE Trans. Signal Process. 57, 2275–2284 (2009).
[17]
J. N. Laska, M. A. Davenport and R. G. Baraniuk, Exact signal recovery from sparsely corrupted
measurements through the Pursuit of Justice, 2009 Conference Record of the Forty-Third Asilomar Conference on Signals,
Systems and Computers, Pacific Grove, CA, 2009, 1556–1560.
[18]
J. Laviada, A. Arboleya-Arboleya, Y. Alvarez-Lopez, C. Garcia-Gonzalez, and F. Las-Heras, Phaseless synthetic aperture radar with efficient sampling for broadband near-field imaging: Theory and validation, IEEE Trans. Antennas Propag. 63, 573–584 (2015).
[19]
D. Malioutov, M. Cetin, A.S. Willsky, A sparse signal reconstruction perspective for source localization with sensor arrays, IEEE Trans. Signal Process. 53, 3010–3022 (2005).
[20]
M. Moscoso, A. Novikov, G. Papanicolaou, and C. Tsogka, Imaging with highly incomplete and corrupted data, submitted.
[21]
M. Moscoso, A. Novikov, G. Papanicolaou, and L. Ryzhik, A differential equations approach to $\ell_{1}$-minimization with applications to array imaging, Inverse Problems 28 (2012).
[22]
J. Romberg, Imaging via Compressive Sampling, IEEE Signal Process. Mag. 25, 14–20 (2008).
[23]
J. N. Sampson, N. Chatterjee, R. J. Carroll, and S. Müller,
Controlling the local false discovery rate in the adaptive Lasso, Biostatistics 14, 653–666 (2013).
[24]
H. L. Taylor, S. C. Banks, and J. F. McCoy, Deconvolution with the $\ell_{1}$ norm, Geophysics 44, 39–52 (1979).
[25]
T. Tao, A cheap version of the Kabatjanskii-Levenstein bound for almost orthogonal vectors,
https://terrytao.wordpress.com/2013/07/18/a-cheap-version-of-the-kabatjanskii-levenstein-bound-for-almost-orthogonal-vectors/
[26]
R. Tibshirani, Regression shrinkage and selection via the lasso, J. R. Statist. Soc. B 58, 267–288 (1996).
[27]
J. A. Tropp, Just Relax: Convex Programming Methods for Identifying Sparse Signals in Noise,
IEEE Trans. Inf. Theory 52, 1030–1051 (2006).
[28]
J. A. Tropp, J. N. Laska, M. F. Duarte, J. K. Romberg, and R. G. Baraniuk, Beyond Nyquist: Efficient sampling of sparse bandlimited signals, IEEE Trans. Inf. Theory 56, 520–544 (2010).
[29]
J. Trzasko and A. Manduca, Highly undersampled magnetic resonance image reconstruction via homotopic
$\ell_{0}$-minimization, IEEE Trans. Med. Imag. 28, 106–121 (2009).
[30]
R. Vershynin, High-dimensional probability. An introduction with applications in data science,
Cambridge University Press, 2018.
[31]
M. J. Wainwright, Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell_{1}$-Constrained Quadratic Programming (Lasso), IEEE Trans. Inf. Theory 55, 2183–2202 (2009).
[32]
H. Zou, The Adaptive Lasso and Its Oracle Properties,
J. Amer. Statist. Assoc. 101, 1418–1429 (2006). |
Reweighting the RCT for generalization: finite sample error and variable selection
Bénédicte Colnet
Soda project-team, Premedical project-team, INRIA (email: [email protected]).
Julie Josse
Premedical project team, INRIA Sophia-Antipolis, Montpellier, France.
Gaël Varoquaux
Soda project-team, INRIA Saclay, France.
Erwan Scornet
Centre de Mathématiques Appliquées, UMR 7641, École polytechnique, CNRS, Institut Polytechnique de Paris, Palaiseau, France.
Abstract
Randomized Controlled Trials (RCTs) may suffer from limited scope. In particular, samples may be unrepresentative:
some RCTs over- or under-sample individuals with certain characteristics compared to the target population, for which one wants conclusions on treatment effectiveness.
Re-weighting trial individuals to match the target population can improve the treatment effect estimation.
In this work, we establish the exact expressions of the bias and variance of such reweighting procedures, also called Inverse Propensity of Sampling Weighting (IPSW), in the presence of categorical covariates for any sample size. Such results allow us to compare the theoretical performance of different versions of IPSW estimates. Moreover, our results show how the performance (bias, variance, and quadratic risk) of IPSW estimates depends on the two sample sizes (RCT and target population). A by-product of our work is the proof of consistency of IPSW estimates. The results also reveal that IPSW performance is improved when the probability of treatment within the trial is estimated (rather than using its oracle counterpart).
In addition, we study the choice of variables: how including covariates that are not necessary for identifiability of the causal effect may impact the asymptotic variance. Including covariates that are shifted between the two samples but are not treatment effect modifiers increases the variance, while including treatment effect modifiers that are not shifted does not.
We illustrate all the takeaways in a didactic example, and on a semi-synthetic simulation inspired by critical care medicine.
Keywords: Average treatment effect (ATE);
Sampling bias;
External validity;
Transportability;
Distributional shift;
IPSW.
1 Introduction
Motivation
Modern evidence-based medicine puts Randomized Controlled Trials (RCTs) at the core of clinical evidence. Indeed, randomization makes it possible to estimate the average treatment effect (ATE) by avoiding confounding effects of spurious or undesirable associated factors.
But more recently, concerns have been raised on the limited scope of RCTs: stringent eligibility criteria, unrealistic real-world compliance, short timeframe, limited sample size, etc. All these possible limitations threaten the external validity of RCT studies to other situations or populations (Rothwell, 2007; Gatsonis and
Sally, 2017; Deaton and
Cartwright, 2018).
The usage of complementary non-randomized data, referred to as observational or from the real world, brings promises as additional sources of evidence, in particular combined to trials (Kallus
et al., 2018; Athey
et al., 2020; Liu
et al., 2021). For example, assume policy makers are studying an RCT which comes with great promises about a new treatment. But when reading the report, they may discover that the RCT is composed of substantially younger people than the target population of interest. Such a situation can be uncovered from the so-called Table 1 of this newly published trial, which summarizes the demographics of the study population.
In case of treatment effect heterogeneities, e.g. if the younger individuals respond better to the treatment, the ATE estimated from the trial is over-estimated and thus biased. Now, assume these policy makers also have at their disposal a sample of the actual patients in the district, which is a representative sample of the true distribution of age in this population (typically without information on the outcome or the treatment). Can they use this representative sample of the target population of interest to re-weight, or generalize, the trial's findings?
The answer is yes: the strategy has been formalized and popularized lately (Stuart
et al., 2011; Pearl and
Bareinboim, 2011; Bareinboim and
Pearl, 2012a, b; Tipton, 2013; O’Muircheartaigh
and Hedges, 2013; Hartman
et al., 2015; Kern
et al., 2016; Dahabreh et al., 2020) (reviewed in Colnet et al. (2020); Degtiar and
Rose (2022)) and can come under many variants named generalization, transportability, recoverability, and data-fusion.
In fact, the idea of re-weighting a trial can be traced back to before the 2010s. Several epidemiology books had already presented the core idea under the name standardization (Rothman and
Greenland, 2000; Rothman, 2011).
In this work, we focus on one estimator used to generalize RCTs: the Inverse Propensity of Sampling Weighting (IPSW) (Cole and
Stuart, 2010; Stuart
et al., 2011), also named Inverse Odds of Sampling Weights (IOSW) (Westreich et al., 2017; Josey
et al., 2021) or Inverse probability of participation weighting (IPPW) (Degtiar and
Rose, 2022).
Despite an increasing literature on generalization, important practical questions remain open (Kern
et al., 2016; Tipton
et al., 2016; Stuart and
Rhodes, 2017; Ling
et al., 2022).
For instance, which covariates (e.g. age) should be used to build the weights? Are some covariates increasing or lowering the overall precision? What is the impact of the size of the two samples (trial and representative sample) on the IPSW's properties?
Outline
We start by illustrating the principles of trial re-weighting and some key results of this article on a toy example (Section 2). Section 2 ends with related works. Then Section 3 introduces the mathematical notation, assumptions, and the precise definition of the IPSW estimator. In particular, we present several versions of the IPSW estimator, depending on whether the covariate distributions of the trial and of the target population are estimated from the data or assumed known as oracles.
This links our results to classic work in causal inference and epidemiology. Section 4 contains all the theoretical results, such as finite sample bias, variance, bounds on the risk, consistency, and large sample variance.
We also detail why another version of the IPSW, in which the probability of treatment assignment in the trial is also estimated, has a lower variance. Finally, we discuss in Section 4 how additional, non-necessary covariates can either improve or degrade the variance, depending on their status: whether they are only shifted between the two populations or only treatment-effect modifiers.
Section 5 completes the toy example and illustrates all theoretical results on an extensive semi-synthetic example inspired from the medical domain.
Finally, Section 6 summarizes all practical takeaways of this work and discusses them.
2 Problem setting
2.1 Toy example
2.1.1 Context and intuitive estimation strategy
Assume that we would like to measure the average treatment effect (ATE) of a treatment $A$ on an outcome $Y$ in a target population of interest $\mathcal{P}_{\text{\tiny T}}$ (for target), and that an existing Randomized Controlled Trial (RCT) had already been conducted on $n=150$ individuals, sampled from a population $\mathcal{P}_{\text{\tiny R}}$ (for randomized), to assess the average effect of $A$ on $Y$. Usually, the average treatment effect is estimated from a trial via a Horvitz-Thompson estimator (Horvitz and Thompson, 1952),
$$\displaystyle\hat{\tau}_{\text{\tiny HT},n}$$
$$\displaystyle=\frac{1}{n}\sum_{i\in\text{Trial}}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right),$$
(1)
where $\pi$ is the probability of treatment allocation in the trial (in most applications, $\pi=0.5$). Figure 1 presents results of a simulated trial with an average treatment effect around $8.2$. In addition, assume that the trial provides evidence that the treatment effect is heterogeneous with respect to a certain genetic mutation denoted $X$ (with $X=1$ for the mutation, and $X=0$ if no mutation). More specifically, the average treatment effect conditional on $X$ is larger for individuals with $X=1$ than for those with $X=0$. This situation is illustrated in Figure 1, where the average effect per stratum of $X$ is also represented. We have at hand a representative sample of $m=1000$ individuals from the target population we are interested in (for example, from an existing observational database). We observe that individuals with the genetic mutation ($X=1$) are over-represented in the trial compared to the target population of interest (see Figure 2). As a consequence, the trial overestimates the target population's ATE we are interested in.
Fortunately, the representative sample of the target population can be used to learn weights, and re-weight the trial data in the following way,
$$\hat{\tau}_{n,m}=\frac{1}{n}\sum_{i\in\text{Trial}}\underbrace{\hat{w}_{n,m}(X_{i})}_{\textrm{Weights}}\underbrace{\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)}_{\textrm{Horvitz-Thompson}}.$$
(2)
As detailed later on, the weights $\hat{w}_{n,m}$ aim at estimating the probability ratio $\frac{p_{\text{\tiny T}}\left(x\right)}{p_{\text{\tiny R}}\left(x\right)}$, where $p_{\text{\tiny T}}\left(x\right)$ (resp. $p_{\text{\tiny R}}\left(x\right)$) is the probability of observing an individual with characteristics $X=x$ in the target (resp. randomized) population. The weights $\hat{w}_{n,m}$ depend on the sizes of the randomized and observational data sets, namely $n$ and $m$.
Consequently, the ATE estimator $\hat{\tau}_{n,m}$ depends on the size of two data sets, raising questions on how this estimator behaves (bias and variance) as a function of $n$ and $m$.
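A minimal simulation sketch of this re-weighting, under an assumed data-generating process of our own (binary $X$, conditional effects of $10$ for $X=1$ and $5$ for $X=0$, and sample sizes larger than the toy example's $n=150$ for a stable illustration; none of these numbers are the paper's exact DGP):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, pi = 5000, 20000, 0.5
p_R, p_T = 0.8, 0.3   # P(X = 1) in the trial vs. in the target population

# Trial: randomized treatment A, heterogeneous effect (10 if X = 1, else 5)
X = rng.binomial(1, p_R, n)
A = rng.binomial(1, pi, n)
Y = A * np.where(X == 1, 10.0, 5.0) + rng.normal(0.0, 1.0, n)

X_target = rng.binomial(1, p_T, m)   # representative sample of the target

# Horvitz-Thompson terms; their mean estimates the *trial* ATE (about 9)
ht = Y * A / pi - Y * (1 - A) / (1 - pi)
tau_ht = ht.mean()

# IPSW: plug-in weights estimating p_T(x) / p_R(x) from the two samples
pR_hat = np.array([np.mean(X == 0), np.mean(X == 1)])
pT_hat = np.array([np.mean(X_target == 0), np.mean(X_target == 1)])
tau_ipsw = np.mean((pT_hat[X] / pR_hat[X]) * ht)  # about the target ATE 6.5
```

With the mutation over-represented in the trial ($0.8$ vs. $0.3$), `tau_ht` sits near the trial ATE of $9$, while the reweighted `tau_ipsw` recovers the target ATE of $6.5$, illustrating the correction described above.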
2.1.2 Simulations and first observations
To investigate empirically how $\hat{\tau}_{n,m}$ behaves, we run simulations following the Data Generating Process (DGP) described in Section 2.1.1 and represented in Figure 2(a). Figure 2(b) shows the different estimators in action: the re-weighted trial compensates for the distribution shift as expected.
Figure 2(b) also shows that estimating $\pi$ from the data and plugging it into Equation 2 leads to a clear gain in variance.
This phenomenon is linked to seminal works in causal inference, and is further demonstrated in Section 4.2.
Finally, Figure 2(c) shows that
if $m$ remains small compared to $n$ or if $n$ remains small compared to $m$, then the asymptotic variance regime differs (see Corollary 2 for a formal statement, and Figure 6 for an illustration of the theoretical results).
For correct trial generalization, all shifted treatment effect modifier baseline covariates (see Definitions 11 and 12, Section 4.3), such as the genetic mutation $X$, are necessary (Stuart et al., 2011).
But, in practice, one may be tempted to add as many covariates $V$ as available to account for all possible sources of external validity bias.
Doing so, we may add covariates $V$ that are not needed to properly estimate the weights. This is the case if (i) $V$ is shifted between the two data sets but is in fact not a treatment effect modifier, or if (ii) $V$ is a treatment effect modifier but is not shifted between the two data sets.
Figure 3(a) shows that
in $(i)$, the covariate $V$ should not be added,
as it can considerably inflate the variance and therefore damage the precision (see Corollary 4 for a formal statement);
while in $(ii)$, Figure 3(b) highlights that the covariate $V$ should be added as
the precision can be augmented by adding such covariates (see Corollary 5 for a formal statement).
In Section 4, we prove these phenomena by deriving explicit finite sample and asymptotic results to characterize the re-weighting process.
2.2 Related work
The estimator $\hat{\tau}_{n,m}$ introduced in the toy example (Equation 2) is an exact implementation of the so-called Inverse Propensity of Sampling Weighting (IPSW), where the word sampling comes from the popular habit of modeling the problem as that of a randomized trial suffering from selection bias (Cole and Stuart, 2010; Bareinboim and Pearl, 2012a; Tipton, 2013; Dahabreh et al., 2019).
Note that the estimator introduced in Equation 2 can also be linked to post-stratification (Imbens, 2011; Miratrix et al., 2013), which belongs to the family of adjustment methods on a single RCT.
Note that beyond trial re-weighting, other estimation strategies can be chosen when it comes to generalization, for example stratification (Tipton, 2013; O’Muircheartaigh and Hedges, 2013), modeling the response (G-formula or outcome modeling) (Kern et al., 2016; Dahabreh et al., 2019), combining both strategies in a so-called doubly-robust approach (AIPSW) (Dahabreh et al., 2019, 2020), or entropy balancing (Josey et al., 2021; Lee et al., 2021).
Link with IPW
The IPSW can be related, to a certain extent, to the well-known Inverse Propensity Weighting (IPW) estimator in the context of a single observational data set (Hirano et al., 2003). Indeed, this corresponds to a mirror situation, where the weights are no longer the probability ratio but the probability of being treated (the propensity score, Rosenbaum and Rubin, 1983). Robins et al. (1992); Hahn (1998); Hirano et al. (2003) showed that IPW is more efficient when the weights are estimated rather than when relying on oracle weights.
This curious phenomenon can even be found in other areas of statistics (Efron and
Hinkley, 1978).
Beyond efficient estimation with a minimal adjustment set, it is known that additional, non-necessary baseline covariates in the adjustment set of the IPW can increase the variance (the so-called instruments) (Velentgas et al., 2013; Schnitzer et al., 2015; Wooldridge, 2016), while another class of covariates (those linked only to the outcome, also called outcome-related covariates, risk factors, or precision covariates) improves precision (Hahn, 2004; Lunceford and Davidian, 2004; Brookhart et al., 2006; Lefebvre et al., 2008; Witte and Didelez, 2018). A recent crash course about good and bad controls recalls this phenomenon (Cinelli et al., 2022).
Finally, another very recent line of research consists in determining, given a Directed Acyclic Graph (DAG), the asymptotically efficient adjustment set for ATE estimation. This is also named the ‘optimal’ valid adjustment set (O-set), i.e., the adjustment set ensuring the smallest asymptotic variance among all valid adjustment sets. Henckel et al. (2022) propose a result for linear models, and Rotnitzky and Smucler (2020) extend this work to any non-parametrically adjusted estimator. Such methods are meant for complex DAGs where several possible adjustment sets can be used.
Theoretical results on IPSW
Expressions of the variance have been proposed for an estimator related to the IPSW: the stratification estimator (O’Muircheartaigh and Hedges, 2013; Tipton, 2013). These results only consider the situation of an infinite target sample.
Similar expressions can also be found in Rothman and Greenland (2000), again assuming a target sample infinitely larger than the trial sample.
Buchanan et al. (2018) propose theoretical properties, such as the asymptotic variance, of a variant of the IPSW under a parametric model, using M-estimation methods for the proof (Stefanski and Boos, 2002). Why a variant? Because their proof covers the so-called nested design, that is, a trial embedded in a larger observational population, so that there is only a single data set to consider and not two.
In addition, we have found no discussion, neither empirical nor theoretical, about the impact of adding non-necessary covariates on the properties (e.g., bias, variance) of the IPSW (or of any other generalization estimator).
Egami and Hartman (2021) propose a method to estimate a separating set, i.e., a set of variables affecting both the sampling mechanism and treatment effect heterogeneity, in particular when the trial contains many more covariates than the target population sample. However, their work focuses on identification.
Huitfeldt et al. (2019) also consider covariate selection for generalization, but focus on which covariates are necessary depending on the causal measure chosen (ratio, difference, or other).
Yang et al. (2020) address a similar problem (for non-probability samples and mean estimation), where they advocate selecting all variables, even instrumental ones, for robustness, although this may come at the cost of a drop in efficiency.
Note that some existing practical recommendations advocate adding as many covariates as possible (Stuart and Rhodes, 2017).
Contributions
This work considers several variants of the IPSW, depending on whether the weights are oracle, semi-oracle, or estimated.
In this context, we derive the asymptotic variance of all the IPSW variants and show that several asymptotic regimes exist, depending on the relative size of the RCT compared to the target sample.
We also provide finite sample expressions of the bias and variance for all the IPSW variants introduced, allowing one to bound the risk of these estimators for any sample sizes (trial and target population).
From these theoretical results, we explain why the addition of some non-necessary covariates to the adjustment set has a large impact on precision, for better or worse.
Indeed, while non-shifted treatment effect modifiers improve precision by lowering the variance, adding shifted covariates that are not predictive of the outcome considerably reduces the statistical power of the analysis by inflating the variance.
For this latter situation, we provide an explicit formula of the variance inflation when the additional covariate set is independent of the necessary one.
These results have important consequences for practitioners because they allow us to give precise recommendations on how to select covariates.
Note that we link our work to seminal works in causal inference, showing that semi-oracle estimation outperforms completely oracle estimation, while the exact IPW result on efficient estimation cannot be fully extended to the case of generalization.
All our results assume a parametric form neither for the outcome nor for the sampling process, but are established at the cost of restricting the scope to categorical covariates for adjustment. Within the medical domain, scores or categories are often used to characterize individuals, which justifies this approach.
3 Notations and assumptions for causal identifiability
3.1 Notations
3.1.1 Problem setting
The notations and assumptions used in this work are grounded in the potential outcome framework (Imbens and
Rubin, 2015).
We assume to have at hand two data sets:
A randomized controlled trial
denoted $\mathcal{R}$ (for randomized), assessing the efficacy of a binary treatment $A$ on an outcome $Y$ (ordinal, binary, or continuous), conducted on $n$ iid observations. Each observation $i$ is labelled from $1$ to $n$ and can be modelled as sampled from a distribution $P_{\text{\tiny R}}(X,Y^{(1)},Y^{(0)},A)$ on $\mathds{X}\times\mathbb{R}^{2}\times\{0,1\}$, where $\mathds{X}$ is a categorical support. For any observation $i$, $A_{i}$ denotes the binary treatment assignment (with $A_{i}=0$ if no treatment and $A_{i}=1$ if treated), and $Y_{i}^{(a)}$ is the outcome had the subject been given treatment $a$ (for $a\in\{0,1\}$), which is assumed to be square integrable.
$Y_{i}$ denotes the observed outcome, defined as $Y_{i}=A_{i}\,Y_{i}^{(1)}+(1-A_{i})\,Y_{i}^{(0)}$. In addition, this trial is assumed to be a Bernoulli trial, with a constant probability of treatment assignment for all units and independence of treatment allocation between units (see Definition 13 in appendix; for a review of trial designs, in particular the difference between a Bernoulli and a completely randomized design, we refer the reader to Chapter 2 of Imbens and Rubin (2015)). We denote $\mathbb{P}_{\text{\tiny R}}\left[A_{i}=1\right]=\pi$. $X_{i}$ is a $p$-dimensional vector of categorical covariates accounting for individual characteristics of observation $i$;
A sample of the target population of interest
denoted $\mathcal{T}$ (for target), containing $m$ iid individuals drawn from a distribution $P_{\text{\tiny T}}(X,Y^{(1)},Y^{(0)},A)$ on $\mathds{X}\times\mathbb{R}^{2}\times\{0,1\}$, labelled from $n+1$ to $n+m$. In this data set, we only observe the individual categorical characteristics $X_{i}$. For simplicity, we further use the notation $P_{\text{\tiny T}}(X)$ for the marginal distribution of $X$ under $P_{\text{\tiny T}}$.
Finally, the probability of $X$ in the target population (resp. trial population) is denoted $p_{\text{\tiny T}}(x)$ (resp. $p_{\text{\tiny R}}(x)$).
Mathematically, a covariate shift between the two populations occurs when there exists $x\in\mathds{X}$ such that $p_{\text{\tiny R}}(x)\neq p_{\text{\tiny T}}(x)$. The setting and notations are summarized in Figure 5.
Comments on the notations
Note that a large part of the literature models the problem with a sampling mechanism from a super population. In that framing, the target and trial samples are assumed to be sampled from this super population through different mechanisms, leading to a distributional shift of the trial (e.g., the framing in Stuart et al., 2011; Hartman, 2021). Still, as long as we are not working with a nested trial (that is, a trial embedded in the target sample) and only baseline covariates are considered for adjustment, the framing with a sampling model is equivalent to the problem setting introduced above (Colnet et al., 2020; Westreich et al., 2017).
Note that the literature is increasingly adopting the framing that we use here (Kern et al., 2016; Nie et al., 2021; Chattopadhyay et al., 2022).
3.1.2 Target quantity of interest
Recall that two distributions, indexed by R and T are involved in our problem setting (Section 3.1.1). Therefore, we will use these indices to denote quantities (expectations, probabilities) taken with respect to these distributions,
for example $\mathbb{E}_{\text{\tiny R}}\left[.\right]$ (resp. $\mathbb{E}_{\text{\tiny T}}\left[.\right]$) for an expectation over $P_{\text{\tiny R}}$ (resp. $P_{\text{\tiny T}}$).
We define the target population average treatment effect (ATE), sometimes called the TATE (for Target ATE):
$$\displaystyle\tau:=\mathbb{E}_{\text{\tiny T}}\left[Y^{(1)}-Y^{(0)}\right].$$
(3)
Because the randomized controlled data $\mathcal{R}$ are not sampled from the target population of interest, the sample average treatment effect $\tau_{\text{\tiny R}}$ (sometimes called SATE for Sample) estimated from this population,
$$\tau_{\text{\tiny R}}:=\mathbb{E}_{\text{\tiny R}}\left[Y^{(1)}-Y^{(0)}\right],$$
may be biased, that is $\tau_{\text{\tiny R}}\neq\tau$. While not being the target quantity of interest, we also introduce the so-called Conditional Average Treatment Effect (CATE), as
$$\forall x\in\mathds{X},\,\tau(x):=\mathbb{E}_{\text{\tiny T}}\left[Y^{(1)}-Y^{(0)}\mid X=x\right].$$
3.2 Identification assumptions
Assumptions are needed to generalize the findings from the trial population $P_{\text{\tiny R}}$ to the target population $P_{\text{\tiny T}}$.
Assumptions on the trial
We first need validity of the trial, also called internal validity. These assumptions are the usual ones formulated in causal inference, and in particular for randomized controlled trials within the potential outcomes framework (Imbens and
Rubin, 2015; Hernan, 2020).
Assumption 1 (Representativity of the randomized data).
For all $i\in\mathcal{R},X_{i}\sim P_{\text{\tiny R}}(X)$ where $P_{\text{\tiny R}}$ is the population distribution from which the RCT was sampled.
Assumption 2 (Trial’s internal validity).
The RCT at hand $\mathcal{R}$ is assumed to be internally valid, such that
(i)
Consistency and no interference hold, that is: $\forall i\in\mathcal{R},\,Y_{i}=A_{i}\,Y_{i}^{(1)}+(1-A_{i})\,Y_{i}^{(0)}$
–an assumption often termed SUTVA (stable unit treatment value);
(ii)
Treatment randomization holds, that is: $\forall i\in\mathcal{R},\,\left\{Y^{(1)}_{i},Y^{(0)}_{i}\right\}\perp A_{i}$;
(iii)
Positivity of trial treatment assignment holds, that is: $0<\pi<1$ (usually $\pi=0.5$).
Assumptions for generalization
The two following assumptions are specific to generalization or transportability.
Assumption 3 (Transportability).
$\forall x\in\mathds{X},\,\mathbb{P}_{\text{\tiny R}}(Y^{(1)}-Y^{(0)}\mid X=x)=\mathbb{P}_{\text{\tiny T}}(Y^{(1)}-Y^{(0)}\mid X=x).$
The transportability assumption (Stuart et al., 2011; Pearl and Bareinboim, 2011), also called sample ignorability for treatment effects (Kern et al., 2016) or conditional ignorability (Hartman, 2021), is probably the most important assumption for generalizing or transporting the trial findings to the target population, as it requires access to all shifted covariates that are treatment effect modifiers.
In other words, it assumes that all the systematic variations in the treatment effect are captured by the covariates $X$ (O’Muircheartaigh
and Hedges, 2013).
The covariates $X$ are usually named the adjustment or separating set.
Note that the concept of treatment effect modifiers depends on the causal measure chosen; in this paper, we only consider the absolute difference, the most common measure for a continuous outcome, as detailed in Equation 3. Had we chosen the log-odds ratio, for instance, the covariates acting as treatment effect modifiers could be different.
Finally, note that Pearl and Bareinboim (2011) introduce selection diagrams to formalize this assumption using causal diagrams. Pearl (2015) details why diagrams can cover more identification scenarios. But in this work, we only consider baseline covariates for the transportability assumption (i.e., no front-door adjustment).
Assumption 4 (Support inclusion).
$\forall x\in\mathds{X},\;p_{\text{\tiny R}}(x)>0$, and
$\operatorname{supp}(P_{\text{\tiny T}}(X))\subset\operatorname{supp}(P_{\text{\tiny R}}(X))$.
Note that this last assumption is sometimes referred to as the positivity of trial participation and can also be viewed as a sampling process with non-zero probability for all individuals.
3.3 Estimators
In this work, we denote any estimator targeting a quantity $\tau$ as $\hat{\tau}_{n,m}$, where the index $n$ or $m$ indicates which data were used in the estimation strategy. For example, an estimator $\hat{\tau}_{n}$ (resp. $\hat{\tau}_{m}$) only uses the trial data (resp. observational data), whereas $\hat{\tau}_{n,m}$ uses both data sets.
3.3.1 Within-trial estimators of ATE
Two classical estimators targeting $\tau_{\text{\tiny R}}$ from trial data are the Horvitz-Thompson and Difference-in-means estimators.
Definition 1 (Horvitz-Thompson - Horvitz and Thompson (1952)).
The Horvitz-Thompson estimator is denoted $\hat{\tau}_{\text{\tiny HT},n}$ and defined as,
$$\hat{\tau}_{\text{\tiny HT,}n}=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{A_{i}Y_{i}}{\pi}-\frac{\left(1-A_{i}\right)Y_{i}}{1-\pi}\right).$$
Under a Bernoulli design (constant and independent probability $\pi$ of being treated), the Horvitz-Thompson estimator $\hat{\tau}_{\text{\tiny HT},n}$ is an unbiased and consistent estimator of $\tau_{\text{\tiny R}}$, and its variance satisfies, for all $n$,
$$n\operatorname{Var}\left[\hat{\tau}_{\text{\tiny HT},n}\right]=\mathbb{E}_{\text{\tiny R}}\left[\frac{\left(Y^{(1)}\right)^{2}}{\pi}\right]+\mathbb{E}_{\text{\tiny R}}\left[\frac{\left(Y^{(0)}\right)^{2}}{1-\pi}\right]-\tau_{\text{\tiny R}}^{2}:=V_{\text{\tiny HT}}.$$
(4)
Definition 2 (Difference-in-means - Neyman (1923) and its English translation Splawa-Neyman et al. (1990)).
The Difference-in-means estimator is denoted $\hat{\tau}_{\text{\tiny DM,}n}$ and defined as
$$\hat{\tau}_{\text{\tiny DM,}n}=\frac{1}{n_{1}}\sum_{A_{i}=1}Y_{i}-\frac{1}{n_{0}}\sum_{A_{i}=0}Y_{i},\quad\text{where }n_{a}=\sum_{i=1}^{n}\mathbbm{1}_{A_{i}=a}.$$
The Difference-in-means is also referred to as the simple difference estimator (e.g., in Miratrix et al., 2013) or as the difference in the sample means of the observed outcome variable between the treated and control groups (e.g., in Imai et al., 2008).
Under a Bernoulli design, the Difference-in-means estimator is a consistent estimator of $\tau_{\text{\tiny R}}$, and its finite sample variance is bounded by
$$n\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM},n}\right]\leq\frac{\operatorname{Var}\left[Y^{(1)}\right]}{\pi}+\frac{\operatorname{Var}\left[Y^{(0)}\right]}{1-\pi}+\mathcal{O}\left(n^{-1/2}\right),$$
(5)
and its large sample variance satisfies,
$$\lim_{n\to\infty}n\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM},n}\right]=\frac{\operatorname{Var}\left[Y_{i}^{(1)}\right]}{\pi}+\frac{\operatorname{Var}\left[Y_{i}^{(0)}\right]}{1-\pi}:=V_{\text{\tiny DM},\infty}.$$
(6)
An explicit expression of the finite sample bias and variance of $\hat{\tau}_{\text{\tiny DM},n}$ is given in appendix (see Lemma 2). What will be used later on is the fact that the Difference-in-means estimator can be viewed as a variant of the Horvitz-Thompson estimator where the probability of being treated $\pi$ (or propensity score) is estimated, that is,
$$\hat{\tau}_{\text{\tiny DM,}n}=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{A_{i}\,Y_{i}}{\hat{\pi}}-\frac{(1-A_{i})\,Y_{i}}{1-\hat{\pi}}\right),\quad\text{where }\hat{\pi}=\frac{\sum_{i=1}^{n}A_{i}}{n}.$$
Counter-intuitively, the benefit of estimating $\pi$ is to lower the variance.
Even if the true probability is $\pi=0.5$, the actual treatment allocation in the sample can differ (e.g., $\hat{\pi}=0.48$), and using $\hat{\pi}$ rather than $\pi$ leads to a smaller large sample variance by adjusting to the exact observed probability of being treated in the trial. One can be convinced of this phenomenon by comparing the two variances,
$$V_{\text{\tiny DM},\infty}=V_{\text{\tiny HT}}-\left(\sqrt{\frac{1-\pi}{\pi}}\mathbb{E}_{\text{\tiny R}}[Y^{(1)}]+\sqrt{\frac{\pi}{1-\pi}}\mathbb{E}_{\text{\tiny R}}[Y^{(0)}]\right)^{2}\leq V_{\text{\tiny HT}}.$$
(7)
Appendix D recalls the derivations leading to (4) to (7).
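Inequality (7) is easy to verify numerically. The Monte Carlo sketch below (our own toy design, with a large outcome mean so that the gap in (7) is visible) compares the empirical variances of the Horvitz-Thompson estimator with the true $\pi$ and of its plug-in version with $\hat{\pi}$, which coincides with the Difference-in-means estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy design: Y^(0) has mean 10 and the effect is tau_R = 2, so the
# correction term in (7) is large and Var(HT) >> Var(DM).
n, pi, n_rep = 200, 0.5, 5_000
mu0, tau_R = 10.0, 2.0

ht, dm = [], []
for _ in range(n_rep):
    A = rng.binomial(1, pi, n)
    Y = mu0 + tau_R * A + rng.normal(0.0, 1.0, n)
    ht.append(np.mean(A * Y / pi - (1 - A) * Y / (1 - pi)))
    pi_hat = A.mean()  # estimated allocation probability
    dm.append(np.mean(A * Y / pi_hat - (1 - A) * Y / (1 - pi_hat)))

print(np.var(ht), np.var(dm))  # the plug-in version has much lower variance
```

Both estimators are centered on $\tau_{\text{\tiny R}}=2$, but here $V_{\text{\tiny HT}}=488$ while $V_{\text{\tiny DM},\infty}=4$, consistent with (7).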
Other estimators of $\tau_{\text{\tiny R}}$ exist and rely on prognostic covariates (a strategy called adjustment), such as outcome modeling or post-stratification. Below (Section 4.2), we introduce the post-stratification estimator, corresponding to the Horvitz-Thompson estimator where $\pi$ is estimated within each stratum.
3.3.2 Re-weighting estimator for generalizing the trial findings
As mentioned in Subsection 2.2, in this work we focus on the re-weighting strategy, that is, the Inverse Propensity of Sampling Weighting (IPSW) estimator (Cole and Stuart, 2010; Stuart et al., 2011).
Definition 3 (Completely oracle IPSW).
The completely oracle IPSW estimator is denoted $\hat{\tau}_{\pi,\text{\tiny T, R},n}^{*}$, and defined as
$$\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}=\frac{1}{n}\sum_{i=1}^{n}\frac{p_{\text{\tiny T}}(X_{i})}{p_{\text{\tiny R}}(X_{i})}Y_{i}\left(\frac{A_{i}}{\pi}-\frac{1-A_{i}}{1-\pi}\right)\,,$$
(8)
where $\frac{p_{\text{\tiny T}}(X_{i})}{p_{\text{\tiny R}}(X_{i})}$ are called the weights or the nuisance components.
Definition 3 corresponds to a completely oracle IPSW, where $p_{\text{\tiny T}}$, $p_{\text{\tiny R}}$, and the trial allocation probability $\pi$ are known.
3.3.3 Probability ratio estimation
In practice, neither $p_{\text{\tiny R}}$ nor $p_{\text{\tiny T}}$ is known, and therefore one needs to estimate these probabilities. As explained in Subsection 3.1.1, we consider the case where $X$ is composed of categorical covariates only. In such a situation, a practical IPSW estimator can be built from Definition 3 by estimating the probabilities $p_{\text{\tiny T}}$ and $p_{\text{\tiny R}}$ by their empirical counterparts (that is, counting how many observations fall in each category in the trial and target samples).
Definition 4 (Probability estimation).
Under the setting defined in Subsection 3.1.1,
$$\forall x\in\mathds{X},\;\;\hat{p}_{\text{\tiny T},m}(x):=\frac{1}{m}\sum_{i\in\mathcal{T}}\mathbbm{1}_{X_{i}=x}\;\;\text{ and }\;\hat{p}_{\text{\tiny R},n}(x):=\frac{1}{n}\sum_{i\in\mathcal{R}}\mathbbm{1}_{X_{i}=x}.$$
Having defined a method for probability estimation, one can build practical IPSW variants.
Definition 5 (Semi-oracle IPSW).
The semi-oracle IPSW estimator $\hat{\tau}_{\pi,\text{\tiny T},n}^{*}$ is defined as
$$\hat{\tau}_{\pi,\text{\tiny T},n}^{*}=\frac{1}{n}\sum_{i=1}^{n}\frac{p_{\text{\tiny T}}(X_{i})}{\hat{p}_{\text{\tiny R},n}(X_{i})}Y_{i}\left(\frac{A_{i}}{\pi}-\frac{1-A_{i}}{1-\pi}\right)\,,$$
(9)
where $\hat{p}_{\text{\tiny R},n}$ is estimated according to Definition 4.
Note that this semi-oracle estimator corresponds to the so-called standardization procedure described in Rothman and
Greenland (2000).
Definition 6 (IPSW).
The (estimated) IPSW estimator $\hat{\tau}_{\pi,n,m}$ is defined as
$$\hat{\tau}_{\pi,n,m}=\frac{1}{n}\sum_{i=1}^{n}\frac{\hat{p}_{\text{\tiny T},m}(X_{i})}{\hat{p}_{\text{\tiny R},n}(X_{i})}Y_{i}\left(\frac{A_{i}}{\pi}-\frac{1-A_{i}}{1-\pi}\right)\,,$$
(10)
where $\hat{p}_{\text{\tiny R},n}$ and $\hat{p}_{\text{\tiny T},m}$ are estimated according to Definition 4.
Definition 6 corresponds to the classical implementation of the IPSW since, in practice, the probabilities $p_{\text{\tiny R}}$ and $p_{\text{\tiny T}}$ are not known and must be estimated.
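Definitions 4 and 6 combine into a short implementation. The snippet below runs it on a hypothetical three-category covariate (probabilities and stratum effects are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical DGP with one categorical covariate taking three values.
n, m, pi = 2_000, 20_000, 0.5
categories = np.array([0, 1, 2])
p_R = np.array([0.5, 0.3, 0.2])   # trial category probabilities
p_T = np.array([0.2, 0.3, 0.5])   # target category probabilities
cate = np.array([1.0, 3.0, 5.0])  # treatment effect per category

X_trial = rng.choice(categories, n, p=p_R)
X_target = rng.choice(categories, m, p=p_T)
A = rng.binomial(1, pi, n)
Y = cate[X_trial] * A + rng.normal(0.0, 1.0, n)

# Definition 4: empirical probabilities (category counts).
p_R_hat = np.array([np.mean(X_trial == x) for x in categories])
p_T_hat = np.array([np.mean(X_target == x) for x in categories])

# Definition 6: re-weighted Horvitz-Thompson terms.
w = p_T_hat[X_trial] / p_R_hat[X_trial]
tau_ipsw = np.mean(w * (A * Y / pi - (1 - A) * Y / (1 - pi)))

tau_true = float(np.dot(p_T, cate))  # target ATE for this DGP
print(tau_ipsw, tau_true)
```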
Another interpretation of IPSW
Note that the IPSW can be understood differently, thanks to the fact that the covariates used for adjustment are categorical. Indeed, it is possible to rewrite the IPSW estimator from Definition 6 as,
$$\displaystyle\hat{\tau}_{\pi,n,m}=\sum_{x\in\mathds{X}}\frac{m_{x}}{m}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}\frac{1}{n_{x}}\left(\frac{A_{i}Y_{i}}{\pi}-\frac{(1-A_{i})Y_{i}}{1-\pi}\right)=\sum_{x\in\mathds{X}}\frac{m_{x}}{m}\hat{\tau}_{\text{\tiny HT},n_{x}},$$
where $m_{x}=\sum_{i=n+1}^{n+m}\mathds{1}_{X_{i}=x}$ and $n_{x}=\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}$. This corresponds to a procedure where stratum average treatment effects are estimated with a Horvitz-Thompson procedure and then aggregated with weights corresponding to the target sample proportions. Miratrix et al. (2013) also discuss a similar approach in their section 5, but where the sample proportions correspond to the true target population of interest. In a way, our work extends this situation to a more general case, considering the noise due to the sampling process from two populations.
Comment about oracle and semi-oracle interest
The completely oracle and the semi-oracle estimators are not used in practice, as usually none of the true probabilities is known. Still, they both correspond to asymptotic situations that are of interest for understanding the IPSW. For instance:
•
Studying $\hat{\tau}_{\pi,\text{\tiny T},\text{\tiny R},n}^{*}$ allows us to observe the effect of averaging over the trial sample $\mathcal{R}$, without the variability due to the estimation of the covariate probabilities ($\hat{p}_{\text{\tiny R},n}$ and $\hat{p}_{\text{\tiny T},m}$);
•
Studying $\hat{\tau}_{\pi,\text{\tiny T},n}^{*}$ allows us to understand the situation where the target sample $\mathcal{T}$ is infinite ($m\rightarrow\infty$).
In addition, studying these estimators allows us to link our results with seminal works in causal inference showing that the estimated propensity score can lead to better properties than an oracle one (Robins
et al., 1992; Hahn, 1998; Hirano
et al., 2003).
Note that we could introduce another semi-oracle estimator, where $p_{\text{\tiny R}}$ is known but not $p_{\text{\tiny T}}$. This specific estimator does not correspond to a limit situation helping to interpret the results, as it would amount to learning the covariate probabilities in the trial on an infinite sample while still averaging the treatment effect estimate over a finite one. Finally, since all covariates are assumed to be categorical in our framework, trial and observational densities (continuous covariates) turn into trial and observational probabilities (categorical covariates). Oracles and semi-oracles would differ when considering continuous covariates, as the weights would be replaced by density estimation or by estimation of the probability of being in the target population (instead of the experimental sample) (e.g., see Kern et al., 2016; Nie et al., 2021), sometimes directly estimating the ratio by binding the two data sources and therefore making the notion of semi-oracle obsolete.
4 Theoretical results
4.1 Bias and variance of IPSW variants in finite-sample regime
In this section, we expose our main theoretical results on the three variants of the IPSW estimator (Definitions 3, 5, and 6).
The following results rely on the variance of the Horvitz-Thompson estimator on a given stratum $x$ (see Definition 1), denoted $V_{\text{\tiny HT}}(x)$ and defined as,
$$V_{\text{\tiny HT}}(x):=\mathbb{E}_{\text{\tiny R}}\left[\frac{\left(Y^{(1)}\right)^{2}}{\pi}\mid X=x\right]+\mathbb{E}_{\text{\tiny R}}\left[\frac{\left(Y^{(0)}\right)^{2}}{1-\pi}\mid X=x\right]-\tau(x)^{2}.$$
(11)
In this equation, we removed the index $R$ of $\tau(x)$ as $\tau_{\text{\tiny R}}(x)=\tau_{\text{\tiny T}}(x)=\tau(x)$, thanks to Assumption 3.
Removing the index on the two conditional expectations would require going beyond the classical transportability assumption, by assuming that
$$\forall a\in\{0,1\},P_{\text{\tiny R}}(Y^{(a)}\mid X=x)=P_{\text{\tiny T}}(Y^{(a)}\mid X=x),$$
i.e. $X$ contains all the covariates being shifted and predictive of the outcome, which is stronger than Assumption 3.
4.1.1 Properties of the completely oracle IPSW
The following result establishes consistency and the finite sample bias and variance of the oracle IPSW, extending the preceding results of Egami and Hartman (2021) (see their appendix, Section SM-2).
Theorem 1 (Properties of the completely oracle IPSW).
Under the general setting defined in Subsection 3.1.1, granting Assumptions 1-4, the completely oracle IPSW is unbiased and has an explicit variance expression, that is, for all $n$,
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},\text{\tiny R},n}^{*}\right]$$
$$\displaystyle=\tau,\quad\text{and}\quad\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}\right]=\frac{V_{o}}{n},\quad\text{where}\quad V_{o}:=\operatorname{Var}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}(X)}{p_{\text{\tiny R}}(X)}\tau(X)\right]+\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}(X)}{p_{\text{\tiny R}}(X)}\right)^{2}V_{\text{\tiny HT}}(X)\right].$$
As a consequence, for all $n$, the quadratic risk of the completely oracle IPSW is given by,
$$\mathbb{E}\left[\left(\hat{\tau}_{\pi,\text{\tiny T},\text{\tiny R},n}^{*}-\tau\right)^{2}\right]=\frac{V_{o}}{n},$$
which implies its $L^{2}$-consistency as $n$ tends to infinity, that is,
$$\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}\stackrel{{\scriptstyle L^{2}}}{{\longrightarrow}}\tau.$$
The finite-sample variance $V_{o}$ depends on the probability ratio, on the amplitude of the treatment effect heterogeneity (through $\tau(x)$), and on the variances of the potential outcomes. In particular, if for some category $x$ the probabilities $p_{\text{\tiny T}}(x)$ and $p_{\text{\tiny R}}(x)$ are very different, the variance incurred when generalizing the trial's findings is large. Note that the convergence rate is the usual one, proportional to $\frac{1}{n}$. Although it is not our main contribution, Theorem 1 is of primary importance for comparing the impact of sample sizes on the performances of the different IPSW variants.
Appendix A.1 provides a detailed proof of Theorem 1 and sheds light on the technical tools used for more complex IPSW variants.
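Theorem 1 can also be checked by simulation. The sketch below (a toy two-category design of our own) compares the Monte Carlo variance of the oracle IPSW with the closed-form $V_{o}/n$:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy design: Y^(1) = tau(x) + noise, Y^(0) = noise (unit variance), so
# V_HT(x) = tau(x)^2 + 4 when pi = 0.5.
n, pi, n_rep = 100, 0.5, 20_000
p_R = np.array([0.6, 0.4])
p_T = np.array([0.3, 0.7])
tau_x = np.array([2.0, 4.0])
tau = float(np.dot(p_T, tau_x))  # target ATE

X = rng.choice([0, 1], size=(n_rep, n), p=p_R)
A = rng.binomial(1, pi, size=(n_rep, n))
Y = tau_x[X] * A + rng.normal(0.0, 1.0, size=(n_rep, n))
w = (p_T / p_R)[X]  # oracle probability ratio
est = np.mean(w * (A * Y / pi - (1 - A) * Y / (1 - pi)), axis=1)

# Closed-form V_o from Theorem 1 for this design.
r = p_T / p_R
v_ht = tau_x ** 2 + 4.0
v_o = (np.dot(p_R, (r * tau_x) ** 2) - np.dot(p_R, r * tau_x) ** 2
       + np.dot(p_R, r ** 2 * v_ht))

print(est.mean(), tau)       # unbiasedness
print(n * est.var(), v_o)    # Monte Carlo variance vs. V_o
```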
4.1.2 Properties of the semi-oracle IPSW
In this section, we study the behaviour of the semi-oracle IPSW (Definition 5), for which the probability $p_{\text{\tiny T}}$ is known but the probability $p_{\text{\tiny R}}$ is estimated. One can obtain $\hat{p}_{\text{\tiny R},n}(x)=0$ for some $x\in\mathds{X}$, even if the true probability is positive, $p_{\text{\tiny R}}(x)>0$. This phenomenon, occurring when no observation in the trial corresponds to the covariate vector $x$, induces a finite sample bias of the IPSW estimate. The performance of the semi-oracle IPSW estimate is thus closely related to $\mathbbm{1}_{Z_{n}(x)>0}$, where $Z_{n}(x)=\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}$, as stated in our next results.
Proposition 1.
Under the general setting defined in Subsection 3.1.1, granting Assumptions 1-4, the bias of the semi-oracle IPSW satisfies, for all $n$,
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]-\tau=-\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\left(1-p_{\text{\tiny R}}(x)\right)^{n}\tau(x),$$
and
$$\displaystyle\biggl{|}\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]-\tau\biggr{|}\leq\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n}\mathbb{E}_{\text{\tiny T}}\left[\left|\tau(X)\right|\right].$$
Moreover, under the same set of assumptions, the variance of the semi-oracle IPSW satisfies, for all $n$,
$$\displaystyle n\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]=$$
$$\displaystyle\sum_{x\in\mathds{X}}p_{\text{\tiny T}}\left(x\right)^{2}V_{\text{\tiny HT}}(x)\mathbb{E}_{\text{\tiny R}}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{\hat{p}_{\text{\tiny R},n}(x)}\right]+n\operatorname{Var}\left[\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathds{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right],$$
and
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]\leq$$
$$\displaystyle\,\frac{2V_{so}}{n+1}+\left(1-\min_{x\in\mathds{X}}p_{\text{\tiny R}}(x)\right)^{n}\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right],$$
with
$$\displaystyle V_{\text{so}}:=$$
$$\displaystyle\,\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}(X)}{p_{\text{\tiny R}}(X)}\right)^{2}V_{\text{\tiny HT}}(X)\right].$$
The proof is detailed in Subsection A.2.1.
Proposition 1 establishes the exact finite-sample bias and variance of the semi-oracle IPSW estimate. Unlike the completely oracle IPSW, the semi-oracle IPSW is biased for small trials (i.e. small $n$), which can be understood by undercoverage of some categories in the trial. Indeed, for small trials, the probability that a category is not represented at all in the RCT may not be negligible. Fortunately, as shown in Proposition 1, this bias converges to zero exponentially with the trial size $n$.
Note that, as soon as $\tau(x)$ is of constant sign, the sign of the bias is known and opposite to that of $\tau(x)$. In fact, because of potentially empty categories in the trial, the expectation of the semi-oracle IPSW estimate $\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]$ is pushed toward zero, if $\tau(x)$ is of constant sign.
Proposition 1 also gives the exact finite-sample expression of the variance for the semi-oracle IPSW estimate.
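As a numerical illustration (with made-up probabilities and effects, not taken from the paper), the exact bias formula of Proposition 1 can be evaluated directly:

```python
import numpy as np

# hypothetical three-stratum setting (illustrative numbers only)
p_T   = np.array([0.2, 0.3, 0.5])   # known target probabilities
p_R   = np.array([0.6, 0.3, 0.1])   # trial probabilities
tau_x = np.array([1.0, 2.0, 3.0])   # per-stratum treatment effects

def bias(n):
    # exact finite-sample bias of the semi-oracle IPSW (Proposition 1)
    return -np.sum(p_T * (1 - p_R) ** n * tau_x)

for n in (10, 50, 200):
    print(n, bias(n))
```

In this example the rarest stratum ($p_{\text{\tiny R}}(x)=0.1$) dominates the bias for moderate $n$, and the bias vanishes exponentially as the proposition predicts.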
Corollary 1 provides asymptotic results derived from these finite-sample expressions:
Corollary 1 (Asymptotics).
Under the same assumptions as in Proposition 1, the semi-oracle IPSW is asymptotically unbiased, and its asymptotic variance satisfies,
$$\lim_{n\to\infty}\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]=\tau,\,\quad\text{and}\quad\lim_{n\to\infty}n\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]=V_{\text{so}}.$$
The proof is detailed in Subsection A.2.2.
The quantity $V_{so}$ already appears in the literature, for example in Rothman and Greenland (2000), where a form of semi-oracle IPSW was introduced under the name standardization. Here, we clarify that this formula is valid only in the large-sample regime, and we provide detailed derivations. Corollary 1 is thus the first theoretical result establishing the asymptotic variance of the semi-oracle IPSW.
One can observe from the explicit derivations that the semi-oracle estimator $\hat{\tau}_{\pi,\text{\tiny T},n}^{*}$ has a lower asymptotic variance than the oracle IPSW $\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}$ recalled in Theorem 1. In particular,
$$V_{so}=V_{o}-\underbrace{\operatorname{Var}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}(X)}{p_{\text{\tiny R}}(X)}\tau(X)\right]}_{\text{always positive}}.$$
This phenomenon shares similar explanations222In fact, similar considerations appear outside causal inference; for example, Efron and Hinkley (1978) argued that the observed information rather than the expected Fisher information should be used to characterize the distribution of maximum-likelihood estimates. with the common (and often surprising) result that an estimated propensity score lowers the variance when re-weighting observational data, compared to an estimator relying on the oracle propensity score (see Robins et al., 1992; Hahn, 1998; Hirano et al., 2003; Lunceford and Davidian, 2004, regarding the IPW estimator).
Intuitively, we only need to generalize from the actual sample to the target population, and not from a source trial population to a target population.
The semi-oracle estimate has a lower asymptotic variance than the completely oracle IPSW but, unlike the latter, it is biased. One can thus wonder how the risks of the two estimates compare. Theorem 2 upper bounds the risk of the semi-oracle estimate:
Theorem 2 (Properties of the semi-oracle IPSW).
Under the general setting defined in Subsection 3.1.1, granting Assumptions 1-4, the quadratic risk of the semi-oracle IPSW satisfies,
$$\mathbb{E}\left[\left(\hat{\tau}_{\pi,\text{\tiny T},n}^{*}-\tau\right)^{2}\right]\,\leq\,\frac{2V_{so}}{n+1}\,+\,2\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n}\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right],$$
which implies its $L^{2}$-consistency as $n$ goes to infinity, that is,
$$\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\stackrel{{\scriptstyle L^{2}}}{{\longrightarrow}}\tau.$$
Subsection A.2.3 details the proof.
The second term in the upper bound of Theorem 2 decreases exponentially with $n$, whereas the first term decreases at rate $1/n$. At first sight, this upper bound is not easy to compare with the risk of the completely oracle IPSW, due to the factor two in front of $V_{so}$. Close inspection of the proof of Theorem 2 reveals that the factor $2$ can be replaced by $(1+\varepsilon)$, for any $\varepsilon>0$, provided that $n$ is large enough (see Lemma 3). The bound presented here is valid for all $n$ and can be improved if $n$ is taken large enough. Therefore, for all $n$ large enough, the first term in the upper bound is close to $V_{so}/(n+1)$, which is smaller than $V_{o}/(n+1)$ (see above); hence the risk of the semi-oracle is smaller than that of the completely oracle IPSW for $n$ large enough. This bound opens the door to guarantees even for small sample sizes. Also note that, unlike $V_{o}$, $V_{so}$ can be estimated from the data.
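As a rough sketch, the two terms of the bound in Theorem 2 can be compared numerically; the constants standing in for $V_{so}$, $\min_{x}p_{\text{\tiny R}}(x)$, and $\mathbb{E}_{\text{\tiny T}}[\tau(X)^{2}]$ below are illustrative placeholders, not values from the paper.

```python
def risk_bound(n, V_so=10.0, p_R_min=0.1, E_tau_sq=5.0):
    """Theorem 2 bound on the quadratic risk of the semi-oracle IPSW:
    2 * V_so / (n + 1)  +  2 * (1 - min_x p_R(x))^n * E_T[tau(X)^2].
    Default constants are illustrative placeholders."""
    parametric = 2 * V_so / (n + 1)
    exponential = 2 * (1 - p_R_min) ** n * E_tau_sq
    return parametric, exponential

for n in (20, 100, 500):
    print(n, risk_bound(n))
```

With these constants the exponential term dominates only for very small $n$ and quickly becomes negligible, leaving the $1/n$ term as the effective bound.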
4.1.3 Properties of the (estimated) IPSW
Previous results on IPSW are valid when the size of the target population goes to infinity.
In this subsection, we establish theoretical guarantees for the estimated IPSW in a more complex setting: we consider finite trial and target population datasets and establish bounds depending on both sample sizes ($n$ and $m$).
Proposition 2.
Under the general setting defined in Subsection 3.1.1, granting Assumptions 1-4, the bias of the estimated IPSW is the same as that of the semi-oracle IPSW, that is, for all $n,m$,
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\right]-\tau\;$$
$$\displaystyle=\;-\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\,(1-p_{\text{\tiny R}}(x))^{n}\,\tau(x).$$
Moreover, under the same set of assumptions, the variance of the estimated IPSW satisfies, for all $n,m$,
$$\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]\,=\,\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]+\frac{1}{m}\left(\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]-\operatorname{Var}\left[\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]\right)\\
+\frac{1}{n\,m}\sum_{x\in\mathds{X}}V_{\text{\tiny HT}}(x)\,p_{\text{\tiny T}}(x)\,(1-p_{\text{\tiny T}}(x))\,\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)\neq 0}}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right]$$
and
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]$$
$$\displaystyle\leq\frac{2V_{so}}{n+1}+\frac{\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]}{m}+\frac{2}{m\left(n+1\right)}\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}\left(X\right)(1-p_{\text{\tiny T}}\left(X\right))}{p_{\text{\tiny R}}\left(X\right)^{2}}V_{\text{\tiny HT}}(X)\right]$$
$$\displaystyle\qquad+\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n/2}\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]\left(1+\frac{4}{m}\right).$$
(12)
A proof is given in Subsection A.3.1.
Note that the term $\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]$ can be replaced by $\operatorname{Var}\left[\tau(X)\right]$ thanks to Assumption 3.
Proposition 2 is the first result to establish the bias and variance of the estimated IPSW in a finite-sample setting. A first observation is that the bias of the (estimated) IPSW is the same as that of the semi-oracle, showing that the finite-sample bias is driven only by the limited trial sample size.
On the other hand, the variance terms differ, due to the additional estimation of the target probability $p_{\text{\tiny T}}$ in the estimated IPSW.
All terms additional to the variance of the semi-oracle $\hat{\tau}_{\pi,\text{\tiny T},n}^{*}$ therefore depend on $m$.
The explicit expression of the variance shows that both $n$ and $m$ must go to infinity for the variance to go to zero.
In this setting, the variance is dominated by the first two terms in inequality (12). If $m\gg n$, the variance is dominated by the first term, which is the dominant term of the semi-oracle variance. Following this idea, Corollary 2 establishes the asymptotic bias and variance of the estimated IPSW in different sample-size regimes.
Corollary 2.
Under the same assumptions as in Proposition 2, the estimated IPSW is asymptotically unbiased when $n$ tends to infinity, that is
$$\lim_{n\to\infty}\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\right]=\tau.$$
Besides, letting $\lim\limits_{n,m\to\infty}m/n=\lambda\in[0,\infty]$, the asymptotic variance of the estimated IPSW satisfies
$$\lim\limits_{n,m\to\infty}\min(n,m)\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]=\min(1,\lambda)\left(\frac{\operatorname{Var}\left[\tau(X)\right]}{\lambda}+V_{so}\right).$$
A proof is detailed in Subsection A.3.2.
As highlighted in Corollary 2, there is not a unique asymptotic variance for the estimated IPSW. Its asymptotic variance depends on how the sample sizes $n$ and $m$ compare to each other asymptotically.
For example,
•
If $m/n\to\infty$ (i.e., $\lambda=\infty$), then the asymptotic variance of the estimated IPSW corresponds to that of the semi-oracle;
•
If we consider an asymptotic regime where the observational sample is about ten times bigger than the trial ($\lambda=10$), then the asymptotic variance is equal to $\lim\limits_{n,m\to\infty}n\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]=\operatorname{Var}\left[\tau(X)\right]/10+V_{so}>V_{so}$;
•
Finally, if $m/n\to 0$ (i.e., $\lambda=0$), then the asymptotic variance of the estimated IPSW is no longer linked to that of the semi-oracle IPSW, and $\lim\limits_{n,m\to\infty}m\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]=\operatorname{Var}\left[\tau(X)\right]$.
This formula can be used to guide data collection. For example, it shows that, beyond some point, gathering information on additional individuals in the target population (which has a cost) yields less gain in precision than gathering slightly more data in the trial (when possible). This phenomenon is illustrated in Figure 6.
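The asymptotic variance formula of Corollary 2 can be turned into a small planning helper; the values chosen below for $\operatorname{Var}[\tau(X)]$ and $V_{so}$ are hypothetical.

```python
def ipsw_asymptotic_var(n, m, var_tau, V_so):
    """Corollary 2: min(n, m) * Var = min(1, lambda) * (Var[tau(X)] / lambda + V_so),
    with lambda = m / n; returned here on the scale of Var itself."""
    lam = m / n
    return min(1.0, lam) * (var_tau / lam + V_so) / min(n, m)

# hypothetical values for Var[tau(X)] and V_so
var_tau, V_so = 4.0, 10.0
base        = ipsw_asymptotic_var(1000, 10000, var_tau, V_so)
more_target = ipsw_asymptotic_var(1000, 20000, var_tau, V_so)  # double m
more_trial  = ipsw_asymptotic_var(2000, 10000, var_tau, V_so)  # double n
print(base, more_target, more_trial)
```

Once $m\gg n$, doubling the target sample barely moves the variance, whereas doubling the trial roughly halves it.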
An upper bound on the risk of the estimated IPSW can be established, based on Proposition 2.
Theorem 3 (Properties of the IPSW).
Under the general setting defined in Subsection 3.1.1, granting Assumptions 1-4, the quadratic risk of the estimated IPSW satisfies,
$$\displaystyle\mathbb{E}\left[\left(\hat{\tau}_{\pi,n,m}-\tau\right)^{2}\right]$$
$$\displaystyle\leq\frac{2V_{so}}{n+1}+\frac{\operatorname{Var}\left[\tau(X)\right]}{m}+\frac{2}{m(n+1)}\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}\left(X\right)(1-p_{\text{\tiny T}}\left(X\right))}{p_{\text{\tiny R}}\left(X\right)^{2}}V_{\text{\tiny HT}}(X)\right]$$
$$\displaystyle\quad+2\left(1-\min_{x}p_{\text{\tiny R}}\left(x\right)\right)^{n}\mathbb{E}_{\text{\tiny T}}[\tau(X)^{2}]\left(1+\frac{2}{m}\right),$$
(13)
which implies its $L^{2}$-consistency as $m,n$ tend to infinity, that is,
$$\hat{\tau}_{\pi,n,m}\stackrel{{\scriptstyle L^{2}}}{{\longrightarrow}}\tau.$$
Proof is detailed in Subsection A.3.3.
The first and fourth terms in inequality (13) correspond to the bound of the semi-oracle estimator (see Theorem 2). As expected, the bound on the risk of the estimated IPSW is larger than that of the semi-oracle. This is due to the cost of estimating $p_{\text{\tiny T}}$ from a finite sample of size $m$. However, when $m\gg n$, the dominant terms in the risks of the estimated and semi-oracle IPSW are the same.
Indeed, consistency of the (estimated) IPSW for continuous covariates has been proven in the literature: e.g., Buchanan et al. (2018) demonstrate consistency and asymptotic normality under a nested design and assuming a parametric selection process, while Colnet et al. (2021) demonstrate consistency assuming uniform convergence of the probability ratio under a cross-fitting procedure and no parametric assumption. Our results are the first to establish the bias and the variance of the estimated IPSW in finite and asymptotic regimes, with an explicit dependence on both sample sizes.
What if the probability to be treated depends on $x$?
In some trials, the probability to receive treatment depends on the strata (e.g., for ethical reasons). If so, all the previous results remain unchanged, replacing $\pi$ by $\pi(x)$; the proofs are actually written with $\pi(x)$, even though the main results are reported with a constant $\pi$ for brevity. In particular, all the covariates used to stratify the propensity to receive treatment in the trial should be used in the IPSW.
4.2 Estimating the probability to be treated in the trial?
So far, we have considered an estimation procedure where $\pi$, the probability to be treated in the trial, is plugged into the formula.
Still, one may want to estimate it to improve precision.
This idea follows the same spirit as what can be done with the Horvitz-Thompson (Definition 1) and the Difference-in-Means (Definition 2) estimators, where the large-sample gain in variance is recalled in Equation (7).
To our knowledge, different versions of IPSW are currently present in the literature, with or without an estimated $\pi$ (see Table 1 in appendix for a non-exhaustive review). In our work, we propose to estimate $\pi$ per stratum, and then adapt the semi-oracle IPSW (Definition 5) and the estimated IPSW (Definition 6).
Definition 7 (Estimation of $\hat{\pi}$ for each stratum).
Under the setting defined in Subsection 3.1.1,
$$\forall x\in\mathds{X},\,\hat{\pi}_{n}(x)=\frac{\sum_{i\in\mathcal{R}}\mathbbm{1}_{X_{i}=x}\mathbbm{1}_{A_{i}=1}}{\sum_{i\in\mathcal{R}}\mathbbm{1}_{X_{i}=x}}.$$
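A minimal Python sketch of Definition 7, assuming arrays of categorical covariates and treatment indicators restricted to the trial sample:

```python
import numpy as np

def estimate_pi(X, A):
    """Per-stratum treatment probability (Definition 7): among trial
    units with X_i = x, the fraction that received treatment."""
    return {x: A[X == x].mean() for x in np.unique(X)}

X = np.array([0, 0, 0, 1, 1])
A = np.array([1, 1, 0, 0, 1])
print(estimate_pi(X, A))   # stratum 0: 2/3 treated, stratum 1: 1/2
```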
Strange as it may seem, estimating $\pi$ per stratum rather than on the whole sample can also be beneficial in RCTs to improve precision. Imbens (2011) and Miratrix et al. (2013) introduce the post-stratification procedure, a technique that uses covariate information to improve precision when estimating the ATE from a single trial. These two works detail why the so-called post-stratification estimator yields a lower variance than the Difference-in-Means – and therefore than a Horvitz-Thompson estimator – as soon as the covariates used for stratification are predictive of the outcome. More precisely, the post-stratification estimator on a single trial is defined as follows.
Definition 8 (Post-stratification - Imbens (2011); Miratrix et al. (2013)).
The post-stratification estimator is denoted $\hat{\tau}_{\text{\tiny PS},n}$ and defined as,
$$\displaystyle\hat{\tau}_{\text{\tiny PS},n}$$
$$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{A_{i}Y_{i}}{\hat{\pi}_{n}(X_{i})}-\frac{(1-A_{i})Y_{i}}{1-\hat{\pi}_{n}(X_{i})}\right),$$
where $\pi$ is estimated according to Definition 7.
The different displays of the post-stratification estimator $\hat{\tau}_{\text{\tiny PS},n}$ in the literature are recalled in Section D. The efficiency gain of an IPSW version with estimated $\pi$ follows the same intuition.
Definition 9 (Semi-oracle IPSW with $\hat{\pi}$).
The semi-oracle IPSW estimator $\hat{\tau}_{\text{\tiny T},n}^{*}$ with estimated propensity scores $\hat{\pi}_{n}$ is defined as
$$\hat{\tau}_{\text{\tiny T},n}^{*}=\frac{1}{n}\sum_{i=1}^{n}\frac{p_{\text{\tiny T}}(X_{i})}{\hat{p}_{\text{\tiny R},n}(X_{i})}Y_{i}\left(\frac{A_{i}}{\hat{\pi}_{n}(X_{i})}-\frac{1-A_{i}}{1-\hat{\pi}_{n}(X_{i})}\right)\,,$$
(14)
with $\hat{p}_{\text{\tiny R},n}(x)$ and $\hat{\pi}_{n}(x)$ defined in Definitions 4 and 7.
Definition 10 (IPSW with $\hat{\pi}$).
The completely-estimated IPSW estimator $\hat{\tau}_{n,m}$ with estimated propensity scores $\hat{\pi}_{n}$ is defined as
$$\hat{\tau}_{n,m}=\frac{1}{n}\sum_{i=1}^{n}\frac{\hat{p}_{\text{\tiny T},m}(X_{i})}{\hat{p}_{\text{\tiny R},n}(X_{i})}Y_{i}\left(\frac{A_{i}}{\hat{\pi}_{n}(X_{i})}-\frac{1-A_{i}}{1-\hat{\pi}_{n}(X_{i})}\right)\,,$$
(15)
where $\hat{p}_{\text{\tiny T},m}(x)$, $\hat{p}_{\text{\tiny R},n}(x)$, and $\hat{\pi}_{n}(x)$ are defined in Definitions 4 and 7.
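Putting the pieces together, a minimal sketch of the completely estimated IPSW of Definition 10 could look as follows (illustrative code, not the authors' implementation; strata where all trial units share the same arm are skipped to avoid division by zero):

```python
import numpy as np

def ipsw_fully_estimated(X_trial, A, Y, X_target):
    """Completely estimated IPSW with per-stratum pi_hat (Definition 10).
    All plug-in quantities are empirical frequencies. Strata absent from
    the trial drop out; strata where all trial units share the same arm
    are skipped to avoid division by zero."""
    n = len(X_trial)
    tau_hat = 0.0
    for x in np.unique(X_trial):
        mask = X_trial == x
        p_T_hat = np.mean(X_target == x)   # target frequency of stratum x
        p_R_hat = mask.mean()              # trial frequency of stratum x
        pi_hat = A[mask].mean()            # per-stratum treatment rate
        if pi_hat in (0.0, 1.0):           # degenerate stratum: skip
            continue
        w = A[mask] / pi_hat - (1 - A[mask]) / (1 - pi_hat)
        tau_hat += p_T_hat / p_R_hat * np.sum(Y[mask] * w) / n
    return tau_hat
```

The skipped degenerate strata roughly correspond to the events whose probabilities appear in the bias formula of Proposition 3.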
Before stating the formal results, and following the spirit of what was done with the variance of the Horvitz-Thompson estimator per stratum (11), we introduce $V_{\text{\tiny DM},n}(x)$:
$$V_{\text{\tiny DM},n}(x)=n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny DM},n}|X=x\right].$$
(16)
The explicit variance of the Difference-in-Means under a Bernoulli design is provided in the Appendix (see Lemma 2) and is not displayed here for conciseness.
Proposition 3 (IPSW’s properties when also estimating $\pi$).
Under the general setting defined in Subsection 3.1.1, granting Assumptions 1-4, the bias of the estimated IPSW with estimated $\hat{\pi}_{n}$ (see Definition 7) is given by
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{n,m}\right]-\tau\,$$
$$\displaystyle=\,\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\,\mathbb{E}\left[Y^{(0)}\mid X=x\right]\biggl{(}1-p_{\text{\tiny R}}(x)\left(1-\pi(x)\right)\biggr{)}^{n}$$
$$\displaystyle\quad-\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\,\mathbb{E}\left[Y^{(1)}\mid X=x\right]\bigl{(}1-p_{\text{\tiny R}}(x)\,\pi(x)\bigr{)}^{n}.$$
Besides, the variance of the estimated IPSW with estimated $\hat{\pi}_{n}$ satisfies, for all $n$
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{n,m}\right]$$
$$\displaystyle=\frac{1}{m}\operatorname{Var}\left[\tau(X)-C_{n}(X)\right]+\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[C_{n}(X)|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\quad+\sum_{x\in\mathbb{X}}\left(\frac{p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))}{m}+p_{\text{\tiny T}}^{2}(x)\right)\mathbb{E}\left[\frac{\mathds{1}_{Z_{n}(x)>0}}{Z_{n}(x)}\right]V_{\text{\tiny DM},n}(x),$$
where
$$\displaystyle C_{n}(X)=\mathbb{E}\left[Y^{(1)}\mid X\right](1-\pi(X))^{Z_{n}(X)}-\mathbb{E}\left[Y^{(0)}\mid X\right]\pi(X)^{Z_{n}(X)}.$$
Furthermore,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{n,m}\right]\leq$$
$$\displaystyle\frac{2\,\tilde{V}_{so}}{n+1}+\frac{\operatorname{Var}\left[\tau(X)\right]}{m}+\frac{2}{(n+1)m}\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}\left(X\right)(1-p_{\text{\tiny T}}\left(X\right))}{p_{\text{\tiny R}}\left(X\right)^{2}}V_{\text{\tiny DM}}(X)\right]$$
$$\displaystyle\quad+2\left(1+\frac{3}{m}\right)\left(1-\min_{x}\left((1-\tilde{\pi}(x)^{2})p_{\text{\tiny R}}(x)\right)\right)^{n/2}\mathbb{E}\left[(Y^{(1)})^{2}+(Y^{(0)})^{2}\right],$$
where
$$\displaystyle\tilde{\pi}(x)=\max\left(\pi(x),1-\pi(x)\right)\quad\text{and}\quad\tilde{V}_{so}:=\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}\left(X\right)}{p_{\text{\tiny R}}\left(X\right)}\right)^{2}V_{\text{\tiny DM},n}(X)\right].$$
Proof is detailed in Subsection A.4.1. Note that the bias takes a simpler form in the most usual case where $\pi(x)=1/2$:
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{n,m}\right]-\tau$$
$$\displaystyle=-\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\tau(x)\left(1-\frac{p_{\text{\tiny R}}(x)}{2}\right)^{n}.$$
In this case, the bias of the estimated IPSW with estimated $\hat{\pi}_{n}$ is larger (in absolute value) than that of all three previous IPSW estimators (completely oracle, semi-oracle, and estimated with oracle $\pi$), but still decreases exponentially with $n$. Another difference is that, in general, the sign and magnitude of the bias no longer depend only on the sign and magnitude of $\tau(x)$, but also on those of $\mathbb{E}[Y^{(0)}\mid X=x]$ and $\mathbb{E}[Y^{(1)}\mid X=x]$.
The bound on the variance of $\hat{\tau}_{n,m}$ is very close to the one of $\hat{\tau}_{\pi,n,m}$, and in particular for any fixed $m$,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{n,m}\right]$$
$$\displaystyle\leq\frac{2\,\tilde{V}_{so}}{n+1}+\frac{\operatorname{Var}\left[\tau(X)\right]}{m}+\frac{2}{(n+1)m}\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}\left(X\right)(1-p_{\text{\tiny T}}\left(X\right))}{p_{\text{\tiny R}}\left(X\right)^{2}}V_{\text{\tiny DM}}(X)\right]+o\left(\frac{1}{n}\right),$$
where the main difference comes from $\tilde{V}_{so}$, which contains $V_{\text{\tiny DM},n}(X)$ rather than $V_{\text{\tiny HT}}(X)$. Combining (6) and (7) yields
$$\displaystyle n\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM},n}\right]\leq V_{\text{\tiny HT}}(x)+\mathcal{O}\left(n^{-1/2}\right),$$
which allows us to conclude that, for all $n$ large enough, the bound on the variance of $\hat{\tau}_{n,m}$ is tighter than the bound on the variance of $\hat{\tau}_{\pi,n,m}$. This can also be observed in the large-sample variance.
Corollary 3.
Under the same assumptions as in Proposition 3, the completely estimated IPSW is asymptotically unbiased when $n$ tends to infinity, that is
$$\lim_{n\to\infty}\mathbb{E}\left[\hat{\tau}_{n,m}\right]=\tau.$$
Besides, letting $\lim\limits_{n,m\to\infty}m/n=\lambda\in[0,\infty]$, the asymptotic variance of completely estimated IPSW satisfies
$$\displaystyle\lim\limits_{n,m\to\infty}\min(n,m)\operatorname{Var}\left[\hat{\tau}_{n,m}\right]=\min(1,\lambda)\left(\frac{\operatorname{Var}\left[\tau(X)\right]}{\lambda}+\tilde{V}_{so,\infty}\right),$$
where
$$\displaystyle\tilde{V}_{so,\infty}:=$$
$$\displaystyle\,\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}\left(X\right)}{p_{\text{\tiny R}}\left(X\right)}\right)^{2}V_{\text{\tiny DM},\infty}(X)\right],$$
and
$$\displaystyle V_{\text{\tiny DM},\infty}(x):=$$
$$\displaystyle\,\frac{\operatorname{Var}_{\text{\tiny R}}\left[Y^{(1)}\mid X=x\right]}{\pi}+\frac{\operatorname{Var}_{\text{\tiny R}}\left[Y^{(0)}\mid X=x\right]}{1-\pi}.$$
Proof is detailed in Subsection A.4.2. Because $V_{\text{\tiny DM},\infty}(x)\leq V_{\text{\tiny HT}}(x)$ for all $x\in\mathds{X}$, we have $\tilde{V}_{so,\infty}\leq V_{so}$, so that the large-sample variances of the semi-oracle and completely estimated IPSW are smaller than with an oracle $\pi$, regardless of the regime at which $n$ and $m$ tend to infinity.
Similarly to the result on $\hat{\tau}_{\pi,n,m}$, an upper bound on the risk of the completely estimated IPSW can be established, based on Proposition 3.
Theorem 4 (Properties of the IPSW).
Under the general setting defined in Subsection 3.1.1, granting Assumptions 1-4, the quadratic risk of the completely estimated IPSW with estimated $\hat{\pi}$ satisfies,
$$\displaystyle\mathbb{E}\left[\left(\hat{\tau}_{n,m}-\tau\right)^{2}\right]$$
$$\displaystyle\leq\frac{2\tilde{V}_{so}}{n+1}+\frac{\operatorname{Var}\left[\tau(X)\right]}{m}+\frac{2}{m(n+1)}\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}\left(X\right)(1-p_{\text{\tiny T}}\left(X\right))}{p_{\text{\tiny R}}\left(X\right)^{2}}V_{\text{\tiny DM}}(X)\right]$$
$$\displaystyle\quad+2\left(2+\frac{3}{m}\right)\left(1-\min_{x}\left((1-\tilde{\pi}(x))p_{\text{\tiny R}}(x)\right)\right)^{n/2}\mathbb{E}\left[(Y^{(1)})^{2}+(Y^{(0)})^{2}\right].$$
(17)
Consequently, the estimator $\hat{\tau}_{n,m}$ is $L^{2}$-consistent as $m,n$ tend to infinity, that is,
$$\hat{\tau}_{n,m}\stackrel{{\scriptstyle L^{2}}}{{\longrightarrow}}\tau.$$
Proof is detailed in Subsection A.4.3.
By the same arguments as for the bound on the variance, it can be shown that, for a reasonably large $n$, the bound on the risk of $\hat{\tau}_{n,m}$ is tighter than that of $\hat{\tau}_{\pi,n,m}$.
All the previous results provide theoretical guidance explaining why an estimator that also estimates $\pi$ per stratum should be preferred in practice, at least for a reasonable trial sample size $n$. To our knowledge, no previous work explicitly states that estimating $\pi$ in the IPSW should be preferred, even though Dahabreh et al. (2020) use a logistic regression to estimate the propensity to receive treatment in the trial.
4.3 Extended adjustment set: when using extra covariates
In this section, we detail the impact on the performance of IPSW of adding covariates that are not necessary for adjustment – for example, covariates that are only shifted or only treatment effect modifiers.
Indeed, in the literature, one natural approach is to adjust on all shifted covariates, also named the sampling set (Cole and Stuart, 2010; Tipton, 2013).
Another possible adjustment set is the heterogeneity set, comprising all the treatment effect modifiers (Hartman, 2021), even though knowing which covariates are treatment effect modifiers is harder.
As mentioned in the related work (Subsection 2.2), there is a substantial causal-inference literature on optimal adjustment sets for precision, but to our knowledge the topic has not yet been tackled when it comes to efficiency in generalization. Egami and Hartman (2021) extensively discuss the usage of these two sets for identification, but do not study their impact on the asymptotic variance.
In this section, the theoretical results hold for a specific regime, where the target sample is bigger than the trial sample, that is, $m\gg n$. In other words, this situation is equivalent to considering the semi-oracle IPSW with estimated $\pi$ (Definition 9).
Formalization
Consider that the user has at their disposal an external set of baseline categorical covariates denoted $V$. We assume that Assumptions 3 and 4 are preserved when adding $V$ to the adjustment set $X$ previously considered333Note that, while preserving transportability is rather straightforward since $V$ is a baseline covariate too (e.g., no collider bias), the support inclusion assumption can be more challenging when adding too many covariates (see D’Amour et al. (2017) for a discussion).. As mentioned above, this external covariate set can be of two different natures.
Definition 11 ($V$ is not a treatment effect modifier).
$V$ does not modulate the treatment effect, that is
$$\forall v\in\mathds{V},\;\forall s\in\{T,R\},\quad\mathbb{P}_{\text{s}}(Y^{(1)}-Y^{(0)}\mid X=x,V=v)=\mathbb{P}_{\text{s}}(Y^{(1)}-Y^{(0)}\mid X=x).$$
Definition 12 ($V$ is not shifted).
$V$ is not shifted, that is
$$\forall v\in\mathds{V},\quad p_{\text{\tiny T}}(v)=p_{\text{\tiny R}}(v).$$
To distinguish the estimator using the set $X$ from the one using the extended set $X,V$, we denote by $\hat{\tau}(X)$ and $\hat{\tau}(X,V)$ the two estimation strategies. One can show that adding covariates $V$ that are only shifted leads to a loss of precision, when the set $V$ is independent of the set $X$.
Corollary 4 (Adding shifted and independent covariates).
Consider the semi-oracle IPSW estimator $\hat{\tau}_{\text{\tiny T},n}^{*}$ (Definition 9), and a set of additional shifted covariates $V$ (Definition 11) independent of $X$, which are not treatment effect modifiers. Then,
$$\lim_{n\to\infty}n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny T},n}^{*}(X,V)\right]=\left(\sum_{v\in\mathcal{V}}\frac{p_{\text{\tiny T}}(v)^{2}}{p_{\text{\tiny R}}(v)}\right)\lim_{n\to\infty}n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny T},n}^{*}(X)\right].$$
Proof is detailed in Subsection B.1.
This result states that the asymptotic variance of the semi-oracle estimator is always larger if an additional independent shifted covariate set $V$ is included in the adjustment. Moreover, the stronger the shift, the larger the variance inflation.
Note that this specific rule was recovered in the toy example, where the solid line (corresponding to Corollary 4) matches the empirical dots in Figure 3(a).
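The inflation factor in Corollary 4 is straightforward to compute numerically. The sketch below (Python; the distributions of $V$ are purely illustrative, not taken from the paper's example) evaluates $\sum_{v}p_{\text{\tiny T}}(v)^{2}/p_{\text{\tiny R}}(v)$ and shows that it equals $1$ without shift and grows with the shift:

```python
import numpy as np

def inflation_factor(p_T, p_R):
    """Variance inflation of Corollary 4 when adjusting on an
    independent shifted covariate V: sum_v p_T(v)^2 / p_R(v)."""
    p_T, p_R = np.asarray(p_T, dtype=float), np.asarray(p_R, dtype=float)
    return float(np.sum(p_T ** 2 / p_R))

# No shift: no precision is lost (the factor is exactly 1).
print(inflation_factor([0.5, 0.5], [0.5, 0.5]))  # 1.0

# The stronger the shift between target and trial, the larger the inflation.
print(inflation_factor([0.7, 0.3], [0.5, 0.5]))  # 1.16
print(inflation_factor([0.9, 0.1], [0.5, 0.5]))  # 1.64
```

By the Cauchy–Schwarz inequality the factor is always at least $(\sum_{v}p_{\text{\tiny T}}(v))^{2}/\sum_{v}p_{\text{\tiny R}}(v)=1$, consistent with the loss of precision stated in the corollary.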
On the contrary, adding an additional treatment effect modifier covariate set leads to a gain in precision.
Corollary 5 (Adding non-shifted treatment effect modifiers).
Consider the semi-oracle IPSW estimator $\hat{\tau}_{\text{\tiny T},n}^{*}$ (Definition 9), and an additional non-shifted (Definition 12) treatment effect modifier set $V$, independent of $X$. Then,
$$\displaystyle\lim_{n\to\infty}n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny T},n}^{*}(X,V)\right]\,=\,\lim_{n\to\infty}n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny T},n}^{*}(X)\right]-\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}(X)}{p_{\text{\tiny R}}(X)}\operatorname{Var}\left[\tau(X,V)\mid X\right]\right].$$
In particular,
$$\displaystyle\lim_{n\to\infty}n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny T},n}^{*}(X,V)\right]\leq\lim_{n\to\infty}n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny T},n}^{*}(X)\right].$$
Proof is detailed in Subsection B.2.
This result is in a similar spirit to Rotnitzky and Smucler (2020), due to the comparison of two asymptotic variances, even though the context and the theoretical tools are different.
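The variance-reduction term of Corollary 5 can likewise be evaluated in closed form on a toy discrete model. The sketch below (Python; all probabilities and the effect function $\tau$ are illustrative choices, not the paper's) computes $\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}(X)}{p_{\text{\tiny R}}(X)}\operatorname{Var}\left[\tau(X,V)\mid X\right]\right]$, which is strictly positive as soon as $V$ truly modulates the effect:

```python
# Toy discrete model: X and V are binary and independent, and V is a
# non-shifted treatment effect modifier (illustrative values only).
p_T_X = {0: 0.3, 1: 0.7}    # shifted adjustment covariate X
p_R_X = {0: 0.5, 1: 0.5}
p_V = {0: 0.5, 1: 0.5}      # non-shifted: same law in both populations
tau = {(x, v): 1.0 + 0.5 * x + 2.0 * v for x in (0, 1) for v in (0, 1)}

# Variance-reduction term of Corollary 5:
#   E_R[(p_T(X) / p_R(X)) * Var[tau(X, V) | X]]
reduction = 0.0
for x in (0, 1):
    mean_x = sum(p_V[v] * tau[(x, v)] for v in (0, 1))  # E[tau(X,V) | X=x]
    var_x = sum(p_V[v] * (tau[(x, v)] - mean_x) ** 2 for v in (0, 1))
    reduction += p_R_X[x] * (p_T_X[x] / p_R_X[x]) * var_x

print(reduction)  # > 0 whenever V modulates the effect (about 1.0 here)
```

Removing the $2.0\,v$ term from $\tau$ makes the reduction vanish: a covariate that does not modulate the effect brings no asymptotic gain.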
5 Synthetic and semi-synthetic simulations
In this section, an additional analysis based on the toy example is provided to illustrate the different asymptotic regimes from Section 4. In addition, results are also illustrated on a semi-synthetic simulation aiming to mimic a medical scenario. The code to reproduce the simulations and the figures is available on GitHub (BenedicteColnet/IPSW-categorical).
5.1 Synthetic: additional experiment from the toy example
While most of the results are illustrated at the beginning of the article through the toy example, here we investigate empirically and more thoroughly the different asymptotic regimes of the IPSW and its variants.
In particular, we complete Figure 2(c), which highlights the phenomenon of different asymptotic regimes, with a complete visualization of risks and variances that illustrates the theoretical results more precisely, in particular Corollary 2. The quadratic risk is depicted in Figure 6(b), while the variance, via $\min(n,m)\operatorname{Var}\left[\hat{\tau}_{n,m}\right]$, is displayed in Figure 6(a). In both figures, different estimators (oracle or not) are considered in different regimes for $m$, as $n$ grows to infinity.
In particular, this simulation confirms that
(i) all IPSW variants are consistent, even though their convergence speeds depend on the regime (Figure 6(b)),
(ii) the completely oracle IPSW has a bigger variance than the semi-oracle IPSW (Figure 6(a)),
(iii) the asymptotic variance depends on the asymptotic regime (Figure 6(a)),
(iv) the completely estimated IPSW reaches the variance of the semi-oracle one if the target population sample is bigger than the trial (Figure 6(a)).
5.2 Semi-synthetic
In the semi-synthetic simulation, the data are taken from an application in critical care medicine, and only the outcome generative model is simulated, so that the covariate distribution, and in particular the distribution shift between populations, is inherited from a real situation.
5.2.1 Design
Two data sets are used to generate the two sources:
1.
A randomized controlled trial (RCT), called CRASH-3 (Dewan et al., 2012), aiming to measure the effect of Tranexamic Acid (TXA) in preventing death from Traumatic Brain Injury (TBI). A total of 175 hospitals in 29 different countries participated in the RCT, in which adults with TBI suffering from intracranial bleeding were randomly administered TXA (CRASH-3, 2019). The inclusion criteria of the trial are patients with a Glasgow Coma Scale (GCS) score of 12 or lower or any intracranial bleeding on CT scan, and no major extracranial bleeding. (The GCS is a neurological scale assessing a person’s consciousness; the lower the score, the more severe the trauma.)
2.
An observational cohort, the Traumabase, comprising 23 French trauma centers, which collects detailed clinical data from the scene of the accident to the release from the hospital. The resulting database comprises 23,000 trauma admissions to date and is continually updated, representing an almost-exhaustive record of individuals treated for trauma in France.
These two data sources are turned into two source populations with six covariates, so that the distribution structure and, in particular, the distributional shift mimic a real-world situation. The six covariates kept in common are: GCS (categorical), gender (categorical), pupil reactivity (categorical), age (continuous), systolic blood pressure (continuous), and time-to-treatment (TTT) (continuous). The continuous covariates are then turned into categories. Additional details about data preparation are available in Appendix (see Section C). In this semi-synthetic simulation, only the outcome model is completely synthetic, and follows
$$\small Y:=f(\texttt{GCS},\texttt{Gender})+A\,\tau(\texttt{TTT},\texttt{Blood Pressure})+\epsilon_{\texttt{TTT}},$$
(18)
where $f$ and $\tau$ are two functions of the covariates, and $\epsilon_{\texttt{TTT}}$ is a Gaussian noise such that $\mathbb{E}[\epsilon_{\texttt{TTT}}\mid X]=0$, but with heteroscedasticity along the covariate TTT: the higher the time-to-treatment, the higher $\operatorname{Var}\left[\epsilon_{\texttt{TTT}}\mid\text{TTT}\right]$, and thus the noise on $Y$ (see Section C for the detailed generative function).
This outcome model is such that only time-to-treatment (TTT) and blood pressure are effect modifiers, while the other covariates only affect the baseline value or have no impact.
Each time a simulation is conducted, observations are sampled from the two populations with replacement, and the outcome is generated following equation (18). The trial is such that $\pi=0.5$.
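As an illustration, a minimal sketch of such a heteroscedastic generative model follows (Python; the functional forms of $f$ and $\tau$, the covariate encodings, and the noise scale are hypothetical stand-ins, not the ones used in Section C):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Categorical covariates (illustrative encodings).
gcs = rng.integers(3, 13, size=n)        # Glasgow score category
gender = rng.integers(0, 2, size=n)
ttt = rng.integers(0, 4, size=n)         # time-to-treatment category
blood = rng.integers(0, 3, size=n)       # blood pressure category
A = rng.binomial(1, 0.5, size=n)         # randomized treatment, pi = 0.5

f = 10.0 + gcs - 2.0 * gender            # baseline: no effect modification
tau = 5.0 - ttt + 0.5 * blood            # effect modifiers: TTT and blood only
eps = rng.normal(0.0, 0.5 + 0.5 * ttt)   # heteroscedastic noise along TTT
Y = f + A * tau + eps                    # outcome as in equation (18)

for t in range(4):                       # noise level grows with TTT
    print(t, round(float(eps[ttt == t].std()), 2))
```

The loop makes the heteroscedasticity visible: the empirical standard deviation of $\epsilon_{\texttt{TTT}}$ increases with the time-to-treatment category, while its conditional mean stays at zero.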
5.2.2 Results
Minimal adjustment set is sufficient to generalize
The minimal adjustment set to generalize the trial results consists of the time-to-treatment (TTT) and the systolic blood pressure (blood).
Using only these two covariates, the simulations illustrate how the re-weighting procedure corrects for the population shift between the trial and the target population, as presented in Figure 8 ($1{,}000$ repetitions).
Estimating $\pi$ lowers the variance
Simulations also illustrate that estimating $\pi$ (Definition 10), compared to not estimating it (Definition 6), lowers the variance, as shown in Figure 8. This is expected from Corollary 3.
The generalized (or re-weighted) estimate is not necessarily noisier than the trial’s estimate
Note that the IPSW, with estimation of $\pi$ or not, has a variance similar to that of the estimates coming from the RCT only (Horvitz-Thompson or difference-in-means). This is due to the presence of heteroscedasticity in the generative model (see equation (18)). Indeed, we would like to emphasize that re-weighting the trial does not necessarily lead to wider confidence intervals.
This somewhat challenges a common and intuitive idea in the literature, stating that a re-weighted trial always has a larger variance than the trial itself (Gatsonis and Sally, 2017; Ling et al., 2022).
This intuition comes from the multiplication by weights that can take large values (in particular if, for some $x$, $p_{\text{\tiny R}}(x)\ll p_{\text{\tiny T}}(x)$), and the idea is indeed valid as soon as the outcome noise is homoscedastic.
However, the asymptotic variance of the semi-oracle IPSW from Corollary 1 shows that this intuitive and reasonable idea is not necessarily true under heteroscedasticity, which occurs if some categories for which potential outcomes have higher uncertainty (larger noise) are more represented in the trial than in the target population:
$$V_{so}=\sum_{x\in\mathcal{X}}\underbrace{\frac{p_{\text{\tiny T}}^{2}(x)}{p_{\text{\tiny R}}(x)}}_{\text{Weights}}\,\underbrace{\left(\frac{\operatorname{Var}\left[Y^{(1)}\mid X=x\right]}{\pi}+\frac{\operatorname{Var}\left[Y^{(0)}\mid X=x\right]}{1-\pi}\right)}_{\text{Can be small for some }x\text{ with high weights }\frac{p_{\text{\tiny T}}^{2}(x)}{p_{\text{\tiny R}}(x)}}$$
In particular, in this simulation, the variance of the IPSW estimate can be smaller than that of the treatment effect estimator on the trial because individuals treated earlier have less uncertainty in the response than individuals with high TTT (encoded in $\epsilon_{\texttt{TTT}}$), and the simulation is made such that these low-noise individuals are more present in the target population than in the trial.
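This mechanism can be checked numerically from the expression of $V_{so}$ above. The sketch below (Python; the stratum probabilities and conditional variances are illustrative choices) compares a homoscedastic configuration with one where the up-weighted stratum has low outcome noise:

```python
import numpy as np

def V_so(p_T, p_R, var1, var0, pi=0.5):
    """Asymptotic variance of the semi-oracle IPSW, as displayed above:
    sum_x (p_T(x)^2 / p_R(x)) * (Var[Y1|x]/pi + Var[Y0|x]/(1-pi))."""
    p_T, p_R = np.asarray(p_T, dtype=float), np.asarray(p_R, dtype=float)
    var1, var0 = np.asarray(var1, dtype=float), np.asarray(var0, dtype=float)
    return float(np.sum(p_T ** 2 / p_R * (var1 / pi + var0 / (1 - pi))))

p_T, p_R = [0.8, 0.2], [0.2, 0.8]  # stratum 0 is strongly up-weighted

# Homoscedastic noise: large weights inflate the variance.
homo = V_so(p_T, p_R, var1=[1.0, 1.0], var0=[1.0, 1.0])

# Heteroscedastic noise: the up-weighted stratum (over-represented in the
# target population) happens to have low outcome noise.
hetero = V_so(p_T, p_R, var1=[0.1, 1.0], var0=[0.1, 1.0])

print(homo, hetero)  # the re-weighted variance can be much smaller
```

Under homoscedasticity the high weights dominate, matching the usual intuition; once the heavily weighted stratum is also the low-noise one, $V_{so}$ can fall below its homoscedastic counterpart, exactly the situation engineered in the semi-synthetic simulation.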
A shifted covariate that is not a treatment effect modifier increases the variance: the example of the Glasgow score (GCS)
It is possible to illustrate the results from Section 4.3 with the semi-synthetic simulation.
For example, the Glasgow score (GCS) can be added to the minimal adjustment set previously used (see Figure 8); this leads to a loss of precision because this covariate is relatively strongly shifted between the two data sets and is not a treatment effect modifier (even though, in the simulation, it has an impact on the outcome).
The increase in variance can be observed in Figure 9, where the green boxplot on the left represents this situation.
While a non-shifted treatment effect modifier lowers the variance
To illustrate a gain in precision due to the addition of a non-shifted treatment effect modifier, it was not possible to use the natural covariates from the two original data sets, as a distributional shift was present in all covariates. To model such a situation, we added to the data generative model a categorical covariate X_sup (5 levels), independent of all other covariates and without shift:
$$\small Y:=f(\texttt{GCS},\texttt{Gender})+A\,\tau(\texttt{TTT},\texttt{Blood Pressure},\texttt{X\_sup})+\epsilon_{\texttt{TTT}}.$$
(19)
Doing so, it is possible to illustrate that adding X_sup to the adjustment set lowers the variance; Figure 9 presents this situation with the purple boxplot on the right.
6 Conclusion and future work
In this work, we establish finite-sample and asymptotic results on different versions of the so-called Inverse Propensity Sampling Weighting estimator, when the adjustment set consists of categorical covariates. We give explicit expressions for the biases and variances of all estimates, together with their quadratic risks. Our detailed analysis allows us to compare these different estimates in different finite-sample regimes.
Indeed, to the best of our knowledge, our work is the first to study the impact of finite trial and observational data sets on IPSW performance in the context of generalization, by providing rates of convergence for several IPSW estimates.
By doing so, we link these results with previous results in epidemiology where one data source was considered infinite, and also explain how certain observations can be seen through the lens of seminal work in causal inference (efficient estimation with IPW).
Which covariate to include?
This work also reveals that care should be taken when selecting the covariates to generalize.
From the applied literature, we have noticed that practitioners usually select almost all available covariates to build the weights, encouraged by the fear of missing an important shifted treatment effect modifier.
We show that inclusion of many covariates comes with the risk of adjusting on shifted covariates that are not treatment effect modifiers, which can drastically damage the precision.
On the contrary, even though adding some non-shifted covariates may sound counterintuitive, we show that this practice improves asymptotic precision, as soon as the additional non-shifted covariate set modulates the treatment effect.
Still, adding too many covariates endangers overlap and therefore can lead to finite sample bias.
In light of these theoretical results, we believe that physicians and epidemiologists have an important role to play in selecting a limited number of covariates when generalizing trial’s findings.
Future work
Studying only categorical covariates is probably the main restriction of this work, as data can be hybrid and composed of continuous and categorical information.
However, even when facing a hybrid set of covariates (continuous and categorical), the user can still create bins for the continuous covariates.
Even if such data processing is not necessarily recommended, for a limited number of covariates this should allow extending the analysis.
Indeed, binning covariates leads to within-stratum confounding, that is, residual confounding due to coarse bins, and therefore to an asymptotic bias due to factors that are poorly controlled for.
To avoid within-stratum residual confounding, it is desirable to create more bins and split the data into more strata, but stratifying too finely with a finite sample may lead to (i) a variance inflation and (ii) the invalidity of the support inclusion assumption.
Indeed, the performance of the IPSW in a high-dimensional setting can be limited. For example, if all input variables are binary, the finite-sample bias and variance can be rewritten as a function of $n/2^{d}$ (where $d$ is the number of input variables) and can thus spin out of control if the sample sizes are too small compared to the dimension of the problem.
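A quick sketch (Python; the sample size and dimensions are illustrative) shows how fast the fraction of observed strata collapses as $d$ grows with $n$ fixed, which is what endangers the support inclusion assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000                                  # fixed trial sample size

coverage = {}
for d in (5, 10, 15):                      # number of binary covariates
    X = rng.integers(0, 2, size=(n, d))    # uniform binary covariates
    observed = len({tuple(row) for row in X})
    coverage[d] = observed / 2 ** d        # fraction of the 2^d strata seen
    print(d, 2 ** d, round(coverage[d], 3))
```

With $d=5$ essentially all $32$ strata receive data, whereas with $d=15$ only a few percent of the $2^{15}$ strata are ever observed, so most target-population strata have no trial counterpart to re-weight.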
Future work should investigate how our conclusions on the different asymptotic regimes and on the impact of covariate selection on the variance can be extended to settings with mixed-type covariates (e.g., a smoother version of the IPSW with density ratio estimation).
In practice, the limitation due to categorical covariates is balanced by the fact that, within the medical field, clinical indicators and covariates are often scores and categories.
For example, Berkowitz et al. (2018) apply the IPSW to generalize the effect of blood pressure control relying on many categorical covariates, such as health insurance status (insured, uninsured), tobacco smoking status (never, current, former), and so on.
When facing continuous covariates in practice, and given the current theoretical understanding of the different generalization estimators, this categorical version of the IPSW remains of interest.
A solution lies at the crossroads between identification bias (due to imprecise bins) and variance inflation or finite-sample bias (due to numerous bins).
Quantifying this tradeoff in specific settings would help practitioners by providing clear guidelines.
Acknowledgment
Part of this work was done while Bénédicte Colnet was visiting the Simons Institute for the Theory of Computing.
Part of this work was also done during a visiting scholar period at Stanford’s Statistics Department.
We would like to thank the department for its welcome and for all the helpful and inspiring discussions, in particular with Prof. Trevor Hastie.
Funding and conflict of interest
The authors are all funded by their respective employers (Inria or École polytechnique) and declare no conflict of interest.
References
Arnould et al. (2021)
Arnould, L., C. Boyer, and E. Scornet (2021, 18–24 Jul).
Analyzing the tree-layer structure of deep forests.
In M. Meila and T. Zhang (Eds.), Proceedings of the 38th
International Conference on Machine Learning, Volume 139 of Proceedings
of Machine Learning Research, pp. 342–350. PMLR.
Athey et al. (2020)
Athey, S., R. Chetty, and G. Imbens (2020).
Combining experimental and observational data to estimate treatment
effects on long term outcomes.
arXiv preprint arXiv:2006.09676.
Bareinboim and Pearl (2012a)
Bareinboim, E. and J. Pearl (2012a).
Controlling selection bias in causal inference.
In N. D. Lawrence and M. Girolami (Eds.), Proceedings of the
Fifteenth International Conference on Artificial Intelligence and
Statistics, Volume 22 of Proceedings of Machine Learning Research, La
Palma, Canary Islands, pp. 100–108. PMLR.
Bareinboim and Pearl (2012b)
Bareinboim, E. and J. Pearl (2012b).
Transportability of causal effects: Completeness results.
In Proceedings of the Twenty-Sixth AAAI Conference on Artificial
Intelligence, AAAI’12, pp. 698–704. AAAI Press.
Berkowitz et al. (2018)
Berkowitz, S. A., J. B. Sussman, D. E. Jonas, and S. Basu (2018).
Generalizing intensive blood pressure treatment to adults with
diabetes mellitus.
Journal of the American College of Cardiology 72(11),
1214–1223.
Biau (2012)
Biau, G. (2012).
Analysis of a random forests model.
Journal of Machine Learning Research 13(38),
1063–1095.
Brookhart et al. (2006)
Brookhart, M. A., S. Schneeweiss, K. J. Rothman, R. J. Glynn, J. Avorn, and
T. Stürmer (2006).
Variable selection for propensity score models.
American journal of epidemiology 163(12), 1149–1156.
Buchanan et al. (2018)
Buchanan, A. L., M. G. Hudgens, S. R. Cole, K. R. Mollan, P. E. Sax, E. S.
Daar, A. A. Adimora, J. J. Eron, and M. J. Mugavero (2018).
Generalizing evidence from randomized trials using inverse
probability of sampling weights.
Journal of the Royal Statistical Society: Series A (Statistics
in Society) 181, 1193–1209.
Chattopadhyay et al. (2022)
Chattopadhyay, A., E. R. Cohn, and J. R. Zubizarreta (2022).
One-step weighting to generalize and transport treatment effect
estimates to a target population.
Cinelli et al. (2022)
Cinelli, C., A. Forney, and J. Pearl (2022).
A crash course in good and bad controls.
Sociological Methods & Research.
Cole and Stuart (2010)
Cole, S. R. and E. A. Stuart (2010).
Generalizing evidence from randomized clinical trials to target populations: the ACTG 320 trial.
American Journal of Epidemiology 172, 107–115.
Colnet et al. (2021)
Colnet, B., J. Josse, E. Scornet, and G. Varoquaux (2021).
Causal effect on a target population: a sensitivity analysis to
handle missing covariates.
Accepted in Journal of Causal Inference.
Colnet et al. (2020)
Colnet, B., I. Mayer, G. Chen, A. Dieng, R. Li, G. Varoquaux, J.-P. Vert,
J. Josse, and S. Yang (2020).
Causal inference methods for combining randomized trials and
observational studies: a review.
CRASH-3 (2019)
CRASH-3 (2019).
Effects of tranexamic acid on death, disability, vascular occlusive
events and other morbidities in patients with acute traumatic brain injury
(CRASH-3): a randomised, placebo-controlled trial.
The Lancet 394(10210), 1713–1723.
Dahabreh et al. (2020)
Dahabreh, I. J., S. E. Robertson, J. A. Steingrimsson, E. A. Stuart, and M. A.
Hernán (2020).
Extending inferences from a randomized trial to a new target
population.
Statistics in Medicine 39(14), 1999–2014.
Dahabreh et al. (2019)
Dahabreh, I. J., J. M. Robins, S. J. Haneuse, and M. A. Hernán (2019).
Generalizing causal inferences from randomized trials: counterfactual
and graphical identification.
arXiv preprint arXiv:1906.10792.
D’Amour et al. (2017)
D’Amour, A., P. Ding, A. Feller, L. Lei, and J. Sekhon (2017, 11).
Overlap in observational studies with high-dimensional covariates.
Journal of Econometrics 221.
Deaton and Cartwright (2018)
Deaton, A. and N. Cartwright (2018).
Understanding and misunderstanding randomized controlled trials.
Social Science & Medicine 210, 2–21.
Randomized Controlled Trials and Evidence-based Policy: A
Multidisciplinary Dialogue.
Degtiar and Rose (2022)
Degtiar, I. and S. Rose (2022).
A review of generalizability and transportability.
Annual Review of Statistics and Its Application.
Dewan et al. (2012)
Dewan, Y., E. Komolafe, J. Mejìa-Mantilla, P. Perel, I. Roberts, and
H. Shakur-Still (2012, 06).
CRASH-3: Tranexamic acid for the treatment of significant traumatic
brain injury: study protocol for an international randomized, double-blind,
placebo-controlled trial.
Trials 13, 87.
Efron and Hinkley (1978)
Efron, B. and D. V. Hinkley (1978).
Assessing the accuracy of the maximum likelihood estimator: Observed
versus expected fisher information.
Biometrika 65(3), 457–482.
Egami and
Hartman (2021)
Egami, N. and E. Hartman (2021, 08).
Covariate selection for generalizing experimental results:
Application to a large‐scale development program in uganda*.
Journal of the Royal Statistical Society: Series A (Statistics
in Society) 184.
Gatsonis and
Sally (2017)
Gatsonis, C. and M. C. Sally (2017).
Methods in Comparative Effectiveness Research, pp. 177–199.
Chapman & Hall.
Hahn (1998)
Hahn, J. (1998).
On the role of the propensity score in efficient semiparametric
estimation of average treatment effects.
Econometrica 66(2), 315–332.
Hahn (2004)
Hahn, J. (2004, 02).
Functional restriction and efficiency in causal inference.
The Review of Economics and Statistics 86, 73–76.
Harshaw
et al. (2021)
Harshaw, C., J. A. Middleton, and F. Sävje (2021).
Optimized variance estimation under interference and complex
experimental designs.
Hartman (2021)
Hartman, E. (2021).
Generalizing Experimental Results, pp. 385–410.
Cambridge University Press.
Hartman
et al. (2015)
Hartman, E., R. Grieve, R. Ramsahai, and J. S. Sekhon (2015).
From sample average treatment effect to population average treatment
effect on the treated: combining experimental with observational studies to
estimate population treatment effects.
Journal of the Royal Statistical Society: Series A (Statistics
in Society) 178(3), 757–778.
Henckel
et al. (2022)
Henckel, L., E. Perković, and M. H. Maathuis (2022, April).
Graphical criteria for efficient total effect estimation via
adjustment in causal linear models.
Journal of the Royal Statistical Society Series B 84(2), 579–599.
Hernan (2020)
Hernan, MA Robins, J. (2020).
Causal Inference: What If.
Boca Raton: Chapman & Hall/CRC.
Hirano
et al. (2003)
Hirano, K., G. Imbens, and G. Ridder (2003, 02).
Efficient estimation of average treatment effects using the estimated
propensity score.
Econometrica 71, 1161–1189.
Horvitz and
Thompson (1952)
Horvitz, D. G. and D. J. Thompson (1952).
A generalization of sampling without replacement from a finite
universe.
Journal of the American Statistical Association 47(260), 663–685.
Huang (2022)
Huang, M. (2022).
Sensitivity analysis in the generalization of experimental results.
Huitfeldt et al. (2019)
Huitfeldt, A., S. Swanson, M. Stensrud, and E. Suzuki (2019, 12).
Effect heterogeneity and variable selection for standardizing causal
effects to a target population.
European Journal of Epidemiology 34.
Imai
et al. (2008)
Imai, K., G. King, and E. A. Stuart (2008).
Misunderstandings between experimentalists and observationalists
about causal inference.
Journal of the Royal Statistical Society: Series A (Statistics
in Society) 171(2), 481–502.
Imbens (2011)
Imbens, G. W. (2011).
Experimental design for unit and cluster randomid trials.
International Initiative for Impact Evaluation Paper.
Imbens and
Rubin (2015)
Imbens, G. W. and D. B. Rubin (2015).
Causal Inference in Statistics, Social, and Biomedical
Sciences.
Cambridge UK: Cambridge University Press.
Josey
et al. (2021)
Josey, K. P., S. A. Berkowitz, D. Ghosh, and S. Raghavan (2021).
Transporting experimental results with entropy balancing.
Statistics in Medicine 40(19), 4310–4326.
Kallus
et al. (2018)
Kallus, N., A. M. Puli, and U. Shalit (2018).
Removing hidden confounding by experimental grounding.
In Advances in neural information processing systems, pp. 10888–10897.
Kern
et al. (2016)
Kern, H. L., E. A. Stuart, J. Hill, and D. P. Green (2016).
Assessing methods for generalizing experimental impact estimates to
target populations.
Journal of research on educational effectiveness 9(1), 103–127.
Lee
et al. (2021)
Lee, D., S. Yang, L. Dong, X. Wang, D. Zeng, and J. Cai (2021, 12).
Improving trial generalizability using observational studies.
Biometrics.
Lefebvre
et al. (2008)
Lefebvre, G., J. Delaney, and R. Platt (2008, 08).
Impact of mis‐specification of the treatment model on estimates
from a marginal structural model.
Statistics in medicine 27, 3629–42.
Ling
et al. (2022)
Ling, A. Y., M. E. Montez-Rath, P. Carita, K. Chandross, L. Lucats, Z. Meng,
B. Sebastien, K. Kapphahn, and M. Desai (2022).
A critical review of methods for real-world applications to
generalize or transport clinical trial findings to target populations of
interest.
Liu
et al. (2021)
Liu, R., S. Rizzo, S. Whipple, N. Pal, A. Lopez Pineda, M. Lu, B. Arnieri,
Y. Lu, W. Capra, R. Copping, and J. Zou (2021, 04).
Evaluating eligibility criteria of oncology trials using real-world
data and ai.
Nature 592.
Lunceford and
Davidian (2004)
Lunceford, J. K. and M. Davidian (2004).
Stratification and weighting via the propensity score in estimation
of causal treatment effects: A comparative study.
In Statistics in Medicine, pp. 2937–2960.
Miratrix
et al. (2013)
Miratrix, L. W., J. S. Sekhon, and B. Yu (2013).
Adjusting treatment effect estimates by post-stratification in
randomized experiments.
Journal of the Royal Statistical Society Series B 75,
369–396.
Nie
et al. (2021)
Nie, X., G. Imbens, and S. Wager (2021).
Covariate balancing sensitivity analysis for extrapolating randomized
trials across locations.
O’Muircheartaigh
and Hedges (2013)
O’Muircheartaigh, C. and L. Hedges (2013, 11).
Generalizing from unrepresentative experiments: A stratified
propensity score approach.
Journal of the Royal Statistical Society: Series C (Applied
Statistics) 63.
Pearl (2015)
Pearl, J. (2015).
Generalizing experimental findings.
Journal of Causal Inference 3(2), 259–266.
Pearl and
Bareinboim (2011)
Pearl, J. and E. Bareinboim (2011).
Transportability of causal and statistical relations: A formal
approach.
In Proceedings of the Twenty-Fifth AAAI Conference on Artificial
Intelligence, AAAI’11, pp. 247–254. AAAI Press.
Robins
et al. (1992)
Robins, J. M., S. D. Mark, and W. Newey (1992).
Estimating exposure effects by modelling the expectation of exposure
conditional on confounders.
Biometrics 48 2, 479–95.
Rosenbaum and
Rubin (1983)
Rosenbaum, P. R. and D. B. Rubin (1983).
The central role of the propensity score in observational studies for
causal effects.
Biometrika 70(1), 41–55.
Rothman (2011)
Rothman, K. J. (2011).
Epidemiology: an introduction (2 ed.).
Oxford University Press.
Rothman and
Greenland (2000)
Rothman, K. J. and S. Greenland (2000).
Modern Epidemiology (2 ed.).
Lippincott Williams and Wilkins.
Rothwell (2007)
Rothwell, P. (2007, 01).
External validity of randomised controlled trials: “to whom do the
results of this trial apply?”.
Lancet 365, 82–93.
Rotnitzky and
Smucler (2020)
Rotnitzky, A. and E. Smucler (2020).
Efficient adjustment sets for population average causal treatment
effect estimation in graphical models.
Journal of Machine Learning Research 21(188), 1–86.
Schnitzer
et al. (2015)
Schnitzer, M., J. Lok, and S. Gruber (2015, 07).
Variable selection for confounder control, flexible modeling and
collaborative targeted minimum loss-based estimation in causal inference.
The international journal of biostatistics 12.
Splawa-Neyman et al. (1990)
Splawa-Neyman, J., D. M. Dabrowska, and T. P. Speed (1990).
On the Application of Probability Theory to Agricultural
Experiments. Essay on Principles. Section 9.
Statistical Science 5(4), 465 – 472.
Stefanski and
Boos (2002)
Stefanski, L. A. and D. D. Boos (2002).
The calculus of m-estimation.
The American Statistician 56(1), 29–38.
Stuart
et al. (2011)
Stuart, E. A., S. R. Cole, C. P. Bradshaw, and P. J. Leaf (2011).
The use of propensity scores to assess the generalizability of
results from randomized trials.
Journal of the Royal Statistical Society: Series A (Statistics
in Society) 174, 369–386.
Stuart and
Rhodes (2017)
Stuart, E. A. and A. Rhodes (2017).
Generalizing treatment effect estimates from sample to population: A
case study in the difficulties of finding sufficient data.
Evaluation Review 41(4), 357–388.
PMID: 27491758.
Tipton (2013)
Tipton, E. (2013).
Improving generalizations from experiments using propensity score
subclassification: Assumptions, properties, and contexts.
Journal of Educational and Behavioral Statistics 38,
239–266.
Tipton
et al. (2016)
Tipton, E., K. Hallberg, L. Hedges, and W. Chan (2016, 07).
Implications of small samples for generalization: Adjustments and
rules of thumb.
Evaluation Review 41.
Velentgas et al. (2013)
Velentgas, P., N. A. Dreyer, P. Nourjah, S. R. Smith, and M. Torchia (2013).
Developing a protocol for observational comparative effectiveness
research: A user’s guide.
Westreich et al. (2017)
Westreich, D., J. Edwards, C. Lesko, E. Stuart, and S. Cole (2017, 05).
Transportability of trial results using inverse odds of sampling
weights.
American journal of epidemiology 186.
Witte and
Didelez (2018)
Witte, J. and V. Didelez (2018, 10).
Covariate selection strategies for causal inference: Classification
and comparison.
Biometrical Journal 61.
Wooldridge (2016)
Wooldridge, J. (2016).
Should instrumental variables be used as matching variables?
Research in Economics 70(2), 232–237.
Yang
et al. (2020)
Yang, S., J. K. Kim, and R. Song (2020).
Doubly robust inference when combining probability and
non‐probability samples with high dimensional data.
Journal of the Royal Statistical Society: Series B (Statistical
Methodology) 82.
APPENDIX
Appendix A Main proofs
A.1 Proof of Theorem 1 - Completely oracle estimator $\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}$
We first recall the expression of the completely oracle estimator introduced in Definition 3,
$$\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}=\frac{1}{n}\sum_{i=1}^{n}\frac{p_{\text{\tiny T}}\left(X_{i}\right)}{p_{\text{\tiny R}}\left(X_{i}\right)}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right).$$
This estimator can be rewritten as,
$$\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}=\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{p_{\text{\tiny R}}(x)}\left(\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\right),$$
since the $X_{i}$ take values in the finite categorical set $\mathds{X}$.
This rewriting is extensively used in the proof.
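As a quick numerical sanity check, the per-observation and per-category forms coincide exactly. The Python sketch below uses a hypothetical toy model (binary $X$; the probabilities $p_{\text{\tiny R}}$, $p_{\text{\tiny T}}$ and the outcome model are assumptions for illustration, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: binary covariate X, known category
# probabilities in the trial (p_R) and target (p_T) populations,
# Bernoulli(pi) treatment assignment.
p_R = np.array([0.7, 0.3])   # p_R(x)
p_T = np.array([0.4, 0.6])   # p_T(x)
pi, n = 0.5, 1000

X = rng.choice(2, size=n, p=p_R)
A = rng.binomial(1, pi, size=n)
Y0 = 0.5 * X + rng.normal(0.0, 0.1, size=n)        # potential outcome Y^(0)
Y1 = 1.0 + 1.5 * X + rng.normal(0.0, 0.1, size=n)  # potential outcome Y^(1)
Y = A * Y1 + (1 - A) * Y0                          # observed outcome (SUTVA)

# Per-observation form of the completely oracle IPSW estimator.
ht = Y * A / pi - Y * (1 - A) / (1 - pi)
tau_obs = np.mean(p_T[X] / p_R[X] * ht)

# Equivalent rewriting as a sum over the categories of X.
tau_cat = sum(p_T[x] / p_R[x] * np.mean((X == x) * ht) for x in range(2))

assert np.isclose(tau_obs, tau_cat)
```

Under this toy model $\tau(x)=1+x$, so the estimate should be close to $\tau=0.4\cdot 1+0.6\cdot 2=1.6$.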
Bias
Recall that, for all $x\in\mathds{X}$, $p_{\text{\tiny R}}(x)$ and $p_{\text{\tiny T}}(x)$ are not random variables. We have
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}\right]$$
$$\displaystyle=\mathbb{E}\left[\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{p_{\text{\tiny R}}(x)}\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\right]$$
By definition
$$\displaystyle=\sum_{x\in\mathds{X}}\mathbb{E}\left[\frac{p_{\text{\tiny T}}(x)}{p_{\text{\tiny R}}(x)}\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\right]$$
Linearity of $$\mathbb{E}[.]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{p_{\text{\tiny R}}(x)}\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\right]$$
$$p_{\text{\tiny R}}(x)$$ and $$p_{\text{\tiny T}}(x)$$ are not random
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{p_{\text{\tiny R}}(x)}\mathbb{E}_{\text{\tiny R}}\left[\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\right]$$
Linearity & iid trial
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{p_{\text{\tiny R}}(x)}\mathbb{E}_{\text{\tiny R}}\left[\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\right)\right]$$
SUTVA (see Assumption 2).
Noting that,
$$p_{\text{\tiny R}}(x)=\mathbb{P}_{\text{\tiny R}}[X=x]=\mathbb{P}_{\text{\tiny R}}[X_{i}=x]=\mathbb{E}_{\text{\tiny R}}\left[\mathbbm{1}_{X_{i}=x}\right],$$
one can condition on the random variable $X_{i}$, yielding
$$\mathbb{E}_{\text{\tiny R}}\left[\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\right)\right]=\mathbb{E}_{\text{\tiny R}}\left[\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\mid X_{i}=x\right]\underbrace{\mathbb{E}_{\text{\tiny R}}\left[\mathbbm{1}_{X_{i}=x}\right]}_{=p_{\text{\tiny R}}(x)}.$$
Then,
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\mathbb{E}_{\text{\tiny R}}\left[\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\mid X_{i}=x\right]$$
From previous derivations
$$\displaystyle=\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\left(\frac{\mathbb{E}_{\text{\tiny R}}\left[Y_{i}^{(1)}A_{i}\mid X_{i}=x\right]}{\pi}-\frac{\mathbb{E}_{\text{\tiny R}}\left[Y_{i}^{(0)}(1-A_{i})\mid X_{i}=x\right]}{1-\pi}\right)$$
Linearity of $$\mathbb{E}[.]$$ and $$\pi$$ is constant
$$\displaystyle=\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\Bigg{(}\frac{\mathbb{E}_{\text{\tiny R}}\left[Y_{i}^{(1)}\mid X_{i}=x\right]\mathbb{E}_{\text{\tiny R}}\left[A_{i}\mid X_{i}=x\right]}{\pi}$$
$$\displaystyle\qquad-\frac{\mathbb{E}_{\text{\tiny R}}\left[Y_{i}^{(0)}\mid X_{i}=x\right]\mathbb{E}_{\text{\tiny R}}\left[(1-A_{i})\mid X_{i}=x\right]}{1-\pi}\Bigg{)}$$
Randomization (see Assumption 2)
$$\displaystyle=\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\left(\mathbb{E}_{\text{\tiny R}}\left[Y_{i}^{(1)}\mid X_{i}=x\right]-\mathbb{E}_{\text{\tiny R}}\left[Y_{i}^{(0)}\mid X_{i}=x\right]\right)$$
$$\mathbb{E}_{\text{\tiny R}}\left[A_{i}\mid X_{i}=x\right]=\pi$$
$$\displaystyle=\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\mathbb{E}_{\text{\tiny R}}\left[Y_{i}^{(1)}-Y_{i}^{(0)}\mid X_{i}=x\right]$$
Linearity of $$\mathbb{E}[.]$$
$$\displaystyle=\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\mathbb{E}_{\text{\tiny T}}\left[Y_{i}^{(1)}-Y_{i}^{(0)}\mid X_{i}=x\right]$$
Transportability (see Assumption 3)
$$\displaystyle=\tau,$$
Law of total expectation
which concludes the first part of the proof.
Note that the previous derivations, relying on iid, Assumption 2 (Trial internal validity with SUTVA, definition of $\pi$, and randomization), Assumption 3, and the law of total probability, lead to the following intermediary result,
$$\mathbb{E}_{\text{\tiny R}}\left[\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\mid X_{i}=x\right]=\mathbb{E}_{\text{\tiny T}}\left[Y_{i}^{(1)}-Y_{i}^{(0)}\mid X_{i}=x\right]=\tau(x).$$
(20)
Equation (20) will be used in other proofs.
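The unbiasedness $\mathbb{E}[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}]=\tau$ can also be checked by simulation. The minimal sketch below averages the completely oracle estimator over independent replications of a hypothetical toy trial (all distributions and parameters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

p_R = np.array([0.7, 0.3])
p_T = np.array([0.4, 0.6])
pi, n, reps = 0.5, 200, 4000

def oracle_ipsw():
    """One replication of the trial and the completely oracle estimator."""
    X = rng.choice(2, size=n, p=p_R)
    A = rng.binomial(1, pi, size=n)
    Y0 = 0.5 * X + rng.normal(0.0, 0.1, size=n)
    Y1 = 1.0 + 1.5 * X + rng.normal(0.0, 0.1, size=n)
    Y = A * Y1 + (1 - A) * Y0
    ht = Y * A / pi - Y * (1 - A) / (1 - pi)
    return np.mean(p_T[X] / p_R[X] * ht)

# Under this toy model tau(x) = E[Y^(1) - Y^(0) | X = x] = 1 + x,
# so tau = sum_x p_T(x) tau(x).
tau = np.sum(p_T * np.array([1.0, 2.0]))

mc_mean = np.mean([oracle_ipsw() for _ in range(reps)])
assert abs(mc_mean - tau) < 0.05   # unbiasedness, up to Monte Carlo error
```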
Variance
To shorten notation, we denote by $\mathbf{X}_{n}\in\mathds{X}^{n}$ the vector composed of the $n$ observations in the trial. We then use the law of total variance, conditioning on $\mathbf{X}_{n}$,
$$\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}\right]=\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}\mid\mathbf{X}_{n}\right]\right]+\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}\mid\mathbf{X}_{n}\right]\right].$$
(21)
Considering the first term in the right-hand side of (21),
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}\mid\mathbf{X}_{n}\right]$$
$$\displaystyle=\mathbb{E}\left[\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{p_{\text{\tiny R}}(x)}\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\right)\mid\mathbf{X}_{n}\right]$$
By definition (and SUTVA)
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{p_{\text{\tiny R}}(x)}\frac{1}{n}\mathbb{E}\left[\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\right)\mid\mathbf{X}_{n}\right].$$
Linearity of $$\mathbb{E}[.]$$
Note that this last derivation also uses the fact that neither $p_{\text{\tiny T}}(x)$ nor $p_{\text{\tiny R}}(x)$ are random variables.
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}\mid\mathbf{X}_{n}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{p_{\text{\tiny R}}(x)}\sum_{i=1}^{n}\frac{\mathbbm{1}_{X_{i}=x}}{n}\mathbb{E}\left[\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\mid X_{i}\right]$$
iid individuals
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{p_{\text{\tiny R}}(x)}\sum_{i=1}^{n}\frac{\mathbbm{1}_{X_{i}=x}}{n}\tau(X_{i})$$
$$\displaystyle=\frac{1}{n}\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{p_{\text{\tiny R}}(x)}\tau(x)\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}$$
Transportability (see Assumption 3)
Now, this last term can be written as a single sum over $i\in\{1,\ldots,n\}$, that is,
$$\frac{1}{n}\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{p_{\text{\tiny R}}(x)}\tau(x)\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}=\frac{1}{n}\sum_{i=1}^{n}\frac{p_{\text{\tiny T}}(X_{i})}{p_{\text{\tiny R}}(X_{i})}\tau(X_{i}).$$
Taking the variance of this term leads to,
$$\displaystyle\operatorname{Var}\left[\mathbb{E}_{\text{\tiny R}}\left[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}\mid\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=\operatorname{Var}\left[\frac{1}{n}\sum_{i=1}^{n}\frac{p_{\text{\tiny T}}(X_{i})}{p_{\text{\tiny R}}(X_{i})}\tau(X_{i})\right]$$
$$\displaystyle=\frac{1}{n}\operatorname{Var}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}(X)}{p_{\text{\tiny R}}(X)}\tau(X)\right].$$
iid observations on trial (Assumption 2)
(22)
Regarding the second term,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}\mid\mathbf{X}_{n}\right]$$
$$\displaystyle=\operatorname{Var}_{\text{\tiny R}}\left[\frac{1}{n}\sum_{i=1}^{n}\frac{p_{\text{\tiny T}}\left(X_{i}\right)}{p_{\text{\tiny R}}\left(X_{i}\right)}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\mid\mathbf{X}_{n}\right]$$
$$\displaystyle=\frac{1}{n^{2}}\sum_{i=1}^{n}\left(\frac{p_{\text{\tiny T}}\left(X_{i}\right)}{p_{\text{\tiny R}}\left(X_{i}\right)}\right)^{2}\operatorname{Var}_{\text{\tiny R}}\left[\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\mid\mathbf{X}_{n}\right]$$
$$\displaystyle=\frac{1}{n^{2}}\sum_{i=1}^{n}\left(\frac{p_{\text{\tiny T}}\left(X_{i}\right)}{p_{\text{\tiny R}}\left(X_{i}\right)}\right)^{2}\operatorname{Var}_{\text{\tiny R}}\left[\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\mid X_{i}\right].$$
(23)
Recall that the variance of the Horvitz-Thompson estimator (see Definition 1) conditioned on $X_{i}$ is given by
$$\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny HT},n}\mid X_{i}\right]=\frac{1}{n}\operatorname{Var}_{\text{\tiny R}}\left[\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\mid X_{i}\right].$$
(24)
Then, one can use Lemma LABEL:lemma:variance-HT-cond-X (see Section D) to obtain
$$\displaystyle n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny HT},n}\mid X_{i}\right]$$
$$\displaystyle=\mathbb{E}_{\text{\tiny R}}\left[\frac{\left(Y^{(1)}\right)^{2}}{\pi}\mid X_{i}\right]+\mathbb{E}_{\text{\tiny R}}\left[\frac{\left(Y^{(0)}\right)^{2}}{1-\pi}\mid X_{i}\right]-\tau(X_{i})^{2}:=V_{\text{\tiny HT}}(X_{i}).$$
(25)
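Identity (25) for $V_{\text{\tiny HT}}$ can be verified by Monte Carlo for one fixed category $x$. In the sketch below, the conditional outcome distributions (Gaussian, with the stated means and noise level) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fix one category x; assumed conditional outcome distributions.
pi, reps = 0.3, 200000
mu1, mu0, sd = 2.0, 0.5, 1.0   # E[Y^(1) | X = x], E[Y^(0) | X = x], noise sd
tau_x = mu1 - mu0

A = rng.binomial(1, pi, size=reps)
Y1 = mu1 + rng.normal(0.0, sd, size=reps)
Y0 = mu0 + rng.normal(0.0, sd, size=reps)
ht = Y1 * A / pi - Y0 * (1 - A) / (1 - pi)

# Closed form from (25):
# E[(Y^(1))^2 | x]/pi + E[(Y^(0))^2 | x]/(1-pi) - tau(x)^2.
V_HT = (mu1**2 + sd**2) / pi + (mu0**2 + sd**2) / (1 - pi) - tau_x**2

assert abs(np.var(ht) - V_HT) < 0.3   # agreement up to Monte Carlo error
```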
Then, coming back to (23),
$$\displaystyle\mathbb{E}_{\text{\tiny R}}\left[\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}\mid\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=\mathbb{E}_{\text{\tiny R}}\left[\frac{1}{n^{2}}\sum_{i=1}^{n}\left(\frac{p_{\text{\tiny T}}\left(X_{i}\right)}{p_{\text{\tiny R}}\left(X_{i}\right)}\right)^{2}V_{\text{\tiny HT}}(X_{i})\right]$$
$$\displaystyle=\mathbb{E}_{\text{\tiny R}}\left[\frac{1}{n^{2}}\sum_{i=1}^{n}\left(\sum_{x\in\mathds{X}}\mathbbm{1}_{X_{i}=x}\right)\left(\frac{p_{\text{\tiny T}}\left(X_{i}\right)}{p_{\text{\tiny R}}\left(X_{i}\right)}\right)^{2}V_{\text{\tiny HT}}(X_{i})\right]$$
$$\displaystyle=\mathbb{E}_{\text{\tiny R}}\left[\sum_{x\in\mathds{X}}\frac{1}{n^{2}}\left(\frac{p_{\text{\tiny T}}\left(x\right)}{p_{\text{\tiny R}}\left(x\right)}\right)^{2}V_{\text{\tiny HT}}(x)\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{1}{n^{2}}\left(\frac{p_{\text{\tiny T}}\left(x\right)}{p_{\text{\tiny R}}\left(x\right)}\right)^{2}V_{\text{\tiny HT}}(x)\mathbb{E}_{\text{\tiny R}}\left[\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{1}{n}\left(\frac{p_{\text{\tiny T}}\left(x\right)}{p_{\text{\tiny R}}\left(x\right)}\right)^{2}V_{\text{\tiny HT}}(x)\mathbb{E}_{\text{\tiny R}}\left[\frac{\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}{n}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{1}{n}\left(\frac{p_{\text{\tiny T}}\left(x\right)}{p_{\text{\tiny R}}\left(x\right)}\right)^{2}V_{\text{\tiny HT}}(x)p_{\text{\tiny R}}\left(x\right)$$
Assumption 1
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{1}{n}\frac{p^{2}_{\text{\tiny T}}\left(x\right)}{p_{\text{\tiny R}}\left(x\right)}V_{\text{\tiny HT}}(x)$$
$$\displaystyle=\frac{1}{n}\sum_{x\in\mathds{X}}\frac{p^{2}_{\text{\tiny T}}\left(x\right)}{p_{\text{\tiny R}}\left(x\right)}\left(\mathbb{E}_{\text{\tiny R}}\left[\frac{\left(Y^{(1)}\right)^{2}}{\pi}\mid X=x\right]+\mathbb{E}_{\text{\tiny R}}\left[\frac{\left(Y^{(0)}\right)^{2}}{1-\pi}\mid X=x\right]-\tau(x)^{2}\right),$$
(26)
Combining (26) and (22) into (21) leads to, for all $n$,
$$\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}\right]=\frac{V_{o}}{n}$$
where
$$V_{o}=\operatorname{Var}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}(X)}{p_{\text{\tiny R}}(X)}\tau(X)\right]+\sum_{x\in\mathds{X}}\frac{p^{2}_{\text{\tiny T}}\left(x\right)}{p_{\text{\tiny R}}\left(x\right)}V_{\text{\tiny HT}}(x).$$
Note that it is also possible to write the result such as,
$$V_{o}=\operatorname{Var}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}(X)}{p_{\text{\tiny R}}(X)}\tau(X)\right]+\mathbb{E}_{\text{\tiny R}}\left[\frac{p^{2}_{\text{\tiny T}}\left(X\right)}{p^{2}_{\text{\tiny R}}\left(X\right)}V_{\text{\tiny HT}}(X)\right],$$
noting that
$$\sum_{x\in\mathds{X}}\frac{p^{2}_{\text{\tiny T}}\left(x\right)}{p_{\text{\tiny R}}\left(x\right)}V_{\text{\tiny HT}}(x)=\mathbb{E}_{\text{\tiny R}}\left[\frac{p^{2}_{\text{\tiny T}}\left(X\right)}{p^{2}_{\text{\tiny R}}\left(X\right)}V_{\text{\tiny HT}}(X)\right].$$
Quadratic risk and consistency
For any estimate $\hat{\tau}$, we have
$$\mathbb{E}\left[\left(\hat{\tau}-\tau\right)^{2}\right]=\left(\mathbb{E}\left[\hat{\tau}\right]-\tau\right)^{2}+\operatorname{Var}\left[\hat{\tau}\right].$$
Therefore, since the completely oracle IPSW estimator is unbiased, its risk satisfies
$$\mathbb{E}\left[\left(\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}-\tau\right)^{2}\right]=\frac{V_{o}}{n}.$$
The $L^{2}$ consistency follows by letting $n$ tend to infinity.
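The variance result $\operatorname{Var}[\hat{\tau}_{\pi,\text{\tiny T,R},n}^{*}]=V_{o}/n$ can be verified numerically. The sketch below (a toy model with assumed parameters, not the paper's data) computes $V_{o}$ in closed form and compares it with the empirical variance across replications:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy parameters: two categories, Gaussian outcome noise.
p_R = np.array([0.7, 0.3])
p_T = np.array([0.4, 0.6])
pi, sd = 0.5, 0.1
n, reps = 200, 4000

mu0 = np.array([0.0, 0.5])   # E[Y^(0) | X = x]
mu1 = np.array([1.0, 2.5])   # E[Y^(1) | X = x]
tau_x = mu1 - mu0            # tau(x)

# V_o = Var_R[(p_T/p_R) tau(X)] + sum_x p_T(x)^2 / p_R(x) * V_HT(x).
w_tau = p_T / p_R * tau_x
var_term = np.sum(p_R * w_tau**2) - np.sum(p_R * w_tau) ** 2
V_HT = (mu1**2 + sd**2) / pi + (mu0**2 + sd**2) / (1 - pi) - tau_x**2
V_o = var_term + np.sum(p_T**2 / p_R * V_HT)

def oracle_ipsw():
    X = rng.choice(2, size=n, p=p_R)
    A = rng.binomial(1, pi, size=n)
    Y = (A * (mu1[X] + rng.normal(0.0, sd, n))
         + (1 - A) * (mu0[X] + rng.normal(0.0, sd, n)))
    ht = Y * A / pi - Y * (1 - A) / (1 - pi)
    return np.mean(p_T[X] / p_R[X] * ht)

est = np.array([oracle_ipsw() for _ in range(reps)])
assert abs(n * np.var(est) - V_o) < 0.1 * V_o   # n * Var ~ V_o
```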
A.2 Proofs for the semi-oracle IPSW $\hat{\tau}_{\pi,\text{\tiny T},n}^{*}$
A.2.1 Proof of Proposition 1
Proof.
We first recall the definition of the semi-oracle estimator introduced in Definition 5:
$$\hat{\tau}_{\pi,\text{\tiny T},n}^{*}=\frac{1}{n}\sum_{i=1}^{n}\frac{p_{\text{\tiny T}}\left(X_{i}\right)}{\hat{p}_{\text{\tiny R},n}(X_{i})}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right),$$
where, for all $x\in\mathds{X}$,
$$\hat{p}_{\text{\tiny R},n}\left(x\right)=\frac{\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}{n}.$$
(28)
Similarly to the completely oracle estimator, the semi-oracle estimator can be written as,
$$\hat{\tau}_{\pi,\text{\tiny T},n}^{*}=\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\left(\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\right),$$
since the $X_{i}$ take values in the finite categorical set $\mathds{X}$.
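Since $\hat{p}_{\text{\tiny R},n}(x)$ is the empirical frequency of category $x$, it cancels in this rewriting, leaving the target-weighted within-category Horvitz-Thompson averages (this cancellation is used explicitly in the bias derivation below). A toy numerical check, under an assumed data-generating model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy data-generating model, for illustration only.
p_R = np.array([0.7, 0.3])
p_T = np.array([0.4, 0.6])
pi, n = 0.5, 500

X = rng.choice(2, size=n, p=p_R)
A = rng.binomial(1, pi, size=n)
Y = A * (1.0 + 1.5 * X) + (1 - A) * 0.5 * X + rng.normal(0.0, 0.1, size=n)

ht = Y * A / pi - Y * (1 - A) / (1 - pi)
p_hat = np.array([(X == x).mean() for x in range(2)])   # empirical p_R(x)

# Semi-oracle IPSW, per-observation form.
tau_semi = np.mean(p_T[X] / p_hat[X] * ht)

# Equivalent form after the empirical frequencies cancel:
# target-weighted within-category Horvitz-Thompson averages.
tau_cells = sum(p_T[x] * ht[X == x].mean() for x in range(2) if (X == x).any())

assert np.isclose(tau_semi, tau_cells)
```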
Bias
To shorten notation, we denote by $\mathbf{X}_{n}\in\mathds{X}^{n}$ the full vector of covariates, comprising the $n$ observations $X_{1},X_{2},\dots,X_{n}\in\mathds{X}$ in the trial. We have
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]$$
$$\displaystyle=\mathbb{E}\left[\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\left(\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\right)\right]$$
By definition
$$\displaystyle=\sum_{x\in\mathds{X}}\mathbb{E}\left[\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\left(\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\right)\right)\right]$$
Linearity and SUTVA
$$\displaystyle=\sum_{x\in\mathds{X}}\mathbb{E}\left[\mathbb{E}\left[\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\left(\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\right)\right)\mid\mathbf{X}_{n}\right]\right]$$
Law of total expect.
$$\displaystyle=\sum_{x\in\mathds{X}}\mathbb{E}\left[p_{\text{\tiny T}}(x)\mathbb{E}\left[\frac{1}{\hat{p}_{\text{\tiny R},n}(x)}\left(\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\right)\right)\mid\mathbf{X}_{n}\right]\right]$$
$$p_{\text{\tiny T}}(x)$$ is deterministic
$$\displaystyle=\sum_{x\in\mathds{X}}\mathbb{E}\left[\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\mathbb{E}\left[\left(\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\right)\right)\mid\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\mathbb{E}\left[\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\mathbb{E}\left[\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\mid\mathbf{X}_{n}\right]\right]$$
This last line uses the fact that $\frac{\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}{n}$ is measurable with respect to $\mathbf{X}_{n}$.
Then, note that,
$$\displaystyle\mathbbm{1}_{X_{i}=x}\mathbb{E}\left[\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\mid\mathbf{X}_{n}\right]$$
$$\displaystyle=\mathbbm{1}_{X_{i}=x}\mathbb{E}\left[\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\mid X_{i}\right]$$
iid observations.
Then, recall from the proof in Subsection A.1, and in particular from (20) that
$$\displaystyle\mathbbm{1}_{X_{i}=x}\mathbb{E}\left[\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\mid\mathbf{X}_{n}\right]$$
$$\displaystyle=\mathbbm{1}_{X_{i}=x}\mathbb{E}\left[\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\mid X=x\right]$$
Indicator forcing $$X=x$$.
$$\displaystyle=\mathbbm{1}_{X_{i}=x}\tau(x)$$
Transportability.
Therefore,
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\mathbb{E}\left[\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\frac{\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}{n}\tau(x)\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\mathbb{E}\left[\frac{p_{\text{\tiny T}}(x)}{\frac{\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}{n}}\frac{\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}{n}\tau(x)\right]$$
Estimation procedure - Equation 28
Let $Z_{n}(x)=\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}$, which is distributed as a binomial $\mathfrak{B}(n,p_{\text{\tiny R}}(x))$. Note that, by convention, the term inside the expectation is null if $Z_{n}(x)=0$.
(To be clearer, we could have multiplied by $\mathbbm{1}_{Z_{n}(x)>0}$ in the formula summing over the categories from the beginning; this was implicit, since it is the rewriting of a sum over the trial's observations, but it would lead to heavier notation.)
This leads to the following equality,
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\mathbb{E}\left[p_{\text{\tiny T}}(x)\tau(x)\mathbbm{1}_{Z_{n}(x)>0}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\tau(x)\mathbb{E}\left[\mathbbm{1}_{Z_{n}(x)>0}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\tau(x)\left(1-(1-p_{\text{\tiny R}}(x))^{n}\right).$$
Upper bound of the bias.
If $p_{\text{\tiny R}}(x)=0$, then $p_{\text{\tiny T}}(x)=0$ (due to the support inclusion assumption, see Assumption 4), so such categories can be discarded and we may assume $p_{\text{\tiny R}}(x)>0$ for all $x\in\mathds{X}$. It is then possible to bound the bias for any sample size $n$, noting that,
$$\displaystyle|\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]-\tau|$$
$$\displaystyle=\left|\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\tau(x)\left(1-(1-p_{\text{\tiny R}}(x))^{n}\right)-\tau\right|$$
$$\displaystyle=\left|\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\tau(x)\left(1-(1-p_{\text{\tiny R}}(x))^{n}\right)-\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\tau(x)\right|$$
$$\displaystyle=\left|\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\tau(x)\left(1-p_{\text{\tiny R}}(x)\right)^{n}\right|$$
$$\displaystyle\leq\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n}\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\left|\tau(x)\right|$$
$$\displaystyle\leq\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n}\mathbb{E}_{\text{\tiny T}}\left[|\tau(X)|\right].$$
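The closed-form expectation, the indicator identity $\mathbb{E}[\mathbbm{1}_{Z_{n}(x)>0}]=1-(1-p_{\text{\tiny R}}(x))^{n}$, and the bias bound can all be checked numerically. In the sketch below, the parameters are assumptions chosen so that, with $n$ small, empty categories are likely:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters; n is small so empty categories are likely.
p_R = np.array([0.9, 0.1])
p_T = np.array([0.4, 0.6])
tau_x = np.array([1.0, 2.0])   # assumed tau(x)
n, reps = 5, 20000

# Monte Carlo check of E[1_{Z_n(x) > 0}] = 1 - (1 - p_R(x))^n.
hit = np.zeros(2)
for _ in range(reps):
    X = rng.choice(2, size=n, p=p_R)
    hit += [(X == 0).any(), (X == 1).any()]
assert np.allclose(hit / reps, 1 - (1 - p_R) ** n, atol=0.02)

# Closed-form expectation, bias, and the upper bound derived above.
tau = np.sum(p_T * tau_x)
expectation = np.sum(p_T * tau_x * (1 - (1 - p_R) ** n))
bias = expectation - tau   # = -sum_x p_T(x) tau(x) (1 - p_R(x))^n
bound = (1 - p_R.min()) ** n * np.sum(p_T * np.abs(tau_x))
assert abs(bias) <= bound
```

Here the bias is far from negligible at $n=5$, illustrating why the bound decays only geometrically in $n$.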
Variance
The proof follows the same track as that of the completely oracle IPSW, conditioning on $\mathbf{X}_{n}$, and using the law of total variance,
$$\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]=\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\mid\mathbf{X}_{n}\right]\right]+\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\mid\mathbf{X}_{n}\right]\right].$$
(29)
For the first inside term,
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\mid\mathbf{X}_{n}\right]$$
$$\displaystyle=\mathbb{E}\left[\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\right)\mid\mathbf{X}_{n}\right]$$
By definition (and SUTVA)
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\mathbb{E}\left[\left(\frac{Y_{i}^{(1)}A_{i}}{\pi}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\pi}\right)\mid\mathbf{X}_{n}\right]$$
Linearity of $$\mathbb{E}[.]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\tau(X_{i})$$
$$\displaystyle=\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\tau(x)\mathbbm{1}_{Z_{n}(x)>0}$$
Equation 28
$$\displaystyle=\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)>0}\mid\mathbf{X}_{n}\right]$$
Rewriting the sum as an expectation.
Note that
$$\displaystyle\operatorname{Var}\left[\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)>0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=\operatorname{Var}\left[\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\right]-\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=\operatorname{Var}\left[\tau-\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=\operatorname{Var}\left[\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right],$$
(30)
as the only source of randomness comes from $\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]$.
Therefore, the first inside term of (29) corresponds to,
$$\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\mid\mathbf{X}_{n}\right]\right]=\operatorname{Var}\left[\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}\mid\mathbf{X}_{n}\right]\right].$$
(31)
On the other hand,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\mid\mathbf{X}_{n}\right]$$
$$\displaystyle=\operatorname{Var}\left[\frac{1}{n}\sum_{i=1}^{n}\frac{p_{\text{\tiny T}}\left(X_{i}\right)}{\hat{p}_{\text{\tiny R},n}\left(X_{i}\right)}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\mid\mathbf{X}_{n}\right]$$
$$\displaystyle=\frac{1}{n^{2}}\sum_{i=1}^{n}\left(\frac{p_{\text{\tiny T}}\left(X_{i}\right)}{\hat{p}_{\text{\tiny R},n}\left(X_{i}\right)}\right)^{2}\operatorname{Var}\left[\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\mid\mathbf{X}_{n}\right]$$
$$\displaystyle=\frac{1}{n^{2}}\sum_{i=1}^{n}\left(\frac{p_{\text{\tiny T}}\left(X_{i}\right)}{\hat{p}_{\text{\tiny R},n}\left(X_{i}\right)}\right)^{2}\operatorname{Var}\left[\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\mid X_{i}\right]$$
$$\displaystyle=\frac{1}{n^{2}}\sum_{i=1}^{n}\left(\frac{p_{\text{\tiny T}}\left(X_{i}\right)}{\hat{p}_{\text{\tiny R},n}\left(X_{i}\right)}\right)^{2}V_{\text{\tiny HT}}(X_{i}),$$
where the last line follows from intermediate results in the proof for the completely oracle estimator (see equation (25)), with
$$V_{\text{\tiny HT}}(x):=\mathbb{E}_{\text{\tiny R}}\left[\frac{\left(Y^{(1)}\right)^{2}}{\pi}\mid X=x\right]+\mathbb{E}_{\text{\tiny R}}\left[\frac{\left(Y^{(0)}\right)^{2}}{1-\pi}\mid X=x\right]-\tau(x)^{2}.$$
Then,
$$\displaystyle\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\mid\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=\mathbb{E}\left[\frac{1}{n^{2}}\sum_{i=1}^{n}\left(\frac{p_{\text{\tiny T}}\left(X_{i}\right)}{\hat{p}_{\text{\tiny R},n}\left(X_{i}\right)}\right)^{2}V_{\text{\tiny HT}}(X_{i})\right]$$
From previous derivations
$$\displaystyle=\mathbb{E}\left[\sum_{x\in\mathds{X}}\left(\frac{1}{n^{2}}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{p_{\text{\tiny T}}\left(X_{i}\right)}{\hat{p}_{\text{\tiny R},n}\left(X_{i}\right)}\right)^{2}V_{\text{\tiny HT}}(X_{i})\right)\right]$$
Categorical $$X$$
$$\displaystyle=\mathbb{E}\left[\sum_{x\in\mathds{X}}\frac{1}{n^{2}}\left(\frac{p_{\text{\tiny T}}\left(x\right)}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right)^{2}V_{\text{\tiny HT}}(x)\left(\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\right)\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{1}{n^{2}}p_{\text{\tiny T}}\left(x\right)^{2}V_{\text{\tiny HT}}(x)\mathbb{E}\left[\left(\frac{1}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right)^{2}\left(\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\right)\right].$$
Replacing $\hat{p}_{\text{\tiny R},n}\left(x\right)$ by its explicit expression,
$$\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\mid\mathbf{X}_{n}\right]\right]=\frac{1}{n}\sum_{x\in\mathds{X}}p_{\text{\tiny T}}\left(x\right)^{2}V_{\text{\tiny HT}}(x)\mathbb{E}\left[\left(\frac{1}{\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}\right)^{2}\left(\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\right)\right].$$
(32)
As in the study of the bias, we introduce $Z_{n}(x)=\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}$, distributed as $\mathfrak{B}(n,p_{\text{\tiny R}}(x))$. One can then write
$$\displaystyle\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\mid\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=\frac{1}{n}\sum_{x\in\mathds{X}}p_{\text{\tiny T}}\left(x\right)^{2}V_{\text{\tiny HT}}(x)\mathbb{E}_{\text{\tiny R}}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}\right].$$
Recalling (29) and (31),
we have
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]$$
$$\displaystyle=\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\mid\mathbf{X}_{n}\right]\right]+\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\mid\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=\operatorname{Var}\left[\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathds{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]+\frac{1}{n}\sum_{x\in\mathds{X}}p_{\text{\tiny T}}\left(x\right)^{2}V_{\text{\tiny HT}}(x)\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}\right].$$
(33)
Upper bound on the variance
According to Arnould
et al. (2021) (see page 27), since $Z_{n}(x)$ is distributed as $\mathfrak{B}(n,p_{\text{\tiny R}}(x))$, we have
$$\forall x\in\mathds{X},\,\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)\neq 0}}{Z_{n}(x)}\right]\leq\frac{2}{(n+1)p_{\text{\tiny R}}\left(x\right)}.$$
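As an illustrative aside (not part of the proof), this bound can be checked by Monte Carlo simulation. The sketch below assumes NumPy and uses arbitrary illustrative values of $n$ and $p_{\text{\tiny R}}(x)$; it compares the empirical value of $\mathbb{E}\left[\mathbbm{1}_{Z_{n}(x)\neq 0}/Z_{n}(x)\right]$ with $2/((n+1)p_{\text{\tiny R}}(x))$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 0.3                       # illustrative values of n and p_R(x)
Z = rng.binomial(n, p, size=200_000)

# Empirical estimate of E[ 1_{Z>0} / Z ]  (the ratio is set to 0 when Z = 0)
emp = np.mean(np.where(Z > 0, 1.0 / np.maximum(Z, 1), 0.0))
bound = 2.0 / ((n + 1) * p)          # the upper bound from Arnould et al. (2021)

print(f"empirical: {emp:.4f}, bound: {bound:.4f}")
```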
Besides,
$$\displaystyle\operatorname{Var}\left[\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathds{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\leq\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\mathds{1}_{Z_{n}(X)=0}\right]$$
$$\displaystyle\leq\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\left(1-p_{\text{\tiny R}}(X)\right)^{n}\right]$$
$$\displaystyle\leq\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]\left(1-\min_{x\in\mathbb{X}}p_{\text{\tiny R}}(x)\right)^{n}.$$
Combining these inequalities with (33) yields, for all $n$,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]\leq\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]\left(1-\min_{x\in\mathbb{X}}p_{\text{\tiny R}}(x)\right)^{n}+\frac{2}{n+1}\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}\left(x\right)^{2}}{p_{\text{\tiny R}}\left(x\right)}V_{\text{\tiny HT}}(x).$$
This expression can be further simplified to
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]\leq\frac{2V_{so}}{n+1}+\left(1-\min_{x\in\mathbb{X}}p_{\text{\tiny R}}(x)\right)^{n}\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right],$$
where
$$\displaystyle V_{so}:=\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}\left(x\right)^{2}}{p_{\text{\tiny R}}\left(x\right)}V_{\text{\tiny HT}}(x)=\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}(X)}{p_{\text{\tiny R}}(X)}\right)^{2}V_{\text{\tiny HT}}(X)\right].$$
∎
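As a side remark on the expression of $V_{so}$: since $p_{\text{\tiny T}}(x)^{2}/p_{\text{\tiny R}}(x)=p_{\text{\tiny R}}(x)\left(p_{\text{\tiny T}}(x)/p_{\text{\tiny R}}(x)\right)^{2}$, the sum is a $p_{\text{\tiny R}}$-weighted expectation, i.e., an expectation under the trial covariate distribution. A minimal numeric check of this rewriting, with made-up probabilities and stand-in values for $V_{\text{\tiny HT}}$:

```python
# Illustrative probabilities over a 3-category support (made-up values)
p_R = [0.5, 0.3, 0.2]   # trial covariate distribution
p_T = [0.2, 0.3, 0.5]   # target covariate distribution
V   = [1.0, 2.0, 0.5]   # stand-in values for V_HT(x)

# Left: sum_x p_T(x)^2 / p_R(x) * V_HT(x)
lhs = sum(t ** 2 / r * v for t, r, v in zip(p_T, p_R, V))
# Right: E_R[(p_T(X)/p_R(X))^2 V_HT(X)], the p_R-weighted expectation
rhs = sum(r * (t / r) ** 2 * v for t, r, v in zip(p_T, p_R, V))

print(lhs, rhs)
```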
A.2.2 Proof of Corollary 1
Proof.
Asymptotically unbiased
Recall the expression of the expectation of the semi-oracle IPSW from Proposition 1:
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\tau(x)\left(1-(1-p_{\text{\tiny R}}(x))^{n}\right).$$
According to Assumption 4, we have
$\forall x\in\mathds{X}$, $0<p_{\text{\tiny R}}(x)<1$. As a consequence,
$$\lim_{n\to\infty}\left(1-(1-p_{\text{\tiny R}}(x))^{n}\right)=1,$$
which leads to
$$\lim_{n\to\infty}\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]=\tau.$$
Asymptotic variance
Recall the expression of the variance of the semi-oracle IPSW from Proposition 1:
$$n\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]=n\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\mid\mathbf{X}_{n}\right]\right]+\sum_{x\in\mathds{X}}p_{\text{\tiny T}}\left(x\right)^{2}V_{\text{\tiny HT}}(x)\mathbb{E}_{\text{\tiny R}}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}\right].$$
(34)
Note that the first term tends to zero since
$$\displaystyle 0\leq n\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\mid\mathbf{X}_{n}\right]\right]\leq\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]\left(1-\min_{x\in\mathbb{X}}p_{\text{\tiny R}}(x)\right)^{n}.$$
Therefore,
$$\lim\limits_{n\to\infty}n\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]=\lim\limits_{n\to\infty}\sum_{x\in\mathds{X}}p_{\text{\tiny T}}\left(x\right)^{2}V_{\text{\tiny HT}}(x)\mathbb{E}_{\text{\tiny R}}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}\right].$$
(35)
The next part of the proof consists of characterizing how the term $\mathbb{E}_{\text{\tiny R}}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{Z_{n}(x)/n}\right]$ converges. Let $\varepsilon>0$. Since, for all $x$, $p_{\text{\tiny R}}(x)>0$, we have
$$\displaystyle\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{\frac{Z_{n}(x)}{n}}\right]$$
$$\displaystyle=\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{\frac{Z_{n}(x)}{n}}\mathbbm{1}_{|\frac{Z_{n}(x)}{n}-p_{\text{\tiny R}}(x)|\geq\varepsilon}\right]+\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{\frac{Z_{n}(x)}{n}}\mathbbm{1}_{|\frac{Z_{n}(x)}{n}-p_{\text{\tiny R}}(x)|<\varepsilon}\right].$$
(36)
Regarding the first term in (36), we have
$$\displaystyle\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{\frac{Z_{n}(x)}{n}}\mathbbm{1}_{|\frac{Z_{n}(x)}{n}-p_{\text{\tiny R}}(x)|\geq\varepsilon}\right]$$
$$\displaystyle\leq n\mathbb{P}\left[|\frac{Z_{n}(x)}{n}-p_{\text{\tiny R}}(x)|\geq\varepsilon\right],$$
since, on the event $Z_{n}(x)>0$, $Z_{n}(x)\geq 1$. Now, by Chernoff’s inequality,
$$\displaystyle\mathbb{P}\left[|\frac{Z_{n}(x)}{n}-p_{\text{\tiny R}}(x)|\geq\varepsilon\right]\leq 2\exp\left(-2\varepsilon^{2}n\right),$$
which yields
$$\displaystyle\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{\frac{Z_{n}(x)}{n}}\mathbbm{1}_{|\frac{Z_{n}(x)}{n}-p_{\text{\tiny R}}(x)|\geq\varepsilon}\right]\leq 2n\exp\left(-2\varepsilon^{2}n\right).$$
(37)
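For illustration only, the concentration bound above can also be checked by simulation (a sketch assuming NumPy; the values of $n$, $p$, and $\varepsilon$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, eps = 200, 0.3, 0.1
Z = rng.binomial(n, p, size=100_000)

# Empirical probability of a deviation of at least eps
emp = np.mean(np.abs(Z / n - p) >= eps)
bound = 2.0 * np.exp(-2.0 * eps ** 2 * n)   # Chernoff-type bound used above

print(f"empirical: {emp:.5f}, bound: {bound:.5f}")
```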
Regarding the second term in equation (36), since
$$\displaystyle\frac{\mathbbm{1}_{Z_{n}(x)>0}}{\frac{Z_{n}(x)}{n}}\mathbbm{1}_{|\frac{Z_{n}(x)}{n}-p_{\text{\tiny R}}(x)|<\varepsilon}$$
is bounded above for $\varepsilon<p_{\text{\tiny R}}(x)/2$ and converges in probability to $1/p_{\text{\tiny R}}(x)$, we have
$$\displaystyle\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{\frac{Z_{n}(x)}{n}}\mathbbm{1}_{|\frac{Z_{n}(x)}{n}-p_{\text{\tiny R}}(x)|<\varepsilon}\right]\to\frac{1}{p_{\text{\tiny R}}(x)},\quad\textrm{as}\leavevmode\nobreak\ n\to\infty.$$
(38)
Combining (37) and (38), we have
$$\displaystyle\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{Z_{n}(x)/n}\right]\to\frac{1}{p_{\text{\tiny R}}(x)},\quad\textrm{as}\leavevmode\nobreak\ n\to\infty.$$
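This convergence can be illustrated numerically (a sketch assuming NumPy, with an arbitrary value $p_{\text{\tiny R}}(x)=0.4$):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.4
for n in (50, 500, 5000):
    Z = rng.binomial(n, p, size=100_000)
    # Empirical estimate of E[ 1_{Z>0} / (Z/n) ]
    emp = np.mean(np.where(Z > 0, n / np.maximum(Z, 1), 0.0))
    print(n, round(emp, 4))          # approaches 1/p = 2.5 as n grows
```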
Using equation (35), we finally obtain
$$\lim_{n\to\infty}n\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]=\sum_{x\in\mathds{X}}\frac{p_{\text{\tiny T}}\left(x\right)^{2}}{p_{\text{\tiny R}}\left(x\right)}V_{\text{\tiny HT}}(x)=\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}\left(X\right)}{p_{\text{\tiny R}}\left(X\right)}\right)^{2}V_{\text{\tiny HT}}(X)\right]=:V_{\text{\tiny so}}.$$
∎
A.2.3 Proof of Theorem 2
Proof.
For any estimate $\hat{\tau}$, we have
$$\mathbb{E}\left[\left(\hat{\tau}-\tau\right)^{2}\right]=\left(\mathbb{E}\left[\hat{\tau}\right]-\tau\right)^{2}+\operatorname{Var}\left[\hat{\tau}\right].$$
Therefore, the risk of the semi-oracle IPSW estimate can be bounded using the results of Subsection A.2.1 (or Proposition 1), in particular the bounds on the bias and the variance:
$$\displaystyle\mathbb{E}\left[\left(\hat{\tau}-\tau\right)^{2}\right]$$
$$\displaystyle\leq\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{2n}\mathbb{E}_{\text{\tiny T}}\left[|\tau(X)|\right]^{2}+\frac{2V_{so}}{n+1}+\left(1-\min_{x\in\mathbb{X}}p_{\text{\tiny R}}(x)\right)^{n}\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]$$
$$\displaystyle\leq\frac{2V_{so}}{n+1}+2\left(1-\min_{x\in\mathbb{X}}p_{\text{\tiny R}}(x)\right)^{n}\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right],$$
where the second inequality uses $\left(1-\min_{x\in\mathbb{X}}p_{\text{\tiny R}}(x)\right)^{2n}\leq\left(1-\min_{x\in\mathbb{X}}p_{\text{\tiny R}}(x)\right)^{n}$ together with the fact that
$$\displaystyle\operatorname{Var}_{\text{\tiny T}}\left[|\tau(X)|\right]$$
$$\displaystyle=\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]-\mathbb{E}_{\text{\tiny T}}\left[|\tau(X)|\right]^{2},$$
so that,
$$\displaystyle\mathbb{E}_{\text{\tiny T}}\left[|\tau(X)|\right]^{2}$$
$$\displaystyle\leq\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right].$$
The $L^{2}$ consistency holds by letting $n$ tend to infinity.
∎
A.3 Proofs for (estimated) IPSW $\hat{\tau}_{\pi,n,m}$
We first recall the definition of the fully estimated IPSW estimator introduced in Definition 6:
$$\hat{\tau}_{\pi,n,m}=\frac{1}{n}\sum_{i=1}^{n}\frac{\hat{p}_{\text{\tiny T},m}\left(X_{i}\right)}{\hat{p}_{\text{\tiny R},n}(X_{i})}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right),$$
where, for all $x\in\mathds{X}$,
$$\hat{p}_{\text{\tiny R},n}\left(x\right)=\frac{\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}{n},\qquad\text{and}\qquad\hat{p}_{\text{\tiny T},m}\left(x\right)=\frac{\sum_{i=n+1}^{n+m}\mathbbm{1}_{X_{i}=x}}{m}.$$
(39)
As for the completely oracle estimator, this estimated IPSW can be written as
$$\hat{\tau}_{\pi,n,m}=\sum_{x\in\mathds{X}}\frac{\hat{p}_{\text{\tiny T},m}\left(x\right)}{\hat{p}_{\text{\tiny R},n}(x)}\left(\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\right).$$
All the proofs below rely on this decomposition.
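For concreteness (this is an illustration, not part of the proofs), the two equivalent forms of $\hat{\tau}_{\pi,n,m}$ above, per observation and per category, can be implemented and compared on simulated data. The data-generating choices below are arbitrary; the covariate is categorical with three levels and the conditional effect is $\tau(x)=1+x$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, pi = 300, 400, 0.5
cats = np.arange(3)

X = rng.choice(cats, size=n, p=[0.5, 0.3, 0.2])    # trial covariates
Xt = rng.choice(cats, size=m, p=[0.2, 0.3, 0.5])   # target covariates
A = rng.binomial(1, pi, size=n)                    # randomized treatment
Y = X + A * (1.0 + X) + rng.normal(0, 0.1, n)      # outcomes; tau(x) = 1 + x

p_R = np.array([(X == c).mean() for c in cats])    # \hat p_{R,n}
p_T = np.array([(Xt == c).mean() for c in cats])   # \hat p_{T,m}

ht = Y * A / pi - Y * (1 - A) / (1 - pi)           # Horvitz-Thompson terms

# Per-observation form (Definition 6)
tau_hat = np.mean(p_T[X] / p_R[X] * ht)

# Per-category decomposition used throughout the proofs
tau_hat2 = sum(p_T[c] / p_R[c] * np.mean((X == c) * ht) for c in cats)

print(tau_hat, tau_hat2)
```

The two forms agree exactly (up to floating-point error) whenever every category is observed in the trial sample.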
A.3.1 Proof of Proposition 2
Proof.
Expression of the bias
Using the exact same derivations as in Subsection A.2.1 (Bias), but applying the law of total expectation conditional on $\mathbf{X}_{n+m}\in\mathds{X}^{n+m}$ (i.e., the $n+m$ observations $X_{1},X_{2},\dots,X_{n},X_{n+1},\dots,X_{n+m}\in\mathds{X}$ from the trial and target populations), one has
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\mathbb{E}\left[\frac{\hat{p}_{\text{\tiny T},m}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\frac{\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}{n}\tau(x)\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\mathbb{E}\left[\frac{\hat{p}_{\text{\tiny T},m}(x)}{\frac{\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}{n}}\frac{\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}}{n}\tau(x)\right]$$
Estimation procedure, equation (39)
$$\displaystyle=\sum_{x\in\mathds{X}}\mathbb{E}\left[\hat{p}_{\text{\tiny T},m}(x)\tau(x)\mathbbm{1}_{Z_{n}(x)\neq 0}\right].$$
Note that $Z_{n}(x)$ depends only on the trial sample $\mathcal{R}$, while $\hat{p}_{\text{\tiny T},m}(x)$ depends only on the target sample $\mathcal{T}$. In addition, $\tau(x)$ is deterministic, therefore
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\tau(x)\mathbb{E}\left[\hat{p}_{\text{\tiny T},m}(x)\right]\mathbb{E}\left[\mathbbm{1}_{Z_{n}(x)\neq 0}\right].$$
Note that $\mathbb{E}\left[\hat{p}_{\text{\tiny T},m}(x)\right]=p_{\text{\tiny T}}(x)$. Besides, according to the proof of the semi-oracle IPSW,
$$\displaystyle\mathbb{E}\left[\mathbbm{1}_{Z_{n}(x)\neq 0}\right]=1-\left(1-p_{\text{\tiny R}}(x)\right)^{n}.$$
Therefore,
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\tau(x)\left(1-(1-p_{\text{\tiny R}}(x))^{n}\right),$$
that is
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\right]-\tau$$
$$\displaystyle=-\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\tau(x)\left(1-p_{\text{\tiny R}}(x)\right)^{n}.$$
Upper bound on the bias
It is possible to bound the bias for any sample size $n$, using the exact same derivations as for the semi-oracle IPSW.
Expression of the variance
The proof follows the same spirit as that for the completely oracle estimator, conditioning on all observations $\mathbf{X}_{n+m}$:
$$\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]=\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]\right]+\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]\right].$$
(40)
We first compute the conditional expectation:
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]$$
$$\displaystyle=\mathbb{E}\left[\sum_{x\in\mathds{X}}\frac{\hat{p}_{\text{\tiny T},m}\left(x\right)}{\hat{p}_{\text{\tiny R},n}(x)}\left(\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\right)\mid\mathbf{X}_{n+m}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\mathbb{E}\left[\frac{\hat{p}_{\text{\tiny T},m}\left(x\right)}{\hat{p}_{\text{\tiny R},n}(x)}\left(\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\right)\mid\mathbf{X}_{n+m}\right]$$
Linearity of $$\mathbb{E}[.]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{\hat{p}_{\text{\tiny T},m}\left(x\right)}{\hat{p}_{\text{\tiny R},n}(x)}\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\mid\mathbf{X}_{n+m}\right].$$
Indeed, $\hat{p}_{\text{\tiny R},n}(x)$ and $\hat{p}_{\text{\tiny T},m}(x)$ are measurable with respect to $\mathbf{X}_{n+m}$.
Pursuing the computation, we have
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{\hat{p}_{\text{\tiny T},m}\left(x\right)}{\hat{p}_{\text{\tiny R},n}(x)}\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\mid\mathbf{X}_{n+m}\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{\hat{p}_{\text{\tiny T},m}\left(x\right)}{\hat{p}_{\text{\tiny R},n}(x)}\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[\mathbbm{1}_{X_{i}=x}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\mid\mathbf{X}_{n+m}\right]$$
Linearity of $$\mathbb{E}[.]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{\hat{p}_{\text{\tiny T},m}\left(x\right)}{\hat{p}_{\text{\tiny R},n}(x)}\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\mathbb{E}\left[\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\mid\mathbf{X}_{n+m}\right]$$
Conditioning on $$\mathbf{X}_{n}$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{\hat{p}_{\text{\tiny T},m}\left(x\right)}{\hat{p}_{\text{\tiny R},n}(x)}\tau(x)\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}$$
Transportability
$$\displaystyle=\sum_{x\in\mathds{X}}\hat{p}_{\text{\tiny T},m}\left(x\right)\tau(x)\mathds{1}_{Z_{n}(x)\neq 0},$$
where $Z_{n}(x)=\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}$. Then,
$$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]\right]$$
$$\displaystyle=\operatorname{Var}\left[\sum_{x\in\mathds{X}}\hat{p}_{\text{\tiny T},m}\left(x\right)\tau(x)\mathbbm{1}_{Z_{n}(x)\neq 0}\right]$$
$$\displaystyle=\operatorname{Var}\left[\sum_{x\in\mathds{X}}\frac{\sum_{i=n+1}^{n+m}\mathbbm{1}_{X_{i}=x}}{m}\tau(x)\mathbbm{1}_{Z_{n}(x)\neq 0}\right]$$
$$\displaystyle=\operatorname{Var}\left[\frac{1}{m}\sum_{i=n+1}^{n+m}\tau(X_{i})\mathbbm{1}_{Z_{n}(X_{i})\neq 0}\right].$$
Note that, contrary to the semi-oracle IPSW, this term is nonzero due to the estimation of $\hat{p}_{\text{\tiny T},m}$. By the law of total variance,
$$\displaystyle\operatorname{Var}\left[\frac{1}{m}\sum_{i=n+1}^{n+m}\tau(X_{i})\mathbbm{1}_{Z_{n}(X_{i})\neq 0}\right]$$
$$\displaystyle=\mathbb{E}\left[\operatorname{Var}\left[\frac{1}{m}\sum_{i=n+1}^{n+m}\tau(X_{i})\mathbbm{1}_{Z_{n}(X_{i})\neq 0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\quad+\operatorname{Var}\left[\mathbb{E}\left[\frac{1}{m}\sum_{i=n+1}^{n+m}\tau(X_{i})\mathbbm{1}_{Z_{n}(X_{i})\neq 0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=\frac{1}{m}\mathbb{E}\left[\operatorname{Var}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}|\mathbf{X}_{n}\right]\right]+\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=\frac{1}{m}\operatorname{Var}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]+\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}|\mathbf{X}_{n}\right]\right],$$
where the last line comes from the law of total variance applied to $\operatorname{Var}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]$. Recalling similar derivations from the semi-oracle IPSW proof, and in particular (30), one has
$$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=\operatorname{Var}\left[\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right],$$
so that
$$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]\right]$$
$$\displaystyle=\frac{1}{m}\operatorname{Var}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]+\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right].$$
(41)
For the other term of (40),
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]$$
$$\displaystyle=\operatorname{Var}\left[\frac{1}{n}\sum_{i=1}^{n}\frac{\hat{p}_{\text{\tiny T},m}\left(X_{i}\right)}{\hat{p}_{\text{\tiny R},n}\left(X_{i}\right)}\left(\frac{Y_{i}A_{i}}{\pi}-\frac{Y_{i}(1-A_{i})}{1-\pi}\right)\mid\mathbf{X}_{n+m}\right]$$
$$\displaystyle=\frac{1}{n^{2}}\sum_{i=1}^{n}\left(\frac{\hat{p}_{\text{\tiny T},m}\left(X_{i}\right)}{\hat{p}_{\text{\tiny R},n}\left(X_{i}\right)}\right)^{2}V_{\text{\tiny HT}}(X_{i}).$$
The derivations are very similar to those for the semi-oracle estimator, using the fact that $\hat{p}_{\text{\tiny R},n}(x)$ and $\hat{p}_{\text{\tiny T},m}(x)$ are measurable with respect to $\mathbf{X}_{n+m}$. We have
$$\displaystyle\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]\right]$$
$$\displaystyle=\mathbb{E}\left[\frac{1}{n^{2}}\sum_{i=1}^{n}\left(\frac{\hat{p}_{\text{\tiny T},m}\left(X_{i}\right)}{\hat{p}_{\text{\tiny R},n}\left(X_{i}\right)}\right)^{2}V_{\text{\tiny HT}}(X_{i})\right]$$
From previous derivations
$$\displaystyle=\mathbb{E}\left[\sum_{x\in\mathds{X}}\left(\frac{1}{n^{2}}\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\left(\frac{\hat{p}_{\text{\tiny T},m}\left(X_{i}\right)}{\hat{p}_{\text{\tiny R},n}\left(X_{i}\right)}\right)^{2}V_{\text{\tiny HT}}(X_{i})\right)\right]$$
Categorical $$X$$
$$\displaystyle=\mathbb{E}\left[\sum_{x\in\mathds{X}}\frac{1}{n^{2}}\left(\frac{\hat{p}_{\text{\tiny T},m}\left(x\right)}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right)^{2}V_{\text{\tiny HT}}(x)\left(\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\right)\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{1}{n^{2}}V_{\text{\tiny HT}}(x)\mathbb{E}\left[\left(\frac{\hat{p}_{\text{\tiny T},m}\left(x\right)}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right)^{2}\left(\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\right)\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{1}{n}V_{\text{\tiny HT}}(x)\mathbb{E}\left[\frac{\left(\hat{p}_{\text{\tiny T},m}\left(x\right)\right)^{2}}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\mathbbm{1}_{Z_{n}(x)\neq 0}\right].$$
In particular, the last term can be simplified to
$$\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]\right]=\sum_{x\in\mathds{X}}\frac{1}{n}V_{\text{\tiny HT}}(x)\mathbb{E}\left[\left(\hat{p}_{\text{\tiny T},m}\left(x\right)\right)^{2}\right]\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)\neq 0}}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right].$$
(42)
This last derivation is possible because $\hat{p}_{\text{\tiny T},m}\left(x\right)$, which depends on $\mathcal{T}$, and $\hat{p}_{\text{\tiny R},n}\left(x\right)$, which depends on $\mathcal{R}$, are independent. The difference from the semi-oracle estimator comes from the term
$$\displaystyle\mathbb{E}\left[\left(\hat{p}_{\text{\tiny T},m}\left(x\right)\right)^{2}\right]$$
$$\displaystyle=\mathbb{E}\left[\left(\frac{\sum_{i=n+1}^{n+m}\mathbbm{1}_{X_{i}=x}}{m}\right)^{2}\right]$$
$$\displaystyle=\frac{1}{m^{2}}\mathbb{E}\left[\left(\sum_{i=n+1}^{n+m}\mathbbm{1}_{X_{i}=x}\right)^{2}\right]$$
$$\displaystyle=\frac{1}{m^{2}}\left(mp_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))+m^{2}p_{\text{\tiny T}}^{2}(x)\right)$$
$$\displaystyle=\frac{p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))}{m}+p_{\text{\tiny T}}^{2}(x).$$
(43)
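Equation (43) is a standard binomial second-moment identity; as an illustration, it can be verified exactly by summing against the $\mathfrak{B}(m,p_{\text{\tiny T}}(x))$ probability mass function (the values of $m$ and $p$ below are arbitrary):

```python
import math

m, p = 25, 0.37   # illustrative values for m and p_T(x)

# E[(Z/m)^2] computed exactly from the binomial pmf, Z ~ B(m, p)
second_moment = sum(
    math.comb(m, k) * p ** k * (1 - p) ** (m - k) * (k / m) ** 2
    for k in range(m + 1)
)
closed_form = p * (1 - p) / m + p ** 2   # the right-hand side of (43)

print(second_moment, closed_form)
```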
Using (41), (42), and (43) in (40), we have
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]$$
$$\displaystyle=\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]\right]+\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]\right]$$
$$\displaystyle=\frac{1}{m}\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]+\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\qquad+\frac{1}{n}\sum_{x\in\mathds{X}}V_{\text{\tiny HT}}(x)\frac{p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))}{m}\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)\neq 0}}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right]+\frac{1}{n}\sum_{x\in\mathds{X}}V_{\text{\tiny HT}}(x)p_{\text{\tiny T}}^{2}(x)\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)\neq 0}}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right]$$
$$\displaystyle=\frac{1}{m}\left(\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]-\operatorname{Var}\left[\mathbb{E}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]\right)+\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]$$
$$\displaystyle\qquad+\frac{1}{nm}\sum_{x\in\mathds{X}}V_{\text{\tiny HT}}(x)p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)\neq 0}}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right].$$
(44)
Upper bound on the variance
We first bound (41), corresponding to
$$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]\right]$$
$$\displaystyle=\frac{1}{m}\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]+\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n+m}\right]\right].$$
We have
$$\displaystyle\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]$$
$$\displaystyle=\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)-\tau(X)\mathbbm{1}_{Z_{n}(X)=0}\right]$$
$$\displaystyle=\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]-2\operatorname{Cov}_{\text{\tiny T}}(\tau(X),\tau(X)\mathbbm{1}_{Z_{n}(X)=0})+\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}\right]$$
$$\displaystyle\leq\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]+2\left(\operatorname{Var}_{\text{\tiny T}}[\tau(X)]\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}\right]\right)^{1/2}+\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}\right],$$
with
$$\displaystyle\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}\right]$$
$$\displaystyle\leq\mathbb{E}\left[\tau(X)^{2}\mathbbm{1}_{Z_{n}(X)=0}\right]$$
$$\displaystyle\leq\mathbb{E}\left[\tau(X)^{2}\mathbb{E}\left[\mathbbm{1}_{Z_{n}(X)=0}\mid X\right]\right]$$
$$\displaystyle\leq\mathbb{E}\left[\tau(X)^{2}(1-p_{\text{\tiny R}}(X))^{n}\right]$$
$$\displaystyle\leq\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n}\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right].$$
Consequently,
$$\displaystyle\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]$$
$$\displaystyle\leq\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]+2\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n/2}+\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n}\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]$$
$$\displaystyle\leq\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]+4\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n/2}.$$
One can also bound the other term of (41), following the same derivations as for the semi-oracle IPSW,
$$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n+m}\right]\right]$$
$$\displaystyle\leq\mathbb{E}\left[\tau(X)^{2}\mathbbm{1}_{Z_{n}(X)=0}\right]$$
$$\displaystyle=\mathbb{E}\left[\tau(X)^{2}\mathbb{E}\left[\mathbbm{1}_{Z_{n}(X)=0}|X\right]\right]$$
$$\displaystyle=\mathbb{E}\left[\tau(X)^{2}\mathbb{P}\left[Z_{n}(X)=0|X\right]\right]$$
$$\displaystyle\leq\mathbb{E}\left[\tau(X)^{2}\right]\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n}.$$
(45)
The first bound is obtained using the fact that the variance of a random variable is bounded by the expectation of its square, together with either the law of total variance or Jensen's inequality.
Then, using the fact that $1-\frac{1}{m}\leq 1$,
$$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]\right]$$
$$\displaystyle\leq\frac{\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]}{m}+\frac{4\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]}{m}\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n/2}+\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n}$$
$$\displaystyle\leq\frac{\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]}{m}+\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]\left(\frac{4}{m}+1\right)\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n/2}.$$
(46)
Then, for the other term of the variance decomposition (40), one can use the results from Arnould et al. (2021) (see page 27) to bound it, which leads to
$$\displaystyle\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\mid\mathbf{X}_{n+m}\right]\right]$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{1}{n}V_{\text{\tiny HT}}(x)\mathbb{E}\left[\left(\hat{p}_{\text{\tiny T},m}\left(x\right)\right)^{2}\right]\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)\neq 0}}{\frac{Z_{n}(x)}{n}}\right]$$
$$\displaystyle\leq\sum_{x\in\mathds{X}}V_{\text{\tiny HT}}(x)\mathbb{E}\left[\left(\hat{p}_{\text{\tiny T},m}\left(x\right)\right)^{2}\right]\frac{2}{(n+1)p_{\text{\tiny R}}\left(x\right)}$$
Arnould et al. (2021) (p. 27)
$$\displaystyle=\sum_{x\in\mathds{X}}V_{\text{\tiny HT}}(x)\left(\frac{p_{\text{\tiny T}}\left(x\right)(1-p_{\text{\tiny T}}\left(x\right))}{m}+p_{\text{\tiny T}}\left(x\right)^{2}\right)\frac{2}{(n+1)p_{\text{\tiny R}}\left(x\right)}.$$
Finally, combining (44) with (46) and the bound above,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]$$
$$\displaystyle\leq\frac{\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]}{m}+\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]\left(\frac{4}{m}+1\right)\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n/2}$$
$$\displaystyle\qquad+\frac{2}{n+1}\left(\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}\left(X\right)}{p_{\text{\tiny R}}\left(X\right)}\right)^{2}V_{\text{\tiny HT}}(X)\right]+\frac{1}{m}\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}\left(X\right)(1-p_{\text{\tiny T}}\left(X\right))}{p_{\text{\tiny R}}\left(X\right)^{2}}V_{\text{\tiny HT}}(X)\right]\right).$$
(47)
∎
A.3.2 Proof of Corollary 2
Proof.
Asymptotic bias
The proof is exactly the same as for the semi-oracle IPSW, see Subsection A.2.2.
Asymptotic variance
We recall that the explicit expression of the variance is
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]$$
$$\displaystyle=\frac{1}{m}\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]+\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\qquad+\frac{1}{n}\sum_{x\in\mathds{X}}V_{\text{\tiny HT}}(x)\frac{p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))}{m}\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)\neq 0}}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right]+\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right].$$
Let us consider a slightly different quantity, multiplying by $\min(n,m)$:
$$\displaystyle\min(n,m)\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]$$
$$\displaystyle=\frac{\min(n,m)}{m}\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]+\min(n,m)\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\qquad+\frac{\min(n,m)}{nm}\sum_{x\in\mathds{X}}V_{\text{\tiny HT}}(x)p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)\neq 0}}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right]+\min(n,m)\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right].$$
Now, we study an asymptotic regime where $n$ and $m$ can grow toward infinity at different paces. Let $\lim\limits_{n,m\to\infty}\frac{m}{n}=\lambda\in[0,\infty]$, where $\lambda$ characterizes the regime.
Case 1:
If $\lambda\in[1,\infty]$, one can replace $\min(n,m)$ by $n$, so that
$$\displaystyle\lim\limits_{n,m\to\infty}n\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]$$
$$\displaystyle=\lim\limits_{n,m\to\infty}\left(\underbrace{\frac{n}{m}}_{\frac{1}{\lambda}}\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]+n\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]\right)$$
$$\displaystyle\qquad+\underbrace{\lim\limits_{n,m\to\infty}\left(\frac{1}{m}\sum_{x\in\mathds{X}}V_{\text{\tiny HT}}(x)p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)\neq 0}}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right]\right)}_{=0}$$
$$\displaystyle\qquad+\underbrace{\lim\limits_{n,m\to\infty}\left(n\operatorname{Var}\left[\hat{\tau}_{\pi,\text{\tiny T},n}^{*}\right]\right)}_{=V_{\text{so}}},$$
where we also used (37) and (38) from the previous proof, which state that
$$\displaystyle\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{Z_{n}(x)/n}\right]\to\frac{1}{p_{\text{\tiny R}}(x)},\quad\textrm{as}\leavevmode\nobreak\ n\to\infty.$$
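As a quick sanity check (not part of the proof), this convergence can be evaluated exactly for $Z_{n}(x)\sim\mathfrak{B}(n,p)$ by summing the binomial probability mass function in log-space; the helper name below is illustrative.

```python
import math

def expected_n_over_z(n, p):
    """Exact E[(n/Z) * 1_{Z>0}] for Z ~ Binomial(n, p), computed in log-space."""
    total = 0.0
    for k in range(1, n + 1):
        log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                   + k * math.log(p) + (n - k) * math.log(1 - p))
        total += math.exp(log_pmf) * n / k
    return total

# The expectation approaches 1/p as n grows, in line with (37) and (38).
p = 0.3
values = {n: expected_n_over_z(n, p) for n in (50, 200, 800)}
```

The gap to $1/p$ shrinks roughly like $(1-p)/(np^{2})$, so the convergence is visible already for moderate $n$.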
Recalling (45),
$$0\leq\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]\leq\tau\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{2n},$$
due to the exponential convergence, one has
$$\displaystyle\lim\limits_{n\to\infty}n\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=0,$$
and therefore,
$$\displaystyle\lim\limits_{n,m\to\infty}n\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=0.$$
(48)
Besides, since $\mathbbm{1}_{Z_{n}(X)\neq 0}\to 1$ almost surely,
$$\displaystyle\lim\limits_{n\to\infty}\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]$$
$$\displaystyle=\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right].$$
To summarize, if $\lambda\in[1,\infty]$, one can conclude that
$$\displaystyle\lim\limits_{n,m\to\infty}n\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]=\frac{\operatorname{Var}\left[\tau(X)\right]}{\lambda}+V_{so}.$$
(49)
Case 2: If $\lambda\in[0,1]$, one can replace $\min(n,m)$ by $m$, so that
$$\displaystyle\lim\limits_{n,m\to\infty}m\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]$$
$$\displaystyle=\lim\limits_{n,m\to\infty}\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]+\lim\limits_{n,m\to\infty}m\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\qquad+\lim\limits_{n,m\to\infty}\frac{1}{n}\sum_{x\in\mathds{X}}V_{\text{\tiny HT}}(x)p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)\neq 0}}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right]$$
$$\displaystyle\qquad\qquad+\lim\limits_{n,m\to\infty}\lambda\sum_{x\in\mathds{X}}V_{\text{\tiny HT}}(x)p_{\text{\tiny T}}^{2}(x)\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)\neq 0}}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right].$$
First, the third limit vanishes: by (37) and (38), $\mathbb{E}\left[\mathbbm{1}_{Z_{n}(x)\neq 0}/\hat{p}_{\text{\tiny R},n}\left(x\right)\right]$ converges to $1/p_{\text{\tiny R}}(x)$, while the $1/n$ factor tends to zero. Moreover,
$$\displaystyle\lim\limits_{n,m\to\infty}\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)\neq 0}\right]$$
$$\displaystyle=\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right].$$
As above, we have
$$\displaystyle\lim\limits_{n,m\to\infty}m\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=0,$$
because, since $m\leq n$ in this regime,
$$\displaystyle 0\leq m\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right]\leq n\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[\tau(X)\mathbbm{1}_{Z_{n}(X)=0}|\mathbf{X}_{n}\right]\right].$$
In addition, (37) and (38) ensure that
$$\displaystyle\lim\limits_{n,m\to\infty}\lambda\sum_{x\in\mathds{X}}V_{\text{\tiny HT}}(x)p_{\text{\tiny T}}^{2}(x)\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)\neq 0}}{\hat{p}_{\text{\tiny R},n}\left(x\right)}\right]$$
$$\displaystyle=\lambda V_{so}.$$
As an intermediate conclusion, if $\lambda\in[0,1]$,
$$\displaystyle\lim\limits_{n,m\to\infty}\min(n,m)\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]$$
$$\displaystyle=\operatorname{Var}\left[\tau(X)\right]+\lambda V_{so}.$$
(50)
General conclusion:
Equations (49) and (50) can be gathered into a single statement. Letting $\lim\limits_{n,m\to\infty}m/n=\lambda\in[0,\infty]$, the asymptotic variance of the estimated IPSW satisfies
$$\displaystyle\lim\limits_{n,m\to\infty}\min(n,m)\operatorname{Var}\left[\hat{\tau}_{\pi,n,m}\right]=\min(1,\lambda)\left(\frac{\operatorname{Var}\left[\tau(X)\right]}{\lambda}+V_{so}\right).$$
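The gathering can be checked arithmetically: for $\lambda\geq 1$ the factor $\min(1,\lambda)$ equals one and the expression reduces to (49), while for $\lambda\leq 1$ it reduces to (50). A minimal numeric sketch (function name and sample values hypothetical):

```python
def gathered(lam, var_tau, v_so):
    """Single-formula asymptotic variance: min(1, lambda) * (Var/lambda + V_so)."""
    return min(1.0, lam) * (var_tau / lam + v_so)

var_tau, v_so = 2.0, 5.0

# Case 1 (lambda >= 1): must match Var/lambda + V_so, cf. (49).
for lam in (1.0, 2.0, 10.0):
    assert abs(gathered(lam, var_tau, v_so) - (var_tau / lam + v_so)) < 1e-12

# Case 2 (0 < lambda <= 1): must match Var + lambda * V_so, cf. (50).
for lam in (0.25, 0.5, 1.0):
    assert abs(gathered(lam, var_tau, v_so) - (var_tau + lam * v_so)) < 1e-12
```

At $\lambda=1$ both reductions coincide, which is why the single formula is continuous in $\lambda$.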
A.3.3 Proof of Theorem 3
Proof.
For any estimate $\hat{\tau}$, we have
$$\mathbb{E}\left[\left(\hat{\tau}-\tau\right)^{2}\right]=\left(\mathbb{E}\left[\hat{\tau}\right]-\tau\right)^{2}+\operatorname{Var}\left[\hat{\tau}\right].$$
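This bias-variance decomposition can be verified exactly on any estimator with finite support; the toy distribution below is purely illustrative.

```python
# Toy estimator taking three values with known probabilities (illustrative numbers).
vals = [0.0, 1.0, 3.0]
probs = [0.2, 0.5, 0.3]
tau = 1.2  # hypothetical target

mean = sum(p * v for p, v in zip(probs, vals))
mse = sum(p * (v - tau) ** 2 for p, v in zip(probs, vals))
var = sum(p * (v - mean) ** 2 for p, v in zip(probs, vals))

# MSE = squared bias + variance.
assert abs(mse - ((mean - tau) ** 2 + var)) < 1e-12
```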
Therefore, the risk of the (estimated) IPSW estimator can be bounded using results from Subsection A.3.1 (or Proposition 2), in particular the bounds on the bias and the variance:
$$\displaystyle\mathbb{E}\left[\left(\hat{\tau}-\tau\right)^{2}\right]$$
$$\displaystyle\leq(1-\min_{x}p_{\text{\tiny R}}(x))^{2n}\mathbb{E}_{\text{\tiny T}}[\tau(X)^{2}]+\frac{\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]}{m}+\left(1-\min_{x}p_{\text{\tiny R}}(x)\right)^{n/2}\left(\frac{4\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]}{m}+\tau\right)$$
$$\displaystyle\qquad+\frac{2}{n+1}\left(\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}\left(X\right)}{p_{\text{\tiny R}}\left(X\right)}\right)^{2}V_{\text{\tiny HT}}(X)\right]+\frac{1}{m}\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}\left(X\right)(1-p_{\text{\tiny T}}\left(X\right))}{p_{\text{\tiny R}}\left(X\right)^{2}}V_{\text{\tiny HT}}(X)\right]\right)$$
$$\displaystyle\leq\frac{2V_{so}}{n+1}+\frac{\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]}{m}+\frac{2}{m(n+1)}\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}\left(X\right)(1-p_{\text{\tiny T}}\left(X\right))}{p_{\text{\tiny R}}\left(X\right)^{2}}V_{\text{\tiny HT}}(X)\right]$$
$$\displaystyle\quad+\left(1-\min_{x}p_{\text{\tiny R}}\left(x\right)\right)^{n/2}\left(1+\mathbb{E}_{\text{\tiny T}}[\tau(X)^{2}]+\frac{4\mathbb{E}_{\text{\tiny T}}\left[\tau(X)^{2}\right]}{m}\right).$$
The $L^{2}$ consistency follows by letting $n$ and $m$ tend to infinity.
∎
A.4 Estimated IPSW with estimated $\hat{\pi}_{n}(x)$
A.4.1 Proof of Proposition 3
Proof.
Bias
We start by computing the expectation of the estimator,
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{n,m}\right]$$
$$\displaystyle=\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\frac{\hat{p}_{\text{\tiny T},m}(X_{i})}{\hat{p}_{\text{\tiny R},n}(X_{i})}Y_{i}\left(\frac{A_{i}}{\hat{\pi}_{n}(X_{i})}-\frac{1-A_{i}}{1-\hat{\pi}_{n}(X_{i})}\right)\right]$$
$$\displaystyle=\frac{1}{n}\mathbb{E}\left[\mathbb{E}\left[\sum_{x\in\mathbb{X}}\frac{\hat{p}_{\text{\tiny T},m}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}Y_{i}\left(\frac{A_{i}}{\hat{\pi}_{n}(x)}-\frac{1-A_{i}}{1-\hat{\pi}_{n}(x)}\right)\mid\mathbf{X}_{n},\mathbf{A}_{n},\mathbf{Y}_{n}\right]\right]$$
$$\displaystyle=\frac{1}{n}\mathbb{E}\left[\sum_{x\in\mathbb{X}}\frac{\mathbb{E}\left[\hat{p}_{\text{\tiny T},m}(x)\mid\mathbf{X}_{n},\mathbf{A}_{n},\mathbf{Y}_{n}\right]}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}Y_{i}\left(\frac{A_{i}}{\hat{\pi}_{n}(x)}-\frac{1-A_{i}}{1-\hat{\pi}_{n}(x)}\right)\right]$$
$$\displaystyle=\frac{1}{n}\mathbb{E}\left[\sum_{x\in\mathbb{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}Y_{i}\left(\frac{A_{i}}{\hat{\pi}_{n}(x)}-\frac{1-A_{i}}{1-\hat{\pi}_{n}(x)}\right)\right].$$
This derivation is valid because $\hat{p}_{\text{\tiny T},m}$ is estimated on a data set independent of the trial.
Using SUTVA (Assumption 2), one can replace observed outcomes by potential outcomes, and
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{n,m}\right]$$
$$\displaystyle=\frac{1}{n}\mathbb{E}\left[\sum_{x\in\mathbb{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}\left(\frac{Y_{i}^{(1)}A_{i}}{\hat{\pi}_{n}(x)}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\hat{\pi}_{n}(x)}\right)\right]$$
$$\displaystyle=\frac{1}{n}\mathbb{E}\left[\sum_{x\in\mathbb{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}\mathbb{E}\left[\left(\frac{Y_{i}^{(1)}A_{i}}{\hat{\pi}_{n}(x)}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\hat{\pi}_{n}(x)}\right)|\mathbf{X}_{n},\mathbf{Y}_{n}^{(1)},\mathbf{Y}_{n}^{(0)}\right]\right].$$
Let us consider, for any fixed $x\in\mathbb{X}$,
$$\displaystyle\mathbb{E}\left[\frac{Y_{i}^{(1)}A_{i}}{\hat{\pi}_{n}(x)}|\mathbf{X}_{n},\mathbf{Y}_{n}^{(1)},\mathbf{Y}_{n}^{(0)}\right]=Y_{i}^{(1)}\mathbb{E}\left[\frac{A_{i}}{\hat{\pi}_{n}(x)}|\mathbf{X}_{n}\right].$$
Up to reordering the $X_{i}$’s, we have
$$\displaystyle\mathbb{E}\left[\frac{A_{i}}{\hat{\pi}_{n}(x)}|\mathbf{X}_{n}\right]$$
$$\displaystyle=\mathbb{E}\left[\frac{A_{i}}{\frac{\sum_{j=1}^{Z_{n}(x)}A_{j}}{Z_{n}(x)}}|\mathbf{X}_{n}\right]$$
$$\displaystyle=Z_{n}(x)\mathbb{E}\left[\frac{A_{i}}{\sum_{j=1}^{Z_{n}(x)}A_{j}}|\mathbf{X}_{n}\right]$$
$$\displaystyle=Z_{n}(x)\pi(x)\mathbb{E}\left[\frac{1}{1+\sum_{j=2}^{Z_{n}(x)}A_{j}}|\mathbf{X}_{n}\right].$$
The last line uses the law of total probability, conditioning on $A_{i}$.
According to Lemma 11 $(i)$ in Biau (2012), for a binomial random variable $B_{n}\sim\mathfrak{B}(n,p)$,
$$\displaystyle\mathbb{E}\left[\frac{1}{1+B_{n}}\right]=\frac{1}{(n+1)p}-\frac{(1-p)^{n+1}}{(n+1)p}.$$
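This identity from Biau (2012) can be verified exactly by summing the binomial probability mass function (a quick check, not part of the proof; helper names illustrative):

```python
import math

def lhs(n, p):
    """Exact E[1/(1+B)] for B ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) / (1 + k)
               for k in range(n + 1))

def rhs(n, p):
    """Closed form from Lemma 11 (i) in Biau (2012)."""
    return (1 - (1 - p)**(n + 1)) / ((n + 1) * p)

for n in (1, 5, 20, 100):
    for p in (0.1, 0.5, 0.9):
        assert abs(lhs(n, p) - rhs(n, p)) < 1e-12
```

The identity follows from $\binom{n}{k}/(k+1)=\binom{n+1}{k+1}/(n+1)$, which turns the sum into a shifted binomial expansion.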
Since, conditional on $\mathbf{X}_{n}$, $\sum_{j=2}^{Z_{n}(x)}A_{j}$ is distributed as $\mathfrak{B}(Z_{n}(x)-1,\pi(x))$,
$$\displaystyle\mathbb{E}\left[\frac{A_{i}}{\hat{\pi}_{n}(x)}|\mathbf{X}_{n}\right]$$
$$\displaystyle=Z_{n}(x)\pi(x)\left(\frac{1}{Z_{n}(x)\pi(x)}-\frac{(1-\pi(x))^{Z_{n}(x)}}{Z_{n}(x)\pi(x)}\right)$$
$$\displaystyle=1-(1-\pi(x))^{Z_{n}(x)}.$$
Similarly,
$$\displaystyle\mathbb{E}\left[\frac{(1-A_{i})}{1-\hat{\pi}_{n}(x)}|\mathbf{X}_{n}\right]=1-\pi(x)^{Z_{n}(x)}.$$
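The two conditional expectations just derived, $1-(1-\pi(x))^{Z_{n}(x)}$ and $1-\pi(x)^{Z_{n}(x)}$, can be confirmed by exact enumeration over all treatment assignments within a stratum of size $Z$ (the function name is illustrative):

```python
import itertools

def check(Z, pi):
    """Exact E[A_1/pi_hat] and E[(1-A_1)/(1-pi_hat)] with pi_hat = mean of Z iid Bern(pi)."""
    e1 = e0 = 0.0
    for a in itertools.product([0, 1], repeat=Z):
        prob = 1.0
        for aj in a:
            prob *= pi if aj else (1 - pi)
        pi_hat = sum(a) / Z
        # Each term is zero whenever its numerator is zero, so 0/0 never occurs.
        if a[0] == 1:
            e1 += prob / pi_hat
        if a[0] == 0:
            e0 += prob / (1 - pi_hat)
    return e1, e0

for Z in (1, 2, 5, 8):
    for pi in (0.3, 0.5, 0.7):
        e1, e0 = check(Z, pi)
        assert abs(e1 - (1 - (1 - pi)**Z)) < 1e-12
        assert abs(e0 - (1 - pi**Z)) < 1e-12
```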
Consequently,
$$\displaystyle\mathbb{E}\left[\left(\frac{Y_{i}^{(1)}A_{i}}{\hat{\pi}_{n}(x)}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\hat{\pi}_{n}(x)}\right)|\mathbf{X}_{n},\mathbf{Y}_{n}^{(1)},\mathbf{Y}_{n}^{(0)}\right]$$
$$\displaystyle=Y_{i}^{(1)}\left(1-(1-\pi(x))^{Z_{n}(x)}\right)-Y_{i}^{(0)}\left(1-\pi(x)^{Z_{n}(x)}\right)$$
$$\displaystyle=\left(Y_{i}^{(1)}-Y_{i}^{(0)}\right)-Y_{i}^{(1)}(1-\pi(x))^{Z_{n}(x)}+Y_{i}^{(0)}\pi(x)^{Z_{n}(x)}.$$
Therefore,
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{n,m}\right]$$
$$\displaystyle=\frac{1}{n}\mathbb{E}\left[\sum_{x\in\mathbb{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}(Y_{i}^{(1)}-Y_{i}^{(0)})\right]$$
(51)
$$\displaystyle\qquad+\frac{1}{n}\mathbb{E}\left[\sum_{x\in\mathbb{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}Y_{i}^{(0)}\pi(x)^{Z_{n}(x)}\right]$$
(52)
$$\displaystyle\qquad\qquad-\frac{1}{n}\mathbb{E}\left[\sum_{x\in\mathbb{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}Y_{i}^{(1)}(1-\pi(x))^{Z_{n}(x)}\right].$$
(53)
On the one hand, considering (51),
$$\displaystyle\frac{1}{n}\mathbb{E}\left[\sum_{x\in\mathbb{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}(Y_{i}^{(1)}-Y_{i}^{(0)})\right]$$
$$\displaystyle=\frac{1}{n}\sum_{x\in\mathbb{X}}\mathbb{E}\left[\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}(Y_{i}^{(1)}-Y_{i}^{(0)})\right]$$
$$\displaystyle=\sum_{x\in\mathbb{X}}\mathbb{E}\left[p_{\text{\tiny T}}(x)\mathds{1}_{Z_{n}(x)>0}\tau(x)\right],$$
corresponding to the bias term already obtained for the semi-oracle and the estimated IPSW. Indeed, we recall from the semi-oracle IPSW proof that
$$\displaystyle\sum_{x\in\mathbb{X}}\mathbb{E}\left[p_{\text{\tiny T}}(x)\mathds{1}_{Z_{n}(x)>0}\tau(x)\right]$$
$$\displaystyle=\sum_{x\in\mathbb{X}}p_{\text{\tiny T}}(x)(1-(1-p_{\text{\tiny R}}(x))^{n})\tau(x).$$
On the other hand, considering (52),
$$\displaystyle\frac{1}{n}\mathbb{E}\left[\sum_{x\in\mathbb{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}Y_{i}^{(0)}\pi(x)^{Z_{n}(x)}\right]$$
$$\displaystyle=\frac{1}{n}\mathbb{E}\left[\sum_{x\in\mathbb{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}=x\right]\pi(x)^{Z_{n}(x)}\right]$$
$$\displaystyle=\mathbb{E}\left[\sum_{x\in\mathbb{X}}p_{\text{\tiny T}}(x)\mathds{1}_{Z_{n}(x)>0}\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}=x\right]\pi(x)^{Z_{n}(x)}\right]$$
$$\displaystyle=\sum_{x\in\mathbb{X}}p_{\text{\tiny T}}(x)\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}=x\right]\mathbb{E}\left[\mathds{1}_{Z_{n}(x)>0}\pi(x)^{Z_{n}(x)}\right]$$
$$\displaystyle=\sum_{x\in\mathbb{X}}p_{\text{\tiny T}}(x)\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}=x\right]\mathbb{E}\left[\left(1-\mathds{1}_{Z_{n}(x)=0}\right)\pi(x)^{Z_{n}(x)}\right]$$
$$\displaystyle=\sum_{x\in\mathbb{X}}p_{\text{\tiny T}}(x)\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}=x\right]\left(\mathbb{E}\left[\pi(x)^{Z_{n}(x)}\right]-\mathbb{E}\left[\mathds{1}_{Z_{n}(x)=0}\right]\right).$$
Now, note that $\mathbb{P}\left[Z_{n}(x)=0\right]=(1-p_{\text{\tiny R}}(x))^{n}$ and
$$\displaystyle\mathbb{E}\left[\pi(x)^{Z_{n}(x)}\right]$$
$$\displaystyle=\prod_{j=1}^{n}\mathbb{E}\left[\pi(x)^{\mathds{1}_{X_{j}=x}}\right]$$
$$\displaystyle=\left(\pi(x)p_{\text{\tiny R}}(x)+(1-p_{\text{\tiny R}}(x))\right)^{n}$$
$$\displaystyle=\left(1-p_{\text{\tiny R}}(x)\left(1-\pi(x)\right)\right)^{n}.$$
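The closed form for $\mathbb{E}\left[\pi(x)^{Z_{n}(x)}\right]$ amounts to the binomial identity $\sum_{k}\binom{n}{k}(\pi p_{\text{\tiny R}})^{k}(1-p_{\text{\tiny R}})^{n-k}=(1-p_{\text{\tiny R}}(1-\pi))^{n}$, which can be checked numerically (helper names illustrative):

```python
import math

def e_pi_pow_z(n, p_r, pi):
    """Exact E[pi^Z] for Z ~ Binomial(n, p_r)."""
    return sum(math.comb(n, k) * p_r**k * (1 - p_r)**(n - k) * pi**k
               for k in range(n + 1))

def closed_form(n, p_r, pi):
    return (1 - p_r * (1 - pi))**n

for n in (1, 4, 30):
    for p_r in (0.2, 0.6):
        for pi in (0.3, 0.5, 0.8):
            assert abs(e_pi_pow_z(n, p_r, pi) - closed_form(n, p_r, pi)) < 1e-12
```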
Therefore,
$$\displaystyle\frac{1}{n}\mathbb{E}\left[\sum_{x\in\mathbb{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}Y_{i}^{(0)}\pi(x)^{Z_{n}(x)}\right]$$
$$\displaystyle=\sum_{x\in\mathbb{X}}p_{\text{\tiny T}}(x)\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}=x\right]\big{(}\left(1-p_{\text{\tiny R}}(x)\left(1-\pi(x)\right)\right)^{n}-\left(1-p_{\text{\tiny R}}(x)\right)^{n}\big{)}.$$
Similarly, considering (53),
$$\displaystyle-\frac{1}{n}\mathbb{E}\left[\sum_{x\in\mathbb{X}}\frac{p_{\text{\tiny T}}(x)}{\hat{p}_{\text{\tiny R},n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}Y_{i}^{(1)}(1-\pi(x))^{Z_{n}(x)}\right]$$
$$\displaystyle=-\sum_{x\in\mathbb{X}}p_{\text{\tiny T}}(x)\mathbb{E}\left[Y_{i}^{(1)}\mid X_{i}=x\right]\big{(}\left(1-p_{\text{\tiny R}}(x)\pi(x)\right)^{n}-(1-p_{\text{\tiny R}}(x))^{n}\big{)}.$$
Finally, the bias of the estimated IPSW with estimated treatment proportion is given by
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{n,m}\right]-\tau$$
$$\displaystyle=-\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\tau(x)\left(1-p_{\text{\tiny R}}(x)\right)^{n}$$
$$\displaystyle\qquad+\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\left(\mathbb{E}\left[Y_{i}^{(1)}\mid X_{i}=x\right]-\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}=x\right]\right)(1-p_{\text{\tiny R}}(x))^{n}$$
$$\displaystyle\qquad\qquad+\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}=x\right]\left(1-p_{\text{\tiny R}}(x)\left(1-\pi(x)\right)\right)^{n}$$
$$\displaystyle\qquad\qquad\qquad-\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\mathbb{E}\left[Y_{i}^{(1)}\mid X_{i}=x\right]\left(1-p_{\text{\tiny R}}(x)\pi(x)\right)^{n},$$
Since $\tau(x)=\mathbb{E}\left[Y_{i}^{(1)}\mid X_{i}=x\right]-\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}=x\right]$, the first two sums cancel, so that
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{n,m}\right]-\tau$$
$$\displaystyle=\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}=x\right]\left(1-p_{\text{\tiny R}}(x)\left(1-\pi(x)\right)\right)^{n}$$
$$\displaystyle\quad-\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\mathbb{E}\left[Y_{i}^{(1)}\mid X_{i}=x\right]\left(1-p_{\text{\tiny R}}(x)\pi(x)\right)^{n}\,.$$
Variance
As above, we have
$$\operatorname{Var}\left[\hat{\tau}_{n,m}\right]=\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{n,m}\mid\mathbf{X}_{m+n}\right]\right]+\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{n,m}\mid\mathbf{X}_{m+n}\right]\right].$$
(54)
Let us examine the first term. We have
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{n,m}\mid\mathbf{X}_{m+n}\right]$$
$$\displaystyle=\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\frac{\hat{p}_{\text{\tiny T},m}(X_{i})}{\hat{p}_{\text{\tiny R},n}(X_{i})}Y_{i}\left(\frac{A_{i}}{\hat{\pi}_{n}(X_{i})}-\frac{1-A_{i}}{1-\hat{\pi}_{n}(X_{i})}\right)\mid\mathbf{X}_{m+n}\right]$$
$$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}\frac{\hat{p}_{\text{\tiny T},m}(X_{i})}{\hat{p}_{\text{\tiny R},n}(X_{i})}\mathbb{E}\left[\frac{Y_{i}^{(1)}A_{i}}{\hat{\pi}_{n}(X_{i})}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\hat{\pi}_{n}(X_{i})}\mid\mathbf{X}_{m+n}\right]$$
$$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}\frac{\hat{p}_{\text{\tiny T},m}(X_{i})}{\hat{p}_{\text{\tiny R},n}(X_{i})}\mathbb{E}\left[\mathbb{E}\left[\frac{Y_{i}^{(1)}A_{i}}{\hat{\pi}_{n}(X_{i})}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\hat{\pi}_{n}(X_{i})}\mid\mathbf{X}_{m+n},\mathbf{Y}_{n}^{(1)},\mathbf{Y}_{n}^{(0)}\right]\mid\mathbf{X}_{m+n}\right].$$
A similar computation as the one used in the derivation of the bias above shows that
$$\displaystyle\mathbb{E}\left[\frac{Y_{i}^{(1)}A_{i}}{\hat{\pi}_{n}(X_{i})}-\frac{Y_{i}^{(0)}(1-A_{i})}{1-\hat{\pi}_{n}(X_{i})}\mid\mathbf{X}_{m+n},\mathbf{Y}_{n}^{(1)},\mathbf{Y}_{n}^{(0)}\right]=\left(Y_{i}^{(1)}-Y_{i}^{(0)}\right)-Y_{i}^{(1)}(1-\pi(X_{i}))^{Z_{n}(X_{i})}+Y_{i}^{(0)}\pi(X_{i})^{Z_{n}(X_{i})},$$
which leads to
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{n,m}\mid\mathbf{X}_{m+n}\right]$$
$$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}\frac{\hat{p}_{\text{\tiny T},m}(X_{i})}{\hat{p}_{\text{\tiny R},n}(X_{i})}\mathbb{E}\left[\left(Y_{i}^{(1)}-Y_{i}^{(0)}\right)-Y_{i}^{(1)}(1-\pi(X_{i}))^{Z_{n}(X_{i})}+Y_{i}^{(0)}\pi(X_{i})^{Z_{n}(X_{i})}\mid\mathbf{X}_{m+n}\right]$$
$$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}\frac{\hat{p}_{\text{\tiny T},m}(X_{i})}{\hat{p}_{\text{\tiny R},n}(X_{i})}\left(\tau(X_{i})-\mathbb{E}\left[Y_{i}^{(1)}\mid X_{i}\right](1-\pi(X_{i}))^{Z_{n}(X_{i})}+\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}\right]\pi(X_{i})^{Z_{n}(X_{i})}\right).$$
Rewriting the previous sum yields
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{n,m}\mid\mathbf{X}_{m+n}\right]$$
$$\displaystyle=\frac{1}{n}\sum_{x\in\mathbb{X}}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}\frac{\hat{p}_{\text{\tiny T},m}(X_{i})}{\hat{p}_{\text{\tiny R},n}(X_{i})}\left(\tau(X_{i})-\mathbb{E}\left[Y_{i}^{(1)}\mid X_{i}\right](1-\pi(X_{i}))^{Z_{n}(X_{i})}+\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}\right]\pi(X_{i})^{Z_{n}(X_{i})}\right)$$
$$\displaystyle=\sum_{x\in\mathbb{X}}\hat{p}_{\text{\tiny T},m}(x)\left(\tau(x)-\mathbb{E}\left[Y_{i}^{(1)}\mid X_{i}=x\right](1-\pi(x))^{Z_{n}(x)}+\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}=x\right]\pi(x)^{Z_{n}(x)}\right)$$
$$\displaystyle=\frac{1}{m}\sum_{i=n+1}^{n+m}U_{n}(X_{i}),$$
where
$$\displaystyle U_{n}(X_{i}):=\left(\tau(X_{i})-\mathbb{E}\left[Y_{i}^{(1)}\mid X_{i}\right](1-\pi(X_{i}))^{Z_{n}(X_{i})}+\mathbb{E}\left[Y_{i}^{(0)}\mid X_{i}\right]\pi(X_{i})^{Z_{n}(X_{i})}\right).$$
By the law of total variance,
$$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{n,m}\mid\mathbf{X}_{m+n}\right]\right]$$
$$\displaystyle=$$
$$\displaystyle\operatorname{Var}\left[\frac{1}{m}\sum_{i=n+1}^{n+m}U_{n}(X_{i})\right]$$
$$\displaystyle=$$
$$\displaystyle\mathbb{E}\left[\operatorname{Var}\left[\frac{1}{m}\sum_{i=n+1}^{n+m}U_{n}(X_{i})|\mathbf{X}_{n}\right]\right]+\operatorname{Var}\left[\mathbb{E}\left[\frac{1}{m}\sum_{i=n+1}^{n+m}U_{n}(X_{i})|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{m}\mathbb{E}\left[\operatorname{Var}\left[U_{n}(X)|\mathbf{X}_{n}\right]\right]+\operatorname{Var}\left[\mathbb{E}\left[U_{n}(X)|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{m}\operatorname{Var}\left[U_{n}(X)\right]+\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[U_{n}(X)|\mathbf{X}_{n}\right]\right],$$
where the last line comes from the law of total variance applied to $\operatorname{Var}\left[U_{n}(X)\right]$. Since
$$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[U_{n}(X)|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=$$
$$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[\left(\tau(X)-\mathbb{E}\left[Y^{(1)}\mid X\right](1-\pi(X))^{Z_{n}(X)}+\mathbb{E}\left[Y^{(0)}\mid X\right]\pi(X)^{Z_{n}(X)}\right)|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle=$$
$$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[\mathbb{E}\left[Y^{(0)}\mid X\right]\pi(X)^{Z_{n}(X)}-\mathbb{E}\left[Y^{(1)}\mid X\right](1-\pi(X))^{Z_{n}(X)}|\mathbf{X}_{n}\right]\right],$$
since $\mathbb{E}\left[\tau(X)|\mathbf{X}_{n}\right]=\mathbb{E}\left[\tau(X)\right]$ is deterministic, the only source of randomness comes from $Z_{n}(X)$, and
we have
$$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{n,m}\mid\mathbf{X}_{m+n}\right]\right]$$
$$\displaystyle=\frac{1}{m}\operatorname{Var}\left[\tau(X)-C_{n}(X)\right]+\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[C_{n}(X)|\mathbf{X}_{n}\right]\right],$$
(55)
where
$$\displaystyle C_{n}(X)=\mathbb{E}\left[Y^{(1)}\mid X\right](1-\pi(X))^{Z_{n}(X)}-\mathbb{E}\left[Y^{(0)}\mid X\right]\pi(X)^{Z_{n}(X)}.$$
Regarding the second term of (54), we first rewrite $\hat{\tau}_{n,m}$ as
$$\displaystyle\hat{\tau}_{n,m}$$
$$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}\frac{\hat{p}_{\text{\tiny T},m}(X_{i})}{\hat{p}_{\text{\tiny R},n}(X_{i})}\left(\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}_{n}(X_{i})}-\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}_{n}(X_{i})}\right)$$
$$\displaystyle=\sum_{i=1}^{n}\frac{\hat{p}_{\text{\tiny T},m}(X_{i})}{Z_{n}(X_{i})}\left(\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}_{n}(X_{i})}-\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}_{n}(X_{i})}\right)$$
$$\displaystyle=\sum_{x\in\mathbb{X}}\frac{\hat{p}_{\text{\tiny T},m}(x)}{Z_{n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}\left(\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}_{n}(x)}-\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}_{n}(x)}\right).$$
Hence,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{n,m}\mid\mathbf{X}_{n+m}\right]$$
$$\displaystyle=$$
$$\displaystyle\operatorname{Var}\left[\sum_{x\in\mathbb{X}}\frac{\hat{p}_{\text{\tiny T},m}(x)}{Z_{n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}\left(\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}_{n}(x)}-\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}_{n}(x)}\right)\mid\mathbf{X}_{n+m}\right]$$
$$\displaystyle=$$
$$\displaystyle\sum_{x\in\mathbb{X}}(\hat{p}_{\text{\tiny T},m}(x))^{2}\operatorname{Var}\left[\frac{1}{Z_{n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}\left(\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}_{n}(x)}-\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}_{n}(x)}\right)\mid\mathbf{X}_{n+m}\right]$$
$$\displaystyle+\sum_{x,y\in\mathbb{X},x\neq y}\operatorname{Cov}\left[\frac{\hat{p}_{\text{\tiny T},m}(x)}{Z_{n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}\left(\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}_{n}(x)}-\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}_{n}(x)}\right),\frac{\hat{p}_{\text{\tiny T},m}(y)}{Z_{n}(y)}\sum_{j=1}^{n}\mathds{1}_{X_{j}=y}\left(\frac{A_{j}Y_{j}^{(1)}}{\hat{\pi}_{n}(y)}-\frac{(1-A_{j})Y_{j}^{(0)}}{1-\hat{\pi}_{n}(y)}\right)\mid\mathbf{X}_{n+m}\right].$$
Note that the term
$$\displaystyle\operatorname{Var}\left[\frac{1}{Z_{n}(x)}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}\left(\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}_{n}(x)}-\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}_{n}(x)}\right)\mid\mathbf{X}_{n+m}\right]$$
corresponds to the variance of the difference-in-means estimator on the stratum $X=x$ (where $n$ is replaced by $Z_{n}(x)$) and therefore equals
$$\displaystyle V_{\text{\tiny DM},n}(x)\mathds{1}_{Z_{n}(x)>0}/Z_{n}(x),$$
where, according to Lemma 2,
$$\displaystyle V_{\text{\tiny DM}}(x)$$
$$\displaystyle=\left(\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\right]\operatorname{Var}\left[Y_{i}^{(1)}\right]+\mathbb{E}\left[\frac{\mathds{1}_{(1-\hat{\pi})>0}}{1-\hat{\pi}}\right]\operatorname{Var}\left[Y_{i}^{(0)}\right]\right)+\mathcal{O}\left(Z_{n}(x)\max(\pi,1-\pi)^{n}\right).$$
Consequently,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{n,m}\mid\mathbf{X}_{n+m}\right]$$
$$\displaystyle=$$
$$\displaystyle\sum_{x\in\mathbb{X}}\frac{(\hat{p}_{\text{\tiny T},m}(x))^{2}V_{\text{\tiny DM},n}(x)\mathds{1}_{Z_{n}(x)>0}}{Z_{n}(x)}$$
$$\displaystyle+\sum_{x,y\in\mathbb{X},x\neq y}\frac{\hat{p}_{\text{\tiny T},m}(x)}{Z_{n}(x)}\frac{\hat{p}_{\text{\tiny T},m}(y)}{Z_{n}(y)}\sum_{i,j}\mathds{1}_{X_{i}=x}\mathds{1}_{X_{j}=y}\operatorname{Cov}\left[\left(\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}_{n}(x)}-\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}_{n}(x)}\right),\left(\frac{A_{j}Y_{j}^{(1)}}{\hat{\pi}_{n}(y)}-\frac{(1-A_{j})Y_{j}^{(0)}}{1-\hat{\pi}_{n}(y)}\right)\mid\mathbf{X}_{n+m}\right].$$
Note that for $x\neq y$, $\hat{\pi}_{n}(x)\perp\!\!\!\perp\hat{\pi}_{n}(y)$. Consequently, for $i\neq j$ with $X_{i}=x$ and $X_{j}=y$,
$$\displaystyle\left(\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}_{n}(x)}-\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}_{n}(x)}\right)\perp\!\!\!\perp\left(\frac{A_{j}Y_{j}^{(1)}}{\hat{\pi}_{n}(y)}-\frac{(1-A_{j})Y_{j}^{(0)}}{1-\hat{\pi}_{n}(y)}\right),$$
so that all the covariance terms vanish and
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{n,m}\mid\mathbf{X}_{n+m}\right]=\sum_{x\in\mathbb{X}}\frac{(\hat{p}_{\text{\tiny T},m}(x))^{2}V_{\text{\tiny DM},n}(x)\mathds{1}_{Z_{n}(x)>0}}{Z_{n}(x)},$$
and, taking the expectation with respect to $\mathbf{X}_{n+m}$, we have
$$\displaystyle\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{n,m}\mid\mathbf{X}_{n+m}\right]\right]$$
$$\displaystyle=\sum_{x\in\mathbb{X}}\mathbb{E}\left[(\hat{p}_{\text{\tiny T},m}(x))^{2}\right]\mathbb{E}\left[\frac{\mathds{1}_{Z_{n}(x)>0}}{Z_{n}(x)}\right]V_{\text{\tiny DM},n}(x)$$
$$\displaystyle=\sum_{x\in\mathbb{X}}\left(\frac{p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))}{m}+p_{\text{\tiny T}}^{2}(x)\right)\mathbb{E}\left[\frac{\mathds{1}_{Z_{n}(x)>0}}{Z_{n}(x)}\right]V_{\text{\tiny DM},n}(x).$$
(56)
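The second moment $\mathbb{E}\left[(\hat{p}_{\text{\tiny T},m}(x))^{2}\right]=p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))/m+p_{\text{\tiny T}}^{2}(x)$ used above follows from the variance of a binomial proportion; a quick exact check (helper name illustrative):

```python
import math

def e_phat_sq(m, p):
    """Exact E[(B/m)^2] for B ~ Binomial(m, p): equals Var[B/m] + (E[B/m])^2."""
    return sum(math.comb(m, k) * p**k * (1 - p)**(m - k) * (k / m)**2
               for k in range(m + 1))

for m in (1, 5, 40):
    for p in (0.2, 0.5, 0.9):
        assert abs(e_phat_sq(m, p) - (p * (1 - p) / m + p**2)) < 1e-12
```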
Gathering (55) and (56), we finally obtain,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{n,m}\right]$$
$$\displaystyle=\frac{1}{m}\operatorname{Var}\left[\tau(X)-C_{n}(X)\right]+\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[C_{n}(X)|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\quad+\sum_{x\in\mathbb{X}}\left(\frac{p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))}{m}+p_{\text{\tiny T}}^{2}(x)\right)\mathbb{E}\left[\frac{\mathds{1}_{Z_{n}(x)>0}}{Z_{n}(x)}\right]V_{\text{\tiny DM},n}(x),$$
where
$$\displaystyle C_{n}(X)=\mathbb{E}\left[Y^{(1)}\mid X\right](1-\pi(X))^{Z_{n}(X)}-\mathbb{E}\left[Y^{(0)}\mid X\right]\pi(X)^{Z_{n}(X)}.$$
Note that, by Jensen’s inequality,
$$\displaystyle\quad\operatorname{Var}\left[\mathbb{E}\left[C_{n}(X)|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\leq\mathbb{E}\left[C_{n}(X)^{2}\right]$$
$$\displaystyle\leq 2\mathbb{E}\left[\mathbb{E}\left[Y^{(1)}\mid X\right]^{2}(1-\pi(X))^{2Z_{n}(X)}\right]+2\mathbb{E}\left[\mathbb{E}\left[Y^{(0)}\mid X\right]^{2}\pi(X)^{2Z_{n}(X)}\right]$$
$$\displaystyle\leq 2\mathbb{E}\left[\mathbb{E}\left[Y^{(1)}\mid X\right]^{2}\mathbb{E}\left[(1-\pi(X))^{2Z_{n}(X)}\mid X\right]\right]+2\mathbb{E}\left[\mathbb{E}\left[Y^{(0)}\mid X\right]^{2}\mathbb{E}\left[\pi(X)^{2Z_{n}(X)}\mid X\right]\right]$$
$$\displaystyle\leq 2\mathbb{E}\left[\mathbb{E}\left[Y^{(1)}\mid X\right]^{2}\left(1-\left(1-(1-\pi(X))^{2}\right)p_{\text{\tiny R}}(X)\right)^{n}\right]+2\mathbb{E}\left[\mathbb{E}\left[Y^{(0)}\mid X\right]^{2}\left(1-\left(1-\pi(X)^{2}\right)p_{\text{\tiny R}}(X)\right)^{n}\right]$$
$$\displaystyle\leq 2\left(1-\min_{x}\left((1-\tilde{\pi}(x)^{2})p_{\text{\tiny R}}(x)\right)\right)^{n}\mathbb{E}\left[(Y^{(1)})^{2}+(Y^{(0)})^{2}\right],$$
where $\tilde{\pi}(x)=\max(\pi(x),1-\pi(x))$, and we have used the fact that
$$\displaystyle\mathbb{E}\left[\pi(X)^{2Z_{n}(X)}\mid X\right]$$
$$\displaystyle=\left(\pi(X)^{2}p_{\text{\tiny R}}(X)+1-p_{\text{\tiny R}}(X)\right)^{n}$$
$$\displaystyle=\left(1-\left(1-\pi(X)^{2}\right)p_{\text{\tiny R}}(X)\right)^{n}.$$
Besides, we have
$$\displaystyle\operatorname{Var}\left[\tau(X)-C_{n}(X)\right]\leq\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]+2\left(\operatorname{Var}_{\text{\tiny T}}[\tau(X)]\operatorname{Var}_{\text{\tiny T}}\left[C_{n}(X)\right]\right)^{1/2}+\operatorname{Var}_{\text{\tiny T}}\left[C_{n}(X)\right],$$
where
$$\displaystyle\operatorname{Var}_{\text{\tiny T}}\left[C_{n}(X)\right]$$
$$\displaystyle\leq\mathbb{E}\left[C_{n}(X)^{2}\right]$$
$$\displaystyle\leq 2\left(1-\min_{x}\left((1-\tilde{\pi}(x)^{2})p_{\text{\tiny R}}(x)\right)\right)^{n}\mathbb{E}\left[(Y^{(1)})^{2}+(Y^{(0)})^{2}\right].$$
Consequently,
$$\displaystyle\operatorname{Var}\left[\tau(X)-C_{n}(X)\right]$$
$$\displaystyle\leq\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]+4\left(1-\min_{x}\left((1-\tilde{\pi}(x)^{2})p_{\text{\tiny R}}(x)\right)\right)^{n/2}\mathbb{E}\left[(Y^{(1)})^{2}+(Y^{(0)})^{2}\right]$$
$$\displaystyle\quad+2\left(1-\min_{x}\left((1-\tilde{\pi}(x)^{2})p_{\text{\tiny R}}(x)\right)\right)^{n}\mathbb{E}\left[(Y^{(1)})^{2}+(Y^{(0)})^{2}\right]$$
$$\displaystyle\leq\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)\right]+6\left(1-\min_{x}\left((1-\tilde{\pi}(x)^{2})p_{\text{\tiny R}}(x)\right)\right)^{n/2}\mathbb{E}\left[(Y^{(1)})^{2}+(Y^{(0)})^{2}\right].$$
Finally,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{n,m}\right]$$
$$\displaystyle\leq\frac{2}{n+1}\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}\left(X\right)}{p_{\text{\tiny R}}\left(X\right)}\right)^{2}V_{\text{\tiny DM}}(X)\right]+\frac{\operatorname{Var}\left[\tau(X)\right]}{m}+\frac{2}{(n+1)m}\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}\left(X\right)(1-p_{\text{\tiny T}}\left(X\right))}{p_{\text{\tiny R}}\left(X\right)^{2}}V_{\text{\tiny DM},n}(X)\right]$$
$$\displaystyle\quad+2\left(1+\frac{3}{m}\right)\left(1-\min_{x}\left((1-\tilde{\pi}(x)^{2})p_{\text{\tiny R}}(x)\right)\right)^{n/2}\mathbb{E}\left[(Y^{(1)})^{2}+(Y^{(0)})^{2}\right].$$
∎
A.4.2 Proof of Corollary 3
Proof.
The proof follows exactly the same structure as the proof of Corollary 2.
We recall the explicit expression of the variance of $\hat{\tau}_{n,m}$,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{n,m}\right]$$
$$\displaystyle=\frac{1}{m}\operatorname{Var}\left[\tau(X)-C_{n}(X)\right]+\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[C_{n}(X)|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\quad+\sum_{x\in\mathbb{X}}\left(\frac{p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))}{m}+p_{\text{\tiny T}}^{2}(x)\right)\mathbb{E}\left[\frac{\mathds{1}_{Z_{n}(x)>0}}{Z_{n}(x)}\right]V_{\text{\tiny DM}}(x),$$
where
$$\displaystyle C_{n}(X)=\mathbb{E}\left[Y^{(1)}\mid X\right](1-\pi(X))^{Z_{n}(X)}-\mathbb{E}\left[Y^{(0)}\mid X\right]\pi(X)^{Z_{n}(X)}.$$
Recall that using (37) and (38), one has
$$\displaystyle\lim_{n\rightarrow\infty}\mathbb{E}\left[\frac{\mathbbm{1}_{Z_{n}(x)>0}}{Z_{n}(x)/n}\right]=\frac{1}{p_{\text{\tiny R}}(x)},$$
and we also have
$$\lim_{n\rightarrow\infty}\operatorname{Var}_{\text{\tiny T}}\left[\tau(X)-C_{n}(X)\right]=\operatorname{Var}_{\text{\tiny T}}[\tau(X)]=\operatorname{Var}[\tau(X)].$$
Finally, note that the term $\operatorname{Var}\left[\mathbb{E}\left[C_{n}(X)|\mathbf{X}_{n}\right]\right]$ can be bounded by a term proportional to $\left(1-\min_{x}\left((1-\tilde{\pi}(x)^{2})p_{\text{\tiny R}}(x)\right)\right)^{n}$, so that its convergence toward $0$ is at an exponential pace with $n$.
Multiplying the explicit variance by $\min(n,m)$, one has
$$\displaystyle\min(n,m)\operatorname{Var}\left[\hat{\tau}_{n,m}\right]$$
$$\displaystyle=\frac{\min(n,m)}{m}\operatorname{Var}\left[\tau(X)-C_{n}(X)\right]+\min(n,m)\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[C_{n}(X)|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\quad+\frac{\min(n,m)}{n}\sum_{x\in\mathbb{X}}\left(\frac{p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))}{m}+p_{\text{\tiny T}}^{2}(x)\right)\mathbb{E}\left[\frac{\mathds{1}_{Z_{n}(x)>0}}{Z_{n}(x)/n}\right]V_{\text{\tiny DM}}(x).$$
Now, we study an asymptotic regime where $n$ and $m$ can grow toward infinity but at different paces. Let $\lim\limits_{n,m\to\infty}\frac{m}{n}=\lambda\in[0,\infty]$, where $\lambda$ characterizes the regime.
Case 1:
If $\lambda\in[1,\infty]$, one can replace $\min(n,m)$ by $n$, so that
$$\displaystyle\min(n,m)\operatorname{Var}\left[\hat{\tau}_{n,m}\right]=n\operatorname{Var}\left[\hat{\tau}_{n,m}\right]$$
$$\displaystyle=\frac{n}{m}\operatorname{Var}\left[\tau(X)-C_{n}(X)\right]+n\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[C_{n}(X)|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\quad+\sum_{x\in\mathbb{X}}\left(\frac{p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))}{m}+p_{\text{\tiny T}}^{2}(x)\right)\mathbb{E}\left[\frac{\mathds{1}_{Z_{n}(x)>0}}{Z_{n}(x)/n}\right]V_{\text{\tiny DM}}(x),$$
where $n/m\to 1/\lambda$, the second term vanishes exponentially as above, and (37) and (38) apply to the last term, such that
$$\displaystyle\lim_{n,m\rightarrow\infty}n\operatorname{Var}\left[\hat{\tau}_{n,m}\right]$$
$$\displaystyle=\frac{\operatorname{Var}\left[\tau(X)\right]}{\lambda}+\tilde{V}_{so},$$
where
$$\displaystyle\tilde{V}_{so}:=\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}(X)}{p_{\text{\tiny R}}(X)}\right)^{2}V_{\text{\tiny DM},\infty}(X)\right].$$
Case 2:
If $\lambda\in[0,1]$, one can replace $\min(n,m)$ by $m$, so that,
$$\displaystyle\min(n,m)\operatorname{Var}\left[\hat{\tau}_{n,m}\right]=m\operatorname{Var}\left[\hat{\tau}_{n,m}\right]$$
$$\displaystyle=\operatorname{Var}\left[\tau(X)-C_{n}(X)\right]+m\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[C_{n}(X)|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\quad+\frac{m}{n}\sum_{x\in\mathbb{X}}\left(\frac{p_{\text{\tiny T}}(x)(1-p_{\text{\tiny T}}(x))}{m}+p_{\text{\tiny T}}^{2}(x)\right)\mathbb{E}\left[\frac{\mathds{1}_{Z_{n}(x)>0}}{Z_{n}(x)/n}\right]V_{\text{\tiny DM}}(x).$$
Because $m\leq n$,
$$\displaystyle m\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[C_{n}(X)|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\leq n\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[C_{n}(X)|\mathbf{X}_{n}\right]\right],$$
so that
$$\displaystyle\lim_{n,m\rightarrow\infty}m\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[C_{n}(X)|\mathbf{X}_{n}\right]\right]$$
$$\displaystyle\leq\lim_{n,m\rightarrow\infty}n\left(1-\frac{1}{m}\right)\operatorname{Var}\left[\mathbb{E}\left[C_{n}(X)|\mathbf{X}_{n}\right]\right]=0.$$
Finally,
$$\displaystyle\lim_{n,m\rightarrow\infty}m\operatorname{Var}\left[\hat{\tau}_{n,m}\right]$$
$$\displaystyle=\operatorname{Var}\left[\tau(X)\right]+\lambda\tilde{V}_{so}.$$
∎
A.4.3 Proof of Theorem 4
Proof.
According to Proposition 3, the bias of the IPSW estimator with estimated $\hat{\pi}_{n}$ can be upper bounded via
$$\displaystyle\left|\mathbb{E}\left[\hat{\tau}_{n,m}\right]-\tau\right|$$
$$\displaystyle\leq\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\left|\mathbb{E}\left[Y^{(0)}\mid X=x\right]\right|\left(1-p_{\text{\tiny R}}(x)\left(1-\pi(x)\right)\right)^{n}$$
$$\displaystyle\quad+\sum_{x\in\mathds{X}}p_{\text{\tiny T}}(x)\left|\mathbb{E}\left[Y^{(1)}\mid X=x\right]\right|\left(1-p_{\text{\tiny R}}(x)\pi(x)\right)^{n}$$
$$\displaystyle\leq\left(1-\min_{x}\left((1-\tilde{\pi}(x))p_{\text{\tiny R}}(x)\right)\right)^{n}\mathbb{E}_{\text{\tiny T}}\left[\left|\mathbb{E}\left[Y^{(1)}\mid X\right]\right|+\left|\mathbb{E}\left[Y^{(0)}\mid X\right]\right|\right].$$
Therefore, the risk of the IPSW estimator with estimated $\hat{\pi}_{n}$ satisfies,
$$\displaystyle\quad\mathbb{E}\left[\left(\hat{\tau}_{n,m}-\tau\right)^{2}\right]$$
$$\displaystyle\leq\left(1-\min_{x}\left((1-\tilde{\pi}(x))p_{\text{\tiny R}}(x)\right)\right)^{2n}\mathbb{E}_{\text{\tiny T}}\left[\left|\mathbb{E}\left[Y^{(1)}\mid X\right]\right|+\left|\mathbb{E}\left[Y^{(0)}\mid X\right]\right|\right]^{2}+\frac{2}{n+1}\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}\left(X\right)}{p_{\text{\tiny R}}\left(X\right)}\right)^{2}V_{\text{\tiny DM}}(X)\right]$$
$$\displaystyle\quad+\frac{\operatorname{Var}\left[\tau(X)\right]}{m}+\frac{2}{(n+1)m}\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}\left(X\right)(1-p_{\text{\tiny T}}\left(X\right))}{p_{\text{\tiny R}}\left(X\right)^{2}}V_{\text{\tiny DM}}(X)\right]$$
$$\displaystyle\quad+2\left(1+\frac{3}{m}\right)\left(1-\min_{x}\left((1-\tilde{\pi}(x))p_{\text{\tiny R}}(x)\right)\right)^{n/2}\mathbb{E}\left[(Y^{(1)})^{2}+(Y^{(0)})^{2}\right]$$
$$\displaystyle\leq\frac{2}{n+1}\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}\left(X\right)}{p_{\text{\tiny R}}\left(X\right)}\right)^{2}V_{\text{\tiny DM}}(X)\right]+\frac{\operatorname{Var}\left[\tau(X)\right]}{m}+\frac{2}{m(n+1)}\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}\left(X\right)(1-p_{\text{\tiny T}}\left(X\right))}{p_{\text{\tiny R}}\left(X\right)^{2}}V_{\text{\tiny DM}}(X)\right]$$
$$\displaystyle\quad+2\left(2+\frac{3}{m}\right)\left(1-\min_{x}\left((1-\tilde{\pi}(x))p_{\text{\tiny R}}(x)\right)\right)^{n/2}\mathbb{E}\left[(Y^{(1)})^{2}+(Y^{(0)})^{2}\right].$$
∎
Appendix B Extended adjustment set
B.1 Proof of Corollary 4
Proof.
According to Corollary 1, we have
$$\displaystyle\lim_{n\to\infty}n\operatorname{Var}\left[\hat{\tau}_{\text{\tiny T},n}^{*}(X)\right]$$
$$\displaystyle=V_{\text{so}},$$
(57)
where
$$V_{\text{so}}:=\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}(X)}{p_{\text{\tiny R}}(X)}\right)^{2}V_{\text{\tiny HT}}(X)\right],$$
with
$$V_{\text{\tiny HT}}(x)=\mathbb{E}_{\text{\tiny R}}\left[\frac{\left(Y^{(1)}\right)^{2}}{\pi}\mid X=x\right]+\mathbb{E}_{\text{\tiny R}}\left[\frac{\left(Y^{(0)}\right)^{2}}{1-\pi}\mid X=x\right]-\tau(x)^{2}.$$
Since, by assumption, $V$ is composed of covariates that are not treatment effect modifiers, using Definition 11, we have, for all $(x,v)$,
$$\displaystyle V_{\text{\tiny HT}}(x,v)=V_{\text{\tiny HT}}(x).$$
(58)
Now, considering the set $(X,V)$ instead of $X$ in the expression (57) leads to
$$\displaystyle\lim_{n\to\infty}n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny T},n}^{*}(X,V)\right]$$
$$\displaystyle=\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}(X,V)}{p_{\text{\tiny R}}(X,V)}\right)^{2}V_{\text{\tiny HT}}(X,V)\right]$$
$$\displaystyle=\sum_{x,v\in\mathcal{X},\mathcal{V}}\frac{p_{\text{\tiny T}}^{2}(x,v)}{p_{\text{\tiny R}}(x,v)}V_{\text{\tiny HT}}(x,v)$$
$$\displaystyle=\sum_{x,v\in\mathcal{X},\mathcal{V}}\frac{p_{\text{\tiny T}}^{2}(x,v)}{p_{\text{\tiny R}}(x,v)}V_{\text{\tiny HT}}(x)$$
Equation (58)
$$\displaystyle=\sum_{x,v\in\mathcal{X},\mathcal{V}}\frac{p_{\text{\tiny T}}^{2}(x)p_{\text{\tiny T}}^{2}(v)}{p_{\text{\tiny R}}(x)p_{\text{\tiny R}}(v)}V_{\text{\tiny HT}}(x)$$
$$V\perp\!\!\!\perp X$$
$$\displaystyle=\left(\sum_{v\in\mathcal{V}}\frac{p_{\text{\tiny T}}(v)^{2}}{p_{\text{\tiny R}}(v)}\right)\sum_{x\in\mathcal{X}}\frac{p_{\text{\tiny T}}^{2}(x)}{p_{\text{\tiny R}}(x)}V_{\text{\tiny HT}}(x)$$
$$\displaystyle=\left(\sum_{v\in\mathcal{V}}\frac{p_{\text{\tiny T}}(v)^{2}}{p_{\text{\tiny R}}(v)}\right)\lim_{n\to\infty}n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny T},n}^{*}(X)\right],$$
Now, note that
$$\displaystyle\sum_{v\in\mathcal{V}}\frac{p_{\text{\tiny T}}(v)^{2}}{p_{\text{\tiny R}}(v)}$$
$$\displaystyle=\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}(V)^{2}}{p_{\text{\tiny R}}(V)^{2}}\right]$$
$$\displaystyle\geq\left(\mathbb{E}_{\text{\tiny R}}\left[\frac{p_{\text{\tiny T}}(V)}{p_{\text{\tiny R}}(V)}\right]\right)^{2}$$
$$\displaystyle=\left(\sum_{v\in\mathcal{V}}p_{\text{\tiny T}}(v)\right)^{2}$$
$$\displaystyle=1,$$
where the inequality results from Jensen’s inequality, and the last equality from $\sum_{v\in\mathcal{V}}p_{\text{\tiny T}}(v)=1$. Consequently,
$$\displaystyle\lim_{n\to\infty}n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny T},n}^{*}(X,V)\right]\geq\lim_{n\to\infty}n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny T},n}^{*}(X)\right].$$
∎
B.2 Proof of Corollary 5
Proof.
By the law of total variance, we have, for all $x$,
$$\displaystyle V_{\text{\tiny DM}}(x)=\mathbb{E}\left[V_{\text{\tiny DM}}(x,V)\right]+\operatorname{Var}\left[\tau(x,V)\right].$$
(59)
Indeed, according to the law of total variance, for all random variables $Z,X_{1},X_{2}$, we have, a.s.,
$$\displaystyle\operatorname{Var}\left[Z\mid X_{1}\right]=\mathbb{E}\left[\operatorname{Var}\left[Z\mid X_{1},X_{2}\right]\mid X_{1}\right]+\operatorname{Var}\left[\mathbb{E}\left[Z\mid X_{1},X_{2}\right]\mid X_{1}\right].$$
Letting $X_{1}=X$, $X_{2}=V$, and $Z=\frac{AY}{\pi}-\frac{(1-A)Y}{1-\pi}$ yields equation (59).
Now, we can write
$$\displaystyle\lim_{n\to\infty}n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny T},n}^{*}(X,V)\right]$$
$$\displaystyle=\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}(X,V)}{p_{\text{\tiny R}}(X,V)}\right)^{2}V_{\text{\tiny DM}}(X,V)\right]$$
$$\displaystyle=\sum_{x,v\in\mathcal{X},\mathcal{V}}\frac{p_{\text{\tiny T}}^{2}(x,v)}{p_{\text{\tiny R}}(x,v)}V_{\text{\tiny DM}}(x,v)$$
$$\displaystyle=\sum_{x,v\in\mathcal{X},\mathcal{V}}\frac{p_{\text{\tiny T}}^{2}(x)p_{\text{\tiny T}}^{2}(v)}{p_{\text{\tiny R}}(x)p_{\text{\tiny R}}(v)}V_{\text{\tiny DM}}(x,v)$$
$$V\perp\!\!\!\perp X$$
$$\displaystyle=\sum_{x\in\mathcal{X}}\frac{p_{\text{\tiny T}}^{2}(x)}{p_{\text{\tiny R}}(x)}\sum_{v\in\mathcal{V}}\frac{p_{\text{\tiny T}}^{2}(v)}{p_{\text{\tiny R}}(v)}V_{\text{\tiny DM}}(x,v)$$
$$\displaystyle=\sum_{x\in\mathcal{X}}\frac{p_{\text{\tiny T}}^{2}(x)}{p_{\text{\tiny R}}(x)}\sum_{v\in\mathcal{V}}p_{\text{\tiny T}}(v)V_{\text{\tiny DM}}(x,v)$$
by Definition 12
$$\displaystyle=\sum_{x\in\mathcal{X}}\frac{p_{\text{\tiny T}}^{2}(x)}{p_{\text{\tiny R}}(x)}\left(V_{\text{\tiny DM}}(x)-\operatorname{Var}\left[\tau(x,V)\right]\right)$$
Equation (59)
$$\displaystyle=\lim_{n\to\infty}n\operatorname{Var}_{\text{\tiny R}}\left[\hat{\tau}_{\text{\tiny T},n}^{*}(X)\right]-\mathbb{E}_{\text{\tiny R}}\left[\left(\frac{p_{\text{\tiny T}}(X)}{p_{\text{\tiny R}}(X)}\right)^{2}\operatorname{Var}\left[\tau(X,V)\mid X\right]\right],$$
which concludes the proof.
∎
Appendix C Semi-synthetic simulation’s data preparation
C.1 Context
The semi-synthetic simulation is built from real-world data: a trial called CRASH-3 (Dewan et al., 2012; CRASH-3, 2019) and an observational database called the Traumabase.
The covariates of both data sources are used to generate the true distribution from which the simulated data are drawn. This part details the pre-processing performed on the covariates, which is contained in the R notebook entitled Prepare-semi-synthetic-simulation.Rmd.
As explained in the main document, in this semi-synthetic simulation we only consider six baseline covariates:
•
Glasgow Coma Scale score (GCS) (categorical): the GCS is a neurological scale that assesses a person’s consciousness; the lower the score, the higher the severity of the trauma;
•
Gender (categorical);
•
Pupil reactivity (categorical);
•
Age (continuous);
•
Systolic blood pressure (continuous);
•
Time-to-treatment (continuous), i.e., the time between the trauma and the administration of the treatment.
As three of the six covariates are continuous, we categorize them to obtain fully categorical data. Time-to-treatment is categorized into 4 levels, systolic blood pressure into 3 levels, and age into 3 levels.
To further reduce the number of categories, and to follow the CRASH-3 trial stratification, the Glasgow score is also grouped into 3 levels, ranging from severely to moderately injured individuals.
CRASH-3 trial
The CRASH-3 trial data contains information on $12,737$ individuals. Across the six covariates of interest and the $12,737$ individuals, $108$ values are missing. We imputed them using the R package missRanger.
Traumabase observational data
The complete Traumabase data contains $20,037$ observations, but after keeping only the individuals suffering from Traumatic Brain Injury (TBI), as is the case in the CRASH-3 trial, $8,289$ observations remain. Many values are missing: $2,660$ missing values across the $8,289$ individuals and the $5$ baseline covariates considered. We impute them with the R package missRanger, using $35$ other available baseline covariates.
Because the time-to-treatment is not observed in the Traumabase, this covariate is generated from a beta distribution, with a distribution shifted compared to the trial, in particular toward lower time-to-treatment values.
Ensuring overlap
When binding the two data sets, we had to ensure that the support inclusion assumption (Assumption 4) was verified.
Out of the $586$ modalities present in the target data, only $192$ are also present in the trial data. Therefore, only these observations are kept, so that the observational sample finally contains $8,058$ observations (out of the initial $8,289$). All the observations in the trial are kept, as Assumption 4 allows the trial to have a larger support than the target.
C.1.1 Covariate shift visualization
For each of the six categorical baseline covariates considered, the covariate shift between the two data sources is visualized in Figures 10, 11, 12, 13, 14, and 15.
C.2 Synthetic outcome model
As detailed above, the covariate support reflects a real situation; only the time-to-treatment covariate was created, as it is missing in the target population sample (Colnet et al., 2021).
For the purpose of the simulation, the outcome model is completely synthetic: each stratum is assigned a number, from $1$ to the number of strata, starting from the lowest category (for example, the youngest age group, the lowest Glasgow score, or the lowest systolic blood pressure) up to the highest one.
The outcome model considered is then
$$\displaystyle Y$$
$$\displaystyle=10-\texttt{Glasgow}+\left(\texttt{if Girl:}-5\texttt{ else:}0\right)$$
$$\displaystyle\qquad+A\left(15(6-\texttt{TTT})+3\left(\texttt{Systolic.blood.pressure}-1\right)^{2}\right)+\varepsilon_{\texttt{TTT}},$$
where $\varepsilon_{\texttt{TTT}}$ is a Gaussian noise whose standard deviation depends on the value of the covariate TTT. In particular, if the treatment is given later, then the noise is stronger.
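The data-generating process above can be sketched in code. Below is a minimal Python sketch (the paper’s own code is an R notebook); the integer coding of the strata follows the text, while the covariate frequencies, treatment probability, and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical integer-coded strata (1 = lowest category), as described above;
# the marginal distributions are illustrative, not the semi-synthetic ones.
glasgow = rng.integers(1, 4, size=n)   # Glasgow score, 3 levels
girl = rng.integers(0, 2, size=n)      # 1 if girl
ttt = rng.integers(1, 5, size=n)       # time-to-treatment, 4 levels
sbp = rng.integers(1, 4, size=n)       # systolic blood pressure, 3 levels
A = rng.binomial(1, 0.5, size=n)       # treatment indicator

# Noise standard deviation increases with time-to-treatment (illustrative
# values), so that a later treatment means a noisier outcome.
sigma = np.array([1.0, 2.0, 4.0, 8.0])[ttt - 1]
eps = rng.normal(0.0, sigma)

Y = (10 - glasgow + np.where(girl == 1, -5, 0)
     + A * (15 * (6 - ttt) + 3 * (sbp - 1) ** 2) + eps)
```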
Appendix D Useful results about RCTs under a Bernoulli design
Here we recall the definition of a Bernoulli trial (see Definition 13), together with results such as the variance expressions of the Horvitz-Thompson and difference-in-means estimators under this design. We also detail the inequality between the variance of the Horvitz-Thompson estimator and that of the difference-in-means estimator.
In the literature, we have not found detailed derivations of the finite-sample bias and variance of the difference-in-means under a Bernoulli design.
Extensively detailed derivations are available in Chapter 2 of Imbens and Rubin (2015), but for a completely randomized design.
Also note that in this work we assume a superpopulation framework, while a large part of the existing literature focuses on inference for a finite population. Indeed, when considering a finite population, the bias and variance of the Horvitz-Thompson and difference-in-means estimators are not the same as when inferring the superpopulation treatment effect (Splawa-Neyman et al., 1990; Imbens, 2011; Miratrix et al., 2013; Harshaw et al., 2021).
Note that all the results in this section consider a single population, rather than two populations with two distributions (target and randomized); therefore no index is placed on the expectations. When the following results on RCTs are used in the main paper and/or in the proofs, we use the index $R$ on the expectation, as the trial in the main paper is sampled according to $P_{\text{\tiny R}}$.
D.1 Bernoulli trial
A Bernoulli trial is a trial where the components of the treatment assignment vector $\boldsymbol{A}=(A_{1},\dots,A_{n})$ are independent Bernoulli random variables with a constant probability. More formally,
Definition 13 (Assignment mechanism for a Bernoulli Trial).
If the assignment mechanism is a Bernoulli trial with a probability $\pi$, then
$$\forall i,\,\mathbb{P}[A_{i}=1]=\pi,$$
and considering a sample of $n$ units,
$$\mathbb{P}\left[\mathbf{A}\mid i\in\mathcal{R}\right]=\prod_{i=1}^{n}\left[\pi^{A_{i}}\cdot\left(1-\pi\right)^{1-A_{i}}\right],$$
where $\mathbf{A}$ denotes the vector of treatment allocation for the trial sample $\mathcal{R}$.
In this design, each unit’s treatment allocation is independent of all other treatment allocations.
A disadvantage of such a design is that there is always a small probability that all units receive the treatment, or that none do.
This is why other designs are used, such as the so-called completely randomized design, where the number of treated units is fixed prior to treatment allocation (usually $n/2$ units are given treatment). The interest is to ensure balanced groups of treated and control units, and to avoid the pathological case of high imbalance between the numbers of treated and control individuals.
Mathematically, a completely randomized design must be treated differently from a Bernoulli design, as in the former the treatment probabilities are not independent between units; for example,
$$\forall i,j\in\mathcal{R},\,\mathbb{P}_{\text{\tiny Comp. rand.}}\left[A_{i}=1\mid A_{j}=1\right]\neq\mathbb{P}_{\text{\tiny Comp. rand.}}\left[A_{i}=1\right]=\pi.$$
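Under a Bernoulli design, the probability that the sample is degenerate (all units treated, or all in control) is exactly $\pi^{n}+(1-\pi)^{n}$. A quick Monte Carlo sketch (illustrative, not part of the paper’s code) confirms this closed form:

```python
import numpy as np

def prob_one_arm(pi: float, n: int, reps: int = 500_000, seed: int = 0) -> float:
    """Monte Carlo estimate of P[all units treated or all units control]
    under a Bernoulli design with treatment probability pi."""
    rng = np.random.default_rng(seed)
    counts = rng.binomial(n, pi, size=reps)   # number of treated units per trial
    return float(np.mean((counts == 0) | (counts == n)))

pi, n = 0.5, 10
exact = pi**n + (1 - pi)**n   # closed form: about 0.2% here
assert abs(prob_one_arm(pi, n) - exact) < 5e-4
```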
D.2 The Horvitz-Thompson estimator
The Horvitz-Thompson estimator is unbiased and has an explicit finite-sample variance.
Lemma 1 (Finite sample bias and variance of the Horvitz-Thompson estimator).
Assuming trial internal validity (Assumption 2), then
$$\forall n,\quad\mathbb{E}[\hat{\tau}_{\text{\tiny HT}}]-\tau=0,$$
and
$$\forall n,\quad n\operatorname{Var}\left[\hat{\tau}_{\text{\tiny HT},n}\right]=\mathbb{E}\left[\frac{\left(Y^{(1)}\right)^{2}}{\pi}\right]+\mathbb{E}\left[\frac{\left(Y^{(0)}\right)^{2}}{1-\pi}\right]-\tau^{2}.$$
Note that the following proof can be extended to any $\pi(x)$ depending on baseline covariates, and therefore extends to the oracle IPW in the causal inference literature.
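Before the proof, Lemma 1 can be checked numerically. The sketch below uses illustrative Gaussian potential outcomes (an assumption, not the paper’s data) and compares the empirical scaled variance of the Horvitz-Thompson estimator with the closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
pi, n, reps = 0.3, 200, 20_000

# Illustrative potential outcomes (an assumption):
# Y(1) ~ N(2, 1), Y(0) ~ N(0.5, 1), hence tau = 1.5.
y1 = rng.normal(2.0, 1.0, (reps, n))
y0 = rng.normal(0.5, 1.0, (reps, n))
a = rng.binomial(1, pi, (reps, n))

tau_ht = (a * y1 / pi - (1 - a) * y0 / (1 - pi)).mean(axis=1)

# Lemma 1: n * Var = E[(Y(1))^2]/pi + E[(Y(0))^2]/(1-pi) - tau^2.
v_theory = (2.0**2 + 1.0) / pi + (0.5**2 + 1.0) / (1 - pi) - 1.5**2
v_emp = n * tau_ht.var()
assert abs(tau_ht.mean() - 1.5) < 0.02   # unbiasedness
assert abs(v_emp - v_theory) < 1.0       # variance formula
```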
Proof.
Bias
$$\displaystyle\mathbb{E}[\hat{\tau}_{\text{\tiny HT}}]$$
$$\displaystyle=\frac{\mathbb{E}\left[A_{i}Y_{i}^{(1)}\right]}{\pi}-\frac{\mathbb{E}\left[(1-A_{i})Y_{i}^{(0)}\right]}{1-\pi}$$
Linearity & SUTVA
$$\displaystyle=\frac{\mathbb{E}\left[A_{i}\right]\mathbb{E}\left[Y_{i}^{(1)}\right]}{\pi}-\frac{\mathbb{E}\left[(1-A_{i})\right]\mathbb{E}\left[Y_{i}^{(0)}\right]}{1-\pi}$$
Randomization
$$\displaystyle=\frac{\pi\mathbb{E}\left[Y_{i}^{(1)}\right]}{\pi}-\frac{(1-\pi)\mathbb{E}\left[Y_{i}^{(0)}\right]}{1-\pi}$$
Def. of $$\pi$$ - Bernoulli design
$$\displaystyle=\tau,$$
Linearity.
Variance
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\text{\tiny HT},n}\right]$$
$$\displaystyle=\operatorname{Var}\left[\frac{1}{n}\sum_{i=1}^{n}\frac{A_{i}Y_{i}}{\pi}-\frac{(1-A_{i})Y_{i}}{1-\pi}\right]$$
$$\displaystyle=\frac{1}{n^{2}}\operatorname{Var}\left[\sum_{i=1}^{n}\frac{A_{i}Y^{(1)}_{i}}{\pi}-\frac{(1-A_{i})Y^{(0)}_{i}}{1-\pi}\right]$$
Assumption 2
$$\displaystyle=\frac{1}{n}\operatorname{Var}\left[\frac{AY^{(1)}}{\pi}-\frac{(1-A)Y^{(0)}}{1-\pi}\right].$$
iid
Then,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\text{\tiny HT},n}\right]$$
$$\displaystyle=\frac{1}{n}\left(\operatorname{Var}\left[\frac{AY^{(1)}}{\pi}\right]+\operatorname{Var}\left[\frac{(1-A)Y^{(0)}}{1-\pi}\right]-2\,\operatorname{Cov}\left[\frac{AY^{(1)}}{\pi},\frac{(1-A)Y^{(0)}}{1-\pi}\right]\right).$$
(60)
The first two terms can be simplified, noting that
$$\displaystyle\mathbb{E}\left[\left(\frac{AY^{(1)}}{\pi}\right)^{2}\right]$$
$$\displaystyle=\mathbb{E}\left[\mathds{1}_{\left\{A_{i}=1\right\}}\left(\frac{Y^{(1)}}{\pi}\right)^{2}\right]$$
A is binary
$$\displaystyle=\mathbb{E}\left[\frac{\left(Y^{(1)}\right)^{2}}{\pi^{2}}\right]\mathbb{E}\left[\mathds{1}_{\left\{A_{i}=1\right\}}\right]$$
Randomization of trial
$$\displaystyle=\mathbb{E}\left[\frac{\left(Y^{(1)}\right)^{2}}{\pi}\right]$$
Definition of $$\pi$$
Similarly,
$$\mathbb{E}\left[\left(\frac{(1-A)Y^{(0)}}{1-\pi}\right)^{2}\right]=\mathbb{E}\left[\frac{\left(Y^{(0)}\right)^{2}}{1-\pi}\right].$$
So,
$$\displaystyle\operatorname{Var}\left[\frac{AY^{(1)}}{\pi}\right]$$
$$\displaystyle=\mathbb{E}\left[\left(\frac{AY^{(1)}}{\pi}\right)^{2}\right]-\mathbb{E}\left[\frac{AY^{(1)}}{\pi}\right]^{2}$$
$$\displaystyle=\mathbb{E}\left[\frac{\left(Y^{(1)}\right)^{2}}{\pi}\right]-\mathbb{E}\left[Y^{(1)}\right]^{2}.$$
Similarly,
$$\operatorname{Var}\left[\frac{(1-A)Y^{(0)}}{1-\pi}\right]=\mathbb{E}\left[\frac{\left(Y^{(0)}\right)^{2}}{1-\pi}\right]-\mathbb{E}\left[Y^{(0)}\right]^{2}.$$
The third term in equation (60) can also be decomposed, so that,
$$\displaystyle\operatorname{Cov}\left[\frac{AY^{(1)}}{\pi},\frac{(1-A)Y^{(0)}}{1-\pi}\right]$$
$$\displaystyle=\mathbb{E}\left[\left(\frac{AY^{(1)}}{\pi}-\mathbb{E}\left[Y^{(1)}\right]\right)\left(\frac{(1-A)Y^{(0)}}{1-\pi}-\mathbb{E}\left[Y^{(0)}\right]\right)\right]$$
$$\displaystyle=\mathbb{E}\left[\underbrace{\frac{AY^{(1)}}{\pi}\frac{(1-A)Y^{(0)}}{1-\pi}}_{\textrm{$=0$}}\right]-\mathbb{E}\left[Y^{(0)}\right]\mathbb{E}\left[Y^{(1)}\right].$$
Finally,
$$\displaystyle n\operatorname{Var}\left[\hat{\tau}_{\text{\tiny HT},n}\right]$$
$$\displaystyle=\mathbb{E}\left[\frac{\left(Y^{(1)}\right)^{2}}{\pi}\right]+\mathbb{E}\left[\frac{\left(Y^{(0)}\right)^{2}}{1-\pi}\right]-\tau^{2}=:V_{\text{\tiny HT}}.$$
∎
D.3 General results about the Difference-in-means
First, note that the Difference-in-Means estimator (in Definition 2) can be re-written as,
$$\hat{\tau}_{\text{\tiny DM},n}=\frac{1}{n}\sum_{i=1}^{n}\frac{A_{i}Y_{i}}{\frac{\sum_{i=1}^{n}A_{i}}{n}}-\frac{1}{n}\sum_{i=1}^{n}\frac{(1-A_{i})Y_{i}}{\frac{\sum_{i=1}^{n}1-A_{i}}{n}},$$
which corresponds to the Horvitz-Thompson estimator where the probability of being treated is estimated from the data.
This estimator is always defined, even though, due to the Bernoulli design, it is possible that all observations are allocated to treatment, or all to control. For example, if all units are given control, then
$$\displaystyle\sum_{i=1}^{n}A_{i}$$
$$\displaystyle=0,$$
and because $A_{i}=0$ for all $i$, the ratio $\frac{1}{n}\sum_{i=1}^{n}\frac{A_{i}Y_{i}}{\frac{\sum_{i=1}^{n}A_{i}}{n}}$ is defined and equal to $0$, using the convention $\frac{0}{0}=0$.
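The convention above can be made concrete with a small, hypothetical helper (not from the paper’s code):

```python
import numpy as np

def diff_in_means(y: np.ndarray, a: np.ndarray) -> float:
    """Difference-in-means; an empty arm contributes 0 (the 0/0 = 0 convention)."""
    n1, n0 = int(a.sum()), int((1 - a).sum())
    t1 = float((a * y).sum() / n1) if n1 > 0 else 0.0
    t0 = float(((1 - a) * y).sum() / n0) if n0 > 0 else 0.0
    return t1 - t0

# Degenerate draw: every unit is in control, yet the estimator is defined.
y = np.array([1.0, 2.0, 3.0])
a = np.zeros(3, dtype=int)
assert diff_in_means(y, a) == -2.0
```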
Lemma 2 (Finite sample and large sample properties of the difference-in-means estimator).
Assuming trial internal validity (Assumption 2), then
$$\displaystyle\forall n,\quad\mathbb{E}\left[\hat{\tau}_{\text{\tiny DM},n}\right]-\tau$$
$$\displaystyle=\pi^{n}\mathbb{E}\left[Y_{i}^{(0)}\right]-(1-\pi)^{n}\mathbb{E}\left[Y_{i}^{(1)}\right],$$
and
$$\displaystyle\forall n,\quad\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM},n}\right]$$
$$\displaystyle=\frac{1}{n}\left(\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\right]\operatorname{Var}\left[Y_{i}^{(1)}\right]+\mathbb{E}\left[\frac{\mathds{1}_{(1-\hat{\pi})>0}}{1-\hat{\pi}}\right]\operatorname{Var}\left[Y_{i}^{(0)}\right]\right)+D_{n},$$
where $D_{n}=\mathbb{E}\left[Y_{i}^{(1)}\right]^{2}(1-\pi)^{n}+\mathbb{E}\left[Y_{i}^{(0)}\right]^{2}\pi^{n}-\left(\mathbb{E}\left[Y_{i}^{(1)}\right](1-\pi)^{n}-\mathbb{E}\left[Y_{i}^{(0)}\right]\pi^{n}\right)^{2}$.
Asymptotically, the difference-in-means is unbiased
$$\displaystyle\lim_{n\to\infty}\mathbb{E}\left[\hat{\tau}_{\text{\tiny DM},n}\right]=\tau,$$
and has the following variance
$$\displaystyle\lim_{n\to\infty}n\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM},n}\right]=\frac{\operatorname{Var}\left[Y^{(1)}\right]}{\pi}+\frac{\operatorname{Var}\left[Y^{(0)}\right]}{1-\pi}:=V_{\text{\tiny DM},\infty}.$$
The difference-in-means under a Bernoulli design has a finite-sample bias, due to the possibility of a sample where every unit receives treatment or every unit receives control. This bias, however, decreases exponentially with $n$. Also note that,
$$\displaystyle D_{n}$$
$$\displaystyle=\mathcal{O}\left(\max(\pi,1-\pi)^{n}\right).$$
The asymptotic variance of the difference-in-means is the variance usually reported in textbooks, and corresponds to the finite-sample variance of the difference-in-means estimator under a completely randomized trial. Note that we could also show that the difference-in-means is asymptotically normally distributed, for example using M-estimation techniques (Stefanski and Boos, 2002). As this result is not used in this paper, we do not detail the proof.
Note that for a completely randomized design, the difference-in-means is unbiased and its finite sample variance is,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM},n}\right]$$
$$\displaystyle=\frac{\operatorname{Var}\left[Y^{(1)}\right]}{n_{1}}+\frac{\operatorname{Var}\left[Y^{(0)}\right]}{n_{0}},$$
where $n_{1}$ is the number of treated units ($\sim\pi n$) and $n_{0}$ is the number of control units ($\sim(1-\pi)n$).
This formula is extensively used in the literature, but under a Bernoulli design it holds only in large samples, as detailed in Lemma 2.
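The large-sample variance $V_{\text{\tiny DM},\infty}$ of Lemma 2 can also be checked by simulation; the sketch below uses illustrative Gaussian potential outcomes (an assumption), and the tolerance accounts for Monte Carlo error and the vanishing finite-$n$ terms:

```python
import numpy as np

rng = np.random.default_rng(3)
pi, n, reps = 0.4, 200, 20_000

# Illustrative potential outcomes: Var[Y(1)] = 4, Var[Y(0)] = 1.
y1 = rng.normal(1.0, 2.0, (reps, n))
y0 = rng.normal(0.0, 1.0, (reps, n))
a = rng.binomial(1, pi, (reps, n))

n1 = a.sum(axis=1)
n0 = n - n1
# Arm means with the 0/0 = 0 convention (np.maximum guards the division).
t1 = np.where(n1 > 0, (a * y1).sum(axis=1) / np.maximum(n1, 1), 0.0)
t0 = np.where(n0 > 0, ((1 - a) * y0).sum(axis=1) / np.maximum(n0, 1), 0.0)
tau_dm = t1 - t0

v_emp = n * tau_dm.var()
v_theory = 2.0**2 / pi + 1.0**2 / (1 - pi)   # Var[Y(1)]/pi + Var[Y(0)]/(1-pi)
assert abs(v_emp - v_theory) < 0.6
```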
Proof.
Bias
One can use the law of total expectation, conditioning on the treatment assignment vector denoted $\mathbf{A}$,
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\text{\tiny DM}}\right]$$
$$\displaystyle=\mathbb{E}\left[\mathbb{E}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]\right]$$
$$\displaystyle=\mathbb{E}\left[\frac{\frac{1}{n}\sum_{i=1}^{n}A_{i}}{\frac{1}{n}\sum_{i=1}^{n}A_{i}}\mathbb{E}\left[Y_{i}^{(1)}\mid\mathbf{A}\right]-\frac{\frac{1}{n}\sum_{i=1}^{n}(1-A_{i})}{\frac{1}{n}\sum_{i=1}^{n}(1-A_{i})}\mathbb{E}\left[Y_{i}^{(0)}\mid\mathbf{A}\right]\right]$$
$$\displaystyle=\mathbb{E}\left[\frac{\frac{1}{n}\sum_{i=1}^{n}A_{i}}{\frac{1}{n}\sum_{i=1}^{n}A_{i}}\mathbb{E}\left[Y_{i}^{(1)}\right]-\frac{\frac{1}{n}\sum_{i=1}^{n}(1-A_{i})}{\frac{1}{n}\sum_{i=1}^{n}(1-A_{i})}\mathbb{E}\left[Y_{i}^{(0)}\right]\right]$$
$$\{Y_{i}^{(1)},Y_{i}^{(0)}\}\perp\!\!\!\perp A_{i}$$
$$\displaystyle=\mathbb{E}\left[\mathds{1}_{\sum_{i=1}^{n}A_{i}>0}\mathbb{E}\left[Y_{i}^{(1)}\right]-\mathds{1}_{\sum_{i=1}^{n}1-A_{i}>0}\mathbb{E}\left[Y_{i}^{(0)}\right]\right]$$
$$\displaystyle=\mathbb{E}\left[Y_{i}^{(1)}\right]\mathbb{E}\left[\mathds{1}_{\sum_{i=1}^{n}A_{i}>0}\right]-\mathbb{E}\left[\mathds{1}_{\sum_{i=1}^{n}1-A_{i}>0}\right]\mathbb{E}\left[Y_{i}^{(0)}\right]$$
$$\displaystyle=\left(1-(1-\pi)^{n}\right)\mathbb{E}\left[Y_{i}^{(1)}\right]-\left(1-\pi^{n}\right)\mathbb{E}\left[Y_{i}^{(0)}\right]$$
$$\displaystyle=\mathbb{E}\left[Y_{i}^{(1)}-Y_{i}^{(0)}\right]-(1-\pi)^{n}\mathbb{E}\left[Y_{i}^{(1)}\right]+\pi^{n}\mathbb{E}\left[Y_{i}^{(0)}\right]$$
$$\displaystyle=\tau-(1-\pi)^{n}\mathbb{E}\left[Y_{i}^{(1)}\right]+\pi^{n}\mathbb{E}\left[Y_{i}^{(0)}\right],$$
where the second row uses linearity of expectation and the conditioning on $\mathbf{A}$. To summarize, the difference-in-means has a finite sample bias,
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\text{\tiny DM},n}\right]-\tau$$
$$\displaystyle=\pi^{n}\mathbb{E}\left[Y_{i}^{(0)}\right]-(1-\pi)^{n}\mathbb{E}\left[Y_{i}^{(1)}\right].$$
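For small $n$, this finite-sample bias can be verified exactly by enumerating all assignment vectors; constant potential outcomes and $\pi=1/2$ are illustrative choices that make every assignment vector equally likely:

```python
from itertools import product

# Constant potential outcomes (illustrative): Y(1) = 2, Y(0) = 1, so tau = 1.
y1, y0, pi, n = 2.0, 1.0, 0.5, 3

def dm(assignment):
    n_t = sum(assignment)
    t1 = y1 if n_t > 0 else 0.0   # treated-arm mean (0/0 = 0 convention)
    t0 = y0 if n_t < n else 0.0   # control-arm mean
    return t1 - t0

# Exact expectation over the 2^n equally likely assignment vectors (pi = 1/2).
e_dm = sum(dm(a) for a in product([0, 1], repeat=n)) / 2**n
bias = pi**n * y0 - (1 - pi)**n * y1   # formula from Lemma 2
assert abs((e_dm - (y1 - y0)) - bias) < 1e-12
```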
Variance
Using the law of total variance, and conditioning on the treatment assignment vector $\mathbf{A}$, one has
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM}}\right]$$
$$\displaystyle=\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]\right]+\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]\right].$$
Recall from derivations about the bias that,
$$\displaystyle\mathbb{E}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]$$
$$\displaystyle=\mathds{1}_{\sum_{i=1}^{n}A_{i}>0}\mathbb{E}\left[Y_{i}^{(1)}\right]-\mathds{1}_{\sum_{i=1}^{n}1-A_{i}>0}\mathbb{E}\left[Y_{i}^{(0)}\right].$$
Note that if the number of treated units were fixed, we would have $\mathbb{E}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]=\tau$, and therefore $\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]\right]=0$.
Here, one has,
$$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]\right]$$
$$\displaystyle=\operatorname{Var}\left[\mathds{1}_{\sum_{i=1}^{n}A_{i}>0}\mathbb{E}\left[Y_{i}^{(1)}\right]-\mathds{1}_{\sum_{i=1}^{n}1-A_{i}>0}\mathbb{E}\left[Y_{i}^{(0)}\right]\right]$$
$$\displaystyle=\mathbb{E}\left[Y_{i}^{(1)}\right]^{2}\operatorname{Var}\left[\mathds{1}_{\sum_{i=1}^{n}A_{i}>0}\right]+\mathbb{E}\left[Y_{i}^{(0)}\right]^{2}\operatorname{Var}\left[\mathds{1}_{\sum_{i=1}^{n}1-A_{i}>0}\right]$$
$$\displaystyle\qquad-2\mathbb{E}\left[Y_{i}^{(1)}\right]\mathbb{E}\left[Y_{i}^{(0)}\right]\operatorname{Cov}\left[\mathds{1}_{\sum_{i=1}^{n}A_{i}>0},\mathds{1}_{\sum_{i=1}^{n}1-A_{i}>0}\right].$$
Besides,
$$\displaystyle\operatorname{Var}\left[\mathds{1}_{\sum_{i=1}^{n}A_{i}>0}\right]$$
$$\displaystyle=\mathbb{E}\left[\mathds{1}_{\sum_{i=1}^{n}A_{i}>0}^{2}\right]-\mathbb{E}\left[\mathds{1}_{\sum_{i=1}^{n}A_{i}>0}\right]^{2}$$
$$\displaystyle=(1-\pi)^{n}\left(1-(1-\pi)^{n}\right),$$
and similarly,
$$\displaystyle\operatorname{Var}\left[\mathds{1}_{\sum_{i=1}^{n}1-A_{i}>0}\right]$$
$$\displaystyle=\pi^{n}\left(1-\pi^{n}\right).$$
On the other hand,
$$\displaystyle\operatorname{Cov}\left[\mathds{1}_{\sum_{i=1}^{n}A_{i}>0},\mathds{1}_{\sum_{i=1}^{n}1-A_{i}>0}\right]$$
$$\displaystyle=\mathbb{E}\left[\left(\mathds{1}_{\sum_{i=1}^{n}A_{i}>0}-\left(1-(1-\pi)^{n}\right)\right)\left(\mathds{1}_{\sum_{i=1}^{n}1-A_{i}>0}-\left(1-\pi^{n}\right)\right)\right]$$
$$\displaystyle=\mathbb{E}\left[\mathds{1}_{\sum_{i=1}^{n}A_{i}>0}\mathds{1}_{\sum_{i=1}^{n}1-A_{i}>0}\right]-\left(1-(1-\pi)^{n}\right)\left(1-\pi^{n}\right)$$
$$\displaystyle=1-(1-\pi)^{n}-\pi^{n}-\left(1-\pi^{n}-(1-\pi)^{n}+\pi^{n}(1-\pi)^{n}\right)$$
$$\displaystyle=-\pi^{n}(1-\pi)^{n},$$
such that,
$$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]\right]$$
$$\displaystyle=\mathbb{E}\left[Y_{i}^{(1)}\right]^{2}(1-\pi)^{n}\left(1-(1-\pi)^{n}\right)+\mathbb{E}\left[Y_{i}^{(0)}\right]^{2}\pi^{n}\left(1-\pi^{n}\right)+2\mathbb{E}\left[Y_{i}^{(1)}\right]\mathbb{E}\left[Y_{i}^{(0)}\right]\pi^{n}(1-\pi)^{n}$$
$$\displaystyle=\mathbb{E}\left[Y_{i}^{(1)}\right]^{2}(1-\pi)^{n}+\mathbb{E}\left[Y_{i}^{(0)}\right]^{2}\pi^{n}-\left(\mathbb{E}\left[Y_{i}^{(1)}\right](1-\pi)^{n}-\mathbb{E}\left[Y_{i}^{(0)}\right]\pi^{n}\right)^{2}$$
$$\displaystyle\leq\mathbb{E}\left[Y_{i}^{(1)}\right]^{2}(1-\pi)^{n}+\mathbb{E}\left[Y_{i}^{(0)}\right]^{2}\pi^{n}$$
$$\displaystyle\leq\left(\mathbb{E}\left[Y^{(1)}\right]^{2}+\mathbb{E}\left[Y^{(0)}\right]^{2}\right)\max(\pi,1-\pi)^{n}.$$
Now,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]$$
$$\displaystyle=\operatorname{Var}\left[\frac{1}{n}\sum_{i=1}^{n}\left(\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}}-\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}}\right)\mid\mathbf{A}\right]$$
$$\displaystyle=\frac{1}{n}\operatorname{Var}\left[\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}}-\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}}\mid\mathbf{A}\right]$$
iid
$$\displaystyle=\frac{1}{n}\left(\operatorname{Var}\left[\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}}\mid\mathbf{A}\right]+\operatorname{Var}\left[\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}}\mid\mathbf{A}\right]-2\operatorname{Cov}\left[\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}},\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}}\mid\mathbf{A}\right]\right).$$
Now, developing the covariance term, it is possible to show that,
$$\displaystyle\operatorname{Cov}\left[\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}},\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}}\mid\mathbf{A}\right]$$
$$\displaystyle=-\mathbb{E}\left[\frac{(1-A_{i})Y_{i}^{(0)}}{1-\hat{\pi}}\mid\mathbf{A}\right]\mathbb{E}\left[\frac{A_{i}Y_{i}^{(1)}}{\hat{\pi}}\mid\mathbf{A}\right]$$
$$\displaystyle=-\frac{(1-A_{i})\mathbb{E}\left[Y_{i}^{(0)}\mid\mathbf{A}\right]}{1-\hat{\pi}}\frac{A_{i}\mathbb{E}\left[Y_{i}^{(1)}\mid\mathbf{A}\right]}{\hat{\pi}}$$
Linearity and conditioned on $$\mathbf{A}$$
$$\displaystyle=0.$$
$$A_{i}(1-A_{i})=0$$
Now, also using linearity of expectation, and the fact that we conditioned on $\mathbf{A}$, one has
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]$$
$$\displaystyle=\frac{1}{n}\left(\left(\frac{A_{i}}{\hat{\pi}}\right)^{2}\operatorname{Var}\left[Y_{i}^{(1)}\mid\mathbf{A}\right]+\left(\frac{1-A_{i}}{1-\hat{\pi}}\right)^{2}\operatorname{Var}\left[Y_{i}^{(0)}\mid\mathbf{A}\right]\right)$$
$$\displaystyle=\frac{1}{n}\left(\left(\frac{A_{i}}{\hat{\pi}}\right)^{2}\operatorname{Var}\left[Y_{i}^{(1)}\right]+\left(\frac{1-A_{i}}{1-\hat{\pi}}\right)^{2}\operatorname{Var}\left[Y_{i}^{(0)}\right]\right),$$
using $$\{Y_{i}^{(1)},Y_{i}^{(0)}\}\perp\!\!\!\perp A_{i}$$.
Taking the expectation of the previous term leads to,
$$\displaystyle\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]\right]$$
$$\displaystyle=\mathbb{E}\left[\frac{1}{n}\left(\left(\frac{A_{i}}{\hat{\pi}}\right)^{2}\operatorname{Var}\left[Y_{i}^{(1)}\right]+\left(\frac{1-A_{i}}{1-\hat{\pi}}\right)^{2}\operatorname{Var}\left[Y_{i}^{(0)}\right]\right)\right]$$
$$\displaystyle=\frac{1}{n}\left(\mathbb{E}\left[\left(\frac{A_{i}}{\hat{\pi}}\right)^{2}\right]\operatorname{Var}\left[Y_{i}^{(1)}\right]+\mathbb{E}\left[\left(\frac{1-A_{i}}{1-\hat{\pi}}\right)^{2}\right]\operatorname{Var}\left[Y_{i}^{(0)}\right]\right),$$
by linearity.
Note that,
$$\displaystyle\mathbb{E}\left[\left(\frac{A_{i}}{\hat{\pi}}\right)^{2}\right]=\mathbb{E}\left[\frac{A_{i}}{\hat{\pi}^{2}}\right]\qquad\text{(since }A_{i}^{2}=A_{i}\text{)}$$
$$\displaystyle=\frac{1}{n}\left(\mathbb{E}\left[\frac{A_{1}}{\hat{\pi}^{2}}\right]+\mathbb{E}\left[\frac{A_{2}}{\hat{\pi}^{2}}\right]+\dots+\mathbb{E}\left[\frac{A_{n}}{\hat{\pi}^{2}}\right]\right)\qquad\text{(exchangeability of the }A_{j}\text{)}$$
$$\displaystyle=\mathbb{E}\left[\frac{\hat{\pi}}{\hat{\pi}^{2}}\right]=\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\right],$$
so that
$$\displaystyle\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]\right]$$
$$\displaystyle=\frac{1}{n}\left(\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\right]\operatorname{Var}\left[Y_{i}^{(1)}\right]+\mathbb{E}\left[\frac{\mathds{1}_{(1-\hat{\pi})>0}}{1-\hat{\pi}}\right]\operatorname{Var}\left[Y_{i}^{(0)}\right]\right).$$
Coming back to the law of total variance, one has,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM}}\right]$$
$$\displaystyle=\operatorname{Var}\left[\mathbb{E}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]\right]+\mathbb{E}\left[\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM}}\mid\mathbf{A}\right]\right]$$
$$\displaystyle=\mathbb{E}\left[Y_{i}^{(1)}\right]^{2}(1-\pi)^{n}+\mathbb{E}\left[Y_{i}^{(0)}\right]^{2}\pi^{n}-\left(\mathbb{E}\left[Y_{i}^{(1)}\right](1-\pi)^{n}+\mathbb{E}\left[Y_{i}^{(0)}\right]\pi^{n}\right)^{2}$$
$$\displaystyle\qquad+\frac{1}{n}\left(\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\right]\operatorname{Var}\left[Y_{i}^{(1)}\right]+\mathbb{E}\left[\frac{\mathds{1}_{(1-\hat{\pi})>0}}{1-\hat{\pi}}\right]\operatorname{Var}\left[Y_{i}^{(0)}\right]\right)$$
In particular, for any sample size,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM}}\right]$$
$$\displaystyle=\frac{1}{n}\left(\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\right]\operatorname{Var}\left[Y_{i}^{(1)}\right]+\mathbb{E}\left[\frac{\mathds{1}_{(1-\hat{\pi})>0}}{1-\hat{\pi}}\right]\operatorname{Var}\left[Y_{i}^{(0)}\right]\right)+\mathcal{O}\left(\max(\pi,1-\pi)^{n}\right),$$
and more particularly,
$$\displaystyle\lim_{n\to\infty}n\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM}}\right]$$
$$\displaystyle=\frac{\operatorname{Var}\left[Y^{(1)}\right]}{\pi}+\frac{\operatorname{Var}\left[Y^{(0)}\right]}{1-\pi}:=V_{\text{\tiny DM},\infty}.$$
∎
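As a sanity check on Lemma 2, the limit $n\operatorname{Var}[\hat{\tau}_{\text{\tiny DM}}]\to\operatorname{Var}[Y^{(1)}]/\pi+\operatorname{Var}[Y^{(0)}]/(1-\pi)$ can be verified by simulation. The sketch below uses made-up Gaussian outcomes and, consistent with the indicator terms above, sets the estimator to $0$ by convention when one arm is empty; it is an illustration, not part of the proof.

```python
import random
import statistics

# Monte Carlo sanity check of Lemma 2 (hypothetical, made-up parameters):
# under a Bernoulli design with P(A_i = 1) = pi, the scaled variance
# n * Var[tau_DM] should approach Var[Y1]/pi + Var[Y0]/(1 - pi).
random.seed(0)
n, pi, n_rep = 200, 0.3, 5000
mu1, mu0, sd1, sd0 = 2.0, 1.0, 1.0, 0.5

def dm_estimate():
    """Draw one Bernoulli trial and return the difference-in-means estimate."""
    a = [1 if random.random() < pi else 0 for _ in range(n)]
    y = [random.gauss(mu1, sd1) if ai else random.gauss(mu0, sd0) for ai in a]
    n1 = sum(a)
    if n1 == 0 or n1 == n:  # degenerate allocation: estimator set to 0 by convention
        return 0.0
    y1 = sum(yi for ai, yi in zip(a, y) if ai) / n1
    y0 = sum(yi for ai, yi in zip(a, y) if not ai) / (n - n1)
    return y1 - y0

estimates = [dm_estimate() for _ in range(n_rep)]
v_empirical = n * statistics.variance(estimates)
v_theory = sd1**2 / pi + sd0**2 / (1 - pi)
print(v_empirical, v_theory)  # the two values should be close for moderate n
```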
D.4 Variance inequality between the Horvitz-Thompson and difference-in-means estimators
In this work we use an inequality to compare the variance of the Horvitz-Thompson estimator with that of the difference-in-means under a Bernoulli design. We propose two inequalities, one for the finite-sample variance and one for the asymptotic variance. The finite-sample result depends on another inequality involving the binomial law, and in particular $\hat{\pi}$, which we detail in Lemma 3.
Lemma 3 (Inequality on $\hat{\pi}$).
Consider a Bernoulli trial (Definition 13) and the estimated propensity score $\hat{\pi}$ defined as,
$$\hat{\pi}=\frac{\sum_{i=1}^{n}A_{i}}{n}.$$
Then, for all $n\geq 1$ and for all $\alpha\in(0,\frac{1}{2})$,
$$\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\right]\leq\frac{1+C_{\alpha,\pi}n^{-\alpha}}{\pi},$$
where $C_{\alpha,\pi}=1+2\left(\frac{16}{\pi^{2}(1-2\alpha)}\right)^{\frac{2}{1-2\alpha}}$.
Proof.
Let $\varepsilon>0$ (later in the proof we will set, more precisely, $\varepsilon=\frac{\pi}{4}n^{-\alpha}$ with $\alpha\in(0,\frac{1}{2})$).
The law of total expectation leads to,
$$\displaystyle\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\right]$$
$$\displaystyle=\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\mathds{1}_{|\hat{\pi}-\pi|<\varepsilon}\right]+\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\mathds{1}_{|\hat{\pi}-\pi|\geq\varepsilon}\right].$$
For the first term,
$$\displaystyle\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\mathds{1}_{|\hat{\pi}-\pi|<\varepsilon}\right]$$
$$\displaystyle\leq\frac{1}{\pi-\varepsilon}\mathbb{E}\left[\mathds{1}_{\hat{\pi}>0}\mathds{1}_{|\hat{\pi}-\pi|<\varepsilon}\right]$$
$$\displaystyle\leq\frac{1}{\pi-\varepsilon},$$
and for the second term,
$$\displaystyle\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\mathds{1}_{|\hat{\pi}-\pi|\geq\varepsilon}\right]$$
$$\displaystyle\leq n\mathbb{E}\left[\mathds{1}_{\hat{\pi}>0}\mathds{1}_{|\hat{\pi}-\pi|\geq\varepsilon}\right]$$
$$\displaystyle\leq n\mathbb{P}\left(|\hat{\pi}-\pi|\geq\varepsilon\right)$$
$$\displaystyle\leq 2ne^{-2\varepsilon^{2}n}.$$
The last row is obtained through Chernoff’s inequality in a similar manner as in the proof for the semi-oracle (see (36)). As a consequence, gathering the two previous inequalities,
$$\displaystyle\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\right]$$
$$\displaystyle\leq\frac{1}{\pi-\varepsilon}+2ne^{-2\varepsilon^{2}n}$$
$$\displaystyle=\frac{1}{\pi}\frac{1}{1-\frac{\varepsilon}{\pi}}+2ne^{-2\varepsilon^{2}n}.$$
One can show, using elementary analysis, that for all $0\leq x<\frac{1}{2}$ we have
$$\displaystyle\frac{1}{1-x}\leq 1+\frac{x}{1-2x}.$$
Then, as soon as $\varepsilon$ is small enough that $\frac{\varepsilon}{\pi}<\frac{1}{2}$,
$$\displaystyle\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\right]$$
$$\displaystyle\leq\frac{1}{\pi}\frac{1}{1-\frac{\varepsilon}{\pi}}+2ne^{-2\varepsilon^{2}n}$$
$$\displaystyle\leq\frac{1}{\pi}\left(1+\frac{\frac{\varepsilon}{\pi}}{1-2\frac{\varepsilon}{\pi}}\right)+2ne^{-2\varepsilon^{2}n}.$$
Letting $\varepsilon=\frac{\pi}{4}n^{-\alpha}$ with $\alpha\in(0,\frac{1}{2})$, we have
$$\displaystyle\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\right]$$
$$\displaystyle\leq\frac{1}{\pi}+\frac{1}{4\pi}\frac{n^{-\alpha}}{1-\frac{n^{-\alpha}}{2}}+2ne^{-\frac{\pi^{2}}{8}n^{1-2\alpha}}$$
Now, using the fact that
$$\displaystyle\forall x\geq 1,\;\forall\alpha\in(0,\tfrac{1}{2}),\quad x^{2}e^{-\frac{\pi^{2}}{8}x^{1-2\alpha}}\leq\left(\frac{16}{\pi^{2}(1-2\alpha)}\right)^{\frac{2}{1-2\alpha}},$$
we obtain
$$\displaystyle\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\right]\leq\frac{1}{\pi}+\frac{1}{4\pi}\frac{n^{-\alpha}}{1-\frac{n^{-\alpha}}{2}}+\frac{2}{n}\left(\frac{16}{\pi^{2}(1-2\alpha)}\right)^{\frac{2}{1-2\alpha}}$$
$$\displaystyle\leq\frac{1}{\pi}+\frac{n^{-\alpha}}{\pi}+\frac{2n^{-\alpha}}{\pi}\left(\frac{16}{\pi^{2}(1-2\alpha)}\right)^{\frac{2}{1-2\alpha}}=\frac{1+C_{\alpha,\pi}n^{-\alpha}}{\pi},$$
with $C_{\alpha,\pi}=1+2\left(\frac{16}{\pi^{2}(1-2\alpha)}\right)^{\frac{2}{1-2\alpha}}$ as in the statement.
∎
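Since $\hat{\pi}$ is binomial, the expectation $\mathbb{E}[\mathds{1}_{\hat{\pi}>0}/\hat{\pi}]$ bounded in Lemma 3 can also be computed exactly, which makes the (loose) bound easy to check numerically. A minimal sketch, with arbitrary made-up values of $n$ and $\pi$:

```python
from math import comb

def inv_pi_hat_mean(n, p):
    """Exact E[1_{pi_hat>0} / pi_hat] for pi_hat = Binomial(n, p) / n."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * n / k
               for k in range(1, n + 1))

def lemma3_bound(n, p, alpha=0.25):
    """Right-hand side (1 + C_{alpha,pi} n^{-alpha}) / pi of Lemma 3."""
    c = 1 + 2 * (16 / (p**2 * (1 - 2 * alpha)))**(2 / (1 - 2 * alpha))
    return (1 + c * n**(-alpha)) / p

n, p = 100, 0.5
exact = inv_pi_hat_mean(n, p)
print(exact, lemma3_bound(n, p))  # exact value (close to 1/p) vs. the loose bound
```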
Lemma 4 (Variance inequality).
Consider the Horvitz-Thompson estimator (Definition 1) and the difference-in-means estimator (Definition 2), with an internally valid randomized controlled trial of size $n$ (Assumption 2). Then the asymptotic variance of the difference-in-means is always smaller than or equal to that of the Horvitz-Thompson estimator, namely
$$V_{\text{\tiny DM},\infty}=V_{\text{\tiny HT}}-\left(\sqrt{\frac{1-\pi}{\pi}}\mathbb{E}_{\text{\tiny R}}[Y^{(1)}]+\sqrt{\frac{\pi}{1-\pi}}\mathbb{E}_{\text{\tiny R}}[Y^{(0)}]\right)^{2}\leq V_{\text{\tiny HT}}.$$
In addition, using the previous inequality, Lemma 2, and Lemma 3, one can bound the finite-sample variance of the difference-in-means:
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM},n}\right]$$
$$\displaystyle\leq\frac{1}{n}\left(\frac{\operatorname{Var}\left[Y^{(1)}\right]}{\pi}+\frac{\operatorname{Var}\left[Y^{(0)}\right]}{1-\pi}\right)+\mathcal{O}\left(n^{-5/4}\right)$$
$$\displaystyle\leq\frac{V_{\text{\tiny HT}}}{n}+\mathcal{O}\left(n^{-5/4}\right).$$
Proof.
Asymptotic inequality
Recall that,
$$V_{\text{\tiny HT}}=\mathbb{E}\left[\frac{\left(Y^{(1)}\right)^{2}}{\pi}\right]+\mathbb{E}\left[\frac{\left(Y^{(0)}\right)^{2}}{1-\pi}\right]-\tau^{2}.$$
Noting that,
$$\tau^{2}=\left(\mathbb{E}\left[Y^{(1)}-Y^{(0)}\right]\right)^{2}=\mathbb{E}\left[Y^{(1)}\right]^{2}+\mathbb{E}\left[Y^{(0)}\right]^{2}-2\mathbb{E}\left[Y^{(1)}\right]\mathbb{E}\left[Y^{(0)}\right],$$
and that for any $a\in\{0,1\}$,
$$\operatorname{Var}\left[Y^{(a)}\right]=\mathbb{E}\left[\left(Y^{(a)}\right)^{2}\right]-\mathbb{E}\left[Y^{(a)}\right]^{2},$$
allows to obtain,
$$\displaystyle V_{\text{\tiny HT}}$$
$$\displaystyle=\frac{\operatorname{Var}\left[Y^{(1)}\right]}{\pi}+\frac{\operatorname{Var}\left[Y^{(0)}\right]}{1-\pi}-(1-\frac{1}{\pi})\mathbb{E}\left[Y^{(1)}\right]^{2}-(1-\frac{1}{1-\pi})\mathbb{E}\left[Y^{(0)}\right]^{2}+2\mathbb{E}\left[Y^{(1)}\right]\mathbb{E}\left[Y^{(0)}\right]$$
$$\displaystyle=V_{\text{\tiny DM},\infty}+\left(\sqrt{\frac{1-\pi}{\pi}}\mathbb{E}_{\text{\tiny R}}[Y^{(1)}]+\sqrt{\frac{\pi}{1-\pi}}\mathbb{E}_{\text{\tiny R}}[Y^{(0)}]\right)^{2}.$$
Finite sample inequality
Recall the finite sample variance of the difference-in-means from Lemma 2, and using the inequality from Lemma 3,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM},n}\right]$$
$$\displaystyle=\mathbb{E}\left[Y_{i}^{(1)}\right]^{2}(1-\pi)^{n}+\mathbb{E}\left[Y_{i}^{(0)}\right]^{2}\pi^{n}-\left(\mathbb{E}\left[Y_{i}^{(1)}\right](1-\pi)^{n}+\mathbb{E}\left[Y_{i}^{(0)}\right]\pi^{n}\right)^{2}$$
$$\displaystyle\qquad+\frac{1}{n}\left(\mathbb{E}\left[\frac{\mathds{1}_{\hat{\pi}>0}}{\hat{\pi}}\right]\operatorname{Var}\left[Y_{i}^{(1)}\right]+\mathbb{E}\left[\frac{\mathds{1}_{(1-\hat{\pi})>0}}{1-\hat{\pi}}\right]\operatorname{Var}\left[Y_{i}^{(0)}\right]\right)$$
$$\displaystyle\leq\mathbb{E}\left[Y_{i}^{(1)}\right]^{2}(1-\pi)^{n}+\mathbb{E}\left[Y_{i}^{(0)}\right]^{2}\pi^{n}-\left(\mathbb{E}\left[Y_{i}^{(1)}\right](1-\pi)^{n}+\mathbb{E}\left[Y_{i}^{(0)}\right]\pi^{n}\right)^{2}$$
$$\displaystyle\qquad+\frac{1}{n}\left(\frac{1+C_{1/4,\pi}n^{-\frac{1}{4}}}{\pi}\operatorname{Var}\left[Y_{i}^{(1)}\right]+\frac{1+C_{1/4,1-\pi}n^{-\frac{1}{4}}}{1-\pi}\operatorname{Var}\left[Y_{i}^{(0)}\right]\right),$$
where Lemma 3 is applied with $\alpha=1/4$ and we recall that $C_{1/4,\pi}=1+2\left(\frac{32}{\pi^{2}}\right)^{4}$. Note that, at this stage, it is possible to write that,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM},n}\right]$$
$$\displaystyle\leq\frac{1}{n}\left(\frac{\operatorname{Var}\left[Y^{(1)}\right]}{\pi}+\frac{\operatorname{Var}\left[Y^{(0)}\right]}{1-\pi}\right)+\mathcal{O}\left(n^{-5/4}\right).$$
(61)
But the overall goal here is to compare $\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM},n}\right]$ with $\operatorname{Var}\left[\hat{\tau}_{\text{\tiny HT},n}\right]$. Using the asymptotic decomposition above,
$$\displaystyle\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM},n}\right]\leq\operatorname{Var}\left[\hat{\tau}_{\text{\tiny HT},n}\right]-\underbrace{\frac{1}{n}\left(\sqrt{\frac{1-\pi}{\pi}}\mathbb{E}_{\text{\tiny R}}[Y^{(1)}]+\sqrt{\frac{\pi}{1-\pi}}\mathbb{E}_{\text{\tiny R}}[Y^{(0)}]\right)^{2}}_{\geq 0}$$
$$\displaystyle\qquad+\frac{1}{n}\left(\frac{C_{1/4,\pi}n^{-\frac{1}{4}}}{\pi}\operatorname{Var}\left[Y_{i}^{(1)}\right]+\frac{C_{1/4,1-\pi}n^{-\frac{1}{4}}}{1-\pi}\operatorname{Var}\left[Y_{i}^{(0)}\right]\right)$$
$$\displaystyle\qquad+\mathbb{E}\left[Y_{i}^{(1)}\right]^{2}(1-\pi)^{n}+\mathbb{E}\left[Y_{i}^{(0)}\right]^{2}\pi^{n}-\left(\mathbb{E}\left[Y_{i}^{(1)}\right](1-\pi)^{n}+\mathbb{E}\left[Y_{i}^{(0)}\right]\pi^{n}\right)^{2}.$$
Since the underbraced term is nonnegative and all remaining correction terms are $\mathcal{O}\left(n^{-5/4}\right)$, this yields $\operatorname{Var}\left[\hat{\tau}_{\text{\tiny DM},n}\right]\leq\operatorname{Var}\left[\hat{\tau}_{\text{\tiny HT},n}\right]+\mathcal{O}\left(n^{-5/4}\right)$.
∎
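The algebraic decomposition at the heart of this proof, $V_{\text{\tiny HT}}=V_{\text{\tiny DM},\infty}+\big(\sqrt{\tfrac{1-\pi}{\pi}}\mathbb{E}[Y^{(1)}]+\sqrt{\tfrac{\pi}{1-\pi}}\mathbb{E}[Y^{(0)}]\big)^{2}$, can be checked numerically for arbitrary moments; the moment values below are made up.

```python
from math import sqrt

# Numerical check of the decomposition used in the proof of Lemma 4:
# V_HT - V_DM,inf should equal the squared "gap" term, for any moments.
def v_ht(pi, m1, m0, v1, v0):
    # V_HT = E[(Y1)^2]/pi + E[(Y0)^2]/(1-pi) - tau^2, with E[(Ya)^2] = va + ma^2
    tau = m1 - m0
    return (v1 + m1**2) / pi + (v0 + m0**2) / (1 - pi) - tau**2

def v_dm(pi, v1, v0):
    # Asymptotic variance of the difference-in-means
    return v1 / pi + v0 / (1 - pi)

pi, m1, m0, v1, v0 = 0.3, 2.0, -1.0, 1.5, 0.7   # arbitrary hypothetical moments
gap = (sqrt((1 - pi) / pi) * m1 + sqrt(pi / (1 - pi)) * m0)**2
print(v_ht(pi, m1, m0, v1, v0) - v_dm(pi, v1, v0), gap)  # equal up to rounding
```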
D.5 Post-stratification estimator
The post-stratified estimator (see Definition 8) is an estimator of the average treatment effect from an RCT sample. The principle is to divide the RCT sample into strata, to compute the difference-in-means within each stratum, and then to average these stratum-level estimates, weighting each by its stratum size. Indeed, the post-stratification estimator introduced in Definition 8 can be rewritten as follows.
$$\displaystyle\hat{\tau}_{\text{\tiny PS},n}=\sum_{x\in\mathds{X}}\frac{n_{x,1}+n_{x,0}}{n}\left(\frac{1}{n_{x,1}}\sum_{A_{i}=1,X_{i}=x}Y_{i}-\frac{1}{n_{x,0}}\sum_{A_{i}=0,X_{i}=x}Y_{i}\right),\quad\text{where }n_{x,a}=\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\mathbbm{1}_{A_{i}=a}.$$
Therefore, the post-stratification estimator can be understood as a weighted estimate of each strata level difference-in-means estimates,
$$\displaystyle\hat{\tau}_{\text{\tiny PS},n}=\sum_{x\in\mathds{X}}\frac{n_{x}}{n}\hat{\tau}_{\text{\tiny DM},n_{x}},\quad\text{where }n_{x}=\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}.$$
Proof.
Recalling the definition of $\hat{\pi}_{n}(x)$ (Definition 7) and denoting $n_{x,a}=\sum_{i=1}^{n}\mathbbm{1}_{X_{i}=x}\mathbbm{1}_{A_{i}=a}$,
$$\displaystyle\hat{\tau}_{\text{\tiny PS},n}=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{A_{i}Y_{i}}{\hat{\pi}_{n}(X_{i})}-\frac{(1-A_{i})Y_{i}}{1-\hat{\pi}_{n}(X_{i})}\right)$$
$$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{A_{i}Y_{i}}{n_{X_{i},1}/n_{X_{i}}}-\frac{(1-A_{i})Y_{i}}{n_{X_{i},0}/n_{X_{i}}}\right)\qquad\text{(Definition 7)}$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{1}{n}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}\left(\frac{A_{i}Y_{i}}{n_{x,1}/n_{x}}-\frac{(1-A_{i})Y_{i}}{n_{x,0}/n_{x}}\right)\qquad\text{(categorical covariates)}$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{n_{x}}{n}\sum_{i=1}^{n}\mathds{1}_{X_{i}=x}\left(\frac{A_{i}Y_{i}}{n_{x,1}}-\frac{(1-A_{i})Y_{i}}{n_{x,0}}\right)\qquad\text{(re-arranging }n_{x}\text{)}$$
$$\displaystyle=\sum_{x\in\mathds{X}}\frac{n_{x}}{n}\hat{\tau}_{\text{\tiny DM},n_{x}}.$$
∎
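The identity above is easy to illustrate numerically. The sketch below, on synthetic data with hypothetical stratum labels, computes the IPW form with $\hat{\pi}_{n}(x)=n_{x,1}/n_{x}$ and the $n_{x}/n$-weighted per-stratum difference-in-means; they agree to floating-point precision.

```python
import random

# Synthetic illustration (made-up data) of the post-stratification identity:
# the IPW form with pi_hat(x) = n_{x,1}/n_x equals the n_x/n-weighted
# average of per-stratum difference-in-means estimates.
random.seed(1)
n = 200
X = [random.choice(["a", "b", "c"]) for _ in range(n)]
A = [1 if random.random() < 0.5 else 0 for _ in range(n)]
Y = [random.gauss(1.0 if a else 0.0, 1.0) for a in A]

def counts(x, a):
    return sum(1 for xi, ai in zip(X, A) if xi == x and ai == a)

# IPW form with estimated stratum-level propensity pi_hat(x)
ipw = sum(a * y / (counts(x, 1) / (counts(x, 1) + counts(x, 0)))
          - (1 - a) * y / (counts(x, 0) / (counts(x, 1) + counts(x, 0)))
          for x, a, y in zip(X, A, Y)) / n

# Weighted per-stratum difference-in-means
ps = 0.0
for x in set(X):
    n1, n0 = counts(x, 1), counts(x, 0)
    y1 = sum(y for xi, a, y in zip(X, A, Y) if xi == x and a == 1) / n1
    y0 = sum(y for xi, a, y in zip(X, A, Y) if xi == x and a == 0) / n0
    ps += (n1 + n0) / n * (y1 - y0)

print(ipw, ps)  # identical up to floating-point rounding
```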
The post-stratified estimator is extensively detailed in Miratrix et al. (2013), though largely focused on inference for a finite population (except in their Section 5). In particular, the variance of the post-stratified estimator under a Bernoulli or a completely randomized design is given in Miratrix et al. (2013) (see their Equation (16)). Imai et al. (2008) also present derivations comparing the variance of a difference-in-means with that of a post-stratified estimator, quantifying the gain in precision (see Appendix A).
Appendix E (Non-exhaustive) Review of the different IPSW versions in the literature
Within the generalization literature, the IPSW can be found under slightly different forms, for example with $\pi$ estimated or not, and with or without normalization. Here, to help the reader navigate, we reference some of the different formulas found in the literature and in implementations.
Large-scale horizontal flows in the solar photosphere
III. Effects on filament destabilization
Th. Roudier (1), M. Švanda (2,3), N. Meunier (1,4), S. Keil (5), M. Rieutord (1), J.M. Malherbe (6), S. Rondi (1), G. Molodij (6), V. Bommier (7), and B. Schmieder (6)
1 Laboratoire d’Astrophysique de l’Observatoire Midi-Pyrénées, Université Paul Sabatier Toulouse III, CNRS, 57 Avenue d’Azeirex, 65000 Tarbes, France
2 Astronomical Institute, Academy of Sciences of the Czech Republic, Fričova 298, 25165 Ondřejov, Czech Republic
3 Astronomical Institute, Charles University, V Holešovičkách 2, 18200 Prague, Czech Republic
4 Laboratoire d’Astrophysique, Observatoire de Grenoble, Université Joseph Fourier, BP 53, 38041 Grenoble cedex 9, France
5 National Solar Observatory, Sacramento Peak, Sunspot, NM 88349, USA
6 LESIA, Observatoire de Paris, Section de Meudon, 92195 Meudon, France
7 LERMA, Observatoire de Paris, Section de Meudon, 92195 Meudon, France
(Received November 26, 2020/ Submitted )
Key Words.:
The Sun: Atmosphere – The Sun: Filaments – The Sun: Magnetic fields
††offprints: Th. Roudier
Abstract
Context:
Aims: We study the influence of large-scale photospheric motions on the destabilization of an eruptive filament,
observed on October 6, 7, and 8, 2004, as part of an international observing campaign (JOP 178).
Methods: Large-scale horizontal flows were investigated from a series of MDI full-disc Dopplergrams and magnetograms.
From the Dopplergrams, we tracked supergranular flow patterns using the local correlation tracking (LCT) technique.
We used both LCT and manual tracking of isolated magnetic elements to obtain horizontal velocities from magnetograms.
Results: We find that the measured flow fields obtained by the different methods are well-correlated on large scales.
The topology of the flow field changed significantly during the filament eruptive phase,
suggesting a possible coupling between the surface flow field and the coronal magnetic field. We measured an
increase in the shear below the point where the eruption starts and a decrease in shear after the eruption.
We find a pattern in the large-scale horizontal flows at the solar surface that interacts with differential rotation.
Conclusions: We conclude that there is probably a link between changes in surface flow and the disappearance of the eruptive filament.
1 Introduction
Dynamic processes on the Sun are linked to the evolution of the magnetic field as it is influenced by
the different layers from the convection zone to the solar atmosphere. In the photosphere,
magnetic fields are subject to diffusion due to supergranular flows and to the large-scale motions of
differential rotation and meridional circulation. The action of these surface motions on magnetic fields
plays an important role in the formation of large-scale filaments (Mackay and Gaizauskas 2003). In particular,
the magnetic fields that are transported across the solar surface can be sheared by dynamic surface
motions, which in turn result in shearing of the coronal field. This corresponds to the formation of coronal
flux ropes in models, which can be compared with H$\alpha$ filament observations (Mackay and van Ballegooijen 2006b).
Many theoretical models try to reproduce the basic structure and the stability of
filaments by taking surface motions into account, as quoted above. These models predict that
magnetic flux ropes involved in solar filament formation may be stable for many days and then
suddenly become unstable, resulting in filament eruption. Observations show that twisting motions are
a very common characteristic of eruptive prominences (see for example Patsourakos and Vial 2002). However,
it is still unknown whether the magnetic flux ropes emerge already twisted or if it is only the
photospheric motion that drives the twisting of the filament magnetic
field. The destabilization of the filament can also be linked to oscillations (Pouget 2006).
Therefore, the mechanisms that drive filament disappearance remain uncertain.
Destabilization can come from the interior of the structure or by means of an outside flare.
In a previous paper (Rondi et al. 2007, henceforth Paper I), local horizontal photospheric flows were measured
at high spatial resolution (0.5″) in the vicinity of and beneath a filament before and during the
filament’s eruptive phases (the international JOP178 campaign). It was shown that the disappearance of the filament
originates in a filament gap. Both parasitic and normal magnetic polarities were continuously swept into the gap
by the diverging supergranular flow. We also observed the interaction of opposite
polarities in the same region, which could be a candidate for initiating the destabilization of the filament by
causing a reorganization of the magnetic field.
In this paper we investigate the large-scale photospheric flows at moderate spatial resolution (2″)
beneath and in the vicinity of the same eruptive filament. The observation and coalignment between data
from various instruments are explained in Section 2. In Section 3, we describe the different methods of
determining the flow field on the Sun's surface. The large-scale flows associated with the filament are shown in Section 4.
The properties of these flows before and after the filament eruption are described in Section 5.
In Section 6, we investigate the topology of horizontal flows in the filament area over the 3 days
around the filament eruption. A discussion of the results and general conclusions can be found in Section 7.
2 Observations
During three consecutive days of the JOP 178 campaign, Oct 6, 7, and 8, 2004
(http://gaia.bagn.obs-mip.fr/jop178/index.html), we observed the evolution of a filament that was close to
the central meridian. We also observed the photospheric flows directly below the filament and in its immediate
vicinity. The filament extends from $-$5° to $-$30° in latitude. A filament eruption
was observed on October 7, 2004, at 16:30 UT at multiple wavelengths from ground and space instruments.
The eruption produced a coronal mass ejection (CME) at approximately 19:00 UT that was observed with LASCO-2/SOHO
and two ribbon flares observed with the SOHO/EIT. MDI/SOHO longitudinal magnetic field and Doppler
velocity were recorded with a cadence of one minute during the 3 days (see Table 1).
The Air Force O-SPAN telescope located at the National Solar Observatory/Sacramento Peak provided a full-disc
H$\alpha$ image every minute. The pixel sizes were 1.96″ for
MDI magnetograms and Dopplergrams and 1.077″ for O-SPAN H$\alpha$ images.
Table 1 summarises the characteristics of all the observations of JOP 178 used in our analysis.
We coaligned all the data obtained by the different instruments (see Paper I for a complete description of the
co-alignment procedure). Our primary goal was to derive
the horizontal flow field below and around the filament. Co-alignment between SOHO/MDI magnetograms and O-SPAN
data was accomplished by adjusting the chromospheric network visible in H$\alpha$ (O-SPAN) and the amplitude of
longitudinal MDI magnetograms to an accuracy of one pixel (1.96″).
The general magnetic context before and after the filament eruption is shown in Fig. 1.
3 Determination of the photospheric flows
Horizontal flows on the solar surface may be measured through the proper motion of the plasma
or by the effects of the plasma on the magnetic structures.
3.1 Dopplergram processing
In order to map the horizontal component of the large-scale photospheric plasma velocity fields, we
applied local correlation tracking (LCT; November 1986) to a set of full-disc Dopplergrams obtained
by the MDI instrument onboard SoHO. The aim of this method is to track the proper motion of supergranules
that are clearly detectable on Dopplergrams everywhere except for the disk centre. The
Dopplergrams were processed following the procedure described in Švanda et al. (2006) with slight modifications
for our data. Hereafter we refer to this method as LCT-Doppler.
Initially we suppress the solar $p$-mode oscillations using a weighted temporal average (see Hathaway 1988) described by the formula:
$$w(\Delta t)=\exp\left[-\frac{(\Delta t)^{2}}{2a^{2}}\right]-\exp\left[-\frac{b^{2}}{2a^{2}}\right]\left(1+\frac{b^{2}-(\Delta t)^{2}}{2a^{2}}\right)$$
(1)
where $\Delta t$ is the time interval between a given frame and the central one (in minutes), $b=16$ min and $a=8$ min.
The normalized version of this function has been applied to the data.
This filter reduces the amplitudes of the solar oscillations in the 2–4 mHz frequency band by a factor of
more than five hundred. The oscillations in each frame were reduced using a window of 31 successive frames.
The different time series were tracked using the Carrington rotation rate (with an angular velocity of
13.2 degrees per day), so that all the frames have the same heliographic longitude of the central meridian
($l_{0}=62.24\,^{\circ}$). The tracked data were remapped into a sinusoidal pseudocylindrical coordinate system
(also known as the Sanson-Flamsteed grid) to reduce the distortion of structures in the Dopplergrams caused by
the geometrical projection to the disc. The sinusoidal projection is suitable to describe the behaviour on the
large scales. Tracked and remapped time series then undergo a $k$–$\omega$ filtering
with cut-off velocity of 1500 m s${}^{-1}$ to suppress the noise coming from the groups of granules and the
change of contrast of supergranular structures due to the solar rotation. The individual frames
were apodized by 10% using a smooth function, the same apodization took place in the temporal domain.
The resulting data series of tracked, remapped and filtered frames were then ready for tracking.
The LCT method applied to full-disc Dopplergrams is characterised by a Gaussian correlation window
($FWHM=60^{\prime\prime}$) and a time lag between correlated frames of 1 hour (basically 60 frames).
In all cases, one half of the intervals were before the eruption and the second half after the eruption.
All the pairs of correlated frames in the studied intervals were averaged to increase the signal to
numerical noise ratio.
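The LCT principle used above can be sketched in a few lines: for a Gaussian-apodized local window, find the displacement that maximizes the cross-correlation between two frames. The sketch below is a toy illustration restricted to integer shifts; the frames, window half-size, FWHM, and shift are all made up and bear no relation to the MDI data.

```python
import numpy as np

# Toy illustration of local correlation tracking (LCT): recover a known
# integer shift between two synthetic frames using a Gaussian-weighted
# windowed cross-correlation. All parameters here are made up.
rng = np.random.default_rng(0)
frame1 = rng.standard_normal((64, 64))
# Smooth along rows so the correlation peak is not a single pixel wide
frame1 = np.convolve(frame1.ravel(), np.ones(5) / 5, mode="same").reshape(64, 64)
true_shift = (2, 3)                                   # (rows, columns)
frame2 = np.roll(frame1, true_shift, axis=(0, 1))     # second "time step"

def lct_shift(f1, f2, center, half=12, fwhm=8, max_shift=5):
    """Best integer (dy, dx) matching the window of f1 around `center` in f2."""
    cy, cx = center
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    sigma = fwhm / 2.355                              # FWHM -> Gaussian sigma
    w = np.exp(-(y**2 + x**2) / (2 * sigma**2))       # Gaussian apodization
    win1 = f1[cy - half:cy + half + 1, cx - half:cx + half + 1] * w
    best, best_c = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            win2 = f2[cy + dy - half:cy + dy + half + 1,
                      cx + dx - half:cx + dx + half + 1] * w
            c = np.sum(win1 * win2)                   # correlation score
            if c > best_c:
                best, best_c = (dy, dx), c
    return best

print(lct_shift(frame1, frame2, (32, 32)))  # recovers (2, 3)
```

In practice the displacement is refined to sub-pixel accuracy by interpolating around the correlation maximum, and the procedure is repeated over a grid of window centers to build the full flow map.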
3.2 Magnetogram processing
The second method by which we determine motions on the solar surface used the full-disc magnetograms
obtained by MDI/SoHO. To reduce the distortion of structures seen in the magnetograms caused by the
geometrical projection to the disc, we applied Sanson-Flamsteed grid projection. To measure the differential
motions of features on the solar surface, the data were aligned on a band along the equator. Due to the numerous
magnetic structures to be tracked manually, the field of view was limited to $-$35° to 20° in longitude
and $-$3° to $-$30° in latitude. The displacement of the longitudinal magnetic structures
visible on the magnetograms (both positive and negative) was determined with two different methods:
1.
The first approach, hereafter named manu-B, was to manually locate each magnetic structure
in each magnetogram to determine its trajectory and its horizontal velocity. Once the velocities in the
field of view were determined, since the magnetic structures do not sample the field of view uniformly, we applied a
reconstruction of the velocity field based on multi-resolution analysis described by Rieutord et al. (2007),
which allows us to limit the effects of the noise and error propagation.
2.
The second approach, named LCT-B, was to apply the LCT on magnetic structures with absolute values greater
than 25 Gauss to reduce the noise. The LCT method applied to these magnetograms used a Gaussian correlation
window ($FWHM=60^{\prime\prime}$) and a time lag between correlated frames of 1 hour.
The horizontal flow field measured using the plasma (Dopplergrams) differs from that measured using
magnetic features. This is because the magnetic field structures are not actually passive scalars; they
can interact with the plasma and are also constrained by their interactions in the upper atmosphere.
It is therefore not surprising to observe small differences in the velocity fields derived from
the various tracers (Dopplergram or magnetic structures).
4 Photospheric flow pattern below and around the filament
In this Section we describe the flows associated with the filament eruption, with particular emphasis on the
filament evolution and properties of the mean East–West (zonal) velocities.
4.1 Flow fields
The flow fields in the vicinity of the filament obtained from 13-hour averages of velocities found using
the three different methods described above are shown in Fig. 2. Inspection of these
maps shows that all three methods provide similar large-scale velocity patterns. The amplitudes of
the flows are given in Table 2. The LCT method applied to magnetic structures with amplitude greater
than 25 Gauss appears to underestimate the amplitude of the flows obtained from the two other methods.
This is partially because of the large window ($FWHM=60^{\prime\prime}$) used in the LCT method, combined with
the uneven distribution of the magnetic structures in the field of view. The correlation of the zonal component,
$v_{x}$, between the three methods is between 0.48 and 0.40, while the correlation for the meridian component, $v_{y}$, is between
0.21 and 0.11. The low correlation coefficients for the meridian component probably occur because
the main direction of the flows in our field of view is zonal and the amplitude of the meridian component is small.
We estimate that the directional error in our measurements of the velocities is about $\pm$5°.
A small error in determining the direction of the flow, for example 10°, can affect the amplitude of the
meridian component by a factor two. This error would greatly decrease the correlation in the meridian component
of the flow. However, as seen in Fig. 2, the general trend is similar in the velocity fields resulting from
the three different methods. In particular, the north–south stream that disturbs the differential rotation around
$-$25° is easily visible, and most of the large-scale features of the vector orientations can be identified.
We note that the lowest correlation coefficient is found between LCT-B and the other methods manu-B,
LCT-Doppler. This is because the LCT-B method clearly underestimates the flow amplitudes. In
agreement with the previous results of Schuck (2006), we conclude that this method is probably not suitable
for accurately estimating horizontal velocities of magnetic footpoints.
The correlation is higher in the longitudinal region between heliographic coordinates of $-$5°
and 20° where it is between 0.58 and 0.41 for $v_{x}$ and between 0.34 and 0.17 for $v_{y}$.
In this region, the large-scale flows are well-structured and show both converging and diverging velocity patterns.
We observe in particular a large-scale stream in the north–south direction parallel to the filament located about
10° to the east and between $-$20° and $-$30° in latitude and 58° and 47° in longitude.
This flow stream is clearly visible in Fig. 2 (upper and middle panels), and its dynamics can be seen
at http://gaia.bagn.obs-mip.fr/jop178/oct7/mdi/7oct-mdi.htm. Around $-$20° in latitude, the velocities
of the differential rotation amplitude appears to dominate. However, in both measurements we observe
that the north–south large-scale stream on the eastern edge of the filament disturbs the regular differential
rotation. The north–south stream is located close to where the filament eruption begins
(longitude $l=56^{\circ}$, latitude $b=-26^{\circ}$ in Carrington coordinates). In the manu-B and LCT-B
methods, the stream appears closer to the filament. The amplitudes of the southward motions are
31.2 m s${}^{-1}$ (manu-B), 40 m s${}^{-1}$ (LCT-Doppler), and 13 m s${}^{-1}$ (LCT-B), which are close to
the mean observed flows (see Table 2).
Below the latitude of $-$20°, the combination of
differential rotation and the north–south stream cause opposite polarities to move closer together, which
strongly increases the tension in the magnetic field very close to the starting point of the filament eruption. Our
measurements show a good agreement between the manu-B and LCT-Doppler methods.
4.2 Filament evolution
The filament’s evolution can be seen in Fig. 3. The north–south stream flow visible
in Fig. 6 (left) crosses over the part of the filament labeled A. The arrow on Fig. 3
indicates the same fixed point (325″,167″) in all of the subframes. We observe a general southward
motion of both the A and B segments of the filament. More precisely, we measure a tilt of these
two filament segments at the point of their separation. Between 16:07 UT and 16:58 UT, the long
axis of segment A rotates by an angle of 12° (clockwise) relative
to its western end, and the long axis of segment B of the filament rotates by an angle of 5.5°
(clockwise) relative to its western end. These rotations are compatible with the surface flow shown in
Fig. 6 (left) and in particular the north–south stream flow.
For the present filament, we did not find a singular pivot point, i.e. a point that differential rotation does not displace with respect
to the flare location (Mouradian 2007). The southern segment of the filament does not reform
after the eruption. This sudden disappearance shows that there was an important
change in the Sun itself and not only in the solar atmosphere (Mouradian 2007).
4.3 Mean zonal velocities
Figure 4 shows plots of the mean zonal velocities resulting from the three
different velocity determination methods as functions of latitude. The mean zonal velocities from
the manu-B method clearly show the differential rotation profile with a plateau around $-$25°,
which corresponds to the effects of the north–south stream discussed above. A similar profile, but
with a lower amplitude, is visible in the mean zonal velocities obtained from the LCT-B method. These two methods
measure the displacement of the magnetic structures on the Sun’s surface. The LCT-Doppler method, which measures
the motion of the photospheric plasma, shows a mean zonal velocity profile in which
differential rotation is clearly visible, along with a strong secondary maximum visible at $-$23° of latitude.
The secondary maximum indicates a decrease in the amplitude of the $v_{x}$ component because
the flows in that region are oriented more in the north–south direction. That is partly due
to the presence of the north–south stream described above and to the local organisation
of the flow. In particular, converging and diverging flows in this region seem to have a north–south
orientation.
To distinguish the effects of the north–south stream from those of differential rotation,
we computed the mean zonal velocities, both with respect to the disc-centre velocity, in two longitudinal belts:
from $-$25° to $-$17° in longitude, centred on the region where the north–south stream is present, and from 0° to 20°
in longitude, where differential rotation dominates.
The mean zonal velocities in the longitudinal belt where the north–south stream is visible clearly exhibit
a secondary maximum (Fig. 5), indicating that the solar rotation rate at this location is closer to
that of the equator. The mean zonal velocities computed in the standard differential rotation
belt show the classic latitude profile with a constant decrease down to the low latitudes.
As a consequence, the plasma in the north–south stream, which transports magnetic structures, rotates
faster at about $-$23° latitude than do the
magnetic structures located in the longitude belt between 0° and 20°. The combination of these different
surface motions (stream and differential rotation) tends to bring together fields with opposite polarities, and
this in turn constrains the magnetic field lines.
We noted that the location of the starting point of the filament eruption is around $-$26°
in latitude, which is very close to the secondary maximum in the mean zonal velocity. Thus surface
motions that bring opposite polarities together may play a role in triggering the filament eruption.
5 Flow fields before and after the eruption
In this section we discuss the properties of the flow field just before and after
the filament eruption, which occurred at about 16:30 UT. Due to the length of the sequence
and because it is easier to accurately track the flow using Doppler images than it is to track the small
number of magnetic features above 25 Gauss, only the measurements obtained with the LCT-Doppler
are used in this section.
At the point where the filament eruption begins ($l=56^{\circ}$, $b=-26^{\circ}$ in Carrington coordinates),
we detected a steepening of the gradient in the differential rotation curve. During the eruption, the gradient
flattens out and a dip forms. While differential rotation curves describe mean zonal velocities over most of the
disc, their change in gradient signifies a change in the stretching forces
influencing the magnetic field loops over the area under study. We can express the surface rotation
as an even power series in $\sin\phi$: $R=A+B\sin^{2}\phi+C\sin^{4}\phi$, where $A$ is the angular velocity
of the equatorial rotation and $\phi$ is the heliographic latitude. From the data we find that the fitted constants
(with their errors in parentheses) are, before the eruption, $A=13.375(0.010)$, $B=-1.46(0.10)$, $C=-1.42(0.20)$, and,
after the eruption, $A=13.404(0.010)$, $B=-1.78(0.10)$, $C=-1.24(0.20)$.
All of the rates are synodic, in deg day${}^{-1}$. The full-disc profiles did not change significantly from before to
after the eruption. For example, for a latitude of $-$30° the zonal velocity has values of 12.92 (resp. 12.88) deg day${}^{-1}$
($-$34 m s${}^{-1}$, resp. $-$39 m s${}^{-1}$ in the Carrington coordinate system), for a latitude of $-$20°
the values are 13.18 ($-$2 m s${}^{-1}$), resp. 13.18 ($-$3 m s${}^{-1}$), deg day${}^{-1}$. Although the parameters of
the smooth fitted curve did not change too much, the local residual with respect to the smooth curve changed at the
latitude where the filament eruption starts.
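The fitted rotation law above can be evaluated numerically. The sketch below is illustrative, not part of the original analysis: the synodic Carrington rate (`CARRINGTON_SYN`) and the solar radius are assumed reference values used to convert the synodic rate in deg day${}^{-1}$ into a zonal velocity in m s${}^{-1}$ relative to the Carrington frame, as quoted in the text.

```python
import math

R_SUN = 6.96e8           # solar radius in metres (assumed reference value)
CARRINGTON_SYN = 13.199  # synodic Carrington rate, deg/day (assumed reference value)

def rotation_rate(lat_deg, A, B, C):
    """Synodic rotation rate R = A + B*sin^2(phi) + C*sin^4(phi), in deg/day."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return A + B * s2 + C * s2 ** 2

def zonal_velocity(lat_deg, A, B, C):
    """Zonal velocity in m/s relative to the Carrington frame at latitude phi."""
    domega = rotation_rate(lat_deg, A, B, C) - CARRINGTON_SYN  # deg/day
    # convert deg/day -> rad/day, multiply by the local radius of the
    # latitude circle, and divide by seconds per day
    return math.radians(domega) * R_SUN * math.cos(math.radians(lat_deg)) / 86400.0

before = (13.375, -1.46, -1.42)  # fit parameters before the eruption
after = (13.404, -1.78, -1.24)   # fit parameters after the eruption

print(rotation_rate(-30, *before))   # ~12.92 deg/day, as quoted in the text
print(zonal_velocity(-30, *before))  # ~ -34 m/s
```

With these assumed constants, the sketch reproduces the values quoted for $-$30° latitude to within the stated rounding.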
Figure 6 displays the horizontal flows over a wide field of view measured using the
LCT-Doppler method and then averaging the resulting velocities over 3 hours, before and after the
filament eruption.
Before the eruption we can clearly see the north–south stream parallel to and about 10° east
of the filament. This stream disturbs differential rotation and brings plasma and magnetic structures to
the south. Although differential rotation tends to spread the magnetic lines to the east, the observed
north–south stream tends to shear the magnetic lines. After the eruption, only a northern segment of the filament
is visible and the north–south stream has disappeared.
To quantify the evolution of the horizontal flow before and after the filament eruption,
we computed the change in the direction of the velocity vectors.
The noise discussed in sect. 4.1, which can affect the direction of small magnitude vectors,
tends to reduce the correlation between flows. In order to mitigate this error, we computed the magnitude
weighted cosine of the direction difference (as in Švanda et al., 2007), which is robust to
the presence of noise. This quantity is given by the formula:
$$\rho_{\rm W}=\frac{\sum|{\mathbf{a}}|\,\frac{|{\mathbf{a}}\cdot{\mathbf{b}}|}{|{\mathbf{a}}||{\mathbf{b}}|}}{\sum|{\mathbf{a}}|},$$
(2)
where $\mathbf{a}$ and $\mathbf{b}$ are vector fields, ${\mathbf{a}}\cdot{\mathbf{b}}$ is their scalar (dot) product, and $|{\mathbf{a}}|$ is the magnitude.
The closer this quantity is to 1, the better the alignment between two
vector fields.
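A minimal sketch of this comparison statistic, following Eq. (2) as written (the magnitudes of the first field weight the cosine of the angle between corresponding vectors, so noisy low-magnitude vectors contribute little); the pure-Python implementation and the toy random field are illustrative, not the paper's actual pipeline.

```python
import math
import random

def magnitude_weighted_cosine(a, b):
    """rho_W of Eq. (2) for two lists of 2-D vectors (vx, vy):
    sum of |a| * |a.b|/(|a||b|), normalised by sum of |a|."""
    num = 0.0
    den = 0.0
    for (ax, ay), (bx, by) in zip(a, b):
        na = math.hypot(ax, ay)
        nb = math.hypot(bx, by)
        if na == 0.0 or nb == 0.0:
            continue  # skip degenerate vectors with undefined direction
        cos = abs(ax * bx + ay * by) / (na * nb)
        num += na * cos
        den += na
    return num / den if den else 0.0

rng = random.Random(0)
field = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(100)]
print(magnitude_weighted_cosine(field, field))  # close to 1.0 for identical fields
```

In practice the quantity is evaluated in a sliding window over the flow maps, as described below for Fig. 7.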
Figure 7 displays the magnitude-weighted cosine map computed between flows before and after the
eruption of the filament shown in Fig. 6. This map was computed by using a sliding
window with a size of 8.8″ on a side (41.2″ being the side of the data plot).
The magnitude-weighted cosine map (Fig. 7) reveals that changes
in the vicinity of the north–south velocity stream are significant, while the horizontal flow field in the remainder
of the field of view is more stable. Although variations in the flow field were expected to be more or less random
over the field of view, we observe in this particular case that most of the variations between before and after
the filament eruption are located in the north–south stream. The disappearance of the north–south stream after
the eruption could be linked to the eruption or to a natural evolution of the photospheric flows.
Figure 8 shows the flow field in more detail at the site where the eruption starts.
The shear in the zonal component at the point where the eruption starts ($l=56^{\circ}$, $b=-26^{\circ}$
in Carrington coordinates) is clearly visible and exists before and after the eruption, although the
shape of the apparent vorticity has changed. This location corresponds to the area of upflow observed
in the Meudon H$\alpha$ Dopplergram (Fig. 8 in Paper I).
We defined the shear as the difference between the mean zonal component $v_{x}$ in the areas just north
and just south of the point at which the eruption appeared to start. We obtained the mean zonal flow by
averaging over boxes 2.3° on a side located 2.9° north and south of this point.
The evolution of the shear velocity, computed as the difference between the mean flows in the two boxes, is shown as a function of
time in Fig. 9. Six 2-hour averages of the flow fields were used to create this figure. The
error bars come from the noise estimate based on synthetic data from Švanda et al. (2006).
One can see that the shear velocity increased before the eruption
and decreased after the eruption. One hour before the eruption, the shear reached the value of
(120$\pm$15) m s${}^{-1}$ over a distance of 5.2° (62 000 km in the photosphere).
After the filament eruption, we observed the restoration of ordinary differential rotation below 30° south.
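The two-box shear measurement described above can be sketched as follows. This is an illustrative toy implementation: the pixel grid, box half-width, and offset are hypothetical stand-ins for the paper's 2.3° boxes placed 2.9° north and south of the eruption point, and the row index is assumed to increase southward.

```python
def box_mean(vx_map, center, half):
    """Mean of vx over a square box of half-width `half` pixels around `center`."""
    r0, c0 = center
    vals = [vx_map[r][c]
            for r in range(r0 - half, r0 + half + 1)
            for c in range(c0 - half, c0 + half + 1)]
    return sum(vals) / len(vals)

def zonal_shear(vx_map, point, offset, half):
    """Shear = mean vx in a box just north of `point` minus the mean vx in a
    box just south of it (row index assumed to increase southward)."""
    r, c = point
    north = box_mean(vx_map, (r - offset, c), half)
    south = box_mean(vx_map, (r + offset, c), half)
    return north - south

# Toy vx map (m/s): eastward flow north of the point, westward flow south of it,
# mimicking the ~120 m/s shear measured one hour before the eruption.
vx = [[60.0] * 10 for _ in range(5)] + [[-60.0] * 10 for _ in range(5)]
print(zonal_shear(vx, (5, 5), 3, 1))  # -> 120.0
```

Applying this to each of the six 2-hour averaged flow maps yields the time series of Fig. 9.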
6 Evolution over 6, 7, and 8 October, 2004
In Fig. 10 we compare the flow field in the filament region on the day of the eruption
(October 7, 2004), with the flow fields on the preceding and following days.
We see that the topology of the flows in this region changed over the three days: the daily evolution of the mean
zonal profiles is shown in Fig. 10 (bottom right). The dashed line with triangles is the mean
zonal velocity for 6 October, 2004. This profile is relatively flat, probably because of the short time sequence, as
only 3 hours of data were available.
The differential rotation profile for October 7 shows a secondary maximum around $-$23° in latitude, as
discussed in Sect. 4.
The October 8 profile exhibits a similar trend, but with a smaller amplitude and an eastward
velocity for latitudes greater than 10°. The secondary maximum appears strongly reduced, indicating
restoration to a more regular differential rotation pattern in that zone.
One day before the eruption, shear began to form at the site where the filament eruption is triggered
($l=56^{\circ}$, $b=-26^{\circ}$ in Carrington coordinates). The north–south stream is also visible. Both
phenomena may store free energy in the coronal magnetic field configuration. The topology of the flow and
the stream are different a day after the filament eruption, suggesting that, after the disappearance of the southern
part of the filament, the conditions in the photosphere below the filament became more relaxed. This may suggest
the mutual coupling of the photospheric flow and the configuration of the coronal magnetic field. To confirm
this idea, high-cadence high-resolution images and magnetograms covering the eruption time would be needed.
7 Discussion and conclusion
Filaments or prominences are important complex structures of the solar atmosphere because
they are linked to CMEs, which can influence the Earth’s atmosphere and near-space environment. Surface motions acting
on pre-existing coronal fields play a critical role in the formation of filaments. They appear to reconfigure
existing coronal fields, by twisting and stretching them, thereby depositing energy in the
topology of the coronal magnetic field. Photospheric motions can also initiate coronal magnetic field
disruption. Surface motions play an important role in forming Type B filaments, which are located between young and
old dipoles and have a long, stable structure. This class of filaments requires surface motions to gradually
reconfigure pre-existing coronal fields.
In a previous study (Paper I), we removed all the large-scale flows in order
to focus on smaller-scale flows, such as mesogranulation and supergranulation. In this paper we have retained the
large-scale flows in order to study their influence on the triggering of a filament eruption.
The three different methods used to estimate horizontal flows, while exhibiting small differences, provided a
consistent picture of the general trends of the flow patterns.
The LCT-B method gave smaller velocity amplitudes, but the flow directions agreed quite well
with the other methods. The smaller amplitude occurs because of the small number of isolated magnetic
elements and the smoothing effect of the spatial window used by this method, which reduces the LCT-B
amplitudes by approximately a factor of two. This agrees with the previous results of Schuck (2006),
who showed that this method is not suitable for accurately estimating the magnetic footpoint velocities.
Correlation coefficients comparing velocities from the manu-B and LCT-Doppler methods are positive
and significant. General trends are very similar; however, we see many local discrepancies in the measured flow fields
due to the noise. This implies that magnetic elements (detected by the LCT-B and manu-B methods) do not
necessarily follow the plasma flow (detected by the LCT-Doppler method).
The filament eruption started at about 16:30 UT at the latitude around $-$25° where the measurements of the
horizontal flows based on Dopplergram tracking show a modification of the slope in the differential rotation
of the plasma. This behaviour is not observed in the curves obtained by tracking the
longitudinal magnetic field; both magnetic-tracking methods show a continuous slope in the differential rotation at the same place.
This seems to be a consequence of the presence of a north–south stream along the filament position, which is
easily measured by tracing Doppler structures and is only slightly visible in maps obtained by
tracking of magnetic elements.
The observed north–south stream has an amplitude of 30–40 m s${}^{-1}$. In the sequence of H$\alpha$ images that records
the filament’s evolution, the part of the filament that lies in the north–south stream
is rotated in a direction compatible with the flow direction of the stream. This behaviour suggests that the footpoints
of the filament are carried by the surface flows. The influence of the stream is strengthened by
differential rotation. We should keep in mind that the filament extends from $-$5° to $-$30°
in latitude and that the northern part of the filament is subjected to a larger rotation than the southern
part.
The north–south stream, along with a contribution from differential rotation, causes the stretching of the coronal
magnetic field in the filament and therefore contributes to destabilizing the filament. The topology of the north–south
stream changed after the filament eruption, nearly vanishing.
We have measured an increase in the zonal shear at the site where the filament eruption begins before the eruption, and its
sudden decrease after the eruption. This result suggests that the shear in the zonal
component of the flow field is the most important component of the surface flow affecting the stability
of the coronal magnetic field, and it can lead to its eruption, which in turn can drive active phenomena
such as ribbon flares and CMEs. This evolution of the shear in the flow field is probably related to the
re-orientation by 70° (or 110°) of the transverse field after the eruption, seen in the daily vector
magnetograms obtained with THEMIS (Paper I).
All of the features observed in the topology of the horizontal velocity fields at the starting-point site
could contribute to destabilizing the filament, resulting in its eruption.
The present study has only examined the flows in the vicinity of a single filament. From our data,
we propose that the stability and evolution of the filament are influenced by surface flows that carry the footpoints
of the filament. In addition, Dudík et al. (2007) constructed a linear magnetohydrostatic model of the northern part
of the filament. Their models show that the shape of the dipped field lines of the central part of the filament footpoints
closely resembles the shape of the underlying, nearby polarities. This suggests a reconnection could be taking
place between the flux of the incoming parasitic polarity and the “native” flux of the weak polarities dominant
in that part of the filament channel.
Filaments, or prominences, are important complex structures of the solar atmosphere. Several mechanisms
are probably involved in the filament eruptions: the action of surface motions
to create or increase the helicity of the flux rope (van Driel-Gesztelyi 2005, Romano 2005), reconnecting
field lines in the corona (Mackay and van Ballegooijen, 2006a), the chirality evolution of the barbs
(Su et al. 2005), and oscillations of the filament (Pouget 2006), etc.
The coronal magnetic field is generally thought to be anchored in the photosphere, and flux
transport on the solar surface (Wang et al. 1989) is the natural mechanism to explain the evolution of
filaments. Recent models of the large-scale coronal structure (Mackay and van Ballegooijen, 2006a) consider the action
of the large-scale surface motions, such as differential rotation, meridional flow, and surface diffusion
(supergranular diffusion). Recent analysis of the near-surface flows computed from Doppler imaging provided
by the MDI/SOHO instrument reveals shearing flows aligned with the neutral line (Hindman et al. 2006).
Our present observation indicates that large-scale surface flows are structured
(not uniform), showing areas of divergence or stream flows that should be taken into account in the
numerical simulations.
A better understanding of the mechanisms that lead to filament eruptions requires simultaneous
multi-wavelength and multi-spatial resolution observations (both high resolution of the filament and low resolution
of the full sun) over a wide range of latitudes.
Indeed, our previous works showed that different phenomena are observed at high resolution, such as magnetic
reconnection close to the starting location of the filament eruption (Paper I). In this paper, observing a
larger area at lower resolution, we showed that, at the same location where the filament first begins to erupt,
there is a steep gradient in differential rotation, a north–south stream, and a shear in the zonal component.
The next step in our study of the filament eruptions will be to examine the evolution of the extrapolated
coronal field from photospheric longitudinal magnetograms to determine whether the effects of surface motions
on coronal fields play a critical role in causing filament eruptions.
Acknowledgements.
This work was supported by the Centre National de la Recherche
Scientifique (C.N.R.S., UMR 5572, and UMR 8109), by the Programme National Soleil Terre (P.N.S.T.), by
the Czech Science Foundation under grant 205/04/2129, and by ESA-PECS under grant No. 8030. SOHO is a mission
of international cooperation between the ESA and NASA.
This work was supported by the European commission through the RTN programme (HPRN-CT-2002-00313).
We wish to thank ISOON/O-SPAN, SOHO/MDI, SOHO/EIT teams for their technical help.
References
Dudík, J., Aulanier, G., Schmieder, B., Bommier, V., and Roudier, Th. 2007, submitted to Sol. Phys.
Hathaway, D. H. 1988, Sol. Phys., 117, 1
Hindman, B. W., Haber, D. A., and Toomre, J. 2006, ApJ, 653, 725
Mackay, D. H., and Gaizauskas, V. 2003, Sol. Phys., 216, 121
Mackay, D. H., and van Ballegooijen, A. A. 2006a, ApJ, 641, 577
Mackay, D. H., and van Ballegooijen, A. A. 2006b, ApJ, 642, 1193
Mouradian 2007, private communication
November, L. J. 1986, Appl. Opt., 25, 392
Patsourakos, S., and Vial, J. C. 2002, Sol. Phys., 208, 253
Pouget, G. 2006, PhD Thesis, Université Paris XI Orsay
Rieutord, M., Roques, S., Roudier, Th., and Ducottet, C. 2007, A&A, in press
Romano, P., Contarino, L., and Zuccarello, F. 2005, A&A, 433, 683
Rondi, S., Roudier, Th., Molodij, G., Bommier, V., Keil, S., Sütterlin, P., Malherbe, J. M., Meunier, N., Schmieder, B., and Maloney, P. 2007, A&A, 467, 1289
Schuck, P. W. 2006, ApJ, 646, 1358
Su, J. T., Liu, Y., Zhang, H. Q., Kurokawa, H., Yurchyshyn, V., Shibata, K., Bao, X. M., Wang, G. P., and Li, C. 2005, ApJ, 630, L101
Švanda, M., Klvaňa, M., and Sobotka, M. 2006, A&A, 458, 301
Švanda, M., Zhao, J., and Kosovichev, A. G. 2007, Sol. Phys., 241, 27
van Driel-Gesztelyi, L. 2005, in Astron. and Astrophys. Space Science Library, vol. 320, Springer, Dordrecht, The Netherlands, p. 57–85
Wang, Y.-M., Nash, A. G., and Sheeley, N. R. 1989, Science, 245, 712
A Categorification of $\displaystyle{\mathfrak{q}}(2)$-crystals
Dimitar Grantcharov${}^{1}$, Ji Hye Jung${}^{2}$, Seok-Jin
Kang${}^{3}$, Myungho Kim
Department of Mathematics
University of Texas at Arlington
Arlington, TX 76021, USA
[email protected]
Department of Mathematical Sciences
Seoul National University
Seoul 151-747, Korea
[email protected]
Department of Mathematical Sciences
and
Research Institute of Mathematics
Seoul National University
Seoul 151-747, Korea
[email protected]
School of Mathematics
Korea Institute for Advanced Study
Seoul, Korea
[email protected]
Abstract.
We provide a categorification of $\mathfrak{q}(2)$-crystals on the
singular $\mathfrak{gl}_{n}$-category ${\mathcal{O}}_{n}$.
Our result extends the $\mathfrak{gl}_{2}$-crystal structure on
${\rm Irr}({\mathcal{O}}_{n})$ defined by Bernstein-Frenkel-Khovanov.
Further properties of the ${\mathfrak{q}}(2)$-crystal ${\rm Irr}({\mathcal{O}}_{n})$ are also discussed.
Key words and phrases: crystal bases, odd Kashiwara operators, quantum queer
superalgebras, ${\mathfrak{q}}(2)$-categorification
2010 Mathematics Subject Classification: 17B37, 81R50
${}^{1}$This work was partially supported by NSA grant H98230-13-1-0245
${}^{2}$This work was supported by NRF Grant # 2014-021261 and
by NRF-2013R1A1A2063671
${}^{3}$This work was supported by NRF Grant # 2014-021261 and by NRF Grant # 2013-055408
Introduction
Crystal basis theory is one of the most prominent
discoveries in modern representation theory. Crystal bases,
which can be understood as global bases at $q=0$, have been
introduced by Kashiwara [16, 17, 18] and have many
significant applications to a wide variety of mathematical and
physical theories. In particular, their nice behavior with respect
to tensor products leads to elegant explanations of many
combinatorial phenomena, such as the combinatorics of Young tableaux and
Young walls [15, 20]. On the other hand, Lusztig took a
geometric approach to develop the canonical basis theory,
which turned out to be deeply related to categorification
theory as is the case with global basis theory.
In [2], Bernstein, Frenkel and Khovanov discovered a close
connection between the singular $\mathfrak{gl}_{n}$-category
$\mathcal{O}_{n}$ and the $n$-fold tensor power ${{\mathbf{V}}}^{\otimes n}$,
where ${\mathbf{V}}$ is the 2-dimensional natural representation of
$\mathfrak{sl}_{2}$. Their result initiated the categorification
program of $\mathfrak{sl}_{2}$-representation theory, which was
extended to the quantum algebra $U_{q}(\mathfrak{sl}_{2})$ [27]
and to general tensor products of finite-dimensional
$U_{q}(\mathfrak{sl}_{2})$-modules [7]. That is, they obtained
several versions of (weak) $\mathfrak{sl}_{2}$-categorification in
the sense of Chuang and Rouquier [4].
In recent years, there has been growing interest in the crystal basis
theory of quantum superalgebras. A major accomplishment in this
direction is the development of crystal basis theory of the quantum
superalgebra $U_{q}(\mathfrak{gl}(m|n))$ for the tensor modules; i.e., the
modules arising from tensor powers of the natural representation
[1]. Such a theory was developed for the quantum
superalgebras $U_{q}(\mathfrak{q}(n))$ for the category of tensor
modules [8, 9] and
$U_{q}(\mathfrak{osp}(m|2n))$ for
a certain semisimple tensor category of $U_{q}(\mathfrak{osp}(m|2n))$-modules
[19].
The $\mathfrak{q}(n)$-case is especially interesting and challenging
both from algebraic and combinatorial perspectives. A definition of
$U_{q}(\mathfrak{q}(n))$ was first introduced in [25] using the
Faddeev–Reshetikhin–Turaev formalism. In [11], an equivalent
definition of $U_{q}(\mathfrak{q}(n))$ was given in the spirit of
Drinfeld-Jimbo presentation and the highest weight representation
theory was developed. Moreover, in [8, 9, 10], the
crystal basis theory for $U_{q}(\mathfrak{q}(n))$-modules was
established, which provides a representation theoretic
interpretation of combinatorics of semistandard decomposition
tableaux.
We now explain the main result of this paper. One important
consequence of $\mathfrak{sl}_{2}$-categorification in [2]
is that the set ${\rm Irr}({\mathcal{O}}_{n})$ of isomorphism classes
of simple objects in ${\mathcal{O}}_{n}$ admits a $\mathfrak{gl}_{2}$-crystal structure. The categorified Kashiwara operators
$\mathcal{E}$ and $\mathcal{F}$ are constructed using the
translation functors given by the $n$-dimensional natural
$\mathfrak{gl}_{n}$-module
$L(e_{1})$ and its dual $L(e_{1})^{*}$.
In the present paper, we investigate $\mathfrak{q}(2)$-crystal
structure on ${\rm Irr}({\mathcal{O}}_{n})$. We also use the
translation functors to construct the categorified odd Kashiwara
operators $\overline{\mathcal{E}}$ and $\overline{\mathcal{F}}$.
However, we use the infinite-dimensional irreducible highest weight
$\mathfrak{gl}_{n}$-module $L(e_{n})$ with highest weight
$e_{n}$
and its dual $L(e_{n})^{*}$, which fits very
naturally in our setting. We believe our result is the first step
toward the ${\mathfrak{q}}(2)$-categorification theory and it will
generate various interesting developments in categorical
representation theory of (quantum) superalgebras.
The organization of the paper is as follows. In the first two
sections, we collect some basic definitions and
properties related to the $\mathfrak{gl}_{2}$-crystal structure on
${\rm Irr}({\mathcal{O}}_{n})$. The third section is devoted
to the properties of ${\mathfrak{q}}(2)$-crystals used in this
paper. The definition of categorified odd Kashiwara operators on
${\mathcal{O}}_{n}$, as well as the main result of this paper, are
included in Section 4. In the last section, we discuss
further properties of the $\mathfrak{q}(2)$-crystals related to
parabolic subcategories of ${\mathcal{O}}_{n}$.
Acknowledgements. We would like to thank V. Mazorchuk
for the fruitful discussions and for bringing our attention to the
paper [14]. The first author would like to thank Seoul
National University for the warm hospitality and the excellent
working conditions.
1. The category $\mathcal{O}_{n}$
Let $\mathfrak{g}=\mathfrak{gl}_{n}$ $(n\geq 2)$ be the general linear Lie
algebra with triangular decomposition ${\mathfrak{g}}=\mathfrak{n}_{-}\oplus\mathfrak{h}\oplus\mathfrak{n}_{+}$. We denote by $U({\mathfrak{g}})$ its
universal enveloping algebra and by $\mathcal{Z}({\mathfrak{g}})$ the center of
$U({\mathfrak{g}})$. Choose an orthonormal basis $\{e_{1},\ldots,e_{n}\}$ of ${\mathbb{R}}^{n}$
and identify ${\mathbb{C}}\otimes_{{\mathbb{R}}}{\mathbb{R}}^{n}$ with $\mathfrak{h}^{*}$, the dual
of $\mathfrak{h}$. Thus $\Delta:=\{e_{i}-e_{j}\mid i<j\}$ is the
set of positive roots and $\Pi:=\{e_{i}-e_{i+1}\mid 1\leq i\leq n-1\}$ is the set of simple roots. The Weyl group of $\mathfrak{g}$
is isomorphic to the symmetric group $S_{n}$, which acts on
${\mathfrak{h}}^{*}$ by permuting $e_{i}$’s.
We say that a ${\mathfrak{g}}$-module $M$ is a weight module if
$$M=\bigoplus_{\lambda\in{\mathfrak{h}}^{*}}M^{\lambda},\ \ \text{where}\ \ M^{\lambda}=\{m\in M\;|\;hm=\lambda(h)m\ \mbox{ for all }h\in{\mathfrak{h}}\}.$$
A linear functional
$\lambda\in{\mathfrak{h}}^{*}$ is called a weight of $M$ if
$M^{\lambda}\neq 0$. We denote by $\text{Supp}(M)$ the
set of weights of $M$. Note that any weight of $M$ is a linear
combination of $e_{i}$’s. For a weight module $M=\bigoplus_{\lambda\in{\mathfrak{h}}^{*}}M^{\lambda}$, let $M^{*}:=\bigoplus_{\lambda\in{\mathfrak{h}}^{*}}\operatorname{Hom}_{\mathbb{C}}(M^{\lambda},{\mathbb{C}})$ be
the restricted dual of $M$ with the $\mathfrak{g}$-module action
given by
$$(gf)(m)=f(-gm)\ \ \text{for}\ g\in\mathfrak{g},f\in M^{*},m\in M.$$
Note that
$\text{Supp}(M^{*})=-\text{Supp}(M)$.
Let ${\mathbf{V}}={\mathbb{C}}v_{1}\oplus{\mathbb{C}}v_{2}$ be the 2-dimensional natural
representation of $\mathfrak{gl}_{2}$, where the
$\mathfrak{gl}_{2}$-action is given by left multiplication. Hence we
have $\text{wt}(v_{1})=e_{1}$ and $\text{wt}(v_{2})=e_{2}$. Recall that
the special linear Lie algebra $\mathfrak{sl}_{2}$ is the subalgebra
of $\mathfrak{gl}_{2}$ generated by
$$E=\begin{pmatrix}0&1\\0&0\end{pmatrix},\quad F=\begin{pmatrix}0&0\\1&0\end{pmatrix},\quad H=\begin{pmatrix}1&0\\0&-1\end{pmatrix}.$$
Thus its universal enveloping algebra $U(\mathfrak{sl}_{2})$ is the
associative ${\mathbb{C}}$-algebra generated by $E,F,H$ with defining
relations
$$\displaystyle EF-FE=H,\quad HE-EH=2E,\quad HF-FH=-2F.$$
The $\mathfrak{gl}_{2}$-action on ${\mathbf{V}}$ induces an
$\mathfrak{sl}_{2}$-action given by
$$\displaystyle Hv_{1}=v_{1},\quad Ev_{1}=0,\quad Fv_{1}=v_{2},$$
$$\displaystyle Hv_{2}=-v_{2},\quad Ev_{2}=v_{1},\quad Fv_{2}=0.$$
It follows that the $\mathfrak{sl}_{2}$-weight of $v_{1}$ is $1$ and
that of $v_{2}$ is $-1$. For each $n\geq 2$, the tensor space
${\mathbf{V}}^{\otimes n}$ admits a $U(\mathfrak{sl}_{2})$-module structure
via the comultiplication $\Delta:U(\mathfrak{sl}_{2})\rightarrow U(\mathfrak{sl}_{2})\otimes U(\mathfrak{sl}_{2})$ given by
$$\Delta(E)=E\otimes 1+1\otimes E,\quad\Delta(F)=F\otimes 1+1\otimes F,\quad\Delta(H)=H\otimes 1+1\otimes H.$$
Let ${\mathcal{W}}={\mathcal{W}}({\mathfrak{g}})$ be the category of all
weight modules $M$ such that $\dim M^{\lambda}<\infty$ for all
$\lambda\in{\mathfrak{h}}^{*}$. We denote by $\mathcal{O}=\mathcal{O}({\mathfrak{g}})$ the full subcategory of ${\mathcal{W}}({\mathfrak{g}})$ consisting of
finitely generated $U({\mathfrak{g}})$-modules that are locally $U(\mathfrak{n}_{+})$-nilpotent. The category $\mathcal{O}$ is known as the Bernstein-Gelfand-Gelfand category.
Let
$$\rho=\dfrac{1}{2}\sum_{i<j}(e_{i}-e_{j})=\dfrac{n-1}{2}e_{1}+\dfrac{n-3}{2}e_{2}+\cdots+\dfrac{1-n}{2}e_{n},$$
the half-sum of positive roots. For a sequence $a_{1},\ldots,a_{n}$ of
$1$’s and $2$’s, we denote by $M(a_{1},\ldots,a_{n})$ and $L(a_{1},\ldots,a_{n})$ the Verma module with highest weight $a_{1}e_{1}+\cdots+a_{n}e_{n}-\rho$ and its simple quotient, respectively.
For each $i\in{\mathbb{Z}}$, define $\mathcal{O}_{i,n-i}$ to be the full
subcategory of $\mathcal{O}$ consisting of
$\mathfrak{gl}_{n}$-modules $M$ whose composition factors are of the
form $L(a_{1},\ldots,a_{n})$ with exactly $i$-many $2$’s. The
category ${\mathcal{O}}_{i,n-i}$ is a singular block of $\mathcal{O}$
corresponding to the subgroup $S_{i}\times S_{n-i}$ of $S_{n}$. For
$i<0$ or $i>n$, ${\mathcal{O}}_{i,n-i}$ consists of the zero
object only. We define
(1.1)
$$\mathcal{O}_{n}\mathbin{:=}\bigoplus_{i=0}^{n}\mathcal{O}_{i,n-i},$$
the main category of our interest. We denote $G(\mathcal{O}_{n}):={\mathbb{C}}\otimes K(\mathcal{O}_{n})$, where $K(\mathcal{O}_{n})$ is the
Grothendieck group of $\mathcal{O}_{n}$. As usual, we write $[M]$ for
the isomorphism class of an object $M$ in $\mathcal{O}_{n}$.
An alternative description of ${\mathcal{O}}_{i,n-i}$ is given as
follows. Let $\chi:{\mathcal{Z}}(\mathfrak{g})\rightarrow{\mathbb{C}}$ be an
algebra homomorphism. We define ${\mathcal{O}}_{\chi}$ to be the
subcategory of $\mathcal{O}$ consisting of $\mathfrak{g}$-modules $M$
such that for each $z\in{\mathcal{Z}}(\mathfrak{g})$ and $m\in M$,
we have $(z-\chi(z))^{k}\,m=0$ for some $k>0$. Then we get the
central character decomposition
$${\mathcal{O}}=\bigoplus_{\chi\in{\mathcal{Z}}(\mathfrak{g})^{\vee}}{\mathcal{O}}_{\chi},$$
where ${\mathcal{Z}}(\mathfrak{g})^{\vee}$ denotes the
set of all algebra homomorphisms ${\mathcal{Z}}(\mathfrak{g})\rightarrow{\mathbb{C}}$. Note that ${\mathcal{Z}}(\mathfrak{g})$ acts on a
highest weight module with highest weight $\lambda$ by a constant
$\chi^{\lambda}(z)$ $(z\in{\mathcal{Z}}(\mathfrak{g}))$. We will
write ${\mathcal{O}}_{\lambda}$ for ${\mathcal{O}}_{\chi^{\lambda}}$.
On the other hand, for each $\nu\in{\mathfrak{h}}^{*}$, set
$\overline{\nu}=\nu+{\mathbb{Z}}\Delta\in{\mathfrak{h}}^{*}\big{/}{\mathbb{Z}}\Delta$, where ${\mathbb{Z}}\Delta$ denotes the root lattice of $\mathfrak{g}$.
We define ${\mathcal{O}}[\overline{\nu}]$ to be the full
subcategory of $\mathcal{O}$ consisting of
$\mathfrak{gl}_{n}$-modules $M$ such that $\text{wt}(M)\subset\overline{\nu}$. Then we have
the support decomposition
$${\mathcal{O}}=\bigoplus_{\overline{\nu}\in{\mathfrak{h}}^{*}\big{/}{\mathbb{Z}}\Delta}{\mathcal{O}}[\overline{\nu}].$$
The category ${\mathcal{O}}_{i,n-i}$ coincides with ${\mathcal{O}}_{\omega_{i}-\rho}$, and
${\mathcal{O}}_{\omega_{i}-\rho}$ is a full subcategory of
${\mathcal{O}}[\overline{\omega_{i}-\rho}]$, where
$$\omega_{i}:=2\sum_{j=1}^{i}e_{j}+\sum_{j=i+1}^{n}e_{j}$$
is the shifted $i$-th fundamental weight.
Similarly, the category $\mathcal{W}$ has the central character
decomposition and the support decomposition
$${\mathcal{W}}=\bigoplus_{\chi\in{\mathcal{Z}}(\mathfrak{g})^{\vee}}{\mathcal{W}}_{\chi}=\bigoplus_{\overline{\nu}\in{\mathfrak{h}}^{*}\big{/}{\mathbb{Z}}\Delta}{\mathcal{W}}[\overline{\nu}].$$
Note that $\mathcal{W}$ admits the central character decomposition because the ${\mathcal{Z}}(\mathfrak{g})$-action preserves each weight space. Set
$${\mathcal{W}}_{\lambda}:={\mathcal{W}}_{\chi^{\lambda}},\quad{\mathcal{W}}_{i,n-i}:={\mathcal{W}}_{\omega_{i}-\rho}\cap{\mathcal{W}}[\overline{\omega_{i}-\rho}],\quad{\mathcal{W}}_{n}:=\bigoplus_{i=0}^{n}{\mathcal{W}}_{i,n-i}.$$
For each $0\leq i\leq n$, let ${\rm pr}_{i}:{\mathcal{W}}\rightarrow\,{\mathcal{W}}_{i,n-i}$ be the canonical projection.
Clearly, ${\rm pr}_{i}(\mathcal{O})={\mathcal{O}_{i,n-i}}$.
Following [2],
we define
(1.2)
$$\displaystyle\mathcal{E}_{i}:{\mathcal{O}}_{i,n-i}\to{\mathcal{O}}_{i+1,n-i-1}%
,\;\mathcal{E}_{i}={\rm pr}_{i+1}\circ\big{(}-\otimes L(e_{1})\big{)},$$
(1.3)
$$\displaystyle\mathcal{F}_{i}:{\mathcal{O}}_{i,n-i}\to{\mathcal{O}}_{i-1,n-i+1}%
,\;\mathcal{F}_{i}={\rm pr}_{i-1}\circ\big{(}-\otimes L(e_{1})^{*}\big{)},$$
where $L(e_{1})$ is the $n$-dimensional natural representation of
${\mathfrak{g}}$. Now we define the exact endofunctors ${\mathcal{E}}$ and ${\mathcal{F}}$ on $\mathcal{O}_{n}$ by
(1.4)
$$\mathcal{E}=\bigoplus_{i=0}^{n}\mathcal{E}_{i},\quad\mathcal{F}=\bigoplus_{i=0}^{n}\mathcal{F}_{i}.$$
We denote by $[\mathcal{E}]$ and $[\mathcal{F}]$ the linear endomorphisms on $G(\mathcal{O}_{n})$ induced from the functors $\mathcal{E}$ and $\mathcal{F}$, respectively.
The following theorem plays an important role in this paper.
Theorem 1.1.
([2])
(1)
$(\mathcal{E},\mathcal{F})$ is a biadjoint pair.
(2)
The correspondence $E\mapsto[\mathcal{E}]$,
$F\mapsto[\mathcal{F}]$ defines a $U(\mathfrak{sl}_{2})$-action on
$G(\mathcal{O}_{n})$.
(3)
The simple objects in $\mathcal{O}_{n}$ correspond to weight vectors in $G(\mathcal{O}_{n})$.
(4)
There is a $U(\mathfrak{sl}_{2})$-module isomorphism
(1.5)
$$\begin{array}{ccc}\Upsilon\ :\ G(\mathcal{O}_{n})&\rightarrow&{\mathbf{V}}^{\otimes n}\\
\left[M(a_{1},\ldots,a_{n})\right]&\mapsto&v_{a^{\prime}_{1}}\otimes\cdots\otimes v_{a^{\prime}_{n}},\end{array}$$
where $1^{\prime}:=2$ and $2^{\prime}:=1$.
Theorem 1.2.
([4, §7.4.3])
The category $\mathcal{O}_{n}$ provides an $\mathfrak{sl}_{2}$-categorification in the sense of Chuang-Rouquier.
2. $\mathfrak{gl}_{2}$-crystal structure on ${\rm Irr}(\mathcal{O}_{n})$
In this section, we will discuss the $\mathfrak{gl}_{2}$-crystal
structure on ${\rm Irr}(\mathcal{O}_{n})$, the set of isomorphism
classes of simple objects in ${\mathcal{O}_{n}}$. We first recall the
definition of $\mathfrak{gl}_{2}$-crystal.
Set $P:={\mathbb{Z}}e_{1}\oplus{\mathbb{Z}}e_{2}$ and $\alpha_{1}:=e_{1}-e_{2}$.
Let $(k_{1},k_{2})$ be the basis of $P^{*}$
which is dual to $(e_{1},e_{2})$.
The natural pairing $P^{*}\times P\to{\mathbb{Z}}$ is denoted by
$\langle\,,\,\rangle$.
Definition 2.1.
An (abstract) $\mathfrak{gl}_{2}$-crystal is a set $B$
together with the maps $\tilde{e},\tilde{f}\colon B\to B\sqcup\{0\}$, $\varphi,\varepsilon\colon B\to{\mathbb{Z}}\sqcup\{-\infty\}$, and $\operatorname{wt}\colon B\to P$
satisfying the following conditions (see [18]):
(i)
$\operatorname{wt}(\tilde{e}b)=\operatorname{wt}b+\alpha_{1}$ if $\tilde{e}b\neq 0$,
(ii)
$\operatorname{wt}(\tilde{f}b)=\operatorname{wt}b-\alpha_{1}$ if $\tilde{f}b\neq 0$,
(iii)
for any $b\in B$, $\varphi(b)=\varepsilon(b)+$
$\langle k_{1}-k_{2},\operatorname{wt}b\rangle$,
(iv)
for any $b,b^{\prime}\in B$,
$\tilde{f}b=b^{\prime}$ if and only if $b=\tilde{e}b^{\prime}$,
(v)
for any $b\in B$
such that $\tilde{e}b\neq 0$, we have $\varepsilon(\tilde{e}b)=\varepsilon(b)-1$,
$\varphi(\tilde{e}b)=\varphi(b)+1$,
(vi)
for any $b\in B$ such that $\tilde{f}b\neq 0$,
we have $\varepsilon(\tilde{f}b)=\varepsilon(b)+1$, $\varphi(\tilde{f}b)=\varphi(b)-1$,
(vii)
for any $b\in B$ such that $\varphi(b)=-\infty$, we
have $\tilde{e}b=\tilde{f}b=0$.
For each object $S\in\mathcal{O}_{n}$, set
$$\varphi(S)\mathbin{:=}\max\left\{m\in{\mathbb{Z}}_{\geq 0}\mathbin{;}\mathcal{F}^{m}(S)\neq 0\right\},\quad\varepsilon(S)\mathbin{:=}\max\left\{m\in{\mathbb{Z}}_{\geq 0}\mathbin{;}\mathcal{E}^{m}(S)\neq 0\right\}.$$
The $\mathfrak{sl}_{2}$-categorification on $\mathcal{O}_{n}$ has the
following nice properties.
Proposition 2.2.
([4, Proposition 5.20], [21, Proposition
2.3])
Let $S$ be a simple object in $\mathcal{O}_{n}$
with $\varepsilon(S)\neq 0$ (respectively, $\varphi(S)\neq 0$).
(1)
The object $\mathcal{E}(S)$ (respectively, $\mathcal{F}(S)$) has simple socle
and simple head, and they are isomorphic to each other.
(2)
For any other subquotient $S^{\prime}$ of $\mathcal{E}(S)$ (respectively, $\mathcal{F}(S)$),
we have $\varepsilon(S^{\prime})\leq\varepsilon(S)-1$ (respectively, $\varphi(S^{\prime})\leq\varphi(S)-1$).
For a simple object $S$ in ${\mathcal{O}}_{n}$, let $\mathsf{wt}([S])\in{\mathbb{Z}}$ be the $\mathfrak{sl}_{2}$-weight of $[S]$ in $G(\mathcal{O}_{n})$.
Define
$$\displaystyle\tilde{e}([S]):=[{\rm hd}\,\mathcal{E}(S)],\quad\tilde{f}([S]):=[%
{\rm hd}\,\mathcal{F}(S)].$$
Since the head and socle of $\mathcal{E}(S)$ are isomorphic, we may
define $\tilde{e}([S])=[{\rm soc}({\mathcal{E}}(S))]$ and similarly for
$\tilde{f}([S])$.
Then $\big{(}{\rm Irr}(\mathcal{O}_{n}),\mathsf{wt},\varphi,\varepsilon,\tilde{e},%
\tilde{f}\big{)}$ becomes an $\mathfrak{sl}_{2}$-crystal
(see the last paragraph of [21, §2.4]). For example, if
${\rm hd}\,\mathcal{E}(S)\cong S^{\prime}$, then we have
$$0\neq\operatorname{Hom}_{\mathcal{O}_{n}}(\mathcal{E}(S),S^{\prime})\cong%
\operatorname{Hom}_{\mathcal{O}_{n}}(S,\mathcal{F}(S^{\prime})).$$
Thus $S$ is a simple submodule of $\mathcal{F}(S^{\prime})$ so that we have
$S\cong{\rm soc}\,\mathcal{F}(S^{\prime})$ by the above proposition. That
is, if $\tilde{e}([S])=[S^{\prime}]$, then $[S]=\tilde{f}([S^{\prime}])$ as desired.
Note that, by the $U(\mathfrak{sl}_{2})$-module isomorphism in Theorem 1.1(4), the $\mathfrak{sl}_{2}$-weight of $[L(a_{1},\ldots,a_{n})]$ is given by
$$\mathsf{wt}([L(a_{1},\ldots,a_{n})])=\sharp\{i\;|\;a_{i}=2\}-\sharp\{i\;|\;a_{i}=1\}.$$
Hence by setting
$$\operatorname{wt}(L(a_{1},\ldots,a_{n})):=(\sharp\{i\;|\;a_{i}=2\})e_{1}+(\sharp\{i\;|\;a_{i}=1\})e_{2},$$
$\big{(}{\rm Irr}(\mathcal{O}_{n}),\operatorname{wt},\varphi,\varepsilon,\tilde{e},\tilde{f}\big{)}$ becomes a $\mathfrak{gl}_{2}$-crystal.
Let ${\mathbf{B}}=\{b_{1},b_{2}\}$ be the $\mathfrak{sl}_{2}$-crystal of ${\mathbf{V}}$. By
defining $\text{wt}(b_{1})=e_{1}$, $\text{wt}(b_{2})=e_{2}$, ${\mathbf{B}}$ becomes a
$\mathfrak{gl}_{2}$-crystal.
Recall that the tensor product rule for $\mathfrak{gl}_{2}$-crystals gives a $\mathfrak{gl}_{2}$-crystal structure on ${\mathbf{B}}^{\otimes n}={\mathbf{B}}\times\cdots\times{\mathbf{B}}$ (see, for example, (3.1)). The following theorem describes
the $\mathfrak{gl}_{2}$-crystal structure on
$\text{Irr}(\mathcal{O}_{n})$.
Theorem 2.3.
([3, Theorem 4.4])
As a $\mathfrak{gl}_{2}$-crystal, $\big{(}{\rm Irr}(\mathcal{O}_{n}),\operatorname{wt},\varphi,\varepsilon,\tilde{e},\tilde{f}\big{)}$ is isomorphic to ${\mathbf{B}}^{\otimes n}$ under the map
$$[L(a_{1},\ldots,a_{n})]\mapsto b_{a^{\prime}_{1}}\otimes\cdots\otimes b_{a^{\prime}_{n}}.$$
Remark 2.4.
Note that the functors $e_{1}$ and $f_{1}$ defined in [3]
correspond to our functors $\mathcal{F}$ and $\mathcal{E}$,
respectively.
If we take the full subgraph of the $\mathfrak{gl}_{\infty}$-crystal ${\mathbb{Z}}^{n}$ given in [3] with vertices $\widetilde{\mathbf{B}}:=\left\{(a_{1},\ldots,a_{n})\mathbin{;}a_{i}\in\{1,2\}\right\}\subset{\mathbb{Z}}^{n}$, then $\widetilde{\mathbf{B}}$ can be regarded as a $\mathfrak{gl}_{2}$-crystal in a natural way.
Remark that in [3], the opposite tensor product rule for
$\mathfrak{gl}_{2}$-crystals was used.
The map $\psi:(a_{1},\ldots,a_{n})\mapsto b_{a^{\prime}_{1}}\otimes\cdots\otimes b_{a^{\prime}_{n}}$ is a bijection between $\widetilde{\mathbf{B}}$ and ${\mathbf{B}}^{\otimes n}$ satisfying $\psi(\tilde{f}_{1}(a_{1},\ldots,a_{n}))=\tilde{e}(\psi(a_{1},\ldots,a_{n}))$ and $\psi(\tilde{e}_{1}(a_{1},\ldots,a_{n}))=\tilde{f}(\psi(a_{1},\ldots,a_{n}))$.
3. $\mathfrak{q}(2)$-crystals
In this section we recall the definition of $\mathfrak{q}(2)$-crystal and provide a description of the connected components
of ${\mathbf{B}}^{\otimes n}$ as $\mathfrak{q}(2)$-crystals. The notion of
abstract $\mathfrak{q}(n)$-crystal and the queer tensor
product rule are introduced in [8, 9, 10]. In
this paper, we consider $\mathfrak{q}(2)$-crystals only.
Definition 3.1.
A $\mathfrak{q}(2)$-crystal is a
$\mathfrak{gl}_{2}$-crystal together with the maps $\tilde{e}_{\overline{1}},\tilde{f}_{\overline{1}}\colon B\to B\sqcup\{0\}$ satisfying the following conditions:
(i)
$\operatorname{wt}(B)\subset P^{\geq 0}\mathbin{:=}{\mathbb{Z}}_{\geq 0}e_{1}%
\oplus{\mathbb{Z}}_{\geq 0}e_{2}$,
(ii)
$\operatorname{wt}(\tilde{e}_{\overline{1}}b)=\operatorname{wt}(b)+\alpha_{1}$, $\operatorname{wt}(\tilde{f}_{\overline{1}}b)=\operatorname{wt}(b)-\alpha_{1}$,
(iii)
for all $b,b^{\prime}\in B$, $\tilde{f}_{\overline{1}}b=b^{\prime}$ if and only if $b=\tilde{e}_{\overline{1}}b^{\prime}$.
Note that in [8, 9], the $\mathfrak{gl}_{2}$-crystals satisfying the above conditions are called abstract $\mathfrak{q}(2)$-crystals. In this paper, we simply call them $\mathfrak{q}(2)$-crystals.
Let $B$ be a $\mathfrak{q}(2)$-crystal (respectively, a $\mathfrak{gl}_{2}$-crystal) and let $B^{\prime}$ be a subset of $B$.
We say that $B^{\prime}$ is a $\mathfrak{q}(2)$-subcrystal (respectively, $\mathfrak{gl}_{2}$-subcrystal) of $B$, if $x(b)\in B^{\prime}\sqcup\{0\}$ for every $b\in B^{\prime}$ and $x=\tilde{e},\tilde{f},\tilde{e}_{\overline{1}},\tilde{f}_{\overline{1}}$ (respectively, $x=\tilde{e},\tilde{f}$).
The queer tensor product rule is given in the following theorem.
Theorem 3.2.
[8, 9, 10]
Let $B_{1}$ and $B_{2}$ be $\mathfrak{q}(2)$-crystals. Define
the tensor product $B_{1}\otimes B_{2}$ of $B_{1}$ and $B_{2}$ to be
$(B_{1}\times B_{2},\operatorname{wt},\varphi,\varepsilon,\tilde{e},\tilde{f},\tilde{e}_{\overline{1}},\tilde{f}_{\overline{1}})$, where
$$\operatorname{wt}(b_{1}\otimes b_{2})=\operatorname{wt}(b_{1})+\operatorname{wt}(b_{2}),$$
$$\varepsilon(b_{1}\otimes b_{2})=\max\{\varepsilon(b_{1})-\varphi(b_{1})+\varepsilon(b_{2}),\ \varepsilon(b_{1})\},$$
$$\varphi(b_{1}\otimes b_{2})=\max\{\varphi(b_{1})-\varepsilon(b_{2})+\varphi(b_{2}),\ \varphi(b_{2})\},$$
and
(3.1)
$$\tilde{e}(b_{1}\otimes b_{2})=\begin{cases}\tilde{e}b_{1}\otimes b_{2}&\text{if }\varphi(b_{1})\geq\varepsilon(b_{2}),\\
b_{1}\otimes\tilde{e}b_{2}&\text{if }\varphi(b_{1})<\varepsilon(b_{2}),\end{cases}\qquad\tilde{f}(b_{1}\otimes b_{2})=\begin{cases}\tilde{f}b_{1}\otimes b_{2}&\text{if }\varphi(b_{1})>\varepsilon(b_{2}),\\
b_{1}\otimes\tilde{f}b_{2}&\text{if }\varphi(b_{1})\leq\varepsilon(b_{2}),\end{cases}$$
(3.2)
$$\tilde{e}_{\overline{1}}(b_{1}\otimes b_{2})=\begin{cases}\tilde{e}_{\overline{1}}b_{1}\otimes b_{2}&\text{if }\langle k_{1},\operatorname{wt}b_{2}\rangle=\langle k_{2},\operatorname{wt}b_{2}\rangle=0,\\
b_{1}\otimes\tilde{e}_{\overline{1}}b_{2}&\text{otherwise,}\end{cases}\qquad\tilde{f}_{\overline{1}}(b_{1}\otimes b_{2})=\begin{cases}\tilde{f}_{\overline{1}}b_{1}\otimes b_{2}&\text{if }\langle k_{1},\operatorname{wt}b_{2}\rangle=\langle k_{2},\operatorname{wt}b_{2}\rangle=0,\\
b_{1}\otimes\tilde{f}_{\overline{1}}b_{2}&\text{otherwise}.\end{cases}$$
Then $B_{1}\otimes B_{2}$ is
a ${\mathfrak{q}}(2)$-crystal.
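On ${\mathbf{B}}^{\otimes r}$, identifying a pure tensor with the word of its letters, the rules (3.1) and (3.2) reduce to the familiar signature rule: cancel factors $12$ repeatedly; then $\tilde{f}$ flips the leftmost uncancelled $1$, $\tilde{e}$ flips the rightmost uncancelled $2$, and the odd operators act on the last factor only (every letter has nonzero weight, so the first case of (3.2) never applies for $r\geq 2$). The following Python sketch of this rule is only an illustration for the reader; the function names are ours.

```python
def pair(word):
    """Indices cancelled in '12'-pairs ('1' contributes '+', '2' contributes '-')."""
    stack, paired = [], set()
    for i, a in enumerate(word):
        if a == '1':
            stack.append(i)
        elif stack:                       # a '2' cancels the nearest open '1'
            paired.add(stack.pop()); paired.add(i)
    return paired

def f(word):
    """Even operator ~f: flip the leftmost unpaired 1 to 2 (None plays the role of 0)."""
    p = pair(word)
    for i, a in enumerate(word):
        if a == '1' and i not in p:
            return word[:i] + '2' + word[i+1:]
    return None

def e(word):
    """Even operator ~e: flip the rightmost unpaired 2 to 1."""
    p = pair(word)
    for i in range(len(word) - 1, -1, -1):
        if word[i] == '2' and i not in p:
            return word[:i] + '1' + word[i+1:]
    return None

def f_odd(word):
    """Odd operator ~f_{1bar}: acts on the last tensor factor only."""
    return word[:-1] + '2' if word and word[-1] == '1' else None

def e_odd(word):
    """Odd operator ~e_{1bar}: acts on the last tensor factor only."""
    return word[:-1] + '1' if word and word[-1] == '2' else None
```

For instance `f('1121')` returns `'2121'` and `f_odd('2121')` returns `'2122'`, in agreement with the arrows of the ${\mathbf{B}}^{\otimes 4}$ graph in Example 3.3(2).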
For a given ${\mathfrak{q}}(2)$-crystal, we draw a solid arrow labeled $1$ from $b$ to $b^{\prime}$ if and only if $\tilde{f}(b)=b^{\prime}$, and a dashed arrow labeled $\overline{1}$ from $b$ to $b^{\prime}$ if and only if $\tilde{f}_{\overline{1}}(b)=b^{\prime}$. The resulting oriented graph is called a
$\mathfrak{q}(2)$-crystal graph.
For a vertex $b$ in a $\mathfrak{q}(2)$-crystal graph $B$, we denote by $C(b)$
the connected component of $b$ in $B$.
The connected component as a $\mathfrak{gl}_{2}$-crystal will be denoted by $C_{\mathfrak{gl}_{2}}(b)$.
An element $b$ in a $\mathfrak{q}(2)$-crystal (respectively, $\mathfrak{gl}_{2}$-crystal) is called a highest weight vector (respectively, $\mathfrak{gl}_{2}$-highest weight vector) if $\tilde{e}_{\overline{1}}b=\tilde{e}b=0$ (respectively, $\tilde{e}b=0$).
If $\varphi(b)=0$ and $\tilde{e}^{\varepsilon(b)}b$ is a highest weight vector, then we call $b$ a lowest weight vector.
Example 3.3.
(1)
Let $\mathbf{B}=\{b_{1},b_{2}\}$ be the $\mathfrak{gl}_{2}$-crystal of ${\mathbf{V}}$.
Define
$$\tilde{e}_{\overline{1}}(b_{1})=0,\ \ \tilde{f}_{\overline{1}}(b_{1})=b_{2},%
\quad\tilde{e}_{\overline{1}}(b_{2})=b_{1},\ \ \tilde{f}_{\overline{1}}(b_{2})%
=0.$$
Then ${\mathbf{B}}$ is a $\mathfrak{q}(2)$-crystal
with $\mathfrak{q}(2)$-crystal graph
$$\xymatrix@C=5ex{*+{1}\ar@<0.1ex>[r]^{-}{1}\ar@{-->}@<-0.9ex>[r]_{\overline{1}}&*+{2}}$$
From now on, $b_{1}$ and $b_{2}$ are identified with $1$ and $2$, respectively.
(2)
By the queer tensor product rule, $\mathbf{B}^{\otimes r}$ is a $\mathfrak{q}(2)$-crystal. The ${\mathfrak{q}}(2)$-crystal structure
of ${\mathbf{B}}^{\otimes 4}$ is given below.
$$\hskip 30.0pt\xymatrix{1111\ar[d]_{-}{1}\ar@{-->}[dr]^{\overline{1}}&&&&&&&\\
2111\ar[d]_{-}{1}\ar@{-->}[dr]^{\overline{1}}&1112\ar[d]_{-}{1}&&1121\ar[d]_{-}{1}\ar@{-->}[dr]^{\overline{1}}&&&1211\ar[d]_{-}{1}\ar@{-->}[dr]^{\overline{1}}&\\
2211\ar[d]_{-}{1}\ar@{-->}[dr]^{\overline{1}}&2112\ar[d]_{-}{1}&&2121\ar@<-0.2ex>[d]_{-}{1}\ar@{-->}@<0.9ex>[d]^{\overline{1}}&1122&&1221\ar@<-0.2ex>[d]_{-}{1}\ar@{-->}@<0.9ex>[d]^{\overline{1}}&1212\\
2221\ar@<-0.2ex>[d]_{-}{1}\ar@{-->}@<0.9ex>[d]^{\overline{1}}&2212&&2122&&&1222&\\
2222&&&&&&&}$$
Here we identify a sequence $a_{1}\cdots a_{r}$ ($a_{i}\in\{1,2\}$) with the element $a_{1}\otimes\cdots\otimes a_{r}\in{\mathbf{B}}^{\otimes r}$.
(3)
The connected component $C(22122122)\subset{\mathbf{B}}^{\otimes 8}$ is given below:
$$\xymatrix{11121121\ar[d]_{-}{1}\ar@{-->}[dr]^{\overline{1}}&\\
21121121\ar[d]_{-}{1}\ar@{-->}[dr]^{\overline{1}}&11121122\ar[d]_{-}{1}\\
22121121\ar[d]_{-}{1}\ar@{-->}[dr]^{\overline{1}}&21121122\ar[d]_{-}{1}\\
22122121\ar@<-0.2ex>[d]_{-}{1}\ar@{-->}@<0.9ex>[d]^{\overline{1}}&22121122\\
22122122&}$$
In Example 3.3(2), we can observe the following decompositions of $\mathfrak{gl}_{2}$-crystals.
Proposition 3.4.
For $r\geq 2$, the connected component $C(1^{r})$ in ${\mathbf{B}}^{\otimes r}$
is decomposed into
$$C(1^{r})=C_{\mathfrak{gl}_{2}}(1^{r})\sqcup C_{\mathfrak{gl}_{2}}(1^{r-1}2)%
\cong C_{\mathfrak{gl}_{2}}(1^{r-1})\otimes{\mathbf{B}}_{\mathfrak{gl}_{2}},$$
as $\mathfrak{gl}_{2}$-crystals.
Proof.
Let $b\in{\mathbf{B}}^{\otimes r}$. It is not difficult to see that $\tilde{f}_{\overline{1}}\tilde{f}^{x}\tilde{f}_{\overline{1}}b=0$ for all $x\in{\mathbb{Z}}_{\geq 0}$.
Note that $1^{r}$ is the only vector in $C(1^{r})$ annihilated by $\tilde{e}$ and $\tilde{e}_{\overline{1}}$ by [9, Theorem 4.6(b)].
Hence, an element of $C(1^{r})\sqcup\{0\}$ is of the form
$$\tilde{f}^{x}(1^{r})\ \ \text{or}\ \ \tilde{f}^{y}\tilde{f}_{\overline{1}}\tilde{f}^{x}(1^{r})\qquad(x,y\in{\mathbb{Z}}_{\geq 0}).$$
Clearly, $\tilde{f}^{x}(1^{r})\in C_{\mathfrak{gl}_{2}}(1^{r})\sqcup\{0\}$ and
$\tilde{f}^{x}\tilde{f}_{\overline{1}}(1^{r})\in C_{\mathfrak{gl}_{2}}(1^{r-1}2)\sqcup\{0\}$.
By direct calculation, we have
$$\tilde{f}_{\overline{1}}\tilde{f}^{x}(1^{r})=\begin{cases}2^{x}1^{r-1-x}2=\tilde{f}^{x}\tilde{f}_{\overline{1}}(1^{r})\in C_{\mathfrak{gl}_{2}}(1^{r-1}2)&\text{if}\ \ 0\leq x\leq r-2,\\
2^{x}2=\tilde{f}^{r}(1^{r})\in C_{\mathfrak{gl}_{2}}(1^{r})&\text{if}\ \ x=r-1,\\
0&\text{otherwise}.\end{cases}$$
Then it is clear that
$\tilde{f}^{y}\tilde{f}_{\overline{1}}\tilde{f}^{x}(1^{r})\in C_{\mathfrak{gl}_{2}}(1^{r})\sqcup C_{\mathfrak{gl}_{2}}(1^{r-1}2)\sqcup\{0\}$.
Hence,
$$C(1^{r})\subseteq C_{\mathfrak{gl}_{2}}(1^{r})\sqcup C_{\mathfrak{gl}_{2}}(1^{r-1}2).$$
Since $1^{r},1^{r-1}2=\tilde{f}_{\overline{1}}(1^{r})\in C(1^{r}),$
it follows that $C_{\mathfrak{gl}_{2}}(1^{r})\sqcup C_{\mathfrak{gl}_{2}}(1^{r-1}2)=C(1^{r})$.
Now we show $C_{\mathfrak{gl}_{2}}(1^{r})\sqcup C_{\mathfrak{gl}_{2}}(1^{r-1}2)\cong C_{\mathfrak{gl}_{2}}(1^{r-1})\otimes{\mathbf{B}}_{\mathfrak{gl}_{2}}$. We can regard $C_{\mathfrak{gl}_{2}}(1^{r-1})\otimes{\mathbf{B}}_{\mathfrak{gl}_{2}}$ as a ${\mathfrak{gl}}_{2}$-subcrystal of ${\mathbf{B}}_{{\mathfrak{gl}}_{2}}^{\otimes r}$. Note that $b$ is a ${\mathfrak{gl}}_{2}$-highest weight vector in ${\mathbf{B}}_{{\mathfrak{gl}}_{2}}^{\otimes r}$ if and only if $b$ is a lattice permutation. Since $C_{{\mathfrak{gl}}_{2}}(1^{r-1})=\left\{2^{x}1^{r-1-x}\mathbin{;}0\leq x\leq r-1\right\}$, there are only two ${\mathfrak{gl}}_{2}$-highest weight vectors in $C_{{\mathfrak{gl}}_{2}}(1^{r-1})\otimes{\mathbf{B}}_{{\mathfrak{gl}}_{2}}$, namely $1^{r}$ and $1^{r-1}2$. Hence,
$$C_{\mathfrak{gl}_{2}}(1^{r-1})\otimes{\mathbf{B}}_{\mathfrak{gl}_{2}}=C_{\mathfrak{gl}_{2}}(1^{r})\sqcup C_{\mathfrak{gl}_{2}}(1^{r-1}2).$$
∎
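The closure computation in this proof can be checked by machine for small $r$: generate $C(1^{r})$ by applying all four operators starting from $1^{r}$, and compare it with the two even components. The sketch below is our own code (signature rule for $\tilde{e},\tilde{f}$; last-letter action for the odd operators) and is not part of the argument.

```python
def unpaired(w):
    """Indices of letters not cancelled in '12'-pairs."""
    stack, dead = [], set()
    for i, a in enumerate(w):
        if a == '1':
            stack.append(i)
        elif stack:
            dead.update({stack.pop(), i})
    return [i for i in range(len(w)) if i not in dead]

def flip(w, i, to):
    return w[:i] + to + w[i+1:]

def apply(op, w):
    """Partial crystal operators on words over {'1','2'}; None plays the role of 0."""
    up = unpaired(w)
    if op == 'f':                              # flip leftmost unpaired 1
        js = [i for i in up if w[i] == '1']
        return flip(w, js[0], '2') if js else None
    if op == 'e':                              # flip rightmost unpaired 2
        js = [i for i in up if w[i] == '2']
        return flip(w, js[-1], '1') if js else None
    if op == 'fbar':                           # odd operators act on last letter
        return flip(w, len(w) - 1, '2') if w[-1] == '1' else None
    return flip(w, len(w) - 1, '1') if w[-1] == '2' else None   # 'ebar'

def component(start, ops):
    """Closure of {start} under the given operators."""
    seen, todo = {start}, [start]
    while todo:
        w = todo.pop()
        for op in ops:
            v = apply(op, w)
            if v is not None and v not in seen:
                seen.add(v); todo.append(v)
    return seen

r = 5
C = component('1' * r, ['e', 'f', 'ebar', 'fbar'])      # the q(2)-component C(1^r)
C_even = component('1' * r, ['e', 'f'])                 # C_gl2(1^r)
C_even2 = component('1' * (r - 1) + '2', ['e', 'f'])    # C_gl2(1^{r-1} 2)
assert C == C_even | C_even2 and not (C_even & C_even2)
```

For $r=5$ the component $C(1^{5})$ has $2r=10$ elements, the disjoint union of an even string of length $r+1$ and one of length $r-1$, in agreement with Proposition 3.4.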
Recall that a finite sequence of positive integers $x=x_{1}\cdots x_{N}$ is called a strict reverse lattice permutation if for $1\leq k\leq N$ and $2\leq i\leq n$, the number of occurrences of $i$
is strictly greater than the number of occurrences of $i-1$ in $x_{k}\cdots x_{N}$ as long as $i-1$ appears in $x_{k}\cdots x_{N}$
[10].
Proposition 3.5.
[10]
An element $b_{1}\otimes\cdots\otimes b_{N}\in{\mathbf{B}}^{\otimes N}$ is a
lowest weight vector if and only if it is a strict reverse lattice
permutation.
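For words in the letters $1,2$ the condition specializes to: every suffix in which $1$ occurs contains strictly more $2$'s than $1$'s. A Python sketch of this test (our own code, for illustration):

```python
def is_srlp(word):
    """Strict reverse lattice permutation test for words over {'1','2'}:
    every suffix in which '1' occurs has strictly more '2's than '1's."""
    ones = twos = 0
    for a in reversed(word):          # scan suffixes from the right
        if a == '1':
            ones += 1
        else:
            twos += 1
        if ones > 0 and twos <= ones:
            return False
    return True
```

Among the $16$ words of length $4$, exactly $1222$, $2122$ and $2222$ pass, one for each connected component of ${\mathbf{B}}^{\otimes 4}$ shown in Example 3.3(2).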
We say that a sequence consisting of 1’s and 2’s is a trivial
lattice permutation if
(i) the number of 1’s and the number of 2’s are the same,
(ii) in every nonempty proper initial part, the number of occurrences of
1 is strictly larger than the number of occurrences of 2.
For a sequence $u$ in $\{1,2\}$, we denote by $|u|$ the length of $u$.
Proposition 3.6.
(1)
Let $\ell=a_{1}a_{2}\cdots a_{r}$ be a ${\mathfrak{q}}(2)$-lowest weight
vector in ${\mathbf{B}}^{\otimes r}$. Then there is a unique way to decompose
$\ell$ into the form
$$\ell=u_{1}u_{2}\cdots u_{s}2$$
such that every $u_{i}$ is a trivial lattice permutation or a
maximal subsequence consisting of 2’s only.
(2)
Let $A_{\ell}$ be the set of positive integers $k$ with $1\leq k\leq r-1$ such that
$$|u_{1}|+|u_{2}|+\cdots+|u_{i-1}|<k\leq|u_{1}|+|u_{2}|+\cdots+|u_{i}|,$$
where $u_{i}$ is a trivial lattice
permutation. For $b=b_{1}\cdots b_{r}\in{\mathbf{B}}^{\otimes r}$, define
$\widehat{b}$ to be the sequence obtained from $b$ by removing all
$b_{k}$’s for $k\in A_{\ell}$. We also define $\overline{b}$ to be the
subsequence $b_{k_{1}}b_{k_{2}}\cdots b_{k_{m}}$ of $b$, where
$A_{\ell}=\{k_{1}<k_{2}<\cdots<k_{m}\}$.
Then we have
(a)
$C(\ell)=\left\{b\in{\mathbf{B}}^{\otimes r}\mathbin{;}\widehat{b}\in C(\widehat{\ell}),\ \ {\overline{b}}={\overline{\ell}}\right\}$.
(b)
The map $C(\ell)\to C(\widehat{\ell})$ given by $b\mapsto\widehat{b}$ is a bijection that commutes with $\tilde{e}$, $\tilde{f}$, $\tilde{e}_{\overline{1}}$,
$\tilde{f}_{\overline{1}}$.
Proof.
Since $\ell$ is a ${\mathfrak{q}}(2)$-lowest weight vector, it is a strict reverse lattice permutation by Proposition 3.5.
In particular, we have $a_{r}=2$.
If $\ell=2^{r}$, we have $u_{1}=2^{r-1}.$
If $\ell\neq 2^{r}$, let $a_{j}$ be the leftmost $1$ that occurs in $\ell$.
By the definition, $a_{j}a_{j+1}\cdots a_{r}$ is also a strict reverse lattice permutation,
therefore,
the number of occurrences of $2$ is strictly
greater than the number of occurrences of $1$ in $a_{j}a_{j+1}\cdots a_{r}$.
Hence, there is the smallest $k$ such that $j+1\leq k\leq r-1$ and
the number of occurrences of $2$ is equal to
the number of occurrences of $1$ in $a_{j}a_{j+1}\cdots a_{k}$.
We let $u_{1}=a_{1}\cdots a_{j-1}=2^{j-1}$, $u_{2}=a_{j}a_{j+1}\cdots a_{k}$ when $j\geq 2$, and
$u_{1}=a_{j}a_{j+1}\cdots a_{k}$ when $j=1$.
Since $k$ is the smallest one and the number of occurrences of $2$ is equal to
the number of occurrences of $1$ in $a_{j}a_{j+1}\cdots a_{k}$,
the subsequence $a_{j}a_{j+1}\cdots a_{k}$ is a trivial lattice permutation.
Since $a_{k+1}\cdots a_{r}$ is also a strict reverse lattice permutation,
we repeat the above procedure.
By the construction, it is straightforward that
the decomposition of $\ell$ into the form $\ell=u_{1}u_{2}\cdots u_{s}2$
is unique.
Let $M:=\left\{b\in{\mathbf{B}}^{\otimes r}\mathbin{;}\widehat{b}\in C(\widehat{\ell}),\ \overline{b}=\overline{\ell}\right\}$. By defining $\widehat{0}\mathbin{:=}0$, we obtain a bijection between $M\sqcup\{0\}$ and
$C(\widehat{\ell})\sqcup\{0\}$ given by $b\mapsto\widehat{b}$. We
will show that this bijection commutes with $\tilde{e}$, $\tilde{f}$, $\tilde{e}_{\overline{1}}$
and $\tilde{f}_{\overline{1}}$.
Note that $\tilde{f}_{\overline{1}},\tilde{e}_{\overline{1}}$ act only on $b_{r}$ for $b\in{\mathbf{B}}^{\otimes r}$.
In addition, we have $r\not\in A_{\ell}$ so that $\widehat{b}=ub_{r}$ for some $u$.
It follows that
$$\widehat{\tilde{f}_{\overline{1}}(b)}=\tilde{f}_{\overline{1}}(\widehat{b}),\quad\widehat{\tilde{e}_{\overline{1}}(b)}=\tilde{e}_{\overline{1}}(\widehat{b}).$$
We know that
$$\varphi(b)=\max\left\{k\geq 0\mathbin{;}\tilde{f}^{k}(b)\in{\mathbf{B}}^{\otimes r}\right\}\text{ and }\,\varepsilon(b)=\max\left\{k\geq 0\mathbin{;}\tilde{e}^{k}(b)\in{\mathbf{B}}^{\otimes r}\right\}.$$
Since $\overline{b}=\overline{\ell}$ is a sequence of trivial lattice permutations,
we have $\varphi(b)=\varphi(\widehat{b})$ and $\varepsilon(b)=\varepsilon(\widehat{b})$.
In particular, we have $\tilde{f}(\widehat{b})=0$ if and only if $\tilde{f}(b)=0$, and
$\tilde{e}(\widehat{b})=0$ if and only if $\tilde{e}(b)=0$.
Assume that $\tilde{f}(b)\neq 0$. Then we have
$\tilde{f}(b)=b_{1}\cdots\tilde{f}(b_{t})\cdots b_{r}$ for some $1\leq t\leq r$.
Since $\varphi(u)=\varepsilon(u)=0$ for every trivial lattice permutation $u$,
the tensor product rule implies that
$t\notin A_{\ell}$ and
$\tilde{f}(\widehat{b})=\widehat{\tilde{f}(b)}$.
Similarly, if $\tilde{e}(b)\neq 0$, then we have $\widehat{\tilde{e}(b)}=\tilde{e}(\widehat{b})$.
Hence the bijection $b\mapsto\widehat{b}$ commutes with $\tilde{e},\tilde{f},\tilde{e}_{\overline{1}}$ and $\tilde{f}_{\overline{1}}$.
It follows that the set $M\sqcup\{0\}$ is closed under the actions $\tilde{e},\tilde{f},\tilde{e}_{\overline{1}},\tilde{f}_{\overline{1}}$ and
$M$ is connected.
Since $\ell\in M$, we have $C(\ell)\subseteq M$ and hence $C(\ell)=M$, as desired.
∎
Example 3.7.
In Example 3.3(3), the element $\ell=22122122$ is
a $\mathfrak{q}(2)$-lowest weight vector in ${\mathbf{B}}^{\otimes 8}$. Then we
obtain
$A_{\ell}=\{3,4,6,7\}$, $\widehat{\ell}=2222$ and $\overline{\ell}=1212$.
We also have $C(\ell)\cong C(2222)=C_{{\mathfrak{gl}}_{2}}(1^{4})\sqcup C_{{\mathfrak{gl}}_{2}}(1^{3}2)$.
We close this section with a theorem that will be useful in the next
section. Let ${\mathbf{a}}=(a_{1},\ldots,a_{n})$ be a sequence of $1$’s
and $2$’s. We denote by $G(\mathbf{a^{\prime}})$ the basis element of ${\mathbf{V}}^{\otimes n}$ corresponding to $[L(\mathbf{a})]$ under $\Upsilon$, where
$\mathbf{a}^{\prime}=(a_{1},\ldots,a_{n})^{\prime}\mathbin{:=}(a_{1}^{\prime},%
\ldots,a^{\prime}_{n})$.
We write
$\mathbf{a}x=(a_{1},\ldots,a_{n},x)$ for $x=1,2$.
Then we have the following.
Theorem 3.8.
([2, Proposition 4], see also [6, Theorem 3.1])
Let $\mathbf{a}$, $\mathbf{a}_{1}$ and $\mathbf{a}_{2}$ be sequences in
$\{1,2\}$ and let $h=v_{1}\otimes v_{2}-v_{2}\otimes v_{1}$.
(1)
$G(1)=v_{1}$ and $G(2)=v_{2}$.
(2)
If $\mathbf{a}=2\mathbf{a}_{1}$, then $G(\mathbf{a})=v_{2}\otimes G(\mathbf{a}_{1})$.
(3)
If $\mathbf{a}=\mathbf{a}_{1}1$, then $G(\mathbf{a})=G(\mathbf{a}_{1})\otimes v_{1}$.
(4)
If $\mathbf{a}=\mathbf{a}_{1}(12)\mathbf{a}_{2}$ with $|\mathbf{a}_{1}|=k$ and $|\mathbf{a}|=m$, then $G(\mathbf{a})=h_{k}(G(\mathbf{a}_{1}\mathbf{a}_{2}))$, where $h_{k}:{\mathbf{V}}^{\otimes(m-2)}\rightarrow{\mathbf{V}}^{\otimes m}$ is the linear map given by
$$u_{1}\otimes\cdots\otimes u_{m-2}\longmapsto u_{1}\otimes\cdots\otimes u_{k}\otimes h\otimes u_{k+1}\otimes\cdots\otimes u_{m-2}.$$
Remark 3.9.
Let $\widetilde{\Upsilon}:G(\mathcal{O}_{n})\xrightarrow{\ \sim\ }{\mathbf{V}}^{\otimes n}$ be
the identification used in [3, §4.4]. Then we have $\psi\circ\widetilde{\Upsilon}=\Upsilon$, where $\psi:{\mathbf{V}}^{\otimes n}\rightarrow{\mathbf{V}}^{\otimes n}$ is given by $v_{a_{1}}\otimes\cdots\otimes v_{a_{n}}\mapsto v_{a^{\prime}_{1}}\otimes\cdots\otimes v_{a^{\prime}_{n}}$.
It is then not difficult to check that $G(\mathbf{a^{\prime}})=\psi(\widetilde{G}(\mathbf{a}))$, where $\widetilde{G}(\mathbf{a})$ denotes the upper global basis (= dual canonical basis) element corresponding to $\mathbf{a}$, which is given in [3].
4. Categorified odd Kashiwara operators
In this section we define the odd Kashiwara operators $\tilde{f}_{\overline{1}},\tilde{e}_{\overline{1}}$ on ${\rm Irr}(\mathcal{O}_{n})$ and show that ${\rm Irr}(\mathcal{O}_{n})$ has a $\mathfrak{q}(2)$-crystal structure. To
define $\tilde{f}_{\overline{1}},\tilde{e}_{\overline{1}}$ we will use tensor products with the
infinite-dimensional irreducible
$\mathfrak{gl}_{n}$-module $L(e_{n})$ with highest weight $e_{n}$, and its dual
$L(e_{n})^{*}$. The choice of
$L(e_{n})$ is justified by the properties listed in the next proposition.
Recall that, for a parabolic subalgebra $\mathfrak{p}$ of
$\mathfrak{g}$, a $\mathfrak{g}$-module is parabolically
induced from a $\mathfrak{p}$-module $M_{0}$ if $M=U(\mathfrak{g})\otimes_{U(\mathfrak{p})}M_{0}$. In this paper, we take $\mathfrak{p}$
to be the maximal parabolic subalgebra with nilradical
${\mathfrak{n}}_{\mathfrak{p}}$ and the Levi subalgebra
${\mathfrak{l}}_{\mathfrak{p}}=\mathfrak{gl}_{n-1}\oplus\mathfrak{gl}_{1}$.
Proposition 4.1.
(1)
Let $L(0)\otimes L(1)$ be the 1-dimensional
$\mathfrak{p}$-module on which $\mathfrak{n}_{\mathfrak{p}}$ acts
trivially. Then the $\mathfrak{gl}_{n}$-module $L(e_{n})$ is
parabolically induced from $L(0)\otimes L(1)$. In particular,
$$\text{Supp}(L(e_{n}))=\{e_{n}+\sum_{i=1}^{n-1}b_{i}(e_{n}-e_{i})\mid b_{i}\in{\mathbb{Z}}_{\geq 0}\}.$$
(2)
All the weight spaces of $L(e_{n})$ are 1-dimensional.
(3)
If a $\mathfrak{gl}_{n}$-module $M$ belongs to the category
$\mathcal{O}$, then $M\otimes L(e_{n})$ belongs to the category
$\mathcal{W}$.
Proof.
The proofs are standard. For (1) and (2), see for
example, [24, Lemma 11.2].
∎
Define the functors
$$\overline{\mathcal{E}}_{i}:{\mathcal{O}}_{i,n-i}\to{\mathcal{W}}_{i+1,n-i-1},\quad\overline{\mathcal{E}}_{i}={\rm pr}_{i+1}\circ\big{(}-\otimes L(e_{n})\big{)},$$
and set
$$\overline{\mathcal{E}}:{\mathcal{O}_{n}}\to{\mathcal{W}_{n}},\quad\overline{\mathcal{E}}=\bigoplus_{i=0}^{n}\overline{\mathcal{E}}_{i}.$$
The following proposition plays a crucial role in defining the odd
Kashiwara operator $\tilde{e}_{\overline{1}}$ on ${\rm Irr}(\mathcal{O}_{n})$.
Proposition 4.2.
(1)
The functor $\overline{\mathcal{E}}$ is an exact covariant functor such that
$$\overline{\mathcal{E}}:{\mathcal{O}_{n}}\longrightarrow{\mathcal{O}}_{n}.$$
(2)
$\overline{\mathcal{E}}(M(a_{1},...,a_{n}))=\begin{cases}M(a_{1},...,a_{n-1},2)%
&\mbox{ if $a_{n}=1$,}\\
0&\mbox{ if $a_{n}=2$.}\end{cases}$
(3)
$\overline{\mathcal{E}}(L(a_{1},...,a_{n}))=\begin{cases}L(a_{1},...,a_{n-1},2)%
&\mbox{ if $a_{n}=1$,}\\
0&\mbox{ if $a_{n}=2$.}\end{cases}$
Proof.
The fact that $\overline{\mathcal{E}}$ is exact and covariant is
standard. We next show that the image of $\overline{\mathcal{E}}$ is in $\mathcal{O}_{n}$ and prove (2).
We would like to show that if $M$ is in ${\mathcal{O}_{n}}$ then $\overline{\mathcal{E}}(M)$ is in ${\mathcal{O}_{n}}$ as well. It is enough to prove this for the projective cover $P$ of $M$ in ${\mathcal{O}_{n}}$. It is clear that $\overline{\mathcal{E}}(P)$ is
locally $U({\mathfrak{n}}_{+})$-nilpotent, so it remains to show that
$\overline{\mathcal{E}}(P)$ is finitely generated. Since every
projective in ${\mathcal{O}}$ has a Verma flag, we may assume that $P=M(\lambda)$ is a Verma module. But then by Proposition
4.1, $M(\lambda)\otimes L(e_{n})$ has an infinite filtration with
subquotients $M(\lambda+e_{n}+\sum_{i=1}^{n-1}b_{i}(e_{n}-e_{i}))$,
$b_{i}\in{\mathbb{Z}}_{\geq 0}$. The proof of this fact uses the same reasoning as the proof of the decomposition of $M(\lambda)\otimes L(e_{1})$ (for the latter, see for example [13, Theorem 3.6]). It is straightforward to check that
if $\lambda+\rho=\sum_{i=1}^{n}a_{i}e_{i}$ for $a_{i}\in\{1,2\}$,
then the $e_{n}$-coordinate of $\lambda+e_{n}+\sum_{i=1}^{n-1}b_{i}(e_{n}-e_{i})+\rho$ is $1$ or $2$ only if $b_{1}=...=b_{n-1}=0$ and
$a_{n}=1$. We thus proved a stronger statement: $\overline{\mathcal{E}}(M(\lambda))=M(\lambda+e_{n})$ if $a_{n}=1$ and $\overline{\mathcal{E}}(M(\lambda))=0$ otherwise, which implies (2).
(3) We will use the
notation introduced at the end of Section 3.
For a sequence $\mathbf{a}=a_{1}\cdots a_{n}$ in $\{1,2\}$, set $v_{\mathbf{a}}:=v_{a_{1}}\otimes\cdots\otimes v_{a_{n}}$.
Recall that the element $G(\mathbf{a}^{\prime})$ corresponds to $[L(\mathbf{a})]$ under $\Upsilon$.
For the case $a_{n}=2$, recall that $G(\mathbf{a}^{\prime}1)=\sum_{\mathbf{b}}c_{\mathbf{b}}^{\mathbf{a}^{\prime}}\ v_{\mathbf{b}}\otimes v_{1}$ for some $c_{\mathbf{b}}^{\mathbf{a}^{\prime}}\in{\mathbb{Z}}$ by Theorem 3.8(3).
Hence we have $[L(\mathbf{a}2)]=\sum_{\mathbf{b}}c_{\mathbf{b}}^{\mathbf{a}^{\prime}}\ [M(\mathbf{b}^{\prime}2)]$.
We obtain $\overline{\mathcal{E}}(L(\mathbf{a}2))=0$ by (2).
In order to prove the case $a_{n}=1$, it is sufficient to
prove the following statement:
(4.1)
If $G(\mathbf{a}2)=\sum_{\mathbf{b}}c_{\mathbf{b}}^{\mathbf{a}}\ v_{\mathbf{b}}\otimes v_{2}+\sum_{\mathbf{b}}d_{\mathbf{b}}^{\mathbf{a}}\ v_{\mathbf{b}}\otimes v_{1}$, then $G(\mathbf{a}1)=\sum_{\mathbf{b}}c_{\mathbf{b}}^{\mathbf{a}}\ v_{\mathbf{b}}\otimes v_{1}$.
Indeed, passing through $\Upsilon$, it implies that if $[L(\mathbf{a}^{\prime}1)]=\sum_{\mathbf{b}}c_{\mathbf{b}}^{\mathbf{a}}\ [M(\mathbf{b}^{\prime}1)]+\sum_{\mathbf{b}}d_{\mathbf{b}}^{\mathbf{a}}\ [M(\mathbf{b}^{\prime}2)]$ for some $c_{\mathbf{b}}^{\mathbf{a}},d_{\mathbf{b}}^{\mathbf{a}}\in{\mathbb{Z}}$, then
$[L(\mathbf{a}^{\prime}2)]=\sum_{\mathbf{b}}c_{\mathbf{b}}^{\mathbf{a}}\ [M(\mathbf{b}^{\prime}2)]$.
Hence, by (2) we have
$$[L(\mathbf{a}^{\prime}2)]=\sum_{\mathbf{b}}c^{\mathbf{a}}_{\mathbf{b}}\ [M(\mathbf{b}^{\prime}2)]=\sum_{\mathbf{b}}c^{\mathbf{a}}_{\mathbf{b}}\ [\overline{\mathcal{E}}(M(\mathbf{b}^{\prime}1))]=[\overline{\mathcal{E}}]\Big{(}\sum_{\mathbf{b}}c^{\mathbf{a}}_{\mathbf{b}}\ [M(\mathbf{b}^{\prime}1)]+\sum_{\mathbf{b}}d^{\mathbf{a}}_{\mathbf{b}}\ [M(\mathbf{b}^{\prime}2)]\Big{)}=[\overline{\mathcal{E}}(L(\mathbf{a}^{\prime}1))].$$
Thus $L(\mathbf{a}^{\prime}2)$ is isomorphic to $\overline{\mathcal{E}}(L(\mathbf{a}^{\prime}1))$, as desired.
We will use induction on the length of $\mathbb{a}$. If the length of $\mathbb{a}$ is zero or 1, then it is clear
from Theorem 3.8.
First, we consider the case $\mathbb{a}=2\mathbb{a_{1}}$ for some $\mathbb{a_{1}}$.
By Theorem 3.8(2),
we have
$G(\mathbb{a}2)=G(2\mathbb{a_{1}}2)=v_{2}\otimes G(\mathbb{a_{1}}2)$ and $G(\mathbb{a}1)=G(2\mathbb{a_{1}}1)=v_{2}\otimes G(\mathbb{a_{1}}1)$.
Then (4.1) follows from the induction hypothesis.
Second, if $\mathbb{a}=1^{n}$, then $G(1^{n}2)=v_{1}^{\otimes(n-1)}\otimes v_{1}\otimes v_{2}\,-\,v_{1}^{\otimes(n-1)}\otimes v_{2}\otimes v_{1}$ and
$G(1^{n+1})=v_{1}^{\otimes(n+1)}$ by Theorem 3.8(3),(4). Thus we obtain (4.1).
Last, let $\mathbb{a}=1^{k}12\mathbb{a_{1}}$ for some $k\geq 0$ and $\mathbb{a_{1}}$.
By the induction hypothesis, we know that
if $G(1^{k}\mathbb{a_{1}}2)=\sum_{\mathbb{b}}c^{\mathbb{a_{1}}}_{\mathbb{b}}\ v_{\mathbb{b}}\otimes v_{2}+\sum_{\mathbb{b}}d^{\mathbb{a_{1}}}_{\mathbb{b}}\ v_{\mathbb{b}}\otimes v_{1}$ for some $c^{\mathbb{a_{1}}}_{\mathbb{b}},d^{\mathbb{a_{1}}}_{\mathbb{b}}\in{\mathbb{Z}}$,
then $G(1^{k}\mathbb{a_{1}}1)=\sum_{\mathbb{b}}c^{\mathbb{a_{1}}}_{\mathbb{b}}\ v_{\mathbb{b}}\otimes v_{1}$.
Using Theorem 3.8(4), we obtain
$$G(1^{k}12\mathbb{a_{1}}2)=\sum_{\mathbb{b}}c^{\mathbb{a_{1}}}_{\mathbb{b}}\ v_{\mathbb{b_{1}}}\otimes h\otimes v_{\mathbb{b_{2}}}\otimes v_{2}+\sum_{\mathbb{b}}d^{\mathbb{a_{1}}}_{\mathbb{b}}\ v_{\mathbb{b_{1}}}\otimes h\otimes v_{\mathbb{b_{2}}}\otimes v_{1},$$
and
$$G(1^{k}12\mathbb{a_{1}}1)=\sum_{\mathbb{b}}c^{\mathbb{a_{1}}}_{\mathbb{b}}\ v_{\mathbb{b_{1}}}\otimes h\otimes v_{\mathbb{b_{2}}}\otimes v_{1},$$
where $h=v_{1}\otimes v_{2}-v_{2}\otimes v_{1}$ and $\mathbb{b_{1}}$ (respectively, $\mathbb{b_{2}}$) stands for the first $k$ terms (respectively, last $|\mathbb{b}|-k$ terms) of $\mathbb{b}$.
Therefore, we obtain (4.1).
∎
Remark 4.3.
Using Proposition 4.2(2) we may show that $\overline{\mathcal{E}}:{\mathcal{O}}_{n}\to{\mathcal{O}}_{n}$ coincides with the functor
$$M\mapsto\mbox{pr}_{\mathcal{W}_{n}}(M\otimes L(e_{n})),$$
where $\mbox{pr}_{\mathcal{W}_{n}}:{\mathcal{W}}\to{\mathcal{W}}_{n}$ stands for the canonical projection functor.
In view of the above proposition, it is natural to define
$$\tilde{e}_{\overline{1}}([S])=[\overline{\mathcal{E}}(S)]\quad\text{for}\ S\in{\rm Irr}(\mathcal{O}_{n}).$$
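Spelled out on the labels, this definition combined with Proposition 4.2 (parts (2) and (3)) gives the explicit action

$$\tilde{e}_{\overline{1}}\big([L(a_{1},\ldots,a_{n-1},1)]\big)=[L(a_{1},\ldots,a_{n-1},2)],\qquad\tilde{e}_{\overline{1}}\big([L(a_{1},\ldots,a_{n-1},2)]\big)=0,$$

so $\tilde{e}_{\overline{1}}$ raises the last entry from $1$ to $2$ and annihilates the classes whose last entry is already $2$.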
Now we will construct a left adjoint of $\overline{\mathcal{E}}$, which will be denoted by
$\overline{\mathcal{F}}$. We will apply the technique originally
introduced by Fiebig for Kac-Moody algebras [5] and later
adapted by Kåhrström [14] to a case similar to ours.
For $\lambda\in\mathfrak{h}^{*}$ and a $\mathfrak{gl}_{n}$-module $M$ in
$\mathcal{W}$, denote by $M^{\nleqslant\lambda}$ the submodule of $M$
generated by all the weight spaces $M^{\mu}$ with
$\mu\not\leqslant\lambda$. Set
$$M^{\leqslant\lambda}:=M/M^{\nleqslant\lambda}.$$
For $i=0,...,n$, define
$$\overline{\mathcal{F}}_{i}:{\mathcal{O}}_{i;n}\to{\mathcal{W}}_{i-1,n-i+1},\qquad\overline{\mathcal{F}}_{i}={\rm pr}_{i-1}\circ\big(-\otimes L(e_{n})^{*}\big)^{\leqslant(\omega_{i}-\rho)}$$
(recall that $\omega_{i}:=2\sum_{j=1}^{i}e_{j}+\sum_{j=i+1}^{n}e_{j}$). Now define
$$\overline{\mathcal{F}}:{\mathcal{O}}_{n}\to{\mathcal{W}}_{n},\qquad\overline{\mathcal{F}}=\bigoplus_{i=0}^{n}\overline{\mathcal{F}}_{i}.$$
Proposition 4.4.
Let $\lambda\in{\mathfrak{h}}^{*}$.
(1)
The functor $M\mapsto M^{\leqslant\lambda}$
is right exact on $\mathcal{W}$.
(2)
If $M$ belongs to $\mathcal{O}$,
then $\left(M\otimes L(e_{n})^{*}\right)^{\leqslant\lambda}$ belongs to $\mathcal{O}$ as well.
(3)
The functor $\overline{\mathcal{F}}_{i}$
is the left adjoint of the functor $\overline{\mathcal{E}}_{i-1}$. Furthermore, we have
$$\overline{\mathcal{F}}_{i}(M(a_{1},\ldots,a_{n}))=\begin{cases}M(a_{1},\ldots,a_{n-1},1)&\text{if}\ a_{n}=2,\\
0&\text{if}\ a_{n}=1.\end{cases}$$
(4)
The functor $\overline{\mathcal{F}}$ is the left adjoint of $\overline{\mathcal{E}}$.
Proof.
Part (1) is [14, Lemma 2.9], while part (2) is [14, Corollary 2.12].
For part (3) we follow the proof of [14, Theorem 3.4].
Note that Theorem 3.4 in [14] is for the principal block ${\mathcal{O}}_{0}$ of ${\mathcal{O}}$, namely for the functor $M\mapsto{\rm pr}_{{\mathcal{O}}_{0}}\left(M\otimes L(e_{n})^{*}\right)^{\leqslant 0}$, but the same reasoning applies to the block ${\mathcal{O}}_{i;n}$.
To find ${\rm pr}_{i-1}\left(M(a_{1},...,a_{n})\otimes L(e_{n})^{*}\right)^{\leqslant(\omega_{i}-\rho)}$ we first use Proposition 4.1 and fix a basis $v_{b}$, $b=(b_{1},...,b_{n-1})\in\left({\mathbb{Z}}_{\geq 0}\right)^{n-1}$, such that $\operatorname{wt}(v_{b})=e_{n}+\sum_{j=1}^{n-1}b_{j}(e_{n}-e_{j})$. Then the set $\{v_{b}^{*}\;|\;b\in\left({\mathbb{Z}}_{\geq 0}\right)^{n-1}\}$ forms a basis of $L(e_{n})^{*}$. Thus, if $v$ is a highest weight vector of $M(a_{1},...,a_{n})$, then
$$M(a_{1},...,a_{n})\otimes L(e_{n})^{*}=\bigoplus_{b}U({\mathfrak{n}}_{-})(v\otimes v_{b}^{*})$$
as $U({\mathfrak{n}}_{-})$-modules. Now using [14, Proposition 2.10] we have that
$$\left(M(a_{1},...,a_{n})\otimes L(e_{n})^{*}\right)^{\leqslant(\omega_{i}-\rho)}=\bigoplus_{{\rm wt}(v\otimes v_{b}^{*})\leqslant\omega_{i}-\rho}U({\mathfrak{n}}_{-})(v\otimes v_{b}^{*}).$$
Since the $e_{n}$-coordinate of $\operatorname{wt}(v\otimes v_{b}^{*})+\rho$ is $a_{n}-1-\sum_{j=1}^{n-1}b_{j}$, we have that $\operatorname{wt}(v\otimes v_{b}^{*})+\rho\leqslant\omega_{i}$ only if $a_{n}-1-\sum_{j=1}^{n-1}b_{j}\geq 1$. Hence $a_{n}=2$ and $b_{1}=\cdots=b_{n-1}=0$. This completes the proof of (3). Part (4) follows from part (3).
∎
Set
$$\tilde{f}_{\overline{1}}([S]):=[{\rm hd}\,\overline{\mathcal{F}}(S)].$$
Remark 4.5.
One easily checks that even for $n=2$, $[{\rm hd}\,\overline{\mathcal{F}}(S)]$ might be different from $[\overline{\mathcal{F}}(S)]$. Indeed, if $S=L(2,2)$, then by Proposition 4.4(3),
$$[\overline{\mathcal{F}}(L(2,2))]=[\overline{\mathcal{F}}(M(2,2))]=[M(2,1)]=[L(2,1)]+[L(1,2)].$$
Lemma 4.6.
For $a_{1},\ldots,a_{n-1}\in\{1,2\}$, we have
$$\overline{\mathcal{F}}(L(a_{1},\ldots,a_{n-1},1))=0\quad\text{and}\quad{\rm hd}\,\overline{\mathcal{F}}(L(a_{1},\ldots,a_{n-1},2))=L(a_{1},\ldots,a_{n-1},1).$$
Proof.
By Proposition 4.4(3), we know that $\overline{\mathcal{F}}$ maps a simple module in ${\mathcal{O}}_{n}$ to a highest weight module in ${\mathcal{O}}_{n}$ or $0$. Hence $\overline{\mathcal{F}}(S)$ has a simple head for $S\in{\rm Irr}({\mathcal{O}}_{n})$, if it is nonzero. Now the assertion follows from Proposition 4.2(3) and Proposition 4.4(4). ∎
Theorem 4.7.
(1)
There is a $\mathfrak{q}(2)$-crystal structure on
$\text{Irr}(\mathcal{O}_{n})$ with odd Kashiwara operators $\tilde{e}_{\overline{1}}$ and
$\tilde{f}_{\overline{1}}$ given above.
(2)
As a $\mathfrak{q}(2)$-crystal, $\text{Irr}(\mathcal{O}_{n})$ is
isomorphic to ${\mathbf{B}}^{\otimes n}$.
Proof.
Let $\psi$ be the map given by $(a_{1},\ldots,a_{n})\mapsto a_{1}^{\prime}\otimes\cdots\otimes a_{n}^{\prime}$, where $a_{i}=1$ or $2$, $1^{\prime}=2$,
$2^{\prime}=1$.
For part (1), we use Proposition 4.2(3) and Lemma 4.6.
For part (2),
one can easily check that $x[L(a_{1},...,a_{n})]=[L(\psi^{-1}x\psi(a_{1},...,a_{n}))]$ for $x=\tilde{f}_{\overline{1}},\tilde{e}_{\overline{1}}$, whenever $x[L(a_{1},...,a_{n})]\neq 0$.
∎
5. Invariants of connected components
One of the important properties of the $\mathfrak{gl}_{2}$-crystal
structure of ${\rm Irr}(\mathcal{O}_{n})$ is that the isomorphism
classes of simple objects in a fixed parabolic subcategory of
$\mathcal{O}_{n}$ form a $\mathfrak{gl}_{2}$-subcrystal of ${\rm Irr}(\mathcal{O}_{n})$. A similar but slightly weaker statement holds for
the $\mathfrak{q}(2)$-crystal ${\rm Irr}(\mathcal{O}_{n})$. To
formulate this statement, we need to introduce some notation.
For a sequence ${\mathbf{a}}=(a_{1},...,a_{n})$ of $1$’s and $2$’s, let
$$I_{\rm fin}(a_{1},...,a_{n}):=\{i\;|\;a_{i}=2\mbox{ and }a_{i+1}=1\}.$$
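As a quick sanity check of this definition, $I_{\rm fin}$ records the positions of the pattern $2,1$ and can be computed mechanically; the following small Python sketch (an illustration only, not part of the paper) uses 1-based positions as in the text:

```python
def I_fin(a):
    """Return {i : a_i = 2 and a_{i+1} = 1}, with 1-based indices as in the text."""
    return {i + 1 for i in range(len(a) - 1) if a[i] == 2 and a[i + 1] == 1}

# The sequence (2,1,2,2,1) has the pattern 2,1 at positions 1 and 4:
print(sorted(I_fin([2, 1, 2, 2, 1])))  # -> [1, 4]
```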
In particular, $I_{\rm fin}(a_{1},...,a_{n})$ is a subset of
$\{1,...,n-1\}$. Recall that for an irreducible
$\mathfrak{gl}_{n}$-module $M$ in $\mathcal{W}$ and a root $\alpha$ of
$\mathfrak{gl}_{n}$, every root vector $x$ in the $\alpha$-root space
acts either injectively or locally finitely on $M$. Indeed, this follows from the fact that the set of all $m$ in $M$ for which $x^{N}m=0$ for sufficiently large $N\geq 1$ forms a submodule of $M$.
For a module $L$ in the category $\mathcal{O}$, we define
$\Pi_{\rm fin}(L)$ to be the set of simple roots $\alpha$ such that the
vectors in the $(-\alpha)$-root space act locally finitely on $L$.
For a subset $I$ of $\{1,...,n-1\}$, denote by ${\mathcal{O}}_{I}$ the
parabolic subcategory of ${\mathcal{O}}$ consisting of all
$\mathfrak{gl}_{n}$-modules $M$ on which the root vectors of $-e_{i}+e_{i+1}$ $(i\in I)$ act locally finitely.
Some properties of ${\mathcal{O}}_{I}$ related to the $\mathfrak{gl}_{2}$-crystal structure of $\text{Irr}(\mathcal{O}_{n})$ are listed in the following
proposition. We refer the reader to
[13, Chapter 9] for other important properties of ${\mathcal{O}}_{I}$.
Proposition 5.1.
Let $a_{i}=1$ or $2$ for $i=1,...,n$.
(1)
$\Pi_{\rm fin}(L(a_{1},...,a_{n}))=\{e_{i}-e_{i+1}\;|\;i\in I_{\rm fin}(a_{1},...,a_{n})\}.$
(2)
Let $L$ be an irreducible $\mathfrak{gl}_{n}$-module whose isomorphism
class belongs to the connected component $C([L(a_{1},\ldots,a_{n})])$ in
the $\mathfrak{gl}_{2}$-crystal $\text{Irr}(\mathcal{O}_{n})$.
Then $\Pi_{\rm fin}(L)=\Pi_{\rm fin}(L(a_{1},...,a_{n}))$. In
particular, $L$ belongs to ${\mathcal{O}}_{I}$, where $I=I_{\rm fin}(a_{1},...,a_{n})$.
(3)
For every subset $I$ of $\{1,...,n-1\}$, the isomorphism classes of
irreducible $\mathfrak{gl}_{n}$-modules in $\mathcal{O}_{n}\cap{\mathcal{O}}_{I}$ form a $\mathfrak{gl}_{2}$-subcrystal of ${\rm Irr}(\mathcal{O}_{n})$.
Proof.
Part (1) is a standard fact. For parts (2) and (3), we use Theorem 2.3 or the fact that if $\alpha\in\Pi_{\rm fin}(L)$ and
$x$ is in the $(-\alpha)$-root space then $x$ acts locally finitely on $L\otimes L(e_{1})$ and $L\otimes L(e_{1})^{*}$.
∎
The $\mathfrak{q}(2)$-version of the above proposition is the
following.
Proposition 5.2.
Let $a_{i}=1$ or $2$ for $i=1,...,n$.
(1)
If $\overline{\mathcal{E}}(L(a_{1},\ldots,a_{n}))\neq 0$ (equivalently, $a_{n}=1$), then
$$\Pi_{\rm fin}(\overline{\mathcal{E}}(L(a_{1},\ldots,a_{n})))=\Pi_{\rm fin}(L(a_{1},\ldots,a_{n}))\setminus\{e_{n-1}-e_{n}\}.$$
(2)
Let $L(b_{1},\ldots,b_{n})$ $(b_{i}=1,2)$ be the irreducible
$\mathfrak{gl}_{n}$-module whose isomorphism class belongs to the
connected component $C([L(a_{1},\ldots,a_{n})])$ in the
$\mathfrak{q}(2)$-crystal $\text{Irr}(\mathcal{O}_{n})$.
Then $L(b_{1},\ldots,b_{n})$ belongs to ${\mathcal{O}}_{I}$, where
$I=I_{\rm fin}(a_{1},...,a_{n})\setminus\{n-1\}$.
(3)
For every subset $I$ of $\{1,...,n-2\}$, the
isomorphism classes of irreducible $\mathfrak{gl}_{n}$-modules in
$\mathcal{O}_{n}\cap{\mathcal{O}}_{I}$ form a ${\mathfrak{q}}(2)$-subcrystal of ${\rm Irr}(\mathcal{O}_{n})$.
Proof.
Part (1) follows from Theorem 4.7(2) and Proposition
5.1(1). Parts (2) and (3) follow from (1).
∎
We finish this section with a result on the decomposition of the ${\mathfrak{q}}(2)$-connected components of ${\mathbf{B}}^{\otimes n}$ into $\mathfrak{gl}_{2}$-connected components.
Proposition 5.3.
Let $\ell$ be a ${\mathfrak{q}}(2)$-lowest weight vector in ${\mathbf{B}}^{\otimes n}$ with $|\widehat{\ell}|\geq 2$. Then $C([L(\ell^{\prime})])=A\sqcup B$, where $A$ and $B$ are the following $\mathfrak{gl}_{2}$-subcrystals, which are connected in ${\rm Irr}(\mathcal{O}_{n})$:
$$A=\{[L(a^{\prime})]\;|\;\overline{a}=\overline{\ell},\ I_{\rm fin}(\widehat{a}^{\prime})=\emptyset\},\qquad B=\{[L(a^{\prime})]\;|\;\overline{a}=\overline{\ell},\ I_{\rm fin}(\widehat{a}^{\prime})=\{|\widehat{\ell}|-1\}\}.$$
Proof.
By Theorem 4.7(2), we can use the description of ${\mathbf{B}}^{\otimes n}$ in Section 3.
Then by Proposition 3.6(2), we may assume that $\ell=2^{n}$ and hence we obtain $C([L(\ell^{\prime})])=C_{\mathfrak{gl}_{2}}([L(2^{n})])\sqcup C_{\mathfrak{gl}_{2}}([L(2^{n-1}1)])$ by Proposition 3.4.
The statement follows from Proposition 5.1(2).
∎
References
[1]
G. Benkart, S.-J. Kang, M. Kashiwara,
Crystal bases for the quantum superalgebra
$U_{q}(\mathfrak{gl}(m,n))$,
J. Amer. Math. Soc. 13 (2000), 293–331.
[2]
J. Bernstein, I. Frenkel,
M. Khovanov,
A categorification of the Temperley-Lieb algebra and Schur quotients of $U(sl_{2})$
via projective and Zuckerman functors,
Selecta Math. (N.S.) 5 (1999), no. 2, 199–241.
[3]
J. Brundan, A. Kleshchev,
Representations of shifted Yangians and finite W-algebras,
Mem. Amer. Math. Soc. 196 (2008), no. 918, viii+107 pp.
[4]
J. Chuang, R. Rouquier,
Derived equivalences for symmetric groups and $\mathfrak{sl}_{2}$-categorification,
Ann. Math. 167 (2) (2008), no. 1, 245–298.
[5]
P. Fiebig, Centers and translation functors for the category $\mathcal{O}$ over Kac-Moody algebras,
Math.
Z. 243 (4) (2003), 689–717.
[6]
I. B. Frenkel, M. G. Khovanov, A. A. Kirillov, Jr., Kazhdan-Lusztig polynomials and canonical basis, Transform.
Groups 3 (4) (1998), 321–336.
[7]
I. Frenkel, M. Khovanov, C. Stroppel,
A categorification of finite-dimensional irreducible representations of quantum
$sl_{2}$ and their tensor products,
Selecta Math. (N.S.) 12 (2006), 379–431.
[8]
D. Grantcharov, J. H. Jung, S.-J. Kang, M. Kashiwara, M. Kim,
Quantum queer superalgebra and crystal bases,
Proc. Japan Acad. Ser. A Math. Sci. 86 (2010), 177–182.
[9]
D. Grantcharov, J. H. Jung, S.-J. Kang, M. Kashiwara, M. Kim,
Crystal bases for the quantum queer superalgebra,
to appear in J. Eur. Math. Soc.
[10]
D. Grantcharov, J. H. Jung, S.-J. Kang, M. Kashiwara, M. Kim,
Crystal bases for the quantum queer superalgebra and semistandard decomposition tableaux,
Trans. Amer. Math. Soc. 366 (2014) 457–489.
[11]
D. Grantcharov, J. H. Jung, S.-J. Kang, M. Kim,
Highest weight modules over quantum queer superalgebra
$U_{q}({\mathfrak{q}}(n))$,
Commun. Math. Phys. 296
(2010), 827–860.
[12]
J. Hong, S.-J. Kang, Introduction to Quantum Groups and
Crystal Bases, Graduate Studies in Mathematics 42,
American Mathematical Society, 2002.
[13]
J. Humphreys, Representations of Semisimple Lie
Algebras in the BGG Category $\mathcal{O}$, Graduate Studies in Mathematics 94,
American Mathematical Society, 2008.
[14]
J. Kåhrström,
Tensoring with infinite-dimensional modules in ${\mathcal{O}}_{0}$,
Algebr. Represent. Theory 13 (2010), 561–587.
[15]
S.-J. Kang, Crystal bases for quantum affine algebras and
combinatorics of Young walls, Proc. Lond. Math. Soc. (3) 86
(2003), 29–69.
[16]
M. Kashiwara,
Crystalizing the $q$-analogue
of universal enveloping algebras,
Commun. Math. Phys. 133 (1990), 249–260.
[17]
M. Kashiwara,
On crystal bases of the $q$-analogue of
universal enveloping algebras,
Duke Math. J. 63 (1991), 465–516.
[18]
M. Kashiwara,
Crystal base and Littelmann’s refined Demazure character formula,
Duke Math. J. 71 (1993), 839–858.
[19]
J.-H. Kwon, Super duality and Crystal bases for quantum orthosymplectic superalgebras, arXiv:1301.1756.
[20]
M. Kashiwara, T. Nakashima,
Crystal graphs for representations of the $q$-analogue of classical Lie algebras,
J. Algebra 165 (1994), 295–345.
[21]
I. Losev,
Highest weight $sl_{2}$-categorifications I: crystals,
Math. Z. 274 (2013), 1231–1247.
[22]
G. Lusztig, Canonical bases arising from quantized enveloping algebras, J. Amer. Math. Soc. 3 (1990), 447–498.
[23]
G. Lusztig, Quivers, perverse sheaves, and quantized enveloping algebras, J. Amer. Math. Soc. 4 (1991), 365–421.
[24]
O. Mathieu, Classification of weight modules,
Ann. Inst. Fourier 50 (2000), 537–592.
[25]
G. Olshanski,
Quantized universal enveloping
superalgebra of type $Q$ and a super-extension of the Hecke
algebra,
Lett. Math. Phys. 24 (1992), 93–102.
[26]
A. Sergeev,
The tensor algebra of the tautological representation as a module over the Lie superalgebras $\mathfrak{gl}(n,m)$ and $Q(n)$,
Mat. Sb. 123 (1984), 422–430 (in Russian).
[27]
C. Stroppel,
Categorification of the Temperley-Lieb category, tangles, and cobordisms via
projective functors,
Duke Math. J. 126 (2005), 547–596.
Convexification of a 3-D coefficient inverse scattering problem
Supported by US Army Research Laboratory and US Army Research Office grant W911NF-15-1-0233 and by the Office of Naval Research grant N00014-15-1-2330. In addition, the work of Kolesov A.E. was partially supported by Mega-grant of the Russian Federation Government (N14.Y26.31.0013) and RFBR (project N17-01-00689A).
Michael V. Klibanov (the corresponding author),
Aleksandr E. Kolesov
Department of Mathematics & Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223, USA ([email protected], [email protected])
Institute of Mathematics and Information Science, North-Eastern Federal University, Yakutsk, Russia ([email protected])
Abstract
A version of the so-called “convexification” numerical
method for a coefficient inverse scattering problem for the 3D Helmholtz
equation is developed analytically and tested numerically. Backscattering
data are used, which result from a single direction of the propagation of
the incident plane wave on an interval of frequencies. The method converges
globally. The idea is to construct a weighted Tikhonov-like functional. The
key element of this functional is the presence of the so-called Carleman
Weight Function (CWF). This is the function which is involved in the Carleman
estimate for the Laplace operator. This functional is strictly convex on any
appropriate ball in a Hilbert space for an appropriate choice of the
parameters of the CWF. Thus, both the absence of local minima and
convergence of minimizers to the exact solution are guaranteed. Numerical tests demonstrate
a good performance of the resulting algorithm. Unlike the previous
so-called tail functions globally convergent method, we neither
impose a smallness assumption on the interval of wavenumbers, nor
do we iterate with respect to the so-called tail functions.
Keywords: coefficient inverse scattering problem, Carleman weight
function, globally convergent numerical method
2010 Mathematics Subject Classification: 35R30.
1 Introduction
In this work, we develop a version of the so-called “convexification” numerical method for a coefficient inverse scattering
problem (CISP) for the 3D Helmholtz equation with backscattering data
resulting from a single measurement event which is generated by a single
direction of the propagation of the incident plane wave on an interval of
frequencies. We present both the theory and numerical results. Our method
converges globally. This is a generalization to the 3D case of our (with
coauthors) previous 1D version of the convexification [1]. Three main advantages of the convexification method
over the previously developed so-called “tail
functions” globally convergent method for a similar CISP [2, 3, 4, 5, 6, 7]
are: (1) To solve our problem, we construct a globally strictly convex cost
functional with the Carleman Weight Function (CWF) in it, (2) we do not
impose in our convergence analysis the smallness assumption on the interval
of wavenumbers, and (3) we do not iterate with respect to the so-called
“tail functions”.
It is well known that any CISP is both highly nonlinear and ill-posed. These
two factors cause substantial difficulties in numerical solutions of these
problems. A globally convergent method (GCM) for a CISP is such a
numerical method, which has a rigorous guarantee of reaching a sufficiently
small neighborhood of the exact solution of that CISP without any advanced
knowledge of this neighborhood. In addition, the size of this neighborhood
should depend only on approximation errors and the level of noise in the
data.
Over the years the first author with coauthors has proposed a variety of
globally convergent methods for CISPs with single measurement data, see,
e.g. [2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 13, 14, 15], and
references cited therein. These methods can be classified into two types.
Methods of the first type, which we call the tail functions methods,
are certain iterative processes. On each iterative step one solves the
Dirichlet boundary value problem for a linear elliptic Partial Differential
Equation (PDE). This PDE depends on the iteration number. The solution of
that problem enables one to update the unknown coefficient. Using this
update, one updates the so-called tail function, which is a complement of a
certain truncated integral, where the integration is carried out with
respect to the wavenumber. The stopping criterion for the iterative process
is developed computationally. The tail function method was successfully tested on experimental backscattering data [4, 5, 6, 7, 14].
Globally convergent numerical methods of the second type are called the
convexification methods. They are based on the minimization of the
weighted Tikhonov-like functional with the CWF in it. The CWF is the
function which is involved in the Carleman estimate for the corresponding
PDE operator. The CWF can be chosen in such a way that the above functional
becomes strictly convex on a ball of an arbitrary radius in a certain
Hilbert space (see some details in this section below). Note that the
majority of known numerical methods of solutions of nonlinear ill-posed
problems minimize conventional least squares cost functionals [16, 17, 18], which are usually non convex and have
multiple local minima and ravines, see, e.g. [19] for a good
numerical example of multiple local minima. Hence, a gradient-like method
for such a functional converges to the exact solution only if the starting
point of iterations is located in a sufficiently small neighborhood of this
solution. Some other effective approaches to numerical methods for nonlinear
ill-posed problems can be found in [20, 21].
Various versions of the convexification methods have been proposed since the
first work [9], see [10, 11, 12]. However, these versions have some theoretical gaps,
which have limited their numerical studies so far. In the recent works [8, 13, 22] the attention to
the convexification method was revived. Theoretical gaps were eliminated in
[23] and thorough numerical studies for one
dimensional problems were performed [1, 15]. Besides, in [24] the convexification method was developed for
ill-posed problems for quasilinear PDEs and corresponding numerical studies
for the 1D case were conducted in [22, 23]. The idea of any version of the
convexification has direct roots in the method of [25],
which is based on Carleman estimates. The method of [25]
was originally designed only for proofs of uniqueness theorems for CIPs,
also see, e.g. the book [12] and the recent survey [26]. Recently an interesting version of the convexification was
published in [27] for a CISP for the hyperbolic equation $u_{tt}=\Delta u+a\left(x\right)u$ with the unknown coefficient $a\left(x\right)$ in the case when one of the initial conditions does not vanish.
The method of [27] is also based on the idea of [25] and has some roots in [8, 13].
By the convexification, one constructs a weighted Tikhonov-like functional $J_{\lambda}$ on a closed ball $\overline{B\left(R\right)}$ of an
arbitrary radius $R>0$ and with the center at $\left\{0\right\}$ in an
appropriate Hilbert space. Here $\lambda>0$ is a parameter. The key theorem
claims that one can choose a number $\lambda\left(R\right)>0$ such that
for all $\lambda\geq\lambda\left(R\right)$ the functional $J_{\lambda}$
is strictly convex on $\overline{B\left(R\right)}.$ Furthermore, the
existence of the unique minimizer of $J_{\lambda}$ on $\overline{B\left(R\right)}$ as well as convergence of minimizers to the exact solution when
the level of noise in the data tends to zero are proven. In addition, it is
proven that the gradient projection method reaches a sufficiently small
neighborhood of the exact coefficient when starting from an arbitrary point
of $B\left(R\right)$. Since $R>0$ is an arbitrary number, then this is a
globally convergent numerical method.
Due to a broad variety of applications, Inverse Scattering Problems (ISPs)
are quite popular in the community of experts in inverse problems. There are
plenty of works dedicated to this topic. Since this paper is not a survey,
we refer to only few of them, e.g. [17, 18, 20, 21, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43] and references cited
therein. We note that the authors of [33] have considered a
modified tail functions method. As stated above, we are interested in a CISP
for the Helmholtz equation with the data generated by a single measurement
event. As to the CISPs with multiple measurements, we refer to a global
reconstruction procedure, which was developed and numerically implemented in
[37], also see [38, 39] for further developments and numerical
studies. Actually, this is an effective extension of the classical 1D
Gelfand-Krein-Levitan method to the 2D case.
In section 2 we formulate our forward and inverse problems. In section 3 we
construct the weighted Tikhonov-like functional with the CWF in it. In
section 4 we formulate our theorems. We prove them in section 5. In
section 6 we present numerical results.
2 Problem Statement
2.1 The Helmholtz equation
Just as in the majority of the above cited previous works of the first
author with coauthors about GCM, we focus in this paper on applications to the
detection and identification of targets, which mimic antipersonnel land
mines (especially plastic mines, i.e. dielectrics) and improvised explosive
devices (IEDs) using measurements of a single component of the electric wave
field. In this case the medium is assumed to be non magnetic, non absorbing,
and the dielectric constant in it should be represented by a function, which
is mostly a constant with some small sharp inclusions inside (however, we do
not assume in our theory such a structure of the dielectric constant). These
inclusions model antipersonnel land mines and IEDs. Suppose that the
incident electric field has only one non zero component. It was established
numerically in [44] that the propagation of that component through
such a medium is well governed by the Helmholtz equation rather than by the
full Maxwell’s system. Besides, in all above cited works of the first author
with coauthors about experimental data those targets were accurately imaged
by the above mentioned tail functions GCM using experimentally measured
single component of the electric field and modeling the propagation of that
component by the Helmholtz equation. In addition, we are unaware of a GCM
for a CISP with single measurement data for the Maxwell’s system. Thus, we
use the Helmholtz equation below.
The need for the detection and identification of, e.g. land mines, might, in
particular, occur on a battlefield. Due to the security considerations, the
amount of collected data should be small in this case, and these should be
the backscattering data. Thus, we use only a single direction of the
propagation of the incident plane wave of the electric field and assume
measurements of only the backscattering part of the corresponding component
of that field.
2.2 Forward and inverse problems
Let $\mathbf{x}=(x,y,z)\in\mathbb{R}^{3}$. Let $b,d,\xi>0$ be three
numbers. It is convenient for our numerical studies (section 6) to define
from the beginning the domain of interest $\Omega$ and the backscattering
part $\Gamma$ of its boundary as
$$\Omega=\left\{\left(x,y,z\right):\left|x\right|,\left|y\right|<b,\ z\in\left(-\xi,d\right)\right\},\quad\Gamma=\left\{\left(x,y,z\right):\left|x\right|,\left|y\right|<b,\ z=-\xi\right\}.$$
(2.1)
Let the function $c(\mathbf{x})$ be the spatially distributed dielectric
constant and $k$ be the wavenumber. We consider the following forward
problem for the Helmholtz equation:
$$\Delta u+k^{2}\,c(\mathbf{x})\,u=0,\quad\mathbf{x}\in\mathbb{R}^{3},$$
(2.2)
$$u\left(\mathbf{x},k\right)=u_{s}\left(\mathbf{x},k\right)+u_{i}\left(\mathbf{x},k\right),$$
(2.3)
where $u(\mathbf{x},k)$ is the total wave, $u_{s}(\mathbf{x},k)$ is the
scattered wave, and $u_{i}(\mathbf{x},k)$ is the incident plane wave
propagating along the positive direction of the $z-$axis,
$$u_{i}(\mathbf{x},k)=e^{ikz}.$$
(2.4)
The scattered wave $u_{s}(\mathbf{x},k)$ satisfies the Sommerfeld radiation
condition:
$$\lim_{r\rightarrow\infty}r\left(\frac{\partial u_{s}}{\partial r}-iku_{s}\right)=0,\quad r=\left|\mathbf{x}\right|.$$
(2.5)
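As a standard illustration of (2.5) (not part of the problem data), the outgoing spherical wave $u_{s}=e^{ikr}/r$ satisfies the Sommerfeld condition, since

$$r\left(\frac{\partial}{\partial r}\frac{e^{ikr}}{r}-ik\frac{e^{ikr}}{r}\right)=r\left(\frac{ike^{ikr}}{r}-\frac{e^{ikr}}{r^{2}}-\frac{ike^{ikr}}{r}\right)=-\frac{e^{ikr}}{r}\rightarrow 0,\quad r\rightarrow\infty,$$

whereas for the incoming wave $e^{-ikr}/r$ the same expression equals $-2ike^{-ikr}-e^{-ikr}/r$, which does not tend to zero. Thus (2.5) selects the outgoing solutions.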
Also, the function $c(\mathbf{x})$ satisfies the following conditions:
$$c(\mathbf{x})=1+\beta\left(\mathbf{x}\right),\quad\beta\left(\mathbf{x}\right)\geq 0,\ \mathbf{x}\in\mathbb{R}^{3},\quad\text{and}\quad c(\mathbf{x})=1,\ \mathbf{x}\notin\overline{\Omega}.$$
(2.6)
The assumption in (2.6) that $c(\mathbf{x})=1$ in $\mathbb{R}^{3}\setminus\Omega$ means that we have vacuum outside of the domain $\Omega.$ Finally, we assume that $c(\mathbf{x})\in C^{15}(\mathbb{R}^{3})$.
This smoothness condition was imposed to derive the asymptotic behavior of
the solution of the Helmholtz equation (2.2) at $k\rightarrow\infty$ [45]. We also note that extra smoothness
conditions are usually not of a significant concern when a CIP is
considered, see, e.g. theorem 4.1 in [46]. In particular, this
smoothness condition implies that the function $u\left(\mathbf{x},k\right)\in C^{16+\gamma}\left(\overline{G}\right),\ \forall\gamma\in\left(0,1\right),\ \forall k>0,$ where $C^{16+\gamma}\left(\overline{G}\right)$
is the Hölder space and $G\subset\mathbb{R}^{3}$ is an arbitrary
bounded domain [47]. Also, it follows from lemma 3.3 of [4] that the derivative $\partial_{k}u\left(\mathbf{x},k\right)$ exists for all $\mathbf{x}\in\mathbb{R}^{3},k>0$ and satisfies
the same smoothness condition as the function $u\left(\mathbf{x},k\right).$
Coefficient Inverse Scattering Problem (CISP). Let the domain
$\Omega$ and the backscattering part $\Gamma\subset\partial\Omega$ of
its boundary be as in (2.1). Let the wavenumber $k\in[\underline{k},\overline{k}]$, where $[\underline{k},\overline{k}]\subset\left(0,\infty\right)$ is an interval of wavenumbers. Determine
the function $c(\mathbf{x}),\,\mathbf{x}\in\Omega$, assuming that the
following function $g_{0}(\mathbf{x},k)$ is given:
$$u(\mathbf{x},k)=g_{0}(\mathbf{x},k),\quad\mathbf{x}\in\Gamma,\,k\in[\underline{k},\overline{k}].$$
(2.7)
In addition to the data (2.7) we can obtain the boundary
conditions for the derivative of the function $u(\mathbf{x},k)$ in the $z-$direction using the data propagation procedure (section 6.2),
$$u_{z}(\mathbf{x},k)=g_{1}(\mathbf{x},k),\quad\mathbf{x}\in\Gamma,\,k\in[\underline{k},\overline{k}].$$
(2.8)
In addition, we complement Dirichlet (2.7) and Neumann (2.8) boundary conditions on $\Gamma$ with the heuristic Dirichlet
boundary condition at the rest of the boundary $\partial\Omega$ as:
$$u(\mathbf{x},k)=e^{ikz},\quad\mathbf{x}\in\partial\Omega\setminus\Gamma,\,k\in[\underline{k},\overline{k}].$$
(2.9)
This boundary condition coincides with the one for the uniform medium with $c\left(\mathbf{x}\right)\equiv 1.$ To justify (2.9), we recall that,
using the tail functions method, it was demonstrated in sections 7.6 and 7.7
of [4] that (2.9) does not affect much the
reconstruction accuracy as compared with the correct Dirichlet boundary
condition. Besides, (2.9) has always been used in works [4, 5, 6, 7] with experimental data, where accurate results were
obtained by the tail functions GCM.
The uniqueness of the solution of this CISP is a long-standing open problem. In fact, uniqueness of a similar coefficient inverse problem can currently be proven only in the case when the right-hand side of equation (2.2) is a function that does not vanish in $\overline{\Omega}.$ This can be done by the method of [25, 12, 26]. Hence, we assume below the uniqueness of our CISP.
2.3 Travel time
The Riemannian metric generated by the function $c(\mathbf{x})$ is:
$$d\tau(\mathbf{x})=\sqrt{c(\mathbf{x})}|d\mathbf{x}|,\quad|d\mathbf{x}|=\sqrt{(%
dx)^{2}+(dy)^{2}+(dz)^{2}}.$$
Fix the number $a>0.$ Consider the plane $P_{a}=\{(x,y,-a):x,y\in\mathbb{R}\}.$ We assume that $\Omega\subset\left\{z>-a\right\}$ and impose
everywhere below the following condition on the function $c(\mathbf{x})$:
Regularity Assumption. For any point $\mathbf{x}\in\mathbb{R}^{3}$ there exists a unique geodesic line $\Gamma(\mathbf{x},a)$, with respect to the metric $d\tau$, connecting $\mathbf{x}$ with the plane $P_{a}$ and perpendicular to $P_{a}$.
A sufficient condition for the regularity of geodesic lines is [48]:
$$\sum_{i,j=1}^{3}\frac{\partial^{2}c\left(\mathbf{x}\right)}{\partial x_{i}%
\partial x_{j}}\xi_{i}\xi_{j}\geq 0,\forall\mathbf{x}\in\overline{\Omega},%
\forall\mathbf{\xi}\in\mathbb{R}^{3}.$$
We introduce the travel time $\tau(\mathbf{x})$ from the plane $P_{a}$ to
the point $\mathbf{x}$ as [45]
$$\tau(\mathbf{x})=\int_{\Gamma(\mathbf{x},a)}\sqrt{c\left(\mathbf{\xi}\right)}d\sigma.$$
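For intuition, the travel time integral can be approximated by a simple quadrature. The sketch below is an illustration only, not part of the method: it assumes the geodesic from the plane $P_{a}$ to the point is the vertical segment, which is exact for a constant $c\left(\mathbf{x}\right)$; the function `c_fn` and all parameter values are hypothetical.

```python
import numpy as np

def travel_time_vertical(c_fn, x, y, z, a, n=1000):
    """Approximate tau(x) = int_{Gamma(x,a)} sqrt(c) dsigma by the trapezoid
    rule, assuming (illustration only) the geodesic from the plane
    P_a = {z = -a} to the point (x, y, z) is the vertical segment;
    this is exact when c is constant."""
    zs = np.linspace(-a, z, n + 1)        # nodes along the vertical segment
    f = np.sqrt(c_fn(x, y, zs))           # sqrt(c) along the path
    dz = zs[1] - zs[0]
    return dz * (f.sum() - 0.5 * (f[0] + f[-1]))

# Background medium c = 1: tau equals the distance from the plane z = -a.
tau = travel_time_vertical(lambda x, y, z: np.ones_like(z), 0.0, 0.0, 0.5, a=1.0)
print(tau)  # ~1.5
```

For $c\equiv 1$ the result reduces to the Euclidean distance from the point to the plane $P_{a}$, as expected.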
3 The Weighted Tikhonov Functionals
3.1 The asymptotic behavior
It was proven in [45] that the following asymptotic
behavior of the function $u(\mathbf{x},k)$ is valid:
$$u(\mathbf{x},k)=A(\mathbf{x})e^{ik\tau(\mathbf{x})}\left[1+s\left(\mathbf{x},k%
\right)\right],\text{ }\mathbf{x}\in\overline{\Omega},k\rightarrow\infty,$$
(3.1)
where the function $s\left(\mathbf{x},k\right)$ is such that
$$s\left(\mathbf{x,}k\right)=O\left(\frac{1}{k}\right),\partial_{k}s\left(%
\mathbf{x,}k\right)=O\left(\frac{1}{k}\right),\text{ }\mathbf{x}\in\overline{%
\Omega},k\rightarrow\infty.$$
(3.2)
Here the function $A(\mathbf{x})>0$, and $\tau(\mathbf{x})$ is the length of the geodesic line $\Gamma(\mathbf{x},a)$ in the Riemannian metric generated by the function $c(\mathbf{x})$, i.e., the travel time of section 2.3. Denote
$$w(\mathbf{x},k)=\frac{u(\mathbf{x},k)}{u_{i}(\mathbf{x},k)}.$$
(3.3)
Using (3.1), (3.2) and (3.3), we obtain for $\mathbf{x}\in\overline{\Omega},k\rightarrow\infty$ that
$$w(\mathbf{x},k)=A(\mathbf{x})e^{ik(\tau(\mathbf{x})-z)}\left[1+s\left(\mathbf{%
x},k\right)\right].$$
(3.4)
Using (3.1) and (3.4), we uniquely
define the function $\log w(\mathbf{x},k)$ for $\mathbf{x}\in\Omega$, $k\in[\underline{k},\overline{k}]$ for sufficiently large values of $\underline{k}$ as
$$\log w(\mathbf{x},k)=\ln A(\mathbf{x})+ik(\tau(\mathbf{x})-z)+\mathop{%
\displaystyle\sum}_{n=1}^{\infty}\frac{\left(-1\right)^{n-1}}{n}\left(s(%
\mathbf{x},k)\right)^{n}.$$
(3.5)
Obviously, for the function $\log w(\mathbf{x},k)$ so defined, $\exp\left[\log w(\mathbf{x},k)\right]$ equals the right-hand side of (3.4). Thus, we assume below that the number $\underline{k}$ is sufficiently large.
3.2 The integro-differential equation
It follows from (2.2), (2.4), (2.6) and (3.3) that the function $w\left(\mathbf{x},k\right)$ satisfies the
following equation in the domain $\Omega$
$$\Delta w+k^{2}\beta w+2ikw_{z}=0.$$
(3.6)
For $\mathbf{x}\in\Omega,\,k\in[\underline{k},\overline{k}]$ we
define the function $v(\mathbf{x},k),$
$$v(\mathbf{x},k)=\frac{\log w(\mathbf{x},k)}{k^{2}}.$$
(3.7)
Then
$$\Delta v+k^{2}\left(\nabla v\right)^{2}+2ikv_{z}+\beta(\mathbf{x})=0.$$
(3.8)
Let $q(\mathbf{x},k)$ be the derivative of the function $v$ with respect to $k,$
$$q(\mathbf{x},k)=\partial_{k}v(\mathbf{x},k).$$
(3.9)
Then
$$v(\mathbf{x},k)=-\int_{k}^{\overline{k}}q\left(\mathbf{x},\kappa\right)d\kappa%
+V(\mathbf{x}).$$
(3.10)
We call $V(\mathbf{x})$ the tail function:
$$V(\mathbf{x})=v\left(\mathbf{x},\overline{k}\right).$$
(3.11)
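As a quick sanity check of the identity (3.10), one can verify it numerically with the large-$k$ model used later in section 3.3: $v=p/k$, $q=\partial_{k}v=-p/k^{2}$, $V=v\left(\overline{k}\right)=p/\overline{k}$. The values of $p$, $k$, $\overline{k}$ below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of (3.10): v(k) = -int_k^{kbar} q dkappa + V, with the
# large-k model v = p/k, q = dv/dk = -p/k**2, V = v(kbar) = p/kbar.
p, k, kbar = 2.0, 5.0, 10.0
kappa = np.linspace(k, kbar, 100001)
q = -p / kappa**2
dk = kappa[1] - kappa[0]
integral = dk * (q.sum() - 0.5 * (q[0] + q[-1]))  # trapezoid rule
v_from_q = -integral + p / kbar                   # right-hand side of (3.10)
print(v_from_q, p / k)  # both ~0.4
```

Indeed, $-\int_{k}^{\overline{k}}\left(-p/\kappa^{2}\right)d\kappa+p/\overline{k}=p\left(1/k-1/\overline{k}\right)+p/\overline{k}=p/k=v\left(\mathbf{x},k\right)$.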
To eliminate the function $\beta(\mathbf{x})$ from equation (3.8), we differentiate (3.8) with respect to $k,$
$$\Delta q+2k\nabla v\cdot\left(k\nabla q+\nabla v\right)+2i\left(kq_{z}+v_{z}%
\right)=0.$$
(3.12)
Substituting (3.10) into (3.12) leads to the following integro-differential equation:
$$\begin{gathered}\displaystyle L(q)=\Delta q+2k\left(\nabla V-\int_{k}^{%
\overline{k}}\nabla q(\mathbf{x},\kappa)d\kappa\right)\cdot\left(k\nabla q+%
\nabla V-\int_{k}^{\overline{k}}\nabla q\left(\mathbf{x},\kappa\right)d\kappa%
\right)\\
\displaystyle+2i\left(kq_{z}+V_{z}-\int_{k}^{\overline{k}}q_{z}\left(\mathbf{x%
},\kappa\right)d\kappa\right)=0.\end{gathered}$$
(3.13)
Finally, we complement this equation with the overdetermined boundary
conditions:
$$\begin{gathered}\displaystyle q(\mathbf{x},k)=\phi_{0}(\mathbf{x},k),\quad q_{%
z}(\mathbf{x},k)=\phi_{1}(\mathbf{x},k),\quad\mathbf{x}\in\Gamma,\,k\in[%
\underline{k},\overline{k}],\\
\displaystyle q(\mathbf{x},k)=0,\quad\mathbf{x}\in\partial\Omega\setminus%
\Gamma,\,k\in[\underline{k},\overline{k}],\end{gathered}$$
(3.14)
where the functions $\phi_{0}$ and $\phi_{1}$ are calculated from the
functions $g_{0}$ and $g_{1}$ in (2.7), (2.8). The third
boundary condition (3.14) follows from (2.4), (2.9), (3.3), (3.7) and (3.9).
Note that in (3.13) both functions $q(\mathbf{x},k)$ and $V(\mathbf{x})$ are unknown. Hence, we approximate the function $V(\mathbf{x})$
first. Next, we solve the problem (3.13), (3.14)
for the function $q(\mathbf{x},k)$.
Remark 3.1. Suppose that certain approximations for the functions $q(\mathbf{x},k)$ and $V(\mathbf{x})$ are found. Then an approximation for the unknown coefficient $c\left(\mathbf{x}\right)$ can be found via backward calculations: first, approximate the function $v\left(\mathbf{x},k\right)$ via (3.10), and then approximate the function $\beta\left(\mathbf{x}\right)$ using equation (3.8) for a certain value of $k\in\left[\underline{k},\overline{k}\right]$. In our computations we use $k=\underline{k}$. Next, one should use (2.6). Therefore, we focus below on approximating the functions $q(\mathbf{x},k)$ and $V(\mathbf{x}).$
3.3 Approximation of the tail function
The method of this paper to approximate the tail function is different from
the method explored before in [1]. Also, unlike the
tail functions method, we do not update tails here.
It follows from (3.5) and (3.11) that there exists a function $p(\mathbf{x})$ such that
$$v\left(\mathbf{x},k\right)=\frac{p\left(\mathbf{x}\right)}{k}+O\left(\frac{1}{%
k^{2}}\right),\quad q\left(\mathbf{x},k\right)=-\frac{p\left(\mathbf{x}\right)%
}{k^{2}}+O\left(\frac{1}{k^{3}}\right),\quad k\rightarrow\infty,\,\mathbf{x}%
\in\Omega.$$
(3.15)
Since the number $\overline{k}$ is sufficiently large, we drop terms $O\left(1/\overline{k}^{2}\right)$ and $O\left(1/\overline{k}^{3}\right)$
in (3.15). Next, we approximately set
$$v\left(\mathbf{x},k\right)=\frac{p\left(\mathbf{x}\right)}{k},\quad q\left(%
\mathbf{x},k\right)=-\frac{p\left(\mathbf{x}\right)}{k^{2}},\quad k\geq%
\overline{k},\,\mathbf{x}\in\Omega.$$
(3.16)
Substituting (3.16) in (3.13) and letting $k=\overline{k}$, we obtain
$$\Delta V(\mathbf{x})=0,\quad\mathbf{x}\in\Omega.$$
(3.17)
This equation is supplemented by the following boundary conditions:
$$V(\mathbf{x})=\psi_{0}(\mathbf{x}),\quad V_{z}(\mathbf{x})=\psi_{1}(\mathbf{x}%
),\quad\mathbf{x}\in\Gamma,\quad V(\mathbf{x})=0,\quad\mathbf{x}\in\partial%
\Omega\setminus\Gamma,$$
(3.18)
where the functions $\psi_{0}$ and $\psi_{1}$ can be computed using (2.7) and (2.8). The boundary conditions (3.18) are overdetermined. Due to the approximate nature of (3.16), we have observed that the obvious approach of finding the function $V(\mathbf{x})$ by dropping the second boundary condition in (3.18) and solving the resulting Dirichlet boundary value problem for the Laplace equation (3.17) does not provide satisfactory results. The same observation was made in [1] for the 1D case. Thus, we use a different approach to approximate the function $V\left(\mathbf{x}\right)$.
Let the number $s>0$ be such that $s>\xi.$ Let $\lambda,\nu>0$ be two
parameters which we will choose later. We introduce the CWF as
$$\varphi_{\lambda}\left(z\right)=\exp\left[2\lambda\left(z+s\right)^{-\nu}%
\right],$$
(3.19)
see Theorem 4.1 in section 4. Below we fix a number $\nu$ and allow $\lambda$ to change. We find an approximate solution of the problem (3.17), (3.18) by minimizing the following cost functional with the CWF in it:
$$I_{\mu,\alpha}\left(V\right)=\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\int%
_{\Omega}\left|\Delta V\right|^{2}\varphi{}_{\mu}\left(z\right)d\mathbf{x}+%
\alpha\|V\|_{H^{3}(\Omega)}^{2}.$$
(3.20)
We minimize the functional $I_{\mu,\alpha}\left(V\right)$ on the set $S$,
$$V\in S=\{V\in H^{3}(\Omega):\,V(\mathbf{x})=\psi_{0}\left(\mathbf{x}\right),\,%
V_{z}(\mathbf{x})=\psi_{1}\left(\mathbf{x}\right),\mathbf{x}\in\Gamma,V(%
\mathbf{x})=0,\,\mathbf{x}\in\partial\Omega\setminus\Gamma\}.$$
(3.21)
In (3.20), $\alpha>0$ is the regularization parameter. The multiplier $\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)$ is introduced to balance the two terms on the right-hand side of (3.20).
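The effect of the CWF and of the balancing multiplier can be seen in a few lines. The parameter values below are hypothetical, and we assume, consistently with the definition (3.25) of $t_{\nu}$, that $\Omega$ lies in the slab $\left\{-\xi<z<d\right\}$; this is a sketch for intuition only.

```python
import numpy as np

# The CWF (3.19) and the balancing multiplier of (3.20); all values hypothetical.
s, xi, d, nu, lam = 2.0, 1.0, 1.0, 3.0, 4.0   # requires s > xi, cf. section 3.3

def cwf(z):
    return np.exp(2.0 * lam * (z + s) ** (-nu))

scale = np.exp(-2.0 * lam * (s + d) ** (-nu))  # balancing multiplier of (3.20)
t_nu = (s - xi) ** (-nu) - (s + d) ** (-nu)    # the number t_nu of (3.25)

# The scaled weight scale*cwf(z) is decreasing in z and ranges from
# exp(2*lam*t_nu) at z = -xi down to exactly 1 at z = d.
top, bottom = scale * cwf(-xi), scale * cwf(d)
print(top, bottom)
```

In particular, the scaled weight never falls below 1 on the slab, which is the inequality used at the end of the proof of Theorem 4.2.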
Remark 3.2. Since the Laplace operator is linear, one can also find an approximate solution of the problem (3.17), (3.18) by the regular quasi-reversibility method, setting $\mu=0$ in (3.20) [49]. However, we have observed that better computational accuracy is achieved in the presence of the CWF. This observation agrees with that of [23], where it was noticed numerically that the presence of the CWF in an analog of the functional (3.20) for the 1D heat equation provides better solution accuracy for the quasi-reversibility method.
We now follow the classical Tikhonov regularization concept [50]. By this concept, we should assume that there exists an exact solution $V_{\ast}\left(\mathbf{x}\right)$ of the problem (3.17), (3.18) with the noiseless data $\psi_{0\ast}(\mathbf{x}),\psi_{1\ast}(\mathbf{x}).$ Below the subscript “$\ast$” refers only to the exact solution. In fact, however, the data $\psi_{0}(\mathbf{x}),\psi_{1}(\mathbf{x})$ contain noise. Let $\delta\in\left(0,1\right)$ be the level of noise in the data $\psi_{0}(\mathbf{x}),\psi_{1}(\mathbf{x})$. Following the same concept, we assume that the number $\delta$ is sufficiently small. Assume that there exist functions $Q\left(\mathbf{x}\right),Q_{\ast}\left(\mathbf{x}\right)\in H^{3}\left(\Omega\right)$ such that (see (3.21))
$$Q\left(\mathbf{x}\right)=\psi_{0}(\mathbf{x}),\quad\partial_{z}Q(\mathbf{x})=%
\psi_{1}(\mathbf{x}),\quad\mathbf{x}\in\Gamma;\quad Q(\mathbf{x})=0,\quad%
\mathbf{x}\in\partial\Omega\setminus\Gamma,$$
(3.22)
$$Q_{\ast}\left(\mathbf{x}\right)=\psi_{0\ast}(\mathbf{x}),\quad\partial_{z}Q_{%
\ast}(\mathbf{x})=\psi_{1\ast}(\mathbf{x}),\quad\mathbf{x}\in\Gamma;\quad Q_{%
\ast}(\mathbf{x})=0,\quad\mathbf{x}\in\partial\Omega\setminus\Gamma,$$
(3.23)
$$\left\|Q-Q_{\ast}\right\|_{H^{3}\left(\Omega\right)}<\delta.$$
(3.24)
Introduce the number $t_{\nu},$
$$t_{\nu}=\left(s-\xi\right)^{-\nu}-\left(s+d\right)^{-\nu}>0.$$
(3.25)
Let
$$W\left(\mathbf{x}\right)=V\left(\mathbf{x}\right)-Q\left(\mathbf{x}\right).$$
(3.26)
Then, by (3.20), (3.21) and (3.26), the functional $I_{\mu,\alpha}$ becomes
$$\widetilde{I}_{\mu,\alpha}\left(W\right)=\exp\left(-2\mu\left(s+d\right)^{-\nu%
}\right)\int_{\Omega}\left|\Delta W+\Delta Q\right|^{2}\varphi_{\mu}\left(z%
\right)d\mathbf{x}+\alpha\|W+Q\|_{H^{3}(\Omega)}^{2},\text{ }W\in H_{0}^{3}%
\left(\Omega\right).$$
(3.27)
Theorem 4.2 of section 4 claims that for each $\alpha>0$ there exists a unique minimizer $W_{\mu,\nu,\alpha}\in H^{3}\left(\Omega\right)$ of the functional (3.27), which is called the “regularized solution”. Using (3.26), denote $V_{\mu,\nu,\alpha}=W_{\mu,\nu,\alpha}+Q.$ It is stated in Theorem 4.2 that one can choose a sufficiently large number $\nu_{0}=\nu_{0}\left(\Omega,s\right)$ depending only on $\Omega$ and $s$ such that for any fixed value of the parameter $\nu\geq\nu_{0}$ the choices
$$\alpha=\alpha\left(\delta\right)=\delta,\mu=\ln\left(\delta^{-1/\left(2t_{\nu}%
\right)}\right)$$
(3.28)
ensure that the regularized solutions converge to the exact solution as $\delta\rightarrow 0.$ More precisely, there exists a constant $C=C\left(\Omega\right)>0$ such that
$$\left\|V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)}-V_{\ast}\right%
\|_{H^{2}\left(\Omega\right)}\leq C\left(1+\|V_{\ast}\|_{H^{3}(\Omega)}\right)%
\sqrt{\delta}\sqrt{\ln\left(\delta^{-1/\left(2t_{\nu}\right)}\right)}.$$
(3.29)
Here and below $C=C\left(\Omega\right)>0$ denotes different positive
constants depending only on the domain $\Omega.$
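The specific form of $\mu\left(\delta\right)$ in (3.28) is what makes the estimate (3.29) work: it balances the growth of the Carleman weight against the noise level, since $\exp\left(2\mu t_{\nu}\right)\delta^{2}=\delta$. A one-line numerical check of this identity, with an arbitrary illustrative value of $t_{\nu}$:

```python
import math

# Check that the choice mu = ln(delta^(-1/(2*t_nu))) in (3.28) gives
# exp(2*mu*t_nu) * delta**2 = delta, the identity used in section 5.
t_nu = 0.3  # any t_nu > 0, cf. (3.25); illustrative value
errs = []
for delta in (1e-2, 1e-4, 1e-6):
    mu = math.log(delta ** (-1.0 / (2.0 * t_nu)))
    errs.append(abs(math.exp(2.0 * mu * t_nu) * delta**2 - delta) / delta)
print(max(errs))  # ~0: the identity holds up to rounding
```

Indeed, $2\mu t_{\nu}=-\ln\delta$, so $\exp\left(2\mu t_{\nu}\right)=\delta^{-1}$ exactly.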
3.4 Associated spaces
Below, for any complex number $z\in\mathbb{C}$ we denote by $\overline{z}$ its complex conjugate. It is convenient for us to consider any complex valued function $U=\mathop{\rm Re}U+i\mathop{\rm Im}U=U_{1}+iU_{2}$ as the 2D vector function $U=\left(U_{1},U_{2}\right).$ Thus, any Banach space we use below for a complex valued function is actually the space of such 2D real valued vector functions. Norms in these spaces of 2D vector functions are defined in the standard way, as are scalar products in the case of Hilbert spaces. For brevity, we do not distinguish below between complex valued functions and the corresponding 2D vector functions; it is always clear from the context which is meant.
We define the Hilbert space $H_{m}$ of complex valued functions $f\left(\mathbf{x},k\right)$ as
$$H_{m}=\left\{f\left(\mathbf{x},k\right):\left\|f\right\|_{H_{m}}=\left[\int_{%
\underline{k}}^{\overline{k}}\left\|f\left(\mathbf{x},k\right)\right\|_{H^{m}%
\left(\Omega\right)}^{2}dk\right]^{1/2}<\infty\right\},\text{ }m=1,2,3,4.$$
(3.30)
Denote by $\left[\cdot,\cdot\right]$ the scalar product in the space $H_{3}.$ The subspace $H_{m}^{0}$ of the space $H_{m}$ is defined as
$$H_{m}^{0}=\left\{f\in H_{m}:f\left(\mathbf{x},k\right)\mid_{\partial\Omega}=0,%
f_{z}\left(\mathbf{x},k\right)\mid_{\Gamma}=0,\forall k\in\left[\underline{k},%
\overline{k}\right]\right\}.$$
Also, in the case of functions independent of $k$,
$$H_{0}^{m}\left(\Omega\right)=\left\{f\left(\mathbf{x}\right)\in H^{m}\left(%
\Omega\right):f\left(\mathbf{x}\right)\mid_{\partial\Omega}=0,f_{z}\left(%
\mathbf{x}\right)\mid_{\Gamma}=0\right\}.$$
Similarly, for $r=0,1,2$ we define
$$C_{r}=\left\{f\left(\mathbf{x},k\right):\left\|f\right\|_{C_{r}}=\max_{k\in\left[\underline{k},\overline{k}\right]}\left\|f\left(\mathbf{x},k\right)\right\|_{C^{r}\left(\overline{\Omega}\right)}<\infty\right\},$$
where $C^{0}\left(\overline{\Omega}\right)=C\left(\overline{\Omega}\right).$ The embedding theorem implies that:
$$H_{3+r}\subset C_{1+r},\left\|f\right\|_{C_{1+r}}\leq C\left\|f\right\|_{H_{3+%
r}},\text{ }\forall f\in H_{3+r},r=0,1\text{,}$$
(3.31)
$$\left\|\widetilde{f}\right\|_{C^{1}\left(\overline{\Omega}\right)}\leq C\left%
\|\widetilde{f}\right\|_{H^{3}\left(\Omega\right)},\text{ }\forall\widetilde{f%
}\in H^{3}\left(\Omega\right).$$
(3.32)
3.5 The weighted Tikhonov-like functional
Suppose that there exists a function $F\left(\mathbf{x},k\right)\in H_{4}$
such that (see (3.14)):
$$F\left(\mathbf{x},k\right)\mid_{\Gamma}=\phi_{0}\left(\mathbf{x},k\right),%
\text{ }F_{z}\left(\mathbf{x},k\right)\mid_{\Gamma}=\phi_{1}\left(\mathbf{x},k%
\right),\text{ }F\left(\mathbf{x},k\right)\mid_{\partial\Omega\diagdown\Gamma}%
=0.$$
(3.33)
Also, assume that there exists an exact solution $c_{\ast}\left(\mathbf{x}\right)$ of our CISP satisfying the above conditions imposed on the coefficient $c\left(\mathbf{x}\right)$ and generating the noiseless boundary data $\phi_{0\ast}$ and $\phi_{1\ast}$ in (3.14). Let the function $F_{\ast}\left(\mathbf{x},k\right)\in H_{4}$ satisfy the boundary conditions (3.33), in which the functions $\phi_{0}$ and $\phi_{1}$ are replaced with the functions $\phi_{0\ast}$ and $\phi_{1\ast}$, respectively. We assume that
$$\left\|F-F_{\ast}\right\|_{H_{4}}<\delta.$$
(3.34)
Let $q_{\ast}\in H_{3}$ be the function $q$ generated by the exact
coefficient $c_{\ast}\left(\mathbf{x}\right).$ Introduce functions $p,p_{\ast}\in H_{3}^{0}$ as
$$p\left(\mathbf{x},k\right)=q\left(\mathbf{x},k\right)-F\left(\mathbf{x},k%
\right),\text{ }p_{\ast}\left(\mathbf{x},k\right)=q_{\ast}\left(\mathbf{x},k%
\right)-F_{\ast}\left(\mathbf{x},k\right).$$
(3.35)
It follows from the discussion of smoothness in section 2.2, as well as from (3.7), (3.9) and (3.35), that indeed $p,p_{\ast}\in H_{3}^{0}.$ Let $R>0$ be an arbitrary number. Consider the ball $B\left(R\right)\subset H_{3}^{0}$ of radius $R$,
$$B\left(R\right)=\left\{f\in H_{3}^{0}:\left\|f\right\|_{H_{3}}<R\right\}.$$
(3.36)
Based on the integro-differential equation (3.13), its boundary conditions (3.14), as well as (3.33) and (3.35), we construct our weighted Tikhonov-like functional with the CWF (3.19) as
$$J_{\lambda,\rho}\left(p\right)=\exp\left(-2\lambda\left(s+d\right)^{-\nu}%
\right)\int_{\underline{k}}^{\overline{k}}\int_{\Omega}\left|L\left(p+F\right)%
\left(\mathbf{x},\kappa\right)\right|^{2}\varphi_{\lambda}^{2}\left(z\right)d%
\mathbf{x}d\kappa+\rho\left\|p\right\|_{H_{3}}^{2},$$
(3.37)
where $\rho>0$ is the regularization parameter. Similarly to (3.20), the multiplier $\exp\left(-2\lambda\left(s+d\right)^{-\nu}\right)$ is introduced to balance the two terms on the right-hand side of (3.37). The minimizer $V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)}$ of the functional (3.20) is chosen as the tail function in $J_{\lambda,\rho}\left(p\right)$. We consider the following minimization problem:
Minimization Problem. Minimize the functional $J_{\lambda,\rho}\left(p\right)$ on the set $\overline{B\left(R\right)}$.
4 Theorems
In this section we formulate theorems about numerical procedures considered
in section 3. We start from the Carleman estimate with the CWF (3.19).
Theorem 4.1 (Carleman estimate)
Let $\Omega\subset\mathbb{R}^{3}$ be the above domain (2.1). Temporarily denote $\mathbf{x}=\left(x,y,z\right)=\left(x_{1},x_{2},x_{3}\right)$. There exist numbers $C=C\left(\Omega\right)>0$, $\nu_{0}=\nu_{0}\left(\Omega,s,d\right)\geq 1$ and $\lambda_{0}=\lambda_{0}\left(\Omega,s,d\right)\geq 1$ depending only on the listed parameters such that for any real valued function $u\in H_{0}^{2}\left(\Omega\right)$ the following Carleman estimate holds with the CWF $\varphi_{\lambda}\left(z\right)$ in (3.19), for any fixed number $\nu\geq\nu_{0}$ and for all $\lambda\geq\lambda_{0}$:
$$\int_{\Omega}\left(\Delta u\right)^{2}\varphi_{\lambda}\left(z\right)d\mathbf{%
x}\geq\frac{C}{\lambda}\sum_{i,j=1}^{3}\int_{\Omega}\left(u_{x_{i}x_{j}}\right%
)^{2}\varphi_{\lambda}\left(z\right)d\mathbf{x}+C\lambda\int_{\Omega}\left(%
\nabla u\right)^{2}\varphi_{\lambda}\left(z\right)d\mathbf{x}+C\lambda^{3}\int%
_{\Omega}u^{2}\varphi_{\lambda}\left(z\right)d\mathbf{x}.$$
(4.1)
Remark 4.1. A close analog of Theorem 4.1 is formulated as lemma
4.1 of [15] and is proven in the proof of lemma 6.5.1 of
[3]. Hence, we omit the proof of Theorem 4.1.
The next theorem is about the problem (3.20), (3.21).
Theorem 4.2
Assume that there exists a function $Q\in H^{3}\left(\Omega\right)$ satisfying conditions (3.22), (3.24). Then for each set of parameters $\mu,\nu,\alpha>0$ there exists a unique minimizer $W_{\mu,\nu,\alpha}\in H^{3}\left(\Omega\right)$ of the functional (3.27). Let $V_{\mu,\nu,\alpha}=W_{\mu,\nu,\alpha}+Q$ (see (3.26)). Suppose that there
exists an exact solution $V_{\ast}\in H^{3}\left(\Omega\right)$
of the problem (3.17), (3.18) with the noiseless
boundary data $\psi_{0\ast}(\mathbf{x}),\psi_{1\ast}(\mathbf{x})$. Also, assume that there exists a function $Q_{\ast}\in H^{3}\left(\Omega\right)$ satisfying conditions (3.23) and such that
$$\left\|Q_{\ast}\right\|_{H^{3}\left(\Omega\right)}\leq C\left\|V_{\ast}\right\|_{H^{3}\left(\Omega\right)}.$$
(4.2)
Let inequality (3.24) hold, where $\delta\in\left(0,1\right)$ is the level of noise in the data. Let $\nu_{0}\left(\Omega,s\right)$ and $\lambda_{0}\left(\Omega,s\right)$ be the numbers of Theorem 4.1. Fix a number $\nu\geq\nu_{0}\left(\Omega,s\right)$ and let the parameter $\nu$ be independent of $\delta$. Choose a number $\delta_{0}\in\left(0,e^{-2\lambda_{0}t_{\nu}}\right)$, where $\lambda_{0}$ is defined in Theorem 4.1 and the number $t_{\nu}$ is defined in (3.25). For any $\delta\in\left(0,\delta_{0}\right)$ let the choices (3.28) hold. Then the convergence estimate (3.29) of the functions $V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)}$ to the exact solution $V_{\ast}$ holds as $\delta\rightarrow 0$. In addition, the function $V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)}\in C^{1}\left(\overline{\Omega}\right)$ and
$$C\left\|V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)}\right\|_{C^{1%
}\left(\overline{\Omega}\right)}\leq\left\|V_{\mu\left(\delta\right),\nu,%
\alpha\left(\delta\right)}\right\|_{H^{3}\left(\Omega\right)}\leq C\left(1+%
\left\|V_{\ast}\right\|_{H^{3}\left(\Omega\right)}\right).$$
(4.3)
Theorem 4.3 is the central analytical result of this paper.
Theorem 4.3 (Global strict convexity)
Assume that the conditions of Theorem 4.2 hold. Set in (3.13) $V=V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)},$ where the function $V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)}$ is the one of Theorem 4.2. The functional $J_{\lambda,\rho}\left(p\right)$ has the Fréchet derivative $J_{\lambda,\rho}^{\prime}\left(p\right)\in H_{3}$ at any point $p\in H_{3}^{0}.$ Assume that there exist functions $F,F_{\ast}\left(\mathbf{x},k\right)\in H_{4}$ satisfying conditions (3.33), (3.34), where $\delta\in\left(0,1\right).$ Let $\lambda_{0}$ be the number defined in Theorem 4.1. Then
there exists a number $\lambda_{1}=\lambda_{1}\left(\Omega,R,\left\|F_{\ast}\right\|_{H_{4}},\left\|V%
_{\ast}\right\|_{H^{3}\left(\Omega\right)},\underline{k},\overline{k}\right)%
\geq\lambda_{0}\left(\Omega\right)$ and a number $C_{1}=C_{1}\left(\Omega,R,\left\|F_{\ast}\right\|_{H_{4}},\left\|V_{\ast}%
\right\|_{H^{3}\left(\Omega\right)},\underline{k},\overline{k}\right)>0,$
both depending only on listed parameters, such that for any $\lambda\geq\lambda_{1}$ the functional $J_{\lambda,\rho}\left(p\right)$ is strictly convex on $\overline{B\left(R\right)}.$ In
other words, the following estimates are valid for all $p_{1},p_{2}\in\overline{B\left(R\right)}:$
$$J_{\lambda,\rho}\left(p_{2}\right)-J_{\lambda,\rho}\left(p_{1}\right)-J_{%
\lambda,\rho}^{\prime}\left(p_{1}\right)\left(p_{2}-p_{1}\right)\geq\frac{C_{1%
}}{\lambda}\left\|p_{2}-p_{1}\right\|_{H_{2}}^{2}+\rho\left\|p_{2}-p_{1}\right%
\|_{H_{3}}^{2},$$
(4.4)
$$J_{\lambda,\rho}\left(p_{2}\right)-J_{\lambda,\rho}\left(p_{1}\right)-J_{%
\lambda,\rho}^{\prime}\left(p_{1}\right)\left(p_{2}-p_{1}\right)\geq C_{1}%
\left\|p_{2}-p_{1}\right\|_{H_{1}}^{2}+\rho\left\|p_{2}-p_{1}\right\|_{H_{3}}^%
{2}.$$
(4.5)
Remark 4.2. The first term in the right-hand side of (4.5) does not decay as $\lambda$ increases, unlike that of (4.4). Hence, the “convexity property” of the functional $J_{\lambda,\rho}$ is, in this sense, better in terms of the $H_{1}$-norm in (4.5) than in terms of the $H_{2}$-norm in (4.4). On the other hand, the $H_{1}$-norm is weaker than the $H_{2}$-norm in (4.4). Also, to establish the convergence of the reconstructed coefficients $c_{n}\left(\mathbf{x}\right)$, we need the $H_{2}$-norm: see (4.12) and Remark 3.1.
Theorem 4.4
Suppose that the conditions of Theorems 4.2 and
4.3 regarding the tail function $V=V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)}$ and the function $F$ hold.
The Fréchet derivative $J_{\lambda,\rho}^{\prime}$ of the
functional $J_{\lambda,\rho}$ satisfies the Lipschitz continuity
condition in any ball $B\left(R^{\prime}\right)$ as in (3.36) with any $R^{\prime}>0.$ In other words, the following inequality
holds with the constant $M=M\left(\Omega,R^{\prime},\left\|F_{\ast}\right\|_{H_{4}},\left\|V_{\ast}%
\right\|_{H^{3}\left(\Omega\right)},\lambda,\nu,\rho,\underline{k},\overline{k%
}\right)>0$
depending only on listed parameters:
$$\left\|J_{\lambda,\rho}^{\prime}\left(p_{1}\right)-J_{\lambda,\rho}^{\prime}%
\left(p_{2}\right)\right\|_{H_{3}}\leq M\left\|p_{1}-p_{2}\right\|_{H_{3}},%
\text{ }\forall p_{1},p_{2}\in B\left(R^{\prime}\right).$$
Let $P_{\overline{B}}:H_{3}^{0}\rightarrow\overline{B\left(R\right)}$ be the projection operator of the Hilbert space $H_{3}^{0}$ onto $\overline{B\left(R\right)}.$ Let $p_{0}\in B\left(R\right)$ be an arbitrary point of the ball $B\left(R\right)$. Consider the following sequence:
$$p_{n}=P_{\overline{B}}\left(p_{n-1}-\omega J_{\lambda,\rho}^{\prime}\left(p_{n%
-1}\right)\right),\text{ }n=1,2,...,$$
(4.6)
where $\omega\in\left(0,1\right)$ is a certain number.
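To see why strict convexity makes the iterations (4.6) globally convergent on a ball, one can run the same scheme on a finite dimensional surrogate. The sketch below is an illustration only (the actual functional $J_{\lambda,\rho}$ lives in $H_{3}^{0}$): it minimizes a strictly convex quadratic over a closed ball, with the projection realized radially; the matrix and all parameters are hypothetical.

```python
import numpy as np

# Gradient projection p_n = P(p_{n-1} - omega * J'(p_{n-1})), cf. (4.6), for
# the strictly convex quadratic J(p) = 0.5*p.A.p - b.p on the ball ||p|| <= R.
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])   # SPD, so J is strictly convex
b = np.ones(5)
R, omega = 10.0, 0.1                      # omega < 2/lambda_max(A) = 0.4

def project(p):                           # radial projection onto the ball
    nrm = np.linalg.norm(p)
    return p if nrm <= R else p * (R / nrm)

p = project(np.full(5, 5.0))              # arbitrary starting point p_0
for _ in range(2000):                     # linear convergence, cf. (4.8)
    p = project(p - omega * (A @ p - b))  # J'(p) = A p - b

p_star = np.linalg.solve(A, b)            # minimizer, interior to the ball here
print(np.linalg.norm(p - p_star))         # ~0
```

The starting point can be taken anywhere in the ball; this is the finite dimensional counterpart of the global convergence asserted in Theorems 4.5 and 4.6.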
Theorem 4.5
Assume that the conditions of Theorems 4.2 and 4.3 hold. Let $\lambda\geq\lambda_{1},$ where $\lambda_{1}$ is the number of Theorem 4.3. Then there exists a unique minimizer $p_{\min,\lambda}\in\overline{B\left(R\right)}$ of the functional $J_{\lambda,\rho}\left(p\right)$ on the set $\overline{B\left(R\right)}$ and
$$J_{\lambda,\rho}^{\prime}\left(p_{\min,\lambda}\right)\left(y-p_{\min,\lambda}\right)\geq 0,\text{ \ }\forall y\in\overline{B\left(R\right)}.$$
(4.7)
Also, there exists a sufficiently small number $\omega_{0}=\omega_{0}\left(\Omega,R,\left\|F\right\|_{H_{4}},\left\|V_{\ast}%
\right\|_{H^{3}\left(\Omega\right)},\underline{k},\overline{k},\lambda,\delta%
\right)\in\left(0,1\right)$ depending only on
listed parameters such that for any $\omega\in\left(0,\omega_{0}\right)$ the sequence (4.6) converges to the minimizer $p_{\min,\lambda}\in\overline{B\left(R\right)}$ of the functional $J_{\lambda,\rho}\left(p\right)$ on the set $\overline{B\left(R\right)}$,
$$\left\|p_{\min,\lambda}-p_{n}\right\|_{H_{3}}\leq r^{n}\left\|p_{\min,\lambda}%
-p_{0}\right\|_{H_{3}},\text{ }n=1,2,\ldots,$$
(4.8)
where the number $r=r\left(\omega,\Omega,R,\left\|F\right\|_{H_{4}},\left\|V_{\ast}\right\|_{H^{%
3}\left(\Omega\right)},\underline{k},\overline{k},\lambda,\delta\right)\in%
\left(0,1\right)$ depends only on listed parameters.
By (4.8) we estimate the convergence rate of the sequence (4.6)
to the minimizer. The next question is about the convergence of this
sequence to the exact solution $p_{\ast}$ assuming that it exists.
Theorem 4.6
Assume that conditions of Theorems 4.2 and 4.3
hold. Let $\lambda_{1}$ be the number of Theorem 4.3. Choose a
number $\delta_{1}\in\left(0,e^{-2\lambda_{1}t_{\nu}}\right).$ For $\delta\in\left(0,\delta_{1}\right),$ set $\rho=\rho\left(\delta\right)=\sqrt{\delta},\lambda=\lambda\left(\delta\right)=%
\ln\left(\delta^{-1/\left(2t_{\nu}\right)}\right).$ Furthermore,
assume that the exact solution $p_{\ast}$ exists and $p_{\ast}\in B\left(R\right)$. Then there exists a number $C_{2}=C_{2}\left(\Omega,R,\left\|F\right\|_{H_{4}},\left\|V_{\ast}\right\|_{H^%
{3}\left(\Omega\right)},\underline{k},\overline{k}\right)>0$
depending only on listed parameters such that
$$\left\|p_{\ast}-p_{\min,\lambda\left(\delta\right)}\right\|_{H_{2}}\leq C_{2}%
\delta^{1/4}\left[\ln\left(\delta^{-1/\left(2t_{\nu}\right)}\right)\right]^{3/%
4},$$
(4.9)
$$\left\|c_{\ast}-c_{\min,\lambda\left(\delta\right)}\right\|_{L_{2}\left(\Omega%
\right)}\leq C_{2}\delta^{1/4}\left[\ln\left(\delta^{-1/\left(2t_{\nu}\right)}%
\right)\right]^{3/4},$$
(4.10)
where the function $c_{\min,\lambda\left(\delta\right)}\left(\mathbf{x}\right)$ is reconstructed from the function $p_{\min,\lambda\left(\delta\right)}$ using (3.35) and Remark 3.1. In addition, the following convergence estimates hold:
$$\left\|p_{\ast}-p_{n}\right\|_{H_{2}}\leq C_{2}\delta^{1/4}\left[\ln\left(%
\delta^{-1/\left(2t_{\nu}\right)}\right)\right]^{3/4}+r^{n}\left\|p_{\min,%
\lambda\left(\delta\right)}-p_{0}\right\|_{H_{3}},\text{ }n=1,2,...,$$
(4.11)
$$\left\|c_{\ast}-c_{n}\right\|_{L_{2}\left(\Omega\right)}\leq C_{2}\delta^{1/4}%
\left[\ln\left(\delta^{-1/\left(2t_{\nu}\right)}\right)\right]^{3/4}+C_{2}r^{n%
}\left\|p_{\min,\lambda\left(\delta\right)}-p_{0}\right\|_{H_{3}},\text{ }n=1,%
2,...,$$
(4.12)
where $r$ is the number in (4.8) and the function $c_{n}\left(\mathbf{x}\right)$ is reconstructed from the function $p_{n}\left(\mathbf{x},k\right)$ using (3.35) and Remark 3.1.
Remark 4.3. Since $R>0$ is an arbitrary number and $p_{0}$ is an arbitrary point of the ball $B\left(R\right)$, Theorems 4.5 and 4.6 ensure the global convergence of the gradient projection method for our case; see the second paragraph of section 1. We note that if a functional is non-convex, then the convergence of a gradient-like method of its minimization might be guaranteed only if the starting point of iterations is located in a sufficiently small neighborhood of its minimizer.
5 Proofs
In this section we prove the theorems formulated in section 4, except Theorem 4.1 (see Remark 4.1).
5.1 Proof of Theorem 4.2
By (3.26) and (3.27) the vector function $W_{\min}=\left(W_{1,\min},W_{2,\min}\right)\in H_{0}^{3}\left(\Omega\right)$ is a minimizer of
the functional $\widetilde{I}_{\mu,\alpha}\left(W\right)$ if and only if
$$\begin{gathered}\displaystyle\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\int%
_{\Omega}\left(\Delta W_{1,\min}\Delta h_{1}+\Delta W_{2,\min}\Delta h_{2}%
\right)\varphi_{\mu}\left(z\right)d\mathbf{x}+\alpha\left(\left(W_{\min},h%
\right)\right)=\\
\displaystyle-\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\int_{\Omega}\left(%
\Delta Q_{1}\Delta h_{1}+\Delta Q_{2}\Delta h_{2}\right)\varphi_{\mu}\left(z%
\right)d\mathbf{x}-\alpha\left(\left(Q,h\right)\right),\,\forall h=\left(h_{1}%
,h_{2}\right)\in H_{0}^{3}\left(\Omega\right),\end{gathered}$$
(5.1)
where $\left(\left(\cdot,\cdot\right)\right)$ is the scalar product in $H^{3}\left(\Omega\right).$ For any vector function $P=\left(P_{1},P_{2}\right)\in H_{0}^{3}\left(\Omega\right)$ consider the expression in the left-hand side of (5.1) in which the vector $\left(W_{1,\min},W_{2,\min}\right)$ is replaced with $\left(P_{1},P_{2}\right).$ Then this expression defines a new scalar product $\left\{P,h\right\}$ in $H_{0}^{3}\left(\Omega\right),$ and the corresponding norm $\sqrt{\left\{P,P\right\}}$ is equivalent to the norm in $H^{3}\left(\Omega\right).$
Next,
$$\begin{gathered}\displaystyle\left|-\exp\left(-2\mu\left(s+d\right)^{-\nu}%
\right)\int_{\Omega}\left(\Delta Q_{1}\Delta h_{1}+\Delta Q_{2}\Delta h_{2}%
\right)\varphi_{\mu}\left(z\right)d\mathbf{x}-\alpha\left(\left(Q,h\right)%
\right)\right|\leq D\left\|Q\right\|_{H^{3}\left(\Omega\right)}\left\|h\right%
\|_{H^{3}\left(\Omega\right)}\\
\displaystyle\leq D_{1}\sqrt{\left\{Q,Q\right\}}\sqrt{\left\{h,h\right\}},%
\text{ }\forall h=\left(h_{1},h_{2}\right)\in H_{0}^{3}\left(\Omega\right)\end%
{gathered}$$
with certain constants $D,D_{1}$ independent of $Q$ and $h$ but dependent on the parameters $\mu,\nu.$ Hence, the Riesz representation theorem implies that there exists a unique vector function $\widehat{Q}=\widehat{Q}\left(Q\right)\in H_{0}^{3}\left(\Omega\right)$ such that
$$-\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\int_{\Omega}\left(\Delta Q_{1}%
\Delta h_{1}+\Delta Q_{2}\Delta h_{2}\right)\varphi_{\mu}\left(z\right)d%
\mathbf{x}-\alpha\left(\left(Q,h\right)\right)=\left\{\widehat{Q},h\right\},%
\text{ }\forall h=\left(h_{1},h_{2}\right)\in H_{0}^{3}\left(\Omega\right).$$
Hence, by (5.1), $\left\{W_{\min},h\right\}=\left\{\widehat{Q},h\right\}$ for all $h\in H_{0}^{3}\left(\Omega\right)$, so that $W_{\min}=\widehat{Q}.$ Thus, the existence and uniqueness of the minimizer of the functional $\widetilde{I}_{\mu,\alpha}\left(W\right)$ are established, and the same holds for $I_{\mu,\alpha}\left(V\right)$.
We now prove convergence estimate (3.29). Let $W_{\ast}=V_{\ast}-Q_{\ast}\in H_{0}^{3}\left(\Omega\right).$ Denote $\widetilde{W}=W_{\min}-W_{\ast},$ $\widetilde{Q}=Q-Q_{\ast}.$ Since
$$\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\int_{\Omega}\left(\Delta W_{\ast,1}\Delta h_{1}+\Delta W_{\ast,2}\Delta h_{2}\right)\varphi_{\mu}\left(z\right)d\mathbf{x}+\alpha\left(\left(W_{\ast},h\right)\right)$$
(5.2)
$$=-\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\int_{\Omega}\left(\Delta Q_{\ast,1}\Delta h_{1}+\Delta Q_{\ast,2}\Delta h_{2}\right)\varphi_{\mu}\left(z\right)d\mathbf{x}+\alpha\left(\left(W_{\ast},h\right)\right),\text{ }\forall h\in H_{0}^{3}\left(\Omega\right),$$
subtracting (5.2) from (5.1) and setting $h=\widetilde{W},$
we obtain
$$\begin{gathered}\displaystyle\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\int%
_{\Omega}\left(\Delta\widetilde{W}\right)^{2}\varphi_{\mu}\left(z\right)d%
\mathbf{x+}\alpha\left\|\widetilde{W}\right\|_{H^{3}\left(\Omega\right)}^{2}\\
\displaystyle=-\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\int_{\Omega}\left%
(\Delta\widetilde{Q}_{1}\Delta\widetilde{W}_{1}+\Delta\widetilde{Q}_{2}\Delta%
\widetilde{W}_{2}\right)\varphi_{\mu}\left(z\right)d\mathbf{x-}\alpha\left(%
\left(W_{\ast}+Q,\widetilde{W}\right)\right).\end{gathered}$$
Using the Cauchy-Schwarz inequality, taking into account (3.24) and
recalling that $\alpha=\delta$, we obtain
$$\begin{gathered}\displaystyle\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\int%
_{\Omega}\left(\Delta\widetilde{W}\right)^{2}\varphi_{\mu}\left(z\right)d%
\mathbf{x}+\delta\left\|\widetilde{W}\right\|_{H^{3}\left(\Omega\right)}^{2}\\
\displaystyle\leq C\delta\left(1+\left\|V_{\ast}\right\|_{H^{3}\left(\Omega%
\right)}^{2}\right)+C\exp\left(2\mu t_{\nu}\right)\delta^{2}.\end{gathered}$$
(5.3)
Since $\mu=\ln\left(\delta^{-1/\left(2t_{\nu}\right)}\right)$ and $\delta\in\left(0,1\right),$ we have $\exp\left(2\mu t_{\nu}\right)\delta^{2}=\delta$ and
$$C\delta\left(1+\left\|V_{\ast}\right\|_{H^{3}\left(\Omega\right)}^{2}\right)+C%
\exp\left(2\mu t_{\nu}\right)\delta^{2}\leq C\delta\left(1+\left\|V_{\ast}%
\right\|_{H^{3}\left(\Omega\right)}^{2}\right),$$
so (5.3) implies that
$$\left\|\widetilde{W}\right\|_{H^{3}\left(\Omega\right)}\leq C\left(1+\left\|V_%
{\ast}\right\|_{H^{3}\left(\Omega\right)}\right),$$
(5.4)
$$\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\int_{\Omega}\left(\Delta%
\widetilde{W}\right)^{2}\varphi_{\mu}\left(z\right)d\mathbf{x\leq}C\delta\left%
(1+\left\|V_{\ast}\right\|{}_{H^{3}\left(\Omega\right)}^{2}\right).$$
(5.5)
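The choice of $\mu$ made above can be sanity-checked numerically. A minimal sketch, where the values of $\delta$ and $t_{\nu}$ are illustrative placeholders, not those of the paper's experiments:

```python
import math

# Verify exp(2*mu*t_nu) * delta**2 == delta for mu = ln(delta**(-1/(2*t_nu))).
# delta and t_nu below are illustrative sample values only.
delta, t_nu = 1e-4, 0.7
mu = math.log(delta ** (-1.0 / (2.0 * t_nu)))
lhs = math.exp(2.0 * mu * t_nu) * delta ** 2
assert abs(lhs - delta) < 1e-12 * delta  # exp(2*mu*t_nu) equals 1/delta
```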
Since
$$\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\varphi_{\mu}\left(z\right)\geq%
\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\exp\left(2\mu\left(s+d\right)^{-%
\nu}\right)=1,$$
Theorem 4.1 implies that
$$\begin{gathered}\displaystyle\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\int%
_{\Omega}\left(\Delta\widetilde{W}\right)^{2}\varphi_{\mu}\left(z\right)d%
\mathbf{x}\\
\displaystyle\mathbf{\geq}\frac{C}{\mu}\left(\sum_{i,j=1}^{3}\int_{\Omega}%
\widetilde{W}_{x_{i}x_{j}}^{2}d\mathbf{x+}\mu^{2}\int_{\Omega}\left(\left(%
\nabla\widetilde{W}\right)^{2}+\widetilde{W}^{2}\right)d\mathbf{x}\right)\geq%
\frac{C}{\mu}\left\|\widetilde{W}\right\|_{H^{2}\left(\Omega\right)}^{2}.\end{gathered}$$
(5.6)
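The pointwise bound $\exp\left(-2\mu\left(s+d\right)^{-\nu}\right)\varphi_{\mu}\left(z\right)\geq 1$ used above holds because $\left(z+s\right)^{-\nu}\geq\left(s+d\right)^{-\nu}$ for $z\leq d$. A quick numerical check, with illustrative parameter values only (requiring $s>\xi$ so that $z+s>0$ on $\left(-\xi,d\right)$):

```python
import math

# Check exp(-2*mu*(s+d)**(-nu)) * phi_mu(z) >= 1 on (-xi, d), where
# phi_mu(z) = exp(2*mu*(z+s)**(-nu)) is the CWF (3.19).
# All parameter values are illustrative, not those of the paper.
mu, nu, s, xi, d = 3.0, 1.0, 2.0, 0.5, 1.0
vals = []
for i in range(101):
    z = -xi + (d + xi) * i / 100.0
    vals.append(math.exp(-2.0 * mu * (s + d) ** (-nu))
                * math.exp(2.0 * mu * (z + s) ** (-nu)))
assert min(vals) >= 1.0 - 1e-12  # equality is attained at z = d
```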
The right-hand estimate in (4.3) follows from (5.4), (3.24) and (4.2). The left-hand estimate in (4.3) follows from (3.32).
Comparing (5.5) with (5.6) and recalling (3.26) and (4.2), we obtain (3.29). $\square$
5.2 Proof of Theorem 4.3
Recall that we treat any complex valued function $U=\mathop{\rm Re}U+i\mathop{\rm Im}U=U_{1}+iU_{2}$ in two ways: (1) in its original complex
valued form and (2) in the equivalent form of a 2D vector function $\left(U_{1},U_{2}\right)$ (Section 3.4). It is always clear from the context
which form is meant.
Let $p_{1},p_{2}\in\overline{B\left(R\right)}$ be two arbitrary functions.
Denote $h=p_{2}-p_{1}.$ Then $h=\left(h_{1},h_{2}\right)\in H_{0}^{3}\left(\Omega\right).$ In this proof, $C_{1}=C_{1}\left(\Omega,R,\left\|F_{\ast}\right\|_{H_{4}},\left\|V_{\ast}\right\|_{H^{3}\left(\Omega\right)},\underline{k},\overline{k}\right)>0$ denotes
different positive constants. Also, in this proof we denote for brevity $V\left(\mathbf{x}\right)=$ $V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)}\left(\mathbf{x}\right).$ We note that due
to (3.31), (3.32), (3.34), (3.36) and (4.3)
$$\left\|\nabla V\right\|_{C\left(\overline{\Omega}\right)},\left\|F\right\|_{C^%
{2}\left(\overline{\Omega}\right)}\leq C_{1},$$
(5.7)
$$\left\|\nabla h\right\|_{C\left(\overline{\Omega}\right)}\leq C_{1}.$$
(5.8)
It follows from (3.37) that we need to consider the expression
$$A=\left|L\left(p_{1}+h+F\right)\right|^{2}-\left|L\left(p_{1}+F\right)\right|^%
{2},$$
(5.9)
where the nonlinear operator $L$ is given in (3.13). First, we
single out the part of $A$ that is linear with respect to $h$. This will lead
us to the Fréchet derivative $J_{\lambda,\rho}^{\prime}.$ Next, we
will single out $\left|\Delta h\right|^{2}.$ This will enable us
to apply the Carleman estimate of Theorem 4.1. We have:
$$\left|z_{1}\right|^{2}-\left|z_{2}\right|^{2}=\left(z_{1}-z_{2}\right)%
\overline{z}_{1}+\left(\overline{z}_{1}-\overline{z}_{2}\right)z_{2},\text{ }%
\forall z_{1},z_{2}\in\mathbb{C}.$$
(5.10)
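Identity (5.10) is elementary: expanding the right-hand side, the cross terms $-z_{2}\overline{z}_{1}$ and $\overline{z}_{1}z_{2}$ cancel. A quick numerical confirmation with arbitrary sample values:

```python
# Check |z1|^2 - |z2|^2 = (z1 - z2)*conj(z1) + (conj(z1) - conj(z2))*z2.
z1, z2 = 1.3 - 0.4j, -0.7 + 2.1j  # arbitrary sample complex numbers
lhs = abs(z1) ** 2 - abs(z2) ** 2
rhs = (z1 - z2) * z1.conjugate() + (z1.conjugate() - z2.conjugate()) * z2
assert abs(lhs - rhs) < 1e-12  # the right-hand side is real and equals lhs
```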
Let
$$z_{1}=L\left(p_{1}+h+F\right),\text{ }z_{2}=L\left(p_{1}+F\right).$$
(5.11)
Then, by (5.9)-(5.11), we set
$$A_{1}=\left(z_{1}-z_{2}\right)\overline{z}_{1},\text{ }A_{2}=\left(\overline{z}_{1}-\overline{z}_{2}\right)z_{2},$$
(5.12)
$$A=A_{1}+A_{2}.$$
(5.13)
Taking into account (3.13), (3.37) and (5.11), we
obtain
$$\begin{gathered}\displaystyle z_{1}-z_{2}=\Delta h-2k^{2}\nabla h\left(\nabla V%
-\int_{k}^{\overline{k}}\left(\nabla p_{1}+\nabla F\right)d\kappa\right)\\
\displaystyle+2k\left(\int_{k}^{\overline{k}}\nabla hd\kappa\right)\left(2%
\nabla V-2\int_{k}^{\overline{k}}\left(\nabla p_{1}+\nabla F\right)d\kappa+k%
\left(\nabla p_{1}+\nabla F\right)\right)+2i\left(h_{z}-\int_{k}^{\overline{k}%
}h_{z}d\kappa\right).\end{gathered}$$
(5.14)
Next,
$$\overline{z}_{1}=\left(\Delta\overline{h}+\Delta\overline{p_{1}}+\Delta%
\overline{F}\right)$$
$$-2k\left(\nabla\overline{V}-\int_{k}^{\overline{k}}\left(\nabla\overline{p_{1}%
}+\nabla\overline{h}+\nabla\overline{F}\right)d\kappa\right)\cdot\left(k\left(%
\nabla\overline{p_{1}}+\nabla\overline{h}+\nabla\overline{F}\right)+\nabla%
\overline{V}-\int_{k}^{\overline{k}}\left(\nabla\overline{p_{1}}+\nabla%
\overline{h}+\nabla\overline{F}\right)d\kappa\right)$$
$$-2i\left(k\left(\overline{p_{1z}}+\overline{h}_{z}+\overline{F_{z}}\right)+%
\overline{V_{z}}-\int_{k}^{\overline{k}}\left(\overline{p_{1z}}+\overline{h}_{%
z}+\overline{F_{z}}\right)d\kappa\right).$$
Hence, by (5.12)
$$A_{1}=\left(z_{1}-z_{2}\right)\overline{z}_{1}=\left|\Delta h\right|^{2}+B_{1}%
^{\left(linear\right)}\left(h,\mathbf{x},k\right)+B_{1}\left(h,\mathbf{x},k%
\right),$$
(5.15)
where the expression $B_{1}^{\left(linear\right)}\left(h,\mathbf{x},k\right)$ is
linear with respect to $h=\left(h_{1},h_{2}\right),$
$$\begin{gathered}\displaystyle B_{1}^{\left(linear\right)}\left(h,\mathbf{x},k%
\right)=\Delta hG_{1}+\left(\nabla h\nabla G_{2}\right)\cdot G_{3}+\left(%
\nabla\overline{h}\nabla G_{4}\right)\cdot G_{5}\\
\displaystyle+G_{7}\cdot\left(\int_{k}^{\overline{k}}\nabla hd\kappa\right)%
\nabla G_{6}+G_{9}\cdot\left(\int_{k}^{\overline{k}}\nabla\overline{h}d\kappa%
\right)\nabla G_{8}+G_{10}\left(h_{z}-\int_{k}^{\overline{k}}h_{z}d\kappa%
\right)+G_{11}\left(\overline{h}_{z}-\int_{k}^{\overline{k}}\overline{h}_{z}d%
\kappa\right),\end{gathered}$$
(5.16)
where explicit expressions for the functions $G_{j}\left(\mathbf{x},k\right),j=1,...,11$ can be written out in an obvious way. Furthermore, it follows from
these expressions as well as from (5.7) that $G_{1},G_{2},G_{4},G_{6}\in C_{1}$ and $G_{3},G_{5},G_{7},G_{9},$ $G_{10},G_{11}\in C_{0}$. Moreover,
$$\left\{\begin{array}[]{c}\left\|G_{1}\right\|_{C_{1}},\left\|G_{2}\right\|_{C_%
{1}},\left\|G_{4}\right\|_{C_{1}},\left\|G_{6}\right\|_{C_{1}}\leq C_{1},\\
\left\|G_{3}\right\|_{C_{0}},\left\|G_{5}\right\|_{C_{0}},\left\|G_{7}\right\|%
_{C_{0}},\left\|G_{9}\right\|_{C_{0}},\left\|G_{10}\right\|_{C_{0}},\left\|G_{%
11}\right\|_{C_{0}}\leq C_{1}.\end{array}\right.$$
(5.17)
The term $B_{1}\left(h,\mathbf{x},k\right)$ in (5.15) is nonlinear with respect
to $h$. Applying the Cauchy-Schwarz inequality and also using (5.7)
and (5.8), we obtain
$$B_{1}\left(h,\mathbf{x},k\right)\geq-\frac{1}{4}\left|\Delta h\right|^{2}-C_{1}\left|\nabla h\right|^{2}-C_{1}\int_{k}^{\overline{k}}\left|\nabla h\right|^{2}d\kappa.$$
(5.18)
Similarly to (5.15)-(5.18), we obtain
$$A_{2}=\left(\overline{z}_{1}-\overline{z}_{2}\right)z_{2}=B_{2}^{\left(linear%
\right)}\left(h,\mathbf{x},k\right)+B_{2}\left(h,\mathbf{x},k\right),$$
(5.19)
where the term $B_{2}^{\left(linear\right)}\left(h,\mathbf{x},k\right)$
is linear with respect to $h$ and its form is similar to that of $B_{1}^{\left(linear\right)}\left(h,\mathbf{x},k\right)$ in (5.16),
although with different functions $G_{j},$ which still satisfy direct
analogs of estimates (5.17). As to the term $B_{2}\left(h,\mathbf{x},k\right),$ it is nonlinear with respect to $h$ and, as in (5.18),
$$B_{2}\left(h,\mathbf{x},k\right)\geq-\frac{1}{4}\left|\Delta h\right|^{2}-C_{1}\left|\nabla h\right|^{2}-C_{1}\int_{k}^{\overline{k}}\left|\nabla h\right|^{2}d\kappa.$$
(5.20)
Denote $B\left(h,\mathbf{x},k\right)=B_{1}\left(h,\mathbf{x},k\right)+B_{2}\left(h,%
\mathbf{x},k\right).$ In addition to (5.18) and (5.20), the following upper estimate is valid:
$$\left|B\left(h,\mathbf{x},k\right)\right|\leq C_{1}\left(\left|\Delta h\right|%
^{2}+\left|\nabla h\right|^{2}+\int_{k}^{\overline{k}}\left|\nabla h\right|^{2%
}d\kappa\right).$$
(5.21)
Thus, it follows from (3.13), (3.37), (5.11)-(5.13), (5.15)-(5.20) that
$$J_{\lambda,\rho}\left(p_{1}+h\right)-J_{\lambda,\rho}\left(p_{1}\right)=$$
$$\exp\left(-2\lambda\left(s+d\right)^{-\nu}\right)\int_{\underline{k}}^{%
\overline{k}}\int_{\Omega}\left[S_{1}\Delta h+S_{2}\cdot\nabla h\right]\varphi%
_{\lambda}\left(z\right)d\mathbf{x}d\kappa+2\rho\left[h,p_{1}\right]$$
(5.22)
$$+\exp\left(-2\lambda\left(s+d\right)^{-\nu}\right)\int_{\underline{k}}^{\overline{k}}\int_{\Omega}\left[\left|\Delta h\right|^{2}+B\left(h,\mathbf{x},\kappa\right)\right]\varphi_{\lambda}\left(z\right)d\mathbf{x}d\kappa+\rho\left\|h\right\|_{H_{3}}^{2}.$$
The second line of (5.22)
$$Lin\left(h\right)=\exp\left(-2\lambda\left(s+d\right)^{-\nu}\right)\int_{%
\underline{k}}^{\overline{k}}\int_{\Omega}\left[S_{1}\Delta h+S_{2}\cdot\nabla
h%
\right]\varphi_{\lambda}\left(z\right)d\mathbf{x}d\kappa+2\rho\left[h,p_{1}\right]$$
(5.23)
is linear with respect to $h$, where the vector functions $S_{1}\left(\mathbf{x},k\right),S_{2}\left(\mathbf{x},k\right)$ are such that
$$\left|S_{1}\left(\mathbf{x},k\right)\right|,\left|S_{2}\left(\mathbf{x},k%
\right)\right|\leq C_{1}\text{ in }\overline{\Omega}\times\left[\underline{k},%
\overline{k}\right].$$
(5.24)
As to the third line of (5.22), it can be estimated from below as
$$\begin{gathered}\displaystyle\exp\left(-2\lambda\left(s+d\right)^{-\nu}\right)%
\int_{\underline{k}}^{\overline{k}}\int_{\Omega}B\left(h,\mathbf{x},\kappa%
\right)\varphi_{\lambda}\left(z\right)d\mathbf{x}d\kappa\\
\displaystyle\geq\exp\left(-2\lambda\left(s+d\right)^{-\nu}\right)\left[\frac{%
1}{2}\int_{\underline{k}}^{\overline{k}}\int_{\Omega}\left|\Delta h\right|^{2}%
\varphi_{\lambda}\left(z\right)d\mathbf{x}d\kappa-C_{1}\int_{\underline{k}}^{%
\overline{k}}\int_{\Omega}\left|\nabla h\right|^{2}\varphi_{\lambda}\left(z%
\right)d\mathbf{x}d\kappa\right]+\rho\left\|h\right\|_{H_{3}}^{2}.\end{gathered}$$
(5.25)
In addition, (5.21) implies that
$$\begin{gathered}\displaystyle\exp\left(-2\lambda\left(s+d\right)^{-\nu}\right)%
\left|\int_{\underline{k}}^{\overline{k}}\int_{\Omega}B\left(h,\mathbf{x},%
\kappa\right)\varphi_{\lambda}\left(z\right)d\mathbf{x}d\kappa\right|\\
\displaystyle\leq C_{1}\exp\left(-2\lambda\left(s+d\right)^{-\nu}\right)\int_{%
\underline{k}}^{\overline{k}}\int_{\Omega}\left(\left|\Delta h\right|^{2}+%
\left|\nabla h\right|^{2}\right)\varphi_{\lambda}\left(z\right)d\mathbf{x}d%
\kappa+\rho\left\|h\right\|_{H_{3}}^{2}.\end{gathered}$$
(5.26)
First, consider the functional $Lin\left(h\right)$ in (5.23). It
follows from (3.25), (5.23) and (5.24) that
$$\left|Lin\left(h\right)\right|\leq C_{1}\exp\left(2\lambda t_{\nu}\right)\left%
\|h\right\|_{H_{3}}.$$
Hence, $Lin\left(h\right):H_{3}\rightarrow\mathbb{R}$ is a bounded linear
functional. Therefore, by the Riesz representation theorem, for each pair $\lambda,\nu>0$ there
exists a 2D vector function $Z_{\lambda,\nu}\in H_{3}$ independent of $h$
such that
$$Lin\left(h\right)=\left[Z_{\lambda,\nu},h\right],\text{ }\forall h\in H_{3}.$$
(5.27)
In addition, (5.21), (5.22) and (5.27) imply that
$$\left|J_{\lambda,\rho}\left(p_{1}+h\right)-J_{\lambda,\rho}\left(p_{1}\right)-%
\left[Z_{\lambda,\nu},h\right]\right|\leq C_{1}\exp\left(2\lambda t_{\nu}%
\right)\left\|h\right\|_{H_{3}}^{2}.$$
(5.28)
Thus, applying (5.22)-(5.28), we conclude that $Z_{\lambda,\nu}$ is the Fréchet derivative of the functional $J_{\lambda,\rho}$ at the point $p_{1}$: $Z_{\lambda,\nu}=J_{\lambda,\rho}^{\prime}\left(p_{1}\right)$.
Thus, (5.22) and (5.25) imply that
$$\begin{gathered}\displaystyle J_{\lambda,\rho}\left(p_{1}+h\right)-J_{\lambda,%
\rho}\left(p_{1}\right)-J_{\lambda,\rho}^{\prime}\left(p_{1}\right)\left(h%
\right)\\
\displaystyle\geq\exp\left(-2\lambda\left(s+d\right)^{-\nu}\right)\left[\frac{%
1}{2}\int_{\underline{k}}^{\overline{k}}\int_{\Omega}\left|\Delta h\right|^{2}%
\varphi_{\lambda}\left(z\right)d\mathbf{x}d\kappa-C_{1}\int_{\underline{k}}^{%
\overline{k}}\int_{\Omega}\left|\nabla h\right|^{2}\varphi_{\lambda}\left(z%
\right)d\mathbf{x}d\kappa\right]+\rho\left\|h\right\|_{H_{3}}^{2}.\end{gathered}$$
(5.29)
Assuming that $\lambda\geq\lambda_{0},$ we now apply the Carleman estimate of
Theorem 4.1:
$$\frac{1}{2}\int_{\underline{k}}^{\overline{k}}\int_{\Omega}\left|\Delta h%
\right|^{2}\varphi_{\lambda}\left(z\right)d\mathbf{x}d\kappa-C_{1}\int_{%
\underline{k}}^{\overline{k}}\int_{\Omega}\left|\nabla h\right|^{2}\varphi_{%
\lambda}\left(z\right)d\mathbf{x}d\kappa+\rho\left\|h\right\|_{H_{3}}^{2}$$
$$\geq\frac{C}{\lambda}\sum_{i,j=1}^{3}\int_{\underline{k}}^{\overline{k}}\int_{%
\Omega}\left|h_{x_{i}x_{j}}\right|^{2}\varphi_{\lambda}\left(z\right)d\mathbf{%
x}d\kappa+C\lambda\int_{\underline{k}}^{\overline{k}}\int_{\Omega}\left|\nabla
h%
\right|^{2}\varphi_{\lambda}\left(z\right)d\mathbf{x}d\kappa-C_{1}\int_{%
\underline{k}}^{\overline{k}}\int_{\Omega}\left|\nabla h\right|^{2}\varphi_{%
\lambda}\left(z\right)d\mathbf{x}d\kappa+\rho\left\|h\right\|_{H_{3}}^{2}.$$
Choosing $\lambda\geq\lambda_{1}$ to be sufficiently large, we obtain
$$\begin{gathered}\displaystyle\frac{1}{2}\int_{\underline{k}}^{\overline{k}}%
\int_{\Omega}\left|\Delta h\right|^{2}\varphi_{\lambda}\left(z\right)d\mathbf{%
x}d\kappa-C_{1}\int_{\underline{k}}^{\overline{k}}\int_{\Omega}\left|\nabla h%
\right|^{2}\varphi_{\lambda}\left(z\right)d\mathbf{x}d\kappa+\rho\left\|h%
\right\|_{H_{3}}^{2}\\
\displaystyle\geq\frac{C}{\lambda}\sum_{i,j=1}^{3}\int_{\underline{k}}^{%
\overline{k}}\int_{\Omega}\left|h_{x_{i}x_{j}}\right|^{2}\varphi_{\lambda}%
\left(z\right)d\mathbf{x}d\kappa+C_{1}\lambda\int_{\underline{k}}^{\overline{k%
}}\int_{\Omega}\left|\nabla h\right|^{2}\varphi_{\lambda}\left(z\right)d%
\mathbf{x}d\kappa+\rho\left\|h\right\|_{H_{3}}^{2}.\end{gathered}$$
(5.30)
Finally, noting that $\varphi_{\lambda}\left(z\right)\geq\exp\left(2\lambda\left(s+d\right)^{-\nu}\right)$ in $\Omega$ and using (5.29) and (5.30), we obtain
$$J_{\lambda,\rho}\left(p_{1}+h\right)-J_{\lambda,\rho}\left(p_{1}\right)-J_{%
\lambda,\rho}^{\prime}\left(p_{1}\right)\left(h\right)\geq\frac{C_{1}}{\lambda%
}\left\|h\right\|_{H_{2}}^{2}+\rho\left\|h\right\|_{H_{3}}^{2},$$
$$J_{\lambda,\rho}\left(p_{1}+h\right)-J_{\lambda,\rho}\left(p_{1}\right)-J_{\lambda,\rho}^{\prime}\left(p_{1}\right)\left(h\right)\geq C_{1}\left\|h\right\|_{H_{1}}^{2}+\rho\left\|h\right\|_{H_{3}}^{2}.$$
$\square$
5.3 Proof of Theorem 4.4
This proof is entirely similar to the proof of Theorem 3.1 of [23] and is, therefore, omitted.
5.4 Proof of Theorem 4.5
The existence and uniqueness of the minimizer $p_{\min,\lambda}\in\overline{B\left(R\right)}$, inequality (4.7), as well as the convergence
estimate (4.8), follow immediately from the combination of Theorems
4.3 and 4.4 with Lemma 2.1 and Theorem 2.1 of [23].
$\ \square$
5.5 Proof of Theorem 4.6
Temporarily denote $L\left(p+F\right)=L\left(p+F,V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)}\right),$ $J_{\lambda,\rho}\left(p\right):=$ $J_{\lambda,\rho}\left(p,F,V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)}\right),$ indicating the dependence on
the tail function $V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)}$. Consider the functional $J_{\lambda,\rho}\left(p,F,V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)}\right)$ for $p=p_{\ast},$
$$\begin{gathered}\displaystyle J_{\lambda,\rho}\left(p_{\ast},F,V_{\mu\left(%
\delta\right),\nu,\alpha\left(\delta\right)}\right)=\exp\left(-2\lambda\left(s%
+d\right)^{-\nu}\right)\int_{\underline{k}}^{\overline{k}}\int_{\Omega}\left|L%
\left(p_{\ast}+F,V\right)\left(\mathbf{x},\kappa\right)\right|^{2}\varphi_{%
\lambda}^{2}\left(z\right)d\mathbf{x}d\kappa\\
\displaystyle+\rho\left\|p_{\ast}\right\|_{H_{3}}^{2}.\end{gathered}$$
(5.31)
Since $p_{\ast}\in B\left(R\right)$ and $L\left(p_{\ast}+F_{\ast},V_{\ast}\right)\left(\mathbf{x},\kappa\right)=0,$ (5.31)
implies that
$$J_{\lambda,\rho}\left(p_{\ast},F_{\ast},V_{\ast}\right)=\rho\left\|p_{\ast}%
\right\|_{H_{3}}^{2}\leq\rho R^{2}=\sqrt{\delta}R^{2}.$$
(5.32)
It follows from (3.13), (3.29), (3.34), (4.3), (5.31) and (5.32) that
$$J_{\lambda,\rho}\left(p_{\ast},F,V_{\mu\left(\delta\right),\nu,\alpha\left(%
\delta\right)}\right)\leq C_{2}\sqrt{\delta}\sqrt{\ln\left(\delta^{-1/\left(2t%
_{\nu}\right)}\right)}.$$
(5.33)
Next, using (4.4) and recalling that $\lambda=\ln\left(\delta^{-1/\left(2t_{\nu}\right)}\right)$, we obtain
$$\begin{gathered}\displaystyle J_{\lambda,\rho}\left(p_{\ast},F,V_{\mu\left(%
\delta\right),\nu,\alpha\left(\delta\right)}\right)-J_{\lambda,\rho}\left(p_{%
\min,\lambda\left(\delta\right)},F,V_{\mu\left(\delta\right),\nu,\alpha\left(%
\delta\right)}\right)-J_{\lambda,\rho}^{\prime}\left(p_{\min,\lambda\left(%
\delta\right)},F,V_{\mu\left(\delta\right),\nu,\alpha\left(\delta\right)}%
\right)\left(p_{\ast}-p_{\min,\lambda}\right)\\
\displaystyle\geq\frac{C_{2}}{\ln\left(\delta^{-1/\left(2t_{\nu}\right)}\right%
)}\left\|p_{\ast}-p_{\min,\lambda}\right\|_{H_{2}}^{2}.\end{gathered}$$
Next, since $-J_{\lambda,\rho}\left(p_{\min,\lambda\left(\delta\right)},F,V_{\mu\left(%
\delta\right),\nu,\alpha\left(\delta\right)}\right)\leq 0$ and also by (4.7) $-J_{\lambda,\rho}^{\prime}\left(p_{\min,\lambda},F,V_{\mu\left(\delta\right),%
\nu,\alpha\left(\delta\right)}\right)\left(p_{\ast}-p_{\min,\lambda\left(%
\delta\right)}\right)\leq 0,$ we obtain, using (5.33),
$$\left\|p_{\ast}-p_{\min,\lambda\left(\delta\right)}\right\|_{H_{2}}^{2}\leq C_%
{2}\sqrt{\delta}\left[\ln\left(\delta^{-1/\left(2t_{\nu}\right)}\right)\right]%
^{3/2},$$
which implies (4.9). Estimate (4.10) follows immediately from (4.9), (3.35) and Remark 3.1.
We now prove (4.11) and (4.12). Using (4.8), (4.9)
and the triangle inequality, we obtain for $n=1,2,...$
$$\begin{gathered}\displaystyle\left\|p_{\ast}-p_{n}\right\|_{H_{2}}\leq\left\|p%
_{\ast}-p_{\min,\lambda\left(\delta\right)}\right\|_{H_{2}}+\left\|p_{\min,%
\lambda\left(\delta\right)}-p_{n}\right\|_{H_{2}}\leq C_{2}\delta^{1/4}\left[%
\ln\left(\delta^{-1/\left(2t_{\nu}\right)}\right)\right]^{3/4}+\left\|p_{\min,%
\lambda\left(\delta\right)}-p_{n}\right\|_{H_{3}}\\
\displaystyle\leq C_{2}\delta^{1/4}\left[\ln\left(\delta^{-1/\left(2t_{\nu}%
\right)}\right)\right]^{3/4}+r^{n}\left\|p_{\min,\lambda}-p_{0}\right\|{}_{H_{%
3}},\end{gathered}$$
which proves (4.11). Next, using (4.10) and (4.8), we
obtain
$$\begin{gathered}\displaystyle\left\|c_{\ast}-c_{n}\right\|_{L_{2}\left(\Omega%
\right)}\leq\left\|c_{\ast}-c_{\min,\lambda\left(\delta\right)}\right\|_{L_{2}%
\left(\Omega\right)}+\left\|c_{\min,\lambda\left(\delta\right)}-c_{n}\right\|_%
{L_{2}\left(\Omega\right)}\\
\displaystyle\leq C_{2}\delta^{1/4}\left[\ln\left(\delta^{-1/\left(2t_{\nu}%
\right)}\right)\right]^{3/4}+C_{2}\left\|p_{\min,\lambda\left(\delta\right)}-p%
_{n}\right\|_{H_{2}}\leq C_{2}\delta^{1/4}\left[\ln\left(\delta^{-1/\left(2t_{%
\nu}\right)}\right)\right]^{3/4}+C_{2}\left\|p_{\min,\lambda\left(\delta\right%
)}-p_{n}\right\|_{H_{3}}\\
\displaystyle\leq C_{2}\delta^{1/4}\left[\ln\left(\delta^{-1/\left(2t_{\nu}%
\right)}\right)\right]^{3/4}+C_{2}r^{n}\left\|p_{\min,\lambda\left(\delta%
\right)}-p_{0}\right\|_{H_{3}}.\end{gathered}$$
The latter proves (4.12). $\square$
6 Numerical Study
In this section, we describe some details of the numerical implementation of
the proposed globally convergent method and demonstrate results of
reconstructions for computationally simulated data. Recall that, as
stated in Section 2.1, our applied goal in the numerical studies is to compute
locations and dielectric constants of targets which mimic antipersonnel land
mines and IEDs. We model these targets as small sharp inclusions located in
a uniform background, which is air with its dielectric constant $c\left(air\right)=1.$ Sometimes IEDs can indeed be located in air. In addition,
previous works [6, 51] of the first
author with coauthors considered the problem of imaging targets mimicking land mines
and IEDs in the case when those targets are buried in a sandbox.
Microwave experimental data are used in these publications, and the
tail functions numerical method was applied there. It was demonstrated
in [6, 51] that, after applying certain
data preprocessing procedures, one can treat those targets as located
in air. Recall that $c\left(air\right)=1$ is a good approximation for the
value of the dielectric constant of air. Thus, in this paper, we conduct
numerical experiments for the case when the small inclusions of our interest are
located in air. We test several values of the dielectric constant and
sizes of those inclusions. However, we do not assume in computations the
knowledge of the background in the domain of interest $\Omega$ in (2.1), except that $c\left(\mathbf{x}\right)=1$
outside of $\Omega,$ see (2.6).
6.1 The Carleman Weight Function for numerical studies
The CWF $\varphi_{\lambda}\left(z\right)=\exp\left(2\lambda\left(z+s\right)^{-\nu}\right),$ which was introduced in (3.19), changes
too rapidly due to the presence of the parameter $\nu>0.$ We have
established in our computational experiments that such a rapid change does
not allow us to obtain good numerical results; see also page 1581 of [27] for a similar conclusion. Hence, in our numerical studies we use a
simpler CWF $\psi_{\lambda}\left(z\right),$
$$\psi_{\lambda}\left(z\right)=e^{-2\lambda z}.$$
(6.1)
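The contrast between the two CWFs can be illustrated numerically by comparing the logarithms of their value ratios across the interval $\left(-\xi,d\right)$: with $s$ close to $\xi$, the weight $\varphi_{\lambda}$ varies by many more orders of magnitude than $\psi_{\lambda}$. The parameter values in this sketch are illustrative only:

```python
# Compare how fast the CWFs phi_lam(z) = exp(2*lam*(z+s)**(-nu)) and
# psi_lam(z) = exp(-2*lam*z) vary across (-xi, d).  Logarithms of the
# end-to-end value ratios are compared to avoid float overflow.
# All parameter values are illustrative samples (chosen with s > xi).
lam, nu, s, xi, d = 5.0, 2.0, 0.6, 0.5, 1.0
log_ratio_phi = 2.0 * lam * ((s - xi) ** (-nu) - (s + d) ** (-nu))
log_ratio_psi = 2.0 * lam * (xi + d)
assert log_ratio_phi > log_ratio_psi  # phi varies far more rapidly
```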
We cannot prove an analog of Theorem 4.1 for this CWF. Nevertheless, the
following Carleman estimate is valid in the 1D case [1]:
$$\int_{-\xi}^{d}\left(w^{\prime\prime}\right)^{2}\psi_{\lambda}\left(z\right)dz%
\geq C_{3}\left[\int_{-\xi}^{d}\left(w^{\prime\prime}\right)^{2}\psi_{\lambda}%
\left(z\right)dz+\lambda\int_{-\xi}^{d}\left(w^{\prime}\right)^{2}\psi_{%
\lambda}\left(z\right)dz+\lambda^{3}\int_{-\xi}^{d}w^{2}\psi_{\lambda}\left(z%
\right)dz\right],$$
(6.2)
for all $\lambda>1$ and for any real valued function $w\in H^{2}\left(-\xi,d\right)$ such that $w\left(-\xi\right)=w^{\prime}\left(-\xi\right)=0.$ Here and below the number $C_{3}=C_{3}\left(\xi,d\right)>0$
depends only on $\xi$ and $d$.
To briefly justify the CWF (6.1) from the analytical standpoint,
consider now the case when the Laplace operator is written in partial finite
differences with respect to the variables $x,y\in\left[-b,b\right]$ (see (2.1)), with the uniform grid step size $h>0$ with respect to each
of the variables $x$ and $y$:
$$\Delta^{h}=\frac{\partial^{2}}{\partial z^{2}}+\Delta_{x,y}^{h}.$$
(6.3)
Here $\Delta_{x,y}^{h}$ is the Laplace operator with respect to $x,y$,
which is written in finite differences. Suppose that we have $M_{h}$
interior grid points in each direction $x$ and $y$. The domain $\Omega$ in (2.1) becomes
$$\Omega_{h}=\left\{\left(x_{j},y_{s},z\right):\left|x_{j}\right|,\left|y_{s}%
\right|<b,z\in\left(-\xi,d\right)\right\};\text{ }j,s=1,...,M_{h},$$
where $\left(x_{j},y_{s}\right)$ are grid points. Then the finite
difference analog of the integral of $\left(\Delta u\right)^{2}\psi_{\lambda}\left(z\right)$ over the domain $\Omega$ is
$$Z_{h}\left(u,\lambda\right)=\sum_{j,s=1}^{M_{h}}h^{2}\int_{-\xi}^{d}\left[%
\left(u_{zz}+u_{xx}^{h}+u_{yy}^{h}\right)\left(x_{j},y_{s},z\right)\right]^{2}%
\psi_{\lambda}\left(z\right)dz,$$
(6.4)
where $u\left(x_{j},y_{s},z\right)$ is a discrete real valued function
defined in $\Omega_{h}$ and such that $u_{zz}\left(x_{j},y_{s},z\right)\in L_{2}\left(-\xi,d\right)$ for all $\left(x_{j},y_{s}\right).$ In
addition, $u\left(x_{j},y_{s},-\xi\right)=\partial_{z}u\left(x_{j},y_{s},-\xi\right)=0.$ Also, in (6.4) $u_{xx}^{h}$ and $u_{yy}^{h}$
are the corresponding finite difference derivatives of the function $u\left(x_{j},y_{s},z\right)$ at the point $\left(x_{j},y_{s},z\right)$.
“Interior” grid points are those located in $\overline{\Omega}\diagdown\partial\Omega.$ As to the grid points located at $\partial\Omega,$ they are counted in the well-known way in the finite
difference derivatives in (6.4). Obviously,
$$Z_{h}\left(u,\lambda\right)\geq\frac{1}{2}\sum_{j,s=1}^{M_{h}}h^{2}\int_{-\xi}%
^{d}\left[u_{zz}\left(x_{j},y_{s},z\right)\right]^{2}\psi_{\lambda}\left(z%
\right)dz-\widehat{C}\sum_{j,s=1}^{M_{h}}\int_{-\xi}^{d}\left[u\left(x_{j},y_{%
s},z\right)\right]^{2}\psi_{\lambda}\left(z\right)dz.$$
(6.5)
Here and below in this section the number $\widehat{C}=\widehat{C}\left(1/h\right)>0$ depends only on $1/h.$ Hence, the following analog of the
Carleman estimate (6.2) for the case of the operator (6.3)
follows immediately from (6.5):
$$\begin{gathered}\displaystyle Z_{h}\left(u,\lambda\right)\geq C_{3}\sum_{j,s=1%
}^{M_{h}}h^{2}\int_{-\xi}^{d}\left[u_{zz}\left(x_{j},y_{s},z\right)\right]^{2}%
\psi_{\lambda}\left(z\right)dz\\
\displaystyle+\widehat{C}\left[\lambda\sum_{j,s=1}^{M_{h}}\int_{-\xi}^{d}\left%
[u_{z}\left(x_{j},y_{s},z\right)\right]^{2}\psi_{\lambda}\left(z\right)dz+%
\lambda^{3}\sum_{j,s=1}^{M_{h}}\int_{-\xi}^{d}\left[u\left(x_{j},y_{s},z\right%
)\right]^{2}\psi_{\lambda}\left(z\right)dz\right],\forall\lambda\geq\widetilde%
{\lambda}\left(h\right)>1,\end{gathered}$$
(6.6)
where $\widetilde{\lambda}\left(h\right)$ increases as $h$ decreases.
Suppose now that the operators $\Delta$ and $\nabla$ in (3.13), (3.20), (3.37) are rewritten in partial finite differences
with respect to $x,y.$ As to the spaces $H^{3}(\Omega)$ and $H_{3+r},$ they
were introduced to ensure that the functions $p\in C_{1},V\in C^{1}\left(\overline{\Omega}\right),F\in C_{2},$ see (3.31), (3.32). Note
that by the embedding theorem $H^{n}\left(-\xi,d\right)\subset C^{n-1}\left[-\xi,d\right],n\geq 1$. Thus, we replace the space $H^{m}(\Omega)$
with $m=1,2,3$ in (3.30) with the following finite difference analog
of it for complex valued functions $f$:
$$H^{n,h}(\Omega_{h})=\left\{f\left(x_{j},y_{s},z\right):\left\|f\right\|_{H^{n,%
h}(\Omega_{h})}^{2}=\sum_{j,s=1}^{M_{h}}\sum_{r=0}^{n}h^{2}\int_{-\xi}^{d}%
\left|\partial_{z}^{r}f\left(x_{j},y_{s},z\right)\right|^{2}dz\right\},n=1,2,$$
and similarly for the replacement of $H_{m}$ with $H_{n,h}.$ So, we replace
the regularization terms $\alpha\|V\|_{H^{3}(\Omega)}^{2}$ and $\rho\left\|p\right\|_{H_{3}}^{2}$ in (3.20) and (3.37) with $\alpha\|V_{h}\|_{H^{2,h}(\Omega)}^{2}$ and $\rho\left\|p_{h}\right\|_{H_{2,h}}^{2}$ respectively. Also, we replace
in (3.34) $H_{4}$ with $H_{3,h}$ and in (3.36) we replace $H_{3}$
with $H_{2,h}.$ The functionals $J_{\lambda,\rho}\left(p+F\right)$ and $\widetilde{I}_{\mu,\alpha}\left(W\right)$ in (3.37) and (3.27) are replaced with their finite difference analogs,
$$\widetilde{I}_{\mu,\alpha}^{h}\left(W_{h}\right)=\exp\left(2\mu d\right)\int_{%
\Omega}\left|\Delta^{h}W_{h}+\Delta^{h}Q_{h}\right|^{2}\psi_{\mu}\left(z\right%
)d\mathbf{x}+\alpha\|W_{h}+Q_{h}\|_{H^{2,h}(\Omega_{h})}^{2},\text{ }$$
(6.7)
$$J_{\lambda,\rho}^{h}\left(p_{h}\right)=\exp\left(2\lambda d\right)\int_{%
\underline{k}}^{\overline{k}}\int_{\Omega}\left|L^{h}\left(p_{h}+F_{h}\right)%
\left(\mathbf{x},\kappa\right)\right|^{2}\varphi_{\lambda}^{2}\left(z\right)d%
\mathbf{x}d\kappa+\rho\left\|p_{h}\right\|_{H_{2,h}}^{2},$$
(6.8)
where $V_{h},W_{h},p_{h},Q_{h}$ and $F_{h}$ are finite difference analogs of
functions $V,W,p,Q$ and $F$ respectively and $L^{h}$ is the finite
difference analog of the operator $L$ in which operators $\Delta$ and $\nabla$ are replaced with their above mentioned finite difference analogs.
Then the Carleman estimate (6.6) implies that the straightforward
analogs of Theorems 4.2-4.6 are valid for the functionals (6.7) and (6.8). The only restriction is that the grid step size $h$ should be
bounded from below,
$$h\geq h_{0}=const.>0.$$
(6.9)
In other words, numerical experiments should not be conducted for the case
when $h$ tends to zero, as is sometimes done for forward problems for
PDEs. It is our computational experience that condition (6.9) is
sufficient for computations. So, we do not change $h$ in our numerical
studies below.
Remarks 6.1:
1.
For brevity, we do not reformulate here those analogs of Theorems
4.2-4.6. Also, both for brevity and convenience we describe our procedures
below for the case of the continuous spatial variable $\mathbf{x}$. Still,
we actually work in our computations with functionals (6.7) and (6.8).
2.
The reason why we have presented the above theory for the case of the
CWF (3.19) is that it is both consistent and valid for the 3D case.
We believe that this theory is interesting in its own right from the
analytical standpoint. On the other hand, in the case of the CWF (6.1)
and the assumption about partial finite differences, the corresponding
theory (unlike the computations!) is similar to the one which we (with
coauthors) have developed in the 1D case of [1].
6.2 Data simulation and propagation
To computationally simulate the boundary data $g_{0}(\mathbf{x},k)$ in (2.7), we solve the Lippmann-Schwinger integral equation
$$u(\mathbf{x},k)=e^{ikz}+k^{2}\int_{\Omega}\Phi(\mathbf{x},\mathbf{y},k)(c(\mathbf{y})-1)u(\mathbf{y},k)d\mathbf{y},$$
(6.10)
where $\Phi(\mathbf{x},\mathbf{y},k)$ is the fundamental solution of the
Helmholtz equation with $c(\mathbf{x})\equiv 1$:
$$\Phi(\mathbf{x},\mathbf{y},k)=\frac{e^{ik|\mathbf{x}-\mathbf{y}|}}{4\pi|%
\mathbf{x}-\mathbf{y}|},\quad\mathbf{x}\neq\mathbf{y}.$$
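For reference, a minimal evaluation of the fundamental solution $\Phi$; the sample points and wavenumber below are placeholders, and the assertion only checks the amplitude $\left|\Phi\right|=1/\left(4\pi\left|\mathbf{x}-\mathbf{y}\right|\right)$:

```python
import cmath, math

# Fundamental solution of the Helmholtz equation with c(x) = 1.
def fundamental_solution(x, y, k):
    r = math.dist(x, y)                      # |x - y|, requires x != y
    return cmath.exp(1j * k * r) / (4.0 * math.pi * r)

# Sample evaluation points and wavenumber (placeholders).
x, y, k = (0.3, -0.2, 0.9), (0.0, 0.0, 0.0), 16.2
r = math.dist(x, y)
amp = abs(fundamental_solution(x, y, k))
assert abs(amp - 1.0 / (4.0 * math.pi * r)) < 1e-12
```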
The spectral method of [52], which is based on the
periodization technique and the fast Fourier transform, is used to solve (6.10), see, e.g. [53] for the
numerical implementation of this method in MATLAB.
We work with dimensionless variables. Typically, linear sizes of
antipersonnel land mines are between 5 and 10 centimeters (cm), see, e.g.
[54]. Hence, just as in the papers with experimental data of our
research group [6, 7], we introduce
the dimensionless variables $\mathbf{x}^{\prime}=\mathbf{x}/(10\,\text{cm})$. Our mine-like targets are ball-shaped. Hence, their radii $r=0.3$ and $0.5,$ for example, correspond to diameters of those balls of 6 cm and 10 cm
respectively. This change of variables leads to the dimensionless frequency $k$, which is also called the “wavenumber”. Hereafter, for
convenience and brevity, we keep the same notation for the
dimensionless spatial variables $\mathbf{x}$ as before. Note that the
dimensionless wavenumber $k=16.2,$ which we work with below (see (6.12)), corresponds to the frequency $f=7.7$ GHz. Since in [6, 7] microwave experimental data were
collected by our research group for the range of frequencies from 1 GHz to
10 GHz, $f=7.7$ GHz is a realistic value of the frequency.
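The stated correspondence between $k=16.2$ and $f=7.7$ GHz can be checked directly from the scaling $\mathbf{x}^{\prime}=\mathbf{x}/(10\,\text{cm})$, under which the dimensionless wavenumber is $k=2\pi fL/c$ with $L=10$ cm. This formula for $k$ is the standard one implied by the scaling, not an equation quoted from the text:

```python
import math

# Dimensionless wavenumber k = 2*pi*f*L/c for the length scale L = 10 cm.
c_light = 299_792_458.0  # speed of light, m/s
L = 0.10                 # length scale, m (10 cm)
f = 7.7e9                # frequency, Hz
k = 2.0 * math.pi * f * L / c_light
assert abs(k - 16.2) < 0.1  # matches the dimensionless wavenumber k = 16.2
```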
All inclusions that we have numerically tested are listed in Table 1, where $r$ denotes the radius of the corresponding ball-shaped
inclusion. To have smooth target/background interfaces, the dielectric
constants of the inclusions were smoothed out for a better stability of the
numerical method of solving the Lippmann-Schwinger equation (6.10). However, the maximal values of the dielectric constants
remain unchanged under this smoothing, and these values are attained at the
centers of those balls. In our study, the center of each ball representing a
single inclusion is at the point $\mathbf{x}=\left(x,y,z\right)=\left(0,0,0\right)$, and the centers of the two inclusions in case number 4 of Table 1 are placed at the points $\left(x,y,z\right)=(-0.75,0,0)$ (left
inclusion) and $\left(x,y,z\right)=(0.75,0,0)$ (right inclusion). However,
when running the inversion procedure, we assume knowledge of
neither those centers nor the shapes of those inclusions.
In the setup for our computational experiments, we want to be close to the
experimental setup of [6, 7].
Actually, in [6, 7] the data are collected not at the part $\Gamma$ (2.1) of the boundary of the domain $\Omega$, as in (2.7). Instead, they are collected on a square $P_{meas}$, which is a part of the so-called measurement plane
$$P_{m}=\{z=-A\},$$
(6.11)
where $A=\mathrm{const}>\xi$. We solve the Lippmann-Schwinger equation (6.10) to obtain computationally simulated data $f(\mathbf{x},k)$ for $\mathbf{x}\in P_{meas}$. We refer to $f(\mathbf{x},k)$ as the “measured data”. The measurement plane $P_{m}$ is located far from $\Gamma$. This causes several complications. First, we would need to solve our CISP in a large computational domain, which could be a time-consuming process. Second, looking at the measured data, it is not clear how to distinguish the inclusions; see Fig. 1a.
Hence, we need to propagate the measured data $f(\mathbf{x},k)$ generated by the Lippmann-Schwinger solver from the rectangle $P_{meas}$ to the so-called propagation plane $P_{p}=\{z=A^{\prime}\}$, $A^{\prime}\leq-\xi$, which is closer to our inclusions. In fact, we propagate to the plane which includes the rectangle $\Gamma$. As a result, we obtain the so-called propagated data (Fig. 1b), which are more focused on the target of our interest than the original data, so we can now clearly see the location of our inclusion in the $x,y$ coordinates. The resulting function $u\left(\mathbf{x},k\right)$ is our given boundary data $g_{0}(\mathbf{x},k)$ in (2.7) for our CISP. The derivative $u_{z}\left(\mathbf{x},k\right)$ for $\mathbf{x}\in\Gamma$, i.e. the function $g_{1}(\mathbf{x},k)$ in (2.8), is calculated by propagating $f(\mathbf{x},k)$ to the plane $\left\{z=-\xi-\varepsilon\right\}$ for a small number $\varepsilon>0$. Next, a finite difference is used to approximate $g_{1}(\mathbf{x},k)$.
For brevity, we do not describe the data propagation procedure here. Instead, we refer to [5, 6, 7] for detailed descriptions. In fact, this procedure is quite popular in optics under the name of the angular spectrum representation method [55].
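A minimal sketch of one angular spectrum step, assuming a field sampled on a uniform square grid and propagation by a distance $dz\geq 0$ along $z$ (evanescent components then decay), is:

```python
import numpy as np

def angular_spectrum_propagate(u, dx, k, dz):
    """Propagate a sampled field u(x, y) a distance dz >= 0 along z using the
    angular spectrum representation: FFT, multiply by exp(i*kz*dz), inverse FFT."""
    n = u.shape[0]                             # square n-by-n grid assumed
    fx = np.fft.fftfreq(n, d=dx)               # spatial frequencies
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx, indexing="ij")
    kz = np.sqrt((k**2 - kx**2 - ky**2).astype(complex))  # imaginary -> decay
    U = np.fft.fft2(u)
    return np.fft.ifft2(U * np.exp(1j * kz * dz))
```

A plane wave traveling along $z$ (a constant field in the transverse plane) only picks up the phase $e^{ik\,dz}$, which serves as a sanity check.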
We also remark that both in the data propagation procedure and in our convexification method we need to calculate some derivatives of noisy data: the $\partial_{k,z}^{2}$-derivative of the propagated data and the $\partial_{k}$-derivative in the convexification. In all cases this is done using finite differences. We have not observed any instabilities, probably because the step sizes of our finite differences were not too small. The same was true in all previously cited publications of this research group.
To propagate the function $f(\mathbf{x},k)$ close to an inclusion, we first need to figure out where this inclusion is located, i.e. we need to estimate the number $\xi$ in (2.1). Fortunately, the data propagation procedure allows us to do this. For example, consider two inclusions with the same size $r=0.3$ but with different dielectric constants $c=3$ and $c=5$. The centers of both are located at the point $\left(0,0,0\right)$. We solve the Lippmann-Schwinger equation in each of these two cases to generate the data at the measurement plane $P_{m}$ with $A=8$; see (6.11). Next, we propagate the data to several propagation planes $P_{p,a}=\left\{z=a\right\}$, where $a\in\left(-8,2\right]$. Here, we use $k=16.2$ (see (6.12)).
The dependence of the maximal absolute value of the propagated data $M\left(a\right)=\max_{P_{p,a}}\left|u\left(x,y,a,16.2\right)\right|$
on the number $a$ for these inclusions is depicted in Fig. 2.
We see that the function $M\left(a\right)$ attains its maximal value near the point $a_{0}=-0.5$ in both cases. This point is located reasonably close to the actual position of the front faces (at $z=-0.15$) of the corresponding inclusions. The function $M\left(a\right)$ attains its maximal value at points $a$ close to $a_{0}$ for all other inclusions we have tested. Therefore, we propagate the measured data for all inclusions to the propagation plane $P_{p,-0.5}=\left\{z=-0.5\right\}$ and set $\xi=-0.5$ in (2.1).
We have found in our computations that the optimal interval of wavenumbers
is:
$$k\in\left[15.2,16.2\right].$$
(6.12)
We divide this interval into ten subintervals with the step size $\Delta k=0.1$. For each $k=15.2,15.3,\dots,16.1,16.2$ and for each inclusion under consideration, we solve the Lippmann-Schwinger equation (6.10) to generate the function $f(\mathbf{x},k)$. Next, by propagating these data, we obtain the functions $g_{0}(\mathbf{x},k)$ and $g_{1}(\mathbf{x},k)$ in (2.7) and (2.8), respectively.
Using (2.4), (3.3), (3.7) and (3.9), consider the function $q(\mathbf{x},k)$ on the propagation plane $P_{p}$, i.e. at the boundary $\Gamma$. In fact, this function is denoted by $\phi_{0}\left(\mathbf{x},k\right)$ in (3.14), and it is one of the two boundary conditions (the second one is $\phi_{1}\left(\mathbf{x},k\right)$ in (3.14)) which generate the function $F_{h}$ in the functional $J_{\lambda,\rho}^{h}\left(p_{h}\right)$ in (6.8).
Fig. 1c displays the function $\phi_{0}(\mathbf{x},k)$
for the inclusion number 1 in Table 1 for $k=16.2$.
6.3 Computational domain
To model the experimental setup of [6, 7] we use the following measurement
plane:
$$P_{m}=\left\{z=-8\right\},\qquad P_{meas}=\{\mathbf{x}:(x,y)\in(-3,3)\times(-3,3),\,z=-8\},$$
where $P_{meas}\subset P_{m}$ is the square on which measurements are conducted and $z=-8$ corresponds to 80 cm. The latter is the approximate distance from the center of any inclusion to the plane $\left\{z=0\right\}$ where detectors are located in [6, 7]. Solving equation (6.10), we generate the function $f(\mathbf{x},k)$, $k\in\left[15.2,16.2\right]$. Next, we propagate this function to $\Gamma$,
$$\Gamma=\{\mathbf{x}:(x,y)\in(-3,3)\times(-3,3),z=-0.5\}\subset P_{p}.$$
Here, $z=-0.5$ was found in section 6.2. Finally, we define our
computational domain as
$$\Omega=\{\mathbf{x}:(x,y,z)\in(-3,3)\times(-3,3)\times(-0.5,4.5)\}.$$
(6.13)
6.4 Adding noise
We add a random noise to the simulated data $f(\mathbf{x},k)$ as follows:
$$f_{noisy}(\mathbf{x},k)=f(\mathbf{x},k)+\delta\|f(\mathbf{x},k)\|_{L^{2}(\Gamma)}\frac{\sigma(\mathbf{x},k)}{\|\sigma(\mathbf{x},k)\|_{L^{2}(\Gamma)}}.$$
Here, $\delta$ is the noise level. Next, $\sigma(\mathbf{x},k)=\sigma_{1}(\mathbf{x},k)+i\sigma_{2}(\mathbf{x},k)$, where $\sigma_{1}(\mathbf{x},k)$ and $\sigma_{2}(\mathbf{x},k)$ are random numbers uniformly distributed on the interval $(-1,1)$. We use below $\delta=0.15$, i.e. $15\%$ additive noise. Fig. 3 displays the absolute value of the simulated data with noise $f_{noisy}(\mathbf{x},k)$, the corresponding propagated data $g_{0,noisy}(\mathbf{x},k)$, and the function $\phi_{0,noisy}(\mathbf{x},k)$ for the same inclusion and wavenumber $k=16.2$ as in Fig. 1. We see that the data propagation procedure has a smoothing effect on our noisy measured data, since $g_{0}(\mathbf{x},k)$ in Fig. 1b and $g_{0,noisy}(\mathbf{x},k)$ in Fig. 3b are almost identical.
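As a sketch, with discrete vector norms standing in for the $L^{2}(\Gamma)$ norms, this noise model can be implemented as:

```python
import numpy as np

def add_noise(f, delta, rng):
    """Additive complex noise scaled so that ||f_noisy - f|| = delta * ||f||,
    with real and imaginary parts drawn uniformly from (-1, 1)."""
    sigma = rng.uniform(-1, 1, f.shape) + 1j * rng.uniform(-1, 1, f.shape)
    return f + delta * np.linalg.norm(f) * sigma / np.linalg.norm(sigma)
```

By construction the relative perturbation is exactly $\delta$ in the discrete norm.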
6.5 The algorithm
Based on the above theory, we use the following algorithm for determining
the function $c(\mathbf{x})$ from simulated data with noise $f(\mathbf{x},k)$
(here, the subscript “$noisy$” is left out for convenience,
also see item 1 in Remarks 6.1):
1.
Using the data propagation procedure, calculate the boundary data $g_{0}(\mathbf{x},k)$ and $g_{1}(\mathbf{x},k)$.
2.
Calculate the subsequent boundary conditions $\phi_{0}(\mathbf{x},k)$, $\phi_{1}(\mathbf{x},k)$, $\psi_{0}(\mathbf{x})$, and $\psi_{1}(\mathbf{x})$.
3.
Compute the auxiliary functions $Q_{h}(\mathbf{x})$ and $F_{h}(\mathbf{x},k)$.
4.
Compute the minimizer $W_{\min,h}(\mathbf{x})$ of the functional $\widetilde{I}_{\mu,\alpha}^{h}\left(W_{h}\right)$ in (6.7).
5.
Using the computed function $V_{h}(\mathbf{x})=W_{\min,h}(\mathbf{x})+Q_{h}(\mathbf{x})$, minimize the functional $J_{\lambda,\rho}^{h}(p_{h})$ in (6.8). Let the function $p_{h,\min}(\mathbf{x},k)$ be its minimizer.
Calculate the function $q_{h}(\mathbf{x},k)=p_{h,\min}(\mathbf{x},k)+F_{h}(\mathbf{x},k)$.
6.
Compute the function $v_{h}(\mathbf{x},k)$ for $k=\underline{k}$ as follows:
$$v_{h}(\mathbf{x},\underline{k})=-\int_{\underline{k}}^{\overline{k}}q_{h}(\mathbf{x},\kappa)d\kappa+V_{h}(\mathbf{x}).$$
7.
Calculate the approximation for the unknown coefficient $c(\mathbf{x})$ using the following formulae (see (2.6), (3.8)):
$$\beta(\mathbf{x})=-\Delta^{h}v_{h}(\mathbf{x},\underline{k})-\underline{k}^{2}\nabla v_{h}(\mathbf{x},\underline{k})\cdot\nabla v_{h}(\mathbf{x},\underline{k})+2i\underline{k}v_{h,z}(\mathbf{x},\underline{k}),$$
$$c\left(\mathbf{x}\right)=\left\{\begin{array}[]{c}\mathop{\rm Re}\beta\left(\mathbf{x}\right)+1,\text{ if }\mathop{\rm Re}\beta\left(\mathbf{x}\right)\geq 0\text{ and }\mathbf{x}\in\Omega,\\ 1,\text{ otherwise.}\end{array}\right.$$
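A schematic implementation of step 7 could look as follows. This is a sketch only: `np.gradient` central differences stand in for the paper's finite-difference operators, the array axes are assumed ordered $(x,y,z)$, and the signs follow the formula in the text as printed:

```python
import numpy as np

def recover_coefficient(v, h, k_low):
    """Sketch of step 7: build beta(x) from v_h(x, k_low) by central finite
    differences on a uniform grid with spacing h, then truncate so c(x) >= 1."""
    # discrete Laplacian via repeated np.gradient (not the paper's operator)
    lap = sum(np.gradient(np.gradient(v, h, axis=ax), h, axis=ax) for ax in range(3))
    grads = np.gradient(v, h)              # [dv/dx, dv/dy, dv/dz]
    grad_sq = sum(g * g for g in grads)    # (grad v) . (grad v)
    v_z = grads[2]
    beta = -lap - k_low**2 * grad_sq + 2j * k_low * v_z
    # truncation: keep Re(beta) + 1 where nonnegative, else background value 1
    return np.where(beta.real >= 0.0, beta.real + 1.0, 1.0)
```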
6.6 Numerical implementation
We now present some details of the numerical implementation. When minimizing
functionals $\widetilde{I}_{\mu,\alpha}^{h}\left(W_{h}\right)$ and $J_{\lambda,\rho}^{h}\left(p_{h}\right)$ in (6.7) and (6.8), we use finite differences not only with respect to $x,y$ but with respect
to $z$ as well. Thus, $z-$derivatives in these functionals are also written
in finite differences. For brevity we use the same notations $\widetilde{I}_{\mu,\alpha}^{h}\left(W_{h}\right)$ and $J_{\lambda,\rho}^{h}\left(p_{h}\right)$ for these functionals.
This is the fully discrete case, unlike the semi-discrete case of (6.7), (6.8). The theory for the fully discrete cases of nonlinear
ill-posed problems for PDEs is not yet developed well. It seems that such a
theory is much more complicated than the one for the semi-discrete case.
There are only a few results for the fully discrete case, and all are for
linear ill-posed problems for PDEs, as opposed to our nonlinear case, see,
e.g. [56, 57]. Since it is not yet clear to us how to extend the above theorems to the fully discrete case, we are not concerned with such extensions here.
We minimize the resulting functionals with respect to the values of the corresponding functions at grid points. In the computational domain (6.13), we use a uniform grid with $N_{x}=N_{y}=N_{z}=51$ points and the corresponding step sizes $h_{x},h_{y},h_{z}$, where $h_{x}=h_{y}=h$. The grid point labeled $(j,s,l)$ corresponds to $\mathbf{x}=(x,y,z)=(x_{j},y_{s},z_{l})$. In addition, the interval $[\underline{k},\overline{k}]$ of wavenumbers is discretized by $N_{k}=11$ points $k_{n}$ with the step size $h_{k}$. Hence, we use the following discrete functions at the grid points: $W_{h}(\mathbf{x})=W(x_{j},y_{s},z_{l})=W_{j,s,l}$ and $p_{h}(\mathbf{x},k)=p_{h}(x_{j},y_{s},z_{l},k_{n})=p_{j,s,l,n}$.
To minimize the functionals $\widetilde{I}_{\mu,\alpha}^{h}(W_{j,s,l})$ and $J_{\lambda,\rho}^{h}(p_{j,s,l,n})$, we use the conjugate gradient method (CG) instead of the gradient projection method suggested by our theory. Indeed, similarly to [1], we have observed that the results obtained by these two methods are practically the same. On the other hand, CG is easier to implement numerically than the gradient projection method. Note that we do not employ the standard line search algorithm for determining the step size of the CG. Instead, we start with the step size $10^{-4}$, which is halved if the value of the corresponding functional at the current iteration exceeds its value at the previous iteration; otherwise it remains the same. The minimization algorithm is stopped when the step size is less than $10^{-10}$. We use zero as the starting point of the CG for both functions $W_{j,s,l}$ and $p_{j,s,l,n}$.
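The step-size rule above can be sketched as follows, shown with plain gradient descent for brevity (the paper uses CG search directions; `max_iter` is an added safety guard, not part of the paper's rule):

```python
import numpy as np

def minimize_step_halving(fun, grad, x0, step=1e-4, min_step=1e-10, max_iter=100000):
    """Descent with the step rule from the text: halve the step whenever the
    functional increases, stop once the step drops below min_step."""
    x = np.asarray(x0, dtype=float).copy()
    f_prev = fun(x)
    for _ in range(max_iter):
        if step < min_step:
            break
        x_new = x - step * grad(x)
        f_new = fun(x_new)
        if f_new > f_prev:
            step /= 2.0                   # functional increased: reject, halve the step
        else:
            x, f_prev = x_new, f_new      # accept; step size remains the same
    return x
```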
Gradients of both functionals $\widetilde{I}_{\mu,\alpha}^{h}(W_{j,s,l})$ and $J_{\lambda,\rho}^{h}(p_{j,s,l,n})$ are calculated analytically at each step; for brevity, we do not provide the details. Rather, we refer to formulae (7.7) and (7.8) of [58], where gradients of similar functionals are calculated analytically using the Kronecker delta function. Also, due to the difficulty of the numerical implementation of the $H^{2,h}(\Omega_{h})$-norm, we use the simpler $L^{2}$ norm in (6.7). As to (6.8), we have established numerically that the minimization of the functional $J_{\lambda,\rho}^{h}\left(p_{h}\right)$ works better if the regularization term is absent. Hence, we set $\rho=0$ in (6.8).
6.7 Reconstruction results
In this section we present the results of our reconstructions for the
inclusions listed in Table 1 using the above algorithm. These
results are obtained using the Carleman Weight Function (6.1) with $\mu=8$ in (6.7) and $\lambda=8$ in (6.8). We have found that these are the optimal values of the parameters $\mu$ and $\lambda$. For each inclusion, Table 2 lists the maximal value of the exact coefficient $c_{exact}=\max_{inclusion}c\left(\mathbf{x}\right)$, the radius $r$, the maximal value of the computed coefficient $c_{comp}=\max_{inclusion}c(\mathbf{x})$, the relative computational error
$$\varepsilon=\frac{|c_{comp}-c_{exact}|}{c_{exact}}\cdot 100\%,$$
and the location, i.e. the $z$ coordinate of the point where the value of $c_{comp}$ is attained.
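As a trivial illustration of this error measure (the numbers here are made up, not taken from Table 2):

```python
def relative_error_percent(c_comp, c_exact):
    """Relative computational error in percent, as defined above."""
    return abs(c_comp - c_exact) / c_exact * 100.0

print(relative_error_percent(2.18, 2.0))  # about 9.0 (percent)
```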
Note that while we have added $15\%$ noise to our simulated data, the relative computational errors of the reconstructed coefficients do not exceed 9% in all cases, which is 1.67 times less than the level of noise in the data. Moreover, the locations of the points where the values of $c_{comp}$ are attained are also reconstructed with good accuracy. Indeed, we need our reconstructed inclusions to be somewhere between $-r$ and $r$, where either $r=0.3$ or $r=0.5$. Fig. 4 displays the exact and computed images for inclusion number 1 in Table 1. The images are produced in ParaView.
Until now we have considered only the case of a single inclusion. The case of two inclusions, which is listed as number 4 in Table 1, is very similar. The absolute value of the simulated data with noise $f_{noisy}(\mathbf{x},k)$, the propagated data $g_{0,noisy}(\mathbf{x},k)$, and the function $\phi_{0,noisy}(\mathbf{x},k)$ for the two inclusions and the wavenumber $k=16.2$ are displayed in Fig. 5.
Looking at the original data of Fig. 5a, we cannot clearly distinguish these two inclusions. However, Figures 5b and 5c show that these two inclusions can be clearly separated after the data propagation procedure. Furthermore, these figures also indicate that the left inclusion has a larger dielectric constant and a smaller size than the right one, which is true. The reconstruction results of Fig. 6 reflect this fact too. Here, the locations of both inclusions are computed accurately, and the larger inclusion appears larger in the reconstructed image 6b. The values of $c_{comp}$ in both inclusions are also computed with good accuracy; see Table 2. This result is obtained using the same parameters as in the case of a single inclusion.
References
[1]
M. V. Klibanov, A. E. Kolesov, L. Nguyen, A. Sullivan, Globally strictly
convex cost functional for a 1-D inverse medium scattering problem with
experimental data, SIAM J. on Applied Mathematics 77 (5) (2017) 1733–1755.
[2]
L. Beilina, M. V. Klibanov, A globally convergent numerical method for a
coefficient inverse problem, SIAM Journal on Scientific Computing 31 (1)
(2008) 478–509.
[3]
L. Beilina, M. V. Klibanov, Approximate Global Convergence and Adaptivity for
Coefficient Inverse Problems, Springer, 2012.
[4]
M. V. Klibanov, D.-L. Nguyen, L. H. Nguyen, H. Liu, A globally convergent
numerical method for a 3D coefficient inverse problem with a single
measurement of multi-frequency data, accepted for publication in Inverse
Problems and Imaging, also available in arXiv: 1612.0414.
[5]
A. E. Kolesov, M. V. Klibanov, L. H. Nguyen, D.-L. Nguyen, N. T. Thanh, Single
measurement experimental data for an inverse medium problem inverted by a
multi-frequency globally convergent numerical method, Applied Numerical
Mathematics 120 (2017) 176–196.
[6]
D.-L. Nguyen, M. V. Klibanov, L. H. Nguyen, M. A. Fiddy, Imaging of buried
objects from multi-frequency experimental data using a globally convergent
inversion method, J. Inverse and Ill-Posed Problems, accepted for
publication (2017), available online, DOI: 10.1515/jiip-2017- 0047.
[7]
D.-L. Nguyen, M. V. Klibanov, L. H. Nguyen, A. E. Kolesov, M. A. Fiddy, H. Liu,
Numerical solution of a coefficient inverse problem with multi-frequency
experimental raw data by a globally convergent algorithm, Journal of
Computational Physics 345 (2017) 17–32.
[8]
L. Beilina, M. V. Klibanov, Globally strongly convex cost functional for a
coefficient inverse problem, Nonlinear Analysis: Real World Applications 22
(2015) 272–288.
[9]
M. V. Klibanov, O. V. Ioussoupova, Uniform strict convexity of a cost
functional for three-dimensional inverse scattering problem, SIAM Journal on
Mathematical Analysis 26 (1) (1995) 147–179.
[10]
M. V. Klibanov, Global convexity in a three-dimensional inverse acoustic
Problem, SIAM Journal on Mathematical Analysis 28 (6) (1997) 1371–1388.
[11]
M. V. Klibanov, Global convexity in diffusion tomography, Nonlinear World 4
(1997) 247–265.
[12]
M. V. Klibanov, A. Timonov, Carleman Estimates for Coefficient Inverse
Problems and Numerical Applications, de Gruyter, Utrecht, 2004.
[13]
M. V. Klibanov, V. G. Kamburg, Globally strictly convex cost functional for an
inverse parabolic problem, Mathematical Methods in the Applied Sciences
39 (4) (2016) 930–940.
[14]
M. V. Klibanov, L. H. Nguyen, A. Sullivan, L. Nguyen, A globally convergent
numerical method for a 1-d inverse medium problem with experimental data,
Inverse Problems and Imaging 10 (4) (2016) 1057–1085.
[15]
M. V. Klibanov, N. T. Thành, Recovering dielectric constants of
explosives via a globally strictly convex cost functional, SIAM Journal on
Applied Mathematics 75 (2) (2015) 518–537.
[16]
G. Chavent, Nonlinear Least Squares for Inverse Problems - Theoretical
Foundations and Step-by-Step Guide for Applications, Springer, 2009.
[17]
A. Goncharsky, S. Romanov, Supercomputer technologies in inverse problems of
ultrasound tomography, Inverse Problems 29 (2013) 075004.
[18]
A. V. Goncharsky, S. Y. Romanov, Iterative methods for solving coefficient
inverse problems of wave tomography in models with attenuation, Inverse
Problems 33 (2) (2017) 025003.
[19]
J. A. Scales, M. L. Smith, T. L. Fischer, Global optimization methods for
multimodal inverse problems, Journal of Computational Physics 103 (2) (1992)
258–268.
[20]
A. Lakhal, KAIRUAIN-algorithm applied on electromagnetic imaging, Inverse
Problems 29 (2010) 095001.
[21]
A. Lakhal, A direct method for nonlinear ill-posed problems, Inverse
Problems, accepted for publication, available online at
http://iopscience.iop.org/article/10.1088/1361-6420/aa91e0/pdf.
[22]
M. V. Klibanov, N. A. Koshev, J. Li, A. G. Yagola, Numerical solution of an
ill-posed Cauchy problem for a quasilinear parabolic equation using a
Carleman weight function, Journal of Inverse and Ill-posed Problems 24
(2016) 761–776.
[23]
A. B. Bakushinskii, M. V. Klibanov, N. A. Koshev, Carleman weight functions
for a globally convergent numerical method for ill-posed Cauchy problems for
some quasilinear PDEs, Nonlinear Analysis: Real World Applications 34 (2017)
201–224.
[24]
M. V. Klibanov, Carleman weight functions for solving ill-posed Cauchy
problems for quasilinear PDEs, Inverse Problems 31 (12) (2015) 125007.
[25]
A. Bukhgeim, M. Klibanov, Uniqueness in the large of a class of
multidimensional inverse problems, Soviet Math. Doklady 17 (1981) 244–247.
[26]
M. V. Klibanov, Carleman estimates for global uniqueness, stability and
numerical methods for coefficient inverse problems, Journal of Inverse and
Ill-Posed Problems 21 (4) (2013) 477–560.
[27]
L. Baudouin, M. de Buhan, S. Ervedoza, Convergent algorithm based on Carleman estimates for the recovery of a potential in the wave equation, SIAM J. on Numerical Analysis 55 (2017) 1578–1613.
[28]
H. Ammari, J. Garnier, W. Jing, H. Kang, M. Lim, K. Solna, H. Wang,
Mathematical and statistical methods for multistatic imaging, Lecture Notes
in Mathematics 2098 (2013) 125–157.
[29]
H. Ammari, Y. Chow, J. Zou, The concept of heterogeneous scattering and its
applications in inverse medium scattering, SIAM J. Mathematical Analysis 46
(2014) 2905–2935.
[30]
H. Ammari, Y. Chow, J. Zou, Phased and phaseless domain reconstruction in
inverse scattering problem via scattering coefficients, SIAM J. Applied
Mathematics 76 (2016) 1000–1030.
[31]
G. Bao, P. Li, J. Lin, F. Triki, Inverse scattering problems with
multi-frequencies, Inverse Problems 31 (2015) 093001.
[32]
M. de Buhan, M. Kray, A new approach to solve the inverse scattering problem
for waves: combining the TRAC and the Adaptive Inversion methods, Inverse
Problems 29 (2013) 085009.
[33]
Y. T. Chow, J. Zou, A numerical method for reconstructing the coefficient in a
wave equation, Numerical Methods in Partial Differential Equations 31 (2015)
289–307.
[34]
Y. T. Chow, K. Ito, K. Liu, J. Zou, Direct sampling method in diffuse optical
tomography, SIAM J. Scientific Computing 37 (2015) A1658–A1684.
[35]
K. Ito, B. Jin, J. Zou, A direct sampling method for inverse electromagnetic
medium scattering, Inverse Problems 29 (9) (2013) 095018.
[36]
B. Jin, Z. Zhou, A finite element method with singularity reconstruction for
fractional boundary value problems, ESAIM: Mathematical Modelling and
Numerical Analysis 49 (2015) 1261–1283.
[37]
S. Kabanikhin, A. Satybaev, M. Shishlenin, Direct Methods of Solving
Multidimensional Inverse Hyperbolic Problem, VSP, 2004.
[38]
S. Kabanikhin, K. Sabelfeld, N. Novikov, M. Shishlenin, Numerical solution of
the multidimensional Gelfand-Levitan equation, J. Inverse and Ill-Posed
Problems 23 (2015) 439–450.
[39]
S. Kabanikhin, N. Novikov, I. Osedelets, M. Shishlenin, Fast Toeplitz linear
system inversion for solving two-dimensional acoustic inverse problem, J.
Inverse and Ill-Posed Problems 23 (2015) 687–700.
[40]
A. Lakhal, A decoupling-based imaging method for inverse medium scattering for
Maxwell’s equations, Inverse Problems 26 (2010) 015007.
[41]
J. Li, H. Liu, Q. Wang, Enhanced multilevel linear sampling methods for
inverse scattering problems, J. Comput. Phys. 257 (2014) 554–571.
[42]
J. Li, P. Li, H. Liu, X. Liu, Recovering multiscale buried anomalies in a
two-layered medium, Inverse Problems 31 (2015) 105006.
[43]
H. Liu, Y. Wang, C. Yang, Mathematical design of a novel gesture-based
instruction/input device using wave detection, SIAM J. Imaging Sci. 9 (2016)
822–841.
[44]
M. V. Klibanov, D.-L. Nguyen, L. H. Nguyen, A coefficient inverse problem with
a single measurement of phaseless scattering data, arXiv:1710.04804.
[45]
M. V. Klibanov, V. Romanov, Two reconstruction procedures for a 3-D phaseless
inverse scattering problem for the generalized Helmholtz equation, Inverse
Problems 32 (2016) 0150058.
[46]
V. Romanov, Inverse Problems of Mathematical Physics, VNU Science Press,
1987.
[47]
D. Gilbarg, N. Trudinger, Elliptic Partial Differential Equations of Second
Order, Springer, 1984.
[48]
V. Romanov, Inverse problems for differential equations with memory, Eurasian
J. of Mathematical and Computer Applications 2 (4) (2014) 51–80.
[49]
M. V. Klibanov, Carleman estimates for the regularization of ill-posed Cauchy
problems, Applied Numerical Mathematics 94 (2015) 46–74.
[50]
A. Tikhonov, A. Goncharsky, V. Stepanov, A. Yagola, Numerical Methods for the
Solution of Ill-Posed Problems, Kluwer, London, 1995.
[51]
N. T. Thành, L. Beilina, M. V. Klibanov, M. A. Fiddy, Imaging of buried
objects from experimental backscattering time-dependent measurements using a
globally convergent inverse algorithm, SIAM Journal on Imaging Sciences
8 (1) (2015) 757–786.
[52]
G. Vainikko, Fast solvers of the Lippmann-Schwinger equation, in: D. Newark
(Ed.), Direct and Inverse Problems of Mathematical Physics, Int. Soc. Anal.
Appl. Comput. 5, Kluwer, Dordrecht, 2000, p. 423.
[53]
A. Lechleiter, D.-L. Nguyen, A trigonometric Galerkin method for volume
integral equations arising in TM grating scattering, Advanced Computational
Mathematics 40 (2014) 1–25.
[54]
https://en.wikipedia.org/wiki/M14_mine.
[55]
L. Novotny, B. Hecht, Principles of Nano-Optics, 2nd Edition, Cambridge
University Press, Cambridge, 2012.
[56]
E. Burman, J. Ish-Horowicz, L. Oksanen, Fully discrete finite element data
assimilation method for the heat equation, arXiv:1707.06908.
[57]
M. Klibanov, F. Santosa, A computational quasi-reversibility method for Cauchy
problems for Laplace’s equation, SIAM J. Applied Mathematics 51 (1991)
1653–1675.
[58]
A. V. Kuzhuget, M. Klibanov, Global convergence for a 1-D inverse problem with
application to imaging of land mines, Applicable Analysis 89 (2010)
125–157.
Casimir effect in a wormhole spacetime
Artem R. Khabibullin${}^{a}$, Nail R. Khusnutdinov${}^{a}$, Sergey V. Sushkov${}^{b}$
e-mails: [email protected], [email protected], [email protected]
${}^{a}$Department of Physics,
${}^{b}$Department of Mathematics,
Kazan State Pedagogical University, Mezhlauk 1, Kazan 420021,
Russia
(November 25, 2020)
Abstract
We consider the Casimir effect for a quantized massive scalar field with non-conformal coupling $\xi$ in the spacetime of a wormhole whose throat is surrounded by a spherical shell. In the framework of the zeta-regularization approach, we calculate the zero-point energy of the scalar field. We find that, depending on the values of the coupling $\xi$, the mass of the field $m$, and/or the throat's radius $a$, the Casimir force may be attractive, repulsive, or even zero.
pacs: 04.62.+v, 04.70.Dy, 04.20.Gz
I Introduction
The central problem of wormhole physics is the fact that wormholes are accompanied by unavoidable violations of the null energy condition, i.e., the matter threading the wormhole's throat has to possess “exotic” properties. Classical matter does satisfy the usual energy conditions, hence wormholes cannot arise as solutions of classical relativity with classical matter. If they exist, they must belong to the realm of semiclassical or perhaps quantum gravity. In the absence of a complete theory of quantum gravity, the semiclassical approach plays the most important role in examining wormholes. Recently, self-consistent wormholes in semiclassical gravity were studied numerically in Refs. HocPopSus97 ; KhuSus02 ; Khu03 ; Gar05 . It was shown that the semiclassical Einstein equations provide for the existence of wormholes supported by the energy of vacuum fluctuations. However, it should be stressed that the natural size of semiclassical vacuum wormholes (say, the radius of the wormhole's throat $a$) should be of Planckian scale or less. This fact can be easily argued by simple dimensional considerations ForRom96 . In order to obtain semiclassical wormholes of scales larger than Planckian, one has to consider either non-vacuum states of quantized fields (say, thermal states with a temperature $T>0$) or a vacuum polarization (the Casimir effect) which may arise due to some external boundaries (with a typical scale $R$) existing in the wormhole spacetime. In both cases there appears an additional macroscopic dimensional parameter (say $R$) which may result in an enlargement of the wormhole's size.
In this paper we will study the Casimir effect in a wormhole spacetime. To this aim, we will consider a static spherically symmetric wormhole joining two different universes (asymptotically flat regions). We will also suppose that each universe contains a perfectly conducting spherical shell surrounding the throat. These shells will dictate the Dirichlet boundary conditions for a physical field and, as a result, produce a vacuum polarization. Note that this problem is closely related to the known problem investigated by Boyer Boy68 , who studied the Casimir effect of a perfectly conducting sphere in Minkowski spacetime (see also BorEliKirLes97 ). However, there is an essential difference, which is expressed in the different topologies of the wormhole and Minkowski spacetimes. A semitransparent sphere as well as semitransparent boundary conditions were investigated in Refs. BorVas99 ; Sca99 ; Sca00 ; BorVas04 ; GraJafKheQuaShrWei04 ; Mil04 . The consideration of a delta-like potential, which models a semitransparent boundary condition in quantum field theory, causes some problems, and there is an ambiguity in the renormalization procedure (see Refs. BorVas04 ; GraJafKheQuaShrWei04 ; Mil04 and references therein). Thermal corrections to the one-loop effective action on a singular potential background were considered recently in Ref. MckNay05 .
We will adopt a simple geometrical model of the wormhole spacetime: the short-throat flat-space wormhole which was suggested and exploited in Ref. KhuSus02 . The model represents two identical copies of Minkowski spacetime; from each copy a spherical region is excised, and then the boundaries of those regions are identified. The spacetime of the model is everywhere flat except at the throat, i.e., a two-dimensional singular spherical surface. We will assume that the wormhole's throat is surrounded by two perfectly conducting spherical shells (one in each copy of Minkowski spacetime) and calculate the zero-point energy of a massive scalar field on this background. At the end of the calculations, the radius of one sphere will tend to infinity, giving the Casimir energy for a single sphere. For the calculations we will use the zeta function regularization approach DowCri76 ; ZetaBook which was developed in Refs. Method ; BorEliKirLes97 ; KhuBor99 ; BezBezKhu01 ; BorMohMos01 . In the framework of this approach, the ground state energy of the scalar field $\phi$ is given by
$$E(s)=\frac{1}{2}\mu^{2s}\zeta_{\cal L}\left(s-\frac{1}{2}\right),$$
(1)
where
$$\zeta_{\cal L}(s)=\sum_{(n)}\left(\lambda_{(n)}^{2}+m^{2}\right)^{-s}$$
is the zeta function of the corresponding
Laplace operator. The parameter $\mu$, having the dimension of mass, restores the correct dimension of the regularized energy.
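Formally, at $s=0$ this expression reduces to the familiar mode sum of zero-point energies $\omega_{(n)}=\sqrt{\lambda_{(n)}^{2}+m^{2}}$,
$$E(0)=\frac{1}{2}\,\zeta_{\cal L}\left(-\frac{1}{2}\right)=\frac{1}{2}\sum_{(n)}\left(\lambda_{(n)}^{2}+m^{2}\right)^{1/2}=\frac{1}{2}\sum_{(n)}\omega_{(n)},$$
which is divergent; the regularization parameter $s$ makes the sum well defined before renormalization.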
The $\lambda_{(n)}^{2}$ are the eigenvalues of the three-dimensional operator ${\cal L}=\triangle-\xi\mathcal{R}$:
$$(\triangle-\xi\mathcal{R})\phi_{(n)}=\lambda_{(n)}^{2}\phi_{(n)},$$
(2)
where $\mathcal{R}$ is the curvature scalar (which is singular in the framework of our model; see Eq. (6)).
The expression (1) is divergent in the limit $s\to 0$, which is the limit we are interested in. For renormalization, we subtract from (1) its divergent part:
$$E^{\rm ren}=\lim_{s\to 0}\left(E(s)-E^{\rm div}(s)\right),$$
(3)
where
$$E^{\rm div}(s)=\lim_{m\to\infty}E(s).$$
Since the heat kernel expansion of the zeta function is an asymptotic expansion for large mass, the divergent part has the following form (in $3+1$ dimensions):
$$E^{\rm div}(s)=\frac{1}{2}\left(\frac{\mu}{m}\right)^{2s}\frac{1}{(4\pi)^{3/2}\Gamma(s-\frac{1}{2})}\left\{B_{0}m^{4}\Gamma(s-2)+B_{1/2}m^{3}\Gamma(s-\tfrac{3}{2})+B_{1}m^{2}\Gamma(s-1)+B_{3/2}m\Gamma(s-\tfrac{1}{2})+B_{2}\Gamma(s)\right\},$$
(4)
where $B_{\alpha}$ are the heat kernel coefficients of operator
${\cal L}$. In the case of a singular potential (singular scalar curvature), one has to use the specific formulae from Refs. BorVas99 ; GilKirVas01 for calculating the heat kernel coefficients (see also the recent review Vas02 ).
Finally, the renormalized ground state energy 3 should
obey the normalization condition
$$\lim_{m\to\infty}E^{\rm ren}=0.$$
For more details of approach see review
BorMohMos01 .
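As a toy illustration of the zeta-function prescription (not part of the wormhole model below): for a massless field on an interval of length $L$ the eigenfrequencies are $\omega_{n}=\pi n/L$, and the regularized zero-point sum $\frac{1}{2}\sum_{n}\omega_{n}=(\pi/2L)\,\zeta(-1)$ is finite without any further subtraction. A sketch with sympy (the 1D interval setup is an illustrative assumption only):

```python
from sympy import zeta, pi, symbols

L = symbols('L', positive=True)
# zeta-regularized zero-point sum: (1/2) * sum_n (pi*n/L) -> (pi/(2L)) * zeta(-1)
E = pi / (2 * L) * zeta(-1)
print(E)  # -pi/(24*L), the standard 1D Casimir energy
```

In the wormhole problem the analytic continuation is not available in closed form, which is why the contour representation 10 below is used instead.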
The organization of the paper is the following. In Sec.
II we describe the wormhole spacetime in the
short-throat flat-space approximation. In Sec. III we
analyze the solutions of the equation of motion for a massive scalar
field and obtain a closed expression for the zero point energy.
In Sec. IV we discuss the obtained results and make some
speculations.
We use units $\hbar=c=G=1$. The signature of the spacetime and
the signs of the Riemann and Ricci tensors are the same as in the
book by Hawking and Ellis HawEllBook .
II The geometry of the model
We will take the metric of a static spherically symmetric wormhole in
the simple form:
$$ds^{2}=-dt^{2}+d\rho^{2}+r^{2}(\rho)(d\theta^{2}+\sin^{2}\theta d\varphi^{2}),$$
(5)
where $\rho$ is the proper radial distance, $\rho\in(-\infty,\infty)$. The function $r(\rho)$ describes the profile of the
throat. In this paper we adopt the model suggested in Ref.
KhuSus02 , which was called there the short-throat flat-space
approximation. In the framework of this model the shape function
$r(\rho)$ is
$$r(\rho)=|\rho\,|+a,$$
with $a>0$. The function $r(\rho)$ is always positive and attains its minimum at
$\rho=0$: $r(0)=a$, where $a$ is the radius of the throat. It is easy
to see that in two regions ${\cal D}_{+}\!:\,\rho>0$ and ${\cal D}_{-}\!:\,\rho<0$ one can introduce new radial coordinates
$r_{\pm}=\pm\rho+a$, respectively, and rewrite the metric
5 in the usual spherical coordinates:
$$ds^{2}=-dt^{2}+dr_{\pm}^{2}+r_{\pm}^{2}(d\theta^{2}+\sin^{2}\theta\,d\varphi^{2}).$$
This form of the metric explicitly shows that the regions
${\cal D}_{+}$ and ${\cal D}_{-}$ are flat. However, note that
the change of coordinates $r_{\pm}=\pm\rho+a$ is not global, because
it is ill defined at the throat $\rho=0$. Hence, as expected,
the spacetime is curved at the wormhole throat. To illustrate this
we calculate the Ricci tensor in the metric 5:
$$\mathcal{R}^{\rho}_{\rho}=-\frac{2r^{\prime\prime}}{r}=-4\frac{\delta(\rho)}{a},\qquad\mathcal{R}^{\theta}_{\theta}=\mathcal{R}^{\varphi}_{\varphi}=-\frac{-1+r^{\prime 2}+rr^{\prime\prime}}{r^{2}}=-2\frac{\delta(\rho)}{a},$$
(6)
$$\mathcal{R}=-\frac{2(-1+r^{\prime 2}+2rr^{\prime\prime})}{r^{2}}=-8\frac{\delta(\rho)}{a}.$$
The energy-momentum tensor corresponding to this metric is
diagonal, and from it we observe that the source of this metric
possesses the following energy density and pressures:
$$\varepsilon=-\frac{-1+r^{\prime 2}+2rr^{\prime\prime}}{8\pi r^{2}}=-\frac{\delta(\rho)}{2\pi a},\qquad p_{\rho}=\frac{-1+r^{\prime 2}}{8\pi r^{2}}=0,\qquad p_{\theta}=p_{\varphi}=\frac{r^{\prime\prime}}{8\pi r}=\frac{\delta(\rho)}{4\pi a}.$$
III Zero point energy
Let us now consider a scalar field $\phi$ in the spacetime with
the metric 5. The eigenvalue equation for the operator
${\cal L}$ is
$$(\triangle-\xi{\cal R})\phi_{(n)}=\lambda_{(n)}^{2}\phi_{(n)},$$
(7)
where ${\cal R}$ is the scalar curvature, $\xi$ is an arbitrary
coupling to ${\cal R}$, and $\triangle=g^{\alpha\beta}\nabla_{\alpha}\nabla_{\beta}$, $\alpha=1,2,3$. Due to
the spherical symmetry of the spacetime 5, a general
solution of equation 7 can be sought in the
following form:
$$\phi(\rho,\theta,\varphi)=u(\rho)Y_{ln}(\theta,\varphi),$$
where the $Y_{ln}(\theta,\varphi)$ are spherical harmonics,
$l=0,1,2,\dots$, $n=0,\pm 1,\pm 2,\dots,\pm l$, and the function
$u(\rho)$ obeys the radial equation
$$u^{\prime\prime}+2\frac{r^{\prime}}{r}u^{\prime}+\left(\lambda^{2}-\frac{l(l+1)}{r^{2}}-\xi{\cal R}\right)u=0,$$
(8)
where a prime denotes the derivative with respect to $\rho$,
$\lambda=\sqrt{\omega^{2}-m^{2}}$, and the scalar curvature is ${\cal R}=-8\delta(\rho)/a$. In terms of the new function $w=ur$ this equation reads
$$w^{\prime\prime}+\left(\lambda^{2}-\frac{l(l+1)}{r^{2}}-\xi{\cal R}-\frac{r^{\prime\prime}}{r}\right)w=0,$$
and has the form of the Schrödinger equation for a particle
of mass $M$ with total energy $E=\lambda^{2}/2M$ and potential
energy
$$U=\left(\xi{\cal R}+\frac{r^{\prime\prime}}{r}\right)\Big/2M=\frac{1-4\xi}{aM}\delta(\rho).$$
(9)
Therefore, $\xi>1/4$ corresponds to a negative (attractive) potential.
Unfortunately, in our case it is impossible to find the spectrum of
the operator ${\cal L}$ given by Eq.
7 in explicit form. For this reason, we will use the approach developed
in Refs.
Method ; BorEliKirLes97 ; KhuBor99 ; BorMohMos01 ; BezBezKhu01 .
This approach does not require an explicit form of the spectrum. The
spectrum of an operator is usually found from some boundary
conditions, which take the form of an equation $\Psi(\lambda)=0$, where the
function $\Psi$ is built from the solutions of Eq. 8
and depends additionally on the other parameters of the problem. It was
shown in Refs.
Method ; BorEliKirLes97 ; KhuBor99 ; BorMohMos01 ; BezBezKhu01 that
the zero point energy may be represented in the following form:
$$E(s)=-\mu^{2s}\frac{\cos(\pi s)}{2\pi}\sum_{(n)}d_{n}\int_{m}^{\infty}dk(k^{2}-m^{2})^{1/2-s}\frac{\partial}{\partial k}\ln\Psi(ik),$$
(10)
with the function $\Psi$ taken on the imaginary axis. The sum
runs over all quantum numbers of the problem, and $d_{n}$ is the degeneracy of the state
(for the spherically symmetric case $(n)=l$ and $d_{n}=2l+1=2\nu$). This formula takes possible
bound states into account, too. If they exist we would have to include them
additively at the outset in Eq. 1, but the
integration over the interval $|k|<m$ (the domain where possible bound states
live) cancels this contribution. For this
reason the integration in formula 10 starts
from the energy $k=m$. Therefore, hereinafter we consider the
solutions of Eq. 8 for negative energy, that is, on the
imaginary axis $\lambda=ik$. The main problem is now reduced to
finding the function $\Psi$; thus, no explicit form of the
spectrum of the operator ${\cal L}$ is needed.
In the flat regions ${\cal D}_{\pm}$, where $r(\rho)=\pm\rho+a$,
$r^{\prime}(\rho)=\pm 1$, and ${\cal R}(\rho)=0$, on the imaginary axis
Eq. 8 reads
$$u^{\prime\prime}+\frac{2}{\rho\pm a}u^{\prime}-\left(k^{2}+\frac{l(l+1)}{(\rho\pm a)^{2}}\right)u=0.$$
(11)
A general solution of this equation can be written as
$$u^{\pm}[k(a\pm\rho)]=A^{\pm}\sqrt{\frac{\pi}{2k(a\pm\rho)}}I_{\nu}[k(a\pm\rho)]+B^{\pm}\sqrt{\frac{\pi}{2k(a\pm\rho)}}K_{\nu}[k(a\pm\rho)],$$
(12)
where $I_{\nu},K_{\nu}$ are the modified Bessel functions of the first and second kind, $\nu=l+1/2$, and $A^{\pm}$, $B^{\pm}$ are four arbitrary constants.
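That (12) indeed solves (11) can be checked numerically; below the $K_{\nu}$ branch in ${\cal D}_{+}$ is substituted into the radial equation and the residual is evaluated by finite differences (the values of $k$, $a$, $l$ and the sample point are arbitrary illustrative choices):

```python
import numpy as np
from scipy.special import kv

k, a, l = 1.3, 1.0, 2             # sample parameters (illustrative)
nu = l + 0.5

def u(rho):                        # K-branch of Eq. (12) in D_+ (A+ = 0, B+ = 1)
    x = k * (a + rho)
    return np.sqrt(np.pi / (2.0 * x)) * kv(nu, x)

rho, h = 0.7, 1e-4                 # sample point and finite-difference step
up  = (u(rho + h) - u(rho - h)) / (2 * h)
upp = (u(rho + h) - 2 * u(rho) + u(rho - h)) / h**2
r = a + rho
residual = upp + 2 / r * up - (k**2 + l * (l + 1) / r**2) * u(rho)
print(abs(residual))               # small, at the level of finite-difference error
```

The same check with the $I_{\nu}$ branch, or in ${\cal D}_{-}$, works identically since $\sqrt{\pi/(2x)}\,I_{\nu}(x)$ and $\sqrt{\pi/(2x)}\,K_{\nu}(x)$ are the two spherical modified Bessel functions.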
The solutions $u^{\pm}[k(a\pm\rho)]$ have been obtained separately in the
flat regions ${\cal D}_{\pm}$. To find a solution in the
whole spacetime we must impose matching conditions on them at the throat $\rho=0$. The first condition requires
that the solution be continuous at $\rho=0$. This gives
$$u^{-}[ka]=u^{+}[ka].$$
(13a)
To obtain the second condition we integrate Eq. 8
over the interval $(-\epsilon,\epsilon)$ and then take the
limit $\epsilon\to 0$. This gives the second condition
$$\left.-\frac{du^{-}[x]}{dx}\right|_{x=ka}=\left.\frac{du^{+}[x]}{dx}\right|_{x=ka}+\frac{8\xi}{ka}u^{+}[ka].$$
(13b)
Therefore, the general solution of Eq. 11 depends
on only two constants; the two other constants may be found from Eqs.
13a and 13b.
In addition to the two matching conditions 13a and 13b
we impose two boundary conditions. We surround the wormhole throat by a
sphere of radius $a+R$ ($\rho=R$) in the region ${\cal D}_{+}$, and by a
sphere of radius $a+R^{\prime}$ ($\rho=-R^{\prime}$) in the region ${\cal D}_{-}$.
The wormhole space is thereby divided by the two spheres into three
regions: the finite-volume space between the spheres and two
infinite-volume spaces outside them. We suppose that the scalar
field obeys the Dirichlet boundary condition on both spheres,
which corresponds to perfectly conducting spheres:
$$u^{-}[k(R^{\prime}+a)]=0,$$
(13c)
$$u^{+}[k(R+a)]=0.$$
(13d)
The four conditions 13 represent a homogeneous
system of linear algebraic equations for the four coefficients
$A^{\pm}$, $B^{\pm}$. As is known, such a system has a nontrivial
solution if and only if the matrix of coefficients is degenerate.
Hence we get
$$\left|\begin{array}{cccc}-I_{\nu}[ka]&-K_{\nu}[ka]&I_{\nu}[ka]&K_{\nu}[ka]\\
I^{\prime}_{\nu}[ka]+\frac{16\xi-1}{2ka}I_{\nu}[ka]&K^{\prime}_{\nu}[ka]+\frac{16\xi-1}{2ka}K_{\nu}[ka]&I^{\prime}_{\nu}[ka]-\frac{1}{2ka}I_{\nu}[ka]&K^{\prime}_{\nu}[ka]-\frac{1}{2ka}K_{\nu}[ka]\\
I_{\nu}[k(a+R)]&K_{\nu}[k(a+R)]&0&0\\
0&0&I_{\nu}[k(a+R^{\prime})]&K_{\nu}[k(a+R^{\prime})]\end{array}\right|=0.$$
(14)
After some algebra the above formula can be reduced to the
following relation for the function $\Psi$, which we need for the
calculation of the energy 10:
$$\Psi_{in}=I_{\nu}[k(a+R^{\prime})]\left(\Psi^{*}\left[\left(\xi-\frac{1}{8}\right)K_{\nu}[ka]+\frac{ka}{4}K^{\prime}_{\nu}[ka]\right]-\frac{1}{8}K_{\nu}[k(a+R)]\right)-K_{\nu}[k(a+R^{\prime})]\left(\Psi^{*}\left[\left(\xi-\frac{1}{8}\right)I_{\nu}[ka]+\frac{ka}{4}I^{\prime}_{\nu}[ka]\right]-\frac{1}{8}I_{\nu}[k(a+R)]\right)=0,$$
with
$$\Psi^{*}=I_{\nu}[k(a+R)]K_{\nu}[ka]-K_{\nu}[k(a+R)]I_{\nu}[ka].$$
(15)
In the case $R^{\prime}=R$ the above expression coincides with that obtained
in Ref. KhuSus02 . In this case $\Psi_{in}$ may be
factorized as $\Psi_{in}=\Psi^{1}_{l}\Psi^{2}_{l}$, where
$$\Psi^{1}_{l}=\Psi^{*}=I_{\nu}[k(a+R)]K_{\nu}[ka]-K_{\nu}[k(a+R)]I_{\nu}[ka],$$
$$\Psi^{2}_{l}=\left(\xi-\frac{1}{8}\right)\Psi^{*}+\frac{ka}{4}\left[I_{\nu}[k(a+R)]K^{\prime}_{\nu}[ka]-K_{\nu}[k(a+R)]I^{\prime}_{\nu}[ka]\right].$$
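The factorization $\Psi_{in}=\Psi^{1}_{l}\Psi^{2}_{l}$ at $R^{\prime}=R$ can be verified numerically with scipy's modified Bessel functions (all parameter values below are arbitrary illustrative choices):

```python
import numpy as np
from scipy.special import iv, kv, ivp, kvp

k, a, R, xi, l = 0.8, 1.0, 2.0, 0.1, 1   # sample parameters (illustrative)
nu = l + 0.5
Ia, Ka = iv(nu, k * a), kv(nu, k * a)
Ipa, Kpa = ivp(nu, k * a), kvp(nu, k * a)     # first derivatives
IR, KR = iv(nu, k * (a + R)), kv(nu, k * (a + R))

Pstar = IR * Ka - KR * Ia
# Psi_in from the determinant (14), specialized to R' = R
Psi_in = (IR * (Pstar * ((xi - 0.125) * Ka + 0.25 * k * a * Kpa) - KR / 8)
          - KR * (Pstar * ((xi - 0.125) * Ia + 0.25 * k * a * Ipa) - IR / 8))
Psi1 = Pstar
Psi2 = (xi - 0.125) * Pstar + 0.25 * k * a * (IR * Kpa - KR * Ipa)
print(abs(Psi_in - Psi1 * Psi2))  # ~ machine precision: factorization holds
```

The agreement is exact up to rounding because at $R^{\prime}=R$ the two $1/8$ terms combine into $-(1/8)(I_{\nu}K_{\nu}-K_{\nu}I_{\nu})=0$ at the same argument.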
The solutions of Eq. 15 give the energy spectrum
between the two spheres.
The spectra for the regions outside these spheres can be found as
follows:
$$\displaystyle\Psi_{out}^{1}=K_{\nu}[k(a+R)],$$
(15b)
$$\displaystyle\Psi_{out}^{2}=K_{\nu}[k(a+R^{\prime})].$$
(15c)
Indeed, let us consider the energy spectrum of the field in the space
between two spheres with radii $R$ and $\widetilde{R}>R$ and
Dirichlet boundary conditions on them. The solution is a linear
combination of two modified Bessel functions,
combination of two modified Bessel functions
$$u_{R\widetilde{R}}=C_{1}I_{\nu}[k\rho]+C_{2}K_{\nu}[k\rho].$$
The Dirichlet boundary conditions give two equations
$$C_{1}I_{\nu}[k(a+R)]+C_{2}K_{\nu}[k(a+R)]=0,\qquad C_{1}I_{\nu}[k(a+\widetilde{R})]+C_{2}K_{\nu}[k(a+\widetilde{R})]=0.$$
Using these equations we may represent the solution in the
following form:
$$u_{R\widetilde{R}}=\frac{C_{1}}{K_{\nu}[k(a+R)]}\left\{I_{\nu}[k\rho]\frac{K_{\nu}[k(a+\widetilde{R})]}{I_{\nu}[k(a+\widetilde{R})]}-K_{\nu}[k\rho]\right\}.$$
Let us now assume that $\widetilde{R}\to\infty$.
In this limit the solution takes the following form:
$$u_{R\infty}=CK_{\nu}[k\rho].$$
The Dirichlet boundary condition for this solution on the sphere
of radius $R$ gives equation 15b. As expected, this
condition coincides with the expression for the space outside a sphere of
radius $a+R$ in Minkowski spacetime BorEliKirLes97 . This is
obvious because the spacetime outside the sphere (in general, outside the
throat) is exactly Minkowski spacetime.
Therefore the regularized total energy 10 reads
$$E(s)=-\mu^{2s}\frac{\cos(\pi s)}{\pi}\sum_{l=0}^{\infty}\nu\int_{m}^{\infty}dk(k^{2}-m^{2})^{1/2-s}\frac{\partial}{\partial k}\left[\ln\Psi_{in}+\ln\Psi_{out}^{1}+\ln\Psi_{out}^{2}\right].$$
(16)
Regrouping terms, we can rewrite the above formula in a form
in which each term has a clear physical meaning:
$$E(s)=\triangle E(s)+E_{R}^{M}(s)+E_{R^{\prime}}^{M}(s),$$
(17)
where
$$E_{R}^{M}(s)=-\mu^{2s}\frac{\cos(\pi s)}{\pi}\sum_{l=0}^{\infty}\nu\int_{m}^{\infty}dk(k^{2}-m^{2})^{1/2-s}\frac{\partial}{\partial k}\ln I_{\nu}[k(a+R)]K_{\nu}[k(a+R)],$$
(18)
$$E_{R^{\prime}}^{M}(s)=-\mu^{2s}\frac{\cos(\pi s)}{\pi}\sum_{l=0}^{\infty}\nu\int_{m}^{\infty}dk(k^{2}-m^{2})^{1/2-s}\frac{\partial}{\partial k}\ln I_{\nu}[k(a+R^{\prime})]K_{\nu}[k(a+R^{\prime})],$$
(19)
$$\triangle E(s)=-\mu^{2s}\frac{\cos(\pi s)}{\pi}\sum_{l=0}^{\infty}\nu\int_{m}^{\infty}dk(k^{2}-m^{2})^{1/2-s}\frac{\partial}{\partial k}\ln\Psi$$
(20)
and
$$\Psi=\frac{\Psi_{in}}{I_{\nu}[k(a+R^{\prime})]I_{\nu}[k(a+R)]}.$$
The term $E_{R}^{M}(s)$ in formula 17 is nothing but the zero
point energy of a sphere of radius $a+R$ in Minkowski spacetime with the
Dirichlet boundary condition on the sphere BorEliKirLes97 ;
the term $E_{R^{\prime}}^{M}(s)$ has an analogous meaning.
Now we are ready to calculate the Casimir energy for the two spherical
boundaries by using expression 16 and Eq. 3. Next,
let us consider Boyer's problem via the following gedanken
experiment: we take a single conducting sphere and measure the
Casimir force in this situation. To this end we have to take the
limit $R^{\prime}\to\infty$. In this case the energy 19 tends to
zero, and the term $\triangle E(s)$ in Eq. 20
represents the difference between the Casimir energies of a sphere
surrounding the wormhole and a sphere of the same radius in Minkowski
spacetime without a wormhole. In the limit $R^{\prime}\to\infty$ we find
$$\Psi=\left(K_{\nu}[ka]-I_{\nu}[ka]\frac{K_{\nu}[k(a+R)]}{I_{\nu}[k(a+R)]}\right)\left(\left(\xi-\frac{1}{8}\right)K_{\nu}[ka]+\frac{ka}{4}K^{\prime}_{\nu}[ka]\right)-\frac{1}{8}\frac{K_{\nu}[k(a+R)]}{I_{\nu}[k(a+R)]}.$$
(21)
If one then takes $R\to\infty$, the energy $E^{M}_{R}$ tends to zero
and
$$\Psi\to K_{\nu}[ka]\left(\left(\xi-\frac{1}{8}\right)K_{\nu}[ka]+\frac{ka}{4}K^{\prime}_{\nu}[ka]\right).$$
(22)
This expression coincides exactly with that obtained in Ref.
KhuSus02 and describes the zero point energy of the whole
wormhole spacetime without any additional spherical shells.
A comment is in order. As already noted, a coupling with $\xi>1/4$
corresponds to an attractive potential, and therefore bound
states may appear. The appearance of bound states for a
delta-like potential has been observed in Ref. MamTru82 .
Thus, in principle we would have to take the bound states into account from the
beginning. Nevertheless, the final formula 16 already contains
these bound states, as was noted in Ref. Method . It
is necessary to note, however, that in this paper we will consider $\xi<1/4$. Indeed, let us consider for example $l=0$. In this case
$$\Psi=\frac{\pi}{8}e^{-ka}\cosh(k(a+R))\left\{\cosh(kR)+\left[2\frac{1-4\xi}{ka}+1\right]\sinh(kR)\right\}.$$
For $\xi>1/4$ this expression may vanish for some values
of $k>m$, $R$ and $a$, and the integral 16 will then be divergent.
As noted in Ref. MamTru82 , in this case we cannot use the
present theory. The same bound on $\xi$ was noted in Ref.
KhuSus02 . This statement is easy to see from the expression for the
potential energy given by Eq. 9: for $\xi>1/4$ the
potential is negative and bound states may appear.
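The presence of a zero of the bracketed factor for $\xi>1/4$, and its absence for $\xi<1/4$, is easy to illustrate numerically (the geometry $a=R=1$ and the sample values of $\xi$ are arbitrary choices; the positive prefactor of $\Psi$ never vanishes):

```python
import numpy as np
from scipy.optimize import brentq

a = R = 1.0                                # sample geometry (assumption)

def f(k, xi):
    # bracketed factor of the l = 0 expression for Psi
    return np.cosh(k * R) + (2 * (1 - 4 * xi) / (k * a) + 1) * np.sinh(k * R)

print(f(0.1, 0.0) > 0, f(5.0, 0.0) > 0)    # no sign change for xi = 0 < 1/4
print(f(0.1, 0.5) < 0, f(5.0, 0.5) > 0)    # sign change for xi = 1/2 > 1/4
k0 = brentq(f, 0.1, 5.0, args=(0.5,))      # locate the zero for xi = 1/2
print(k0)                                  # this zero makes ln Psi in (16) singular
```

For $\xi<1/4$ both terms of $f$ are positive for all $k>0$, so no zero can occur.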
The general strategy of the subsequent calculations is the following
(for more details see Refs.
Method ; BorEliKirLes97 ; KhuBor99 ; BorMohMos01 ; BezBezKhu01 ). To
single out the divergent part of the regularized
energy in explicit form, we subtract from and add to the integrand in Eq. 16 its
uniform expansion in $1/\nu$. It is clearly sufficient
to subtract the expansion up to order $1/\nu^{2}$: the next term already gives a
convergent series. We may set $s=0$ in the part from which the
uniform expansion has been subtracted, because it is now finite (see Eq.
26). The singled-out divergent part contains the standard
divergent terms given by Eq. I as well as some finite terms
which we calculate in explicit form (all terms except $A$ in
23).
The uniform asymptotic expansions of both 21 and
22 are the same for $R\not=0$. Indeed, in this case
the ratios
$$\frac{I_{\nu}[ka]}{K_{\nu}[ka]}\frac{K_{\nu}[k(a+R)]}{I_{\nu}[k(a+R)]}\approx e^{-2\nu\ln(1+\frac{R}{a})},\qquad\frac{1}{K^{2}_{\nu}[ka]}\frac{K_{\nu}[k(a+R)]}{I_{\nu}[k(a+R)]}\approx 2\nu e^{-2\nu\ln(1+\frac{R}{a})}$$
are exponentially small and may be neglected. (The well-known
uniform expansions of the Bessel functions AbrSte were used in
these expressions.) For this reason we may disregard this fraction
in Eq. 21 and arrive at Eq. 22. This is a key
observation for the subsequent calculations: thanks to it, the
divergent part which has to be subtracted from
20 for renormalization has already been calculated in Ref. KhuSus02 .
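The exponential smallness of the neglected ratio can be checked directly against its leading uniform-asymptotic estimate (sample values $k=a=R=1$ are assumptions for illustration):

```python
import numpy as np
from scipy.special import iv, kv

k, a, R = 1.0, 1.0, 1.0                    # sample values (assumptions)
for l in (10, 20, 40):
    nu = l + 0.5
    ratio = (iv(nu, k * a) * kv(nu, k * (a + R))
             / (kv(nu, k * a) * iv(nu, k * (a + R))))
    estimate = np.exp(-2 * nu * np.log(1 + R / a))
    print(l, ratio, estimate)              # both decay like (1 + R/a)^(-2 nu)
```

Already at moderate $l$ the ratio is many orders of magnitude below unity, which justifies dropping it from the uniform expansion.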
Using the results of that paper, we may write out the expression
for the renormalized zero point energy:
$$\triangle E=-\frac{1}{32\pi^{2}a}\left(b\ln\beta^{2}+\Omega\right),$$
(23)
$$\Omega=A+\sum_{k=-1}^{3}\omega_{k}(\beta),$$
(24)
$$b=\frac{1}{2}b_{0}\beta^{4}-b_{1}\beta^{2}+b_{2},$$
(25)
where
$$A=32\pi\sum_{l=0}^{\infty}\nu^{2}\int_{\beta/\nu}^{\infty}dy\sqrt{y^{2}-\frac{\beta^{2}}{\nu^{2}}}\frac{\partial}{\partial y}\left(\ln\Psi+2\nu\eta(y)+\frac{1}{\nu}N_{1}-\frac{1}{\nu^{2}}N_{2}+\frac{1}{\nu^{3}}N_{3}\right),$$
(26)
$$\Psi=\left(K_{\nu}[\nu y]-I_{\nu}[\nu y]\frac{K_{\nu}[\nu y(1+x)]}{I_{\nu}[\nu y(1+x)]}\right)\left(\left(\xi-\frac{1}{8}\right)K_{\nu}[\nu y]+\frac{\nu y}{4}K^{\prime}_{\nu}[\nu y]\right)-\frac{1}{8}\frac{K_{\nu}[\nu y(1+x)]}{I_{\nu}[\nu y(1+x)]},$$
(27)
where the $b_{k}$ are the heat kernel coefficients, $\beta=ma$ is a
dimensionless mass parameter, and $x=R/a$ is the dimensionless
radius of the sphere. The explicit form of the heat kernel
coefficients $b_{k}$, as well as the expressions for $\omega_{k}$, $N_{k}$ and $\eta$, are given in Ref. KhuSus02 . Note that they do not
depend on the radius of the sphere $R$. The only dependence on $R$ is
contained in the coefficient $A$, which has to be calculated
numerically. The expression for the contribution 18 of the sphere in
Minkowski spacetime may be found in Ref.
BorEliKirLes97 ; we only have to make the change $R\to a+R$.
IV Discussion and conclusion
In this section we discuss the results of numerical calculations
of the zero-point energy given by formula 23. The renormalized
zero-point energy is shown in Figs. 1 and
2 as a function of $x=R/a$ for various values of $\beta=ma$ and $\xi$.
(The value $x=R/a$ characterizes the position of the sphere
surrounding the wormhole; $x=0$ corresponds to a sphere radius equal
to the throat radius.) In Fig. 1 we show only the full
energy $E$; note that $\triangle E$ differs only slightly from
the full energy $E$. For the same reason we reproduce only
$\triangle E$ in Fig. 2.
Characterizing the results of the calculations, we should first of all
stress that in the limit $R\to\infty$ the zero point energy $E_{ren}$
tends to the constant value obtained in Ref.
KhuSus02 for the wormhole spacetime without any
spherical shells. In the limit $R\to 0$ (i.e., when the sphere
radius $a+R$ tends to the throat radius $a$) the zero-point
energy $E_{ren}$ decreases without bound for all $\beta$ and
$\xi$. This means that the Casimir force acting on the spherical
shell, corresponding to the Casimir zero point energy $E_{ren}$,
is "attractive", i.e., directed inward to the wormhole's
throat, for sufficiently small values of $R$. In the interval
$0<R/a<\infty$ there are three qualitatively different types of
behavior of $E_{ren}$, depending on the values of $\beta$ and $\xi$:
(i) The zero point energy $E_{ren}$ increases monotonically
in the whole interval $0<R/a<\infty$. There are neither
maxima nor minima in this case, so the Casimir force is
attractive for all positions of the spherical shell. (ii)
$E_{ren}$ first increases and then decreases. The graph of the
zero point energy has the form of a barrier with a maximum value
of $E_{ren}$ at $R_{1}/a$. The Casimir force is attractive for
sphere radius $R<R_{1}$ and repulsive for $R>R_{1}$; the value
$R=R_{1}$ corresponds to a point of unstable equilibrium. (iii)
The zero point energy $E_{ren}$ increases for $R/a<R_{1}/a$,
decreases for $R_{1}/a<R/a<R_{2}/a$ and then finally increases
for $R/a>R_{2}/a$, so that the graph of $E_{ren}$ has both a maximum and a
minimum. In this case the Casimir force is directed outward
for sphere radius $R_{1}<R<R_{2}$, and inward for
$R<R_{1}$ or $R>R_{2}$. Now the value $R=R_{2}$ corresponds to a
point of stable equilibrium, since the zero point energy $E_{ren}$
has a local minimum there.
It is worth noting that the Casimir force is attractive in the
whole interval $0<R/a<\infty$ for sufficiently small values of
$\xi$ and/or large values of $\beta$. Otherwise, it can be either
attractive or repulsive depending on the radius of the sphere surrounding
the wormhole's throat. A similar situation appears for a
delta-like potential on spherical or cylindrical
boundaries Sca99 ; Sca00 . A repulsive Casimir force was
also observed in Ref. HerSam05 for a scalar field living in
the Einstein Static Universe.
The considered model allows us to speculate in the spirit of Casimir's idea:
he suggested a model of the electron as a charged spherical shell
Cas56 . Casimir assumed that such a configuration should be
stable due to equilibrium between the repulsive Coulomb force and
an attractive Casimir force. However, as is known, this idea does
not work in Minkowski spacetime, since the Casimir force for a sphere
turns out to be repulsive Boy68 . One can now revive
Casimir's idea by considering a spherical shell surrounding the
wormhole. In this paper we have shown that the Casimir force can then
be both attractive and repulsive. Moreover, there exist
stable configurations for which the Casimir force equals zero;
the radius of the spherical shell in this case depends on the throat
radius $a$ as well as on the field's mass $m$ and coupling constant
$\xi$. Thus, one may try to realize Casimir's idea by taking a
sphere surrounding a wormhole. Of course, our consideration was based
on a very simple model of the wormhole spacetime. However, we
believe that the main features of the above consideration remain the same
for more realistic models.
Acknowledgements.
The work was supported in part by the Russian Foundation for Basic
Research, grant No. 05-02-17344.
References
(1)
M. Abramowitz and I.A. Stegun, Handbook of Mathematical
Functions, (US National Bureau of Standards, Washington, 1964)
(2)
E.R. Bezerra de Mello, V.B. Bezerra and N.R. Khusnutdinov,
J. Math. Phys. 42, 562 (2001)
(3)
M. Bordag, J. Phys. A 28, 755 (1995); M. Bordag and K.
Kirsten, Phys. Rev. D 53, 5753 (1996); M. Bordag, K.
Kirsten, and E. Elizalde, J. Math. Phys. 37, 895 (1996); M.
Bordag, K. Kirsten, and D. Vassilevich, Phys. Rev. D 59,
085011 (1999).
(4)
M. Bordag, E. Elizalde, K. Kirsten,
and S. Leseduarte, Phys. Rev. D 56, 4896 (1997)
(5)
M. Bordag and D.V. Vassilevich, J. Phys. A
32, 8247 (1999)
(6)
M. Bordag, U. Mohideen, and V.M. Mostepanenko, Phys. Rep.
353, 1 (2001)
(7)
M. Bordag and D.V. Vassilevich, Phys. Rev. D
70, 045003 (2004)
(8)
T.H. Boyer, Phys. Rev. 174, 1764 (1968)
(9)
H.B.G. Casimir, Physica 19, 846 (1956)
(10)
J. S. Dowker and R. Critchley, Phys. Rev. D 13, 3224
(1976); S.W. Hawking, Commun. Math. Phys. 55, 133 (1977); S.
K. Blau, M. Visser, and A. Wipf, Nucl. Phys. B 310, 163
(1988)
(11)
E. Elizalde, S. D. Odintsov, A. Romeo, A. A. Bytsenko, and S.
Zerbini, Zeta Regularization Techniques with Applications,
(World Scientific, Singapore, 1994)
(12)
L.H. Ford, T. A. Roman, Phys. Rev. D 53,
5496 (1996); K.K. Nandi, Y.Z. Zhang and Kumar K.B. Vijaya Phys.
Rev. D 70, 064018 (2004)
(13)
R. Garattini, Class. Quant. Grav., 22 1105
(2005)
(14)
P.B. Gilkey, K. Kirsten, and D.V.
Vassilevich, Nucl. Phys. B601, 125 (2001)
(15)
N. Graham, R.L. Jaffe, V. Khemani, M. Quandt, O. Schroeder,
and H. Weigel, Nucl. Phys. B677, 379 (2004)
(16)
S.W. Hawking and G.F.R. Ellis, The Large Scale Structure of
spacetime, (Cambridge University Press, Cambridge, London, 1973)
(17)
C.A.R. Herdeiro, M. Sampaio, hep-th/0510052
(18)
D. Hochberg, A. Popov, S. V. Sushkov, Phys. Rev.
Lett. 78, 2050 (1997)
(19)
N.R. Khusnutdinov and M. Bordag, Phys. Rev. D 59,
064017 (1999).
(20)
N.R. Khusnutdinov and S.V. Sushkov, Phys.
Rev. D 65, 084028 (2002)
(21)
S.G. Mamaev and N.N. Trunov, Yadernaya Fiz.
35, 1049 (1982) [in Russian]
(22)
J.J. Mckenzie-Smith, and W. Naylor,
Phys. Lett. B610, 159 (2005)
(23)
N.R. Khusnutdinov, Phys. Rev. D 67,
124020 (2003); Theor. Math. Phys. 138(2), 250 (2004)
(24)
K. Milton, J. Phys. A: Math. Gen. 37,
6391 (2004)
(25)
M. Scandurra, J. Phys. A 32, 5679 (1999)
(26)
M. Scandurra, J. Phys. A 33, 5707 (2000)
(27)
D. Vassilevich, Phys. Rep. 388, 279 (2003) |
Dust in the wind II:
Polarization imaging from disk-born outflows
F. Marin${}^{*}$
Observatoire Astronomique de Strasbourg, Université de Strasbourg,
CNRS, UMR 7550, 11 rue de l’Université, 67000 Strasbourg, France
R. W. Goosmann${}^{1}$
Abstract
In this second research note of a series of two, we aim to map the polarized flux emerging from a disk-born, dusty outflow as prescribed by Elvis (2000).
His structure for quasars was designed to unify the emission and absorption features observed in active galactic nuclei (AGN) and can be used as an alternative
scenario to the typical dusty torus that is extensively used to account for AGN circumnuclear obscuration. Using Monte Carlo radiative transfer simulations, we
model an obscuring outflow arising from an emitting accretion disk and examine the resulting polarization degree, polarization angle and polarized flux.
Polarization cartography reveals that a disk-born outflow has a morphology similar to that of a torus at polar viewing angles, with bright polarized flux reprocessed
onto the wind funnel. At intermediate and edge-on inclinations, the model is rather close to a double-conical wind, with higher fluxes at the cone bases.
This indicates that the optically thick outflow is not efficient enough to prevent radiation from escaping the central region, in particular because of the small
divergence angle of the outflow. As parametrized in this research note, a dusty outflow does not seem able to correctly reproduce the polarimetric
behavior of a usual dusty torus. Further refinement of the model is necessary.
keywords:
Galaxies: active - Galaxies: Seyfert - Polarization - Radiative transfer - Scattering
\TitreGlobal
SF2A 2013
††thanks: ${}^{*}$ [email protected]
1 Introduction
The anisotropic obscuration of the central region of AGN and quasars is well explained by the unified scheme (Antonucci 1993; Urry & Padovani 1995). In this scenario,
the fueling engine of AGN (a supermassive black hole, SMBH, and its surrounding, reprocessing accretion disk) is hidden along the equatorial plane by a circumnuclear medium
situated a few parsecs from the SMBH (Krolik & Begelman 1986, 1988). Illuminated by the continuum source, the obscuring material is expected to re-emit in
the infrared domain, as the UV-heated dust particles re-emit at longer wavelengths. This hypothesis was strongly supported by early photometric infrared
observations of Seyfert-like galaxies (Low & Kleinmann 1968; Kleinmann & Low 1970b, a), and an indubitable confirmation was found by Jaffe et al. (2004) and
Wittkowski et al. (2004), who observed the dusty heart of NGC 1068 using mid-infrared interferometry and resolved the outer morphology of the obscuring
region. The resulting morphology is close to a toroidal configuration, a geometry usually adopted in many AGN simulations
(Kartje 1995; Wolf & Henning 1999; Young 2000; Watanabe et al. 2003; Goosmann & Gaskell 2007; Marin et al. 2012b). However, the actual morphology of the circumnuclear gas surrounding the AGN
equator is highly debated. While simple, torus-like models give a sufficient approximation for radiative transfer simulations, more complex structures are
actively investigated (see Elitzur & Shlosman 2006 for a review).
This research note focuses on the particular model given by Elvis (2000, 2012), where the dusty torus is pictured as an
optically thick, equatorial outflow originating close to the supermassive black hole. While the presence of dust grains at a hundred gravitational
radii seems uncommon, Czerny & Hryniewicz (2012) showed that dust can form in accretion disk atmospheres. The dust mixture is protected by an internal,
shielding outflow composed of warm, highly ionized matter (WHIM) and can rise from the disk atmosphere before evaporating at high altitudes.
The result is a dusty outflow that propagates mainly along the equatorial plane, an interesting configuration to explain the edge-on obscuration of AGN
(Antonucci & Miller 1985; Pier & Krolik 1992).
Using an academic model inspired by our previous analyses (Marin & Goosmann, accepted) of the model proposed by Elvis (2000), we aim to produce polarization
maps of a dusty outflow in order to compare it with observations. This research note is the second in a series of two, both included in this volume. Polarization spectra
were investigated in the first issue (Marin & Goosmann, submitted).
2 Model setup
The empirically-derived structure for quasars presented by Elvis (2000) stipulates that a flow of matter arises from a narrow range of radii on the
accretion disk. The material is bent and accelerated outward by internal radiation pressure, with an angle $\theta$ = 60${}^{\circ}$ and a divergence angle
$\delta\theta$ = 3${}^{\circ}$. This angular parametrization is given by the ratio of narrow absorption line (NAL) to absorption-free AGN
(Reynolds 1997; Crenshaw et al. 1999), while the morphological parameters (distance to the ionizing source r${}_{1}$, width of the column flow r${}_{2}$) are derived
from constraints of the broad emission line (BEL) region (Peterson 1997). To follow the prescription of Elvis (2000) and our previous investigation
(Marin & Goosmann, submitted), we set r${}_{1}$ to 0.0032 pc (10${}^{16}$ cm) and r${}_{2}$ to 0.00032 pc (10${}^{15}$ cm). The wind extends up to r${}_{3}$ = 0.032 pc
(10${}^{17}$ cm). The dust mixture, filling our model, is taken from Wolf & Henning (1999). We opted for an optically thick outflow, with optical depths
$\tau_{2}\sim$ 3600 along the wind and $\tau_{1}\sim$ 36 along the equator. The model is summarized in Marin & Goosmann (submitted, Fig. 1).
We use the radiative transfer Monte Carlo code stokes (Goosmann & Gaskell 2007), upgraded with a polarization imaging technique (Marin et al. 2012b, a),
to compute the net polarization emerging from the model. The unpolarized input spectrum comes from an isotropic, disk-like emitting region and has a power-law
spectral energy distribution $F_{\rm*}~{}\propto~{}\nu^{-\alpha}$ with $\alpha=1$.
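As an aside, a power-law input spectrum of this kind is what a Monte Carlo code draws photon frequencies from; for $\alpha=1$ the inverse-CDF method reduces to log-uniform sampling. A sketch (band limits and sample size are arbitrary assumptions; this is not the actual stokes implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
nu_min, nu_max = 1.0, 4.0            # band edges, arbitrary units (assumption)

# inverse-CDF sampling from F_nu ∝ 1/nu (alpha = 1 is log-uniform in nu)
u = rng.random(100_000)
nu = nu_min * (nu_max / nu_min) ** u

hist, edges = np.histogram(nu, bins=20, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
# for a 1/nu density, hist * centers should be flat: 1 / ln(nu_max/nu_min)
print(np.allclose(hist * centers, 1.0 / np.log(nu_max / nu_min), rtol=0.1))
```

For a general $\alpha\neq 1$ the same method gives $\nu=[\nu_{\min}^{1-\alpha}+u(\nu_{\max}^{1-\alpha}-\nu_{\min}^{1-\alpha})]^{1/(1-\alpha)}$.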
3 Wavelength-integrated polarization maps
In Fig. 1, we present the simulated polarization cartography of the dusty model by Elvis (2000). The maps simultaneously show the polarized flux,
$PF_{\nu}$, the polarization percentage $P$, and the polarization position angle, $\psi$. The angle $\psi$ is represented by black bars drawn in the center
of each spatial bin and the length of the vector is proportional to $P$. A vertical bar indicates a polarization of $\psi$ = 90${}^{\circ}$, a bar leaning to
the right denotes 90${}^{\circ}>\psi>0^{\circ}$ and a horizontal bar stands for $\psi$ = 0${}^{\circ}$. For each pixel, the Stokes
parameters are integrated over the full 2000 – 8000 Å range.
At a polar viewing angle (Fig. 1, top-left), the funnel of the outflow is illuminated by the central source, and multiple forward and backward scattering
events are responsible for the high polarized flux detected in the vicinity of the emitting region. The illumination decreases with distance from the source and
nearly no polarized flux is detected in the most extended parts of the outflows. The polarization pattern is similar to the maps produced for an isolated
dusty torus (Marin et al. 2012b); the two geometries thus appear indistinguishable by polarization imaging at this inclination.
In the intermediate inclination case (Fig. 1, top-right), the flux mainly comes from backscattering events on the lower part of the model, where photons
escaping from the geometrically thin outflow ($\delta\theta$ = 3${}^{\circ}$) are reprocessed. Due to the large optical depth along the wind flow, most of the
polarized flux is absorbed within the wind, except at its base. The overall shape of the model is that of a double cone, with the maximum polarized flux detected at the wind bases.
However, the flux gradient, as seen from previous modeling of hourglass-shaped outflows (Marin et al. 2012b), is absent due to the hollow geometry of the model.
Finally, along the equatorial viewing angle (Fig. 1, bottom), the outflow is nearly invisible as it absorbs most of the incident photon flux. However, a
large fraction of radiation can escape by transmission through the equatorial medium and contributes to raising the polarization degree detected at type-2 viewing
angles. A few photons are detected in the northern part of the map, due to rare backscattering events at the top end of the upper wind, since the inclination of
the model is not exactly 90${}^{\circ}$.
4 Discussion
Polarization mapping is a unique tool for visualizing the morphological differences between reprocessing media. In the context of the quasar model
of Elvis (2000), the polarization signatures reveal a wide variety of geometries, tracing the outermost parts of the structures.
In comparison with a usual, dusty torus (Marin et al. 2012b), type-1 views are rather similar between a torus and a disk-born wind. The funnel of the model is
highly irradiated, while the extended parts of the regions are less impacted by the input radiation. At an intermediate viewing angle, a dusty wind
becomes more similar to a double-cone geometry in terms of flux distribution. The small width of the outflow allows radiation to escape from the inner regions,
and the polarized flux is stronger than at type-1 inclinations. The main difference between a torus and a disk-born wind is seen at equatorial, type-2 viewing angles.
The central part of the map shows the brightest flux, associated with intermediate degrees of polarization. The rest of the model is nearly invisible due to absorption.
If a dusty outflow is to account for the observed presence of dusty NLR clouds at larger distances from the irradiation source, then it is hard
to reconcile the resulting polarization maps with the observations of IC 5063 (Morganti et al. 2007). This southern Seyfert-2 galaxy shows an ionized structure
extending out to 15 kpc in a remarkable, X-shaped morphology (Morganti et al. 2003). In Marin & Goosmann (2013, in press), we found that a bi-phased outflow
reproduces well both the polarization degrees and the geometrical X shape. Here, we complementarily demonstrate that a wind uniquely filled with dust grains
cannot reproduce the flux morphology of IC 5063, as the hollow outflow mostly absorbs the inner radiation.
5 Summary and conclusions
As a complementary work to the spectrophotometric modeling of a dusty outflow (Marin & Goosmann, submitted), based on morphological constraints presented
in Elvis (2000), we modeled the optical/UV polarization maps of a theoretical, dusty model. We found that the resulting maps present various
geometries, depending on the observer’s viewing angle. While for a polar inclination, a dusty disk-born outflow can be mistaken for a regular, obscuring torus,
intermediate and edge-on views show rather different results. New observational campaigns using polarimetric imagers would greatly help to disentangle
a hydrostatic torus from a dynamic outflow, but our present conclusions show that a pure dusty outflow hardly reproduces the observational data.
As suggested in Marin & Goosmann (2013, in press), refining the bending and divergence angles of the flow could lead to better simulations. The academic model
presented here should not be taken as an actual model to replace a toroidal region, but as a step toward an upgrade of the model by Elvis (2000).
Future work, focusing on broadband polarimetric signatures, velocity fields, and absorption/emission features, will be conducted to push forward the conclusions
drawn in Marin & Goosmann (2013, in press).
References
Antonucci, R. 1993, ARA&A, 31, 473
Antonucci, R. R. J. & Miller, J. S. 1985, ApJ, 297, 621
Crenshaw, D. M., Kraemer, S. B., Boggess, A., et al. 1999, ApJ, 516, 750
Czerny, B. & Hryniewicz, K. 2012, Journal of Physics Conference Series, 372, 012013
Elitzur, M. & Shlosman, I. 2006, ApJL, 648, L101
Elvis, M. 2000, ApJ, 545, 63
Elvis, M. 2012, in Astronomical Society of the Pacific Conference Series, Vol. 460, AGN Winds in Charleston, ed. G. Chartas, F. Hamann, & K. M. Leighly, 186
Goosmann, R. W. & Gaskell, C. M. 2007, A&A, 465, 129
Jaffe, W., Meisenheimer, K., Röttgering, H. J. A., et al. 2004, Nature, 429, 47
Kartje, J. F. 1995, ApJ, 452, 565
Kleinmann, D. E. & Low, F. J. 1970a, ApJL, 161, L203
Kleinmann, D. E. & Low, F. J. 1970b, ApJL, 159, L165
Krolik, J. H. & Begelman, M. C. 1986, ApJL, 308, L55
Krolik, J. H. & Begelman, M. C. 1988, ApJ, 329, 702
Low, J. & Kleinmann, D. E. 1968, AJ, 73, 868
Marin, F., Goosmann, R., & Dovčiak, M. 2012a, Journal of Physics Conference Series, 372, 012065
Marin, F., Goosmann, R. W., Gaskell, C. M., Porquet, D., & Dovčiak, M. 2012b, A&A, 548, A121
Morganti, R., Holt, J., Saripalli, L., Oosterloo, T. A., & Tadhunter, C. N. 2007, A&A, 476, 735
Morganti, R., Oosterloo, T., Holt, J., Tadhunter, C., & van der Hulst, J. M. 2003, The Messenger, 113, 67
Peterson, B. M. 1997, An Introduction to Active Galactic Nuclei
Pier, E. A. & Krolik, J. H. 1992, ApJL, 399, L23
Reynolds, C. S. 1997, MNRAS, 286, 513
Urry, C. M. & Padovani, P. 1995, PASP, 107, 803
Watanabe, M., Nagata, T., Sato, S., Nakaya, H., & Hough, J. H. 2003, ApJ, 591, 714
Wittkowski, M., Kervella, P., Arsenault, R., et al. 2004, A&A, 418, L39
Wolf, S. & Henning, T. 1999, A&A, 341, 675
Young, S. 2000, MNRAS, 312, 567
The Effects of Dark Matter-Baryon Scattering on Redshifted 21 cm Signals
Hiroyuki Tashiro${}^{1}$, Kenji Kadota${}^{2}$ and Joseph Silk${}^{3,4,5}$
${}^{1}$ Department of Physics, Nagoya University, Nagoya
464-8602, Japan
${}^{2}$ Center for Theoretical Physics of the Universe, Institute for Basic Science, Daejeon 305-811, Korea
${}^{3}$ Institut d’Astrophysique de Paris, CNRS, UPMC Univ Paris 06,
UMR7095, 98 bis, boulevard Arago, F-75014, Paris, France
${}^{4}$ The Johns Hopkins University, Department of Physics and Astronomy, Baltimore, Maryland 21218, USA
${}^{5}$ Beecroft Institute of Particle Astrophysics and Cosmology, University of Oxford, Oxford OX1 3RH, UK
Abstract
We demonstrate that elastic scattering between dark matter (DM) and
baryons can affect the thermal evolution of the intergalactic medium at
early epochs and discuss the observational consequences. We show that,
due to the interaction between DM and baryons, the baryon temperature is cooled after decoupling from the CMB temperature.
We illustrate our findings by calculating the 21 cm power spectrum in
the presence of a velocity-dependent DM elastic scattering cross
section. For instance, for a DM mass of $10$ GeV, the 21 cm
brightness-temperature angular power spectrum can be suppressed by a
factor of 2 within the DM-baryon cross-section range currently allowed
by the CMB and large-scale structure data. This scale-independent
suppression of the angular power spectrum can be even larger for a
smaller DM mass with a common cross section (for instance, as large as
a factor 10 for $m_{d}\sim 1$ GeV), and such an effect would be of great interest for probing the nature of DM in view of forthcoming cosmological surveys.
1 Introduction
The nature of dark matter (DM) is one of the greatest mysteries of modern
cosmology. One can infer its properties through its interactions with
other visible objects. Even though conventional DM models assume only
gravitational interactions with ordinary baryonic matter, other forms of
couplings are not ruled out and deserve further study in view of the
potential signals observable in forthcoming experiments.
DM-baryon interactions are of great interest for cosmology because the DM-baryon coupling can modify the evolution of structure
formation at early epochs, and stringent constraints have been obtained
from current data (e.g. CMB and Ly-$\alpha$) for a wide variety of dark
matter models such as millicharged DM, dipole DM and strongly
interacting DM [1, 2, 3, 4, 5, 6].
In this paper, we focus on the impact of the DM-baryon coupling on the
temperature evolution of DM and baryons and explore the consequences for
the redshifted 21 cm signal from very early epochs.
In the standard cosmology, the
baryon temperature $T_{b}$ stays coupled to the CMB temperature $T_{\gamma}$, through Compton
scattering off the small residual fraction of free electrons left over from recombination,
down to a redshift $z_{\rm dec}(\sim 200)$, while $T_{b}$ subsequently
cools adiabatically at lower redshifts $z\lesssim z_{\rm dec}$.
On the other hand, the DM temperature $T_{d}$ decouples from $T_{\gamma}$ at a much earlier stage of the universe and $T_{d}$ is assumed to
evolve adiabatically since then. The DM is hence “cold”, and $T_{d}$ is much lower than $T_{b}$. Due to DM-baryon coupling, however, the baryons can be cooled by the DM after the baryon temperature decouples from the CMB temperature. In order to probe this effect, we consider the
observations of redshifted 21 cm lines from neutral hydrogen during the dark ages before reionization starts ($20\lesssim z\lesssim 1000$).
The signal of redshifted 21 cm lines depends on the properties of the baryon
gas at high redshifts, including the
density, the temperature, and the ionization fraction [7] (see
Refs. [8, 9] for recent reviews). Observations of redshifted 21 cm lines hence can provide a probe of
the thermal evolution of the baryonic gas. There have been related papers investigating the
21 cm signal due to energy injection during the dark ages including the
dissipation of magnetic fields [10, 11],
energy injection from primordial black holes [12, 13], and the decay or annihilation of dark matter [14, 15]. Our study in contrast looks into the effects of elastic scattering between the DM and baryons on the 21 cm signals by quantifying the change in the evolution of $T_{b}$ and $T_{d}$ due to DM-baryon coupling.
There are several on-going and planned projects to
measure the redshifted 21 cm signals by large interferometers such as
the LOw Frequency ARray (LOFAR) [16], the Murchison Widefield Array (MWA) [17],
the Giant Metre-wave Radio Telescope (GMRT) [18] and the Square Kilometer Array (SKA) (http://www.skatelescope.org/).
The purpose of this paper is to demonstrate the potential
significance of DM-baryon coupling for the 21 cm observables
and to investigate the range of DM-baryon coupling of observational interest.
We discuss, for simplicity, the case where cold dark matter accounts for the entire DM density, and we calculate the 21 cm signal in the presence of DM-baryon coupling during the dark ages before reionization starts ($20\lesssim z\lesssim 1000$). This suffices for our purpose of quantifying the significance of DM-baryon coupling for future cosmological observables.
Throughout this paper, we adopt the standard $\Lambda$CDM model parameters: $h=0.7$, $h^{2}\Omega_{b}=0.0226$ and $\Omega_{d}=0.112$, where
$h$ is the present Hubble constant normalized by 100 km/s/Mpc and
$\Omega_{b}$ and $\Omega_{d}$ are the density parameters of baryons and DM.
2 Thermal evolution of baryons and DM with DM-baryon coupling
We solve the Boltzmann equations to follow the background temperature evolution.
The coupling between baryons and DM induces momentum transfer between
them, and the temperatures of DM and baryons, $T_{d}$ and $T_{b}$, evolve as [19]
$$(1+z)\frac{dT_{d}}{dz}=2T_{d}+\frac{2m_{d}}{m_{d}+m_{H}}\frac{K_{b}}{H}(T_{d}-T_{b}),$$
(1)
$$(1+z)\frac{dT_{b}}{dz}=2T_{b}+\frac{2\mu_{b}}{m_{e}}\frac{K_{\gamma}}{H}(T_{b}-T_{\gamma})+\frac{2\mu_{b}}{m_{d}+m_{H}}\frac{\rho_{d}}{\rho_{b}}\frac{K_{b}}{H}(T_{b}-T_{d}),$$
(2)
where
$\mu_{b}\simeq m_{H}(n_{\rm H}+4n_{\rm He})/(n_{\rm H}+n_{\rm He}+n_{e})$ is the mean molecular
weight of baryons (including free electrons, and H, He ions), and
$K_{\gamma}$ and $K_{b}$ are the momentum transfer rates. $K_{\gamma}$ represents the usual Compton collision rate
$$\displaystyle K_{\gamma}=\frac{4\rho_{\gamma}}{3\rho_{b}}n_{e}\sigma_{T},$$
(3)
where $\sigma_{T}$ is the Thomson scattering cross-section. For $K_{b}$, we
consider the general form of cross section which can be velocity
dependent parameterized by the baryon-DM relative velocity $v$
$$\displaystyle\sigma(v)=\sigma_{0}v^{n},$$
(4)
so that the momentum transfer rate $K_{b}$ becomes [6]
$$K_{b}=\frac{c_{n}\rho_{b}\sigma_{0}}{m_{H}+m_{d}}\left(\frac{T_{b}}{m_{H}}+\frac{T_{d}}{m_{d}}\right)^{\frac{n+1}{2}}.$$
The spectral index $n$ depends on the nature of the DM model; for instance,
$n=-1$ corresponds to Yukawa-type DM, while $n=-2$ and $n=-4$ correspond to dipole DM and
millicharged DM, respectively [3, 4, 5, 6, 20, 21, 22, 23, 24, 25, 26].
The constant coefficient $c_{n}$ depends on the value of $n$ and can also
include a correction factor accounting for helium in addition to
hydrogen. Since $c_{n}$ can vary in the range ${\cal O}(0.1\sim 10)$ for the
parameter range of our interest [6], we simply set $c_{n}=1$
in our analysis, which suffices for our purpose of demonstrating the
effects of the DM-baryon coupling on the 21 cm observables. Throughout, we use the conventional momentum-transfer cross section [5, 6, 27, 28], i.e., the differential cross section integrated with the weight $(1-\cos\theta)$,
$$\sigma(v)=\int d\cos\theta\,(1-\cos\theta)\frac{d\sigma(v)}{d\cos\theta}.$$
(6)
The weight factor $(1-\cos\theta)$ accounts for the longitudinal momentum
transfer and regulates the spurious infrared divergence of forward scattering
with no momentum transfer, corresponding to $\cos\theta\rightarrow 1$.
We solve Eqs. (1) and (2) numerically with
$T_{\gamma}=T_{0}(1+z)$, where $T_{0}=2.73~{}$K. In the early
stages of the universe, the baryon temperature is well known to be tightly coupled to the
CMB temperature, $T_{b}\sim T_{\gamma}$. Similarly, for a sufficiently large $K_{b}$, the difference between $T_{d}$ and $T_{b}$ can become small in the early universe. To calculate the evolution accurately
in both of these tight-coupling regimes, it is useful to expand
Eqs. (1) and (2) up to first order in the
temperature differences, as done in Ref. [29]. For this purpose, we introduce two heating time scales due to Compton scattering and
DM-baryon coupling, $t_{C}=m_{e}/2\mu_{b}K_{\gamma}$ and $t_{DB}=(m_{d}+m_{\rm H})/2m_{d}K_{b}$, and we classify the
thermal evolution in the early universe into three cases.
The first is the case
with $Ht_{C}\ll 1$ and $Ht_{DB}\ll 1$, that is, $T_{b}$ and $T_{d}$ are
tightly coupled with $T_{\gamma}$. The second case is for $Ht_{C}\ll 1$ and
$Ht_{DB}>1$, in which only $T_{b}$ is tightly coupled with $T_{\gamma}$.
The third is for $Ht_{C}>1$ and
$Ht_{DB}\ll 1$ (which corresponds to $z\lesssim z_{\rm dec}$ for the parameter range of our interests as explicitly shown below).
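The qualitative behavior across these regimes can be illustrated by a schematic direct integration of Eqs. (1) and (2). The rate normalizations, toy units ($H$ in units of $H_{0}$, masses in units of $m_{\rm H}$), and initial conditions below are illustrative assumptions chosen for demonstration, not the RECFAST-based treatment with the tight-coupling expansions of Secs. 2.1-2.3 used in this paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Schematic integration of Eqs. (1)-(2) in redshift, in toy units.
# All rate normalizations are illustrative, NOT the RECFAST-based treatment.
T0 = 2.73                      # CMB temperature today [K]
m_d, m_H = 1.0, 1.0            # DM mass ~ m_H (toy choice)
rho_ratio = 5.0                # rho_d / rho_b ~ Omega_d / Omega_b

def H(z):                      # matter-dominated Hubble rate, units of H0
    return (1.0 + z)**1.5

def K_gamma(z):                # toy Compton rate; gives z_dec ~ 150 here
    return 1.0e-9 * (1.0 + z)**4

def K_b(z, Tb, Td):            # Eq. (5) with n = -4 and c_n, sigma_0 -> 1
    return (1.0 + z)**3 * (Tb / m_H + Td / m_d)**(-1.5) / (m_H + m_d)

def rhs(z, T):
    Td, Tb = T
    Tg = T0 * (1.0 + z)
    dTd = (2.0*Td + (2.0*m_d/(m_d + m_H)) * K_b(z, Tb, Td)/H(z) * (Td - Tb)) / (1.0 + z)
    dTb = (2.0*Tb + 2.0*1836.0 * K_gamma(z)/H(z) * (Tb - Tg)   # 2 mu_b/m_e ~ 2*1836
           + (2.0/(m_d + m_H)) * rho_ratio * K_b(z, Tb, Td)/H(z) * (Tb - Td)) / (1.0 + z)
    return [dTd, dTb]

z = np.linspace(900.0, 20.0, 500)
sol = solve_ivp(rhs, (900.0, 20.0), [0.9 * T0 * 901.0, T0 * 901.0],
                t_eval=z, method="LSODA", rtol=1e-8)
Td, Tb = sol.y
# Baryons decouple from the CMB and are then dragged toward the colder DM.
print(Td[-1] <= Tb[-1] <= T0 * 21.0)
```

The stiff implicit solver (LSODA) is used because the Compton term makes the system stiff whenever $Ht_{C}\ll 1$, which is precisely why the text resorts to the first-order expansions below.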
2.1 Regime I: $Ht_{C}\ll 1$ and $Ht_{DB}\ll 1$
When $Ht_{C}\ll 1$ and $Ht_{DB}\ll 1$, the
difference among $T_{b}$, $T_{d}$ and $T_{\gamma}$ would be very small, and we can expand $T_{b}$ and $T_{d}$ as
$$T_{b}=T_{\gamma}-\epsilon_{\gamma},$$
(7)
$$T_{d}=T_{b}-\epsilon_{b},$$
(8)
where $|\epsilon_{\gamma}|/T_{\gamma}\ll 1$ and $|\epsilon_{b}|/T_{b}\ll 1$.
We also assume
that $\epsilon_{\gamma}/T_{\gamma}$ and $\epsilon_{b}/T_{b}$ are of the same order as
$Ht_{C}$ and $Ht_{DB}$.
Substituting Eq. (8) into Eq. (1), we obtain up to first order in $\epsilon_{b}$
$$\frac{\epsilon_{b}}{T_{\gamma}}=Ht_{DB},$$
(9)
where we used $T_{\gamma}\propto(1+z)$. Because the coefficient $1/(Ht)\gg 1$ is large, we treat $d\epsilon/dz=0$, so that
$dT_{d}/dz=dT_{b}/dz=dT_{\gamma}/dz$ at first order. Similarly,
$\epsilon_{\gamma}$ is given by
$$\frac{\epsilon_{\gamma}}{T_{\gamma}}=\left(1+\frac{1}{f}\right)Ht_{C},$$
(10)
where $f=m_{d}\Omega_{b}/\mu_{b}\Omega_{d}$.
With these approximations at hand, in terms of $\epsilon_{\gamma}$ and
$\epsilon_{b}$, the time evolutions of the temperature can be rewritten as
$$\frac{dT_{b}}{dz}\approx\frac{T_{\gamma}}{1+z}-\epsilon_{\gamma}\left(\frac{1}{1+z}+\frac{d\ln H}{dz}+\frac{d\ln t_{C}}{dz}\right),$$
(11)
$$\frac{dT_{d}}{dz}\approx\frac{T_{\gamma}}{1+z}-\epsilon_{\gamma}\left(\frac{1}{1+z}+\frac{d\ln H}{dz}+\frac{d\ln t_{C}}{dz}\right)-\epsilon_{b}\left(\frac{1}{1+z}+\frac{d\ln H}{dz}+\frac{d\ln t_{DB}}{dz}\right),$$
(12)
where we assume that $f$ is constant. (Since $f$ depends on the ionization fraction
through $\mu_{b}$, this assumption is invalid during the epochs of
recombination and reionization. We checked, however, that even though the evolution of $f$ itself is not negligible, its effect on the temperature evolution is negligible even during these epochs.)
The evolutions of $T_{b}$ and $T_{d}$ are obtained by solving
Eqs. (11) and (12) with
Eqs. (9) and (10).
2.2 Regime II: $Ht_{C}\ll 1$ and $Ht_{DB}>1$
In this regime, the DM temperature $T_{d}$ decouples from the baryon temperature $T_{b}$,
while $T_{b}$ still couples with $T_{\gamma}$. We hence can assume that
$$T_{b}=T_{\gamma}-\epsilon_{\gamma},$$
(13)
with $|\epsilon_{\gamma}|/T_{\gamma}\ll 1$.
Eq. (2) gives, to first order in $\epsilon_{\gamma}$,
$$\frac{\epsilon_{\gamma}}{T_{\gamma}}=Ht_{C}+\frac{t_{C}}{ft_{DB}}\left(1-\frac{T_{d}}{T_{\gamma}}\right).$$
(14)
The redshift derivative of $T_{b}$ can then be approximated as
$$\frac{dT_{b}}{dz}\approx\frac{T_{\gamma}}{1+z}-\frac{T_{\gamma}}{1+z}Ht_{C}-T_{\gamma}t_{C}\frac{dH}{dz}-T_{\gamma}H\frac{dt_{C}}{dz}-\frac{d}{dz}\left[T_{\gamma}\frac{t_{C}}{ft_{DB}}\left(1-\frac{T_{d}}{T_{\gamma}}\right)\right].$$
(15)
We numerically calculate the thermal evolution of $T_{b}$
and $T_{d}$ from Eq. (15) along with
Eq. (1).
2.3 Regime III: $Ht_{C}>1$ and $Ht_{DB}\ll 1$
In this case, while the baryon temperature $T_{b}$ is already decoupled
from the CMB temperature $T_{\gamma}$, the dark matter temperature $T_{d}$
is coupled to $T_{b}$.
We can write the dark matter temperature as
$$T_{d}=T_{b}-\epsilon_{b},$$
(16)
with $|\epsilon_{b}|/T_{b}\ll 1$.
From Eqs. (1) and (2),
we obtain to first order in $\epsilon_{b}$ and $Ht_{DB}$,
$$\epsilon_{b}=-\left(1+\frac{1}{f}\right)^{-1}\frac{t_{DB}}{t_{C}}\left(T_{b}-T_{\gamma}\right).$$
(17)
Therefore, in this tight-coupling regime,
the evolution of $T_{d}$ can be approximated as
$$\frac{dT_{d}}{dz}\approx\frac{dT_{b}}{dz}-\epsilon_{b}\left[\frac{d\ln t_{DB}}{dz}-\frac{d\ln t_{C}}{dz}+\frac{1}{T_{b}-T_{\gamma}}\left(\frac{dT_{b}}{dz}-\frac{T_{\gamma}}{1+z}\right)\right].$$
(18)
On the other hand, the evolution of $T_{b}$ can be written as
$$(1+z)\frac{dT_{b}}{dz}\approx 2T_{b}+\left(1+\frac{1}{f}\right)^{-1}\frac{1}{Ht_{C}}(T_{b}-T_{\gamma}).$$
(19)
Since $f\propto m_{d}/m_{\rm H}$, the change of $T_{b}$ due to the
DM-baryon coupling becomes smaller for a larger $m_{d}$ (with a fixed $\Omega_{d}$), and, in the limit of $m_{d}\gg m_{\rm H}$, the baryons and DM can be described as
a single gas. In this tight-coupling limit with $m_{d}\gg m_{\rm H}$, the total number density of the
DM-baryon mixed gas hardly changes from that of the baryon gas alone, and the
evolution of $T_{b}$ for a large $m_{d}$ is similar
to the $T_{b}$ evolution without the DM-baryon coupling.
In other words, a small $m_{d}$ ($\ll m_{\rm H}$) leads to a significant
increase of the total number density of the mixed gas, and the Compton
cooling term to couple $T_{b}$ to $T_{\gamma}$ effectively becomes
small. Hence, for a smaller $m_{d}$, the deviation of $T_{b}\approx T_{d}$
from $T_{\gamma}$ with the DM-baryon coupling becomes bigger compared with the deviation of $T_{b}$ from $T_{\gamma}$ without DM-baryon coupling.
3 Numerical results for DM and baryon temperature evolution
Following the treatment of the tight-coupling regimes in the previous section, we numerically calculate the DM and
baryon temperatures, $T_{d}$ and $T_{b}$, by modifying the public code RECFAST [30]. Before presenting the results with
different couplings between DM and baryons, we note that there exist
strong constraints on this DM-baryon coupling notably from the CMB and
large-scale structure due to the suppression of the matter density
perturbations where the DM perturbation growth is suppressed because of
the drag force arising from the momentum transfer between the DM and
baryon fluids [2, 6, 31, 32].
For instance, small-scale observations (Lyman-$\alpha$ forest) by SDSS and
the CMB data by Planck can set upper bounds on the coupling between DM and baryons of order
$\sigma_{0}/m_{d}\lesssim 10^{-17,-9,-6,-3,+4}~{}{\rm cm}^{2}/{\rm g}$
for $n=-4,-2,-1,0,+2$ [6]. For the purpose of presenting our
findings through a concrete example, in the following we discuss the
scenarios of $n=-4$ (typical for a millicharged DM scenario [4, 5, 20, 21, 23]) because a large negative power leads to a prominent enhancement in the cross section for a smaller momentum transfer at low redshift. We found, for the scenarios with $n=-2,-1,0,+2$, that the DM-baryon coupling cannot lead to any appreciable change in the 21 cm power spectrum within the aforementioned cross-section upper bounds from the currently available data.
Fig. 1 shows
the temperature evolution with $n=-4$ for different values of $\sigma_{17}$, where we normalize the coupling constant as $\sigma_{0}=\sigma_{17}m_{\rm H}\times 10^{-17}~{}{\rm cm^{2}/g}$.
To demonstrate the mass dependence, we show the results for $m_{d}=m_{\rm H}$ and $10m_{\rm H}$ in Fig. 1 for different values of the DM-baryon coupling. At high redshifts, $z>z_{\rm dec}$, $T_{b}$ is tightly coupled to $T_{\gamma}$ ($T_{b}\approx T_{\gamma}$), and hence the thermal evolution can be described with the treatment of Sec. 2.2, where $T_{d}\propto 1/t_{DB}$. It is consequently difficult to find any difference between
the evolution of $T_{b}$ for the different couplings in
Fig. 1 at high redshift. Note, however, that $T_{d}$
deviates from $T_{b}\approx T_{\gamma}$ at high redshifts. In the presence
of DM-baryon coupling, the DM thermal evolution is not adiabatic and is determined by the balance between
the adiabatic cooling and the heating due to the coupling. We can infer,
by substituting $T_{b}\approx T_{\gamma}$ in Eq. (1), that DM
evolution follows $T_{d}\sim T_{\gamma}/t_{DB}H$. More precisely, from
Fig. 1, we numerically find that the DM temperature is
well approximated by the fitting formula $T_{d}\approx T_{\gamma}/(1.5\,t_{DB}H)$. The time scale $t_{DB}$ is proportional to $(m_{d}+m_{\rm H})^{2}/\sigma_{17}m_{d}$. When $m_{d}\gg m_{\rm H}$, $t_{DB}\propto m_{d}$, which results in $T_{d}\propto 1/t_{DB}\propto 1/m_{d}$, and Fig. 1 indeed shows that $T_{d}$ is larger for a smaller $m_{d}$.
Let us note here that the DM-baryon momentum transfer rate $K_{b}$ given
in Eq. (5), and hence the thermal evolution at high
redshifts, turns out to depend heavily on $T_{b}$ but not much
on $T_{d}$; the temperature dependence of $K_{b}$ enters through the
factor $(T_{b}/m_{\rm H}+T_{d}/m_{d})$. For $m_{d}\gg m_{\rm H}$, $(T_{b}/m_{\rm H}+T_{d}/m_{d})\sim T_{b}/m_{\rm H}$ to leading order in $m_{\rm H}/m_{d}$. For $m_{d}\ll m_{\rm H}$, on the other hand, $T_{d}\sim 1/t_{DB}\sim m_{d}$ and the $m_{d}$
dependence cancels out in $T_{d}/m_{d}$. Consequently, because the
dark matter temperature never exceeds the baryon temperature, the
factor $(T_{b}/m_{\rm H}+T_{d}/m_{d})$ is at most of order $\sim 2\times T_{b}/m_{\rm H}$, saturated at $m_{d}\sim m_{\rm H}$. We hence
expect that the upper bound $\sigma_{0}\lesssim 10^{-16}m_{\rm H}~{}{\rm cm^{2}/g}$ (corresponding to $\sigma_{17}=10$ in our notation), which Ref. [6] obtained for $m_{d}=10$ GeV, would not become significantly tighter even for a smaller dark matter mass. We therefore restrict the parameter range of our discussion to $\sigma_{17}\leq 10$ and present results for $m_{d}=10$ and $1$ GeV, which suffices for our purpose of showing the potential significance of the DM-baryon coupling on the 21 cm signals.
At low redshifts after $T_{b}$ has decoupled from $T_{\gamma}$, $z\lesssim z_{\rm dec}$, the coupling between baryons and dark matter affects the temperature
evolution of baryons.
The baryons become cooler than in the no-coupling scenarios through the DM-baryon coupling, because $T_{d}<T_{b}$. Sufficient coupling can make the
temperatures of baryons and DM equal. Once they match, the coupling term in the Boltzmann equations ($\propto(T_{b}-T_{d})$) effectively vanishes and the thermal evolution becomes adiabatic, that is, $T_{b}$ and $T_{d}$ are
proportional to $(1+z)^{2}$ because the DM and baryons have the same
adiabatic index. Since we set $n=-4$ for the velocity-dependence of the coupling,
the coupling strength becomes bigger for a smaller momentum transfer at a smaller redshift. The evolution of $T_{b}$ is modified at lower redshifts
even for a small $\sigma_{17}$ for $m_{d}=m_{H}$ in the left panel. We find, however, that, when
$\sigma_{17}<0.001$, the baryon temperature does not couple with
the dark matter temperature even at lower redshifts and its evolution is similar to the case
without the coupling.
The DM-baryon coupling term in the baryon temperature evolution,
Eq. (2), becomes small as $m_{d}$ increases, as confirmed in Fig. 1.
For a sufficiently large value of DM-baryon coupling (in our example, for $\sigma_{17}>10$),
the DM temperature is well coupled with the baryon temperature, and $T_{d}\approx T_{b}$ is established even around the epoch when the baryon
temperature starts to decouple from the CMB temperature. The evolution
in this regime corresponds to the tight-coupling case discussed in Sec. 2.3 where a small DM mass, due to a small Compton coupling between $T_{b}$ and $T_{\gamma}$, leads to the early decoupling of $T_{b}\approx T_{d}$ from $T_{\gamma}$. The difference of the baryon temperature evolution from the no DM-baryon coupling scenarios hence becomes bigger for a smaller DM mass.
Finally it is worth mentioning the case in the limit of $m_{d}\ll m_{\rm H}$.
At high redshifts $z>z_{\rm dec}$, the time scale $t_{DB}$ is proportional to $1/m_{d}$, and the DM temperature $T_{d}\propto 1/t_{DB}$ which decreases as $m_{d}$ becomes small.
The DM-baryon coupling term in
Eq. (2) does not become small in the limit of $m_{d}\ll m_{\rm H}$, in contrast to $m_{d}\gg m_{\rm H}$ case, and, in fact, becomes
independent of $m_{d}$ with only its dependence on $\sigma_{0}$.
Hence the baryon temperature can be dragged to the lower dark matter
temperature, and one finds that the change in the $T_{b}$ evolution is bigger for a smaller $m_{d}$.
4 The evolution of 21 cm signals with DM-baryon coupling
The DM-baryon coupling can affect the evolution of the baryon
temperature as shown in the previous section, and the measurement of baryon temperature in the dark ages, in
particular during $20<z<z_{\rm dec}$, could well reveal the nature of DM.
The measurement of redshifted 21 cm lines from neutral hydrogen is
expected to be a good probe of baryon gas in the dark ages. The strength
of the emission or absorption of the 21 cm lines depends on the
density, temperature and ionization fraction of baryon gas.
The observational signals of redshifted 21 cm lines are measured as the
difference between the brightness temperature of redshifted 21 cm signals and
the CMB temperature.
This differential brightness temperature is given by
$$\delta T_{b}(z)=\left[1-\exp(-\tau)\right]\frac{T_{s}-T_{\gamma}}{1+z},$$
(20)
where $\tau$ is the optical depth and $T_{s}$ is the spin temperature.
The spin temperature describes the number density ratio of hydrogen atoms
in the excited hyperfine state to those in the ground state, and is given by [33, 34]
$$T_{s}=\frac{T_{*}+T_{\gamma}+y_{k}T_{b}}{1+y_{k}},$$
(21)
where $T_{*}$ is the temperature corresponding to the energy of the hyperfine
structure of neutral hydrogen and $y_{k}$ is the kinetic coupling term, given by
$$y_{k}=\frac{T_{*}}{AT_{b}}(C_{\rm H}+C_{e}+C_{p}),$$
(22)
where $A$ is the spontaneous emission rate and $C_{\rm H}$, $C_{e}$, and $C_{p}$ are
the de-excitation rates of the triplet due to collisions with neutral atoms, electrons, and protons [8].
For these rates, following Ref. [35], we adopt the values from
Refs. [8, 36].
Since we are
interested in the signals from the dark ages, we neglect the
Lyman-$\alpha$ coupling (Wouthuysen-Field effect) term [33, 37] in Eq. (21), which is
ineffective without luminous objects.
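A minimal numerical sketch of Eq. (21) shows how $T_{s}$ interpolates between $T_{\gamma}$ (weak kinetic coupling) and $T_{b}$ (strong kinetic coupling); the $y_{k}$ values and the toy adiabatic $T_{b}$ below are illustrative, not taken from the de-excitation rate tables cited above:

```python
# Spin temperature from Eq. (21).  T_* = 0.068 K is the 21 cm hyperfine
# temperature; the y_k values and the adiabatic T_b (decoupling assumed at
# z ~ 150) are illustrative placeholders.
T_star = 0.068

def spin_temperature(T_gamma, T_b, y_k):
    return (T_star + T_gamma + y_k * T_b) / (1.0 + y_k)

z = 30
T_gamma = 2.73 * (1 + z)
T_b = 2.73 * (1 + z)**2 / 151.0          # toy adiabatic baryon temperature
for y_k in (0.0, 0.1, 10.0):             # T_s runs from ~T_gamma down to ~T_b
    print(y_k, round(spin_temperature(T_gamma, T_b, y_k), 2))
```

This makes explicit why a lower $T_{b}$ (and hence a smaller $y_{k}\propto 1/T_{b}$ times smaller collision rates) pushes $T_{s}$ back toward $T_{\gamma}$, suppressing the signal.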
We show the evolution of $T_{s}$ for different DM-baryon coupling values in Fig. 2.
As one can expect
from Fig. 1, the difference from the case without the
coupling is larger for $m_{d}=m_{\rm H}$ than for $m_{d}=10m_{\rm H}$.
The 21 cm signals depend on $T_{s}$; we hence expect that the redshift evolution of the differential brightness
temperature also depends on $\sigma_{17}$.
Measurements of cosmological $21$ cm signals will be performed by
interferometers such as LOFAR and SKA which can measure the fluctuations in the
differential brightness temperature. The angular power spectrum of
$\delta T_{b}$ is given by
$$C_{\ell}(z)=\delta T_{b0}^{2}\int dk~{}k^{2}\Delta^{2}_{21,\ell}(z,k)P(k),$$
(23)
where $\Delta_{21,\ell}$ is the transfer function for the 21 cm
fluctuations, $P(k)$ is the power spectrum of the primordial
curvature perturbations, and $\delta T_{b0}$ is the value of the
differential 21 cm brightness temperature,
which can be approximated by [38]
$$\delta T_{b0}\approx 26~{}{\rm mK}~{}x_{H}\left(1-\frac{T_{\gamma}}{T_{s}}\right)\left(\frac{h^{2}\Omega_{b}}{0.02}\right)\left[\left(\frac{1+z}{10}\right)\left(\frac{0.3}{\Omega_{m}}\right)\right]^{1/2}.$$
(24)
In this paper, since we consider the effect of the coupling between baryons
and dark matter on the temperature evolution, we focus only on the
modification of $\delta T_{b0}$ due to the coupling.
We ignore the effect of the DM-baryon coupling on the evolution of the
density fluctuations [6, 2].
Therefore, the transfer function
$\Delta_{21,\ell}$, which we calculate by using CAMB [35],
is the same as that in the standard $\Lambda$CDM model.
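With tabulated arrays for $\Delta^{2}_{21,\ell}$ and $P(k)$ (e.g., exported from CAMB), the integral in Eq. (23) reduces to a one-dimensional quadrature. A minimal sketch (our own illustrative code, not part of the original analysis):

```python
import numpy as np

def c_ell(delta_tb0, k, transfer_sq, P):
    """Angular power spectrum of Eq. (23) by trapezoidal quadrature.

    k           : wavenumber grid (ascending)
    transfer_sq : transfer function Delta^2_{21,ell}(z, k) on that grid
    P           : primordial curvature power spectrum P(k)
    """
    return delta_tb0**2 * np.trapz(k**2 * transfer_sq * P, k)
```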
We show the
dependence of the angular power spectrum $C_{\ell}(z)$ on DM-baryon coupling in Fig. 3.
According to Eq. (24), the evolution of $\delta T_{b0}$ depends on
the spin temperature shown in Fig. 2.
The coupling between baryons and
dark matter lowers the baryon temperature. Therefore, the kinetic
coupling term for the hyperfine structure in Eq. (21)
becomes small due to the low baryon temperature. The spin
temperature then quickly approaches the CMB temperature for $z\lesssim 50$, which results in a smaller amplitude of $C_{\ell}$ compared with the no coupling case.
For instance, for $\sigma_{17}<0.1$ with $m_{d}=m_{\rm H}$, the amplitude of
$C_{\ell}$ is suppressed by a factor of $10$ (see the red and magenta lines in the left panel of
Fig. 3). As the coupling increases, the dark matter
temperature becomes larger and approaches the baryon temperature, as shown in Fig. 1.
Fig. 3 indeed shows that the amplitudes are comparable, except in the small-coupling case ($\sigma_{17}=0.01$). The behavior for this small cross-section arises because $T_{b}$ does not couple with $T_{d}$ at high redshifts ($z>50$) due to the weak coupling. As $T_{b}$ becomes smaller at lower redshifts, however, the coupling can become more effective due to the enhancement at small momentum transfer.
Fig. 3 confirms our expectation that the effect of DM-baryon coupling on the $T_{b}$ evolution becomes small as $m_{d}$ increases (as mentioned at the end of §2.3).
Note that, while the amplitude of $C_{\ell}$ is suppressed due to the
coupling between baryons and dark matter at low redshifts ($z\lesssim 40$), it is amplified at high redshifts ($z\gtrsim 50$).
This is because, at high redshifts, the kinetic coupling term in Eq. (21) is significant and the spin
temperature is tightly coupled with the baryon temperature. The deviation of the spin temperature from the CMB
temperature hence becomes large, and $C_{\ell}$ is consequently amplified at high redshifts.
Let us also comment on $C_{\ell}$ when $m_{d}$ is smaller than $m_{\rm H}$.
As mentioned in Sec. 3,
the baryon temperature is strongly dragged to the dark matter
temperature, which decreases with decreasing $m_{d}$.
Therefore, the kinetic coupling term is small for a small $m_{d}$, and
the spin temperature couples more tightly with the CMB temperature.
This tight coupling causes the strong suppression of $C_{\ell}$, according
to Eq. (24).
As a result,
when $m_{d}\ll m_{\rm H}$, the suppression due to the coupling is
significant even at large redshifts.
5 Discussion and conclusion
Before concluding our studies on 21 cm signals, let us briefly mention other relevant observables which could potentially be affected by the change in the background temperature evolution because of the DM-baryon coupling.
Epoch of recombination and CMB anisotropies: If the baryon temperature changes around the epoch of recombination,
the last scattering surface could be modified and this modification can produce a
footprint on the CMB temperature anisotropies.
We evaluate the ionization fraction for different
$\sigma_{17}$ and plot the results in Fig. 4.
We found that, since the baryon temperature is strongly coupled with the CMB
temperature around these redshifts (see Fig. 1), the dark matter cooling cannot
decrease the baryon temperature enough to modify the epoch of
recombination.
Therefore, the coupling between baryons and dark matter cannot produce an
observable signature in the primordial CMB anisotropies.
At lower redshifts, when the baryon temperature decouples from the
CMB temperature, the dark matter cooling could affect the
thermal evolution of baryons. Since the baryon temperature is dragged to
a lower value, the residual ionization fraction becomes
smaller. It is, however, difficult to measure such a small residual
ionization fraction with cosmological observations.
CMB distortions:
Precise measurements of CMB spectral distortions from the blackbody
spectrum can be a promising probe of the thermal history of the Universe (see Ref. [39] for a recent
review).
Generally CMB distortions can be classified into two types [40, 41]. One is the $\mu$-type
distortion which is generated between $z\sim 10^{6}$ and $z\sim 10^{4}$. The other is the $y$-type distortion which is produced after the
epoch of the $\mu$-type distortion generation ($z\lesssim 10^{4}$).
The difference in the adiabatic indices between baryons and
CMB photons can create CMB distortions [42].
Because the baryon temperature is always lower than the CMB temperature,
the energy of the CMB photons is transferred to baryons via Compton scatterings.
This energy transfer modifies the CMB frequency spectrum and we can observe
this modification as CMB distortions.
Following Ref. [42],
in order to evaluate the CMB
distortions due to this baryon cooling, it is useful to define the
parameter $Y_{\rm BEC}$ as
$$Y_{\rm BEC}=-\int dz\left(1-\frac{T_{b}}{T_{\gamma}}\right)\frac{k_{B}\sigma_{T}}{m_{e}c}\frac{n_{e}T_{\gamma}}{(1+z)H}.$$
(25)
For example, the $y$-parameter, which characterizes the $y$-type distortion, is obtained
by $y=-Y_{\rm BEC}$.
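The integral in Eq. (25) can be estimated numerically once the thermal history is tabulated. The following sketch is our own (in SI units); the input arrays must be supplied from a recombination code:

```python
import numpy as np

def y_bec(z, T_b, T_gamma, n_e, H):
    """Trapezoidal estimate of Y_BEC in Eq. (25).

    z       : redshift grid (ascending)
    T_b     : baryon temperature [K]
    T_gamma : CMB temperature [K]
    n_e     : free electron number density [m^-3]
    H       : Hubble rate [s^-1]
    """
    k_B = 1.380649e-23          # Boltzmann constant [J/K]
    sigma_T = 6.6524587e-29     # Thomson cross-section [m^2]
    m_e_c = 9.1093837e-31 * 2.99792458e8  # electron mass times c [kg m/s]
    integrand = -(1.0 - T_b / T_gamma) * (k_B * sigma_T / m_e_c) \
        * n_e * T_gamma / ((1.0 + z) * H)
    return np.trapz(integrand, z)
```

Since $T_{b}\le T_{\gamma}$ throughout, the integrand is non-positive, so $Y_{\rm BEC}\le 0$ and $y=-Y_{\rm BEC}\ge 0$.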
We evaluated $Y_{\rm BEC}$ for different values of $\sigma_{17}$. We find that
$Y_{\rm BEC}$ becomes at most ${\cal O}(10^{-9})$ for the parameter range of interest, $10^{-3}<\sigma_{17}<10^{2}$, while $Y_{\rm BEC}$ is on
the order of $10^{-10}$ without the coupling between baryons and dark
matter.
The value of $Y_{\rm BEC}$ corresponds to $\mu\sim 10^{-9}$ for the
$\mu$-type distortion and $y\sim 10^{-9}$ for the $y$-type distortion.
Because the Silk damping of the primordial density perturbations
produces $\mu\sim 10^{-8}$ [43, 44] and the reionization process gives $y\sim 10^{-7}$ [45], it would be difficult to find the signature of the coupling between
baryons and dark matter in the CMB distortions.
We have demonstrated that DM-baryon coupling can affect the background temperature evolution and consequently the 21 cm signal. Our specific example, the velocity-dependent elastic scattering cross-section, would also be of great interest for particle physics studies because of its infrared enhancement at low momentum transfer, which has been explored for potential signals beyond the standard model at collider and dark matter search experiments [21, 23, 24, 25, 26]. Such probes of the dark matter properties from both cosmology and particle physics deserve further study in view of forthcoming experiments which can explore the nature of the DM coupling to ordinary baryons.
We have shown that the 21 cm signal is suppressed due to
the existence of DM-baryon coupling, which would certainly be useful for providing further constraints on DM-baryon coupling. For instance, we have found
that the 21 cm brightness temperature angular power spectrum can be
suppressed by a factor of 2 for $m_{d}=10$ GeV within the current bounds from
the CMB and Ly$\alpha$ data. This overall suppression can be even larger
for a smaller dark matter mass with a fixed cross-section, for instance of
order a factor of 10 for $m_{d}=1$ GeV. We have, however, found that the degree of
further suppression becomes milder for an even smaller $m_{d}\ll m_{\rm H}$, partly because the dependence of the DM-baryon momentum
transfer rate $K_{b}$ on the dark matter mass saturates at $m_{d}\sim m_{\rm H}$ and becomes independent of $m_{d}$ for $m_{d}\ll m_{\rm H}$.
We plan to explore the effects of DM-baryon coupling on the evolution of fluctuations in future work where one needs extra care in the treatment of non-linearities. Some simplifications made in our analysis would also deserve further study. For instance, we considered only the thermal velocity and did not include the peculiar velocity contributions in estimating the DM-baryon momentum transfer rate.
Even though the inclusion of such bulk velocity contributions does not always change the upper bounds on the allowed DM-baryon scattering cross-sections, there are cases where the cross-section constraints could become tighter (possibly even by a factor of 10), although more detailed numerical analysis is needed because of the uncertainties caused by non-linear evolution [6].
This work was supported by the MEXT of Japan, Program for Leading Graduate Schools "PhD
professional: Gateway to Success in Frontier Asia",
the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research
(No. 25287057),
the ERC project 267117 (DARK) hosted by Université Pierre et Marie Curie - Paris 6 and at JHU by NSF grant OIA-1124403.
References
[1]
D. N. Spergel and P. J. Steinhardt,
Phys. Rev. Lett. 84, 3760 (2000)
[astro-ph/9909386].
[2]
X. -l. Chen, S. Hannestad and R. J. Scherrer,
Phys. Rev. D 65, 123515 (2002)
[astro-ph/0202496].
[3]
K. Sigurdson, M. Doran, A. Kurylov, R. R. Caldwell and M. Kamionkowski,
Phys. Rev. D 70, 083501 (2004)
[Erratum-ibid. D 73, 089903 (2006)]
[astro-ph/0406355].
[4]
A. Melchiorri, A. Polosa and A. Strumia,
Phys. Lett. B 650, 416 (2007)
[hep-ph/0703144].
[5]
S. Tulin, H. -B. Yu and K. M. Zurek,
Phys. Rev. D 87, no. 11, 115007 (2013)
[arXiv:1302.3898 [hep-ph]].
[6]
C. Dvorkin, K. Blum and M. Kamionkowski,
Phys. Rev. D 89, 023519 (2014)
[arXiv:1311.2937 [astro-ph.CO]].
[7]
P. Madau, A. Meiksin and M. J. Rees,
Astrophys. J. 475, 429 (1997)
[astro-ph/9608010].
[8]
S. Furlanetto, S. P. Oh and F. Briggs,
Phys. Rept. 433, 181 (2006)
[astro-ph/0608032].
[9]
J. R. Pritchard and A. Loeb,
Rept. Prog. Phys. 75, 086901 (2012)
[arXiv:1109.6012 [astro-ph.CO]].
[10]
H. Tashiro and N. Sugiyama,
Mon. Not. Roy. Astron. Soc. 372, 1060 (2006)
[astro-ph/0607169].
[11]
M. Shiraishi, H. Tashiro and K. Ichiki,
Phys. Rev. D 89, 103522 (2014)
[arXiv:1403.2608 [astro-ph.CO]].
[12]
K. J. Mack and D. H. Wesley,
arXiv:0805.1531 [astro-ph].
[13]
H. Tashiro and N. Sugiyama,
Mon. Not. Roy. Astron. Soc. 435, 3001 (2013)
arXiv:1207.6405 [astro-ph.CO].
[14]
S. R. Furlanetto, S. P. Oh and E. Pierpaoli,
Phys. Rev. D 74, 103502 (2006)
[astro-ph/0608385].
[15]
L. Chuzhoy,
arXiv:0710.1856 [astro-ph].
[16]
M. P. van Haarlem, M. W. Wise, A. W. Gunst, G. Heald, J. P. McKean, J. W. T. Hessels, A. G. de Bruyn and R. Nijboer et al.,
Astronomy & Astrophysics, 556, 53 (2013)
arXiv:1305.3550 [astro-ph.IM].
[17]
C. J. Lonsdale, R. J. Cappallo, M. F. Morales, F. H. Briggs, L. Benkevitch, J. D. Bowman, J. D. Bunton and S. Burns et al.,
Proceedings of the IEEE, 97, 1497 (2009)
arXiv:0903.1828 [astro-ph.IM].
[18]
G. Paciga, J. Albert, K. Bandura, T. -C. Chang, Y. Gupta, C. Hirata, J. Odegova and U. -L. Pen et al.,
arXiv:1301.5906 [astro-ph.CO].
[19]
C. -P. Ma and E. Bertschinger,
Astrophys. J. 455, 7 (1995)
[astro-ph/9506072].
[20]
S. Davidson, S. Hannestad and G. Raffelt,
JHEP 0005, 003 (2000)
[hep-ph/0001179].
[21]
D. Feldman, Z. Liu and P. Nath,
Phys. Rev. D 75, 115001 (2007)
[hep-ph/0702123 [HEP-PH]].
[22]
E. Masso, S. Mohanty and S. Rao,
Phys. Rev. D 80, 036009 (2009)
[arXiv:0906.1979 [hep-ph]].
[23]
S. D. McDermott, H. -B. Yu and K. M. Zurek,
Phys. Rev. D 83, 063509 (2011)
[arXiv:1011.2907 [hep-ph]].
[24]
I. Lopes, K. Kadota and J. Silk,
Astrophys. J. Lett. 780, L15 (2014)
[arXiv:1310.0673 [astro-ph.SR]].
[25]
E. Del Nobile, G. B. Gelmini, P. Gondolo and J. -H. Huh,
JCAP 1406, 002 (2014)
[arXiv:1401.4508 [hep-ph]].
[26]
K. Kadota and J. Silk,
Phys. Rev. D 89, 103528 (2014)
[arXiv:1402.7295 [hep-ph]].
[27]
S. A. Raby and G. West,
Nucl. Phys. B 292, 793 (1987).
[28]
P. Krstić and D. Schultz,
Phys. Rev. A 60, 2118 (1999).
[29]
D. Scott and A. Moss,
arXiv:0902.3438 [astro-ph.CO].
[30]
S. Seager, D. D. Sasselov and D. Scott,
Astrophys. J. 523, L1 (1999)
[astro-ph/9909275].
[31]
B. D. Wandelt, R. Dave, G. R. Farrar, P. C. McGuire, D. N. Spergel and P. J. Steinhardt,
astro-ph/0006344.
[32]
S. Davidson, S. Hannestad and G. Raffelt,
JHEP 0005, 003 (2000)
[hep-ph/0001179].
[33]
G. B. Field, Proc. I.R.E. 46, 240 (1958).
[34]
G. B. Field, Astrophys. J. 129, 536 (1959).
[35]
A. Lewis and A. Challinor,
Phys. Rev. D 76, 083005 (2007)
[astro-ph/0702600 [ASTRO-PH]].
[36]
S. Furlanetto and M. Furlanetto,
Mon. Not. Roy. Astron. Soc. 379, 130 (2007)
[astro-ph/0702487 [ASTRO-PH]].
[37]
S. A. Wouthuysen, Astron. J. 57, 31 (1952).
[38]
B. Ciardi and P. Madau,
Astrophys. J. 596, 1 (2003)
[astro-ph/0303249].
[39]
H. Tashiro,
PTEP 2014, no. 6, 06B107 (2014).
[40]
Y. B. Zeldovich and R. A. Sunyaev, Astrophys. Space Sci., 4, 301 (1969).
[41]
R. A. Sunyaev and Y. B. Zeldovich, Astrophys. Space Sci., 7, 20 (1970).
[42]
R. Khatri, R. A. Sunyaev and J. Chluba,
Astron. Astrophys. 540, A124 (2012)
[arXiv:1110.0475 [astro-ph.CO]].
[43]
J. B. Dent, D. A. Easson and H. Tashiro,
Phys. Rev. D 86, 023514 (2012)
[arXiv:1202.6066 [astro-ph.CO]].
[44]
J. Chluba, A. L. Erickcek and I. Ben-Dayan,
Astrophys. J. 758, 76 (2012)
[arXiv:1203.2681 [astro-ph.CO]].
[45]
R. A. Sunyaev and R. Khatri,
Int. J. Mod. Phys. D 22, 1330014 (2013)
[arXiv:1302.6553 [astro-ph.CO]]. |
NEAR-INFRARED PHOTOMETRY OF THE STAR CLUSTERS IN THE DWARF IRREGULAR GALAXY IC 5152
JAEMANN KYEONG${}^{1}$, EON$-$CHANG SUNG${}^{1}$, SANG CHUL KIM${}^{1}$, SANGMO TONY SOHN${}^{2,3}$ AND HYUN$-$IL SUNG${}^{1}$
Abstract
We present $JHK$-band near-infrared photometry of star clusters in the dwarf irregular
galaxy IC 5152. After excluding possible foreground stars, a number of candidate star
clusters are identified in the near-infrared images of IC 5152, which include young
populations. In particular, five young star clusters are identified in the $(J-H,H-K)$
two-color diagram, and the total extinction values toward these clusters are estimated
to be $A_{V}=2-6$ from a comparison with the theoretical star cluster models of
Leitherer et al. (1999).
${}^{1}$Korea Astronomy & Space Science Institute, Taejon 305-348, Korea
E-mail: jman,ecsung,sckim,[email protected]
${}^{2}$Center for Space Astrophysics, Yonsei University, Seoul 120-749, Korea
${}^{3}$Space Astrophysics Lab, California Institute of Technology, MC 405-47, 1200 East California Boulevard, Pasadena, CA 91125
E-mail: [email protected]
(Received Nov. 16, 2006; Accepted Dec. 1, 2006)
Key words :
galaxies: individual (IC 5152) — galaxies: dwarf irregular galaxies —
galaxies: star clusters — galaxies: photometry - infrared: galaxies
Corresponding Author: J. Kyeong
I INTRODUCTION
Since the discovery of new Milky Way satellite galaxies
(e.g., Ursa Major dwarf spheroidal galaxy (Willman et al. 2005),
Canes Venatici dwarf galaxy (Zucker et al. 2006a),
Bootes dwarf galaxy (Belokurov et al. 2006),
Ursa Major II dwarf spheroidal galaxy (Zucker et al. 2006b)) and
new M31 satellite galaxies (e.g.,
Andromeda IX dwarf spheroidal galaxy (Zucker et al. 2004),
Andromeda X dwarf spheroidal galaxy (Zucker et al. 2006c)),
the number of dwarf galaxies in the Local Group has increased to
well over 30 (Grebel 2000).
Dwarf galaxies are the most abundant type of galaxy in groups and clusters and
are the building blocks of more massive galaxies.
Furthermore, the intrinsic properties of dwarf galaxies,
such as mass, density, and gas content, are likely to affect their star
formation history and chemical enrichment.
Therefore, understanding the properties of dwarf galaxies
provides clues to the galaxy formation processes.
IC 5152 (=IRAS 21594–51, ESO 237-27), a dwarf irregular galaxy (Sdm IV$-$V),
is one of the best objects available to
study starbursts due to its rather close distance.
On the other hand, a very bright star (HD 209142, $V=7.9$, $K_{s}=7.14$)
in the north-western part of the galaxy makes it difficult to obtain deep exposures.
Basic information of IC 5152 is summarized in Table 1.
It has been controversial whether IC 5152 is a member of the Local Group or not.
Sandage (1986) estimated a distance modulus of $(m-M)_{0}=26$ from the brightest stars
in this galaxy, and
van den Bergh (1994) pointed out that this distance estimate is too uncertain to assess
the possibility of Local Group membership.
Zijlstra & Minniti (1999) found the distance modulus of IC 5152 to be
$(m-M)_{0}=26.15\pm 0.2$ using the $VI$ color-magnitude diagram (CMD)
and the red giant branch (RGB) tip method,
but their photometry was not deep enough to determine the RGB tip magnitude.
Recently, using the HST snapshot survey of nearby galaxies,
Karachentsev et al. (2002) accurately measured the magnitude of the tip of the RGB (TRGB) of
IC 5152 to be $I$(TRGB)$=22.58\pm 0.16$ mag and
derived the distance to be $2.07\pm 0.18$ Mpc, equivalent to $(m-M)_{0}=26.58\pm 0.18$;
therefore, the galaxy is located at the outskirts of the Local Group.
The brightest H ii region, #A, has been known in IC 5152, and several other H ii regions
are known in the disk of this galaxy (Hidalgo-Gámez & Olofsson 2002).
IC 5152 is also an H i rich dwarf (Buyle et al. 2006).
The H i image of the galaxy shows that H i gas is spread over the face of IC 5152,
which suggests that there should be many newly formed young stellar populations in the disk.
The young populations are also contained in massive young clusters, which are hardly
seen in optical images due to the surrounding gas clouds.
Light at near-infrared wavelengths is less affected by dust absorption than
light at visible wavelengths ($A_{K}=0.1A_{V}$), making it easier to probe heavily obscured
star-forming regions and detect young clusters.
Since the data used in this study do not have enough angular resolution
to detect individual stars of IC 5152,
the goal of this paper is to find star clusters and investigate their properties.
Especially, young star clusters with heavy reddening are investigated using the
near-infrared wavelength characteristics.
This paper is organized as follows.
We describe our near-infrared observations and data reduction in Section II.
Section III presents the results and analyses of star clusters in IC 5152, and
a summary is given in Section IV.
II NEAR-INFRARED OBSERVATIONS AND DATA REDUCTION
$JHK_{n}$ images of IC 5152 were obtained on the night of UT 2002 June 30
using the 2.3 m telescope
and the infrared (IR) camera CASPIR (Cryogenic Array Spectrometer/Imager)
at the Siding Spring Observatory.
A gray-scale map of the $K_{n}$-band CASPIR image of IC 5152 is displayed
in Figure 1.
The gain and readout noise of CASPIR are
9 e${}^{-}$/ADU and 50 electrons, respectively.
CASPIR uses a 256 $\times$ 256 InSb detector and the pixel scale is
$0\hbox{$.\!\!^{\prime\prime}$}50$ pixel${}^{-1}$, which gives the field of view of
2$.\mskip-4.0mu ^{\prime}$1 $\times$ 2$.\mskip-4.0mu ^{\prime}$1.
In order to get high signal-to-noise ratio, a total of 9 frames with 5 s exposure
and 12 cycles were obtained, which
means that the combined image is a result of combining 108 frames of the same
exposure.
The bias frames were frequently obtained over the night because the
bias level was known to vary throughout the night (McGregor 1995).
Also, in order to remove possible detector instability and
temporal changes of the bright IR background,
nearby sky frames ($10\hbox{${}^{\prime}$}$ away from IC 5152) were taken with the same
exposure time.
The instrument characteristics, together with the strong and rapidly varying IR sky background,
make the reduction procedure of near-IR data more complex than that of optical CCD data.
First, the raw data had to be linearized to recover the low and high intensity
information accurately according to the formula given by McGregor (1995).
Then the bias and dark frames obtained just before and after each object frame
were subtracted. The median combined sky frame was subtracted from the object
frame.
Finally, the data images were flattened by dome flats.
Dome flats were
obtained for each filter by differencing exposures with the flatfield lamp on
and off, and several such frames were combined to form the final dome
flat for each filter.
We used the point spread function (PSF) fitting packages of DAOPHOT II and ALLSTAR for
the photometry.
Stars of different frames were matched by DAOMATCH/DAOMASTER routines
(Stetson 1993).
For each frame, several isolated unsaturated stars were used to construct
a good model PSF.
Aperture corrections were made using the program DAOGROW (Stetson 1990)
for which we used the same stars used in the PSF construction.
In order to transform instrumental magnitudes to the standard system, we observed 12 SAAO
photometric standard stars given by Carter & Meadows (1995)
throughout the observing run.
We derived the transformation equations between our instrumental magnitudes and the
standard values $J,H,K$ as follows:
$$J=j+18.87(\pm 0.10)-0.16(\pm 0.09)\,X_{j}-0.09(\pm 0.07)\,(J-K)$$
$$H=h+18.69(\pm 0.03)-0.14(\pm 0.09)\,X_{h}+0.20(\pm 0.07)\,(J-H)$$
$$K=k+17.79(\pm 0.05)-0.07(\pm 0.13)\,X_{k}+0.13(\pm 0.09)\,(J-K)$$
where $j$, $h$, and $k$ are instrumental magnitudes, and $X_{j}$, $X_{h}$, and $X_{k}$ are the airmasses
at each bandpass.
The residuals of standard star calibration were 0.10, 0.03, and 0.06 mag for $J,H$, and
$K$ filter, respectively.
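Note that the standard colors appear on the right-hand sides, so the system is implicit; since the color coefficients are small, a simple fixed-point iteration converges quickly. A sketch (our own code; coefficients copied from the equations above, with the calibration uncertainties omitted):

```python
def calibrate_jhk(j, h, k, X_j, X_h, X_k, n_iter=30):
    """Transform CASPIR instrumental magnitudes to standard J, H, K.

    The standard colors (J-K) and (J-H) on the right-hand sides are
    obtained by fixed-point iteration, starting from the instrumental
    magnitudes as the initial guess.
    """
    J, H, K = j, h, k
    for _ in range(n_iter):
        J = j + 18.87 - 0.16 * X_j - 0.09 * (J - K)
        H = h + 18.69 - 0.14 * X_h + 0.20 * (J - H)
        K = k + 17.79 - 0.07 * X_k + 0.13 * (J - K)
    return J, H, K
```

Because the largest color coefficient is 0.20, each iteration shrinks the residual by at least a factor of a few, so a few tens of iterations are ample.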
We can check the photometric calibration accuracy using a common star
observed by CASPIR (the second brightest star in the $K$-band CASPIR frame
at X=175.8, Y=198.2) and the 2MASS Point Source Catalog (Cutri et al. 2003).
The comparison gives good agreement in the $J$ and $H$ bands within
the photometric errors.
In the $K$ band, however, there is a rather large difference of
$\Delta$(Ours$-$2MASS)$=0.13$ mag due to the different filter systems used,
$K_{n}$ and $K_{s}$.
III STAR CLUSTERS IN IC 5152
(a) Contamination of Foreground Stars
We adopted the distance modulus of IC 5152
$(m-M)_{0}=26.6\pm 0.2$ (Karachentsev et al. 2002)
and the reddening toward this galaxy $E(B-V)=0.0$
(Burstein & Heiles 1984; Zijlstra & Minniti 1999).
The color-magnitude diagrams for all the objects in IC 5152
are shown in Figure 2.
Since the detected objects are not resolved, they could be contaminated by foreground
stars.
Likely foreground stars cannot be identified using the near-IR CMDs and/or the
two-color diagram, because their near-IR colors and brightnesses are not separated
from those of the IC 5152 members.
Therefore, we identified the foreground stars using another catalog.
As already pointed out by Zijlstra & Minniti (1999),
$2\sim 3$ foreground stars are expected in our observed field of
2$.\mskip-4.0mu ^{\prime}$1 $\times$ 2$.\mskip-4.0mu ^{\prime}$1
with $V\leq 21$ mag from the Galactic star count model of Ratnatunga & Bahcall (1985).
Apart from the brightest star (HD 209142)
at the upper-right corner of the image, one more foreground star can be identified using the 2MASS Point Source Catalog.
The comparison with 2MASS sources confirms the second
brightest star ($\alpha$(J2000) = 22${}^{h}$ 02${}^{m}$ 39.${}^{s}$4, $\delta$(J2000) =
$-$51${}^{\circ}$ 17${}^{\prime}$ 05$.\!\!^{\prime\prime}$6, $J$ = 14.28, $H$ = 13.66, $K_{s}$ = 13.39).
We can also identify the foreground stars in the blue plate (IIIaO + GG38) of NED
(NASA/IPAC Extragalactic Database)
after subtracting out the diffuse body of the galaxy.
A point-source-like object was found during this process at X=28 and Y=29 of
our $K$-band image, with $J$=16.89, $H$=16.09, and $K$=15.86 mag.
Only very faint background galaxies might be included in our small field.
(b) Star Clusters
Using the magnitude of the brightest blue supergiants, $M_{K}=-10$ mag
(Rozanski & Rowan-Robinson 1994) and our adopted distance modulus of IC 5152
$(m-M)_{0}=26.6\pm 0.2$ (Karachentsev et al. 2002),
the bright upper limit of supergiant magnitude is set to $m_{K}=16.6$ mag.
The peak of the M31 globular cluster luminosity function lies near $M_{K}=-10$
(Barmby et al. 2001); the M31 globular cluster system is one of the
best studied systems. We have detected 20 star cluster candidates brighter
than $K=16.6$ mag and listed them in Table 2 together with the $JHK$ photometry.
In order to make direct confirmation for these star cluster candidates,
we have examined the radial profiles of the candidate clusters with
the HST/WFPC2 F814W image (Karachentsev et al. 2002).
Stars on the WFPC2 WF3 image (pixel scale of 0.${}^{\prime\prime}$1)
typically have a FWHM of 0.9 pixels. In contrast, our sample of cluster candidates
has FWHMs in the range of 1.5 to 2.6 pixels. Thus, the result suggests that most of
the cluster candidates must be genuine star clusters. The young cluster candidates
found in the following subsection could not be confirmed due to the heavy
reddening at optical wavelengths.
(c) Young Clusters
It is plausible that there are many young star clusters in the star-forming galaxy IC 5152.
However, it is not easy to identify young clusters in the CMDs alone
because of the heavy internal reddening.
Therefore, the $(J-H,H-K)$ two-color diagram plotted in Figure 3
is very helpful for this purpose.
The suspected young clusters form a sequence in the ($J-H,H-K$) diagram that
parallels the reddening vector while the old clusters are located
near the main sequence or giant branch lines given by Bessell & Brett (1988).
Five objects with very red colors are detected in Figure 3,
and these are identified as compact young clusters
forming a sequence parallel to the reddening vector.
The line-of-sight extinction for each candidate young cluster was estimated by
extrapolation along the reddening vector to a point midway between the
log $t$(yr)=6.0 and 6.6 models given by Leitherer et al. (1999).
The theoretical colors were computed by the $Z=0.004$ STARBURST99 models
with $\alpha=2.35$, $M_{\rm low}=1M_{\odot}$, and $M_{\rm up}=100M_{\odot}$,
where $Z$ is the metallicity,
$\alpha$ is the exponent of the power law initial mass function, and
$M_{\rm low}$ and $M_{\rm up}$ are the low- and high-mass cutoff values,
respectively.
While the metallicity derived for IC 5152 by Zijlstra & Minniti (1999) is $Z=0.002$,
we have adopted the metallicity of $Z=0.004$
since the STARBURST99 models are calculated only for five fixed values of the metallicity.
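The extrapolation along the reddening vector can be cast as a one-parameter least-squares fit for $A_{V}$. The sketch below is our own illustration; the function name is ours, the reddening ratios per unit $A_{V}$ are assumed values following the Rieke & Lebofsky (1985) extinction law, and the intrinsic model colors must come from the STARBURST99 tables:

```python
def a_v_from_colors(jh_obs, hk_obs, jh_model, hk_model,
                    e_jh_per_av=0.107, e_hk_per_av=0.063):
    """Least-squares A_V from the displacement of the observed
    (J-H, H-K) colors relative to the intrinsic model colors,
    projected onto the reddening vector.
    """
    d_jh = jh_obs - jh_model  # color excess E(J-H)
    d_hk = hk_obs - hk_model  # color excess E(H-K)
    return ((d_jh * e_jh_per_av + d_hk * e_hk_per_av)
            / (e_jh_per_av**2 + e_hk_per_av**2))
```

A cluster whose observed colors are displaced from the model colors by exactly $A_{V}=5$ along the reddening vector returns 5 by construction.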
The photometry result of the candidate young clusters and
their reddening values are listed in Table 3.
In particular, the young cluster near the H ii region #A (X=60.41, Y=119.80)
shows heavy reddening ($A_{V}=5.6$).
Finally, the identified objects in the observed field are shown in Figure 4.
IV SUMMARY
From the analysis of our $JHK$ photometry of IC 5152, we can summarize
our results as follows:
1. We found 20 star cluster candidates based on the distance modulus $(m-M)_{0}=26.6$
and the bright limit of supergiant stars ($M_{K}=-10.0$), after excluding the
foreground stars expected from the Galactic star count model of
Ratnatunga & Bahcall (1985). The radial profiles of these candidates on
the HST/WFPC2 image show that the FWHMs of these objects are much larger
than those of typical stars, confirming that these candidates must be
genuine star clusters.
2. Possible young star clusters with heavy internal reddening are identified
in the ($J-H,H-K$) two-color diagram. The total extinction values toward these
young clusters are estimated to be $A_{V}=2-6$ from a comparison with
theoretical values.
We would like to thank the staff of MSSSO, Australian National University,
for the use of the observing facilities.
This research has made use of the NASA/IPAC Extragalactic Database (NED)
which is operated by the Jet Propulsion Laboratory, California Institute
of Technology, under contract with the National Aeronautics and Space
Administration.
References
1
Barmby, P., Huchra, J. P., & Brodie, J. P. 2001,
The M31 globular cluster luminosity function, AJ, 121, 1482
2
Belokurov, V., et al. 2006, A faint new Milky Way satellite in Bootes,
ApJ, 647, L111
3
Bessell, M. S., & Brett, J. M. 1988,
$JHKLM$ photometry – Standard systems, passbands, and intrinsic colors,
PASP, 100, 1134
4
Burstein, D. A., & Heiles, C. 1984,
Reddening estimates for galaxies in the Second Reference Catalog and
the Uppsala General Catalog, ApJS, 54, 33
5
Buyle, P., Michielsen, D., De Rijcke, S., Ott, J. & Dejonghe, H. 2006,
The CO content of the Local Group dwarf irregular galaxies IC 5152, UGCA 438,
and the Phoenix dwarf, MNRAS, 373, 793
6
Carter, B. S., & Meadows, V. S. 1995,
Fainter southern $JHK$ standards suitable for infrared arrays, MNRAS, 276, 734
7
Cutri, R. M., et al. 2003, The IRSA 2MASS All-Sky Point Source Catalog,
http://irsa.ipac.caltech.edu/
applications/Gator/
8
Grebel, E. K. 2000, The star formation history of the Local Group,
in Star Formation from the Small to the Large Scale,
Proceedings of the 33rd ESLAB symposium, eds. F. Favata, A. Kaas and A. Wilson,
(ESA, SP-445:ESA), 87
9
Hidalgo-Gámez, A. M. & Olofsson, K. 2002,
The chemical content of a sample of dwarf irregular galaxies, A&A, 389, 836
10
Huchtmeier, W., & Richter, O. G. 1986,
H i -observations of galaxies in the Kraan-Korteweg – Tammann catalogue
of nearby galaxies. I. The data, A&AS, 63, 323
11
Karachentsev, I. D., Sharina, M. E., Makarov, D. I., Dolphin, A. E.,
Grebel, E. K., Geisler, D., Guhathakurta, P., Hodge, P. W., Karachentseva, V. E.,
Sarajedini, A., & Seitzer, P. 2002, The very local Hubble flow,
A&A, 389, 812
12
Leitherer, C., Schaerer, D., Goldader, J. D., Gónzalez Delgado, R. M., Robert, C.,
Kune, D. F., de Mello, D. F., Devost, D., & Heckman, T. M.
1999, Starburst99: Synthesis Models for Galaxies with Active Star Formation,
ApJS, 123, 3
13
McGregor, P. 1995, Users Manual for the CASPIR on the MSSSO 2.3 m Telescope
14
Ratnatunga, K. U., & Bahcall, J. N. 1985,
Estimated number of field stars toward Galactic globular clusters and
Local Group Galaxies, ApJS, 59, 63
15
Rieke, G. H., & Lebofsky, M. J. 1985,
The interstellar extinction law from 1 to 13 microns, ApJ, 288, 618
16
Rozanski, R., & Rowan-Robinson, M. 1994,
The accuracy of the brightest stars in galaxies as distance indicators,
MNRAS, 271, 530
17
Sandage, A. 1986, The redshift-distance relation. IX. Perturbation of
the very nearby velocity field by the mass of the Local Group, ApJ, 307, 1
18
Sandage, A., & Bedke, J. 1985,
Candidate galaxies for study of the local velocity field and distance scale
using Space Telescope. I. The most easily resolved, AJ, 90, 1992
19
Skillman, E. D., Kennicutt, R. C., & Hodge, P. W. 1989,
Oxygen abundances in nearby dwarf irregular galaxies, ApJ, 347, 875
20
Stetson, P. B. 1990,
On the growth-curve method for calibrating stellar photometry with CCDs,
PASP, 102, 932
21
Stetson, P. B. 1993, Further progress in CCD photometry, in
Stellar Photometry, Current Techniques and Future Developments,
Proceedings of the IAU Colloquium No. 136,
eds. C. J. Butler & I. Elliot(Cambridge University Press: Cambridge), 291
22
van den Bergh, S. 1994, The outer fringes of the Local Group, AJ, 107, 1328
23
Willman, B., et al. 2005, A new Milky Way dwarf galaxy in Ursa Major,
ApJ, 626, L85
24
Zijlstra, A. A., & Minniti, D. 1999, A dwarf irregular galaxy at the edge
of the Local Group: Stellar populations and distance of IC 5152, AJ, 117, 1743
25
Zucker, D. B., et al. 2004, Andromeda IX: a new dwarf spheroidal satellite
of M31, ApJ, 612, L121
26
Zucker, D. B., et al. 2006a, A new Milky Way dwarf satellite in Canes Venatici,
ApJ, 643, L103
27
Zucker, D. B., et al. 2006b, A curious Milky Way satellite in Ursa Major,
ApJ, 650, L41
28
Zucker, D. B., et al. 2006c, Andromeda X, a new dwarf spheroidal galaxy of
M31: Photometry, ApJL, submitted (astro-ph/0601599) |
Sablas: Learning Safe Control for Black-box Dynamical Systems
Zengyi Qin${}^{1}$, Dawei Sun${}^{2}$, and Chuchu Fan${}^{1}$
Manuscript received: September 9, 2021; Revised December 11, 2021; Accepted January 5, 2022. This paper was recommended for publication by Editor Clement Gosselin upon evaluation of the Associate Editor and Reviewers’ comments. This work was supported by the Defense Science and Technology Agency in Singapore, but this article solely reflects the opinions and conclusions of its authors and not DSTA Singapore or the Singapore Government.
${}^{1}$Zengyi Qin and Chuchu Fan are with Massachusetts Institute of Technology, Cambridge, MA, 02139 USA.
{qinzy, chuchu}@mit.edu
${}^{2}$Dawei Sun is with University of Illinois at Urbana-Champaign, Champaign, IL, 61820 USA.
[email protected]
Digital Object Identifier (DOI): see top of this page.
Abstract
Control certificates based on barrier functions have been a powerful tool to generate provably safe control policies for dynamical systems. However, existing methods based on barrier certificates normally target white-box systems with differentiable dynamics, which makes them inapplicable to many practical applications where the system is a black-box and cannot be accurately modeled. On the other hand, model-free reinforcement learning (RL) methods for black-box systems suffer from a lack of safety guarantees and low sampling efficiency. In this paper, we propose a novel method that can learn safe control policies and barrier certificates for black-box dynamical systems, without requiring an accurate system model. Our method re-designs the loss function to back-propagate gradients to the control policy even when the black-box dynamical system is non-differentiable, and we show that the safety certificates hold on the black-box system. Empirical results in simulation show that our method can significantly improve the performance of the learned policies by achieving nearly 100% safety and goal-reaching rates using far fewer training samples, compared to state-of-the-art black-box safe control methods. Our learned agents can also generalize to unseen scenarios while keeping the original performance. The source code can be found at https://github.com/Zengyi-Qin/bcbf.
Index Terms:
Robot Safety; Robust/Adaptive Control.
I Introduction
Guaranteeing safety is an open challenge in designing control policies for many autonomous robotic systems, ranging from consumer electronics to self-driving cars and aircraft. In recent years, the development of machine learning (ML) has created unprecedented opportunities to control modern autonomous systems with growing complexity. However, ML also poses great challenges for developing high-assurance autonomous systems that are provably dependable. While many learning-based approaches [1, 2, 3, 4] have been proposed to train controllers to accomplish complex tasks with improved empirical performance, the lack of safety certificates for the learning-enabled components has been a fundamental hurdle that blocks the massive deployment of the learned solutions.
For decades, mathematical control certificates have been used as proofs that the desired properties of the system are satisfied in closed loop with certain control policies. For example, Control Lyapunov Functions [5, 6] and Control Contraction Metrics [7, 8, 9, 10] ensure the existence of a controller under which the system converges to an equilibrium or a desired trajectory. Control Barrier Functions [11, 12, 13, 14, 15, 16, 17] ensure the existence of a controller that keeps the system inside a safe invariant set.
Classical control synthesis methods based on Sum-of-Squares [18, 12, 19, 20] and linear programs [21, 22] usually only work for systems with low-dimensional state space and simple dynamics. This is because they choose polynomials as the candidate certificate functions, where low-order polynomials may not be a valid certificate for complex dynamical systems and high-order polynomials will increase the number of decision variables exponentially in those optimization problems.
Recent data-driven methods [17, 23, 9, 24] have shown significant progress in overcoming the limitations of the traditional synthesis methods. They can jointly learn the control certificates and policies as neural networks (NN). However, data-driven methods that can generate control certificates still require an explicit (or white-box) differentiable model of the system dynamics, as the derivatives of the dynamics are required in the learning process.
Finally, many works study the use of control certificates such as CBF to provide safety guarantees during and after training the policies [16, 25], but they more or less require an accurate model of the system dynamics or a reasonable model of the system uncertainties to build the certificates.
Many dynamical systems in the real-world are black-box and lack accurate models. The most popular model-free approach to handle such black-box systems is safe reinforcement learning (RL).
Safe RL methods enforce safety and performance by maximizing the expectation of the cumulative reward while constraining the expectation of the cost to be less than or equal to a given threshold. The biggest disadvantage of safe RL methods is the lack of a systematic or theoretically grounded way of designing cost and reward functions, which instead relies heavily on empirical trial and error. The lack of explainable safety guarantees and low sampling efficiency also make it difficult for safe RL methods to exhibit satisfactory performance.
Instead of settling for the trade-off between the strong guarantees from control certificates and the practicality of model-free methods, in this work we propose SABLAS to achieve both. SABLAS is a general-purpose approach to learning safe control for black-box dynamical systems. SABLAS enjoys the guarantees provided by the safety certificate from CBF theory without requiring an accurate model of the dynamics. Instead, SABLAS only needs a nominal dynamics function that can be obtained through regression over simulation data. There is no need to model the error between the nominal dynamics and the real dynamics, since SABLAS re-designs the loss function in a novel way to back-propagate gradients to the controller even when the black-box dynamical system is non-differentiable.
The resulting CBF (and the corresponding safety certificate) holds directly on the original black-box system if the training process converges.
The proposed algorithm is easy to implement and follows almost the same procedure as learning a CBF for white-box systems, with minimal modification.
SABLAS fundamentally solves the problem that control certificates cannot be learned directly on black-box systems, and opens a new chapter in the use of CBF theory for synthesizing safe controllers for black-box dynamical systems.
Experimental results demonstrate the advantages of SABLAS over leading learning-based safe control methods for black-box systems, including CPO [26], PPO-Safe [3] and TRPO-Safe [27, 28]. We evaluate SABLAS on two challenging tasks in simulation: drone control in a city and ship control in a valley (as shown in Fig. 1). The dynamics of the drone and ship are assumed unknown. In both tasks, the controlled agent should avoid collision with uncontrolled agents and other obstacles, and reach its goal before the testing episode ends. We also examine the generalization capability of SABLAS on testing scenarios that are not seen in training. Fig. 1 shows that SABLAS can reach a near 1.0 relative safety rate and task completion rate while using only $1/10$ of the training data compared to existing safe RL methods, demonstrating a significant improvement.
We also study the effect of model error (between the nominal model and actual dynamics) on the performance of the learned policy. It is shown that SABLAS is tolerant to large model errors while keeping a high safety rate.
A detailed description of the results is presented in the experiment section. Video results can be found at supplementary materials.
To summarize the strength of SABLAS:
1. SABLAS can jointly find a safety certificate (i.e. CBF) and the corresponding control policy on black-box dynamics;
2. Unlike RL-based methods that need tedious trial-and-error on designing the rewards, SABLAS provides a systematic way of learning a certified control policy, without parameters (other than the standard hyper-parameters in NN training) that need fine-tuning;
3. Empirical results show that SABLAS achieves nearly perfect performance in terms of guaranteeing safety and goal-reaching, using far fewer samples than state-of-the-art safe RL methods.
II Related Work
There is a rich literature on controlling black-box systems and safe RL. Due to the space limit, we only discuss a few directly related and commonly used techniques for black-box system control. We also skip the literature review for the large body of work on model-based safe control and trajectory planning, as the research problems being solved there are very different.
II-A Controller Synthesis for Black-box Systems
Proportional–integral–derivative (PID) controllers are widely used in controlling black-box systems. The advantage of a PID controller is that it does not rely on a model of the system and only requires measurement of the state. A drawback is that it does not guarantee safety or stability, and the system may overshoot or oscillate about the control setpoint. If the underlying black-box system is linear time-invariant, existing work [29] has presented a polynomial-time control algorithm without relying on any knowledge of the environment. For non-linear black-box systems, the dynamics model can be approximated using system identification and controlled using model-based approaches [30], with PID used to handle the error part. The concept of practical relative degree [31] has also been proposed to enhance control performance on systems with heavy uncertainties. Recent advances in reinforcement learning [4, 3, 27, 32] also give us insight into treating the system as a pure black-box and estimating the gradient of black-box functions in order to optimize the control variables. However, in safety-critical systems, these black-box control methods still lack formal safety guarantees or certificates.
Simulation-guided controller synthesis methods can also generate control certificates for black-box systems, and sometimes those certificates can indicate how policies should be constructed [21, 20, 22]. However, most of these techniques use polynomial templates for the certificates, which limits their use on high-dimensional and complex systems. Another line of work [33, 34] studies the use of data-driven reachability analysis, jointly with receding-horizon control, to construct optimal control policies. These methods rely on side information about the black-box systems (e.g. Lipschitz constants of the dynamics, monotonicity of the states, decoupling in the states’ dynamics) to do the reachability analysis, which is not needed in our method.
II-B Safe Reinforcement Learning
Safe RL [26, 28, 2, 35, 36] extends RL by adding constraints on the expectation of certain cost functions, which encode safety requirements or resource limits. CPO [26] derived a policy improvement step that increases the reward while satisfying the safety constraint. DCRL [2] imposes constraints on state density functions rather than cost value functions, and shows that density constraints have better expressiveness than cost value function-based constraints. RCPO [28] weights the cost using Lagrangian multipliers and adds it to the reward. FISAR [36] uses a meta-optimizer to achieve forward invariance in the safe set. A disadvantage of safe RL methods is that they do not provide safety guarantees, or their guarantees cannot be realized in practice. The problems of sampling efficiency and sparsity of the cost also increase the difficulty of synthesizing safe controllers through RL.
II-C Safety Certificate and Control Barrier Functions
Mathematical certificates can serve as proofs that the desired property of the system is satisfied under the corresponding planning [37, 38] and control components. Such a certificate can guide controller synthesis for dynamical systems in order to ensure safety. For example, Control Lyapunov Functions [5, 6, 24] ensure the existence of a controller so that the system converges to the desired behavior. Control Barrier Functions [11, 12, 13, 14, 15, 16, 17, 39] ensure the existence of a controller that keeps the system inside a safe invariant set. However, existing controller synthesis with safety certificates relies heavily on a white-box model of the system dynamics. For black-box systems or systems with large model uncertainty, these methods are not applicable. While recent work [40] proposes to learn the effect of model uncertainty on the CBF conditions, it still assumes that a handcrafted CBF is given, which is not always available for complex non-linear dynamical systems. Our approach represents a substantial improvement over existing CBF-based safe controller synthesis strategies. The proposed SABLAS framework simultaneously enjoys the safety certificate from CBF theory and effectiveness on black-box dynamical systems.
III Preliminaries
III-A Safety of Black-box Dynamical Systems
Definition 1 (Black-box Dynamical System).
A black-box dynamical system is represented by a tuple $\langle\mathcal{S},\mathcal{U},f\rangle$, where $\mathcal{S}\subseteq\mathbb{R}^{n}$ is the state space and $\mathcal{U}\subseteq\mathbb{R}^{m}$ is the control input space. $f:\mathcal{S}\times\mathcal{U}\mapsto\mathbb{R}^{n}$ is the system dynamics $\dot{s}=f(s,u)$, which is unknown due to the black-box assumption.
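As an illustration (not from the paper; all names are hypothetical), the black-box assumption can be mirrored in code by exposing only sampled transitions while keeping $f$ itself hidden from the controller designer:

```python
class BlackBoxSystem:
    """Toy stand-in for <S, U, f>: callers can sample transitions via
    step(), but the dynamics f itself stays hidden."""

    def __init__(self, f, dt):
        self._f = f      # unknown to the controller designer
        self.dt = dt

    def step(self, s, u):
        # Forward-Euler sample: s(t + dt) = s(t) + f(s, u) * dt
        return s + self._f(s, u) * self.dt


# Example: a scalar single integrator hidden behind the interface.
sys_box = BlackBoxSystem(lambda s, u: u, dt=0.1)
s_next = sys_box.step(0.0, 1.0)
```

A learning algorithm interacting with `sys_box` sees only state samples, which is exactly the setting Definition 2 below assumes.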
Let $\mathcal{S}_{0},\mathcal{S}_{g},\mathcal{S}_{s}$ and $\mathcal{S}_{d}$ denote the set of initial states, goal states, safe states and dangerous states respectively. The problem we aim to solve is formalized in Definition 2.
Definition 2 (Safe Control of Black-box Systems).
Given a black-box dynamical system modeled as in Definition 1, the safe control problem aims to find a controller $\pi:\mathcal{S}\mapsto\mathcal{U}$ such that under control input $u=\pi(s)$ and the unknown dynamics $\dot{s}=f(s,u)$, the following is satisfied:
$$\displaystyle\exists~{}t>0,~{}s(t)\in\mathcal{S}_{g}$$
(1)
$$\displaystyle\text{s.t.}~{}\forall~{}t>0,~{}s(t)\not\in\mathcal{S}_{d}\text{~{}and~{}}s(0)\in\mathcal{S}_{0}$$
The above definition requires that, starting from the initial set $\mathcal{S}_{0}$, the system should reach the goal set $\mathcal{S}_{g}$ while never entering the dangerous set $\mathcal{S}_{d}$ under controller $\pi$.
III-B Control Barrier Function as Safety Certificate
A common approach for guaranteeing safety of dynamical systems is via control barrier functions (CBF) [11], which ensure that the state always stays in a safe invariant set. A control barrier function $h:\mathcal{S}\mapsto\mathbb{R}$ satisfies:
$$\displaystyle h(s)\geq 0,\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad~{}~{}\forall s\in\mathcal{S}_{0}$$
(2)
$$\displaystyle h(s)<0,\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad~{}~{}\forall s\in\mathcal{S}_{d}$$
$$\displaystyle\dot{h}+\alpha(h)=\frac{\partial h}{\partial s}f(s,u)+\alpha(h)\geq 0,\quad\forall s\in\mathcal{S}_{p}$$
where $\mathcal{S}_{p}=\{s\mid h(s)\geq 0\}$, $u=\pi(s)$ and $\alpha(\cdot)$ is a class-$\mathcal{K}$ function that is strictly increasing and $\alpha(0)=0$. It is proven [11] that if there exists a CBF $h$ for a given controller $\pi$, the system controlled by $\pi$ starting from $s(0)\in\mathcal{S}_{0}$ will never enter the dangerous set $\mathcal{S}_{d}$. Besides the formal safety proof in [11], there is an informal but straightforward way to understand the safety guarantee. Whenever $h(s)$ decreases to $0$, we have $\dot{h}+\alpha(h)=\dot{h}\geq 0$, which means $h(s)$ no longer decreases and $h(s)<0$ will not occur. Thus $s$ will not enter the dangerous set $\mathcal{S}_{d}$ where $h(s)<0$.
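The conditions in (2) can be sanity-checked numerically. The following sketch (a toy example of our own, not one of the paper's systems) verifies them for the scalar system $\dot{s}=u$ with candidate CBF $h(s)=1-s^{2}$, controller $\pi(s)=-s$, and $\alpha(h)=h$:

```python
import numpy as np

h = lambda s: 1.0 - s ** 2      # candidate CBF
dh_ds = lambda s: -2.0 * s      # its derivative
f = lambda s, u: u              # toy dynamics s' = u
pi = lambda s: -s               # candidate controller
alpha = lambda x: x             # a class-K function

s0 = np.linspace(-0.5, 0.5, 11)   # initial set (h >= 0 there)
sd = np.linspace(1.1, 2.0, 11)    # dangerous set (h < 0 there)
sp = np.linspace(-1.0, 1.0, 41)   # S_p = {s | h(s) >= 0}

cond1 = bool(np.all(h(s0) >= 0))
cond2 = bool(np.all(h(sd) < 0))
# dh/ds * f(s, pi(s)) + alpha(h) = 2 s^2 + (1 - s^2) = s^2 + 1 >= 0
cond3 = bool(np.all(dh_ds(sp) * f(sp, pi(sp)) + alpha(h(sp)) >= 0))
```

All three conditions hold on the sampled states, so this $(h,\pi)$ pair certifies the toy system in the sense of (2).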
III-C Co-learning Controller and CBF for White-box Systems
For white-box dynamical systems where $f(s,u)$ is known, we can jointly synthesize the controller $\pi$ and its safety certificate $h$ satisfying the CBF conditions (2) via a learning-based approach. We model $\pi$ and $h$ as neural networks with parameters $\Theta$ and $\Omega$. Given a dataset $\mathcal{D}$ of state samples in $\mathcal{S}$, the CBF conditions (2) can be translated into empirical loss functions:
$$\displaystyle\mathcal{L}_{0}(\Omega)=\frac{1}{|\mathcal{S}_{0}\cap\mathcal{D}|}\sum_{s\in\mathcal{S}_{0}\cap\mathcal{D}}\max(0,-h(s;\Omega))$$
(3)
$$\displaystyle\mathcal{L}_{d}(\Omega)=\frac{1}{|\mathcal{S}_{d}\cap\mathcal{D}|}\sum_{s\in\mathcal{S}_{d}\cap\mathcal{D}}\max(0,h(s;\Omega))$$
$$\displaystyle\mathcal{L}_{p}(\Omega,\Theta)=\frac{1}{|\mathcal{S}_{p}\cap\mathcal{D}|}\sum_{s\in\mathcal{S}_{p}\cap\mathcal{D}}$$
$$\displaystyle\quad\quad\quad\quad\max\left(0,-\frac{\partial h}{\partial s}f(s,\pi(s;\Theta))-\alpha(h(s,\Omega))\right).$$
Each of the loss functions in (3) corresponds to a CBF condition in (2). In addition to safety, we also consider goal-reaching by penalizing the difference between the safe controller $\pi$ and the goal-reaching nominal controller $\pi_{nom}$ in (4). The synthesis of $\pi_{nom}$ is well-studied and is not a contribution of this work.
$$\displaystyle\mathcal{L}_{g}(\Theta)=\frac{1}{|\mathcal{S}_{p}\cap\mathcal{D}|}\sum_{s\in\mathcal{S}_{p}\cap\mathcal{D}}||\pi(s;\Theta)-\pi_{nom}(s)||_{2}^{2}.$$
(4)
The total loss function is $\mathcal{L}_{0}(\Omega)+\mathcal{L}_{d}(\Omega)+\mathcal{L}_{p}(\Omega,\Theta)+\lambda\mathcal{L}_{g}(\Theta)$, where $\lambda$ is a constant that balances the goal-reaching and safety objectives. The total loss function is minimized via stochastic gradient descent to find the parameters $\Omega$ and $\Theta$. The dataset $\mathcal{D}$ is not fixed during training and is periodically updated with new samples by running the current controller. When the loss $\mathcal{L}_{0}+\mathcal{L}_{d}+\mathcal{L}_{p}$ converges to $0$, the resulting controller $\pi(s;\Theta)$ and CBF $h(s;\Omega)$ will satisfy (2) on unseen testing samples with a generalization bound, as proven in [17]. Therefore, a safe controller and the corresponding CBF are found for the white-box system.
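As a minimal sketch of the empirical losses in (3) (the toy $h$, controller, and sample sets here are hypothetical), each hinge term vanishes exactly when the corresponding CBF condition holds on the samples:

```python
import numpy as np

def cbf_losses(h, hdot, s0, sd, sp, alpha=lambda x: x):
    """Empirical hinge losses L_0, L_d, L_p from (3); each term is
    zero exactly when the matching CBF condition holds on a sample."""
    L0 = np.mean(np.maximum(0.0, -h(s0)))
    Ld = np.mean(np.maximum(0.0, h(sd)))
    Lp = np.mean(np.maximum(0.0, -hdot(sp) - alpha(h(sp))))
    return L0, Ld, Lp

# Toy setup: h(s) = 1 - s^2 with pi(s) = -s on s' = u, so that
# hdot = -2 s * (-s) = 2 s^2.
h = lambda s: 1.0 - s ** 2
hdot = lambda s: 2.0 * s ** 2
L0, Ld, Lp = cbf_losses(h, hdot,
                        s0=np.linspace(-0.5, 0.5, 11),
                        sd=np.linspace(1.1, 2.0, 11),
                        sp=np.linspace(-1.0, 1.0, 41))
# All three losses vanish, so this (h, pi) pair satisfies (2)
# on the sampled states.
```

In the actual method, $h$ and $\pi$ are neural networks and these losses are driven to zero by gradient descent rather than verified on a fixed pair.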
However, for black-box dynamical systems where $f(s,u)$ is unknown, the co-learning method described above is no longer applicable. In Section IV, we propose an important and easy-to-implement modification to (3) such that we can leverage a similar co-learning framework to jointly synthesize the safe controller and its corresponding CBF as a safety certificate.
IV Learning CBF on black-box systems
In this section, we first elaborate on why it is difficult to learn a safe controller and its CBF for black-box dynamical systems. Then we propose an important and easy-to-implement re-formulation of the optimization objective, which makes learning a safe controller for black-box systems as easy as for white-box systems.
IV-A Challenges in Black-box Dynamical Systems
Among the three loss functions $\mathcal{L}_{0},\mathcal{L}_{d}~{}\text{and}~{}\mathcal{L}_{p}$ in (3), $\mathcal{L}_{p}$ is the only one that can propagate gradient to the controller parameters $\Theta$. The main challenge of training a safe controller with its CBF for black-box dynamical systems is that the gradient can no longer be back-propagated to $\Theta$ when $f$ is unknown. Therefore, a safe controller cannot be trained by minimizing the loss functions in (3).
Given state samples $s(t)$ and $s(t+\Delta t)$ from the black-box system where $\Delta t$ is a sufficiently small time interval, we can approximate $\dot{h}$ and compute $\mathcal{L}_{p}$ as:
$$\displaystyle\dot{h}_{1}(s)=\frac{h(s(t+\Delta t))-h(s(t))}{\Delta t}$$
(5)
$$\displaystyle\mathcal{L}_{p1}=\frac{1}{|\mathcal{S}_{p}\cap\mathcal{D}|}\sum_{s\in\mathcal{S}_{p}\cap\mathcal{D}}\max\left(0,-\dot{h}_{1}(s)-\alpha(h(s))\right).$$
$\mathcal{L}_{p1}$ does give the value of $\mathcal{L}_{p}$, but its backward gradient flow to the controller is cut off by the non-differentiable black-box system. Thus, (5) can only be used to train the CBF $h$ but not the safe controller $\pi$. Even worse, the $h$ obtained by minimizing (5) does not guarantee that a corresponding safe controller $\pi$ exists. If we had a differentiable expression of the dynamics $f$ and replaced $h(s(t+\Delta t))$ with $h(s(t)+f(s(t),\pi(s(t)))\Delta t)$, the gradient flow could successfully reach $\pi(s)$ and update the controller parameters. However, this is not immediately possible because $f$ is unknown by the black-box assumption.
A possible way to back-propagate gradient to $\pi$ is to use a differentiable nominal model $f_{nom}$. There are many methods to obtain $f_{nom}$, such as fitting a neural network using sampled data from the real black-box system. We do not require $f_{nom}$ to perfectly match the real dynamics $f$, because there will always exist an error between them. With $f_{nom}$, we can approximate $\mathcal{L}_{p}$ as:
$$\displaystyle\dot{h}_{2}(s)=\frac{h(s+f_{nom}(s,\pi(s))\Delta t)-h(s)}{\Delta t}$$
(6)
$$\displaystyle\mathcal{L}_{p2}=\frac{1}{|\mathcal{S}_{p}\cap\mathcal{D}|}\sum_{s\in\mathcal{S}_{p}\cap\mathcal{D}}\max\left(0,-\dot{h}_{2}(s)-\alpha(h(s))\right),$$
which is differentiable w.r.t. $\pi(s)$ because both $h$ and $f_{nom}$ are differentiable. The gradient of $\mathcal{L}_{p2}$ can be back-propagated to the controller to update its parameters. However, no matter how we obtain $f_{nom}$, there still exists an error between it and the real dynamics $f$, which means $\dot{h}_{2}$ is not a good approximation of $\dot{h}$ and $\mathcal{L}_{p2}$ is not the true value of $\mathcal{L}_{p}$. Using $\mathcal{L}_{p2}$, it is not guaranteed that the third CBF condition in (2) will be satisfied.
IV-B Learning Safe Control for Black-box Dynamics
We present a novel re-formulation of $\mathcal{L}_{p}$ that makes learning a safe controller with a CBF for black-box dynamical systems as easy as for white-box systems. The proposed formulation possesses two features: it enables the gradient to back-propagate to the controller in training, and it offers an error-free approximation of $\dot{h}$.
Given state samples $s(t)$ and $s(t+\Delta t)$ from the trajectories of the real black-box dynamical system, where $\Delta t$ is a sufficiently small time interval, we define $s_{nom}(t+\Delta t)$ as:
$$\displaystyle s_{nom}(t+\Delta t)=s(t)+f_{nom}(s(t),\pi(s(t)))\Delta t,$$
then construct $\bar{s}(t+\Delta t)$ as:
$$\displaystyle\bar{s}(t+\Delta t)=s_{nom}(t+\Delta t)+g(s(t+\Delta t)-s_{nom}(t+\Delta t)),$$
where $g(s)=s$ is an identity function but without gradient: we treat $g(s)$ as a constant, so that in back-propagation the gradient at $g(s)$ does not propagate to its argument $s$. In PyTorch [41], there is an off-the-shelf implementation of $g(s)$ as $g(s)=s.\rm{detach()}$, which cuts off the gradient from $g$ to $s$ in back-propagation. Then we approximate $\mathcal{L}_{p}$ using:
$$\displaystyle\dot{h}_{3}(s)=\frac{h(\bar{s}(t+\Delta t))-h(s(t))}{\Delta t}$$
(7)
$$\displaystyle\mathcal{L}_{p3}=\frac{1}{|\mathcal{S}_{p}\cap\mathcal{D}|}\sum_{s\in\mathcal{S}_{p}\cap\mathcal{D}}\max\left(0,-\dot{h}_{3}(s)-\alpha(h(s))\right).$$
Theorem 1.
$\nabla_{\Theta}\mathcal{L}_{p3}$ exists and $\lim_{\Delta t\rightarrow 0}\mathcal{L}_{p3}=\mathcal{L}_{p}$. Namely, $\mathcal{L}_{p3}$ is differentiable w.r.t. the controller parameter $\Theta$, and $\mathcal{L}_{p3}$ is an error-free approximation of $\mathcal{L}_{p}$ as $\Delta t\rightarrow 0$.
Proof.
Since $f_{nom}$ is differentiable, $s_{nom}$ and $\bar{s}(t+\Delta t)$ are differentiable w.r.t. $\pi$, so $\dot{h}_{3}$ and $\mathcal{L}_{p3}$ are also differentiable w.r.t. $\pi$ and its parameters $\Theta$. Thus, $\nabla_{\Theta}\mathcal{L}_{p3}$ exists. Furthermore, since $\bar{s}(t+\Delta t)=s(t+\Delta t)$ in value, $\dot{h}_{3}$ is an error-free approximation of the real $\dot{h}$ as $\Delta t\rightarrow 0$. Thus $\mathcal{L}_{p3}$ is also an error-free approximation of $\mathcal{L}_{p}$ as $\Delta t\rightarrow 0$.
∎
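The mechanism behind Theorem 1 can be checked on a scalar toy example (the dynamics and constants below are hypothetical, and the held-fixed correction term plays the role of PyTorch's detach()): $\bar{s}(t+\Delta t)$ agrees with the true next state in value, while its sensitivity to the control flows only through $f_{nom}$:

```python
dt = 0.1

def f_true(s, u):   # the real black-box dynamics (hidden in practice)
    return 1.0 * u

def f_nom(s, u):    # imperfect but differentiable nominal model
    return 0.8 * u

s, u = 0.5, 1.0
s_next = s + f_true(s, u) * dt   # transition sampled from the real system

# Nominal prediction and the correction term g(s(t+dt) - s_nom(t+dt)).
s_nom = s + f_nom(s, u) * dt
corr = s_next - s_nom            # held fixed: mimics detach()

def sbar(u_var):
    # Only the f_nom term depends on u_var; corr is a constant,
    # so gradients flow through the nominal model alone.
    return s + f_nom(s, u_var) * dt + corr

# Value: sbar equals the true next state, so hdot_3 is error-free.
value_err = abs(sbar(u) - s_next)

# Sensitivity to u via finite differences: a nonzero signal flows
# through f_nom, so the controller can still be trained.
eps = 1e-6
grad_u = (sbar(u + eps) - sbar(u)) / eps
```

Had `corr` been recomputed inside `sbar` (i.e. without the stop-gradient), the $f_{nom}$ contribution would cancel and the sensitivity would vanish, which is exactly the failure mode of $\mathcal{L}_{p1}$.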
Note that Theorem 1 reveals why the proposed SABLAS method can jointly learn the CBF and the safe controller for black-box dynamical systems. First, since $\nabla_{\Theta}\mathcal{L}_{p3}$ exists, the gradient from $\mathcal{L}_{p3}$ can be back-propagated to the controller parameters to learn a safe controller. On the contrary, $\dot{h}_{1}$ is not differentiable w.r.t. $\pi$, so $\mathcal{L}_{p1}$ cannot be used to train the controller. Second, since $\mathcal{L}_{p3}$ is a good approximation of $\mathcal{L}_{p}$, minimizing $\mathcal{L}_{p3}$ contributes to the minimization of $\mathcal{L}_{p}$ and the satisfaction of the third CBF condition in (2). On the contrary, $\dot{h}_{2}$ is an inaccurate approximation of $\dot{h}$, as we elaborated in Section IV-A. The construction of $\mathcal{L}_{p3}$ thus incorporates the advantages of both $\mathcal{L}_{p1}$ and $\mathcal{L}_{p2}$ while avoiding their disadvantages. The computational graph of $\mathcal{L}_{p3}$ is illustrated in Fig. 2, which shows the forward pass and the backward gradient propagation from $\mathcal{L}_{p3}$ to the controller $\pi$.
Remark 1. One may argue that the gradient received by $\pi$ via minimizing $\mathcal{L}_{p3}$ is not exactly the gradient it would receive if we had a perfect differentiable model of the black-box system. Despite this, minimizing $\mathcal{L}_{p3}$ directly contributes to the satisfaction of the third CBF condition in (2), so a safe controller and its CBF can still be found to keep the system within the safe set.
Remark 2. Although the current formulation of $\mathcal{L}_{p3}$ leads to promising performance in simulation, as we show in the experiments, $\mathcal{L}_{p3}$ requires further consideration in hardware experiments. Directly using the future state $s(t+\Delta t)$ to calculate the time derivative of $h$ or $s$ is not always desirable, because noise may dominate the numerical differentiation. When the noise dominates the time derivative of $s$ or $h$, training will have convergence issues. A moderate amount of noise is actually beneficial to training, because our optimization objective makes the CBF conditions hold even under noise disturbance, which increases the robustness of the trained CBF and controller. On physical robots where noise dominates the numerical differentiation, one can incorporate filtering techniques to mitigate the noise.
Combining with the loss functions $\mathcal{L}_{0},\mathcal{L}_{d}$ and $\mathcal{L}_{g}$ in (3) and (4), the total loss function can be formulated as:
$$\displaystyle\mathcal{L}(\Omega,\Theta)=\mathcal{L}_{0}(\Omega)+\mathcal{L}_{d}(\Omega)+\mathcal{L}_{p3}(\Omega,\Theta)+\lambda\mathcal{L}_{g}(\Theta),$$
(8)
where $\lambda$ is a constant balancing the safety and goal-reaching objective. $\mathcal{L}$ is minimized via stochastic gradient descent. Algorithm 1 summarizes the learning process of the safe controller and the corresponding safety certificate (CBF) for black-box dynamical systems. The Run function runs the black-box system using the controller $\pi$ under initial condition $s(0)$, and returns the trajectory data. The Update function updates parameters $\Omega,\Theta$ by minimizing $\mathcal{L}(\Omega,\Theta)$ using the state samples in $\mathcal{D}$ via gradient descent.
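A skeletal version of Algorithm 1's loop might look as follows (`run` collects on-policy trajectories and `update` is a stub standing in for the gradient step on (8); all names and the toy usage are hypothetical):

```python
def run(step, pi, s0, steps):
    """Algorithm 1's Run: roll out the black-box system under pi
    and return the visited states."""
    traj = [s0]
    for _ in range(steps):
        s = traj[-1]
        traj.append(step(s, pi(s)))
    return traj

def train(step, pi, update, s0, epochs, steps):
    """Algorithm 1's outer loop: alternate data collection and
    parameter updates minimizing L(Omega, Theta)."""
    data = []
    for _ in range(epochs):
        data.extend(run(step, pi, s0, steps))  # refresh dataset D
        pi = update(pi, data)                  # gradient step (stub here)
    return pi

# Toy usage: scalar system s' = u with controller u = -s and a
# no-op Update, just to exercise the control flow.
step = lambda s, u: s + u * 0.1
pi = train(step, lambda s: -s, update=lambda pi, d: pi,
           s0=1.0, epochs=2, steps=5)
traj = run(step, pi, 1.0, 5)
```

In the actual method, `update` performs the 100 Adam iterations described in Section V-D, and `step` is the physical or simulated black-box system.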
V Experiment
The primary objective of our experiments is to examine the effectiveness of the proposed method in terms of safety and goal-reaching when controlling black-box dynamical systems. We conduct comprehensive experiments in two simulation environments, illustrated in Fig. 1 (a) and (b), and compare with state-of-the-art learning-based control methods for black-box systems.
V-A Task Environment Description
Drone control in a city (CityEnv)
In our first case study, we consider a package delivery task in a city using drones, as illustrated in Fig. 1 (a). There is one controlled drone and 1024 non-player character (NPC) drones that are not controlled by our controller. In each simulation episode, each drone is assigned a sequence of randomly selected goals to visit. The aim of our controller is to make sure the controlled drone reaches its goals while avoiding collision with NPC drones at all times. A reference trajectory $s_{ref}(t),t\in[0,T]$ will be given, which sequentially connects the goals and avoids collision with buildings. The reference trajectory can be generated by any off-the-shelf single-agent path planning algorithm. We use FACTEST [42] in our implementation, and other options such as RRT [43] are also suitable. The reference path planner does not need to consider dynamic obstacles, such as the moving NPCs in our experiment. A nominal controller $\pi_{nom}$ will also be given, which outputs control commands that drive the drone to follow the reference trajectory. However, $\pi_{nom}$ is purely for goal-reaching and does not consider safety. The CityEnv has two modes: static NPCs and moving NPCs. If the NPCs are static, they constantly stay at their initial locations. If the NPCs are moving, they follow pre-planned trajectories to sequentially visit their goals. The drone model has state space $[x,y,z,v_{x},v_{y},v_{z},\theta_{x},\theta_{y}]$, where $\theta_{x}$ and $\theta_{y}$ are roll and pitch angles. The control inputs are the angular accelerations of $\theta_{x},\theta_{y}$ and the vertical thrust. The underlying model dynamics are from [17] and assumed unknown to the controller and CBF in our experiment.
Ship control in a valley (ValleyEnv)
In our second case study, we consider the task of controlling a ship in a valley, illustrated in Fig. 1 (b). There is one controlled ship and 32 NPC ships. The number of NPCs in ValleyEnv is smaller than in CityEnv because ships have large size and inertia and are hard to maneuver in dense traffic. Also, different from the 3D CityEnv, ValleyEnv is 2D, which means the agents have fewer degrees of freedom to avoid collision. The initial location and goal location of each ship are randomly initialized at the beginning of each episode. The aim of our controller is to ensure the controlled ship reaches its goal and avoids collision with NPC ships. Similar to CityEnv, a reference trajectory and a nominal controller are provided, and ValleyEnv likewise has both static and moving NPC modes. The ship model has state space $[x,y,\theta,u,v,\omega]$, where $\theta$ is the heading angle, $u,v$ are speeds in the ship body coordinates, and $\omega$ is the angular velocity of the heading angle. The ship model is from Sec. 4.2 of [44] and is unknown to the controller and CBF.
V-B Evaluation Criteria
Three evaluation criteria are considered. Relative safety rate measures the improvement in safety compared to a nominal controller that only targets goal-reaching, not safety. To formally define the relative safety rate, we first consider the absolute safety rate $\alpha$
as $\alpha=\frac{1}{T}\int_{0}^{T}\mathbb{I}(s(t)\not\in\mathcal{S}_{d})~{}dt$, which measures the proportion of time that the system stays outside the dangerous set. Given two control policies $\pi_{1}$ and $\pi_{2}$ with absolute safety rate $\alpha_{1}$ and $\alpha_{2}$, the relative safety rate of $\pi_{1}$ w.r.t. $\pi_{2}$ is defined as $\beta_{12}=\frac{\alpha_{1}-\alpha_{2}}{1-\alpha_{2}}\in(-\infty,1].$
If $\beta_{12}=0$, then control policy $\pi_{1}$ does not have any improvement over $\pi_{2}$ in terms of safety. If $\beta_{12}=1$, then $\pi_{1}$ completely guarantees safety of the system in $t\in[0,T]$. In our experiment, $\pi_{1}$ is the controller to be evaluated, and $\pi_{2}$ is the nominal controller $\pi_{nom}$ that only accounts for goal-reaching without considering safety. Task completion rate is defined as the success rate of reaching the goal state before timeout. Tracking error is the average deviation of the system’s state trajectory compared to a pre-planned reference trajectory $s_{ref}(t),t\in[0,T]$, as $\gamma=\frac{1}{T}\int_{0}^{T}||s(t)-s_{ref}(t)||^{2}_{2}~{}dt.$
Note that we do not assume $s_{ref}$ always stays outside the dangerous set.
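For concreteness, the relative safety rate defined above reduces to a one-liner (the numbers in the example are made up for illustration):

```python
def relative_safety_rate(alpha1, alpha2):
    """beta_12 = (alpha1 - alpha2) / (1 - alpha2): 1.0 means pi_1 is
    always safe; 0.0 means no improvement over pi_2."""
    return (alpha1 - alpha2) / (1.0 - alpha2)

# Example (hypothetical numbers): the evaluated controller is safe
# 99% of the time, the nominal controller 90% of the time.
beta = relative_safety_rate(0.99, 0.90)   # approximately 0.9
```

The normalization by $1-\alpha_{2}$ rewards closing the remaining unsafe fraction, so a controller that is safe 99% of the time against a 90%-safe nominal scores 0.9, not 0.09.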
V-C Baseline Approaches
In terms of safe control for black-box systems, the most recent state-of-the-art approaches are safe reinforcement learning (safe RL) algorithms. We choose three safe RL algorithms for comparison: CPO [26] is a general-purpose policy optimization algorithm for black-box systems that maximizes the expected reward while satisfying the safety constraints. PPO-Safe is a combination of PPO [3] and RCPO [28]. It uses PPO to maximize the expected cumulative reward while leveraging the Lagrangian multiplier update rule in RCPO to enforce the safety constraint. TRPO-Safe is a combination of TRPO [27] and RCPO [28]. The expected reward is maximized via TRPO and the safety constraints are imposed using the Lagrangian multiplier in RCPO.
V-D Implementation and Training
Both the controller $\pi$ and the CBF $h$ are multi-layer perceptrons (MLPs) with the architecture adopted from Sec. 4.2 of [17]. $\pi$ and $h$ take as input not only the state of the controlled agent but also the states of the 8 nearest NPCs that the controlled agent can observe. In Algorithm 1, we choose $K=1,N=2000$. The total number of state samples collected during training is $10^{6}$. In the Update step of the algorithm, we use the Adam [45] optimizer with learning rate $10^{-4}$ and batch size 1024, running gradient descent for 100 iterations per Update. The nominal model dynamics are fitted from trajectory data in simulation. We used $10^{4}$ state samples to fit a linear approximation of the drone dynamics, and $10^{5}$ samples to fit a non-linear 3-layer MLP for the ship dynamics.
In training the safe RL methods, the reward at every step is the negative distance between the system's current state and the goal state, and the cost is 1 if the system is within the dangerous set $\mathcal{S}_{d}$ and 0 otherwise. The threshold for the expected cost is set to 0, meaning we wish the system to never enter the dangerous set (never reach a state with a positive cost). During training, the agent runs the system for $10^{7}$ timesteps in total and performs $2000$ policy updates, each consisting of 100 iterations of gradient descent. The implementation of the safe RL methods is based on [46].
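The per-step reward and cost described above translate directly into code; this is a minimal sketch with our own (hypothetical) function names, not the authors' implementation:

```python
import numpy as np

def step_reward(state, goal):
    # Reward is the negative distance between the current state and the goal.
    return -np.linalg.norm(np.asarray(state, dtype=float)
                           - np.asarray(goal, dtype=float))

def step_cost(state, in_danger):
    # Cost is 1 inside the dangerous set S_d and 0 otherwise; the
    # constraint threshold on the expected cumulative cost is set to 0.
    return 1.0 if in_danger(state) else 0.0
```

Setting the cost threshold to 0 turns the safe RL constraint into "never accumulate positive cost", i.e. never enter $\mathcal{S}_{d}$.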
All the methods are trained with static NPCs and tested on both static and moving NPCs. This makes testing more challenging and probes the generalization capability of the tested methods across scenarios. All agents are assigned random initial and goal locations in every simulation episode, which prevents the learned controller from overfitting to a single configuration.
V-E Experimental Results
Safety and goal-reaching performance
Results are shown in Fig. 3. Among the compared methods, ours is the only one that reaches a high task completion rate and a high relative safety rate at the same time. For the other methods, such as TRPO-Safe, when the controlled drone or ship is about to hit the NPCs, the learned controller tends to brake and decelerate, so the agent is less likely to reach its goal before the simulation episode ends. For CPO, PPO-Safe, and TRPO-Safe, task completion rate and safety rate trade off against each other. By contrast, the controller obtained by our method maneuvers smoothly among NPCs without severe deceleration, which enables the controlled agent to reach the goal location on time. Our method also keeps a relatively low tracking error, meaning the difference between the actual and reference trajectories is small.
Generalization capability to unseen scenarios
As stated in Sec. V-D, the NPCs are static during training and can be either static or moving during testing. Fig. 3 also demonstrates that our method has promising generalization capability across different training and testing scenarios.
Sampling efficiency
In Fig. 4 (a), we show the safety performance under different training-set sizes. The results are averaged over the drone and ship control tasks. SABLAS needs only around $1/10$ of the samples required by the compared methods to achieve a nearly perfect relative safety rate. Note that SABLAS requires an extra $10^{4}$ to $10^{5}$ samples to fit the nominal dynamics, but even so the total number of samples it needs is far smaller than what the baselines require.
Effect of model error
We investigate the influence of the model error $||\dot{s}-\dot{s}_{nom}||$ between the real and nominal dynamics on the safety performance. We vary the modeling error of the drone model and test the learned controller on CityEnv with static NPCs. We also perform an ablation study where we use $\mathcal{L}_{p2}$ in Eqn. 6 instead of $\mathcal{L}_{p3}$ in Eqn. 7 as the loss function. The red curve in Fig. 4 (b) shows that SABLAS tolerates large model errors while maintaining a promising safety rate. In our previous experiments, the model error $e=\mathbf{E}[||\dot{s}-\dot{s}_{nom}||]/\mathbf{E}[||\dot{s}||]$ is always below $0.2$, and we did not encounter any difficulty fitting a nominal model with empirical error $e\leq 0.2$.
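The normalized model error $e$ can be estimated from paired samples of the true and nominal state derivatives; a small sketch (our own helper name, assuming one sample per row):

```python
import numpy as np

def empirical_model_error(s_dot, s_dot_nom):
    """e = E[||s_dot - s_dot_nom||] / E[||s_dot||], estimated from samples.

    s_dot, s_dot_nom : (N, d) arrays of true and nominal state derivatives.
    """
    num = np.linalg.norm(s_dot - s_dot_nom, axis=1).mean()
    den = np.linalg.norm(s_dot, axis=1).mean()
    return num / den
```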
The orange curve in Fig. 4 (b) shows that if we use $\mathcal{L}_{p2}$ in Eqn. 6, the trained controller will have a worse performance in terms of safety rate. This is because $\mathcal{L}_{p2}$ only uses the nominal dynamics to calculate the loss, without leveraging the real black-box dynamics.
V-F Discussion on Limitation
The main limitation of the proposed approach is that it cannot guarantee the satisfaction of the CBF conditions in (2) over the entire state space. Even if we minimize $\mathcal{L}_{0},\mathcal{L}_{d}$ and $\mathcal{L}_{p3}$ to $0$ during training, the CBF conditions may still be occasionally violated during testing. After all, the training samples are finite and cannot cover the continuous state space. If the testing distribution matches the training distribution, one can use Rademacher complexity to bound the probability that the CBF conditions are violated, as in Appendix B of [17]. If the testing distribution differs from training, however, it remains unclear how to derive the generalization error of the CBF conditions. To train a CBF and controller that provably satisfy the CBF conditions, one can also use verification tools to find counterexamples in the state space that violate the CBF conditions and add those counterexamples to the training set [24, 23]. The process finishes when no more counterexamples can be found. However, the time complexity of verification makes it inapplicable to large and expressive neural networks. Also, the error between the nominal and real dynamics has a negative impact on the safety performance. Addressing these limitations is left for future work.
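The counterexample-guided procedure described above can be written as a simple loop; `train` and `verify` are hypothetical interfaces standing in for the learner and for a verification tool in the spirit of [24, 23]:

```python
def counterexample_guided_training(train, verify, initial_set, max_rounds=20):
    """Counterexample-guided loop for certifying CBF conditions.

    train(samples)  -> (controller, cbf)         # hypothetical learner
    verify(ctrl, h) -> list of violating states  # hypothetical verifier
    Terminates when the verifier finds no counterexample, or after
    max_rounds (verification can be expensive for large networks).
    """
    samples = list(initial_set)
    controller = cbf = None
    for _ in range(max_rounds):
        controller, cbf = train(samples)
        counterexamples = verify(controller, cbf)
        if not counterexamples:
            return controller, cbf, True   # CBF conditions certified
        samples.extend(counterexamples)    # enlarge the training set
    return controller, cbf, False          # verification budget exhausted
```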
VI Conclusion and future works
We presented SABLAS, a general-purpose approach for learning safe controllers for black-box systems. It is supported by theoretical guarantees from control barrier function theory and, at the same time, uses a novel learning structure that allows it to learn policies and barrier certificates directly for black-box dynamical systems.
Simulation results show that SABLAS provides a systematic way of learning safe control policies, with a large improvement over safe RL methods. For future work, we plan to study SABLAS on multi-agent systems, especially with adversarial players.
References
[1]
Z. Qin, K. Fang, Y. Zhu, L. Fei-Fei, and S. Savarese, “Keto:
Learning keypoint representations for tool manipulation,” in 2020 IEEE
International Conference on Robotics and Automation (ICRA), 2020, pp.
7278–7285.
[2]
Z. Qin, Y. Chen, and C. Fan, “Density constrained reinforcement learning,” in
International Conference on Machine Learning. PMLR, 2021, pp. 8682–8692.
[3]
J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal
policy optimization algorithms,” arXiv preprint arXiv:1707.06347,
2017.
[4]
T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa,
D. Silver, and D. Wierstra, “Continuous control with deep reinforcement
learning,” arXiv preprint arXiv:1509.02971, 2015.
[5]
R. Freeman and P. V. Kokotovic, Robust nonlinear control design:
state-space and Lyapunov techniques. Springer Science & Business Media, 2008.
[6]
A. Isidori, E. Sontag, and M. Thoma, Nonlinear control systems. Springer, 1995, vol. 3.
[7]
W. Lohmiller and J.-J. E. Slotine, “On contraction analysis for non-linear
systems,” Automatica, vol. 34, no. 6, pp. 683–696, 1998.
[8]
I. R. Manchester and J.-J. E. Slotine, “Control contraction metrics: Convex
and intrinsic criteria for nonlinear feedback design,” IEEE
Transactions on Automatic Control, 2017.
[9]
D. Sun, S. Jha, and C. Fan, “Learning certified control using contraction
metric,” in Conference on Robot Learning, 2020.
[10]
H. Tsukamoto and S.-J. Chung, “Neural contraction metrics for robust
estimation and control: A convex optimization approach,” arXiv
preprint arXiv:2006.04361, 2020.
[11]
A. D. Ames, J. W. Grizzle, and P. Tabuada, “Control barrier function based
quadratic programs with application to adaptive cruise control,” in
Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on. IEEE, 2014, pp. 6271–6278.
[12]
A. D. Ames, S. Coogan, M. Egerstedt, G. Notomista, K. Sreenath, and P. Tabuada,
“Control barrier functions: Theory and applications,” in 2019 18th
European Control Conference (ECC). IEEE, 2019, pp. 3420–3431.
[13]
U. Borrmann, L. Wang, A. D. Ames, and M. Egerstedt, “Control barrier
certificates for safe swarm behavior,” IFAC-PapersOnLine, vol. 48,
no. 27, pp. 68–73, 2015.
[14]
Y. Chen, A. Singletary, and A. D. Ames, “Guaranteed obstacle avoidance for
multi-robot operations with limited actuation: a control barrier function
approach,” IEEE Control Systems Letters, vol. 5, no. 1, pp. 127–132,
2020.
[15]
R. Cheng, G. Orosz, R. M. Murray, and J. W. Burdick, “End-to-end safe
reinforcement learning through barrier functions for safety-critical
continuous control tasks,” in AAAI Conference on Artificial
Intelligence, vol. 33, 2019, pp. 3387–3395.
[16]
R. Cheng, M. J. Khojasteh, A. D. Ames, and J. W. Burdick, “Safe multi-agent
interaction through robust control barrier functions with learned
uncertainties,” arXiv preprint arXiv:2004.05273, 2020.
[17]
Z. Qin, K. Zhang, Y. Chen, J. Chen, and C. Fan, “Learning safe multi-agent
control with decentralized neural barrier certificates,” in
International Conference on Learning Representations, 2021.
[18]
A. D. Ames, X. Xu, J. W. Grizzle, and P. Tabuada, “Control barrier function
based quadratic programs for safety critical systems,” IEEE
Transactions on Automatic Control, vol. 62, no. 8, pp. 3861–3876, 2017.
[19]
A. Papachristodoulou and S. Prajna, “A tutorial on sum of squares techniques
for systems analysis,” in Proceedings of the 2005, American Control
Conference, 2005. IEEE, 2005, pp.
2686–2700.
[20]
U. Topcu, A. Packard, and P. Seiler, “Local stability analysis using
simulations and sum-of-squares programming,” Automatica, vol. 44,
no. 10, pp. 2669–2675, 2008.
[21]
J. Kapinski, J. V. Deshmukh, S. Sankaranarayanan, and N. Aréchiga,
“Simulation-guided lyapunov analysis for hybrid dynamical systems,” in
Proceedings of the 17th international conference on Hybrid systems:
computation and control, 2014, pp. 133–142.
[22]
H. Ravanbakhsh and S. Sankaranarayanan, “Learning control lyapunov functions
from counterexamples and demonstrations,” Autonomous Robots, vol. 43,
no. 2, pp. 275–307, 2019.
[23]
Y.-C. Chang, N. Roohi, and S. Gao, “Neural lyapunov control,” in
Advances in Neural Information Processing Systems, 2019, pp.
3245–3254.
[24]
H. Dai, B. Landry, L. Yang, M. Pavone, and R. Tedrake, “Lyapunov-stable
neural-network control,” Robotics Science and Systems (RSS), 2021.
[25]
S. Dean, A. J. Taylor, R. K. Cosner, B. Recht, and A. D. Ames, “Guaranteeing
safety of learned perception modules via measurement-robust control barrier
functions,” in 2020 Conference on Robotics Learning (CoRL), 2020.
[26]
J. Achiam, D. Held, A. Tamar, and P. Abbeel, “Constrained policy
optimization,” in International Conference on Machine Learning. PMLR, 2017, pp. 22–31.
[27]
J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz, “Trust region
policy optimization,” in International conference on machine
learning. PMLR, 2015, pp. 1889–1897.
[28]
C. Tessler, D. J. Mankowitz, and S. Mannor, “Reward constrained policy
optimization,” arXiv preprint arXiv:1805.11074, 2018.
[29]
X. Chen and E. Hazan, “Black-box control for linear dynamical systems,” in
Proceedings of Thirty Fourth Conference on Learning Theory, ser.
Proceedings of Machine Learning Research, M. Belkin and S. Kpotufe, Eds.,
vol. 134. PMLR, 15–19 Aug 2021, pp.
1114–1143.
[30]
M. Fliess, C. Join, and H. Sira-Ramirez, “Complex continuous nonlinear
systems: their black box identification and their control,” IFAC
Proceedings Volumes, vol. 39, no. 1, pp. 416–421, 2006.
[31]
A. Levant, “Practical relative degree in black-box control,” in 2012
IEEE 51st IEEE Conference on Decision and Control (CDC). IEEE, 2012, pp. 7101–7106.
[32]
W. Grathwohl, D. Choi, Y. Wu, G. Roeder, and D. Duvenaud, “Backpropagation
through the void: Optimizing control variates for black-box gradient
estimation,” arXiv preprint arXiv:1711.00123, 2017.
[33]
F. Djeumou, A. P. Vinod, E. Goubault, S. Putot, and U. Topcu, “On-the-fly
control of unknown smooth systems from limited data,” in 2021 American
Control Conference (ACC). IEEE, 2021,
pp. 3656–3663.
[34]
F. Djeumou, A. Zutshi, and U. Topcu, “On-the-fly, data-driven reachability
analysis and control of unknown systems: an f-16 aircraft case study,” in
Proceedings of the 24th International Conference on Hybrid Systems:
Computation and Control, 2021, pp. 1–2.
[35]
T.-Y. Yang, J. Rosca, K. Narasimhan, and P. J. Ramadge, “Projection-based
constrained policy optimization,” arXiv preprint arXiv:2010.03152,
2020.
[36]
C. Sun, D.-K. Kim, and J. P. How, “Fisar: Forward invariant safe reinforcement
learning with a deep neural network-based optimizer,” in 2021 IEEE
International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 10617–10624.
[37]
J. Tordesillas, B. T. Lopez, and J. P. How, “Faster: Fast and safe trajectory
planner for flights in unknown environments,” in 2019 IEEE/RSJ
international conference on intelligent robots and systems (IROS). IEEE, 2019, pp. 1934–1940.
[38]
J. Tordesillas and J. P. How, “Mader: Trajectory planner in multiagent and
dynamic environments,” IEEE Transactions on Robotics, 2021.
[39]
J. J. Choi, D. Lee, K. Sreenath, C. J. Tomlin, and S. L. Herbert, “Robust
control barrier-value functions for safety-critical control,” arXiv
preprint arXiv:2104.02808, 2021.
[40]
F. Castañeda, J. J. Choi, B. Zhang, C. J. Tomlin, and K. Sreenath,
“Pointwise feasibility of gaussian process-based safety-critical control
under model uncertainty,” arXiv preprint arXiv:2106.07108, 2021.
[41]
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen,
Z. Lin, N. Gimelshein, L. Antiga, et al., “Pytorch: An imperative
style, high-performance deep learning library,” Advances in neural
information processing systems, vol. 32, pp. 8026–8037, 2019.
[42]
C. Fan, K. Miller, and S. Mitra, “Fast and guaranteed safe controller
synthesis for nonlinear vehicle models,” in Computer Aided
Verification, S. K. Lahiri and C. Wang, Eds. Cham: Springer International Publishing, 2020, pp. 629–652.
[43]
S. M. LaValle, J. J. Kuffner, B. Donald, et al., “Rapidly-exploring
random trees: Progress and prospects,” Algorithmic and computational
robotics: new directions, vol. 5, pp. 293–308, 2001.
[44]
T. I. Fossen, “A survey on nonlinear ship control: from theory to practice,”
IFAC Proceedings Volumes, vol. 33, no. 21, pp. 1–16, 2000, 5th IFAC
Conference on Manoeuvring and Control of Marine Craft (MCMC 2000), Aalborg,
Denmark, 23-25 August 2000.
[45]
D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,”
arXiv preprint arXiv:1412.6980, 2014.
[46]
A. Ray, J. Achiam, and D. Amodei, “Benchmarking safe exploration in deep
reinforcement learning,” arXiv preprint arXiv:1910.01708, vol. 7,
2019. |
Learning Communities in the Presence of Errors
Konstantin Makarychev
Microsoft Research
Yury Makarychev
TTIC
Aravindan Vijayaraghavan
Northwestern University
Abstract
We study the problem of learning communities in the presence of modeling errors and give robust recovery algorithms for the Stochastic Block Model (SBM). This model, which is also known as the Planted Partition Model, is widely used for community detection and graph partitioning in various fields, including machine learning, statistics, and social sciences.
Many algorithms exist for learning communities in the Stochastic Block Model, but they do not work well in the presence of errors.
In this paper, we initiate the study of robust algorithms for partial recovery in SBM with modeling errors or noise.
We consider graphs generated according to the Stochastic Block Model and then modified by an adversary. We allow two types of adversarial errors,
Feige–Kilian (monotone) errors and edge outlier errors. Our work answers affirmatively an open question posed by Mossel, Neeman and Sly (STOC 2015),
which asked whether almost exact recovery is possible when the adversary is allowed to add $o(n)$ edges.
We then show that our algorithms work not only when the instances come from SBM, but also when they come from
any distribution of graphs that is $\varepsilon m$-close to SBM in the Kullback–Leibler divergence.
This result also works in the presence of adversarial errors. Finally, we present almost tight lower bounds for two communities.
1 Introduction
In this paper, we present robust recovery algorithms for the Stochastic Block Model (SBM), also known as the Planted Partition Model.
This model is widely used for community detection and graph partitioning in various fields, including machine learning, statistics, and social sciences.
Extensive research on SBM, summarized in Section 1.1, has taken place in computer science and statistics over the last three decades.
Until recently, research on SBM was focused on graphs with a poly-logarithmic average degree.
In the past few years, however, most of the research has shifted toward graphs with a constant average degree, and there has been
significant progress in the understanding of the conditions under which a partial recovery is possible for such graphs in SBM.
In particular, Massoulié [Mas14] and Mossel et al [MNS12, MNS13] have derived sharp conditions under which
a partial recovery is possible for the case of two communities (clusters). Yet existing algorithms are not robust and
may fail in the presence of noise. In this paper, we initiate the study of robust algorithms for a partial recovery in SBM.
Our algorithms work in the presence of adversarial noise.
We answer affirmatively an open question posed by Mossel et al [MNS15], which asked whether an almost exact recovery is possible when
the adversary is allowed to add $o(n)$ edges.
Let us now recall the definition of the Stochastic Block Model. (We note that some authors denote by $n$ not the number of vertices in each cluster but the total number of vertices; our $\text{SBM}(n,k,a,b)$ model is the same as their $\text{SBM}^{\prime}(kn,k,ka,kb)$ model.)
Definition 1.1 (Stochastic Block Model).
A graph $G_{sb}=(V,E_{sb})$ with $N=nk$ vertices is generated according to the Stochastic Block Model $\text{SBM}(n,k,a,b)$ (where $a\geq b$) as follows:
1.
There is an equipartition $P^{*}=\left(V^{*}_{1},V^{*}_{2},\dots,V^{*}_{k}\right)$ of vertices $V$ with $|V^{*}_{i}|=n$ for each $i\in[k]$.
2.
For each $i\in[k]$, and for any two vertices $u,v\in V^{*}_{i}$, there is an edge $(u,v)\in E_{sb}$ with probability $a/n$.
3.
For each $i,j\in[k]$ with $i\neq j$, and for any two vertices $u\in V^{*}_{i},v\in V^{*}_{j}$, there is an edge $(u,v)\in E_{sb}$ with probability $b/n$.
We denote the expected number of edges in $G$ by $m$: $m=\tfrac{1}{2}(nka+nk(k-1)b)$.
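Definition 1.1 translates directly into a sampler; below is a minimal sketch (our own naming, assuming $a\le n$ and $b\le n$ so that $a/n$ and $b/n$ are valid probabilities):

```python
import random

def sample_sbm(n, k, a, b, seed=None):
    """Sample a graph from SBM(n, k, a, b): N = n*k vertices with planted
    clusters V*_i = {i*n, ..., (i+1)*n - 1}; intra-cluster pairs become
    edges with probability a/n, inter-cluster pairs with probability b/n.
    """
    rng = random.Random(seed)
    N = n * k
    cluster = [v // n for v in range(N)]  # planted partition P*
    edges = set()
    for u in range(N):
        for v in range(u + 1, N):
            p = a / n if cluster[u] == cluster[v] else b / n
            if rng.random() < p:
                edges.add((u, v))
    return edges, cluster
```

The expected number of edges matches the paper's $m=\tfrac{1}{2}(nka+nk(k-1)b)$, since there are $k\binom{n}{2}$ intra-cluster pairs with probability $a/n$ and $\binom{k}{2}n^{2}$ inter-cluster pairs with probability $b/n$.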
We consider the Stochastic Block model with two types of modeling errors (adversarial noise): the outlier errors and Feige–Kilian [FK98] (monotone) errors.
Definition 1.2 (Stochastic Block Model with modeling errors).
In the Stochastic Block Model $\text{SBM}(n,k,a,b)$ with modeling errors, the graph $G(V,E)$ is generated as follows.
First, a random graph $G_{sb}=(V,E_{sb})$ is sampled from the Stochastic Block Model $\text{SBM}(n,k,a,b)$.
Then the adversary modifies the edge set: starting from $E_{sb}$, it adds some new edges and removes some existing edges, producing $E$. Specifically,
the adversary may do the following:
1.
In the Feige–Kilian or monotone error model, the adversary may add arbitrary edges within the clusters and remove arbitrary edges between the clusters.
2.
In the model with $\varepsilon m$ outliers, the adversary may choose $\varepsilon_{1}\geq 0$ and $\varepsilon_{2}\geq 0$ with $\varepsilon_{1}+\varepsilon_{2}\leq\varepsilon$, then add at most $\varepsilon_{1}m$ edges between the clusters and remove at most $\varepsilon_{2}m$ edges within the clusters.
3.
In the model with two types of errors, the adversary may introduce both types of errors.
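The outlier adversary of item 2 can be illustrated as follows. This sketch picks the corrupted edges at random for concreteness, whereas the actual adversary may choose them arbitrarily, which is exactly what the robustness guarantees must withstand:

```python
import random

def apply_outlier_errors(edges, cluster, eps1_budget, eps2_budget, seed=None):
    """Outlier adversary of Definition 1.2: remove up to eps2_budget
    intra-cluster edges, then add up to eps1_budget inter-cluster edges.
    The random choice is an illustration only; a real adversary may
    select the edges to corrupt arbitrarily.
    """
    rng = random.Random(seed)
    edges = set(edges)
    N = len(cluster)
    # Remove intra-cluster edges.
    intra = [e for e in edges if cluster[e[0]] == cluster[e[1]]]
    for e in rng.sample(intra, min(eps2_budget, len(intra))):
        edges.discard(e)
    # Add inter-cluster non-edges.
    inter_non = [(u, v) for u in range(N) for v in range(u + 1, N)
                 if cluster[u] != cluster[v] and (u, v) not in edges]
    for e in rng.sample(inter_non, min(eps1_budget, len(inter_non))):
        edges.add(e)
    return edges
```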
Our goal is to find the unknown planted partition $(V_{1}^{*},\dots,V_{k}^{*})$ given the graph $G=(V,E)$ from the Stochastic Block Model with modelling errors. However, in this paper,
we focus on the regime where the exact recovery is impossible even information–theoretically. So we are interested in designing polynomial–time algorithms that partially recover the planted partition.
Definition 1.3.
We say that a partition $V_{1},\dots,V_{k}$ is $\delta$-close to the planted partition $V_{1}^{*},\dots,V_{k}^{*}$ if each cluster $V_{i}$ has size exactly $n$ and there is a permutation $\sigma$ of the indices such that
$$\Bigl{|}\bigcup_{i=1}^{k}\bigl(V_{i}^{*}\cap V_{\sigma(i)}\bigr)\Bigr{|}\geq(1-\delta)kn.$$
An algorithm $(1-\delta)$-partially recovers the planted partition if it finds a partition that is $\delta$-close to the planted partition.
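Checking $\delta$-closeness amounts to maximizing the total overlap over permutations of the cluster labels; for small $k$, brute force over all $k!$ permutations suffices (a sketch with our own naming):

```python
from itertools import permutations

def partition_agreement(found, planted, k, n):
    """Largest total overlap sum_i |V*_i ∩ V_{sigma(i)}| over permutations
    sigma, computed by brute force over all k! permutations (fine for
    small k).  The found partition is delta-close to the planted one iff
    this value is at least (1 - delta) * k * n.

    found, planted : lists mapping each vertex in [0, k*n) to a cluster
    index in [0, k).
    """
    N = k * n
    overlap = [[0] * k for _ in range(k)]
    for v in range(N):
        overlap[planted[v]][found[v]] += 1
    return max(sum(overlap[i][sigma[i]] for i in range(k))
               for sigma in permutations(range(k)))
```

For larger $k$, this maximization is an assignment problem and can be solved in polynomial time with the Hungarian algorithm.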
We present two algorithms for partial recovery. The first algorithm can handle instances with both monotone and outlier errors, while the second algorithm
handles only instances with outlier errors. The second algorithm also has stronger requirements on $a$ and $b$. However, it has a much better recovery guarantee.
Theorem 1.4 (First Algorithm).
Consider the stochastic block model $\text{SBM}(n,k,a,b)$ with $\varepsilon m$ outliers and monotone errors.
There is a polynomial-time algorithm that $(1-\delta)$-partially recovers the planted partition given an instance of the model, where
$$\delta=O\Bigl(\frac{\sqrt{a+b(k-1)}}{a-b}+\frac{\varepsilon\left(a+b(k-1)\right)}{a-b}\Bigr).$$
The algorithm succeeds with probability at least $1-3\exp\left(-2N\right)$ over the randomness of the instance.
Furthermore, for any $\eta\in(0,\tfrac{1}{2})$, with probability at least $1-3\exp\left(-\eta m(1-\tfrac{2N}{\eta m})\right)$, the algorithm $(1-\delta^{\prime})$-partially recovers the planted partition
with
$$\delta^{\prime}=O\Bigl(\frac{(\varepsilon+\sqrt{\eta})\left(a+b(k-1)\right)}{a-b}\Bigr).$$
For the case of two clusters ($k=2$), we prove that the result of Theorem 1.5 is asymptotically optimal (see Theorem 7.1).
Theorem 1.5 (Second Algorithm).
Consider the stochastic block model $\text{SBM}(n,k,a,b)$ with $\varepsilon m$ outliers (without any monotone modeling errors).
Assume that
$$\frac{\sqrt{a+b(k-1)}}{a-b}+\frac{\varepsilon\left(a+b(k-1)\right)}{a-b}\leq c/k,$$
where $c>0$ is some absolute constant.
Let $\delta_{0}=ke^{-\frac{(a-b)^{2}}{100a}}$ and $\delta=O(\delta_{0}+\frac{\varepsilon m}{(a-b)kn})$.
There is a randomized polynomial-time algorithm that $(1-\delta)$-partially recovers the planted partition.
The algorithm succeeds with probability at least $1-3\exp(-\delta_{0}kn)$ over the randomness of the instance and random bits used by the algorithm.
Let us compare the performance of our algorithms to the performance of the state of the art algorithms for the Stochastic Block Model.
•
If no adversarial noise is present, our first algorithm works under the same condition on parameters $a$, $b$ and $k$:
$$\frac{(a-b)^{2}}{a+b(k-1)}>C\quad\text{for some absolute constant }C$$
as the algorithm by Abbe and Sandon [AS15] for SBM (the absolute constant $C$ in our condition is different from that in [AS15]).
•
Our second algorithm achieves the same recovery rate as the algorithm of Chin et al. [CRV15] for SBM
(this is not surprising, since our second algorithm uses the “boosting” technique developed in [CRV15]).
We note that, unlike many previously known algorithms for the Stochastic Block Model, our recovery algorithms fail with probability that is exponentially small in $\eta m$ (when $\eta>4N/m$).
In particular, this implies that the algorithm from Theorem 1.4 works even if we sample the initial graph $G_{sb}$ not from $\text{SBM}(n,k,a,b)$
but from a distribution that is $(\lambda m)$-close to $\text{SBM}(n,k,a,b)$ in the KL-divergence distance (see Section 6).
Theorem 1.6.
Let $\cal G$ be a distribution that is $\lambda m$-close to $\text{SBM}(n,k,a,b)$ in the KL divergence:
$D_{KL}({\cal G},\text{SBM}(n,k,a,b))\leq\lambda m$. Suppose that $m\geq 4N$. Consider a model where the graph is sampled from
the distribution $\cal G$ and then the adversary introduces monotone and outlier modeling errors (with parameter $\varepsilon m$).
The algorithm from Theorem 1.4 works in this model with the same recovery guarantee:
$$\delta=O\Bigl(\frac{(\varepsilon+\sqrt{\eta})\left(a+b(k-1)\right)}{a-b}+\frac{2\sqrt{a+b(k-1)}}{a-b}\Bigr).$$
It may fail with probability at most $2\lambda/\eta$.
Organization
Our algorithms are based on semidefinite programming, so we start by
presenting our SDP relaxation for the partition recovery problem (see Section 2).
Then, in Section 3,
we consider the optimal solution to this SDP. The solution assigns vectors to vertices of the graph.
We prove that the vectors assigned to vertices in the same cluster are close to each other (on average),
while the vectors assigned to vertices in different clusters are far from each other (on average).
To show this, we rely on the ideas from the paper by Guedon and Vershynin [GV14].
However, we develop a more robust approach than the one used in [GV14]. In [GV14], it is shown that the SDP matrix (the Gram matrix of the SDP vectors)
is very close to a particular rank-$k$ solution, namely the solution that encodes the planted partition.
This argument, however, is not very robust and does not work in the presence of even small amounts of noise.
In this work, we instead argue that the vectors are clustered in space consistently with the planted partition — most vectors in the same cluster are very close to each other, and most vectors in distinct clusters are far apart. As we show, this geometric structural property holds even in the presence of adversarial noise.
In Section 4, we present our first algorithm and prove Theorem 1.4.
Roughly speaking, the algorithm clusters together SDP vectors that lie close to each other, obtains
a partition of SDP vectors, and
outputs the corresponding partition of vertices. In Section 5, we show how to
“boost” the performance of this algorithm by using the technique by Chin et al. [CRV15]. This yields
Theorem 1.5.
Finally,
we present Theorem 1.6 in Section 6 and describe our negative results in Section 7.
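As a toy illustration of the clustering step, one can greedily group vectors that lie within a small ball of one another. This is a simplified stand-in for the algorithm's actual procedure, not the paper's method:

```python
import numpy as np

def greedy_ball_clustering(vectors, radius, k):
    """Cluster points by greedily growing balls: repeatedly pick an
    unassigned point, group everything within `radius` of it, and keep
    the k largest groups.  A simplified stand-in for clustering the SDP
    vectors; the paper's actual procedure may differ.
    """
    X = np.asarray(vectors, dtype=float)
    unassigned = set(range(len(X)))
    groups = []
    while unassigned:
        c = unassigned.pop()
        ball = {j for j in unassigned
                if np.linalg.norm(X[j] - X[c]) <= radius}
        unassigned -= ball
        groups.append(sorted(ball | {c}))
    groups.sort(key=len, reverse=True)
    return groups[:k]
```

When most vectors within a planted cluster are close and vectors across clusters are far apart (the structural property proved in Section 3), the $k$ largest balls recover most of the planted partition.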
1.1 Overview of Prior Work
We will now review prior work on learning probabilistic models for graph partitioning while focusing on algorithms that give polynomial time guarantees. In what follows, $C$ denotes a constant that is chosen to be sufficiently large.
Stochastic Block Models
The Stochastic Block Model is the most widely studied probabilistic model for community detection and graph partitioning in different fields like machine learning, computer science, statistics and social sciences (see e.g. [BLCS87, HLL83, WBB76, For10]). This model is also sometimes called the Planted Partitioning model and was studied in a series of papers, which among others include Dyer and Frieze [DF86], Boppana [Bop87], Jerrum and Sorkin [JS93], Dimitriou and Impagliazzo [DI98], Condon and Karp [CK99], McSherry [McS01] and Coja-Oghlan [CO06]. The existing algorithmic guarantees for the Stochastic Block Model fall into three broad categories: exact recovery, weak recovery or detection and partial recovery.
For exactly recovering the communities, provable guarantees are known for many different algorithms like spectral algorithms, convex relaxations and belief propagation. These algorithms need sufficient difference between the average intra-cluster degree $a$ and inter-cluster degree $b$, and a lower bound on the average degree $a+b=\Omega(\log n)$. For $k=2$ clusters, Boppana [Bop87] used spectral techniques to give an algorithm that recovers the clusters when $a-b\geq C\cdot\sqrt{a\log n}$. Recently, Abbe et al. [ABH14] and Mossel et al. [MNS15] determined sharp thresholds for exact recovery in the case of $k=2$ communities. The influential work of McSherry [McS01] used spectral clustering to handle a more general class of stochastic block models with many clusters, and the guarantees have been subsequently improved in different parameter regimes of $a,b,k$ by various works using both spectral techniques and convex relaxations [CSX12, Ame14, Vu14].
The goal in weak recovery (or detection) is to output a partition of the nodes that is positively correlated with the true partition with high probability. This problem was introduced by Coja-Oghlan [Co10]. Decelle et al. [DKMZ11] conjectured that for $k=2$ clusters there is a sharp phase transition depending on whether $\frac{(a-b)^{2}}{a+b}>2$, and this was settled independently by Massoulié [Mas14] and Mossel et al. [MNS12, MNS13]. It was also recently shown that semidefinite programs get close to this threshold [MS15]. (The algorithm of [MNS13, Mas14] uses non-backtracking random walks.) The problem is still open for $k>2$ communities, and the conjecture of Decelle et al. [DKMZ11, MNS13] for larger $k$ is that the clustering problem can be solved in polynomial time when $\frac{(a-b)^{2}}{a+(k-1)b}>k$.
In partial recovery, the goal is to recover the clusters in the planted partition up to $\eta N$ vertices, i.e., up to $\eta N$ vertices are allowed to be misclassified in total (here $\eta$ can be thought of as $o(1)$). Coja-Oghlan [Co10] and Mossel et al. [MNS14] studied this problem for the case of $k=2$ communities. Guedon and Vershynin [GV14] analyzed the semidefinite programming relaxation using the Grothendieck inequality to partially recover the communities (for $k=2$) when $(a-b)^{2}>C(a+b)/\eta^{2}$. These results were extended to the case of $k$ communities by [GV14, CRV15, AS15]. The algorithm of [CRV15] recovers the communities up to $\eta$ error when $\frac{(a-b)^{2}}{a}\geq Ck^{2}\log(1/\eta)$ (in fact, [CRV15] gives the stronger guarantee of recovering each cluster up to $\eta n$ vertices). These results were recently improved by [AS15], who gave algorithms and information-theoretic lower bounds for partial recovery in fairly general stochastic block models.
Semirandom models
Semi-random models provide robust alternatives to average-case models by allowing much more structure than completely random instances.
Research on semi-random models was initiated by [BS95], who introduced and investigated semi-random models for $k$-coloring.
Feige and Kilian [FK98] studied a semi-random model for Minimum Bisection (two communities of size $n$ each), which introduced the notion of a monotone adversary. The graph is generated in two steps: first, a graph is generated according to $\text{SBM}(n,2,a,b)$, and then an adversary is allowed to either add edges inside the clusters or delete some of the edges between the clusters. They showed that semidefinite programs remain integral when $a-b\geq C\cdot\sqrt{a\log n}$. This was also extended to the case of $k$ clusters by [CSX12, ABKK15].
The results of [MMV12, MMV14, MMV15] use semidefinite programming to give algorithmic guarantees for various average-case models of graph partitioning and clustering. These works [MMV12, MMV14] consider probabilistic models for Balanced Cut (where the two clusters have roughly equal size) that are more general than stochastic block models, but they are incomparable to the models considered in this work. Moreover, the focus of [MMV12, MMV14] is to find a Balanced Cut of small cost (the partition returned by the algorithm need not be close to the planted partition), and they make no structural assumptions on the graph inside the clusters. The algorithm in [MMV12] also returns a partition close to the planted partition under some mild assumptions about the expansion inside the clusters; however, it requires that $b=\tilde{O}(\sqrt{\log n})$, while the focus of this work is the regime where $a$ and $b$ are constants.
Handling Modeling Errors
The most related result in terms of modeling robustness is the recent work of Cai and Li [CL15], who consider the stochastic block model in the presence of some outlier vertices. The graph is generated in two steps: first a graph is drawn according to a stochastic block model $SBM(n,k,a,b)$ (they also consider the case where communities have different sizes). In addition, there are $m$ outlier vertices, which can have arbitrary edges to the rest of the graph. Cai and Li [CL15] give an algorithm based on semidefinite programming followed by $k$-means to partially recover the communities. To perform partial recovery, the requirement (condition 3.1 in Theorem 3.1) dictates that $a\geq C\log n$ and $(a-b)>C\left(\sqrt{a\log n}+\sqrt{kb}+m\sqrt{k}\right)$. Further, for $a,b=O(\log n)$ they can tolerate up to $O(\log n)$ outliers, and to handle up to $\varepsilon n$ outliers, the condition requires the graph to be very dense, i.e., $a,b=\Omega(\varepsilon n)$.
In the regime when $a+b(k-1)\geq C\log n$, robustness to edge outliers is more general than robustness to vertex outliers. (In this case, the degree of each vertex is tightly concentrated around $a+(k-1)b$, hence one can remove all outlier vertices whose degree is substantially larger than $a+(k-1)b$ in the given graph $G$). Using the results in our work, we can handle the case when $\varepsilon$ fraction of the vertices are corrupted since this corresponds to $\varepsilon$ fraction of the edges being corrupted in our outlier model. Additionally, our algorithm also performs partial recovery in the sparse regime (when $a,b=O(1)$).
Finally, the work of Brubaker [Bru09] gave new algorithms for clustering data arising from a mixture of Gaussians when an $\varepsilon=O(1/(k\log^{2}n))$ fraction of the data points are outliers. Surprisingly, Brubaker showed that this tolerance to noise can be achieved when the separation between the means is only a logarithmic factor more than the separation needed for learning Gaussian mixtures with no noise [KSV05, AM05]. While these results apply to very different problems in unsupervised learning, in the analogous regime our algorithm can tolerate an $\varepsilon=O(1)$ fraction of the observations coming from errors. Our results also handle large errors in the probabilistic model, when measured in KL divergence (up to $\varepsilon m$).
2 Preliminaries
2.1 Notation
Given an equipartition $(V^{*}_{1},\dots,V^{*}_{k})$ of the vertices of $G(V,E)$, let $(V\times V)_{in}$ represent all the pairs of vertices inside the clusters,
and $(V\times V)_{out}$ represent the pairs that go between the clusters. Similarly, let $E_{in}$ be the edges inside the clusters,
and $E_{out}$ be the edges that go between the different clusters.
2.2 SDP Relaxation
Our partition recovery algorithms are based on semidefinite programming. In all our algorithms,
we use the following basic SDP relaxation for the partition recovery problem (the SDP is presented in the vector form).
For every vertex $u$ in the graph, we have a vector variable $\bar{u}$ in the SDP relaxation.
$$\min\ \sum_{(u,v)\in E}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}$$
(2.1)
$$\text{s.t.}\quad\lVert\bar{u}\rVert^{2}=1\qquad\forall u\in V$$
(2.2)
$$\sum_{u,v\in V}\frac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}=n^{2}k(k-1)=N^{2}\left(1-\frac{1}{k}\right)$$
(2.3)
$$\left\langle\bar{u},\bar{v}\right\rangle\geq 0\qquad\forall u,v\in V$$
(2.4)
Note that the summation in constraint (2.3) is over all $N^{2}$ pairs of vertices.
The SDP relaxation does not have $\ell_{2}^{2}$-triangle inequalities.
We denote the optimal value of this SDP relaxation by $\operatorname{sdp}$. Consider the following feasible SDP solution corresponding to the planted partition: assign $\bar{u}=e_{i}$ for all $u\in V_{i}^{*}$ and all $i$, where $e_{1},\dots,e_{k}$ is an orthonormal basis. It is easy to see that this is a feasible SDP solution, and its value equals the number of edges going between partitions. Since the value of the optimal SDP solution is at most the value of this solution,
$$\operatorname{sdp}\leq\bigl|\{(u,v)\in E:u\in V_{i}^{*},v\in V_{j}^{*}\text{ for some }i\neq j\}\bigr|.$$
(2.5)
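As a sanity check on (2.1)–(2.5), the value of the planted assignment $\bar{u}=e_{i}$ can be verified numerically. The sketch below (plain Python; the function name and label encoding are ours, not from the paper) represents $\bar{u}=e_{i}$ by the cluster label of $u$, so that $\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}$ is $0$ for vertices in the same cluster and $1$ otherwise.

```python
def planted_sdp_value(labels, edges, k):
    """Objective and constraint check for the planted SDP solution u -> e_{label(u)}.

    labels: dict vertex -> cluster index in {0, ..., k-1} (n = N/k vertices per cluster).
    edges: iterable of pairs (u, v).
    Returns the SDP objective of this solution (= number of cross edges).
    """
    # With orthonormal basis vectors, 0.5 * ||e_i - e_j||^2 = 0 if i == j else 1.
    half_sq = lambda u, v: 0 if labels[u] == labels[v] else 1

    # Constraint (2.3): the sum over all N^2 ordered pairs must equal N^2 (1 - 1/k).
    V = list(labels)
    N = len(V)
    total = sum(half_sq(u, v) for u in V for v in V)
    assert total == N * N - (N * N) // k  # exact whenever k divides N

    # Objective (2.1): each cross edge contributes exactly 1.
    return sum(half_sq(u, v) for u, v in edges)
```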
3 Structure of the SDP solution
In this section, we analyze the geometric structure of the SDP solution. We show that SDP vectors for vertices in the same cluster are close
to each other (in average); SDP vectors for vertices in different clusters are far away from each other (in average).
We will use quantities $\alpha$ and $\beta$ to denote the average distances assigned by the SDP to pairs of vertices inside clusters and between clusters, respectively. Formally,
$$\alpha=\underset{(u,v)\in(V\times V)_{in}}{\operatorname{Avg}}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}$$
(3.1)
$$\beta=\underset{(u,v)\in(V\times V)_{out}}{\operatorname{Avg}}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}$$
(3.2)
It follows from constraint (2.3) that the values of $\alpha$ and $\beta$ satisfy:
$$\alpha+(k-1)\beta=k-1.$$
(3.3)
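The identity (3.3) is simply constraint (2.3) split over the two pair types: the $N^{2}/k$ ordered pairs inside clusters (including the diagonal pairs $u=v$, which contribute zero distance) and the $N^{2}(1-1/k)$ ordered pairs between clusters:

```latex
N^{2}\Bigl(1-\frac{1}{k}\Bigr)
  = \sum_{u,v\in V}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}
  = \frac{N^{2}}{k}\,\alpha + N^{2}\Bigl(1-\frac{1}{k}\Bigr)\beta\,;
```

dividing both sides by $N^{2}/k$ yields $k-1=\alpha+(k-1)\beta$.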
We prove the following two lemmas that give bounds on values of $\alpha$ and $\beta$.
Lemma 3.1.
Let $G(V,E)$ be a graph generated according to the stochastic block model $\text{SBM}(n,k,a,b)$ with $\varepsilon m$ outliers and any monotone errors. Suppose that
$(a+b(k-1))>C$ for some absolute constant $C$. Then the average intra-cluster distance $\alpha$ and inter-cluster distance $\beta$ satisfy the following bounds with probability at least $1-3\exp\left(-2N\right)$:
$$\alpha\leq\frac{c_{\ref{lem:distances}}\sqrt{a+b(k-1)}}{a-b}+\frac{\varepsilon\left(a+b(k-1)\right)}{a-b},\quad\text{and}\quad\beta\geq 1-\frac{c_{\ref{lem:distances}}\sqrt{a+b(k-1)}}{(k-1)(a-b)}-\frac{\varepsilon\left(a+b(k-1)\right)}{(k-1)(a-b)},$$
(3.4)
where $c_{\ref{lem:distances}}\leq 4\sqrt{2}K_{G}+4\sqrt{3}$ is an absolute constant.
Furthermore, for any $\eta\in(0,1/2)$, with probability at least $1-3\exp\left(-\eta m(1-\tfrac{2N}{\eta m})\right)$
$$\alpha\leq\frac{(\varepsilon+\tfrac{1}{2}c_{\ref{lem:distances}}\sqrt{\eta})\left(a+b(k-1)\right)}{a-b},\quad\text{and}\quad\beta\geq 1-\frac{(\varepsilon+\tfrac{1}{2}c_{\ref{lem:distances}}\sqrt{\eta})\left(a+b(k-1)\right)}{(k-1)(a-b)}.$$
(3.5)
To prove Lemma 3.1, we first give a lower bound on the cost of the SDP solution in terms of
$\alpha$ and $\beta$. A weaker version of Lemma 3.2 can be shown by considering the spectral gap of
the random graph produced by the stochastic model when the average degrees $a,b=\Omega(\log n)$.
Our lemma, however, handles instances with $a,b=\Omega(1)$. Further, it gives a concentration of $\exp(-\varepsilon m)$,
which cannot be obtained using spectral techniques. The proof follows the approach of Guedon and Vershynin [GV14].
Lemma 3.2.
Let $G$ be a graph generated according to $\text{SBM}(n,k,a,b)$ with $m=\tfrac{1}{2}(nka+nk(k-1)b)$, and suppose there are $m^{FK}_{out}$ monotone errors between clusters, i.e., edges removed adversarially between the clusters of the planted partition, and at most $\varepsilon_{2}m$ edges removed adversarially inside clusters. Also suppose $\alpha,\beta$ are the average intra-cluster and inter-cluster distances given by the SDP solution. Then there exists a universal constant $c_{\ref{lem:localglobal}}\leq(\sqrt{3}+2\sqrt{2}K_{G})$ such that with probability at least $1-\exp(-2N)$
$$\operatorname{sdp}=\sum_{(u,v)\in E}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}\geq\alpha\cdot\frac{nka}{2}+\beta\cdot\frac{nk(k-1)b}{2}-c_{\ref{lem:localglobal}}N\sqrt{a+(k-1)b}-\varepsilon_{2}m-m^{FK}_{out}.$$
(3.6)
Further, we have for any $\eta\in(0,\tfrac{1}{2})$ with probability at least $1-\exp\left(-\eta m\left(1-\tfrac{2N}{\eta m}\right)\right)$
$$\operatorname{sdp}=\sum_{(u,v)\in E}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}\geq\alpha\cdot\frac{nka}{2}+\beta\cdot\frac{nk(k-1)b}{2}-c_{\ref{lem:localglobal}}\sqrt{\eta}m-\varepsilon_{2}m.$$
(3.7)
Remark 3.3.
We can obtain a similar upper bound on the SDP value in terms of $\alpha$ and $\beta$, but we do not need it for our purposes. The second part of the lemma (3.7) will be used with values of $\eta>4N/m$ (think of $\eta$ as a small constant), hence the failure probability will be at most $\exp(-\eta m/2)$.
Proof.
By Grothendieck’s inequality [GL76, AN06, AMMN06, BMMN11], we have that for any matrix $M\in\mathbb{R}^{n\times n}$,
$$\max_{\begin{subarray}{c}u_{1},\dots,u_{n},\,v_{1},\dots,v_{n}\\ \forall i,j:\ \lVert u_{i}\rVert=\lVert v_{j}\rVert=1\end{subarray}}\,\sum_{i,j=1}^{n}M_{ij}\left\langle u_{i},v_{j}\right\rangle\leq K_{G}\cdot\max_{x,y\in\{-1,1\}^{n}}\sum_{i,j=1}^{n}M_{ij}x_{i}y_{j}=K_{G}\lVert M\rVert_{\infty\rightarrow 1},$$
(3.8)
where $K_{G}\leq 1.79$ is the Grothendieck constant.
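For intuition about the right-hand side, $\lVert M\rVert_{\infty\rightarrow 1}$ can be computed by brute force on tiny matrices. The sketch below (plain Python, exponential in the dimension, illustration only) enumerates all sign vectors $x,y$.

```python
from itertools import product

def infty_to_one_norm(M):
    """Brute-force ||M||_{infty -> 1} = max over sign vectors x, y of x^T M y.

    M: square matrix as a list of lists. Exponential in the dimension,
    so this is only meant for tiny illustrative examples.
    """
    n = len(M)
    best = float("-inf")
    for x in product((-1, 1), repeat=n):
        for y in product((-1, 1), repeat=n):
            val = sum(M[i][j] * x[i] * y[j] for i in range(n) for j in range(n))
            best = max(best, val)
    return best
```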
Let $B$ be the adjacency matrix representing the edges in $E_{sb}$, i.e., the edges of the graph drawn from $\text{SBM}(n,k,a,b)$ before adding any adversarial edges. Consider the matrix $M=B-\operatorname*{\mathbb{E}}B$, and the unit vectors $\{\bar{u}\}_{u\in V(G)}$ given by the SDP solution.
Note that we know the expectations of entries of matrix $B$, since the set of edges $E_{sb}$ comes from the stochastic block model:
if $u=v$, then $(\operatorname*{\mathbb{E}}B)_{uv}=0$; otherwise, if $(u,v)\in(V\times V)_{in}$, then $(\operatorname*{\mathbb{E}}B)_{uv}=a/n$; and if $(u,v)\in(V\times V)_{out}$, then $(\operatorname*{\mathbb{E}}B)_{uv}=b/n$.
We now bound the contribution of edges from $E_{sb}$ to the SDP objective. We use that $\frac{1}{2}\|\bar{u}-\bar{v}\|^{2}=1-\langle\bar{u},\bar{v}\rangle$ (since $\bar{u}$ and $\bar{v}$
are unit vectors).
$$2\sum_{(u,v)\in E_{sb}}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}=\sum_{u,v\in V}B_{uv}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}=\sum_{u,v\in V}\operatorname*{\mathbb{E}}B_{uv}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}+\sum_{u,v\in V}(B-\operatorname*{\mathbb{E}}B)_{uv}\left(1-\left\langle\bar{u},\bar{v}\right\rangle\right)$$
$$\geq\sum_{u,v\in V}\operatorname*{\mathbb{E}}B_{uv}\cdot\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}+\sum_{u,v\in V}B_{uv}-\sum_{u,v\in V}\operatorname*{\mathbb{E}}B_{uv}-K_{G}\cdot\lVert B-\operatorname*{\mathbb{E}}B\rVert_{\infty\rightarrow 1}.$$
Thus,
$$\sum_{(u,v)\in E_{sb}}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}\geq\frac{nka}{2}\cdot\alpha+\frac{nk(k-1)b}{2}\cdot\beta+\lvert E_{sb}\rvert-m-\frac{K_{G}}{2}\cdot\lVert B-\operatorname*{\mathbb{E}}B\rVert_{\infty\rightarrow 1}.$$
We now use Lemma A.2 to bound $\lVert B-\operatorname*{\mathbb{E}}B\rVert_{\infty\rightarrow 1}$. Note that in our case,
$$\sigma^{2}=\tfrac{1}{2}\left(nka+nk(k-1)b\right)=\tfrac{N}{2}\left(a+(k-1)b\right)=m.$$
Then $|E_{sb}|-m$ is tightly concentrated by Fact A.1 (use $t^{\prime}=\sqrt{6Nm}$). Hence, by applying Lemma A.2
with $t=8\sqrt{N}$, we get that with probability at least $1-e^{-2N}$,
$$\sum_{(u,v)\in E_{sb}}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}\geq\alpha\cdot\frac{Na}{2}+\beta\cdot\frac{N(k-1)b}{2}-\sqrt{3}N\sqrt{a+(k-1)b}-2\sqrt{2}K_{G}N\sqrt{a+(k-1)b}.$$
Similarly, by picking $t=4\sqrt{\eta m}$ in Lemma A.2 (note that $t\leq 3\sigma$ since $\eta<9/16$) and $t^{\prime}=\sqrt{3\eta}m$ in Fact A.1, we get that with probability $1-\exp\left(-\eta m\left(1-\tfrac{2N}{\eta m}\right)\right)$,
$$\sum_{(u,v)\in E_{sb}}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}\geq\alpha\cdot\frac{Na}{2}+\beta\cdot\frac{N(k-1)b}{2}-(\sqrt{3}+\sqrt{2}K_{G})\sqrt{\eta}m.$$
Finally, the adversary may delete at most $\varepsilon_{2}m$ edges, decreasing the SDP value by at most $\varepsilon_{2}m$:
$$\sum_{(u,v)\in E}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}\geq\sum_{(u,v)\in E_{sb}}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}-\varepsilon_{2}m.$$
∎
We now prove Lemma 3.1 that shows that the vectors $\bar{u}$ for vertices $u\in V_{i}^{*}$ are geometrically clustered together.
Proof of Lemma 3.1.
We first note that the value of the SDP relaxation is at most the value of the planted partition (see inequality (2.5)),
which before the adversarial perturbations is tightly concentrated around $nk(k-1)b/2$.
Suppose that there are $m^{FK}_{out}$ edges deleted between the clusters in the planted partition by the monotone adversary. Hence, for every $\tau>0$ with probability at least $1-\exp(-\tau^{2}/3)$,
$$\operatorname{sdp}=\sum_{(u,v)\in E}\tfrac{1}{2}\lVert\bar{u}-\bar{v}\rVert^{2}\leq\tfrac{1}{2}N(k-1)b+\tau\sqrt{\tfrac{1}{2}N(k-1)b}+\varepsilon_{1}m-m^{FK}_{out},$$
where the inequality follows because the adversary may add at most $\varepsilon_{1}m$ edges. Further, from Lemma 3.2, we know that with high probability
$$\operatorname{sdp}\geq\frac{N}{2}\left(a\alpha+b(k-1)\beta\right)-\nu-\varepsilon_{2}m-m^{FK}_{out},$$
where $\nu$ is given by equations (3.6) and (3.7) (depending on the required concentration). Combining these two bounds on the SDP value, we get
$$\frac{N}{2}\left(a\alpha+b(k-1)\beta\right)-\nu-\varepsilon_{2}m\leq\tfrac{1}{2}N(k-1)b+\tau\sqrt{N(k-1)b/2}+\varepsilon_{1}m.$$
Using (3.3) and $\varepsilon_{1}+\varepsilon_{2}\leq\varepsilon$,
$$\frac{N}{2}\alpha(a-b)\leq\nu+\varepsilon\cdot\tfrac{N}{2}\left(a+b(k-1)\right)+\tau\sqrt{N(k-1)b/2},$$
and hence
$$\alpha\leq\frac{\varepsilon}{a-b}+\frac{\nu}{N(a-b)/2}+\frac{\tau\sqrt{N(k-1)b/2}}{N(a-b)/2}.$$
Set $c_{\ref{lem:distances}}=2c_{\ref{lem:localglobal}}+2\sqrt{3}$.
To derive (3.4), we use the bound on $\alpha$ given by (3.6), along with $\tau=\sqrt{6N}$ (and a union bound) to get that with probability
at least $1-3\exp(-2N)$
$$\alpha\leq\frac{\varepsilon}{a-b}+\frac{2c_{\ref{lem:localglobal}}\sqrt{a+b(k-1)}}{a-b}+\frac{2\sqrt{3}\sqrt{(k-1)b}}{a-b}\leq\frac{\varepsilon}{a-b}+\frac{2c_{\ref{lem:localglobal}}\sqrt{a+b(k-1)}}{a-b}+\frac{2\sqrt{3}\sqrt{a+b(k-1)}}{a-b}.$$
Similarly, we derive (3.5) by using $\tau=\sqrt{3\eta m}$ along with the bound on $\nu$ given by (3.7). We get that with probability at least
$1-3\exp\left(-\eta m(1-\tfrac{2N}{\eta m})\right)$
$$\alpha\leq\frac{\varepsilon}{a-b}+\frac{c_{\ref{lem:localglobal}}\sqrt{\eta}\left(a+b(k-1)\right)}{a-b}+\frac{\sqrt{3\eta}\,(a+b(k-1))}{a-b},$$
as required.
∎
4 First Algorithm
In this section, we present our first algorithm for partial recovery. Given the SDP solution, the algorithm finds a partition $V_{1},\dots,V_{k}$ of $V$,
which is close to the planted partition $V_{1}^{*},\dots,V_{k}^{*}$.
Definition 4.1.
Consider a feasible SDP solution $\{\bar{u}\}_{u\in V}$. We define the center $\bar{W}_{i}$ of cluster $V_{i}^{*}$ as
$$\bar{W}_{i}=\operatorname*{Avg}_{u\in V_{i}^{*}}\bar{u}.$$
For every vertex $u$ let $R_{u}=\|\bar{u}-\bar{W}_{i}\|$, where $\bar{W}_{i}$ is the center of the cluster $V_{i}^{*}$ that contains $u$.
Let $\alpha_{i}=\frac{1}{2}\operatorname*{Avg}_{u,v\in V_{i}^{*}}\|\bar{u}-\bar{v}\|^{2}$.
Definition 4.2.
Let $\rho=1/5$ and $\Delta=6\rho=6/5$. We define the core of cluster $V_{i}^{*}$ as
$$\operatorname{core}(i)=\{u\in V_{i}^{*}:\|\bar{u}-\bar{W}_{i}\|<\rho\}.$$
We say that centers $\bar{W}_{i}$ and $\bar{W}_{j}$ are well-separated if $\|\bar{W}_{i}-\bar{W}_{j}\|\geq\Delta$.
A set of clusters ${\cal S}$ is well-separated if the centers of every two clusters in ${\cal S}$ are well-separated.
We now prove basic facts about centers $\bar{W}_{i}$ and parameters $\alpha,\alpha_{i},\beta$.
Lemma 4.3.
We have,
1.
$\operatorname*{Avg}_{i}\alpha_{i}=\alpha$.
2.
$\operatorname*{Avg}_{u\in V_{i}^{*}}R_{u}^{2}=\alpha_{i}$.
3.
$\operatorname*{Avg}_{u\in V}R_{u}^{2}=\alpha$.
4.
$\operatorname*{Avg}_{i\neq j}\langle\bar{W}_{i},\bar{W}_{j}\rangle=1-\beta=\alpha/(k-1)$.
5.
$\|\bar{W}_{i}\|^{2}=1-\alpha_{i}$.
Proof.
1. This follows immediately from the definitions of $\alpha$ and $\alpha_{i}$.
2. Write,
$$2\alpha_{i}=\operatorname*{Avg}_{u,v\in V_{i}^{*}}\|\bar{u}-\bar{v}\|^{2}=\operatorname*{Avg}_{u,v\in V_{i}^{*}}\|(\bar{u}-\bar{W}_{i})-(\bar{v}-\bar{W}_{i})\|^{2}$$
$$=\operatorname*{Avg}_{u,v\in V_{i}^{*}}\left(\|\bar{u}-\bar{W}_{i}\|^{2}+\|\bar{v}-\bar{W}_{i}\|^{2}\right)-2\operatorname*{Avg}_{u,v\in V_{i}^{*}}\langle\bar{u}-\bar{W}_{i},\bar{v}-\bar{W}_{i}\rangle=2\operatorname*{Avg}_{u\in V_{i}^{*}}R_{u}^{2},$$
where the cross term vanishes since $\operatorname*{Avg}_{u\in V_{i}^{*}}(\bar{u}-\bar{W}_{i})=0$.
3. This follows from items 1 and 2.
4. Write,
$$\beta=\frac{1}{2}\operatorname*{Avg}_{i\neq j}\operatorname*{Avg}_{u\in V_{i}^{*},v\in V_{j}^{*}}\|\bar{u}-\bar{v}\|^{2}=\operatorname*{Avg}_{i\neq j}\operatorname*{Avg}_{u\in V_{i}^{*},v\in V_{j}^{*}}(1-\langle\bar{u},\bar{v}\rangle)$$
$$=1-\operatorname*{Avg}_{i\neq j}\left\langle\operatorname*{Avg}_{u\in V_{i}^{*}}\bar{u},\operatorname*{Avg}_{v\in V_{j}^{*}}\bar{v}\right\rangle=1-\operatorname*{Avg}_{i\neq j}\langle\bar{W}_{i},\bar{W}_{j}\rangle.$$
We get that $\operatorname*{Avg}_{i\neq j}\langle\bar{W}_{i},\bar{W}_{j}\rangle=1-\beta=\alpha/(k-1)$.
5. Write,
$$\alpha_{i}=\operatorname*{Avg}_{u\in V_{i}^{*}}R_{u}^{2}=\operatorname*{Avg}_{u\in V_{i}^{*}}\|\bar{W}_{i}-\bar{u}\|^{2}=\|\bar{W}_{i}\|^{2}+1-2\operatorname*{Avg}_{u\in V_{i}^{*}}\langle\bar{W}_{i},\bar{u}\rangle=1-\|\bar{W}_{i}\|^{2}.$$
The claim follows.
∎
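The identities in items 2 and 5 are easy to confirm numerically for a single cluster. The sketch below (plain Python; the helper name is ours) computes $\alpha_{i}$ three ways — via $\operatorname*{Avg}R_{u}^{2}$, via pairwise distances, and via $1-\lVert\bar{W}_{i}\rVert^{2}$ — and all three should agree.

```python
def check_center_identities(vectors):
    """Compute alpha_i for one cluster in three ways (items 2 and 5 of Lemma 4.3).

    vectors: list of unit vectors (tuples of floats) for one cluster V_i^*.
    Averages are over ordered pairs, including u = v, as in the lemma.
    Returns (Avg R_u^2, (1/2) Avg ||u - v||^2, 1 - ||W_i||^2).
    """
    d = len(vectors[0])
    W = [sum(v[t] for v in vectors) / len(vectors) for t in range(d)]  # center
    sq = lambda x: sum(c * c for c in x)

    avg_R2 = sum(sq([a - b for a, b in zip(u, W)]) for u in vectors) / len(vectors)
    avg_pair = sum(
        sq([a - b for a, b in zip(u, v)]) for u in vectors for v in vectors
    ) / len(vectors) ** 2
    return avg_R2, avg_pair / 2, 1 - sq(W)
```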
Lemma 4.4.
Let $V^{\prime}=\bigcup_{i}V_{i}^{*}\setminus\operatorname{core}(i)$. That is, a vertex $u$ lies in $V^{\prime}$ if it does not lie in the core of the cluster that contains it.
Then
$$|V^{\prime}|\leq\frac{\alpha}{\rho^{2}}\,kn.$$
Proof.
Note that $u\in V^{\prime}$ if and only if $R_{u}\geq\rho$, or, equivalently, $R_{u}^{2}\geq\rho^{2}$. Since $\operatorname*{Avg}_{u\in V}R_{u}^{2}=\alpha$, we get by the Markov inequality that
$|V^{\prime}|\leq(\alpha/\rho^{2})|V|=(\alpha/\rho^{2})kn$.
∎
We now prove that by removing at most a $\delta$ fraction of all clusters, we can obtain a well-separated set of clusters.
Lemma 4.5.
Let $\delta=6\alpha/(2-\Delta^{2})$. There exists a set ${\cal S}\subset\{V_{1}^{*},\dots,V_{k}^{*}\}$ of well-separated clusters of size at least $(1-\delta)k$.
Proof.
Let $\mu=\alpha/\delta$. From the Markov inequality and item 4 in Lemma 4.3, we get that there are at most
$$\frac{\alpha}{(k-1)\mu}\times\frac{k(k-1)}{2}=\frac{\alpha k}{2\mu}=\frac{\delta k}{2}$$
unordered pairs $\{i,j\}$ with $\langle\bar{W}_{i},\bar{W}_{j}\rangle\geq\mu$. We choose one of the elements in each pair and remove the corresponding clusters.
We obtain a set of clusters ${\cal S}_{0}$ of size at least $(1-\delta/2)k$. By the construction, for every distinct $V_{i}^{*}$ and $V_{j}^{*}$ in ${\cal S}_{0}$, we have
$\langle\bar{W}_{i},\bar{W}_{j}\rangle<\mu$.
Let ${\cal S}_{1}$ be the set of clusters $V_{i}^{*}$ with $\alpha_{i}\leq 2\alpha/\delta$. By the Markov inequality and item 1 in Lemma 4.3,
the set ${\cal S}_{1}$ contains at least $(1-\delta/2)k$ clusters.
Finally, let ${\cal S}={\cal S}_{0}\cap{\cal S}_{1}$. Clearly, $|{\cal S}|\geq(1-\delta)k$. For every two clusters $V_{i}^{*}$ and $V_{j}^{*}$ in ${\cal S}$, we have
$$\|\bar{W}_{i}-\bar{W}_{j}\|^{2}=\|\bar{W}_{i}\|^{2}+\|\bar{W}_{j}\|^{2}-2\langle\bar{W}_{i},\bar{W}_{j}\rangle>(1-\alpha_{i})+(1-\alpha_{j})-2\mu\geq 2(1-3\alpha/\delta)=\Delta^{2}.$$
∎
Now we are ready to present our algorithm that finds a partition close to the planted partition.
The algorithm resembles the clustering algorithm from [MMV15].
Recovery Algorithm
Input: an optimal SDP solution $\left\{\bar{u}\right\}_{u\in V}$.
Output: partition $V_{1},\dots,V_{k^{\prime}}$ of $V$ into $k^{\prime}$ clusters ($k^{\prime}$ might not be equal to $k$).
$i=1$; $\rho=1/5$ (as in Definition 4.2)
Define an auxiliary graph $G_{aux}=(V,E_{aux})$ with $E_{aux}=\left\{(u,v):\|\bar{u}-\bar{v}\|<2\rho\right\}$
(note that, $(u,u)\in E_{aux}$ for every $u\in V$)
while $V\setminus(V_{1}\cup\dots\cup V_{i-1})\neq\varnothing$
Let $u$ be the vertex of maximum degree in $G_{aux}[V\setminus(V_{1}\cup\dots\cup V_{i-1})]$.
Let $V_{i}=\left\{v\notin V_{1}\cup\dots\cup V_{i-1}:(u,v)\in E_{aux}\right\}$
If $|V_{i}|>n$, remove $|V_{i}|-n$ vertices from $V_{i}$ arbitrarily, so that $|V_{i}|=n$.
$i=i+1$
return clusters $V_{1},\dots,V_{i-1}$.
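The greedy loop above can be transcribed directly into plain Python. The sketch below is ours (vertex and vector types are our choices; ties for the maximum-degree vertex are broken arbitrarily, as the algorithm allows).

```python
import math

def recovery_algorithm(vecs, n, rho=0.2):
    """Greedy clustering of SDP vectors (sketch of the Recovery Algorithm).

    vecs: dict mapping vertex -> unit vector (tuple of floats) from the SDP;
          vertices are assumed sortable.
    n: target cluster size (upper bound on each output cluster).
    rho: radius parameter; 1/5 matches Definition 4.2.
    Returns a list of clusters (lists of vertices).
    """
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(vecs[u], vecs[v])))

    # Auxiliary graph: u ~ v iff ||u - v|| < 2*rho (self-loops included).
    adj = {u: {v for v in vecs if dist(u, v) < 2 * rho} for u in vecs}

    remaining = set(vecs)
    clusters = []
    while remaining:
        # Vertex of maximum degree in the auxiliary graph restricted to `remaining`.
        u = max(remaining, key=lambda w: len(adj[w] & remaining))
        cluster = sorted(adj[u] & remaining)[:n]  # trim to at most n vertices
        remaining -= set(cluster)
        clusters.append(cluster)
    return clusters
```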
We now show that the algorithm finds a “good” partition $V_{1},\dots,V_{k^{\prime}}$. However, these clusters are not necessarily all of the same size, so we cannot say that the partition is $\delta$-close to the planted partition according to Definition 1.3.
We will, however, prove that the partition is $\delta$-close to the planted partition in the weak sense.
Definition 4.6 (cf. with Definition 1.3).
We say that a partition $V_{1},\dots,V_{k^{\prime}}$ is $\delta$-close to the planted partition
$V_{1}^{*},\dots,V_{k}^{*}$ in the weak sense, if each cluster $V_{i}$ has size at most $n$ and there is a partial matching $\sigma$ between $1,\dots,k$ and $1,\dots,{k^{\prime}}$ such that
$$\Bigl{|}\bigcup_{j=\sigma(i)}V_{i}^{*}\cap V_{j}\Bigr{|}\geq(1-\delta)kn$$
(the union is over all $i$ such that $\sigma(i)$ is defined).
Theorem 4.7.
The Recovery Algorithm finds a partitioning $V_{1},\dots,V_{k^{\prime}}$ of $V$ that is $(72\alpha)$-close to the planted partition in the weak sense.
Proof.
Let ${\cal S}$ be the set of clusters from Lemma 4.5.
Consider a cluster $V_{j}$. We first show that it cannot intersect the cores of two distinct clusters $V_{i_{1}}^{*}\in{\cal S}$ and $V_{i_{2}}^{*}\in{\cal S}$.
Assume to the contrary that it does. Let $u_{1}$ be a vertex in $\operatorname{core}(i_{1})\cap V_{j}$, and $u_{2}$ be a vertex in $\operatorname{core}(i_{2})\cap V_{j}$.
Then $\|\bar{W}_{i_{1}}-\bar{u}_{1}\|<\rho$ and $\|\bar{W}_{i_{2}}-\bar{u}_{2}\|<\rho$. Since $u_{1},u_{2}\in V_{j}$, vertices $u_{1}$ and $u_{2}$
have a common neighbor $u$ in the auxiliary graph $G_{aux}=(V,E_{aux})$, and, therefore, $\|\bar{u}_{1}-\bar{u}_{2}\|<4\rho$.
We get that
$$\|\bar{W}_{i_{1}}-\bar{W}_{i_{2}}\|\leq\|\bar{W}_{i_{1}}-\bar{u}_{1}\|+\|\bar{W}_{i_{2}}-\bar{u}_{2}\|+\|\bar{u}_{1}-\bar{u}_{2}\|<6\rho=\Delta,$$
which is impossible since ${\cal S}$ is a well-separated set of clusters.
We now construct a partial matching $\sigma$ between clusters $V_{i}^{*}$ and $V_{j}$. We match every cluster $V_{i}^{*}\in{\cal S}$
with the first cluster $V_{j}$ that intersects $\operatorname{core}(i)$ (then we let $\sigma(i)=j$).
Since each vertex belongs to some $V_{j}$, we necessarily match every $V_{i}^{*}\in{\cal S}$ with some $V_{j}$. Moreover,
we cannot match distinct clusters $V_{i_{1}}^{*}$ and $V_{i_{2}}^{*}$ with the same $V_{j}$ because $V_{j}$ cannot intersect both cores
$\operatorname{core}(i_{1})$ and $\operatorname{core}(i_{2})$.
Let $Y=\bigcup_{V_{i}^{*}\in{\cal S}}\operatorname{core}(i)$ and $Z=V\setminus Y$.
By Lemmas 4.4 and 4.5,
$$|Z|\leq\Bigl{|}\bigcup_{i}V_{i}^{*}\setminus\operatorname{core}(i)\Bigr{|}+\Bigl{|}\bigcup_{V_{i}^{*}\notin{\cal S}}V_{i}^{*}\Bigr{|}\leq\left(\frac{1}{\rho^{2}}+\frac{6}{2-\Delta^{2}}\right)\alpha kn<36\alpha kn.$$
Consider a cluster $V_{i}^{*}$ and the matching cluster $V_{j}$.
As we proved, $V_{j}$ does not intersect $\operatorname{core}(i^{\prime})$ for any $V_{i^{\prime}}^{*}\in{\cal S}$ other than $V_{i}^{*}$.
Therefore, $V_{j}\subset\operatorname{core}(i)\cup Z$.
We now show that
$$|V_{i}^{*}\cap V_{j}|\geq|\operatorname{core}(i)|-|Z\cap V_{j}|.$$
Observe that every two vertices $v_{1},v_{2}\in\operatorname{core}(i)$ are connected with an edge in $E_{aux}$ since
$$\|\bar{v}_{1}-\bar{v}_{2}\|\leq\|\bar{v}_{1}-\bar{W}_{i}\|+\|\bar{v}_{2}-\bar{W}_{i}\|<2\rho.$$
In particular, every vertex $v\in\operatorname{core}(i)$ has degree at least $|\operatorname{core}(i)|$ in $G_{aux}[V\setminus(V_{1}\cup\dots\cup V_{j-1})]$.
Let $u$ be the vertex that we chose in iteration $j$. Since $u$ is a vertex of maximum degree in $G_{aux}[V\setminus(V_{1}\cup\dots\cup V_{j-1})]$,
it must have degree at least $|\operatorname{core}(i)|$. Now, either $V_{j}$ consists of all neighbors of $u$ in $G_{aux}[V\setminus(V_{1}\cup\dots\cup V_{j-1})]$, and then $|V_{j}|\geq|\operatorname{core}(i)|$; or we removed some vertices from $V_{j}$ because it contained more than $n$ vertices, and then $|V_{j}|=n\geq|\operatorname{core}(i)|$. In either case, $|V_{j}|\geq|\operatorname{core}(i)|$. We have,
$$|V_{i}^{*}\cap V_{j}|\geq|\operatorname{core}(i)\cap V_{j}|=|V_{j}|-|V_{j}\setminus\operatorname{core}(i)|=|V_{j}|-|V_{j}\cap Z|\geq|\operatorname{core}(i)|-|V_{j}\cap Z|.$$
Finally, using that all sets $V_{j}\cap Z$ are disjoint, we get
$$\sum_{j=\sigma(i)}|V_{i}^{*}\cap V_{j}|\geq\Bigl(\sum_{V_{i}^{*}\in{\cal S}}|\operatorname{core}(i)|\Bigr)-|Z|=|Y|-|Z|=|V|-2|Z|\geq(1-72\alpha)kn.$$
∎
Lemma 4.8.
There is a linear-time algorithm that given a partition $V_{1},\dots,V_{k^{\prime}}$ of $V$ that is $\delta$-close to the planted partition in the weak sense,
outputs a partition $V_{1}^{\prime},\dots,V_{k}^{\prime}$ that is $(2\delta)$-close to the planted partition in the strong sense.
Proof.
We choose the $k$ largest clusters among $V_{1},\dots,V_{k^{\prime}}$. Let $V_{1}^{\prime},\dots,V_{k}^{\prime}$ be these clusters. Then we distribute, in an arbitrary way, all vertices from the other
clusters among $V_{1}^{\prime},\dots,V_{k}^{\prime}$ so that each of the clusters $V_{i}^{\prime}$ contains exactly $n$ vertices.
We now show that the partition $V_{1}^{\prime},\dots,V_{k}^{\prime}$ is $(2\delta)$-close to the planted partition in the strong sense. We may assume without loss of generality that
we chose clusters $V_{1},\dots,V_{k}$ and that $V_{i}^{\prime}$ consists of $V_{i}$ and some vertices from clusters $V_{j}$ with $j>k$.
Let $\sigma$ be the partial matching between clusters $V_{i}^{*}$ and $V_{j}$ (from the definition of $\delta$-closeness).
We first let $\sigma^{\prime}(i)=\sigma(i)$ if $\sigma(i)$ is defined and $\sigma(i)\leq k$. We get a partially defined permutation on $\{1,\dots,k\}$.
Then we extend $\sigma^{\prime}$ to a permutation defined everywhere in an arbitrary way.
Write,
$$\Bigl{|}\bigcup_{j=\sigma^{\prime}(i)}V_{i}^{*}\cap V_{j}^{\prime}\Bigr{|}\geq\Bigl{|}\bigcup_{j=\sigma(i)\leq k}V_{i}^{*}\cap V_{j}\Bigr{|}=\Bigl{|}\bigcup_{j=\sigma(i)}V_{i}^{*}\cap V_{j}\Bigr{|}-\Bigl{|}\bigcup_{j=\sigma(i)\in\{k+1,\dots,k^{\prime}\}}V_{i}^{*}\cap V_{j}\Bigr{|}\geq(1-\delta)kn-\Bigl{|}\bigcup_{j=\sigma(i)\in\{k+1,\dots,k^{\prime}\}}V_{j}\Bigr{|}.$$
Let
$$J_{1}=\{j\in\{k+1,\dots,k^{\prime}\}:j=\sigma(i)\text{ for some }i\}\,,\qquad J_{2}=\{j\in\{1,\dots,k\}:j\neq\sigma(i)\text{ for every }i\}.$$
Since $\sigma$ takes at most $k$ values, $|J_{1}|\leq|J_{2}|$. Also, $|V_{j_{1}}|\leq|V_{j_{2}}|$ for every $j_{1}\in J_{1}$ and $j_{2}\in J_{2}$ by our choice of $V_{1},\dots,V_{k}$.
Therefore,
$$\Bigl{|}\bigcup_{j\in J_{1}}V_{j}\Bigr{|}\leq\Bigl{|}\bigcup_{j\in J_{2}}V_{j}\Bigr{|}\leq\Bigl{|}\bigcup_{V_{j}\text{ is not matched}}V_{j}\Bigr{|}\leq\delta nk.$$
We conclude that
$$\Bigl{|}\bigcup_{j=\sigma^{\prime}(i)}V_{i}^{*}\cap V_{j}^{\prime}\Bigr{|}\geq(1-2\delta)nk.$$
∎
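The redistribution step of Lemma 4.8 can be sketched as follows (plain Python; the function name is ours, and it assumes the clusters together contain exactly $kn$ vertices).

```python
def balance_partition(clusters, k, n):
    """Keep the k largest clusters and redistribute the remaining vertices so
    that every kept cluster has exactly n vertices (sketch of Lemma 4.8).

    clusters: list of lists of vertices, containing k*n vertices in total.
    """
    clusters = sorted(clusters, key=len, reverse=True)
    kept = [list(c) for c in clusters[:k]]
    leftover = [u for c in clusters[k:] for u in c]
    # Trim oversized kept clusters, pushing the surplus into `leftover`.
    for c in kept:
        while len(c) > n:
            leftover.append(c.pop())
    # Fill undersized clusters from `leftover` in an arbitrary way.
    for c in kept:
        while len(c) < n:
            c.append(leftover.pop())
    return kept
```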
Now we are ready to prove Theorem 1.4.
Proof of Theorem 1.4.
We solve the SDP relaxation. Consider the parameter $\alpha$, which is defined by (3.1). From Lemma 3.1, we get
that $\alpha$ satisfies bounds (3.4) and (3.5) with probability at least $1-3\exp(-2N)$ and $1-3\exp\left(-\eta m(1-\tfrac{2N}{\eta m})\right)$, respectively.
Now we run the Recovery Algorithm and find a partition $(V_{1},\dots,V_{k^{\prime}})$. By Theorem 4.7, it is $(72\alpha)$-close to the planted partition in the weak sense. Finally, using the algorithm from Lemma 4.8, we transform this partition into the desired partition $V_{1}^{\prime},\dots,V_{k}^{\prime}$, which is $(144\alpha)$-close to the planted partition.
∎
5 Second Algorithm
In this section, we present our second algorithm and prove Theorem 5.1. Theorem 1.5 follows immediately from Theorem 5.1.
Theorem 5.1.
Suppose that there is a polynomial-time algorithm $\cal A$ that given an instance of SBM($n$, $k$, $a/2$, $b/2$) with $\varepsilon m$ outliers finds a partition $V_{1}$, …, $V_{k}$ that is $1/(10k)$-close to the planted partition (in the strong sense) with probability at least $1-\tau$.
There is a randomized polynomial-time algorithm that given
an instance of SBM($n$, $k$, $a$, $b$) with $\varepsilon m$ outliers finds a partition $U_{1},\dots,U_{k}$
that is $\delta$-close to the planted partition (in the strong sense), where
$$\delta=4\delta_{0}+\frac{80\varepsilon m}{(a-b)kn}$$
and
$$\delta_{0}=ke^{-\frac{(a-b)^{2}}{100a}}.$$
The algorithm succeeds with probability at least $1-\tau-\exp(-\delta_{0}kn/4)$.
Proof.
Recall that in the stochastic-block model with outliers we generate the set of edges $E$ in two steps. First, we generate
a random set of edges $E^{\prime}=E_{sb}$. Then, the adversary adds and removes some edges from $E^{\prime}$, and we obtain the set of edges $E$.
Let us partition all edges in $E^{\prime}$ and $E$ into two groups. To this end, independently color all edges of $E\cup E^{\prime}$ in two colors $1$ and $2$ uniformly at random.
Let $E_{1}$ and $E_{2}$ be the subsets of edges in $E$ colored in 1 and 2, respectively; similarly, let $E_{1}^{\prime}$ and $E_{2}^{\prime}$ be the subsets of edges in $E^{\prime}$ colored in 1 and 2.
Denote $E^{\Delta}_{i}=E_{i}\,\Delta\,E_{i}^{\prime}$ for $i\in\{1,2\}$. Note that $(V,E_{1})$ is an instance of SBM($n$, $k$, $a/2$, $b/2$) with $\varepsilon m$ outliers.
Given the graph $G=(V,E)$, we generate the sets of edges $E_{1}$ and $E_{2}$ (it is important that, to do so, we do not need to know $E^{\prime}$). We first use edges in $E_{1}$
to find a partition that is $1/(10k)$-close to the planted partition. To this end,
we run algorithm $\cal A$ on $(V,E_{1})$ and obtain a partition $V_{1},\dots,V_{k}$ of $V$.
Now we use edges from $E_{2}$ to find a partition that is $\delta$-close to the planted partition.
We do this in two steps. First, we define a partition $U_{1}^{0},\dots,U_{k}^{0}$,
which is close to the planted partition but not necessarily balanced – some sets $U_{i}^{0}$ may contain more than $n$ vertices.
Then we transform $U_{1}^{0},\dots,U_{k}^{0}$ to a balanced partition $U_{1},\dots,U_{k}$.
Let us start with defining the partition $U_{1}^{0},\dots,U_{k}^{0}$.
For technical reasons (to ensure that certain events that we consider below are independent),
it will be convenient to partition each set $V_{i}$ into two sets $V_{i}^{L}$ and $V_{i}^{R}$ containing $n/2$ vertices each
(we assume that $n$ is even; otherwise we can take sets of sizes $(n-1)/2$ and $(n+1)/2$). Denote $n^{\prime}=|V_{i}^{L}|=|V_{i}^{R}|=n/2$.
Let $V^{L}=\bigcup_{i}V_{i}^{L}$ and $V^{R}=\bigcup_{i}V_{i}^{R}$.
For every vertex $u\in V^{L}$, we count the number of its neighbors w.r.t. edges in $E_{2}$ in each of the sets
$V_{1}^{R},\dots,V_{k}^{R}$. We find the set $V_{i}^{R}$ that has most neighbors of $u$ and add $u$ to $U_{i}^{0}$ (we break ties arbitrarily).
Similarly, for every vertex $u\in V^{R}$, we count the number of its neighbors w.r.t. edges in $E_{2}$ in each of the sets
$V_{1}^{L},\dots,V_{k}^{L}$, find the set $V_{i}^{L}$ that has most neighbors of $u$, and add $u$ to $U_{i}^{0}$.
We obtain a partition $U_{1}^{0},\dots,U_{k}^{0}$.
Now we make sure that all clusters have the same size. To this end, we redistribute vertices from clusters of size greater than $n$
among other clusters so that each cluster has size $n$. Formally, we first let $U_{i}^{1}=U_{i}^{1}$ if $|U_{i}^{0}|\leq n$,
and let $U_{i}^{1}$ be an arbitrary subset of $n$ vertices of $U_{i}^{0}$ if $|U_{i}^{0}|>n$.
Then we arbitrarily assign all remaining vertices (i.e., vertices from $\bigcup_{i}U_{i}^{0}\setminus U_{i}^{1}$) among all clusters so that each cluster contains exactly $n$ vertices.
We obtain a partition $U_{1},\dots,U_{k}$.
Let us analyze this algorithm. We may assume without loss of generality that the matching between the partition $V_{1},\dots,V_{k}$ and the planted
partition is given by the identity permutation. Then
$$\sum_{i=1}^{k}|V_{i}^{*}\cap V_{i}|\geq nk(1-1/(10k))=nk-n/10.$$
In particular, for every cluster $V_{i}$, we have
$$\displaystyle|V_{i}\cap V_{i}^{*}|$$
$$\displaystyle\geq 9n/10,$$
$$\displaystyle|V_{i}\cap V_{j}^{*}|$$
$$\displaystyle\leq n/10\qquad\text{for every }j\neq i.$$
Also, for every set $V_{i}^{R}$ (and similarly for every set $V_{i}^{L}$), we have
$$\displaystyle|V_{i}^{R}\cap V_{i}^{*}|$$
$$\displaystyle\geq 9n/10-n/2=4n^{\prime}/5,$$
(5.1)
$$\displaystyle|V_{i}^{R}\cap V_{j}^{*}|$$
$$\displaystyle\leq n/10=n^{\prime}/5\qquad\text{for every }j\neq i.$$
(5.2)
Let us say that a vertex $u$ is corrupted if it is incident on at least $T=(a-b)/20$ edges in $E_{2}^{\Delta}$.
Claim 5.2.
The total number of corrupted edges is at most $2\varepsilon m/T$.
Proof.
Each edge in $E_{2}^{\Delta}$ is incident to at most two corrupted vertices. The total number of edges in $E_{2}^{\Delta}$ is at most $\varepsilon m$.
Therefore, the number of corrupted vertices is at most $2\varepsilon m/T$.
∎
Consider a vertex $u\in V_{i}^{*}$. Assume that it is not corrupted. We are going to show that $u\in U_{i}^{0}$ with probability at least $1-ke^{-\frac{(a-b)^{2}}{100a}}$.
We assume without loss of generality that $u\in V^{L}$.
Let random variable $Z_{j}$ be the number of neighbors of $u$ in $V_{j}^{R}$ w.r.t. edges in $E_{2}^{\prime}$.
Consider the event ${\cal E}_{u}$ that $Z_{i}\geq(3a+2b)/20$ and $Z_{j}\geq(2a+3b)/20$ for every $j\neq i$.
We will prove now that if ${\cal E}_{u}$ happens then $u\in U_{i}^{0}$. After that we will show that the probability that ${\cal E}_{u}$ does not happen is exponentially small.
Assume to the contrary that ${\cal E}_{u}$ happens but $u\in U_{j}^{0}$ for some $j\neq i$. Then $u$ has at least as many neighbors in $V_{j}^{R}$ as in $V_{i}^{R}$.
Let $A_{+}$ be the number of edges $e\in E_{2}\setminus E_{2}^{\prime}$ from $u$ to vertices in $V_{j}$ (edges added by the adversary); and
$A_{-}$ be the number of edges in $e\in E_{2}^{\prime}\setminus E_{2}$ from $u$ to $V_{i}$ (edges removed by the adversary).
Then $A_{+}+A_{-}<T$ since $u$ is not corrupted.
Observe that there are at most $Z_{j}+A_{+}$ edges $e\in E_{2}$ from $u$ to $V_{j}^{R}$; there are at least $Z_{i}-A_{-}$ edges $e\in E_{2}$ from $u$ to $V_{i}^{R}$.
Therefore,
$Z_{j}+A_{+}\geq Z_{i}-A_{-}$, and hence (using that event ${\cal E}_{u}$ happens)
$$T>A_{+}+A_{-}\geq Z_{i}-Z_{j}\geq\frac{3a+2b}{20}-\frac{2a+3b}{20}=\frac{a-b}{%
20}=T,$$
we get a contradiction.
We use the Bernstein inequality to upper bound the probability that ${\cal E}_{u}$ does not happen.
Note that for every $j$ (including $j=i$), $u$ is connected to every vertex in $V_{j}^{R}\cap V_{i}^{*}$ by an edge in $E_{2}^{\prime}$ with
probability $a/(2n)$; $u$ is connected to every vertex in $V_{j}^{R}\setminus V_{i}^{*}$ by an edge in $E_{2}^{\prime}$ with
probability $b/(2n)$. Using bound (5.1), we get that the expected number
of neighbors of $u$ in $V_{i}^{R}$ is at least $(4a+b)/10$. That is, $\operatorname*{\mathbb{E}}[Z_{i}]\geq(4a+b)/10$. By the Bernstein inequality,
$$\operatorname*{\mathbb{P}}[Z_{i}<(3a+2b)/10]\leq e^{-\frac{(a-b)^{2}/100}{2((4%
a+b)/10+(a-b)/30)}}\leq e^{-\frac{(a-b)^{2}}{100a}}.$$
Similarly, using bound (5.2), we get that for every $j\neq i$, $\operatorname*{\mathbb{E}}[Z_{j}]\leq(a+4b)/5$. By the Bernstein inequality,
$$\operatorname*{\mathbb{P}}[Z_{j}>(2a+3b)/10]\leq e^{-\frac{(a-b)^{2}/100}{2((a%
+4b)/10+(a-b)/30)}}\leq e^{-\frac{(a-b)^{2}}{100a}}.$$
By the union bound, $\operatorname*{\mathbb{P}}[{{\cal E}_{u}}]\geq 1-ke^{-\frac{(a-b)^{2}}{100a}}$.
Let $\delta_{0}=ke^{-\frac{(a-b)^{2}}{100a}}$. We proved that for every $u\in V_{i}^{*}$, $\operatorname*{\mathbb{P}}[u\in U^{0}_{i}]\geq 1-\delta_{0}$.
Let $B_{L}$ be the number of vertices in $V_{L}$ such that ${\cal E}_{u}$ does not happen, and
$B_{R}$ be the number of vertices in $V_{R}$ such that ${\cal E}_{u}$ does not happen.
Note that $\operatorname*{\mathbb{E}}[B_{L}]\leq\delta_{0}kn^{\prime}$. Also all events ${\cal E}_{u}$ with $u\in V_{L}$ are independent since each event ${\cal E}_{u}$ depends only on the subset of edges
of $E_{2}$ that goes from $u$ to $V_{R}$. Therefore, by the Chernoff bound
$$\operatorname*{\mathbb{P}}[B_{L}\geq 2\delta_{0}kn^{\prime}]<e^{-\delta_{0}kn^%
{\prime}/2}=e^{-\delta_{0}kn/4}.$$
Similarly, $\operatorname*{\mathbb{P}}[B_{R}\geq 2\delta_{0}kn^{\prime}]<e^{-\delta_{0}kn/4}$, and $\operatorname*{\mathbb{P}}[B_{L}+B_{R}\geq 2\delta_{0}kn]<2e^{-\delta_{0}kn/4}$.
Assume now that $\operatorname*{\mathbb{P}}[B_{L}+B_{R}<2\delta_{0}kn]$. Then
$$\operatorname*{\mathbb{E}}\Bigl{[}\sum_{i=1}^{k}|V_{i}^{*}\cap U_{i}^{0}|\Bigr%
{]}\geq(1-2\delta_{0})kn-40\varepsilon m/(a-b)=(1-\delta/2)kn.$$
here, $40\varepsilon m/(a-b)$ is the upper bound on the number of corrupted vertices from Claim 5.2, and $\delta=4\delta_{0}+\frac{80\varepsilon m}{(a-b)kn}$ as in the statement of the theorem.
Now,
$$\sum_{i=1}^{k}|V_{i}^{*}\cap U_{i}|\geq\sum_{i=1}^{k}|V_{i}^{*}\cap U_{i}^{1}|%
\geq\sum_{i=1}^{k}|V_{i}^{*}\cap U_{i}^{0}|-\sum_{i:|U_{i}^{0}|>n}(|U_{i}^{0}|%
-n)\geq(1-\delta/2)kn-(\delta/2)kn=(1-\delta)kn.$$
We proved that $U_{1},\dots,U_{k}$ is $\delta$-close to the planted partition, when algorithm $\cal A$ succeeds and $B_{L}+B_{R}<2\delta_{0}kn$; that is,
with probability at least $1-\tau-\exp(-\delta_{0}kn/4)$.
∎
Now we present the proof of Theorem 1.5.
Proof of Theorem 1.5.
Observe that under our assumption that
$$\frac{\sqrt{a+b(k-1)}}{a-b}+\frac{\varepsilon\left(a+b(k-1)\right)}{a-b}\leq c%
/k,$$
our first algorithm finds a partition that is $1/(10k)$-close to the planted partition given an instance of $\text{SBM}(n,k,a/2,b/2)$.
Hence, we can apply Theorem 5.1 and get a partition that is $\delta$-close to the planted partition, as desired.
∎
6 KL-divergence
Proof of Theorem 1.6.
Theorem 1.6 almost immediately follows from Theorem 1.4 and Lemma 6.1.
Lemma 6.1.
Consider any two distributions $P,Q$ over the same sample space $\Omega$. For every event ${\cal E}\subset\Omega$, we have
$$Q({\cal E})\leq\max\Big{(}\frac{2d_{\text{KL}}\left(Q,P\right)}{-\log P({\cal E%
})},\sqrt{P({\cal E})}\Big{)},$$
(6.1)
where $d_{KL}(Q,P)$ is the Kullback–Leibler divergence of $P$ from $Q$.
Consider the worst adversary $A$ for
the algorithm from Theorem 1.4 – that is, the adversary for which the algorithm succeeds to recover a $(1-\delta)$
fraction of vertices with the smallest probability. The adversary takes the graph $G\sim{\cal G}$
and transforms it to $A(G)$. Without loss of generality we may assume that the adversary is deterministic.
Let ${\cal E}$ be the set of graphs $G$ for which the algorithm fails to recover $\delta$ fraction of vertices on the corrupted graph $A(G)$. Recollect that we set $\eta=\max\left\{4N/m,\eta\right\}$.
By Theorem 1.4, the probability of ${\cal E}$ in the Stochastic Block Model distribution is at most $3\exp\big{(}-\eta m(1-\tfrac{2N}{\eta m})\big{)}$.
Thus, by Lemma 6.1, the probability of ${\cal E}$ in the distribution of ${\cal G}$ is bounded
as
$$\delta\leq\frac{2\lambda m}{\eta m(1-\tfrac{2N}{\eta m})}\leq\frac{4\lambda}{%
\eta}.$$
Further since $\eta\geq 4N/m$, we get from Theorem 1.4 the required bound on $\delta$.
∎
We now prove Lemma 6.1.
Proof of Lemma 6.1.
By the definition, KL divergence equals
$$d_{\text{KL}}\left(Q,P\right)=\sum_{\sigma\in\Omega}Q(\sigma)\log\left(\frac{Q%
(\sigma)}{P(\sigma)}\right)\geq\sum_{\sigma\in{\cal E}}Q(\sigma)\log\left(%
\frac{Q(\sigma)}{P(\sigma)}\right).$$
Let $P^{\prime}(\sigma)=P(\sigma)/P({\cal E})$ be the conditional distribution of $\sigma\in{\cal E}$ given ${\cal E}$. We write the right
hand side of the equation above as follows:
$$\sum_{\sigma\in{\cal E}}Q(\sigma)\log\left(\frac{Q(\sigma)}{P(\sigma)}\right)=%
P(E)\times\sum_{\sigma\in{\cal E}}\frac{P(\sigma)}{P(E)}\cdot\frac{Q(\sigma)}{%
P(\sigma)}\log\left(\frac{Q(\sigma)}{P(\sigma)}\right)=P(E)\times\operatorname%
*{\mathbb{E}}_{\sigma\sim P^{\prime}}\Big{[}\frac{Q(\sigma)}{P(\sigma)}\log%
\left(\frac{Q(\sigma)}{P(\sigma)}\right)\Big{]}.$$
Applying Jensen’s inequality to the convex function $f(x)=x\log x$, we get
$$d_{\text{KL}}\left(Q,P\right)\geq P({\cal E})\times\operatorname*{\mathbb{E}}_%
{\sigma\sim P^{\prime}}\Big{[}\frac{Q(\sigma)}{P(\sigma)}\log\Big{(}\frac{Q(%
\sigma)}{P(\sigma)}\Big{)}\Big{]}\geq P({\cal E})\times\operatorname*{\mathbb{%
E}}_{\sigma\sim P^{\prime}}\Big{[}\frac{Q(\sigma)}{P(\sigma)}\Big{]}\log\Big{(%
}\operatorname*{\mathbb{E}}_{P^{\prime}}\Big{[}\frac{Q(\sigma)}{P(\sigma)}\Big%
{]}\Big{)}.$$
(6.2)
Observe, that
$$\operatorname*{\mathbb{E}}_{\sigma\sim P^{\prime}}\Big{[}\frac{Q(\sigma)}{P(%
\sigma)}\Big{]}=\sum_{\sigma\in{\cal E}}\frac{Q(\sigma)}{P(\sigma)}\cdot\frac{%
P(\sigma)}{P({\cal E})}=\frac{Q({\cal E})}{P({\cal E})}.$$
Thus, inequality (6.2) implies
$$Q({\cal E})\log\left(\frac{Q({\cal E})}{P({\cal E})}\right)\leq d_{\text{KL}}%
\left(Q,P\right).$$
We have that either
$Q({\cal E})\leq\sqrt{P({\cal E})}$, or $Q({\cal E})/P({\cal E})\geq 1/\sqrt{P({\cal E})}$, and, consequently,
$$Q({\cal E})\leq\frac{2d_{\text{KL}}\left(Q,P\right)}{-\log P({\cal E})}.$$
∎
7 Lower Bounds
In this section we give lower bounds on the partial recovery in the model with two communities. We show that it is not possible to recover a $\delta$ fraction of all vertices in the pure Stochastic Block Model if
$$(a-b)<C\sqrt{(a+b)\ln 1/\delta},$$
(7.1)
for some constant $C$, and it is not possible to recover a $\delta$ fraction
of all vertices in the Stochastic Block Model with Outliers (where the adversary is allowed to add at most $\varepsilon(a+b)n$ edges) if
$$(a-b)<C\varepsilon\delta^{-1}(a+b).$$
(7.2)
We note that very recently [ZZ15] shows a lower bound with a dependence similar to (7.1).
For simplicity of exposition we slightly alter the Stochastic Block Model. We consider graphs with parallel edges. The
number of edges between two vertices $u$ and $v$ in the new model is not a Bernoulli random variable with parameter $a/n$ or $b/n$ as in the standard
Stochastic Block Model, but a
Poisson random variable with parameter $a/n$ or $b/n$. Note that recovering partitions in the Poisson model with very slightly
modified parameters $a^{\prime}=n\ln(1-a/n)$ and $b^{\prime}=n\ln(1-b/n)$, is not harder
than in the Bernoulli model, since the algorithm may simply replace parallel edges with single edges
and obtain a graph from the standard Stochastic Block Model.
Before proceeding to the formal proofs, we informally discuss why these bounds hold. Consider two vertices $u$ and $v$ lying in the opposite
clusters. Suppose we give the algorithm not only the graph $G$, but also the correct clustering of all vertices but $u$ and $v$. The algorithm needs now to decide
where to put $u$ and $v$. It turns out that the only useful information the algorithm has about $u$ and $v$ are the four numbers – the number of neighbors $u$ and $v$
have in the left and right clusters. These numbers are distributed according to the Poisson distribution with parameters $a$ and $b$. So the algorithm
is really given four numbers: two numbers $X_{1},Y_{1}$ for vertex $u$ and two numbers $Y_{2},X_{2}$ for vertex $v$. The algorithm needs to
decide whether
(a)
$X_{1}$ and $X_{2}$ have the Poisson distribution with parameter $a$, and
$Y_{1}$ and $Y_{2}$ have the Poisson distribution with parameter $b$; or
(b)
$X_{1}$ and $X_{2}$ have the Poisson distribution with parameter $b$, and
$Y_{1}$ and $Y_{2}$ have the Poisson distribution with parameter $a$.
We show in Corollary 7.10 that no test distinguishes (a) from (b) with error probability less than $\delta$ given
by the bound (7.1). This implies (7.1).
To prove the bound (7.2), we first specify what the adversary does in the model with outlier edges (noise). It
picks $\delta n$ fraction of all vertices on the left side and on the right side. For each chosen vertex, it adds approximately $(a-b)$
extra edges going to the opposite side. After that every chosen vertex has the same distribution of edges going to the opposite cluster as to its own cluster.
Hence, the chosen vertices on the left side and chosen vertices on the right side are statistically indistinguishable.
To add $(a-b)$ extra edges to every chosen vertex, the adversary needs $2(a-b)n$ edges, but
he has a budget of $\Theta(\varepsilon(a+b))$ edges. This gives the bound (7.2).
In the rest of the section, we transform the ideas outlined above into formal proof of the following theorem.
Theorem 7.1.
It is statistically impossible to recover more than $\delta$ fraction of all vertices if the bound 7.1 holds in the Stochastic Block Model,
and if the bound 7.2 holds in the Stochastic Block Model with Outliers, where the adversary can add at most $O(\varepsilon(a+b)n)$ edges.
The constant $C$ is a universal constant.
7.1 Adversary in the SBM with Outliers
We first describe the adversary for generating graphs in the SBM with Outlier edges. The adversary fixes two sets
$L^{\prime}\subset L$ and $R^{\prime}\subset R$ in the left and right clusters of size $\rho n$ each for $\rho=\Theta(\varepsilon(a+b)/(a-b))$.
Let $L^{\prime\prime}=L\setminus L$ and $R^{\prime\prime}=R\setminus R^{\prime}$. The adversary counts the number of edges going from
$L^{\prime}$ to $R^{\prime\prime}$, and the number of edges
going from $R^{\prime}$ to $R^{\prime\prime}$. Denote these numbers by $Z_{L^{\prime}}$ and $Z_{R^{\prime}}$ respectively.
Then, the adversary independently computes two numbers $\kappa_{L^{\prime}}=\kappa(Z_{L^{\prime}})$ and
$\kappa_{R^{\prime}}=\kappa(Z_{R^{\prime}})$ using a random function $\hat{\kappa}$ we describe in a moment.
He adds $\kappa_{L^{\prime}}$ edges between $L^{\prime}$ and $R^{\prime\prime}$
and $\kappa_{R^{\prime}}$ edges between $R^{\prime}$ and $L^{\prime\prime}$. He adds edges one by one
every time adding one edge between a random vertex in $L^{\prime}$ and a random vertex in $R^{\prime\prime}$ or between a random vertex in $R^{\prime}$ and a random vertex in $L^{\prime\prime}$.
Denote $M=\rho(1-\rho)n$. In Corollary 7.13, we show that there exists a function $\hat{\kappa}$ upper bounded by
$(a-b)M$
such that the total variation distance between $P_{1}$ and $\hat{\kappa}(P_{2})$ is at most $1/2$, where $P_{1}$ and $P_{2}$
are Poisson random variables with parameters $aM$ and $bM$.
The adversary uses this function $\hat{\kappa}$. Note that he adds at most
$$4(a-b)M=4(a-b)\rho(1-\rho)n\leq 4(a-b)\rho n=\Theta(\varepsilon(a+b)n),$$
edges.
7.2 Restricted Partitioning
Let us partition the sets $L$ and $R$ into two sets each: $L=L^{\prime}\cup L^{\prime\prime}$ and $R=R^{\prime}\cup R^{\prime\prime}$. Consider the following classification task:
The classifier gets the graph $G$ generated according to the Stochastic Graph Model (with or without the adversary) and the sets
$L^{\prime}$, $R^{\prime}$, $L^{\prime\prime}$ and $R^{\prime\prime}$. We specify that $L^{\prime\prime}\subset L$ and $R^{\prime\prime}\subset R$. However, we swap the order of $L^{\prime}$ and $R^{\prime}$ with probability $1/2$. Thus the classifier does not know whether $L^{\prime}\subset L$ or $L^{\prime}\subset R$ and whether $R^{\prime}\subset L$ or $R^{\prime}\subset R$.
Its goal is to guess whether $L^{\prime}\subset L$ or $L^{\prime}\subset R$ and, consequently, whether
$R^{\prime}\subset L$ or $R^{\prime}\subset R$. We call this classifier a restricted classifier.
Lemma 7.2 (Restricted Classifier for pure Stochastic Block Model).
If there exists a procedure that recovers partitions in the pure Stochastic Block Model with accuracy at least $1-\delta$,
then there exists a restricted classifier (as above) for sets $L^{\prime}=\{u\}$, $R^{\prime}=\{v\}$, $L^{\prime\prime}=L\setminus\{u\}$ and $R^{\prime\prime}=\setminus\{v\}$
that errs with probability at most $2\delta+1/n$.
Proof.
The classifier works as follows. It executes the recovery procedure for the input graph $G=(V,E)$ and gets two sets $S^{*}$ and $T^{*}$.
It picks at random $w^{\prime}\in\{u,v\}$ and $w^{\prime\prime}\in L^{\prime\prime}$. Now if $w^{\prime}$ and $w^{\prime\prime}$ lie in the same set $S^{*}$ or $T^{*}$, then the algorithm
returns “$w^{\prime}\in L$”, otherwise it returns “$w^{\prime}\in R$”.
What is the error probability of this classifier?
Since the distribution of graphs in the Stochastic Block Model is invariant under permutation of vertices in $L$ and in $R$, the error probability
will not change if we alter the process as follows: the classifier first runs the recovery procedure, then we pick two random vertices $u\in L$ and $v\in R$ and give these vertices
to the classifier. Note that the classifier does not need $u$ and $v$ to run the recovery procedure. Let us compute the error probability. Suppose that the recovery procedure misclassified $\delta^{*}$ fraction of all vertices, and say $S^{*}$ corresponds to $L$ i.e. $|S^{*}\cap L|=(1-\delta^{*})n$. If the algorithm picks $w^{\prime}=u\in L$, then the probability that $w^{\prime},w^{\prime\prime}\in S^{*}$ equals
$(1-\delta^{*})((1-\delta^{*})n-1)/n\geq 1-2\delta^{*}+1/n$. Similarly, if $w^{\prime}=v\in R$, then
the probability that $w^{\prime}\in T^{*}$ and $w^{\prime\prime}\in S^{*}$ equals
$(1-\delta^{*})^{2}\geq 1-2\delta^{*}$.
Since the expected value of $\delta^{*}$ is at most $\delta$ we get the desired result.
∎
We now prove a similar lemma for Stochastic Block Model with Outlier edges.
Lemma 7.3 (Restricted Classifier for Stochastic Block Model with Outliers).
If there exists a procedure that recovers partitions in the Stochastic Block Model with Outlier Edges with accuracy at least $1-\delta$,
then there exists a restricted classifier for sets $L^{\prime}$, $R^{\prime}$, $L^{\prime\prime}=L\setminus L^{\prime}$ and $R^{\prime\prime}=\setminus R^{\prime}$
with $|L^{\prime}|=|R^{\prime}|<n/2$ that errs with probability at most $\delta n/|L^{\prime}|$.
Proof.
As before, the classifier executes the recovery procedure for the input graph $G=(V,E)$ and gets two sets $S^{*}$ and $T^{*}$.
Then, the classifier picks sets $W^{\prime}\in\{L^{\prime},R^{\prime}\}$ and $W^{\prime\prime}=\{L^{\prime\prime},R^{\prime\prime}\}$ at random. It also picks random vertices
$w^{\prime}\in W^{\prime}$ and $w^{\prime\prime}\in W^{\prime\prime}$. If $w^{\prime}$ and $w^{\prime\prime}$ lie in the same set
$S^{*}$ or $T^{*}$, the classifier returns “$W^{\prime}$ and $W^{\prime\prime}$ are on the same side of the cut”; otherwise,
it returns “$W^{\prime}$ and $W^{\prime\prime}$ are on different sides of the cut”. Note that the classifier knows whether $W^{\prime\prime}=L^{\prime\prime}$ or $W^{\prime\prime}=R^{\prime\prime}$, and
hence whether $W^{\prime\prime}$ lies on the left or right side of the cut.
Let $\delta^{*}$ be the fraction of misclassified vertices. Further, let $\delta^{\prime}$ be the fraction of misclassified vertices in $L^{\prime}\cup R^{\prime}$; and
$\delta^{\prime\prime}$ be the fraction of misclassified vertices in $L^{\prime\prime}\cup R^{\prime\prime}$. Note that $\delta^{*}=\big{(}\delta^{\prime}(|L^{\prime}|+|R^{\prime}|)+\delta^{\prime%
\prime}(|L^{\prime\prime}|+|R^{\prime\prime}|)\big{)}/(2n)$.
The error probability of the classifier given the partition $S^{*}$ and $T^{*}$ is at most
$$1-(1-\delta^{\prime})(1-\delta^{\prime\prime})\leq\delta_{1}+\delta_{2}\leq%
\frac{2\delta^{*}n}{|L^{\prime}|+|R^{\prime}|}=\frac{\delta^{*}n}{|L^{\prime}|}.$$
The error probability over random choices of the graph is at most $\operatorname*{\mathbb{E}}[\delta^{*}n/|L^{\prime}|]=\delta n/|L^{\prime}|$.
∎
In the next subsection, we argue that, in a way, the only useful information the restricted classifier can use about the graph given the sets $L^{\prime}$, $R^{\prime}$, $L^{\prime\prime}$ and $R^{\prime\prime}$
are the number of edges between sets $L^{\prime}$, $L^{\prime\prime}$, $R^{\prime}$ and $R^{\prime\prime}$.
7.3 Tests for Pairs of Distributions
Let $D_{1}$ and $D_{2}$ be two distributions; and let $D_{Left}=D_{1}\times D_{2}$ and $D_{Right}=D_{2}\times D_{1}$ be the product
distributions – distributions of pairs $(X,Y)$ and $(Y,X)$, where $X$ and $Y$ are independent random variables distributed
as $D_{1}$ and $D_{2}$ respectively. In this section, we consider tests that given two independent
pairs of random variables $(X_{1},Y_{1})$ and $(Y_{2},X_{2})$ distributed according $D_{Left}$ and $D_{Right}$
needs to decide which pair is drawn from $D_{Left}$ and which from $D_{Right}$.
The test gets the pairs as an unordered set $\{(X_{1},Y_{1}),(Y_{2},X_{2})\}$. We show
that the restricted classifier is essentially a test for distributions $D_{1}$ and $D_{2}$,
where $D_{1}$ is the distribution of the total number of edges between $L^{\prime}$ and $L^{\prime\prime}$; $D_{2}$ is
the distribution of the number of edges between $R^{\prime}$ and $R^{\prime\prime}$.
Lemma 7.4.
Consider the Block Stochastic Model with sets $L^{\prime}$, $R^{\prime}$, $L^{\prime\prime}$, $R^{\prime\prime}$ as in Lemma 7.2, or
the Stochastic Model with Outlier edges, with sets $L^{\prime}$, $R^{\prime}$, $L^{\prime\prime}$, $R^{\prime\prime}$ as in Lemma 7.3. When we have outlier edges (noise), we assume
that the adversary behaves as described in Section 7.1 and the sets $L^{\prime}$ and $R^{\prime}$ he chooses are the same sets as above. Let
$D_{1}$ be the distribution of the number of edges between $L^{\prime}$ and $L^{\prime\prime}$, and
$D_{2}$ be the distribution of the number of edges between $L^{\prime}$ and $R^{\prime\prime}$. (Note, that the number of edges between $R^{\prime}$ and $R^{\prime\prime}$ is also distributed as $D_{1}$;
the number of edges between $R^{\prime}$ and $L^{\prime\prime}$ is distributed as $D_{2}$.) Then, if there exists a restricted classifier (see the previous section) with error
probability at most $\delta$, then there exists a test that decides whether $(X_{1},Y_{1})\sim D_{1}\times D_{2}$ or
$(X_{1},Y_{1})\sim D_{2}\times D_{1}$ with error probability at most $\delta$.
Proof.
Suppose we are given a restricted classifier with error probability at most $\delta$. We construct a test for pairs $D_{1}\times D_{2}$ and $D_{2}\times D_{1}$.
The test procedure receives two pairs $(X_{1},Y_{1})$ and $(Y_{2},X_{2})$. Then it generates a graph from the model (the pure SBM, or the one with outlier edges) as follows. It creates four
sets of vertices $A$, $B$, $L^{\prime\prime}$ and $R^{\prime\prime}$. It adds edges to the subgraphs on $A\cup B$ and $L^{\prime\prime}\cup R^{\prime\prime}$ as in the Stochastic Block Model with
planted cuts $(A,B)$ and $(L^{\prime\prime},R^{\prime\prime})$ respectively. Then, it adds $X_{1}$, $Y_{1}$, $X_{2}$, $Y_{2}$ edges
between $A$ and $L^{\prime\prime}$, $A$ and $R^{\prime\prime}$, $B$ and $R^{\prime\prime}$, $B$ and $L^{\prime\prime}$ respectively. These edges are added at random one by one: say, to add an edge
between $A$ and $L^{\prime\prime}$, the test procedure picks a random vertex in $A$ and a random vertex in $L^{\prime\prime}$ and connects these vertices with an edge.
Once the graph is generated, the procedure executes the restricted classifier. If the classifier tells that $A$ and $L^{\prime\prime}$ are on the same side of the cut, the test
returns that $X_{1},X_{2}\sim D_{1}$ and $Y_{1},Y_{2}\sim D_{2}$; otherwise, $X_{1},X_{2}\sim D_{2}$ and $Y_{1},Y_{2}\sim D_{1}$.
We now analyze the tester. We claim that the graph obtained by the procedure above is distributed according to the model
(the pure SBM, or the one with outlier edges), and the planted cut is $(A\cup L^{\prime\prime},B\cup R^{\prime\prime})$ if $X_{1},X_{2}\sim D_{1}$ and $Y_{1},Y_{2}\sim D_{2}$;
the planted cut is $(B\cup L^{\prime\prime},A\cup R)$ if $X_{1},X_{2}\sim D_{2}$ and $Y_{1},Y_{2}\sim D_{1}$. For the proof, assume without loss of generality that
$X_{1},X_{2}\sim D_{1}$ and $Y_{1},Y_{2}\sim D_{2}$. Let $N_{uv}$ be the number of edges between vertices $u$ and $v$. In the pure Stochastic Block Model,
we need to verify that random variables $N_{uv}$ are independent, and $N_{uv}$ has the Poisson distribution with parameter $a$ for $(u,v)\in A\times L^{\prime\prime}$
and $(u,v)\in B\times R^{\prime\prime}$; $N_{uv}$ has the Poisson distribution with parameter $b$ for $(u,v)\in A\times R^{\prime\prime}$
and $(u,v)\in B\times L^{\prime\prime}$. This immediately follows from the following Poisson Thinning Property, since $X_{1}$, $X_{2}$, $Y_{1}$ and $Y_{2}$ have Poisson distributions
with parameters $a|A|\cdot|L^{\prime}|$, $a|B|\cdot|R|^{\prime}$, $b|A|\cdot|L^{\prime}|$, $b|B|\cdot|R|^{\prime}$ respectively.
Fact 7.5.
Suppose we pick a number $P$ according to the Poisson distribution with parameter $\lambda$. Then, we distribute $P$ balls into $m$ bin as follows: We pick balls
one by one and through them into random bins (independently). Then the number of balls in bins are independent and are distributed according to the Poisson
distribution with parameter $\lambda/m$.
When there are outlier edges, we may assume that $Y_{1}=Z_{L^{\prime}}+\hat{\kappa}(Z_{L^{\prime}})$ and $Y_{2}=Z_{R^{\prime}}+\hat{\kappa}(Z_{R^{\prime}})$, where
$Z_{L^{\prime}}$ and $Z_{R^{\prime}}$ are Poisson random variables with parameter $b$ (see Section 7.1). If the test procedure
added $Z_{L^{\prime}}$ and $Z_{R^{\prime}}$ edges between $A$ and $R^{\prime\prime}$ and between $B$ an d$L^{\prime\prime}$, it would get a graph from
the pure Stochastic Block Model with the planted cut $(A\cup L^{\prime\prime},B\cup R^{\prime\prime})$. But adding extra
$\hat{\kappa}(Z_{L^{\prime}})$ and $\hat{\kappa}(Z_{L^{\prime}})$ edges it gets a graph from the SBM with outlier edges.
We showed that if $(A\cup L^{\prime\prime},B\cup R^{\prime\prime})$ is the planted cut, then $X_{1},X_{2}\sim D_{1}$ and $Y_{1},Y_{2}\sim D_{2}$;
if $(B\cup L^{\prime\prime},A\cup R^{\prime\prime})$ is the planted cut, $X_{1},X_{2}\sim D_{2}$ and $Y_{1},Y_{2}\sim D_{1}$. This the restricted classifier outputs
the correct cut with probability $1-\delta$, this test errs also with probability $\delta$.
∎
We will need the following simple lemma.
Lemma 7.6.
Consider two distributions $D_{1}$ and $D_{2}$. Suppose that there exists a joint distribution $D_{12}$ of random variables $X$ and $Y$ such that
$X\sim D_{1}$ and $Y\sim D_{2}$, and
$$\operatorname*{\mathbb{P}}(X=Y)\geq\eta.$$
Then, for any test for pairs of distributions $D_{1}$, $D_{2}$ (see above) errs with probability at lest $\eta^{4}/2$.
Proof.
Consider four independent pairs of random variables $(X_{1},Y_{1})$, $(X_{2},Y_{2})$, $(X_{3},Y_{3})$, and $(X_{4},Y_{4})$.
Each pair $(X_{i},Y_{i})$ is distributed according to $D_{12}$. Let $\zeta$ be the error probability of $T$.
Consider two experiments: In the first experiment we apply the test to the pairs $(X_{1},Y_{2})$ and $(X_{3},Y_{4})$; in the second, we apply the test
to $(Y_{1},X_{2})$ and $(Y_{3},X_{4})$. Observe that the random variables $X_{1}$, $Y_{2}$, $X_{3}$ and $Y_{4}$ are independent;
and $X_{1}\sim D_{1}$, $Y_{2}\sim D_{2}$, $X_{3}\sim D_{1}$, $Y_{4}\sim D_{2}$.
The random variables $Y_{1}$, $X_{2}$, $Y_{3}$ and $X_{4}$ are also independent; but
$Y_{1}\sim D_{2}$, $Y_{2}\sim D_{1}$, $X_{3}\sim D_{2}$, $Y_{4}\sim D_{1}$. So the test should output opposite results in the first and second
experiments. However, with probability at least $\eta^{4}$, we get $X_{1}=Y_{1}$, $X_{2}=Y_{2}$, $X_{3}=Y_{3}$, $X_{4}=Y_{4}$. In this case, the
test returns the incorrect answer either in the first or second experiments.
∎
We now prove Theorem 7.1.
Proof of Theorem 7.1.
By Corollary 7.10, which we prove in the next section, there exists a coupling of two Poisson random variables $P_{1}$, $P_{2}$ with
parameters $a$ and $b$, such that
$$\delta\equiv\operatorname*{\mathbb{P}}(P_{1}=P_{2})\geq C_{1}e^{-\frac{C_{2}(a%
-b)^{2}}{a+b}}$$
for some absolute constants $C_{1}$ and $C_{2}$. By Lemma 7.6, the error
probability of any test for $P_{1}$, $P_{2}$ is at least $\delta$. Since the number of neighbours
of a fixed vertex $u$ on the same side and on the opposite side are distributed as the Poisson distribution with
parameters $a$ and $b$, by Lemma 7.4, we get that
any restricted classifier has error probability at least $\delta$. Finally, by Lemma 7.2,
the expected number of misclassified vertices is at lest $2\delta+O(1/n)=2C_{1}e^{-\frac{C_{2}(a-b)^{2}}{a+b}}+O(1/n)$.
This proves the bound 7.1.
In the model with outlier edges, the total number of edges between the set $L^{\prime}$ and $L^{\prime\prime}$ has the Poisson distributed
with parameter $a|L^{\prime}|\cdot|L\setminus L^{\prime}|=a\rho(1-\rho)n$. The total number of edges between $L^{\prime}$ and $R^{\prime\prime}$
has the same distribution as $P_{1}+\hat{kappa}(P_{1})$, where $P_{1}$ is the Poisson distribution with parameter $b$ (see
Corollary 7.13). By Lemma 7.6 and Corollary 7.13, the error probability
of any test for these two distributions is at lest 1/2. Hence, by Lemma 7.4 and Lemma 7.3,
the expected number of misclassified vertices is at least (see Section 7.1)
$$\delta\geq\frac{|L^{\prime}|}{2n}=\Omega\Big{(}\frac{\varepsilon(a-b)}{a+b}%
\Big{)}.$$
This proves the bound 7.2.
∎
7.4 Poisson Distribution
Fact 7.7 (Median of the Poisson distribution).
For every Poisson random variable $P$ with parameter $\lambda>0$,
$$\operatorname*{\mathbb{P}}(P\geq{\lfloor{\lambda}\rfloor})\geq\frac{1}{2}.$$
Lemma 7.8.
There exists a constant $C>0$ such that for a Poisson random variable with parameter $\lambda\geq 1$ and every $t\geq 1$, the following
inequality holds:
$$\operatorname*{\mathbb{P}}(P\geq\lambda+t\sqrt{\lambda})\geq e^{-Ct^{2}}.$$
Proof.
Let $S^{\prime}=\{k\in\mathbb{Z}^{+}:{\lfloor{\lambda}\rfloor}\leq k<\lambda+t\sqrt%
{\lambda}\}$ and
$S^{\prime\prime}=\{k\in\mathbb{Z}^{+}:k\geq\lambda+t\sqrt{\lambda}\}$. The union $S^{\prime}\cup S^{\prime\prime}$ is the set of all integers greater than ${\lfloor{\lambda}\rfloor}$.
Hence,
$$\operatorname*{\mathbb{P}}(P\in S^{\prime}\cup S^{\prime\prime})=\operatorname%
*{\mathbb{P}}(P\geq{\lfloor{\lambda}\rfloor})\geq 1/2.$$
If $\operatorname*{\mathbb{P}}(P\in S^{\prime})\leq 1/4$, then $\operatorname*{\mathbb{P}}(P\in S^{\prime\prime})\geq 1/4$, and we are done. So we assume that $\operatorname*{\mathbb{P}}(P\in S^{\prime})\geq 1/4$.
Let $\Delta={\lceil{t\sqrt{\lambda}}\rceil}+1$. Notice that $S^{\prime}+\Delta\equiv\{k+\Delta:k\in S^{\prime}\}\subset S^{\prime\prime}$, and, consequently,
$\operatorname*{\mathbb{P}}(P\in S^{\prime\prime})\geq\operatorname*{\mathbb{P}%
}(P\in S^{\prime}+\Delta)$. We lower bound $\operatorname*{\mathbb{P}}(P\in S^{\prime}+\Delta)$ using the following lemma.
Lemma 7.9.
Let $S$ be a subset of natural numbers. Suppose that all elements in $S$ are at most $K$. Then,
$$\frac{\operatorname*{\mathbb{P}}(P\in S)}{\operatorname*{\mathbb{P}}(P\in S+1)%
}\leq 1+\frac{K-\lambda+1}{\lambda},$$
where $P$ is a Poisson random variable with parameter $\lambda$.
Proof.
Write,
$$\frac{\operatorname*{\mathbb{P}}(P\in S)}{\operatorname*{\mathbb{P}}(P\in S+1)%
}=\frac{\sum_{k\in S}\operatorname*{\mathbb{P}}(P=k)}{\sum_{k\in S}%
\operatorname*{\mathbb{P}}(P=k+1)}\leq\max_{k\in S}\frac{\operatorname*{%
\mathbb{P}}(P=k)}{\operatorname*{\mathbb{P}}(P=k+1)}.$$
For each $k\in S$, we have
$$\frac{\operatorname*{\mathbb{P}}(P=k)}{\operatorname*{\mathbb{P}}(P=k+1)}=%
\frac{e^{-\lambda}\lambda^{k}/k!}{e^{-\lambda}\lambda^{k+1}/(k+1)!}=\frac{k+1}%
{\lambda}\leq\frac{K+1}{\lambda}.$$
Hence,
$$\frac{\operatorname*{\mathbb{P}}(P\in S)}{\operatorname*{\mathbb{P}}(P\in S+1)%
}\leq 1+\frac{K-\lambda+1}{\lambda}$$
∎
Applying this lemma $\Delta$ times to the sets $S^{\prime},S^{\prime}+1,\ldots,S^{\prime}+\Delta-1$ with $K=\lambda+2\Delta$, we get
$$\frac{\operatorname*{\mathbb{P}}(P\in S^{\prime})}{\operatorname*{\mathbb{P}}(%
P\in S^{\prime}+\Delta)}\leq\Big{(}1+\frac{2\Delta+1}{\lambda}\Big{)}^{\Delta}%
=\exp\Big{(}\Delta\ln\Big{(}1+\frac{2\Delta+1}{\lambda}\Big{)}\Big{)}\leq\exp%
\Big{(}\frac{\Delta(2\Delta+1)}{\lambda}\Big{)}.$$
Since $\operatorname*{\mathbb{P}}(P\in S^{\prime})\geq 1/4$ and $\Delta={\lceil{t\sqrt{\lambda}}\rceil}+1$, we get, for some constant $C>0$,
$$\operatorname*{\mathbb{P}}(P\in S^{\prime}+\Delta)\geq\frac{e^{-\frac{2\Delta^%
{2}+\Delta}{\lambda}}}{4}\geq\frac{e^{-3\Delta^{2}/\lambda}}{4}\geq e^{-Ct^{2}}.$$
This finishes the proof.
∎
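For intuition, the tail bound of Lemma 7.8 can be checked numerically. The sketch below is illustrative only (it is not part of the proof, and the choice $C=3$ is a hypothetical constant, not the one produced by the argument); it evaluates the exact Poisson tail and compares it with $e^{-Ct^{2}}$ for a few values of $\lambda$ and $t$:

```python
import math

def poisson_sf(lam, x):
    """P(P >= x) for P ~ Poisson(lam), computed from the exact pmf."""
    if x <= 0:
        return 1.0
    cdf, log_p = 0.0, -lam  # log pmf at k = 0
    for k in range(math.ceil(x)):  # sum pmf over k = 0, ..., ceil(x)-1
        cdf += math.exp(log_p)
        log_p += math.log(lam) - math.log(k + 1)
    return 1.0 - cdf

C = 3.0  # illustrative; the lemma only asserts existence of some C > 0
checks = [poisson_sf(lam, lam + t * math.sqrt(lam)) >= math.exp(-C * t * t)
          for lam in (1.0, 4.0, 25.0) for t in (1.0, 2.0, 3.0)]
```

All nine comparisons hold with this $C$, consistent with the statement of the lemma.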
Corollary 7.10 (Coupling of two Poisson random variables).
There exist constants $C_{1},C_{2}>0$ such that for all positive $\lambda_{1}$ and $\lambda_{2}$, there exists a joint distribution
of two Poisson random variables $P_{1}$ and $P_{2}$ with parameters $\lambda_{1}$ and $\lambda_{2}$ such that
$$\operatorname*{\mathbb{P}}(P_{1}=P_{2})\geq C_{1}e^{-\frac{C_{2}(\lambda_{1}-%
\lambda_{2})^{2}}{\lambda_{1}+\lambda_{2}}}.$$
Proof.
Consider the coupling of $P_{1}$ and $P_{2}$ that maximizes the probability of the event $\{P_{1}=P_{2}\}$. The probability
that $P_{1}$ and $P_{2}$ are equal can be expressed in terms of the total variation distance between the distributions of $P_{1}$ and $P_{2}$:
$$\operatorname*{\mathbb{P}}(P_{1}=P_{2})=1-\|P_{1}-P_{2}\|_{TV}=\sum_{k=0}^{%
\infty}\min(\operatorname*{\mathbb{P}}(P_{1}=k),\operatorname*{\mathbb{P}}(P_{%
2}=k)).$$
Assume without loss of generality that $\lambda_{1}\leq\lambda_{2}$. We now consider several cases.
I. If $\lambda_{1}\geq 1$ and $\lambda_{2}\leq 2\lambda_{1}$, then
$$\operatorname*{\mathbb{P}}(P_{1}=P_{2})\geq\sum_{k\geq\lambda_{2}}\min(\operatorname*{\mathbb{P}}(P_{1}=k),\operatorname*{\mathbb{P}}(P_{2}=k))=\sum_{k\geq\lambda_{2}}\operatorname*{\mathbb{P}}(P_{1}=k)=\operatorname*{\mathbb{P}}(P_{1}\geq\lambda_{2}).$$
By Lemma 7.8,
$$\operatorname*{\mathbb{P}}(P_{1}=P_{2})\geq\operatorname*{\mathbb{P}}(P_{1}%
\geq\lambda_{2})\geq e^{-C\frac{(\lambda_{1}-\lambda_{2})^{2}}{\lambda_{1}}}%
\geq e^{-9C\frac{(\lambda_{1}-\lambda_{2})^{2}}{\lambda_{1}+\lambda_{2}}}.$$
II. If $\lambda_{2}\geq 2\lambda_{1}$, then
$$\operatorname*{\mathbb{P}}(P_{1}=P_{2})\geq\min(\operatorname*{\mathbb{P}}(P_{1}=0),\operatorname*{\mathbb{P}}(P_{2}=0))=\operatorname*{\mathbb{P}}(P_{2}=0)=e^{-\lambda_{2}}\geq e^{-\frac{6(\lambda_{1}-\lambda_{2})^{2}}{\lambda_{1}+\lambda_{2}}}.$$
In the last inequality we used that $\lambda_{2}-\lambda_{1}\geq\nicefrac{{1}}{{2}}\lambda_{2}$ and $\lambda_{1}+\lambda_{2}\leq\nicefrac{{3}}{{2}}\lambda_{2}$, so
$$\frac{(\lambda_{1}-\lambda_{2})^{2}}{\lambda_{1}+\lambda_{2}}\geq\frac{(\nicefrac{{1}}{{2}}\lambda_{2})^{2}}{\nicefrac{{3}}{{2}}\lambda_{2}}=\frac{\lambda_{2}}{6}.$$
III. Finally, if $\lambda_{1}\leq 1$ and $\lambda_{2}\leq 2\lambda_{1}$, then, as in the previous case,
$$\operatorname*{\mathbb{P}}(P_{1}=P_{2})\geq\operatorname*{\mathbb{P}}(P_{2}=0)%
=e^{-\lambda_{2}}\geq e^{-2}\geq e^{-2}e^{-\frac{(\lambda_{1}-\lambda_{2})^{2}%
}{\lambda_{1}+\lambda_{2}}}.$$
This finishes the proof.
∎
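The maximal-coupling identity used in the proof can be evaluated directly for concrete parameters. The sketch below compares $\sum_{k}\min(\operatorname{\mathbb{P}}(P_{1}=k),\operatorname{\mathbb{P}}(P_{2}=k))$ against the bound of Corollary 7.10 with illustrative placeholder constants $C_{1}=1/8$ and $C_{2}=10$ (not the constants produced by the proof):

```python
import math

def poisson_pmf(lam, k):
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def max_coupling_prob(lam1, lam2, k_max=400):
    # P(P1 = P2) under the total-variation-optimal coupling:
    # sum over k of min(pmf_{lam1}(k), pmf_{lam2}(k)).
    return sum(min(poisson_pmf(lam1, k), poisson_pmf(lam2, k))
               for k in range(k_max + 1))

def bound(lam1, lam2, C1=0.125, C2=10.0):
    return C1 * math.exp(-C2 * (lam1 - lam2) ** 2 / (lam1 + lam2))

pairs = [(1.0, 2.0), (3.0, 5.0), (0.5, 1.0), (10.0, 20.0)]
ok = [max_coupling_prob(a, b) >= bound(a, b) for a, b in pairs]
```

For equal parameters the overlap is $1$, and it decays as the parameters separate, in line with the Gaussian-type bound above.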
Lemma 7.11.
For every positive $\lambda_{1}\leq\lambda_{2}$ there exists a joint
distribution of two Poisson random variables $P_{1}$ and $P_{2}$ such that
$$\operatorname*{\mathbb{P}}\big{(}P_{2}\geq P_{1}\text{ and }P_{2}-P_{1}\leq 2(%
\lambda_{2}-\lambda_{1})\big{)}\geq\frac{1}{2}.$$
Proof.
Observe that the Poisson distribution with parameter $\lambda_{2}$ stochastically dominates the Poisson distribution with parameter
$\lambda_{1}$ (simply because a Poisson random variable with parameter $\lambda_{2}$ can be expressed as the sum of two independent Poisson random variables with parameters $\lambda_{1}$ and $\lambda_{2}-\lambda_{1}$). Thus, there exists a coupling of $P_{1}$ and $P_{2}$ such that $P_{2}\geq P_{1}$ a.s. We have
$\operatorname*{\mathbb{E}}[P_{2}-P_{1}]=\lambda_{2}-\lambda_{1}$, and, by Markov’s inequality,
$\operatorname*{\mathbb{P}}((P_{2}-P_{1})\geq 2(\lambda_{2}-\lambda_{1}))\leq 1/2$.
∎
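Under the coupling in this proof, $P_{2}=P_{1}+Q$ with $Q\sim\text{Poisson}(\lambda_{2}-\lambda_{1})$ independent of $P_{1}$, so the event in Lemma 7.11 is exactly $\{Q\leq 2(\lambda_{2}-\lambda_{1})\}$ and its probability can be computed exactly. A minimal sketch (illustrative check, not part of the proof):

```python
import math

def poisson_cdf(lam, m):
    """P(Q <= m) for Q ~ Poisson(lam), with m a nonnegative integer."""
    if lam == 0.0:
        return 1.0
    return sum(math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
               for k in range(m + 1))

def event_prob(lam1, lam2):
    # Under the coupling P2 = P1 + Q, the event
    # {P2 >= P1 and P2 - P1 <= 2*(lam2 - lam1)} is {Q <= 2*(lam2 - lam1)}.
    mu = lam2 - lam1
    return poisson_cdf(mu, math.floor(2 * mu))

checks = [event_prob(a, b) >= 0.5
          for a, b in [(1.0, 1.5), (2.0, 7.0), (0.1, 0.2), (5.0, 5.5)]]
```

In each case the probability comfortably exceeds the $1/2$ guaranteed by Markov's inequality.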
Corollary 7.12.
For every positive $\lambda_{1}<\lambda_{2}$, there exists a random function $\kappa:\mathbb{Z}^{\geq 0}\to\mathbb{Z}^{\geq 0}$
such that $P+\kappa(P)$ has the Poisson distribution with parameter $\lambda_{2}$ if $P$ has the Poisson distribution
with parameter $\lambda_{1}$ ($P$ and $\kappa$ are independent).
Proof.
Consider Poisson random variables $P_{1}$ and $P_{2}$ coupled as in Lemma 7.11, so that $P_{2}\geq P_{1}$ a.s. Let $\kappa(i)=j$
with probability $\operatorname*{\mathbb{P}}(P_{2}=i+j\mid P_{1}=i)$. Then, clearly, $P_{1}+\kappa(P_{1})$ is distributed as $P_{2}$.
∎
Corollary 7.13.
For every positive $\lambda_{1}<\lambda_{2}$, there exists a random function $\hat{\kappa}:\mathbb{Z}^{\geq 0}\to\mathbb{Z}^{\geq 0}$
such that $\hat{\kappa}(P_{1})\leq 2(\lambda_{2}-\lambda_{1})$ a.s. for a Poisson random variable $P_{1}$ with parameter $\lambda_{1}$
and there exists a coupled Poisson random variable $P_{2}$ with parameter $\lambda_{2}$ such that
$$\operatorname*{\mathbb{P}}(P_{2}=P_{1}+\hat{\kappa}(P_{1}))\geq 1/2.$$
Proof.
We let $\hat{\kappa}(i)=\min(\kappa(i),2(\lambda_{2}-\lambda_{1}))$ and $P_{2}=P_{1}+\kappa(P_{1})$. Then, clearly $\hat{\kappa}(i)\leq 2(\lambda_{2}-\lambda_{1})$ and
$P_{1}+\hat{\kappa}(P_{1})=P_{1}+\kappa(P_{1})$ with probability at least $1/2$.
∎
References
[ABH14]
Emmanuel Abbe, Afonso S. Bandeira, and Georgina Hall, Exact recovery in
the stochastic block model, CoRR abs/1405.3267 (2014).
[ABKK15]
Naman Agarwal, Afonso S. Bandeira, Konstantinos Koiliaris, and Alexandra Kolla,
Multisection in the stochastic block model using semidefinite
programming, CoRR abs/1507.02323 (2015).
[AM05]
Dimitris Achlioptas and Frank McSherry, On spectral learning of mixtures
of distributions, Learning Theory (Peter Auer and Ron Meir, eds.), Lecture
Notes in Computer Science, vol. 3559, Springer Berlin Heidelberg, 2005,
pp. 458–469 (English).
[Ame14]
Brendan P. W. Ames, Guaranteed clustering and biclustering via semidefinite
programming, Mathematical Programming 147 (2014), no. 1-2, 429–465
(English).
[AMMN06]
Noga Alon, Konstantin Makarychev, Yury Makarychev, and Assaf Naor,
Quadratic forms on graphs, Inventiones mathematicae 163
(2006), no. 3, 499–522 (English).
[AN06]
Noga Alon and Assaf Naor, Approximating the cut-norm via Grothendieck's
inequality, SIAM J. Comput. 35 (2006), 787–803.
[AS15]
Emmanuel Abbe and Colin Sandon, Community detection in general stochastic
block models: fundamental limits and efficient recovery algorithms,
Proceedings of Symposium on Foundations of Computer Science, FOCS ’15, IEEE
Computer Society, 2015.
[BLCS87]
Thang Nguyen Bui, F. Thomson Leighton, Soma Chaudhuri, and Michael Sipser,
Graph bisection algorithms with good average case behavior,
Combinatorica 7 (1987), 171–191.
[BMMN11]
M. Braverman, K. Makarychev, Y. Makarychev, and A. Naor, The Grothendieck
constant is strictly smaller than Krivine's bound, Foundations of Computer
Science (FOCS), 2011 IEEE 52nd Annual Symposium on, Oct 2011, pp. 453–462.
[Bop87]
Ravi B. Boppana, Eigenvalues and graph bisection: An average-case
analysis, 28th Annual Symposium on Foundations of Computer
Science, 1987, pp. 280–285.
[Bru09]
S. Charles Brubaker, Robust PCA and clustering in noisy mixtures, SODA
(Claire Mathieu, ed.), SIAM, 2009, pp. 1078–1087.
[BS95]
Avrim Blum and Joel Spencer, Coloring random and semi-random
$k$-colorable graphs, J. Algorithms 19 (1995), 204–234.
[CK99]
Anne Condon and Richard Karp, Algorithms for graph partitioning on the
planted partition model, Randomization, Approximation, and Combinatorial
Optimization. Algorithms and Techniques, Lecture Notes in Computer
Science, vol. 1671, Springer Berlin / Heidelberg, 1999, pp. 221–232.
[CL15]
T. Tony Cai and Xiaodong Li, Robust and computationally feasible
community detection in the presence of arbitrary outlier nodes, Ann.
Statist. 43 (2015), no. 3, 1027–1059.
[CO06]
Amin Coja-Oghlan, A spectral heuristic for bisecting random graphs,
Random Structures & Algorithms 29 (2006), no. 3, 351–398.
[Co10]
Amin Coja-Oghlan, Graph partitioning via adaptive spectral techniques,
Comb. Probab. Comput. 19 (2010), no. 2, 227–284.
[CRV15]
Peter Chin, Anup Rao, and Van Vu, Stochastic block model and community
detection in sparse graphs: A spectral algorithm with optimal rate of
recovery, Proceedings of The 28th Conference on Learning Theory, COLT
2015, Paris, France, July 3-6, 2015, 2015, pp. 391–423.
[CSX12]
Yudong Chen, Sujay Sanghavi, and Huan Xu, Clustering sparse graphs,
Advances in Neural Information Processing Systems 25 (F. Pereira, C.J.C.
Burges, L. Bottou, and K.Q. Weinberger, eds.), Curran Associates, Inc., 2012,
pp. 2204–2212.
[DF86]
M. E. Dyer and A. M. Frieze, Fast solution of some random NP-hard
problems, 27th Annual Symposium on Foundations of Computer
Science, 1986, pp. 331–336.
[DI98]
Tassos Dimitriou and Russell Impagliazzo, Go with the winners for graph
bisection, Proceedings of the ninth Annual ACM-SIAM Symposium on
Discrete Algorithms, 1998, pp. 510–520.
[DKMZ11]
Aurelien Decelle, Florent Krzakala, Cristopher Moore, and Lenka Zdeborová,
Asymptotic analysis of the stochastic block model for modular networks
and its algorithmic applications, Phys. Rev. E 84 (2011), 066106.
[FK98]
Uriel Feige and Joe Kilian, Heuristics for finding large independent
sets, with applications to coloring semi-random graphs, Proceedings of
Symposium on Foundations of Computer Science, 1998, pp. 674–683.
[For10]
Santo Fortunato, Community detection in graphs, Physics Reports
486 (2010), 75–174.
[GL76]
A. Grothendieck and V. Losert, “Résumé de la théorie
métrique des produits tensoriels topologiques”, Univ., 1976.
[GV14]
Olivier Guédon and Roman Vershynin, Community detection in sparse
networks via Grothendieck's inequality, CoRR abs/1411.4686 (2014).
[HLL83]
Paul W. Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt,
Stochastic blockmodels: First steps, Social Networks 5
(1983), no. 2, 109 – 137.
[JS93]
Mark Jerrum and Gregory Sorkin, Simulated annealing for graph bisection,
Proceedings of the 34th Annual IEEE Symposium on Foundations of
Computer Science, 1993, pp. 94–103.
[KSV05]
Ravindran Kannan, Hadi Salmasian, and Santosh Vempala, The spectral
method for general mixture models, Learning Theory (Peter Auer and Ron Meir,
eds.), Lecture Notes in Computer Science, vol. 3559, Springer Berlin
Heidelberg, 2005, pp. 444–457 (English).
[Mas14]
Laurent Massoulié, Community detection thresholds and the weak
Ramanujan property, Symposium on Theory of Computing, STOC 2014, New York,
NY, USA, May 31 - June 03, 2014, 2014, pp. 694–703.
[McS01]
Frank McSherry, Spectral partitioning of random graphs, Proceedings of
the 42nd IEEE Symposium on Foundations of Computer Science, 2001,
pp. 529–537.
[MMV12]
Konstantin Makarychev, Yury Makarychev, and Aravindan Vijayaraghavan,
Approximation algorithms for semi-random partitioning problems,
Proceedings of Symposium on Theory of Computing, 2012, pp. 367–384.
[MMV14]
, Constant factor approximation for balanced cut in the random
PIE model, Proceedings of Symposium on Theory of Computing, 2014.
[MMV15]
Konstantin Makarychev, Yury Makarychev, and Aravindan Vijayaraghavan,
Correlation clustering with noisy partial information, Proceedings of
the Conference on Learning Theory (COLT), 2015.
[MNS12]
Elchanan Mossel, Joe Neeman, and Allan Sly, Stochastic block models and
reconstruction, arXiv preprint arXiv:1202.1499 (2012).
[MNS13]
Elchanan Mossel, Joe Neeman, and Allan Sly, A proof of the block model
threshold conjecture, CoRR abs/1311.4115 (2013).
[MNS14]
, Belief propagation, robust reconstruction and optimal recovery
of block models, Proceedings of The 27th Conference on Learning Theory,
COLT 2014, Barcelona, Spain, June 13-15, 2014, 2014, pp. 356–370.
[MNS15]
Elchanan Mossel, Joe Neeman, and Allan Sly, Consistency thresholds for
the planted bisection model, Proceedings of the Forty-Seventh Annual ACM on
Symposium on Theory of Computing (New York, NY, USA), STOC ’15, ACM, 2015,
pp. 69–75.
[MS15]
Andrea Montanari and Subhabrata Sen, Semidefinite programs on sparse
random graphs, CoRR abs/1504.05910 (2015).
[Vu14]
V. Vu, A simple SVD algorithm for finding hidden partitions, ArXiv
e-prints (2014).
[WBB76]
Harrison C. White, Scott A. Boorman, and Ronald L. Breiger, Social
structure from multiple networks. i. blockmodels of roles and positions,
American Journal of Sociology 81 (1976), no. 4, pp. 730–780
(English).
[ZZ15]
Anderson Y. Zhang and Harrison H. Zhou, Minimax rates of community
detection in stochastic block models, CoRR abs/1507.05313 (2015).
Appendix A Useful Concentration Bounds
The following concentration bound follows from the Bernstein or Hoeffding bounds:
Fact A.1.
For each $i,j\in[N]$, let $a_{ij}$ be a Bernoulli random variable that equals $1$ with probability $p_{ij}$. Then
$$\operatorname*{\mathbb{P}}\Big{[}\sum_{i,j=1}^{N}a_{ij}-\operatorname*{\mathbb{E}}\sum_{i,j}a_{ij}>t\Big{]}\leq\exp\left(-\frac{t^{2}/2}{\sum_{i,j}p_{ij}+t/3}\right).$$
(A.1)
We first give a simple concentration result, used in [GV14], about the $\infty\rightarrow 1$ norm of a matrix with mean-zero entries. This will be useful to bound various error terms.
Lemma A.2.
Let $M\in\mathbb{R}^{N\times N}$ be the adjacency matrix of a graph generated as follows: For every pair of vertices an edge is present independently at random with probability $0\leq p_{ij}\leq 1/2$, and let $\sigma^{2}=\sum_{i,j=1,i<j}^{N}p_{ij}$. For any $0<t<3\sigma$ we have
$$\operatorname*{\mathbb{P}}\Big{[}\lVert M-\operatorname*{\mathbb{E}}M\rVert_{%
\infty\rightarrow 1}>t\sigma\Big{]}\leq\exp\left(-\frac{t^{2}}{16}+2N\right).$$
(A.2)
Proof.
For all $i<j\in[N]$, let $Q_{ij}=M_{ij}-\operatorname*{\mathbb{E}}[M_{ij}]$ and $Q_{ji}=Q_{ij}$.
$$\lVert Q\rVert_{\infty\rightarrow 1}=\max_{x,y\in\{-1,1\}^{N}}x^{t}Qy\leq 2\max_{x,y\in\{-1,1\}^{N}}\sum_{i<j}Q_{ij}x_{i}y_{j}.$$
Fix an assignment $x,y$. The random variables $\{x_{i}Q_{ij}y_{j}\}_{i<j}$ are independent and take values in $[-1,1]$. Hence, by the Bernstein bound,
$$\operatorname*{\mathbb{P}}\Big{[}\sum_{i<j}Q_{ij}x_{i}y_{j}>\tfrac{t}{2}\sigma%
\Big{]}\leq\exp\left(-\frac{\sigma^{2}t^{2}/8}{\sum_{i<j}p_{ij}(1-p_{ij})+%
\tfrac{t\sigma}{6}}\right)\leq\exp\left(-t^{2}/16\right),$$
where the last inequality follows from $\operatorname*{\mathbb{E}}\big{[}\big{(}\sum_{i<j}Q_{ij}x_{i}y_{j}\big{)}^{2}\big{]}=\sum_{i<j}p_{ij}(1-p_{ij})\leq\sigma^{2}$ and $t<3\sigma$. The lemma follows from a union bound over all the $2^{2N}$ choices for $x,y$.
∎
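For small matrices, the quantity $\lVert Q\rVert_{\infty\rightarrow 1}=\max_{x,y\in\{-1,1\}^{N}}x^{t}Qy$ appearing in the proof above can be brute-forced; a minimal sketch (exponential in $N$, for illustration only):

```python
from itertools import product

def inf_to_one_norm(Q):
    """||Q||_{inf->1} = max over x, y in {-1,+1}^N of x^T Q y."""
    n = len(Q)
    best = float("-inf")
    for x in product((-1, 1), repeat=n):
        for y in product((-1, 1), repeat=n):
            best = max(best, sum(x[i] * Q[i][j] * y[j]
                                 for i in range(n) for j in range(n)))
    return best
```

For example, for $Q=\begin{pmatrix}1&-1\\-1&1\end{pmatrix}$ the maximum is attained at $x=y=(1,-1)$ and equals $4$.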
IMEX error inhibiting schemes with post-processing
Adi Ditkowski
School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel.
Email: [email protected]
Sigal Gottlieb
Mathematics Department, University of Massachusetts Dartmouth, 285 Old Westport Road,
North Dartmouth MA 02747. Email: [email protected]
Zachary J. Grant
Department of Computational and Applied Mathematics, Oak Ridge National Laboratory, Oak Ridge TN 37830. Email: [email protected]
Abstract
High order implicit-explicit (IMEX) methods are often desired when evolving the solution of an
ordinary differential equation that has a stiff part that is linear and a non-stiff part that is nonlinear.
This situation often arises in semi-discretization of partial differential equations and many such
IMEX schemes have been considered in the literature. The methods considered usually have a
global error that is of the same order as the local truncation error. More recently, methods
with global errors that are one order higher than predicted by the local truncation error have been
devised [18, 28, 5].
In prior work [6, 7] we investigated the interplay between
the local truncation error and the global error to construct explicit and implicit error inhibiting schemes
that control the accumulation of the local truncation error over time, resulting in a global
error that is one order higher than expected from the local truncation error, and which can be
post-processed to obtain a solution which is two orders higher than expected.
In this work we extend our error inhibiting with post-processing framework introduced in
[6] and further in [7] to a class of additive general linear
methods with multiple steps and stages.
We provide sufficient conditions under which these methods with local truncation error
of order $p$ will produce solutions of order $p+1$, which can be post-processed to order $p+2$, and
describe the construction of one such post-processor. We apply this approach to obtain
implicit-explicit (IMEX) methods with multiple steps and stages.
We present some of our new IMEX methods and show their linear stability properties,
and investigate how these methods perform in practice on some numerical test cases.
1 Introduction
Efficient high order numerical methods for evolving the solution of an ordinary differential
equation (ODE) are frequently desired, especially for the time-evolution of PDEs that are
semi-discretized in space. Higher order methods are desirable as they enable more accurate
solutions for larger step-sizes.
In this paper we consider numerical solvers for ordinary differential equations (ODEs)
$$\displaystyle u_{t}={\cal F}(t,u)\;,\;\;\;\;\;t\geq 0$$
(1)
$$\displaystyle u(t_{0})=u_{0}.$$
(where ${\cal F}(t,u)$ is sufficiently smooth). Any non-autonomous system of this form can be converted to an
autonomous system (see [15]) so that without loss of generality, we restrict ourselves
to the autonomous case:
$$\displaystyle u_{t}={\cal F}(u)\;,\;\;\;\;\;t\geq 0$$
(2)
$$\displaystyle u(t_{0})=u_{0}.$$
The forward Euler method is the simplest numerical method for evolving this problem forward
in time with time-steps $\Delta t$:
$$v_{n+1}=v_{n}+\Delta t{\cal F}(v_{n})\;,$$
where $v_{n}$ approximates the exact solution at time $t^{n}$ and we let $v_{0}=u_{0}$.
For this method, we have
$$\tau^{n}=\Delta t\,LTE^{n}=u(t_{n-1})+\Delta t\,{\cal F}(u(t_{n-1}))-u(t_{n})=O(\Delta t^{2})$$
where $LTE^{n}$ is called the local truncation error, and $\tau^{n}$ the approximation error,
at any given time $t_{n}$. At any such time $t_{n}$ we are interested in the global error,
which is the difference between the numerical and exact solutions. For the forward Euler method
the global error is first order accurate:
$$E^{n}=v_{n}-u(t_{n})=O(\Delta t).$$
Observe that the global error $E^{n}$ and the local truncation error $LTE^{n}$ are of the same order,
$O(\Delta t)$. This reflects the common experience that the one-step and linear multistep
methods typically used in practice produce solutions whose global error is of the
same order as the local truncation error.
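This first-order behavior is easy to observe numerically. The sketch below uses the illustrative test problem $u_{t}=-u$, $u(0)=1$ (not one of the test cases in this paper): halving $\Delta t$ roughly halves the global error at $t=1$.

```python
import math

def forward_euler_error(dt):
    """Global error at t = 1 of forward Euler applied to u' = -u, u(0) = 1."""
    v = 1.0
    for _ in range(round(1.0 / dt)):
        v += dt * (-v)  # v_{n+1} = v_n + dt * F(v_n)
    return abs(v - math.exp(-1.0))

# ratio close to 2, consistent with a global error of O(dt)
ratio = forward_euler_error(0.01) / forward_euler_error(0.005)
```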
To generate methods with higher local truncation errors, we can generalize the forward Euler scheme
by adding steps (e.g. linear multistep methods), stages (e.g. Runge–Kutta methods), or derivatives
(e.g. Taylor series methods).
The Dahlquist Equivalence Theorem [27] states that any zero-stable, consistent
linear multistep method with truncation error $O(\Delta t^{p})$ will have global error $O(\Delta t^{p})$,
provided that the solution $u$ has at least $p+1$ smooth derivatives. This behavior is usually
seen in other types of schemes, and is the general expectation: so much so that the
order of a numerical method is typically defined solely by the order conditions
derived by Taylor series analysis of the local truncation error.
However, the Lax–Richtmyer equivalence theorem (see e.g. [21], [12], [22]) states that if the numerical scheme is stable then its global error is at least of the same order as its local truncation error. Recent work
[17, 29, 18, 28, 5] highlights that
while for a stable scheme the global error must be at least of the same order as its local truncation error,
it may in fact be of higher order.
Schemes that have global errors that are higher order than predicted by the local truncation errors
were devised by Kulikov [17] and by Weiner and colleagues [29]
following the quasi-consistency theory first introduced by Skeel in 1978 [25].
These methods have truncation errors that are of order $p$ but have global errors of order $p+1$.
In [5] Ditkowski and Gottlieb derived sufficient conditions for a class of general linear methods
(GLMs) under which one can control the accumulation of the local truncation error over time evolution
and produce methods that are one order higher than expected from the truncation error alone.
Additional conditions were derived in
[18, 28, 19],
that allow the computation of the precise form of the first surviving
term in the global error (the vector multiplying $\Delta t^{p+1}$).
In these works, this form was leveraged for error estimation.
However, the conditions in [18, 28, 19]
were overly restrictive and made it difficult to find methods.
In [6] Ditkowski, Gottlieb, and Grant showed that under less restrictive conditions the
form of the first surviving term in the global error can be computed explicitly and they showed
how the solution can be post-processed to obtain accuracy of order $p+2$.
In this work, they produced implicit and explicit methods
that have favorable stability properties. The coefficients of those methods can be downloaded from
[8]. In [7] Ditkowski, Gottlieb, and Grant derived sufficient conditions
under which two-derivative GLMs of a certain form can produce solutions of order $p+1$ and
can be post-processed to obtain solutions of order $p+2$.
The coefficients of those methods can be downloaded from [9].
In many problems, there is a component that is stiff and easy to invert (e.g. linear) and a component that
is non-stiff but difficult to invert (e.g. nonlinear). Such examples include advection-diffusion PDEs,
where the diffusion term requires a small time-step when used with an explicit time-stepping method
but is linear, and the advection term is nonlinear but allows a relatively large time-step when used with
an explicit time-stepping method. In such cases it is convenient to distinguish these
two components and write the resulting autonomous ODE as
$$\displaystyle u_{t}=F(u)+G(u)\;,\;\;\;\;\;t\geq 0$$
(3)
$$\displaystyle u(t_{0})=u_{0}.$$
(where we assume, as before, the required smoothness on $F(u)$ and $G(u)$).
Here, $G(u)$ is a stiff but ‘simple to invert’ operator while $F(u)$ is non-stiff but complicated
(or costly) to invert and typically non-linear.
Implicit-explicit (IMEX) methods were developed to treat the $F$ component explicitly and the
$G$ component implicitly, thus alleviating the linear stability constraint on the stiff component
$G$ while treating the difficult to invert component $F$ explicitly.
IMEX methods were first introduced by Crouzeix in 1980 [4]
for evolving parabolic equations.
For time dependent PDEs (notably convection-diffusion equations),
Ascher, Ruuth, and Wetton [1]
introduced IMEX multi-step methods, and
Ascher, Ruuth, and Spiteri [2] introduced IMEX Runge–Kutta schemes.
Implicit methods are often particularly desirable when applied to a linear component, in which case
the order conditions may simplify: such methods were considered by Calvo, Frutos, and Novo in
[3].
In 2003, Kennedy and Carpenter [16]
derived IMEX Runge–Kutta methods based on singly diagonally implicit Runge–Kutta (SDIRK) methods.
This work introduced sophisticated IMEX methods with good accuracy and stability properties, as well as high quality embedded methods for error control and other features that make these methods usable in complicated applications.
Recently, there has been more interest in IMEX methods with multiple steps and stages, including
[14], [26].
Optimized stability regions for IMEX methods
with multiple steps and stages have also been of recent
interest, as in [30] and [20].
In recent work, Schneider and colleagues [23, 24] produced super-convergent IMEX Peer methods
(a Peer method is a GLM where each stage is of the same order).
These methods satisfy error inhibiting conditions similar to those in [5]
and so produce solutions of order $p+1$ although their truncation error is of order $p$;
we shall refer to them here as IMEX-EIS schemes.
In this work, we extend the error inhibiting with post-processing theory in
[6] and [7] to IMEX methods.
We proceed to devise error inhibiting IMEX methods of up to five steps and order six
that have truncation errors of order $p$ but which can be
post-processed to attain a solution of order $p+2$. We refer to these as IMEX-EIS+
methods. We test these methods on a number of
numerical examples to demonstrate their enhanced accuracy properties.
This paper is structured as follows: in Section 2 we present our notation and
some information about our IMEX methods which have multiple steps and stages.
In Section 3.1 we provide sufficient conditions under which these methods with local truncation error
of order $p$ will produce solutions of order $p+1$, which can be post-processed to order $p+2$.
We provide the construction of one such post-processor in Section 3.2. In Section 4
we present some of our new methods and show their linear stability properties, and in Section
5 we show how these methods perform in practice on some numerical
test cases.
2 Multistep multi-stage additive methods
In this work we consider the class of additive general linear methods (GLMs) of the form
$$V^{n+1}=\mathbf{D}V^{n}+\Delta t\left[\mathbf{A}_{F}F(V^{n})+\mathbf{R}_{F}F(V%
^{n+1})+\mathbf{A}_{G}G(V^{n})+\mathbf{R}_{G}G(V^{n+1})\right],$$
(4)
where $V^{n}$ is a vector of length $s$ that contains the numerical solution
at times $\left(t_{n}+c_{j}\Delta t\right)$ for $j=1,\ldots,s$:
$$V^{n}=\left(v(t_{n}+c_{1}\Delta t),v(t_{n}+c_{2}\Delta t),\ldots,v(t_{n}+c_{s}\Delta t)\right)^{T}.$$
(5)
The functions $F(V^{n})$ and $G(V^{n})$ are defined as the component-wise function evaluation on the vector $V^{n}$:
$$F(V^{n})=\left(F(v(t_{n}+c_{1}\Delta t)),F(v(t_{n}+c_{2}\Delta t)),\ldots,F(v(%
t_{n}+c_{s}\Delta t))\right)^{T}.$$
(6)
and
$$G(V^{n})=\left(G(v(t_{n}+c_{1}\Delta t)),G(v(t_{n}+c_{2}\Delta t)),\ldots,G(v(t_{n}+c_{s}\Delta t))\right)^{T}.$$
(7)
For convenience, we select $c_{1}=0$ so that the first element in the vector $V^{n}$
approximates the solution at time $t_{n}$, and the abscissas are non-decreasing
$c_{1}\leq c_{2}\leq...\leq c_{s}$.
To initialize these methods, we define the first element in the initial solution
vector $V^{0}_{1}=u(t_{0})$ and the remaining elements $v(t_{0}+c_{j}\Delta t)$
are computed using a sufficiently accurate method.
Similarly, we define the projection of the exact solution of the ODE (3)
onto the temporal grid by:
$$U^{n}=\left(u(t_{n}+c_{1}\Delta t),u(t_{n}+c_{2}\Delta t),\ldots,u(t_{n}+c_{s}%
\Delta t)\right)^{T},$$
(8)
where $F(U^{n})$ and $G(U^{n})$ are the component-wise function evaluations on the vector $U^{n}$.
We define the global error as the difference between the
vectors of the exact and the numerical solutions at some time $t^{n}$
$$E^{n}=V^{n}-U^{n}.$$
(9)
Remark
Note that the additive form (4) includes both explicit and implicit schemes, as
$V^{n+1}$ appears on both sides of the equation. However, if
$\mathbf{R}_{F}$ and $\mathbf{R}_{G}$ are both strictly lower triangular the scheme is explicit.
A special case of additive methods occurs when the method is explicit in $F$ and implicit in $G$:
such methods are called implicit-explicit or IMEX methods. In this work we are primarily interested in
IMEX schemes so we require that in the form (4)
$\mathbf{R}_{F}$ is strictly lower triangular, and that $\mathbf{R}_{G}$ contains some nonzero elements on or above the diagonal.
However, in this section and in Section 3.1 we do not assume this is the case, and so our
error inhibiting with post-processing theory holds for all additive methods of the
form (4), which is a much more general class than IMEX methods.
${}_{\blacksquare}$
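To make the recursion in (4) concrete, the following minimal sketch (scalar ODE, explicit case; the function and variable names are illustrative, not from any accompanying code) computes one step when $\mathbf{R}_{F}$ and $\mathbf{R}_{G}$ are strictly lower triangular, so the stages of $V^{n+1}$ can be filled in order:

```python
def glm_step(V, dt, F, G, D, AF, RF, AG, RG):
    """One explicit step of the additive GLM (4) for a scalar ODE u_t = F(u)+G(u).
    Assumes RF and RG are strictly lower triangular (explicit method)."""
    s = len(V)
    Fn, Gn = [F(v) for v in V], [G(v) for v in V]
    Vnew, Fnew, Gnew = [], [], []
    for i in range(s):
        v = sum(D[i][j] * V[j] for j in range(s))
        v += dt * sum(AF[i][j] * Fn[j] + AG[i][j] * Gn[j] for j in range(s))
        # stages j < i of V^{n+1} are already available
        v += dt * sum(RF[i][j] * Fnew[j] + RG[i][j] * Gnew[j] for j in range(i))
        Vnew.append(v)
        Fnew.append(F(v))
        Gnew.append(G(v))
    return Vnew

# With s = 1 this recovers forward Euler on u_t = F(u) + G(u):
step = glm_step([1.0], 0.1, lambda u: -0.5 * u, lambda u: -0.5 * u,
                [[1.0]], [[1.0]], [[0.0]], [[1.0]], [[0.0]])
```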
2.1 Truncation errors
A method of the form (4) has an approximation error $\boldsymbol{\tau}^{n}$ at time $t^{n}$
$$\boldsymbol{\tau}^{n}=\left[\mathbf{D}U^{n-1}+\Delta t\left(\mathbf{A}_{F}F(U^%
{n-1})+\mathbf{R}_{F}F(U^{n})+\mathbf{A}_{G}G(U^{n-1})+\mathbf{R}_{G}G(U^{n})%
\right)\right]-U^{n}$$
(10)
where
$$\boldsymbol{\tau}^{n}=\sum_{j=0}^{\infty}\boldsymbol{\tau}^{n}_{j}\Delta t^{j}%
=\boldsymbol{\tau}_{0}\;u(t_{n})+\sum_{j=1}^{\infty}\Delta t^{j}\left(%
\boldsymbol{\tau}_{j}\left.\frac{d^{j}u}{dt^{j}}\right|_{t=t_{n}}+\hat{%
\boldsymbol{\tau}}_{j}\left.\frac{d^{j-1}G(u)}{dt^{j-1}}\right|_{t=t_{n}}\right)$$
(11)
where
$$\displaystyle\boldsymbol{\tau}_{0}$$
$$\displaystyle=$$
$$\displaystyle\left(\mathbf{D}-\mathbf{I}\right){\mathbb{1}}$$
(12a)
$$\displaystyle\boldsymbol{\tau}_{j}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{(j-1)!}\left(\frac{1}{j}\mathbf{D}(\mathbf{c}-{\mathbb{1%
}})^{j}+\mathbf{A}_{F}(\mathbf{c}-{\mathbb{1}})^{j-1}+\mathbf{R}_{F}\mathbf{c}%
^{j-1}-\frac{1}{j}\mathbf{c}^{j}\right)\;\;\mbox{for $j>0$}$$
(12b)
$$\displaystyle\hat{\boldsymbol{\tau}}_{j}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{(j-1)!}\left[\left(\mathbf{A}_{G}-\mathbf{A}_{F}\right)(\mathbf{c}-{\mathbb{1}})^{j-1}+\left(\mathbf{R}_{G}-\mathbf{R}_{F}\right)\mathbf{c}^{j-1}\right]\;\;\;\;\mbox{for $j=1,2,\ldots$}.$$
(12c)
We denote the vector of abscissas by $\mathbf{c}=(c_{1},c_{2},...,c_{s})^{T}$,
and the vector of ones by ${\mathbb{1}}=(1,1,...,1)^{T}$.
Any terms of the form $\mathbf{c}^{j}$ are to be understood component-wise $\mathbf{c}^{j}=(c_{1}^{j},c_{2}^{j},...,c^{j}_{s})^{T}$.
Note that this notation matches the one used in [20].
Alternatively, observing that $u_{t}=F+G$ we can write this as:
$$\displaystyle\boldsymbol{\tau}^{n}$$
$$\displaystyle=$$
$$\displaystyle\boldsymbol{\tau}_{0}u(t_{n})+\sum_{j=1}^{\infty}\Delta t^{j}%
\left(\boldsymbol{\tau}_{j}\left.\frac{d^{j}u}{dt^{j}}\right|_{t=t_{n}}+\hat{%
\boldsymbol{\tau}}_{j}\left.\frac{d^{j-1}G(u)}{dt^{j-1}}\right|_{t=t_{n}}\right)$$
$$\displaystyle=$$
$$\displaystyle\boldsymbol{\tau}_{0}u(t_{n})+\sum_{j=1}^{\infty}\Delta t^{j}%
\left(\boldsymbol{\tau}_{j}\left.\frac{d^{j-1}u_{t}}{dt^{j-1}}\right|_{t=t_{n}%
}+\hat{\boldsymbol{\tau}}_{j}\left.\frac{d^{j-1}G(u)}{dt^{j-1}}\right|_{t=t_{n%
}}\right)$$
$$\displaystyle=$$
$$\displaystyle\boldsymbol{\tau}_{0}u(t_{n})+\sum_{j=1}^{\infty}\Delta t^{j}%
\left(\boldsymbol{\tau}_{j}\left.\frac{d^{j-1}F(u)}{dt^{j-1}}\right|_{t=t_{n}}%
+\left(\boldsymbol{\tau}_{j}+\hat{\boldsymbol{\tau}}_{j}\right)\left.\frac{d^{%
j-1}G(u)}{dt^{j-1}}\right|_{t=t_{n}}\right)$$
$$\displaystyle=$$
$$\displaystyle\boldsymbol{\tau}_{0}u(t_{n})+\sum_{j=1}^{\infty}\Delta t^{j}%
\left(\boldsymbol{\tau}^{F}_{j}\left.\frac{d^{j-1}F(u)}{dt^{j-1}}\right|_{t=t_%
{n}}+\boldsymbol{\tau}^{G}_{j}\left.\frac{d^{j-1}G(u)}{dt^{j-1}}\right|_{t=t_{%
n}}\right)$$
where
$$\displaystyle\boldsymbol{\tau}_{0}$$
$$\displaystyle=$$
$$\displaystyle\left(\mathbf{D}-\mathbf{I}\right){\mathbb{1}}$$
(13a)
$$\displaystyle\boldsymbol{\tau}^{F}_{j}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{(j-1)!}\left(\frac{1}{j}\mathbf{D}(\mathbf{c}-{\mathbb{1%
}})^{j}+\mathbf{A}_{F}(\mathbf{c}-{\mathbb{1}})^{j-1}+\mathbf{R}_{F}\mathbf{c}%
^{j-1}-\frac{1}{j}\mathbf{c}^{j}\right)\;\;\mbox{for $j>0$}$$
(13b)
$$\displaystyle\boldsymbol{\tau}^{G}_{j}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{(j-1)!}\left(\frac{1}{j}\mathbf{D}(\mathbf{c}-{\mathbb{1%
}})^{j}+\mathbf{A}_{G}(\mathbf{c}-{\mathbb{1}})^{j-1}+\mathbf{R}_{G}\mathbf{c}%
^{j-1}-\frac{1}{j}\mathbf{c}^{j}\right)\;\;\mbox{for $j>0$}$$
(13c)
This notation is the same as used in [26].
Regardless of whether we use the notation (12) or (13),
for a method to have truncation error of order $p$ it must satisfy the order conditions
$$\boldsymbol{\tau}^{n}_{j}=0\;\;\;\;\;\mbox{for all}\;\;j\leq p.$$
(14)
As this must be true for all $F$ and $G$ these become
$$\boldsymbol{\tau}_{0}=0,\;\;\;\mbox{and}\;\;\;\boldsymbol{\tau}_{j}=\hat{%
\boldsymbol{\tau}}_{j}=0\;\;\;\forall\;\;j=1,...,p,$$
or, equivalently,
$$\boldsymbol{\tau}_{0}=0,\;\;\;\mbox{and}\;\;\;\boldsymbol{\tau}^{F}_{j}=%
\boldsymbol{\tau}^{G}_{j}=0\;\;\;\forall\;\;j=1,...,p.$$
We are only interested in zero-stable methods. A sufficient condition for this is
that the coefficient matrix $\mathbf{D}$ is a rank one matrix that has row sum one: this
satisfies the consistency condition
$$\boldsymbol{\tau}_{0}=\left(\mathbf{D}-\mathbf{I}\right){\mathbb{1}}=0.$$
For simplicity we assume this to be the case throughout the remainder of this work.
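To make this assumption concrete, the following sketch (in Python with NumPy; the stage weights are illustrative placeholders, not coefficients of any method in this paper) checks that a rank-one $\mathbf{D}$ with row sum one satisfies the consistency condition $\boldsymbol{\tau}_{0}=(\mathbf{D}-\mathbf{I}){\mathbb{1}}=0$:

```python
import numpy as np

# Rank-one coefficient matrix D with row sum one: every row equals the
# same weight vector d (s = 3 stages; the weights below are illustrative).
d = np.array([0.2, 0.3, 0.5])   # hypothetical weights summing to one
D = np.tile(d, (3, 1))          # rank one: all rows identical

one = np.ones(3)
tau0 = (D - np.eye(3)) @ one    # consistency residual tau_0

assert np.linalg.matrix_rank(D) == 1
assert np.allclose(tau0, 0.0)   # (D - I) 1 = 0, i.e. the method is consistent
```

Any rank-one matrix whose rows sum to one passes this check, which is why the row-sum condition alone guarantees $\boldsymbol{\tau}_{0}=0$.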
2.2 Preliminaries
In this subsection we make an observation that will be useful in the remainder of the paper.
Observation 1 Given the smoothness of $F$ and $G$, and the assumption that $\|E^{n}\|\ll 1$,
we observe that
$$F(U^{n}+E^{n})=F(U^{n})+F_{y}^{n}E^{n}+O(\Delta t)O\left(\|E^{n}\|\right),$$
(15)
and
$$G(U^{n}+E^{n})=G(U^{n})+G_{y}^{n}E^{n}+O(\Delta t)O\left(\|E^{n}\|\right),$$
(16)
where $F_{y}^{n}=F_{y}(u(t_{n}))$ and $G_{y}^{n}=G_{y}(u(t_{n}))$.
Proof
This is simply due to the fact that $F$ is smooth enough and $E^{n}$ is small enough that we can expand:
$$\displaystyle F(U^{n}+E^{n})$$
$$\displaystyle=F(U^{n})+\left(\begin{array}[]{l}F_{y}(u(t_{n}+c_{1}\Delta t))e_%
{n+c_{1}}\\
F_{y}(u(t_{n}+c_{2}\Delta t))e_{n+c_{2}}\\
\vdots\\
F_{y}(u(t_{n}+c_{s}\Delta t))e_{n+c_{s}}\\
\end{array}\right)+O\left(\|E^{n}\|^{2}\right),$$
$$\displaystyle=F(U^{n})+\left(\begin{array}[]{cccc}F_{y}^{n+c_{1}}&0&\cdots&0\\
0&F_{y}^{n+c_{2}}&\cdots&0\\
\vdots&\vdots&\ddots&\vdots\\
0&0&\cdots&F_{y}^{n+c_{s}}\\
\end{array}\right)E^{n}+O\left(\|E^{n}\|^{2}\right),$$
where the error vector is $E^{n}=\left(e_{n+c_{1}},e_{n+c_{2}},...,e_{n+c_{s}}\right)^{T}$,
and we use the notation $F_{y}^{n+c_{j}}=F_{y}(u(t_{n}+c_{j}\Delta t))$.
Each term can be expanded as
$$F_{y}^{n+c_{j}}=F_{y}(u(t_{n}+c_{j}\Delta t))=F_{y}(u(t_{n}))+c_{j}\Delta tF_{%
yy}(u(t_{n}))+O(\Delta t^{2}),$$
so that, using the assumption that $\|E^{n}\|\ll 1$, we have
$$\displaystyle F(U^{n}+E^{n})$$
$$\displaystyle=F(U^{n})+\left(F_{y}^{n}I+O(\Delta t)\right)E^{n}+O\left(\|E^{n}%
\|^{2}\right)$$
$$\displaystyle=F(U^{n})+F_{y}^{n}E^{n}+O(\Delta t)O\left(\|E^{n}\|\right).$$
The proof for $G$ is identical.
${}_{\blacksquare}$
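A scalar sanity check of this expansion (a sketch, with $F=\sin$ standing in for a generic smooth function and an illustrative base state) confirms that the linearization remainder is indeed $O(\|E^{n}\|^{2})$:

```python
import numpy as np

# Scalar stand-in for F: any smooth function works; here F = sin, F_y = cos.
F, Fy = np.sin, np.cos
u = 1.3                           # illustrative base state

for e in [1e-2, 1e-3, 1e-4]:      # progressively smaller perturbations
    remainder = F(u + e) - F(u) - Fy(u) * e
    # Taylor: remainder = F''(xi) e^2 / 2 with |F''| <= 1 here,
    # so |remainder| <= e^2 / 2
    assert abs(remainder) <= 0.5 * e**2 + 1e-15
```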
We now describe the growth of the error:
Lemma 1
Given a zero-stable method of the form (4) which satisfies the order conditions
$$\boldsymbol{\tau}^{n}_{j}=0\;\;\;\;\mbox{for}\;\;j=0,...,p$$
and where the functions $F$ and $G$ are smooth, the evolution of the error can be described by:
$$\displaystyle\left(I-\Delta t\left(\mathbf{R}_{F}F_{y}^{n}+\mathbf{R}_{G}G_{y}%
^{n}\right)+O(\Delta t^{2})\right)E^{n+1}$$
$$\displaystyle=$$
$$\displaystyle\left(\mathbf{D}+\Delta t\left(\mathbf{A}_{F}F_{y}^{n}+\mathbf{A}%
_{G}G_{y}^{n}\right)\right)E^{n}+\boldsymbol{\tau}^{n+1}.$$
${}_{\blacksquare}$
Proof
Subtracting (10) from (4), we obtain
$$\displaystyle V^{n+1}-U^{n+1}$$
$$\displaystyle=$$
$$\displaystyle\mathbf{D}V^{n}-\mathbf{D}U^{n}+\Delta t\mathbf{A}_{F}\left(F(V^{%
n})-F(U^{n})\right)$$
$$\displaystyle+\Delta t\mathbf{A}_{G}\left(G(V^{n})-G(U^{n})\right)+\Delta t%
\mathbf{R}_{F}\left(F(V^{n+1})-F(U^{n+1})\right)$$
$$\displaystyle+\Delta t\mathbf{R}_{G}\left(G(V^{n+1})-G(U^{n+1})\right)+%
\boldsymbol{\tau}^{n+1}$$
using Observation 1 we have
$$\displaystyle E^{n+1}$$
$$\displaystyle=$$
$$\displaystyle\mathbf{D}E^{n}+\Delta t\left(\mathbf{A}_{F}F_{y}^{n}E^{n}+%
\mathbf{R}_{F}F_{y}^{n+1}E^{n+1}+\mathbf{A}_{G}G_{y}^{n}E^{n}+\mathbf{R}_{G}G_%
{y}^{n+1}E^{n+1}\right)$$
$$\displaystyle+$$
$$\displaystyle\boldsymbol{\tau}^{n+1}+O(\Delta t)O\left(\|E^{n}\|^{2},\|E^{n+1}%
\|^{2}\right).$$
The final term is negligible, since $O(\Delta t)O\left(\|E^{n}\|^{2},\|E^{n+1}\|^{2}\right)\ll\boldsymbol{\tau}^{n+1}$,
so we can neglect it.
We then have an equation for the error which is essentially linear, because the terms $F_{y}^{n}$, $G_{y}^{n}$, $F_{y}^{n+1},$ and $G_{y}^{n+1}$ are constants which depend only on the solution of the ODE.
Now move the $E^{n+1}$ terms to the left side:
$$\displaystyle\left(I-\Delta t\mathbf{R}_{F}F_{y}^{n+1}-\Delta t\mathbf{R}_{G}G%
_{y}^{n+1}\right)E^{n+1}$$
$$\displaystyle=$$
$$\displaystyle\mathbf{D}E^{n}+\Delta t\mathbf{A}_{F}F_{y}^{n}E^{n}+\Delta t\mathbf{A}_{G}G_{y}^{n}E^{n}+\boldsymbol{\tau}^{n+1}.$$
Noting that $F_{y}^{n+1}=F_{y}^{n}+O(\Delta t)$ and $G_{y}^{n+1}=G_{y}^{n}+O(\Delta t)$ we have the desired result
$$\displaystyle\left(I-\Delta t\left(\mathbf{R}_{F}F_{y}^{n}+\mathbf{R}_{G}G_{y}%
^{n}\right)+O(\Delta t^{2})\right)E^{n+1}$$
$$\displaystyle=$$
$$\displaystyle\left(\mathbf{D}+\Delta t\left(\mathbf{A}_{F}F_{y}^{n}+\mathbf{A}%
_{G}G_{y}^{n}\right)\right)E^{n}+\boldsymbol{\tau}^{n+1}.$$
${}_{\blacksquare}$
3 Constructing IMEX-EIS+ methods
Schneider, Lang, and Hundsdorfer [23] showed how to construct
IMEX methods of the form
(4) which satisfy the order conditions to order $p$
but are error inhibiting and so produce a solution of order $p+1$.
In this section we show that under additional conditions we can express the exact form
of the next term in the error and define an associated post-processor that allows us to recover order
$p+2$ from a scheme that would otherwise be only $p$th-order accurate.
We note that although our main interest is in IMEX methods, and we will focus on these later in the paper,
the theory presented in this section is general enough to apply to all additive methods.
3.1 IMEX error inhibiting schemes that can be post-processed
In this section we consider an additive method of the form (4)
where we assume that $\mathbf{D}$ is a rank one matrix that satisfies the consistency
condition $\mathbf{D}{\mathbb{1}}={\mathbb{1}}$ so that this scheme is zero-stable.
Furthermore, we assume that the coefficient matrices $\mathbf{D}$, $\mathbf{A}_{F}$, $\mathbf{R}_{F}$, $\mathbf{A}_{G}$, $\mathbf{R}_{G}$
are such that the order conditions are satisfied to order $p$, so that the method will give us a numerical solution that has error that is guaranteed of order $p$.
In [23] it was shown that if the truncation error vector $\boldsymbol{\tau}^{n}_{p+1}$
lives in the null-space of the operator $\mathbf{D}$ then the order of the error is of order $p+1$.
In the following theorem we establish additional conditions on the coefficient matrices
$\mathbf{D}$, $\mathbf{A}_{F}$, $\mathbf{R}_{F}$, $\mathbf{A}_{G}$, $\mathbf{R}_{G}$, which allow us to
determine precisely what the leading term of this error will look like and
therefore remove it by post-processing.
Theorem 1
Consider a zero-stable additive general linear method of the form
$$V^{n+1}=\mathbf{D}V^{n}+\Delta t\left[\mathbf{A}_{F}F(V^{n})+\mathbf{R}_{F}F(V%
^{n+1})+\mathbf{A}_{G}G(V^{n})+\mathbf{R}_{G}G(V^{n+1})\right],$$
(18)
where $\mathbf{D}$ is a rank one matrix that satisfies the consistency condition $\mathbf{D}{\mathbb{1}}={\mathbb{1}}$,
the coefficient matrices $\mathbf{D}$, $\mathbf{A}_{F}$, $\mathbf{R}_{F}$, $\mathbf{A}_{G}$, $\mathbf{R}_{G}$, satisfy the order conditions
$$\boldsymbol{\tau}^{n}_{j}=0\;\;\;\;\mbox{for}\;\;j=1,...,p\;,\;\;\mbox{and}\;%
\;n\geq 0$$
for $\boldsymbol{\tau}^{n}_{j}$, defined by (10), and
the error inhibiting condition
$$\displaystyle\mathbf{D}\boldsymbol{\tau}^{n}_{p+1}=0$$
(19a)
is satisfied (so that the numerical solution produced by this method will have error $$E^{k}=O(\Delta t^{p+1})$$).
If the conditions
$$\displaystyle\mathbf{D}\boldsymbol{\tau}^{n}_{p+2}=0$$
(19b)
$$\displaystyle\mathbf{D}(\mathbf{A}_{F}+\mathbf{R}_{F})\boldsymbol{\tau}^{n}_{p%
+1}=0$$
(19c)
$$\displaystyle\mathbf{D}(\mathbf{A}_{G}+\mathbf{R}_{G})\boldsymbol{\tau}^{n}_{p%
+1}=0$$
(19d)
are satisfied, then the error vector will have the more precise form:
$$E^{n}=\Delta t^{p+1}\boldsymbol{\tau}^{n}_{p+1}+O(\Delta t^{p+2}).$$
(20)
${}_{\Box}$
Proof
Using Lemma 1 we obtain the equation for the evolution of the error
$$\displaystyle E^{n+1}$$
$$\displaystyle=$$
$$\displaystyle\left[I-\Delta t\left(\mathbf{R}_{F}F_{y}^{n}+\mathbf{R}_{G}G_{y}%
^{n}\right)+O(\Delta t^{2})\right]^{-1}$$
$$\displaystyle \left[\left(\mathbf{D}+\Delta t\left(\mathbf{A}_{F}F_{y}^{n}%
+\mathbf{A}_{G}G_{y}^{n}\right)\right)E^{n}+\boldsymbol{\tau}^{n+1}\right].$$
The fact that $F$ and $G$ are smooth assures us that $\|\Delta t\left(\mathbf{R}_{F}F_{y}^{n}+\mathbf{R}_{G}G_{y}^{n}\right)\|\ll 1$, so we can expand the first term
$$\displaystyle E^{n+1}$$
$$\displaystyle=$$
$$\displaystyle\left[I+\Delta t\left(\mathbf{R}_{F}F_{y}^{n}+\mathbf{R}_{G}G_{y}%
^{n}\right)+O(\Delta t^{2})\right]$$
$$\displaystyle \left[\left(\mathbf{D}+\Delta t\left(\mathbf{A}_{F}F_{y}^{n}%
+\mathbf{A}_{G}G_{y}^{n}\right)\right)E^{n}+\boldsymbol{\tau}^{n+1}\right]$$
$$\displaystyle=$$
$$\displaystyle\left(\mathbf{D}+\Delta tF^{n}_{y}(\mathbf{R}_{F}\mathbf{D}+%
\mathbf{A}_{F})+\Delta tG^{n}_{y}(\mathbf{R}_{G}\mathbf{D}+\mathbf{A}_{G})+O(%
\Delta t^{2})\right)E^{n}$$
$$\displaystyle +\left[\Delta t^{p+1}\boldsymbol{\tau}_{p+1}^{n+1}+\Delta t^{p+2}\left(\boldsymbol{\tau}_{p+2}^{n+1}+\left(\mathbf{R}_{F}F_{y}^{n}+\mathbf{R}_{G}G_{y}^{n}\right)\boldsymbol{\tau}_{p+1}^{n+1}\right)+O(\Delta t^{p+3})\right]$$
$$\displaystyle\equiv$$
$$\displaystyle{Q}^{n}E^{n}+\Delta tT_{e}^{n}.$$
As we showed in Lemma 3 in [6], we can assume that $\|E^{n}\|\ll 1$ for some reasonably large time interval $(0,T)$.
The discrete Duhamel principle (given as Lemma 5.1.1 in [12])
states that given an iterative process of this form
where $Q^{n}$ is a linear operator, we have
$$E^{n}\,=\,\prod_{\mu=0}^{n-1}Q^{\mu}E^{0}\,+\,\Delta t\,\sum_{\nu=0}^{n-1}%
\left(\prod_{\mu=\nu+1}^{n-1}Q^{\mu}\right)T_{e}^{\nu}\;.$$
(22)
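The identity (22) can be verified directly against the recursion $E^{\mu+1}=Q^{\mu}E^{\mu}+\Delta t\,T_{e}^{\mu}$; the sketch below (with random matrices standing in for the operators $Q^{\mu}$ and local errors $T_{e}^{\nu}$) compares the iterated recursion with the closed-form Duhamel sum:

```python
import numpy as np

rng = np.random.default_rng(0)
s, n, dt = 3, 6, 0.1
Q  = [rng.standard_normal((s, s)) for _ in range(n)]   # operators Q^mu
Te = [rng.standard_normal(s) for _ in range(n)]        # local errors T_e^nu
E0 = rng.standard_normal(s)

# Left side: iterate E^{mu+1} = Q^mu E^mu + dt * T_e^mu
E = E0.copy()
for mu in range(n):
    E = Q[mu] @ E + dt * Te[mu]

# Right side: closed-form discrete Duhamel sum (22)
def prod(lo, hi):
    """Product Q^{hi-1} ... Q^{lo}; the empty product is the identity."""
    P = np.eye(s)
    for mu in range(lo, hi):
        P = Q[mu] @ P
    return P

E_closed = prod(0, n) @ E0 + dt * sum(prod(nu + 1, n) @ Te[nu]
                                      for nu in range(n))
assert np.allclose(E, E_closed)
```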
To analyze the error (22), we separate it into four parts:
$$E^{n}\,=\,\underbrace{\prod_{\mu=0}^{n-1}Q^{\mu}E^{0}}_{I}\,+\underbrace{\Delta tT_{e}^{n-1}}_{II}+\underbrace{\Delta tQ^{n-1}T_{e}^{n-2}}_{III}+\underbrace{\Delta t\,\sum_{\nu=0}^{n-3}\left(\prod_{\mu=\nu+1}^{n-1}Q^{\mu}\right)T_{e}^{\nu}}_{IV}$$
and discuss each part separately:
I.
The method is initialized so that the numerical solution vector $V^{0}$ is accurate
enough to ensure that the initial error $E^{0}$ is negligible and we can ignore the first term.
II.
The final term in the summation is $\Delta tT_{e}^{n-1}$. Recall that
$$T_{e}^{n-1}=\Delta t^{p}\boldsymbol{\tau}_{p+1}^{n}+\Delta t^{p+1}\left(%
\boldsymbol{\tau}_{p+2}^{n}+\left(\mathbf{R}_{F}F_{y}^{n-1}+\mathbf{R}_{G}G_{y%
}^{n-1}\right)\boldsymbol{\tau}_{p+1}^{n}\right)+O(\Delta t^{p+2})$$
so that this term contributes to the final time error the term
$$\Delta tT_{e}^{n-1}=\Delta t^{p+1}\boldsymbol{\tau}_{p+1}^{n}+O(\Delta t^{p+2}).$$
III.
The term $\Delta tQ^{n-1}T_{e}^{n-2}$ is a product of the operator
$$Q^{n-1}=\left(\mathbf{D}+\Delta tF^{n-1}_{y}(\mathbf{R}_{F}\mathbf{D}+\mathbf{%
A}_{F})+\Delta tG^{n-1}_{y}(\mathbf{R}_{G}\mathbf{D}+\mathbf{A}_{G})+O(\Delta t%
^{2})\right)$$
and the approximation error
$$T_{e}^{n-2}=\Delta t^{p}\boldsymbol{\tau}_{p+1}^{n-1}+\Delta t^{p+1}\left(%
\boldsymbol{\tau}_{p+2}^{n-1}+\left(\mathbf{R}_{F}F_{y}^{n-2}+\mathbf{R}_{G}G_%
{y}^{n-2}\right)\boldsymbol{\tau}_{p+1}^{n-1}\right)+O(\Delta t^{p+2})$$
so we obtain
$$Q^{n-1}T_{e}^{n-2}=\Delta t^{p}\mathbf{D}\boldsymbol{\tau}_{p+1}^{n-1}+O(%
\Delta t^{p+1})=O(\Delta t^{p+1}),$$
due to the condition (19a) that states that $\mathbf{D}\boldsymbol{\tau}^{n}_{p+1}=0$.
The result in this theorem requires a closer look at the $O(\Delta t^{p+1})$ terms in this product
$$\displaystyle\mathbf{D}\left[\boldsymbol{\tau}_{p+2}^{n-1}+\left(\mathbf{R}_{F}F_{y}^{n-2}+\mathbf{R}_{G}G_{y}^{n-2}\right)\boldsymbol{\tau}_{p+1}^{n-1}\right]$$
$$\displaystyle +\left[F^{n-1}_{y}(\mathbf{R}_{F}\mathbf{D}+\mathbf{A}_{F})+G^{n-1}_{y}(\mathbf{R}_{G}\mathbf{D}+\mathbf{A}_{G})\right]\boldsymbol{\tau}_{p+1}^{n-1}+O(\Delta t)$$
$$\displaystyle=$$
$$\displaystyle\mathbf{D}\left[\boldsymbol{\tau}_{p+2}^{n-1}+\left(\mathbf{R}_{F}F_{y}^{n-2}+\mathbf{R}_{G}G_{y}^{n-2}\right)\boldsymbol{\tau}_{p+1}^{n-1}\right]$$
$$\displaystyle +\left[F^{n-1}_{y}\mathbf{A}_{F}+G^{n-1}_{y}\mathbf{A}_{G}\right]\boldsymbol{\tau}_{p+1}^{n-1}+O(\Delta t)$$
$$\displaystyle=$$
$$\displaystyle\mathbf{D}\left[\boldsymbol{\tau}_{p+2}^{n-1}+\left(\mathbf{R}_{F}F_{y}^{n-1}+\mathbf{R}_{G}G_{y}^{n-1}\right)\boldsymbol{\tau}_{p+1}^{n-1}\right]$$
$$\displaystyle +\left[F^{n-1}_{y}\mathbf{A}_{F}+G^{n-1}_{y}\mathbf{A}_{G}\right]\boldsymbol{\tau}_{p+1}^{n-1}+O(\Delta t)$$
$$\displaystyle=$$
$$\displaystyle\mathbf{D}\boldsymbol{\tau}_{p+2}^{n-1}+\left[F^{n-1}_{y}(\mathbf{D}\mathbf{R}_{F}+\mathbf{A}_{F})+G^{n-1}_{y}(\mathbf{D}\mathbf{R}_{G}+\mathbf{A}_{G})\right]\boldsymbol{\tau}_{p+1}^{n-1}+O(\Delta t)$$
where we applied (19a) and used the observation that
$F^{n-2}_{y}=F^{n-1}_{y}+O(\Delta t)$ and $G^{n-2}_{y}=G^{n-1}_{y}+O(\Delta t)$.
We now apply conditions (19b) - (19d) of the theorem,
which make the leading terms above vanish
and allow us to conclude that:
$$Q^{n-1}T_{e}^{n-2}=O(\Delta t^{p+2}).$$
IV.
Finally we look at the rest of the sum and use the boundedness of the operator $Q^{n}$ to observe
$$\displaystyle\left\|\Delta t\,\sum_{\nu=0}^{n-3}\left(\prod_{\mu=\nu+1}^{n-1}Q%
^{\mu}\right)T_{e}^{\nu}\right\|$$
$$\displaystyle=$$
$$\displaystyle\left\|\Delta t\,\sum_{\nu=0}^{n-3}\left(\prod_{\mu=\nu+3}^{n-1}Q^{\mu}\right)\left(Q^{\nu+2}Q^{\nu+1}T_{e}^{\nu}\right)\right\|$$
$$\displaystyle\leq$$
$$\displaystyle\Delta t\,\sum_{\nu=0}^{n-3}\left\|\prod_{\mu=\nu+3}^{n-1}Q^{\mu}%
\right\|\left\|Q^{\nu+2}Q^{\nu+1}T_{e}^{\nu}\right\|$$
$$\displaystyle\leq$$
$$\displaystyle\Delta t\,\sum_{\nu=0}^{n-3}\left(1+c\,\Delta t\right)^{n-\nu-3}%
\left\|Q^{\nu+2}Q^{\nu+1}T_{e}^{\nu}\right\|$$
$$\displaystyle\leq$$
$$\displaystyle\frac{\exp\left(ct_{n}\right)-1}{c}\max_{\nu=0,...,n-3}\left\|Q^{%
\nu+2}Q^{\nu+1}T_{e}^{\nu}\right\|.$$
Clearly, the first factor here is a constant that depends only on the final time, so it is the product $Q^{\nu+2}Q^{\nu+1}T_{e}^{\nu}$ we need to bound.
Using the definition of the operators $Q^{\mu}$
$$\displaystyle Q^{\nu+2}Q^{\nu+1}$$
$$\displaystyle=$$
$$\displaystyle\mathbf{D}^{2}+\Delta t\left[F_{y}^{\nu+1}\mathbf{D}\left(\mathbf%
{R}_{F}\mathbf{D}+\mathbf{A}_{F}\right)+G_{y}^{\nu+1}\mathbf{D}\left(\mathbf{R%
}_{G}\mathbf{D}+\mathbf{A}_{G}\right)\right.$$
$$\displaystyle+\left.F_{y}^{\nu+2}\left(\mathbf{R}_{F}\mathbf{D}+\mathbf{A}_{F}%
\right)\mathbf{D}+G_{y}^{\nu+2}\left(\mathbf{R}_{G}\mathbf{D}+\mathbf{A}_{G}%
\right)\mathbf{D}\right]+O(\Delta t^{2})$$
and the truncation error
$$T_{e}^{\nu}=\left[\Delta t^{p}\boldsymbol{\tau}_{p+1}^{\nu+1}+\Delta t^{p+1}%
\left(\boldsymbol{\tau}_{p+2}^{\nu+1}+\left(\mathbf{R}_{F}F_{y}^{\nu}+\mathbf{%
R}_{G}G_{y}^{\nu}\right)\boldsymbol{\tau}_{p+1}^{\nu+1}\right)+O(\Delta t^{p+2%
})\right]$$
we have
$$\displaystyle Q^{\nu+2}Q^{\nu+1}T_{e}^{\nu}$$
$$\displaystyle=$$
$$\displaystyle\Delta t^{p}\left[\mathbf{D}+\Delta tF_{y}^{\nu+2}\left(\mathbf{R%
}_{F}\mathbf{D}+\mathbf{A}_{F}\right)+\Delta tG_{y}^{\nu+2}\left(\mathbf{R}_{G%
}\mathbf{D}+\mathbf{A}_{G}\right)\right]\mathbf{D}\boldsymbol{\tau}^{\nu+1}_{p%
+1}$$
$$\displaystyle+$$
$$\displaystyle\Delta t^{p+1}\left[F_{y}^{\nu+1}\mathbf{D}\left(\mathbf{R}_{F}%
\mathbf{D}+\mathbf{A}_{F}\right)+G_{y}^{\nu+1}\mathbf{D}\left(\mathbf{R}_{G}%
\mathbf{D}+\mathbf{A}_{G}\right)\right]\boldsymbol{\tau}^{\nu+1}_{p+1}$$
$$\displaystyle+$$
$$\displaystyle\Delta t^{p+1}\mathbf{D}^{2}\left(\boldsymbol{\tau}_{p+2}^{\nu+1}%
+\left(\mathbf{R}_{F}F_{y}^{\nu}+\mathbf{R}_{G}G_{y}^{\nu}\right)\boldsymbol{%
\tau}_{p+1}^{\nu+1}\right)+O(\Delta t^{p+2}).$$
We use the facts that: (1) the form of $\mathbf{D}$ means that $\mathbf{D}^{2}=\mathbf{D}$; (2) $F_{y}^{\nu+2}=F_{y}^{\nu+1}+O(\Delta t)$, $G_{y}^{\nu+2}=G_{y}^{\nu+1}+O(\Delta t)$; and (3) $\mathbf{D}\boldsymbol{\tau}_{p+1}^{n}=0$ for any integer $n$, to obtain
$$\displaystyle Q^{\nu+2}Q^{\nu+1}T_{e}^{\nu}$$
$$\displaystyle=$$
$$\displaystyle\Delta t^{p+1}\left[F_{y}^{\nu+1}\mathbf{D}\left(\mathbf{R}_{F}+%
\mathbf{A}_{F}\right)+G_{y}^{\nu+1}\mathbf{D}\left(\mathbf{R}_{G}+\mathbf{A}_{%
G}\right)\right]\boldsymbol{\tau}^{\nu+1}_{p+1}$$
$$\displaystyle+$$
$$\displaystyle\Delta t^{p+1}\mathbf{D}\boldsymbol{\tau}_{p+2}^{\nu+1}+O(\Delta t%
^{p+2}).$$
The last term disappears because of (19b), which states that $\mathbf{D}\boldsymbol{\tau}_{p+2}^{n}=0$ for $n=0,1,...$.
The first term is eliminated by (19c) and (19d), namely
$$\mathbf{D}\left(\mathbf{R}_{F}+\mathbf{A}_{F}\right)\boldsymbol{\tau}_{p+1}^{n}=0\;,\;\;n=0,1,...$$
and
$$\mathbf{D}\left(\mathbf{R}_{G}+\mathbf{A}_{G}\right)\boldsymbol{\tau}_{p+1}^{n}=0\;,\;\;n=0,1,....$$
So that
$$Q^{\nu+2}Q^{\nu+1}T_{e}^{\nu}=O(\Delta t^{p+2}).$$
All these parts together give us
$$\displaystyle E^{n}=\underbrace{\prod_{\mu=0}^{n-1}Q^{\mu}E^{0}}_{I}\,+\underbrace{\Delta tT_{e}^{n-1}}_{II}+\underbrace{\Delta tQ^{n-1}T_{e}^{n-2}}_{III}+\underbrace{\Delta t\,\sum_{\nu=0}^{n-3}\left(\prod_{\mu=\nu+1}^{n-1}Q^{\mu}\right)T_{e}^{\nu}}_{IV}$$
$$\displaystyle=0+\underbrace{\Delta t^{p+1}\boldsymbol{\tau}^{n}_{p+1}}_{II}+%
\underbrace{O(\Delta t^{p+2})}_{III}+\underbrace{O(\Delta t^{p+2})}_{IV}$$
$$\displaystyle=\Delta t^{p+1}\boldsymbol{\tau}^{n}_{p+1}+O(\Delta t^{p+2}).$$
(24)
${}_{\blacksquare}$
3.1.1 Implementation of the EIS conditions
Unlike the previous cases [6, 7], the terms $\boldsymbol{\tau}^{n}_{j}$ here are
the sum of two quantities, each of which must be zero.
As we saw in Section 2 above, there is some flexibility in these terms,
for example we may have
$$\boldsymbol{\tau}^{n}_{j}=\left(\boldsymbol{\tau}_{j}\left.\frac{d^{j}u}{dt^{j%
}}\right|_{t=t_{n}}+\hat{\boldsymbol{\tau}}_{j}\left.\frac{d^{j-1}G(u)}{dt^{j-%
1}}\right|_{t=t_{n}}\right),$$
or
$$\boldsymbol{\tau}^{n}_{j}=\left(\boldsymbol{\tau}^{F}_{j}\left.\frac{d^{j-1}F(u)}{dt^{j-1}}\right|_{t=t_{n}}+\boldsymbol{\tau}^{G}_{j}\left.\frac{d^{j-1}G(u)}{dt^{j-1}}\right|_{t=t_{n}}\right).$$
The order conditions above
$$\boldsymbol{\tau}^{n}_{j}=0\;\;\;\;\forall j\leq p,$$
become
$$\boldsymbol{\tau}_{j}=0\;\;\mbox{and}\;\;\hat{\boldsymbol{\tau}}_{j}=0\;\;\;\;%
\forall j\leq p,$$
or
$$\boldsymbol{\tau}^{F}_{j}=0\;\;\mbox{and}\;\;\boldsymbol{\tau}^{G}_{j}=0\;\;\;%
\;\forall j\leq p,$$
respectively. It does not matter which formulation one chooses, as these two sets of conditions are equivalent.
However, once we also consider the conditions (19) which use the truncation
error vectors $\boldsymbol{\tau}^{n}_{p+1}$ and $\boldsymbol{\tau}^{n}_{p+2}$, we see a variety of different ways to write these
conditions. The straightforward way to write these conditions would be
$$\displaystyle\boldsymbol{\tau}_{j}=0\;\;\;\mbox{and}\;\;\hat{\boldsymbol{\tau}%
}_{j}=0,$$
$$\displaystyle\mbox{for}\;\;j=0,...,p$$
$$\displaystyle\mathbf{D}\boldsymbol{\tau}_{p+1}=0,\;\;\mathbf{D}\hat{%
\boldsymbol{\tau}}_{p+1}=0,$$
$$\displaystyle\;\;\mathbf{D}\boldsymbol{\tau}_{p+2}=0,\;\;\mathbf{D}\hat{%
\boldsymbol{\tau}}_{p+2}=0,$$
(25)
$$\displaystyle\mathbf{D}(\mathbf{A}_{F}+\mathbf{R}_{F})\boldsymbol{\tau}_{p+1}=0,$$
$$\displaystyle\;\;\mathbf{D}(\mathbf{A}_{F}+\mathbf{R}_{F})\hat{\boldsymbol{%
\tau}}_{p+1}=0$$
$$\displaystyle\mathbf{D}(\mathbf{A}_{G}+\mathbf{R}_{G})\boldsymbol{\tau}_{p+1}=0,$$
$$\displaystyle\;\;\mathbf{D}(\mathbf{A}_{G}+\mathbf{R}_{G})\hat{\boldsymbol{%
\tau}}_{p+1}=0$$
or equivalently,
$$\displaystyle\boldsymbol{\tau}^{F}_{j}=0\;\;\;\mbox{and}\;\;\boldsymbol{\tau}^%
{G}_{j}=0$$
$$\displaystyle\mbox{for}\;\;j=0,...,p$$
$$\displaystyle\mathbf{D}\boldsymbol{\tau}^{F}_{p+1}=0,\;\;\mathbf{D}\boldsymbol%
{\tau}^{G}_{p+1}=0,$$
$$\displaystyle\;\;\mathbf{D}\boldsymbol{\tau}^{F}_{p+2}=0,\;\;\mathbf{D}%
\boldsymbol{\tau}^{G}_{p+2}=0$$
(26)
$$\displaystyle\mathbf{D}(\mathbf{A}_{F}+\mathbf{R}_{F})\boldsymbol{\tau}^{F}_{p%
+1}=0,$$
$$\displaystyle\;\;\mathbf{D}(\mathbf{A}_{F}+\mathbf{R}_{F})\boldsymbol{\tau}^{G%
}_{p+1}=0$$
$$\displaystyle\mathbf{D}(\mathbf{A}_{G}+\mathbf{R}_{G})\boldsymbol{\tau}^{F}_{p%
+1}=0,$$
$$\displaystyle\;\;\mathbf{D}(\mathbf{A}_{G}+\mathbf{R}_{G})\boldsymbol{\tau}^{G%
}_{p+1}=0.$$
Again, these are mathematically equivalent.
However, we can create conditions that are more constrained than (25) or
(26) by requiring higher order conditions to be satisfied for the second component.
For example, conditions (25) can be satisfied by requiring that $\hat{\boldsymbol{\tau}}_{p+1}=0$,
which reduces the number of conditions:
$$\displaystyle\boldsymbol{\tau}_{j}=0\;\;\mbox{for}\;\;j=0,...,p,$$
$$\displaystyle\;\mbox{and}$$
$$\displaystyle\hat{\boldsymbol{\tau}}_{j}=0\;\;\mbox{for}\;\;j=0,...,p+1$$
$$\displaystyle\mathbf{D}\boldsymbol{\tau}_{p+1}=0,\;\;\mathbf{D}\boldsymbol{%
\tau}_{p+2}=0,$$
and
$$\displaystyle\;\;\mathbf{D}\hat{\boldsymbol{\tau}}_{p+2}=0,$$
(27)
$$\displaystyle\mathbf{D}(\mathbf{A}_{F}+\mathbf{R}_{F})\boldsymbol{\tau}_{p+1}=0$$
and
$$\displaystyle\;\;\mathbf{D}(\mathbf{A}_{G}+\mathbf{R}_{G})\boldsymbol{\tau}_{p%
+1}=0.$$
Or even requiring that $\hat{\boldsymbol{\tau}}_{p+1}=\hat{\boldsymbol{\tau}}_{p+2}=0,$ which further reduces the number of conditions:
$$\displaystyle\boldsymbol{\tau}_{j}=0\;\;\mbox{for}\;\;j=0,...,p,$$
and
$$\displaystyle\;\;\hat{\boldsymbol{\tau}}_{j}=0\;\;\mbox{for}\;\;j=0,...,p+2$$
$$\displaystyle\mathbf{D}\boldsymbol{\tau}_{p+1}=0$$
and
$$\displaystyle\;\;\mathbf{D}\boldsymbol{\tau}_{p+2}=0,$$
(28)
$$\displaystyle\mathbf{D}(\mathbf{A}_{F}+\mathbf{R}_{F})\boldsymbol{\tau}_{p+1}=0$$
and
$$\displaystyle\;\;\mathbf{D}(\mathbf{A}_{G}+\mathbf{R}_{G})\boldsymbol{\tau}_{p%
+1}=0.$$
We can make other reduced sets of conditions by placing additional requirements on $\hat{\boldsymbol{\tau}}_{p+1}$ instead of
$\boldsymbol{\tau}_{p+1}$, etc.
Alternatively, we can add similar constraints to the conditions
(26) by requiring $\boldsymbol{\tau}^{G}_{p+1}=0$,
or even $\boldsymbol{\tau}^{G}_{p+1}=\boldsymbol{\tau}^{G}_{p+2}=0$, thereby reducing the number of conditions.
However, we stress that although these forms seem to give simpler conditions, in fact they
constrain us further than (25) (or (26)) and actually
reduce the size of the possible solution space. This means that
methods that satisfy (27) or (28) will always satisfy
(25) (or equivalently (26)), but methods satisfying
(25) (and therefore (26)) may not satisfy
(27) or (28). Simply put, the set of conditions
(25), or equivalently (26), allows for the broadest selection of methods.
3.2 Designing a post-processor to recover order $p+2$
The form of the error vector (20)
$$E^{n}=\Delta t^{p+1}\boldsymbol{\tau}^{n}_{p+1}+O(\Delta t^{p+2}),$$
at any time-step $t_{n}$, allows us to remove the
leading order term $\Delta t^{p+1}\boldsymbol{\tau}^{n}_{p+1}$
at the end of the computation. This produces a final time solution of order $p+2$.
We observe that the form of the error for the IMEX method (4)
is exactly the same as that of the error of the EIS methods
presented in [6] and [7],
the only difference being the definition of the truncation error vector, and the number of
truncation error vectors. Thus, we refer the reader to [6] for a
theoretical discussion of the post-processor. In this subsection, we review
one possible construction of a post-processor.
1.
Select the number of computation steps $m$ that will be used. For additive methods we
typically have two truncation error vectors and so require that
$$ms\geq p+4$$
where $s$ is the number of steps and $p$ the order of the time-stepping method.
However, in some cases the two truncation error vectors are linearly dependent and so
we only use one of them. In this case we
require that
$$ms\geq p+3.$$
2.
Define the time vector of all the temporal grid points in the last $m$ computation steps:
$$\tilde{\mathbf{t}}=\left(t_{n-m+1}+c_{1}\Delta t,...,t_{n-m+1}+c_{s}\Delta t,.%
..,t_{n}+c_{1}\Delta t,...,t_{n}+c_{s}\Delta t\right)^{T}$$
3.
Correspondingly, concatenate the final $m$ solution vectors
$$\tilde{V}^{n}\,=\,\left(\begin{array}[]{c}V^{n-m+1}\\
\vdots\\
V^{n}\\
\end{array}\right)\;,\;\;\;\mbox{and}\;\;\;\;\tilde{U}^{n}\,=\,\left(\begin{%
array}[]{c}U^{n-m+1}\\
\vdots\\
U^{n}\\
\end{array}\right).$$
4.
Stack $m$ copies of the final truncation error vectors $\boldsymbol{\tau}^{F}_{p+1}$ and $\boldsymbol{\tau}^{G}_{p+1}$
$$\tilde{\boldsymbol{\tau}}^{F}=\left(\underbrace{\left(\boldsymbol{\tau}^{F}_{p%
+1}\ \right)^{T},...,\left(\boldsymbol{\tau}^{F}_{p+1}\ \right)^{T}}_{m}\right%
)^{T}\;\mbox{and}\;\;\tilde{\boldsymbol{\tau}}^{G}=\left(\underbrace{\left(%
\boldsymbol{\tau}^{G}_{p+1}\ \right)^{T},...,\left(\boldsymbol{\tau}^{G}_{p+1}%
\ \right)^{T}}_{m}\right)^{T}$$
5.
Define the vertically flipped Vandermonde interpolation matrix
$\mathbf{T}$ on the vector $\tilde{\mathbf{t}}$, and replace the first two columns
(the ones corresponding to the highest polynomial terms) by $\tilde{\boldsymbol{\tau}}^{F}$ and $\tilde{\boldsymbol{\tau}}^{G}$:
$$\mathbf{T}=\left(\tilde{\boldsymbol{\tau}}^{F},\tilde{\boldsymbol{\tau}}^{G},\tilde{\mathbf{t}}^{ms-3},...,\tilde{\mathbf{t}}^{2},\tilde{\mathbf{t}},{\mathbb{1}}\right)$$
where terms of the form $\tilde{\mathbf{t}}^{q}$ are understood as component-wise exponentiation.
If the two truncation error vectors are linearly dependent, we use only one of them, so the
matrix $\mathbf{T}$ would be defined as
$$\mathbf{T}=\left(\tilde{\boldsymbol{\tau}}^{F},\tilde{\mathbf{t}}^{ms-2},\tilde{\mathbf{t}}^{ms-3},...,\tilde{\mathbf{t}}^{2},\tilde{\mathbf{t}},{\mathbb{1}}\right).$$
Note that in either case $\mathbf{T}$ is a square matrix of dimension $ms\times ms$.
6.
Define the post-processing filter
$$\Phi=\mathbf{T}\,\text{diag}\left(0,0,\underbrace{1,...,1}_{ms-2}\right)\,\mathbf{T}^{-1}.$$
If two truncation error vectors are linearly dependent, we use instead
$$\Phi=\mathbf{T}\,\text{diag}\left(0,\underbrace{1,...,1}_{ms-1}\right)\,\mathbf{T}^{-1}.$$
Finally, left-multiply the solution vector $\tilde{V}^{n}$ by the post-processing filter $\Phi$ to obtain the post-processed solution
$$\hat{V}^{n}=\Phi\tilde{V}^{n},$$
which will be of order $p+2$.
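The construction above can be sketched as follows (the abscissas, step times, and truncation error vectors below are placeholders for illustration, not coefficients of any method in this paper):

```python
import numpy as np

# Illustrative sizes: m = 2 computation steps of an s = 3 stage method,
# so the stacked vectors have length ms = 6.
m, s = 2, 3
ms = m * s
c = np.array([1/3, 2/3, 1.0])      # hypothetical abscissas
dt = 0.3
tn = np.array([0.0, 0.3])          # last m step times

# Step 2: stacked time vector over the last m steps
t_tilde = np.concatenate([t + c * dt for t in tn])

# Step 4: m stacked copies of the (hypothetical) truncation error vectors
tF = np.tile(np.array([0.0, 1.0, -1.0]), m)
tG = np.tile(np.array([0.5, -0.5, 0.0]), m)

# Step 5: flipped Vandermonde with the two highest-power columns replaced
powers = np.arange(ms - 3, -1, -1)            # ms-3, ..., 1, 0
V = t_tilde[:, None] ** powers[None, :]
T = np.column_stack([tF, tG, V])              # square, ms x ms

# Step 6: the filter annihilates the two truncation-error directions
Phi = T @ np.diag([0.0, 0.0] + [1.0] * (ms - 2)) @ np.linalg.inv(T)

assert np.allclose(Phi @ tF, 0.0)                 # tau^F direction is removed
assert np.allclose(Phi @ t_tilde**2, t_tilde**2)  # low-degree data preserved
```

The two assertions illustrate why the filter works: components along the stacked truncation error vectors are annihilated, while data that is polynomial of degree at most $ms-3$ in time passes through unchanged.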
As in [6] we remark that this process may break down if the matrix $\mathbf{T}$ is not invertible, and that numerical instabilities may result if
$\|\Phi\|$ is large. This should be verified while building these matrices.
In the examples we produced, we pre-computed the post-processing matrix and present the
post-processing procedure along with the numerical methods. The post-processor can be downloaded
along with the methods from our GitHub repository [10].
4 Error inhibiting IMEX schemes with post-processing
Using the truncation error conditions and the error inhibiting
conditions in Section 3.1, we construct new additive methods that are error inhibiting
and admit postprocessing to raise the order (after post-processing) to $P=p+2$ where $p$ is the order predicted by truncation
error analysis.
We are interested in the stability regions of these new methods: to study these we look
at the explicit and implicit methods separately. In other words, we consider separately the linear stability
regions when $G$ is zero (the explicit case) and when $F$ is zero (the implicit case).
We denote $s$-step methods of the form (4) that satisfy the order conditions
$$\boldsymbol{\tau}^{n}_{j}=0\;\;\;\;\mbox{for}\;\;j=0,...,p,$$
and the EIS conditions (19) by the notation IMEX-EIS+(s,P) where $P=p+2$.
We focus on IMEX methods, so we require $\mathbf{R}_{F}$ to be strictly lower triangular. We also restrict ourselves to
the diagonally implicit cases so we require $\mathbf{R}_{G}$ to be lower triangular.
If $\mathbf{R}_{F}$ and $\mathbf{R}_{G}$ have only diagonal elements (i.e. $\mathbf{R}_{F}$ is the zero matrix and
$\mathbf{R}_{G}$ is a diagonal matrix), then the method can be implemented efficiently in parallel, and to highlight this
we denote the method pIMEX-EIS+(s,P) (where $P=p+2$).
All of our methods have an A-stable implicit part (i.e. when $F=0$).
For the explicit part (i.e. when $G=0$), we measure the radius $R_{stab}$
of the semicircle in the left half plane in which this method
is linearly stable. Table 1 lists the methods we found and shows the linear stability radius for which the explicit
part is stable. We note that some of the methods converged to the case where $\boldsymbol{\tau}^{n}_{j}=0$ for $j=0,...,p+1$, in which case the
conditions (19) reduce to $D\boldsymbol{\tau}^{n}_{p+2}=0$. This is a method which is in the previous class considered
by [23] and does not require post-processing. We denote such methods by IMEX-EIS(s,p+1),
and if they are also parallel efficient we denote them by pIMEX-EIS(s,p+1).
As shown in Table 1,
we found IMEX-EIS+ methods of up to five stages and order $P=p+2=6$.
Note that the methods that do not require the parallel (i.e. diagonal $\mathbf{R}_{F}$ and $\mathbf{R}_{G}$) structure
allow a significantly larger linear stability radius for the explicit part.
We are interested in comparing the linear stability regions of the IMEX-EIS+ methods we found here to
the IMEX methods in [23]. The comparison is limited by several
factors: first, the methods in [23] are not only diagonally implicit, but
singly diagonally implicit (i.e. the lower triangular matrix $\mathbf{R}_{G}$ has a diagonal that is all the same number),
which makes them different from ours; second, we were not able to find an IMEX-EIS+(2,3) method,
so we compare our IMEX-EIS(2,3) to the corresponding (but singly diagonally implicit) method in
[23]; finally, the (4,5) method in [23]
was obtained by loosening the conditions on the matrix $\mathbf{D}$ and requiring only that $\mathbf{D}^{k}\boldsymbol{\tau}^{n}_{5}=0$
for some value of $k$, so this does not exactly satisfy the condition (19a), but still gives fifth order results.
Subject to all these caveats in the comparison of the methods, we consider the stability regions in Figure 1
and observe that our methods have larger imaginary axis stability for the same value of $s,P$.
Due to space constraints we do not list all the coefficients of all the methods.
However, we list the coefficients of three methods of interest
(IMEX-EIS+(3,4), pIMEX-EIS+(4,5) and IMEX-EIS+(5,6)) in the appendix,
and all the coefficients can be downloaded from [10].
5 Numerical Results
In this section we test the IMEX methods developed above for convergence on a
well-studied nonlinear system of ODEs, and on a nonlinear ODE system coming from
the semi-discretization of a PDE. The numerical results confirm that we observe the
predicted order (or better) from our methods, both before and after post-processing.
Example 1: Van der Pol oscillator problem
The nonlinear system of ODEs is given by
$$\left(\begin{array}[]{l}y_{1}\\
y_{2}\end{array}\right)^{\prime}=\left(\begin{array}[]{c}y_{2}\\
a(1-y_{1}^{2})y_{2}-y_{1}\\
\end{array}\right)$$
with $a=2$ and initial condition $\mathbf{y}(0)=(2,0)^{T}$.
We split the right-hand-side into $\mathbf{y}^{\prime}=F(\mathbf{y})+G(\mathbf{y})$ where:
$$F=\left(\begin{array}[]{l}0\\
a(1-y_{1}^{2})y_{2}\\
\end{array}\right),\;\;\;\mbox{and}\;\;\;G=\left(\begin{array}[]{cc}0&1\\
-1&0\\
\end{array}\right)\left(\begin{array}[]{l}y_{1}\\
y_{2}\\
\end{array}\right).$$
In the following, $F$ is treated explicitly and $G$ is implicit.
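As a minimal illustration of this splitting (a sketch using first-order IMEX Euler in place of the higher-order methods (4)), note that $G(\mathbf{y})=L\mathbf{y}$ is linear, so the implicit stage reduces to one small linear solve per step:

```python
import numpy as np

a = 2.0
L = np.array([[0.0, 1.0], [-1.0, 0.0]])   # G(y) = L y, treated implicitly

def F(y):                                  # nonlinear part, treated explicitly
    return np.array([0.0, a * (1.0 - y[0]**2) * y[1]])

def imex_euler_step(y, dt):
    # First-order IMEX Euler: (I - dt L) y^{n+1} = y^n + dt F(y^n)
    return np.linalg.solve(np.eye(2) - dt * L, y + dt * F(y))

y = np.array([2.0, 0.0])                   # initial condition y(0)
N = 3000
dt = 3.0 / N                               # evolve to T_f = 3.0
for _ in range(N):
    y = imex_euler_step(y, dt)

assert np.all(np.isfinite(y))              # the oscillator stays bounded
```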
We use a selection of parallel-efficient and non-parallel-efficient IMEX-EIS+ methods
with a range of stepsizes $\Delta t=\frac{T_{f}}{N}$ for different values of $N$
to evolve this problem to the final time $T_{f}=3.0$. We then postprocess the solution at the final
time as described in Section 3.2. We compute the reference solution at the final time
using Matlab's ode45 routine and subtract it from our numerical results at this final time;
the reported errors are then the sum of squares of the two component errors, divided by $\sqrt{2}$.
In Table 2 we show the calculated convergence rates.
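Rates in such tables are commonly computed from the errors at successive step sizes; the paper does not spell out its formula, so the following is only an illustrative sketch:

```python
import numpy as np

def observed_order(errors, dts):
    """Observed convergence rates from successive (error, step size) pairs:
    p_k = log(e_k / e_{k+1}) / log(dt_k / dt_{k+1})."""
    e, h = np.asarray(errors, float), np.asarray(dts, float)
    return np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])
```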
For the methods that are third through fifth order we use a value of $N$ between 400 and 1200;
for the sixth order method, we must use smaller values of $N$ as larger values give machine-accuracy
solutions. For the IMEX-EIS+(3,3), IMEX-EIS+(3,4), and IMEX-EIS+(4,5) and
pIMEX-EIS+(3,3), pIMEX-EIS+(3,4), and pIMEX-EIS+(4,5)
methods we see the expected order of accuracy before ($p+1$) and after ($p+2$) post-processing.
The IMEX-EIS+(5,6) method, on the other hand, shows sixth order both before and after post-processing
for both of the numerical examples we ran, so the order before post-processing is higher than
expected, while the overall order is as expected.
In Figure 2 we show the errors for different values of $\Delta t$, with
$\log_{10}$ of the errors graphed against $\log_{10}(\Delta t)$.
We show the non-parallelizable methods on the right and
the parallelizable methods on the left.
The solid lines represent the errors before post-processing and the dashed ones are the
errors after post-processing.
This figure shows that these solutions attain the expected order of convergence
before and after post-processing. We omit the results from the IMEX-EIS+(5,6) method
from this graph due to the different values of $N$ used for this example.
Example 2: Viscous Burgers’ equation
We solve the PDE
$$u_{t}+\left(\frac{1}{2}u^{2}\right)_{x}=\frac{1}{10}u_{xx}$$
on the domain $x\in[0,2\pi]$ with initial condition $u(x,0)=\sin(5x)+\cos(2x)$
and periodic boundary conditions. We semidiscretize this problem using a
Fourier spectral collocation method with $41$ points in space.
We treat the Burgers’ term $\left(\frac{1}{2}u^{2}\right)_{x}$ explicitly and the
viscous term $\frac{1}{10}u_{xx}$ implicitly.
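A minimal sketch of this semidiscretization in Python, assuming a standard FFT-based collocation derivative (the FFT conventions below are ours and only illustrative):

```python
import numpy as np

N = 41
x = 2.0 * np.pi * np.arange(N) / N        # periodic grid on [0, 2*pi)
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)    # i times the integer wavenumbers

def ddx(u):
    """Spectral first derivative of a real periodic grid function."""
    return np.real(np.fft.ifft(ik * np.fft.fft(u)))

def F(u):
    """Burgers flux term -(u^2/2)_x, treated explicitly."""
    return -ddx(0.5 * u**2)

def G(u):
    """Viscous term (1/10) u_xx, treated implicitly."""
    return 0.1 * np.real(np.fft.ifft(ik**2 * np.fft.fft(u)))

u0 = np.sin(5 * x) + np.cos(2 * x)        # initial condition from the example
```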
Using a selection of the IMEX-EIS+ methods we developed, we evolve this in time
to $T_{f}=0.5$, for $\Delta t=\frac{T_{f}}{N}$ for various values of $N$.
In Table 3 we show the convergence rates of the
IMEX-EIS+ methods and the pIMEX-EIS+ methods
(3,3), (3,4), and (4,5) for values of $N$ from 210 to 1440,
and the IMEX-EIS+(5,6) method for $N$ from 60 to 280
(due to the high accuracy of the solutions).
Convergence plots are shown in Figure 3.
For most of the third through fifth order methods we observe a
convergence rate of $p+1$ before post-processing and $P=p+2$ after post-processing.
For the IMEX-EIS+(4,5) method the order before post-processing is higher than expected, at
fifth order ($4.67$), while the order after post-processing is improved but also,
as expected, fifth order ($4.90$).
For the sixth order method we observe a higher-than-expected convergence rate of
sixth order ($5.69$) before post-processing, and the expected convergence rate of sixth order
(also $5.69$) after post-processing. Table 3 confirms that our methods perform
as expected or better than expected for the numerical tests.
6 Conclusions
It has been previously shown in [23] that IMEX methods of the form (4)
with truncation error of order $p$ can attain order $p+1$ if the coefficients satisfy the
condition (19a). In this work we showed how, under additional conditions on the coefficients
of (4), it is possible to devise a post-processor that can extract
a solution of order $p+2$.
In Section 3.1 we proved that if a method of the form
(4) with truncation error of order $p$
satisfies the set of conditions (19)
then the growth of the errors will be inhibited so that at any time $t^{n}$ the error has the form
$$E^{n}=\boldsymbol{\tau}^{n}\Delta t^{p+1}+O(\Delta t^{p+2}).$$
In [6] we showed that given an error of this form we can construct a post-processor
that allows us to remove this first term of the error and obtain a solution that has error
$$\hat{e}^{n}=O(\Delta t^{p+2}).$$
In Section 3.2 we detail the construction of one such post-processor.
We proceed to find methods that satisfy the order conditions and the error inhibiting conditions,
and to construct the post-processor for each such method. We list the coefficients of two such methods and their
post-processor in Section 4 and provide coefficients of other methods in
our GitHub repository [10]. Our methods are designed to be A-stable for the
implicit part $G$ and to have good stability regions for the explicit part $F$, and we show the
radius of the semicircle in the complex plane that includes the linear stability region of
the explicit part.
Finally, in Section 5 we test some of
our methods on a nonlinear system of ODEs and on a nonlinear PDE and show that the order
of convergence before and after post-processing is as expected.
Appendix A The coefficients of a few IMEX-EIS+ methods
In this appendix we provide the coefficients of three EIS+ methods: the non-parallelizable
IMEX-EIS+(3,4) and IMEX-EIS+(5,6) methods and the pIMEX-EIS+(4,5) method, which is
parallelizable.
We also provide the details of a post-processor for each of these, and their linear stability regions
for the explicit case $G=0$. Note that these methods are all A-stable for the implicit case $F=0$.
Non-parallelizable IMEX-EIS+(3,4) This method has the form for each stage $i$
$$\displaystyle v_{n+1+c_{i}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{j=1}^{i}\left[\mathbf{D}_{ij}v_{n+c_{j}}+\Delta t\left((%
\mathbf{A}_{F})_{ij}F(v_{n+c_{j}})+(\mathbf{A}_{G})_{ij}G(v_{n+c_{j}})\right)\right]$$
$$\displaystyle+$$
$$\displaystyle\Delta t\left[\sum_{j=1}^{i-1}(\mathbf{R}_{F})_{ij}F(v_{n+1+c_{j}%
})+\sum_{j=1}^{i}(\mathbf{R}_{G})_{ij}G(v_{n+1+c_{j}})\right],$$
where
$$c_{1}=0,\;\;\;c_{2}=0.726140175537503,\;\;\;c_{3}=0.673358282778651.$$
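The block update above can be sketched in code. The sketch below makes two assumptions of ours that are not stated in the text: the implicit part is linear, $G(v)=Lv$, and the old-block sums use the full coefficient matrices as listed. Only the diagonal $(\mathbf{R}_{G})_{ii}$ term is implicit, so each stage reduces to one linear solve:

```python
import numpy as np

def eis_imex_block_step(V, F, L, dt, D, AF, AG, RF, RG):
    """Advance one block of stage values (hedged sketch).

    V is an (s, m) array whose rows are the previous-block stage values
    v_{n+c_j}; the return value holds the rows v_{n+1+c_j}.  The implicit
    part is assumed linear, G(v) = L @ v.
    """
    s, m = V.shape
    I = np.eye(m)
    Fold = np.array([F(V[j]) for j in range(s)])
    Gold = np.array([L @ V[j] for j in range(s)])
    Vnew = np.zeros_like(V)
    for i in range(s):
        rhs = D[i] @ V + dt * (AF[i] @ Fold + AG[i] @ Gold)
        for j in range(i):   # strictly lower-triangular new-block couplings
            rhs += dt * (RF[i, j] * F(Vnew[j]) + RG[i, j] * (L @ Vnew[j]))
        # the remaining implicit term dt*(R_G)_{ii} * L @ v is moved to the left
        Vnew[i] = np.linalg.solve(I - dt * RG[i, i] * L, rhs)
    return Vnew
```

For the parallelizable pIMEX variants $\mathbf{R}_{F}=0$ and $\mathbf{R}_{G}$ is diagonal, so the inner $j$-loop is empty and the $s$ stage solves are independent.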
The coefficients in $\mathbf{D}$ are
$$\mathbf{D}_{1j}=0.669589009596231,\;\;\;\mathbf{D}_{2j}=-0.300415337558440,\;%
\;\;\mathbf{D}_{3j}=0.630826327962208.$$
The coefficient matrices $\mathbf{A}_{F}$ and $\mathbf{A}_{G}$ are
$$\mathbf{A}_{F}=\left(\begin{array}[]{llll}0.114204309138172&-0.400390083432031%
&1.079557287314509\\
0.464138154216379&1.845209074440007&-2.681606546815293\\
0.354696311057433&1.044611661302771&-1.341592157784282\\
\end{array}\right)$$
and
$$\mathbf{A}_{G}=\left(\begin{array}[]{llll}0.284198645406530&-0.015257351367544%
&0.236227411970908\\
0.324903855316460&-0.362534474009427&0.207162116344608\\
0.095825552702204&0.715560227998031&0.177838308334027\\
\end{array}\right)$$
The final coefficients are
$$\mathbf{R}_{F}=\left(\begin{array}[]{lll}0&0&0\\
1.891771006717059&0&0\\
1.309753253604631&0.099260727618746&0\\
\end{array}\right)$$
and
$$\mathbf{R}_{G}=\left(\begin{array}[]{lll}0.288202807010756&0&0\\
1.074901350783908&0.275078840122604&0\\
0.113098097583571&-0.492120079122587&0.856527688304053\\
\end{array}\right)$$
The truncation error vectors for post-processing are
$$\boldsymbol{\tau}^{F}_{3}=\left(\begin{array}[]{l}-0.029109337573875\\
-0.039680299841934\\
0.012001277545145\\
\end{array}\right),\;\;\;\;\boldsymbol{\tau}^{G}_{3}=\left(\begin{array}[]{l}0%
.079790724801134\\
0.108766469751468\\
-0.032896338895945\\
\end{array}\right)$$
We note that $\boldsymbol{\tau}^{F}_{3}=(-0.36482106969733)\,\boldsymbol{\tau}^{G}_{3}$, so the two vectors are linearly dependent.
Using this form we construct the post-processor and find that to obtain a solution
$\hat{v}^{n}$ that is fifth order we use
$$\hat{v}^{n}=\sum_{j=1}^{3}w_{j}v_{n-1+c_{j}}+\sum_{j=1}^{3}w_{3+j}v_{n+c_{j}},$$
where
$$\displaystyle w_{1}=-0.005813528106374,\;\;\;w_{2}=-0.825824388871650,\;\;\;w_%
{3}=0.671784878748904,$$
$$\displaystyle w_{4}=1.187717516309380,\;\;\;w_{5}=0.117883101641288,\;\;\;w_{6%
}=-0.145747579721548.$$
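Applying the post-processor is just a weighted combination of the stored stage values from the last two blocks; note that the weights above sum to one, so constant solutions are reproduced exactly. A sketch:

```python
import numpy as np

# post-processing weights of the IMEX-EIS+(3,4) method, copied from above
w = np.array([-0.005813528106374, -0.825824388871650,  0.671784878748904,
               1.187717516309380,  0.117883101641288, -0.145747579721548])

def postprocess(V_prev, V_curr, w):
    """Weighted combination of the stage values v_{n-1+c_j} (rows of V_prev)
    and v_{n+c_j} (rows of V_curr) giving the post-processed solution."""
    return w @ np.vstack([V_prev, V_curr])
```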
Parallelizable pIMEX-EIS+(4,5) Each stage $i$ of this method can be written as
$$\displaystyle v_{n+1+c_{i}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{j=1}^{i}\left[\mathbf{D}_{ij}v_{n+c_{j}}+\Delta t\left((%
\mathbf{A}_{F})_{ij}F(v_{n+c_{j}})+(\mathbf{A}_{G})_{ij}G(v_{n+c_{j}})\right)%
\right]+\Delta t(\mathbf{R}_{G})_{ii}G(v_{n+1+c_{i}}),$$
where $c_{1}=0$ and
$$c_{2}=0.168033239597551,\;\;\;c_{3}=1.757182407781971,\;\;\;c_{4}=1.8594544713%
27513.$$
The matrix $\mathbf{D}$ has coefficients
$$\mathbf{D}_{1j}=-0.318365990733397,\;\;\;\mathbf{D}_{2j}=1.304472100371239,$$
$$\mathbf{D}_{3j}=0.549931869327788,\;\;\;\mathbf{D}_{4j}=-0.536037978965630.$$
The coefficients in $\mathbf{A}_{F}$ are:
$$\left(\begin{array}[]{llll}-1.664522119422666&2.437573230692123&-0.76966859604%
2686&0.807830422310789\\
-0.781689853324564&1.397193436278877&1.659473775700052&-1.295731181519254\\
1.321744800130381&-1.022763965721561&1.835477792707761&0.433936718202950\\
1.792224287866993&-1.556690154187516&1.162924903269568&1.272208371916028\\
\end{array}\right)$$
and the coefficients in $\mathbf{A}_{G}$ are:
$$\left(\begin{array}[]{llll}5.130504311291350&-6.868827443719447&-6.72255000847%
8589&4.949792109038540\\
1.365036148735676&-1.731952546469524&-8.799998237141496&6.717460091357383\\
-4.040734278322292&5.102367666085668&8.373021332707967&-8.044233252050056\\
-4.719539468031772&5.859796721307132&8.799997832663552&-8.486722018934611\\
\end{array}\right)$$
The matrix $\mathbf{R}_{F}$ has all zeros, and the matrix
$\mathbf{R}_{G}$ has only diagonal elements:
$$(\mathbf{R}_{G})_{11}=4.322293969405709,\;\;\;(\mathbf{R}_{G})_{22}=3.42870072%
0653071,$$
$$(\mathbf{R}_{G})_{33}=1.177973876898242,\;\;\;(\mathbf{R}_{G})_{44}=1.21713434%
1860772.$$
For this method, the truncation error vectors are
$$\boldsymbol{\tau}^{F}_{4}=\left(\begin{array}[]{l}0.488267196647527\\
-0.076569016719893\\
-1.995223087311692\\
-2.523266318943553\\
\end{array}\right),\;\;\;\;\;\boldsymbol{\tau}^{G}_{4}=\left(\begin{array}[]{l%
}-0.902269383509413\\
0.141491953557654\\
3.686974503536061\\
4.662746057189559\\
\end{array}\right),$$
and we observe that $\boldsymbol{\tau}^{F}_{4}=(-0.541154565999338)\boldsymbol{\tau}^{G}_{4}$.
These truncation error vectors are linearly dependent
so we only need to use one in the post-processing.
Using this form we construct the post-processor and find that to obtain a solution
$\hat{v}^{n}$ that is fifth order we use
$$\hat{v}^{n}=\sum_{j=1}^{4}w_{j}v_{n-1+c_{j}}+\sum_{j=1}^{4}w_{4+j}v_{n+c_{j}},$$
where
$$\displaystyle w_{1}=-0.039322995751032,\;\;\;w_{2}=0.075926208780666,\;\;\;w_{%
3}=-1.415777364482847,$$
$$\displaystyle w_{4}=1.158626364485013,\;\;\;w_{5}=0.331161725962668,\;\;\;w_{6%
}=0.925152344959055,$$
$$\displaystyle w_{7}=-0.108628113639943,\;\;\;w_{8}=0.072861829686421.$$
These methods are both A-stable for the implicit case $F=0$, and
in Figure 4 we show the linear stability regions for the explicit case
$G=0$. Clearly, the non-parallelizable IMEX-EIS+(3,4) has a much larger explicit linear stability
region than the parallelizable pIMEX-EIS+(4,5) method.
Non-parallelizable IMEX-EIS+(5,6) The coefficient matrices for this method are only given
to the first four decimal places due to space constraints; please download the exact values from [10]
when using these in practice.
$$\mathbf{D}_{1j}=0.0659,\;\;\mathbf{D}_{2j}=-0.1352,\;\;\mathbf{D}_{3j}=0.2825,%
\;\;\mathbf{D}_{4j}=0.5511,\;\;\mathbf{D}_{5j}=0.2355.$$
$$\mathbf{A}_{F}=\left(\begin{array}[]{ccccc}0.0675&2.4133&4.3749&-6.0472&-0.208%
2\\
0.7635&-0.1599&-2.8276&3.0064&-0.8574\\
0.1610&-1.1021&-3.1348&4.3204&-0.1167\\
0.4623&-1.4528&-2.6697&4.2186&-0.3427\\
-0.4278&2.3974&4.5886&-6.1872&0.3184\\
\end{array}\right),$$
$$\mathbf{A}_{G}=\left(\begin{array}[]{ccccc}-2.4205&-0.1634&1.1489&-0.7285&2.66%
59\\
2.8434&-3.9695&-0.0444&4.8236&-2.3774\\
-3.8766&-0.9584&4.1722&-3.1383&4.5513\\
-1.1413&-1.1380&0.5537&1.0814&1.3959\\
-2.1190&0.1164&0.2777&-0.0147&2.2335\\
\end{array}\right),$$
$$\mathbf{R}_{F}=\left(\begin{array}[]{ccccc}0&0&0&0&0\\
1.0312&0&0&0&0\\
0.6213&0.4411&0&0&0\\
0.6179&0.2422&0.0284&0&0\\
-0.4486&1.8928&1.4348&-2.9536&0\\
\end{array}\right),$$
$$\mathbf{R}_{G}=\left(\begin{array}[]{ccccc}0.0978&0&0&0&0\\
-0.8007&0.4812&0&0&0\\
0.4695&-0.5361&0.5065&0&0\\
-0.0593&-0.5753&-1.3000&2.2870&0\\
-3.6049&1.6589&1.1568&-2.5215&3.4317\\
\end{array}\right).$$
The truncation error vectors are linearly independent:
$$\boldsymbol{\tau}^{F}_{5}=\left(\begin{array}[]{l}-0.013938778257283\\
-0.029232291698827\\
-0.030789029777504\\
0.012120485455712\\
-0.004300675470610\\
\end{array}\right),\;\;\;\boldsymbol{\tau}^{G}_{5}=\left(\begin{array}[]{l}-0.%
005170695208015\\
0.158776476412505\\
0.157237715751941\\
-0.023750671362163\\
-0.040451707588424\\
\end{array}\right).$$
Using this form we construct the post-processor and find that to obtain a solution
$\hat{v}^{n}$ that is sixth order we use
$$\hat{v}^{n}=\sum_{j=1}^{5}w_{j}v_{n-1+c_{j}}+\sum_{j=1}^{5}w_{5+j}v_{n+c_{j}},$$
where
$$\displaystyle w_{1}=0.199430122528852,\;\;\;w_{2}=0.527176733379556,\;\;\;w_{3%
}=1.849342116119140,$$
$$\displaystyle w_{4}=-1.990042072366473,\;\;\;w_{5}=-0.236068245820308,\;\;\;w_%
{6}=-8.500340722029671,$$
$$\displaystyle w_{7}=-0.444089994252464,\;\;\;w_{8}=-0.105798566591708,\;\;\;w_%
{9}=0.318463049996237,$$
$$\displaystyle w_{10}=9.381927579036839.$$
Acknowledgment.
This publication is based on work supported by AFOSR grant FA9550-18-1-0383 and ONR-DURIP Grant N00014-18-1-2255.
A part of this research is sponsored by the Office of Advanced Scientific Computing Research; US Department of Energy,
and was performed at the Oak Ridge National Laboratory, which is managed by UT-Battelle, LLC under Contract no.
DE-AC05-00OR22725. This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725
with the US Department of Energy. The United States Government retains and the publisher, by accepting the article for
publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable,
world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
References
[1]
U. Ascher, S. Ruuth, and B. Wetton,
Implicit-explicit methods for time-dependent partial differential equations,
SIAM Journal on Numerical Analysis, 32 (1995), pp. 797–823.
[2]
U. M. Ascher, S. J. Ruuth, and R. J. Spiteri,
Implicit-explicit Runge-Kutta methods for time-dependent partial differential equations,
Applied Numerical Mathematics, 25 (1997), pp. 151–167. Special Issue on Time Integration.
[3]
M. Calvo, J. de Frutos, and J. Novo,
Linearly implicit Runge–Kutta methods for advection-reaction-diffusion equations,
Applied Numerical Mathematics, 37 (2001), pp. 535–549.
[4]
M. Crouzeix,
Une méthode multipas implicite-explicite pour l'approximation
des équations d'évolution paraboliques,
Numerische Mathematik, 35 (1980), pp. 257–276.
[5]
A. Ditkowski and S. Gottlieb,
Error Inhibiting Block One-step Schemes for Ordinary Differential Equations,
Journal of Scientific Computing 73(2) (2017) 691–711.
[6]
A. Ditkowski, S. Gottlieb and Z. Grant,
Explicit and implicit error inhibiting schemes with post-processing.
https://arxiv.org/abs/1910.02937
[7]
A. Ditkowski, S. Gottlieb and Z. Grant,
Two-derivative error inhibiting schemes with post-processing., (2019)
https://arxiv.org/abs/1912.04159
[8]
S. Gottlieb, Z.J. Grant, A. Ditkowski,
Explicit and Implicit EIS methods with post-processing, (2019),
GitHub repository,
https://github.com/EISmethods/EISpostprocessing.
[9]
S. Gottlieb, Z.J. Grant, A. Ditkowski,
EIS multiderivative methods with post-processing, (2019),
GitHub repository,
https://github.com/EISmethods/EIS_2derivative
[10]
S. Gottlieb, Z.J. Grant, A. Ditkowski,
Error inhibiting IMEX methods with post-processing, (2019),
GitHub repository,
https://github.com/EISmethods/EIS-IMEX
[11]
S. Gottlieb, Z. Grant, and D. C. Seal, Explicit
strong stability preserving multistage two-derivative time-stepping schemes,
Journal of Scientific Computing, 68 (2016), pp. 914–942.
[12]
B. Gustafsson, H.-O. Kreiss, and J. Oliger, Time dependent
problems and difference methods, vol. 24, John Wiley & Sons, 1995.
[13]
E. Hairer, G. Wanner, and S. P. Norsett,
Solving Ordinary Differential Equations I: Nonstiff Problems,
Springer Series in Computational Mathematics,
Springer-Verlag Berlin Heidelberg (1993).
[14]
G. Izzo, Z. Jackiewicz
Transformed implicit-explicit DIMSIMs with strong stability preserving explicit part,
Mathematics and Computers in Simulation (2019).
https://doi.org/10.1016/j.matcom.2019.11.008.
[15]
Z. Jackiewicz, General linear methods for ordinary differential
equations, John Wiley & Sons, 2009.
[16]
C. Kennedy and M. Carpenter,
Additive Runge-Kutta schemes for convection-diffusion-reaction equations,
Applied Numerical Mathematics, 44 (2003), pp. 139–181.
[17]
G.Yu. Kulikov,
On quasi-consistent integration by Nordsieck methods,
Journal of Computational and Applied Mathematics 225 (2009) 268–287.
[18]
G. Yu. Kulikov and R. Weiner,
Variable-Stepsize Interpolating Explicit Parallel Peer Methods with Inherent Global Error Control,
SIAM Journal on Scientific Computing, 32(4) (2010) 1695–1723.
[19]
G.Yu. Kulikov and R. Weiner,
Doubly quasi-consistent fixed-stepsize numerical integration
of stiff ordinary differential equations with implicit two-step
peer methods,
Journal of Computational and Applied Mathematics,
340 (2018) 256–275.
[20]
J. Lang and W. Hundsdorfer,
Extrapolation-based implicit-explicit Peer methods with optimised stability regions,
Journal of Computational Physics 337 (2017) pp. 203–215.
[21]
P. D. Lax and R. D. Richtmyer, Survey of the stability of linear
finite difference equations, Communications on pure and applied mathematics
9 (1956), no. 2, 267–293.
[22]
A. Quarteroni, R. Sacco, and F. Saleri, Numerical
mathematics, vol. 37, Springer Science & Business Media, 2010.
[23]
M. Schneider, J. Lang, and W. Hundsdorfer,
Extrapolation-based super-convergent implicit-explicit Peer methods
with A-stable implicit part,
Journal of Computational Physics
367 (2018) pp. 121–133.
[24]
M. Schneider, J. Lang, and R. Weiner,
Super-convergent implicit-explicit Peer methods with variable step sizes,
Journal of Computational and Applied Mathematics (2019),
https://doi.org/10.1016/j.cam.2019.112501
[25]
R.D. Skeel,
Analysis of fixed-stepsize methods,
SIAM Journal on Numerical Analysis
13 (1976) 664–685.
[26]
B. Soleimani and R. Weiner,
IMEX peer methods for fast-wave-slow-wave problems,
Applied Numerical Mathematics (2017) pp. 221–237.
[27]
E. Suli and D.F. Mayers,
An Introduction to Numerical Analysis,
Cambridge University Press, Cambridge, 2003.
[28]
R. Weiner, G.Yu. Kulikov, and H. Podhaisky,
Variable-stepsize doubly quasi-consistent parallel explicit peer methods
with global error control,
Applied Numerical Mathematics 62 (2012) 1591–1603.
[29]
R. Weiner, B. A. Schmitt, H. Podhaisky, and S. Jebens,
Superconvergent explicit two-step peer methods,
Journal of Computational and Applied Mathematics
223 (2009) 753–764.
[30]
H. Zhang, A. Sandu, and S. Blaise,
High order implicit-explicit general linear methods with optimized stability regions,
SIAM Journal on Scientific Computing 38(3) (2016)
pp. A1430–A1453.
Non-collapsing in fully nonlinear curvature flows
Ben Andrews
Mathematical Sciences Institute, Australian National University, ACT 0200 Australia; Mathematical Sciences Center, Tsinghua University, Beijing 100084, China; Morningside Center for Mathematics, Chinese Academy of Sciences, Beijing 100190, China
[email protected]
,
Mat Langford
Mathematical Sciences Institute, Australian National University, ACT 0200 Australia
[email protected]
and
James McCoy
Institute for Mathematics and its Applications,
School of Mathematics and Applied Statistics,
University of Wollongong,
Wollongong, NSW 2522,
Australia
[email protected]
Abstract.
We consider embedded hypersurfaces evolving by fully nonlinear flows in which the normal speed of motion is a homogeneous degree one, concave or convex function of the principal curvatures, and prove a non-collapsing estimate: Precisely, the function which gives the curvature of the largest interior sphere touching the hypersurface at each point is a subsolution of the linearized flow equation if the speed is concave. If the speed is convex then there is an analogous statement for exterior spheres. In particular, if the hypersurface moves with positive speed and the speed is concave in the principal curvatures, then the curvature of the largest touching interior sphere is bounded by a multiple of the speed as long as the solution exists. The proof uses a maximum principle applied to a function of two points on the evolving hypersurface. We illustrate the techniques required for dealing with such functions in a proof of the known containment principle for flows of hypersurfaces.
2010 Mathematics Subject Classification: Primary 53C44; Secondary 35K55, 58J35
This research was partly supported by ARC Discovery Projects grant DP0985802. The second and third authors appreciate the support of a University of Wollongong Faculty of Informatics Research Development Scheme grant, and the support of the Institute for Mathematics and its Applications at the University of Wollongong.
1. Introduction
Let $M^{n}$ be a compact manifold, and $X:M^{n}\times[0,T)\to\mathbb{R}^{n+1}$ a family of smooth embeddings evolving by a curvature flow
(1)
$$\frac{\partial X}{\partial t}=-F\nu,$$
where $\nu$ is the unit normal, and the
speed $F$ is a homogeneous degree one, monotone increasing function of the principal curvatures on a convex cone $\Gamma$ containing the positive ray.
We will assume below that $F$ is either concave or convex. The purpose of this paper is to prove a non-collapsing result for such flows, analogous to the result proved for the mean curvature flow by the first author in [ANonCollapse]. We expect that this will provide a key step towards understanding the singular behaviour of such flows for non-convex solutions: In the case of the mean curvature flow, the monotonicity formula of Huisken [HuMono] provides a lot of information about the structure of singularities, and this is complemented by the asymptotic convexity results of Huisken and Sinestrari [HS1, HS2], and the differential Harnack or Li-Yau-Hamilton type inequality proved by Richard Hamilton [HamMCFHarnack]. The latter is available for a large class of flows [AHarnack], but there are no analogues of the monotonicity formula or the asymptotic convexity result. The non-collapsing estimate does not precisely replace either of these, but seems nevertheless a useful tool which may be used in their stead.
The non-collapsing estimate proved for the mean curvature flow in [ANonCollapse] amounts to the statement that every point of the evolving hypersurface is touched by interior or exterior spheres with radius equal to a constant $\delta$ divided by the mean curvature $H$. It was shown there that interior non-collapsing is equivalent to the inequality
$$\|X(x,t)-X(y,t)\|^{2}\geq\frac{2\delta}{H(x,t)}\langle X(x,t)-X(y,t),\nu(x,t)\rangle$$
for all $x,y\in M$. Equivalently, this amounts to the inequality
(2)
$$Z\left(x,y,t\right):=\frac{2\langle X(x,t)-X(y,t),\nu(x,t)\rangle}{\|X(x,t)-X(%
y,t)\|^{2}}\leq\frac{H(x,t)}{\delta}$$
for all $(x,y)\in(M\times M)\setminus D$, where $D$ is the diagonal $D=\{(x,x):\ x\in M\}$. Here we adopt the convention that the unit normal $\nu$ points outwards.
Note that the supremum of the left-hand side of (2) over $y$ gives the geodesic curvature of the largest interior sphere which touches at $x$. Below we will formulate a non-collapsing result for more general curvature flows in terms of this quantity.
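To make the sphere interpretation explicit: with the outward normal convention, an interior sphere of radius $r$ tangent at $X(x,t)$ has centre $X(x,t)-r\nu(x,t)$, and it lies inside the region bounded by the hypersurface precisely when every point $X(y,t)$ lies outside it:

```latex
\|X(y,t)-\bigl(X(x,t)-r\nu(x,t)\bigr)\|^{2}\;\geq\;r^{2}
\quad\Longleftrightarrow\quad
\|X(x,t)-X(y,t)\|^{2}-2r\,\langle X(x,t)-X(y,t),\nu(x,t)\rangle\;\geq\;0,
```

that is, $Z(x,y,t)\leq 1/r$ for all $y\neq x$. Taking $r=\delta/H(x,t)$ recovers (2), and the supremum over $y$ is the curvature $1/r$ of the largest such sphere.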
Definition 1.
The interior sphere curvature $\overline{Z}(x,t)$ at the point $(x,t)$ is defined by $\overline{Z}(x,t)=\sup\left\{Z(x,y,t):\ y\in M,\ y\neq x\right\}$. The exterior sphere curvature $\underline{Z}(x,t)$ at the point $(x,t)$ is defined by $\underline{Z}(x,t)=\inf\left\{Z(x,y,t):\ y\in M,\ y\neq x\right\}$.
In the results to be described, an important role will be played by an equation we call the linearized flow. To motivate this consider a smooth family of solutions $X:\ M\times[0,T)\times(-a,a)\to\mathbb{R}^{n+1}$, and define $f:\ M\times[0,T)\to\mathbb{R}$ by $f(x,t)=\left\langle\frac{\partial}{\partial s}\left(X(x,t,s)\right)\Big{|}_{s=%
0},\nu(x,t)\right\rangle$. Then $f$ satisfies the equation
(3)
$$\frac{\partial f}{\partial t}=\dot{F}^{kl}\nabla_{k}\nabla_{l}f+\dot{F}^{kl}{h%
_{k}}^{p}h_{pl}f.$$
Here $\dot{F}^{kl}$ is the derivative of $F$ with respect to the components $h_{kl}$ of the second fundamental form, defined by $\dot{F}^{kl}\big{|}_{A}B_{kl}=\frac{d}{ds}\left(F(A+sB)\right)\big{|}_{s=0}$ for any symmetric $B$. Particular solutions of (3) include the speed $F$ (see [Aconvex]*Theorem 3.7), corresponding to time translation $X(x,t,s)=X(x,t+s)$, the functions $\langle\nu(x,t),{\vec{e}}\rangle$ for $\vec{e}\in\mathbb{R}^{n+1}$ fixed, corresponding to spatial translations $X(x,t,s)=X(x,t)+s\vec{e}$, and the function $\langle\nu(x,t),X(x,t)\rangle+2tF(x,t)$ (see [smoczyk] or [AMZConvexHypersurfaces]*Theorem 14), corresponding to the scalings $X(x,s,t)=(1+s)X(x,(1+s)^{-2}t)$.
To formulate our main result we need to recall the notion of viscosity subsolution or supersolution for parabolic equations: If $M$ is a manifold with (possibly time-dependent) connection $\nabla$ and $v:\ M\times[0,T)\to\mathbb{R}$ is continuous, then $v$ is a viscosity subsolution of the equation $\frac{\partial u}{\partial t}=G(x,t,u,\nabla u,\nabla^{2}u)$ if for every $(x_{0},t_{0})\in M\times[0,T)$ and every $C^{2}$ function $\phi$ on $M\times[0,T)$ such that $\phi(x_{0},t_{0})=v(x_{0},t_{0})$, $\phi\geq v$ for $x$ in a neighbourhood of $x_{0}$ and for $t\leq t_{0}$ sufficiently close to $t_{0}$, it is true that $\frac{\partial\phi}{\partial t}\leq G(x,t,\phi,\nabla\phi,\nabla^{2}\phi)$ at the point $(x_{0},t_{0})$. The function $v$ is a viscosity supersolution if the same holds with both inequalities for $\phi$ reversed.
Our main result is the following:
Theorem 2.
Assume that $X:\ M\times[0,T)\to\mathbb{R}^{n+1}$ is an embedded solution of (1). If $F$ is convex then $\underline{Z}$ is a viscosity supersolution of the linearised flow (3).
If $F$ is concave then $\overline{Z}$ is a viscosity subsolution of (3).
Before we prove Theorem 2, we mention an important consequence:
Corollary 3.
If $F$ is convex and positive and $X$ is an embedded solution of the curvature flow (1), then $\inf_{M}\frac{\underline{Z}(x,t)}{F(x,t)}$ is non-decreasing in $t$. If $F$ is concave and positive and $X$ is an embedded solution to the flow with speed $F$, then $\sup_{M}\frac{\overline{Z}(x,t)}{F(x,t)}$ is non-increasing in $t$.
Proof of Corollary 3.
Since $F$ satisfies equation (3) (see for example [AMZConvexHypersurfaces]*Lemma 9), the result reduces to a simple comparison property of viscosity subsolutions and supersolutions. We include the argument here for completeness: Assume $F$ is convex, and for each $t$ let $\phi(t)=\inf_{x\in M}\frac{\underline{Z}(x,t)}{F(x,t)}$. We must show that $\phi$ is non-decreasing in $t$. We will accomplish this by proving that $\underline{Z}(x,t)-\left(\phi(t_{0})-\varepsilon\text{\rm e}^{t-t_{0}}\right)F%
(x,t)\geq 0$ for any $t_{0}\in[0,T)$, $t\in[t_{0},T)$ and $\varepsilon>0$. Taking the limit $\varepsilon\to 0$ then gives $\underline{Z}(x,t)\geq\phi(t_{0})F(x,t)$ and hence $\phi(t)\geq\phi(t_{0})$ for $t\geq t_{0}$.
Fix $t_{0}\in[0,T)$ and $\varepsilon>0$. Then $\underline{Z}(x,t_{0})-(\phi(t_{0})-\varepsilon)F(x,t_{0})\geq\varepsilon F(x,%
t_{0})>0$ for all $x$, so if $\underline{Z}-\left(\phi(t_{0})-\varepsilon\text{\rm e}^{t-t_{0}}\right)F$ does not remain positive for $t>t_{0}$ then there exists a time $t_{1}>t_{0}$ and a point $x_{1}\in M$ such that $\underline{Z}-\left(\phi(t_{0})-\varepsilon\text{\rm e}^{t-t_{0}}\right)F$ is non-negative on $M\times[t_{0},t_{1}]$, but $\underline{Z}(x_{1},t_{1})-\left(\phi(t_{0})-\varepsilon\text{\rm e}^{t_{1}-t_%
{0}}\right)F(x_{1},t_{1})=0$. Since $\underline{Z}$ is a supersolution of equation (3), we have at this point
$$\displaystyle 0$$
$$\displaystyle\leq\frac{\partial}{\partial t}\left(\left(\phi(t_{0})-%
\varepsilon\text{\rm e}^{t-t_{0}}\right)F\right)-\dot{F}^{{kl}}\nabla_{k}%
\nabla_{l}\left(\left(\phi(t_{0})-\varepsilon\text{\rm e}^{t-t_{0}}\right)F%
\right)-\left(\phi(t_{0})-\varepsilon\text{\rm e}^{t-t_{0}}\right)F\dot{F}^{kl%
}h_{k}^{p}h_{pl}$$
$$\displaystyle=-\varepsilon\text{\rm e}^{t_{1}-t_{0}}F+\left(\phi(t_{0})-%
\varepsilon\text{\rm e}^{t_{1}-t_{0}}\right)\left(\dot{F}^{kl}\nabla_{k}\nabla%
_{l}F+\dot{F}^{kl}h_{k}^{p}h_{pl}\right)$$
$$\displaystyle\quad\hbox{}-\dot{F}^{{kl}}\nabla_{k}\nabla_{l}\left(\left(\phi(t%
_{0})-\varepsilon\text{\rm e}^{t_{1}-t_{0}}\right)F\right)-\left(\phi(t_{0})-%
\varepsilon\text{\rm e}^{t_{1}-t_{0}}\right)F\dot{F}^{kl}h_{k}^{p}h_{pl}$$
$$\displaystyle=-\varepsilon\text{\rm e}^{t_{1}-t_{0}}F$$
$$\displaystyle<0,$$
a contradiction proving that $\underline{Z}-\left(\phi(t_{0})-\varepsilon\text{\rm e}^{t-t_{0}}\right)F$ remains positive. The argument for $F$ concave is similar.
∎
Corollary 3 is equivalent to the statement that the interior (for $F$ concave) or exterior (for $F$ convex) of the evolving hypersurfaces remains $\delta$-non-collapsed on the scale of $F$, in the sense of [ANonCollapse].
We remark here that the interpretation of the non-collapsing estimate via subsolutions and supersolutions of the linearised flow (3) gives a new perspective even for the mean curvature flow. Indeed, our proof is quite different from that in [ANonCollapse], and rather more transparent.
2. Interlude: The Containment Principle
The proof of the main theorem uses computations of the second derivatives of the function $Z$ over the product $M\times M$, and involves a careful choice of coefficients particularly in the mixed second derivatives. We note that there are many precedents for computations of this sort:
Kruzhkov [Kruzhkov] applied maximum principles to the difference of values at two points for solutions of parabolic equations in one space variable; for elliptic problems quantities such as this were used by Korevaar [Korevaar], Kennington [Kennington] and Kawohl [Kawohl] to derive a variety of convexity properties of solutions. For parabolic equations estimates on the modulus of continuity have been developed in [AC1, AC2] and were applied in [AC3, Ni] to eigenfunctions and heat kernels. In geometric flow problems related ideas appear in work on the curve-shortening problem by Huisken [HuDistComp] and Hamilton [HamCSFComp] and on Ricci flow by Hamilton [HamRFComp]. More recent refinements of these techniques appear in [AB1, AB2, AB3].
Before proving the main result, we illustrate some of the techniques involved in a simpler problem: The containment principle for solutions of fully nonlinear curvature flows of hypersurfaces. For this problem we can consider speeds $F$ which need not be homogeneous of degree one, and need not be either convex or concave:
Theorem 4.
Assume that $F$ is an odd non-decreasing symmetric function of the principal curvatures defined on $\Gamma\cup(-\Gamma)$, where $\Gamma\subset\mathbb{R}^{n}$ is a symmetric cone containing the positive cone, and $-\Gamma=\{-A:\ A\in\Gamma\}$. Let $X_{i}:M_{i}\times\left[0,T\right)\rightarrow\mathbb{R}^{n+1}$, $i=1,2$ be two compact solutions to (1) with $X_{1}\left(M_{1},0\right)\cap X_{2}\left(M_{2},0\right)=\emptyset$. Then the distance from $X_{1}\left(M_{1},t\right)$ to $X_{2}\left(M_{2},t\right)$ is non-decreasing, and in particular $X_{1}\left(M_{1},t\right)\cap X_{2}\left(M_{2},t\right)=\emptyset$ for $t\in\left[0,T\right)$.
Proof.
Define $d:M_{1}\times M_{2}\times\left[0,T\right)\to\mathbb{R}$ by
$$d\left(x,y,t\right)=\left\|X_{1}\left(x,t\right)-X_{2}\left(y,t\right)\right\|%
\mbox{.}$$
We show
$$\min_{M_{1}\times M_{2}}d\left(\cdot,t\right)\geq\min_{M_{1}\times M_{2}}d%
\left(\cdot,0\right)\mbox{,}$$
which is positive, since the initial hypersurfaces are disjoint. As notation we will also set
$$w\left(x,y,t\right)=\frac{X_{1}\left(x,t\right)-X_{2}\left(y,t\right)}{d\left(%
x,y,t\right)}$$
and write $\partial_{i}^{x}=\frac{\partial X_{1}}{\partial x_{i}}$ and $\partial_{i}^{y}=\frac{\partial X_{2}}{\partial y_{i}}$.
The function $d$ evolves under (1) by
(4)
$$\frac{\partial}{\partial t}d=\left<w,-F_{x}\nu_{x}+F_{y}\nu_{y}\right>.$$
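Equation (4) follows by differentiating $d=\|X_{1}-X_{2}\|$ in $t$ and substituting the flow $\partial_{t}X_{i}=-F_{i}\nu_{i}$. As a sanity check (not part of the original argument), the first step can be verified symbolically for hypothetical smooth motions of the two points in $\mathbb{R}^{3}$:

```python
import sympy as sp

t = sp.symbols('t')
# Hypothetical smooth motions of the two nearest points in R^3
# (the names a_i, b_i are placeholders, not from the paper).
X1 = sp.Matrix([sp.Function(f'a{i}')(t) for i in range(3)])
X2 = sp.Matrix([sp.Function(f'b{i}')(t) for i in range(3)])
d = sp.sqrt((X1 - X2).dot(X1 - X2))
w = (X1 - X2) / d

# d/dt ||X1 - X2|| = <w, dX1/dt - dX2/dt>; substituting dX_i/dt = -F_i nu_i gives (4).
lhs = sp.diff(d, t)
rhs = w.dot(sp.diff(X1, t) - sp.diff(X2, t))
assert sp.simplify(lhs - rhs) == 0
```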
Suppose there is a spatial minimum of $d$ at $\left(x_{0},y_{0},t_{0}\right)$. Then at this point,
$$\nabla^{M_{1}\times M_{2}}d=0\quad\mbox{ and }\quad\mathrm{Hess}^{M_{1}\times M_{2}}d\geq 0\,.$$
Choosing local orthonormal coordinates on $M_{1}\times M_{2}$ at $\left(x_{0},y_{0},t_{0}\right)$, that is, orthonormal coordinates $\left\{x^{i}\right\}$ at $x_{0}$ and orthonormal coordinates $\left\{y^{i}\right\}$ at $y_{0}$ we have
$$\nabla_{j}^{M_{1}}d=\left<\partial_{j}^{x},w\right>\quad\mbox{ and }\quad\nabla_{j}^{M_{2}}d=-\left<\partial_{j}^{y},w\right>\,.$$
Since we assumed that $F$ is odd, the flow is invariant under change of orientation and we can choose $\nu_{x}=\nu_{y}=w$.
In view of the definition of $w$, we have at $\left(x_{0},y_{0},t_{0}\right)$ that
(5)
$$\nabla_{j}^{M_{1}}w=\frac{1}{d}\,\partial_{j}^{x}\quad\mbox{ and }\quad\nabla_{j}^{M_{2}}w=-\frac{1}{d}\,\partial_{j}^{y}\,.$$
For the second spatial derivatives of $d$ we have
$$\nabla_{i}^{M_{1}}\nabla_{j}^{M_{1}}d=\left<\nabla_{i}^{M_{1}}\nabla_{j}^{M_{1}}X_{1},w\right>+\left<\partial_{j}^{x},\nabla_{i}^{M_{1}}w\right>\,,$$
$$\nabla_{i}^{M_{2}}\nabla_{j}^{M_{1}}d=\left<\partial_{j}^{x},\nabla_{i}^{M_{2}}w\right>\quad\mbox{and}$$
$$\nabla_{i}^{M_{2}}\nabla_{j}^{M_{2}}d=-\left<\nabla_{i}^{M_{2}}\nabla_{j}^{M_{2}}X_{2},w\right>-\left<\partial_{j}^{y},\nabla_{i}^{M_{2}}w\right>\,.$$
Using (5), at $\left(x_{0},y_{0},t_{0}\right)$ these become
$$\nabla_{i}^{M_{1}}\nabla_{j}^{M_{1}}d=\left<\nabla_{i}^{M_{1}}\nabla_{j}^{M_{1}}X_{1},w\right>+\frac{1}{d}\,g_{ij}^{M_{1}}\,,$$
$$\nabla_{i}^{M_{2}}\nabla_{j}^{M_{1}}d=-\frac{1}{d}\left<\partial_{j}^{x},\partial_{i}^{y}\right>\quad\mbox{and}$$
$$\nabla_{i}^{M_{2}}\nabla_{j}^{M_{2}}d=-\left<\nabla_{i}^{M_{2}}\nabla_{j}^{M_{2}}X_{2},w\right>+\frac{1}{d}\,g_{ij}^{M_{2}}\,.$$
We derive the following at $\left(x_{0},y_{0},t_{0}\right)$: For any vector $v$ we have
$$\displaystyle 0\leq v^{i}v^{j}\left(\nabla_{i}^{M_{1}}\nabla_{j}^{M_{1}}d+2\,\nabla_{i}^{M_{2}}\nabla_{j}^{M_{1}}d+\nabla_{i}^{M_{2}}\nabla_{j}^{M_{2}}d\right)$$
$$\displaystyle=-h^{x}_{ij}v^{i}v^{j}\langle\nu_{x},w\rangle+\frac{1}{d}g^{M_{1}}_{ij}v^{i}v^{j}+h^{y}_{ij}v^{i}v^{j}\langle\nu_{y},w\rangle+\frac{1}{d}g^{M_{2}}_{ij}v^{i}v^{j}-\frac{2}{d}v^{i}v^{j}\langle\partial^{x}_{i},\partial^{y}_{j}\rangle.$$
Since $w=\nu_{x}=\nu_{y}$, the local coordinates near $x$ and $y$ may be chosen such that $\partial_{i}^{x}=\partial_{i}^{y}$ for all $i$ and $g^{M_{1}}_{ij}=g^{M_{2}}_{ij}=\delta_{ij}$. The above becomes
$$h^{x}_{ij}v^{i}v^{j}\leq h^{y}_{ij}v^{i}v^{j},$$
or, since $v$ is arbitrary, $h^{x}\leq h^{y}$ as bilinear forms. Finally, since $F$ is monotone, we have $F_{x}\leq F_{y}$, and hence by (4) we have
$$\frac{\partial d}{\partial t}=-F_{x}+F_{y}\geq 0.$$
∎
Remarks.
(1). If $F$ is odd, it can be shown by an argument similar to the one above that for compact solutions of (1) with embedded initial hypersurface, the evolving hypersurfaces remain embedded while the curvature remains bounded. Defining $d:M\times M\times\left[0,T\right)\to\mathbb{R}$ as before, the curvature bound implies that there is a neighbourhood $E$ of $D=\left\{\left(x,x\right):x\in M\right\}$ in $M\times M$ such that
$$d_{\mathbb{R}^{n+1}}\left(x,y,\cdot\right)\geq Cd_{M}(x,y).$$
Consequently, the argument for the containment principle may be applied on $(M\times M)\setminus E$ to conclude that embeddedness is preserved.
(2). In the containment principle the assumption that $F$ is odd can be relaxed if we make an additional topological assumption on the hypersurfaces to guarantee the correct orientation: If we assume $F$ is defined on an arbitrary symmetric cone $\Gamma$ containing the positive cone, and $M_{1}=\partial\Omega_{1}$ and $M_{2}=\partial\Omega_{2}$ with $\Omega_{1}\subset\Omega_{2}$, and require that the unit normal to $M_{i}$ points out of $\Omega_{i}$ for $i=1,2$, then the above argument goes through with minor changes. Without such a condition disjointness may not be preserved: For example if $n=2$ and $F=H+|A|$, with the cone $\Gamma=\{(\kappa_{1},\kappa_{2}):\ \max\{\kappa_{1},\kappa_{2}\}>0\}$, then surfaces with opposite orientation having nearest points of saddle type will move closer together (and can cross). In this example it is also true that embedded initial surfaces can evolve smoothly to become non-embedded.
3. Proof of the Main Theorem
We now prove Theorem 2, namely, that $\underline{Z}$ ($\overline{Z}$) is a viscosity supersolution (subsolution) of the linearised flow (3) when $F$ is convex (concave).
As in the previous section, the proof involves computation with the second derivatives over the product $M\times M$. However, the computation here has a feature that is crucial in the case of fully nonlinear flows: in all the previous computations of this type mentioned above, the two points $x$ and $y$ appear in a symmetric way, so that the choice of coefficients in the second derivatives is determined by information at both points. This has been a serious obstacle to applying these methods to fully nonlinear flows, since the coefficients of the equation at different points would involve the second derivatives (or second fundamental form) at different points, and there is insufficient control on these to allow a useful comparison. In the present computation, however, $x$ and $y$ play very different roles, and in particular the function $Z$ depends only on $x$ at the level of the highest derivatives. Accordingly we are able to use a choice of coefficients in the second derivatives which depends on $x$ but not on $y$, thus removing any need to compare the second fundamental forms at different points. The key observation that makes this choice work is given in Lemma 5.
Lemma 5.
Proof of Theorem 2.
The definitions of $\overline{Z}(x,t)$ and $\underline{Z}(x,t)$ involve extrema of $Z$ over the noncompact set $\{y\in M:\ y\neq x\}$. Accordingly we begin by extending $Z$ to a continuous function on a suitable compactification.
The diagonal
$D$ is a compact submanifold of dimension and codimension $n$ in $M\times M$. The normal subspace $N_{(x,x)}D$ of $D$ at $(x,x)$ is the subspace $\{(u,-u):\ u\in T_{x}M\}\subset T_{(x,x)}(M\times M)$. The tubular neighbourhood theorem provides $r>0$ such that the exponential map is a diffeomorphism on $\{(x,x,u,-u)\in TM\times TM:\ 0<\|u\|<r\}$.
We ‘blow up’ along $D$ to define a manifold with boundary $\hat{M}$ which compactifies $(M\times M)\setminus D$, as follows:
As a set, $\hat{M}$ is the disjoint union of $(M\times M)\setminus\{(x,x):\ x\in M\}$ with the unit sphere bundle $SM=\{(x,v)\in TM:\ \|v\|=1\}$. The manifold-with-boundary structure is defined by the atlas generated by all charts for $(M\times M)\setminus D$, together with the charts $\hat{Y}$ from $SM\times(0,r)$ defined by taking a chart $Y$ for $SM$, and setting $\hat{Y}(z,s):=(\exp(sY(z)),\exp(-sY(z)))$.
We extend the function $Z$ to $\hat{M}\times[0,T)$ as follows: For $(x,y)\in(M\times M)\setminus D$ and $t\in[0,T)$ we define
$$Z(x,y,t)=\frac{2\langle X(x,t)-X(y,t),\nu(x,t)\rangle}{\|X(x,t)-X(y,t)\|^{2}}.$$
For $(x,v)\in SM$ we define
$$Z(x,v,t)=h_{(x,t)}(v,v),$$
where $h_{(x,t)}$ is the second fundamental form of $M_{t}$ at $x$. Since $X$ is an embedding, $Z$ is continuous on $(M\times M)\setminus D$. A straightforward computation shows that the above extension of $Z$ to $\hat{M}$ is also continuous.
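As a sanity check on this extension (an illustrative example, not from the text): on the unit sphere, where the embedding satisfies $\nu=X$, the quantity $Z$ is identically $1$ for every pair $x\neq y$, agreeing with the boundary value $h_{x}(v,v)=1$ as $y\to x$:

```python
import numpy as np

def ball_Z(x, y):
    # Z(x,y) = 2 <X(x) - X(y), nu(x)> / ||X(x) - X(y)||^2; on the unit sphere
    # X is the identity embedding and nu(x) = x.
    diff = x - y
    return 2.0 * diff.dot(x) / diff.dot(diff)

# On the unit sphere Z(x,y) = 1 for every pair x != y, matching the
# boundary value h_x(v,v) = 1 of the blown-up extension.
rng = np.random.default_rng(1)
for _ in range(50):
    x = rng.standard_normal(3); x /= np.linalg.norm(x)
    y = rng.standard_normal(3); y /= np.linalg.norm(y)
    assert abs(ball_Z(x, y) - 1.0) < 1e-9
```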
It follows that $\overline{Z}(x,t)$ is attained on $\hat{M}$, in the sense that either there exists $y\in M\setminus\{x\}$ such that $\overline{Z}(x,t)=Z(x,y,t)$, or there exists $v\in T_{x}M$ with $\|v\|=1$ such that $\overline{Z}(x,t)=Z(x,v,t)$. Also, since the supremum over $M\setminus\{x\}$ equals the supremum over $\hat{M}$, and this is no less than the supremum over the boundary $SM$, we have that $\overline{Z}(x,t)$ is no less than the maximum principal curvature $\kappa_{\max}(x,t)$. Similarly, $\underline{Z}(x,t)$ is attained on $\hat{M}$ and is no greater than the minimum principal curvature $\kappa_{\min}(x,t)$.
To prove that $\overline{Z}$ is a subsolution if $F$ is concave, we consider, for an arbitrary point, $(x_{0},t_{0})$, an arbitrary $C^{2}$ function $\phi$ which lies above $\overline{Z}$ on a neighbourhood of $(x_{0},t_{0})$ in $M\times[0,t_{0}]$, with equality at $(x_{0},t_{0})$, and prove a differential inequality for $\phi$ at $(x_{0},t_{0})$.
Observe that for all $x$ close to $x_{0}$, and all $t\leq t_{0}$ close to $t_{0}$ we have $Z(x,y,t)\leq\overline{Z}(x,t)\leq\phi(x,t)$ for each $y\neq x$ in $M$, and $Z(x,v,t)\leq\overline{Z}(x,t)\leq\phi(x,t)$ for all $v\in S_{x}M$. Furthermore equality holds in the last inequality in both cases when $(x,t)=(x_{0},t_{0})$. By definition of $\overline{Z}$ we either have $Z(x_{0},y_{0},t_{0})=\overline{Z}(x_{0},t_{0})$ for some $y_{0}\neq x_{0}$, or we have $Z(x_{0},\xi_{0},t_{0})=\overline{Z}(x_{0},t_{0})$ for some $\xi_{0}\in S_{x_{0}}M$.
We consider the latter case first: Define a smooth unit vector field $\xi$ near $(x_{0},t_{0})$ by choosing $\xi(x_{0},t_{0})=\xi_{0}$, extending to $(x,t_{0})$ for $x$ close to $x_{0}$ by parallel translation along geodesics, and extending in the time direction by solving $\frac{\partial\xi}{\partial t}=F{\mathcal{W}}(\xi)$, where $\mathcal{W}$ is the Weingarten map. This construction implies that $\nabla\xi(x_{0},t_{0})=0$ and $\nabla^{2}\xi(x_{0},t_{0})=0$, and from the evolution equation for the second fundamental form we find that
$$\frac{\partial}{\partial t}(h(\xi,\xi))=\dot{F}^{kl}\nabla_{k}\nabla_{l}(h(\xi,\xi))+\ddot{F}^{kl,pq}\nabla_{\xi}h_{kl}\nabla_{\xi}h_{pq}+h(\xi,\xi)\dot{F}^{kl}h_{k}^{p}h_{pl}$$
at the point $(x_{0},t_{0})$. The second term on the right is non-positive by the concavity of $F$. At the point $(x_{0},t_{0})$ we also have $\phi=h(\xi,\xi)$, and since $\phi\geq h(\xi,\xi)$ at nearby points and earlier times we also have $\frac{\partial\phi}{\partial t}\leq\frac{\partial}{\partial t}(h(\xi,\xi))$ and $\nabla^{2}\phi\geq\nabla^{2}(h(\xi,\xi))$ at this point. Combining these inequalities gives $\frac{\partial\phi}{\partial t}\leq\dot{F}^{kl}\nabla_{k}\nabla_{l}\phi+\phi\dot{F}^{kl}h_{k}^{p}h_{pl}$ at $(x_{0},t_{0})$ as required.
Next we consider the case where $Z(x_{0},y_{0},t_{0})=\phi(x_{0},t_{0})$ for some $y_{0}\neq x_{0}$, and $\phi(x,t)\geq Z(x,y,t)$ for all points $x$ near $x_{0}$, times $t\leq t_{0}$ near $t_{0}$, and arbitrary $y\neq x$ in $M$. This implies that $\frac{\partial\phi}{\partial t}(x_{0},t_{0})\leq\frac{\partial Z}{\partial t}(x_{0},y_{0},t_{0})$, that the first spatial derivatives of $\phi-Z$ in $x$ and $y$ vanish at $(x_{0},y_{0},t_{0})$, and that the second spatial derivatives of $\phi-Z$ are non-negative at $(x_{0},y_{0},t_{0})$. We compute these derivatives, working in local normal coordinates $\{x^{i}\}$ near $x$ and $\{y^{i}\}$ near $y$. To simplify notation we define $d=|X(x,t)-X(y,t)|$ and $w=\frac{X(x,t)-X(y,t)}{d}$ and write $\partial^{x}_{i}=\frac{\partial X}{\partial x^{i}}$. We first compute the first spatial derivatives with respect to $y$:
(6)
$$\frac{\partial}{\partial y^{i}}\left(\phi-Z\right)=\frac{2}{d^{2}}\left\langle\partial^{y}_{i},{\nu_{x}}-dZw\right\rangle.$$
The first derivatives with respect to $x$ are slightly more complicated:
(7)
$$\frac{\partial}{\partial x^{i}}\left(\phi-Z\right)=\frac{\partial\phi}{\partial x^{i}}-\frac{2}{d}\left({h^{x}}_{i}^{p}\langle w,\partial^{x}_{p}\rangle-Z\langle w,\partial_{i}^{x}\rangle\right).$$
The left, and therefore right, sides of equations (6) and (7) vanish at $(x_{0},y_{0},t_{0})$.
Now we differentiate further to find the second derivatives: Using the fact that the first derivatives of $Z$ with respect to $y$ vanish, we find
$$\displaystyle\frac{\partial^{2}}{\partial y^{i}\partial y^{j}}\left(\phi-Z\right)=\frac{2}{d^{2}}\left\{\left\langle{h^{y}}_{ij}{\nu_{y}},dZw-{\nu_{x}}\right\rangle+Z\left\langle\partial^{y}_{i},\partial^{y}_{j}\right\rangle\right\}$$
(8)
$$\displaystyle=\frac{2}{d^{2}}\left(Z\delta_{ij}-{h^{y}}_{ij}\right).$$
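The second equality in (8) uses the observation that the vanishing of (6) makes $\nu_{x}-dZw$ orthogonal to $T_{y}M$, while $|\nu_{x}-dZw|^{2}=1-2dZ\langle w,\nu_{x}\rangle+(dZ)^{2}=1$ since $dZ=2\langle w,\nu_{x}\rangle$ by the definition of $Z$; hence $\nu_{x}-dZw=\nu_{y}$. A one-line symbolic check of the unit-length claim:

```python
import sympy as sp

s = sp.symbols('s', real=True)   # s = <w, nu_x>
dZ = 2 * s                       # d*Z = 2 <w, nu_x>, from Z = 2 <X(x)-X(y), nu_x>/d^2
norm2 = 1 - 2 * dZ * s + dZ**2   # |nu_x - dZ w|^2 with |nu_x| = |w| = 1
assert sp.expand(norm2) == 1
```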
Differentiating (6) with respect to the $x$ coordinates gives the mixed partial derivatives:
(9)
$$\frac{\partial^{2}}{\partial x^{j}\partial y^{i}}\left(\phi-Z\right)=-\frac{2}{d^{2}}\left(Z\delta_{j}^{p}-{h^{x}}_{j}^{p}\right)\langle\partial^{y}_{i},\partial^{x}_{p}\rangle-\frac{2}{d}\frac{\partial\phi}{\partial x^{j}}\langle w,\partial^{y}_{i}\rangle.$$
Differentiating (7) with respect to the $x$ coordinates gives:
(10)
$$\displaystyle\frac{\partial^{2}}{\partial x^{i}\partial x^{j}}\left(\phi-Z\right)=\frac{2}{d^{2}}\left(Z\delta_{ij}-{h^{x}}_{ij}\right)+Z{h^{x}}_{jp}\delta^{pq}{h^{x}}_{qi}-\frac{2}{d}\nabla_{p}{h^{x}}_{ij}\delta^{pq}\langle w,\partial^{x}_{q}\rangle$$
$$\displaystyle\quad-Z^{2}{h^{x}}_{ij}+\frac{2}{d}\frac{\partial\phi}{\partial x^{j}}\langle w,\partial^{x}_{i}\rangle+\frac{2}{d}\frac{\partial\phi}{\partial x^{i}}\langle w,\partial^{x}_{j}\rangle+\frac{\partial^{2}\phi}{\partial x^{i}\partial x^{j}}.$$
Finally we compute the time derivative:
$$\displaystyle\frac{\partial}{\partial t}\left(\phi-Z\right)=\frac{\partial\phi}{\partial t}+\frac{2F_{x}}{d^{2}}-\frac{2F_{y}}{d^{2}}\langle{\nu_{y}},{\nu_{x}}-dZw\rangle-\frac{2}{d}\langle w,\nabla F_{x}\rangle-Z^{2}F_{x}$$
(11)
$$\displaystyle=\frac{\partial\phi}{\partial t}+\frac{2F_{x}}{d^{2}}-\frac{2F_{y}}{d^{2}}-\frac{2}{d}\langle w,\nabla F_{x}\rangle-Z^{2}F_{x}.$$
Combining equations (8)–(11) and the inequalities at $(x_{0},y_{0},t_{0})$ we obtain
$$\displaystyle 0\leq-\frac{\partial}{\partial t}\left(\phi-Z\right)+\dot{F}_{x}^{ij}\left(\frac{\partial^{2}}{\partial x^{i}\partial x^{j}}\left(\phi-Z\right)+2\frac{\partial^{2}}{\partial x^{i}\partial y^{j}}\left(\phi-Z\right)+\frac{\partial^{2}}{\partial y^{i}\partial y^{j}}\left(\phi-Z\right)\right)$$
(12)
$$\displaystyle=-\frac{\partial\phi}{\partial t}+\dot{F}_{x}^{ij}\nabla_{i}\nabla_{j}\phi+\phi\dot{F}_{x}^{ij}{h^{x}}_{ip}\delta^{pq}{h^{x}}_{qj}-\frac{4F_{x}}{d^{2}}+\frac{4}{d^{2}}\dot{F}_{x}^{ij}{h^{x}}_{iq}\delta^{qp}\langle\partial^{y}_{j},\partial^{x}_{p}\rangle$$
$$\displaystyle\quad+\frac{2F_{y}}{d^{2}}-\frac{2}{d^{2}}\dot{F}_{x}^{ij}{h^{y}}_{ij}+\frac{4Z}{d^{2}}\dot{F}_{x}^{ij}\delta_{ij}-\frac{4Z}{d^{2}}\dot{F}_{x}^{ij}\langle\partial^{x}_{i},\partial^{y}_{j}\rangle+\frac{4}{d}\dot{F}_{x}^{ij}\frac{\partial\phi}{\partial x^{i}}\langle w,\partial^{x}_{j}-\partial^{y}_{j}\rangle.$$
Now note that, by the homogeneity of $F$, $F_{x}=\dot{F}_{x}^{ij}{h^{x}}_{ij}$, so that
$$-\frac{4F_{x}}{d^{2}}+\frac{4}{d^{2}}\dot{F}_{x}^{ij}{h^{x}}_{iq}\delta^{qp}\langle\partial^{y}_{j},\partial^{x}_{p}\rangle=-\frac{4}{d^{2}}\dot{F}_{x}^{ij}{h^{x}}_{iq}\delta^{qp}\left(\delta_{jp}-\langle\partial^{y}_{j},\partial^{x}_{p}\rangle\right).$$
We can also write
$$\frac{4Z}{d^{2}}\dot{F}_{x}^{ij}\delta_{ij}-\frac{4Z}{d^{2}}\dot{F}_{x}^{ij}\langle\partial^{x}_{i},\partial^{y}_{j}\rangle=\frac{4Z}{d^{2}}\dot{F}_{x}^{ij}\left(\delta_{ij}-\langle\partial^{y}_{j},\partial^{x}_{i}\rangle\right).$$
To control the first two terms on the second line of (12) we use the following observation:
Lemma 5.
If $F$ is concave, then for any $y\neq x$ we have
$$\dot{F}_{x}^{ij}{h^{y}}_{ij}\geq F_{y}.$$
If $F$ is convex, then the reverse inequality holds.
Proof of Lemma.
Let $A={h^{x}}$ and $B={h^{y}}$. Then concavity of $F$ gives
$$F(B)\leq F(A)+\dot{F}_{A}\left(B-A\right)=F(A)+\dot{F}_{A}(B)-\dot{F}_{A}(A).$$
The homogeneity of $F$ gives by the Euler relation that $\dot{F}_{A}(A)=F(A)$, yielding
$$F(B)\leq\dot{F}_{A}(B)$$
as claimed. The inequality is reversed for $F$ convex.
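A numerical illustration of the lemma (an assumed example, not from the paper): $F(A)=(\det A)^{1/n}$ is concave and homogeneous of degree one on positive-definite symmetric matrices, so both the Euler relation $\dot{F}_{A}(A)=F(A)$ and the concave-case inequality $\dot{F}_{A}(B)\geq F(B)$ can be checked on random samples:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

def F(A):
    # Assumed example speed: F(A) = (det A)^(1/n), concave and homogeneous of
    # degree one on positive-definite symmetric matrices.
    return np.linalg.det(A) ** (1.0 / n)

def dF(A):
    # Derivative of F at A: dF_A = (F(A)/n) A^{-1}, so dF_A(B) = tr(dF_A @ B).
    return (F(A) / n) * np.linalg.inv(A)

# Euler's relation dF_A(A) = F(A), and the concave case of Lemma 5: dF_A(B) >= F(B).
for _ in range(100):
    M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)
    M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)
    assert abs(np.trace(dF(A) @ A) - F(A)) < 1e-9
    assert np.trace(dF(A) @ B) >= F(B) - 1e-9
```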
∎
Using these observations, together with the identity for $\frac{\partial\phi}{\partial x^{i}}$ coming from the vanishing of $\frac{\partial}{\partial x^{i}}\left(\phi-Z\right)$ in equation (7), we find:
$$\displaystyle 0\leq-\frac{\partial\phi}{\partial t}+\dot{F}_{x}^{ij}\nabla_{i}\nabla_{j}\phi+\phi\dot{F}_{x}^{ij}{h^{x}}_{ip}\delta^{pq}{h^{x}}_{qj}$$
$$\displaystyle\quad+\frac{4}{d^{2}}\dot{F}_{x}^{ij}\left(Z\delta_{ip}-{h^{x}}_{ip}\right)\delta^{pq}\left(\delta_{qj}-\langle\partial^{y}_{j},\partial^{x}_{q}\rangle+2\langle w,\partial^{x}_{q}\rangle\langle w,\partial^{y}_{j}-\partial^{x}_{j}\rangle\right).$$
We now prove that the term in the final brackets is non-positive, that is,
Lemma 6.
The quantity $\delta_{qj}-\langle\partial^{y}_{j},\partial^{x}_{q}\rangle+2\langle w,\partial^{x}_{q}\rangle\langle w,\partial^{y}_{j}-\partial^{x}_{j}\rangle$ is non-positive.
Proof of Lemma.
We now choose the local coordinates $\{x^{i}\}$ and $\{y^{i}\}$ more carefully. Throughout we continue to compute at the minimum $(x_{0},y_{0},t_{0})$. Then we may choose $\partial^{y}_{n}$ and $\partial^{x}_{n}$ to be coplanar with $w$, and $\partial^{y}_{i}=\partial^{x}_{i}$ for $i=1,\dots,n-1$. This ensures that $\delta_{qj}-\langle\partial^{y}_{j},\partial^{x}_{q}\rangle+2\langle w,\partial^{x}_{q}\rangle\langle w,\partial^{y}_{j}-\partial^{x}_{j}\rangle$ is non-zero only when $q=j=n$.
We have two cases to consider, first supposing that $\langle w,\nu_{x}\rangle\geq 0$. In this case we may define $\alpha\in[0,\pi/2)$ by $\langle w,\nu_{x}\rangle=\sin\alpha$. Note that we have one final degree of freedom in the coordinates, namely the directions of $\partial_{n}^{x}$ and $\partial_{n}^{y}$. Direct $\partial_{n}^{x}$ such that $\langle w,\partial^{x}_{n}\rangle=-\cos\alpha$. Now define $\theta\in[0,\pi/2)$ and the orientation of $\partial^{y}_{n}$ by the conditions $\langle\partial^{y}_{n},\partial^{x}_{n}\rangle=-\cos 2\theta$ and $\langle\partial^{y}_{n},\nu_{x}\rangle=\sin 2\theta$. Then the vanishing of $\partial_{y_{n}}(\phi-Z)$ implies
(13)
$$\displaystyle\langle\partial^{y}_{n},\nu_{x}\rangle=2\langle w,\nu_{x}\rangle\langle\partial^{y}_{n},w\rangle$$
$$\displaystyle\Rightarrow\quad\sin 2\theta\cos 2\alpha=\sin 2\alpha\cos 2\theta\,.$$
That is, $\sin(2\theta-2\alpha)=0$ and we find $\theta=\alpha$. The identity (13) now implies that $\langle\partial^{y}_{n},w\rangle=\cos\theta$ and we may compute,
$$\displaystyle\delta_{qj}-\langle\partial^{y}_{j},\partial^{x}_{q}\rangle+2\langle w,\partial^{x}_{q}\rangle\langle w,\partial^{y}_{j}-\partial^{x}_{j}\rangle=1+\cos(2\theta)+2\cos\theta(-\cos\theta-\cos\theta)$$
$$\displaystyle=2\cos^{2}\theta-4\cos^{2}\theta=-2\cos^{2}\theta\leq 0.$$
The second case, namely that of $\langle w,\nu_{x}\rangle\leq 0$, is proved similarly; this time we define $\alpha\in[0,\pi/2)$ by $\langle w,\nu_{x}\rangle=-\sin\alpha$, directing $\partial_{n}^{x}$ such that $\langle w,\partial^{x}_{n}\rangle=\cos\alpha$. In this case we define $\theta\in[0,\pi/2)$ and the orientation of $\partial^{y}_{n}$ to satisfy the conditions $\langle\partial^{y}_{n},\partial^{x}_{n}\rangle=-\cos 2\theta$ and $\langle\partial^{y}_{n},\nu_{x}\rangle=\sin 2\theta$. A similar calculation as in the first case then yields $\theta=\alpha$ but (13) instead implies $\langle\partial^{y}_{n},w\rangle=-\cos\theta$. We now compute:
$$\displaystyle\delta_{qj}-\langle\partial^{y}_{j},\partial^{x}_{q}\rangle+2\langle w,\partial^{x}_{q}\rangle\langle w,\partial^{y}_{j}-\partial^{x}_{j}\rangle=1+\cos(2\theta)-2\cos\theta(\cos\theta+\cos\theta)$$
$$\displaystyle=2\cos^{2}\theta-4\cos^{2}\theta=-2\cos^{2}\theta\leq 0.$$
∎
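The trigonometric computations in both cases can be verified symbolically; the following sketch encodes the first case in the plane spanned by the orthonormal pair $(\partial^{x}_{n},\nu_{x})$ (the explicit coordinates are an illustrative choice, not from the text):

```python
import sympy as sp

theta, alpha = sp.symbols('theta alpha', real=True)
# First case of Lemma 6, in the plane spanned by the orthonormal pair
# (partial^x_n, nu_x).
dx = sp.Matrix([1, 0])                                # partial^x_n
nu = sp.Matrix([0, 1])                                # nu_x
w = sp.Matrix([-sp.cos(alpha), sp.sin(alpha)])        # <w,dx>=-cos(a), <w,nu>=sin(a)
dy = sp.Matrix([-sp.cos(2*theta), sp.sin(2*theta)])   # <dy,dx>=-cos(2t), <dy,nu>=sin(2t)

# (13): <dy,nu> = 2<w,nu><dy,w> reduces to sin(2t)cos(2a) = sin(2a)cos(2t).
eq13 = dy.dot(nu) - 2 * w.dot(nu) * dy.dot(w)
assert sp.simplify(eq13 - (sp.sin(2*theta)*sp.cos(2*alpha)
                           - sp.sin(2*alpha)*sp.cos(2*theta))) == 0

# With theta = alpha, the bracket of Lemma 6 equals -2 cos^2(theta) <= 0.
bracket = 1 - dy.dot(dx) + 2 * w.dot(dx) * w.dot(dy - dx)
assert sp.simplify(bracket.subs(alpha, theta) + 2*sp.cos(theta)**2) == 0
```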
The matrix $\dot{F}_{x}^{ij}\left(Z\delta_{ip}-{h^{x}}_{ip}\right)\delta^{pq}$ is non-negative definite and symmetric (since the factors are each non-negative definite and commute), so in particular the component with $j=q=n$ is non-negative. We therefore conclude that
$$0\leq-\frac{\partial\phi}{\partial t}+\dot{F}_{x}^{ij}\nabla_{i}\nabla_{j}\phi+\phi\dot{F}_{x}^{ij}{h^{x}}_{ip}g_{x}^{pq}{h^{x}}_{qj},$$
which completes the proof that $\overline{Z}$ is a viscosity subsolution of (3).
In the case where $F$ is convex and we consider $\underline{Z}$ instead of $\overline{Z}$, all inequalities are reversed and we deduce that $\underline{Z}$ is a supersolution of (3).
∎
4. Conclusions and remarks
We mention here some immediate implications of the non-collapsing result:
(1).
Interior non-collapsing for concave $F$ rules out blow-up limits such as the product of the grim reaper with $\mathbb{R}^{n-1}$ (if the initial hypersurface has positive $F$), since this has the interior sphere curvature $\overline{Z}$ asymptotically constant while the speed $F$ approaches zero, violating Corollary 3. The exterior non-collapsing does not appear to rule out this possibility. Note that without the assumption of embeddedness, such singularities do indeed occur, even in mean curvature flow.
(2).
In the case of mean curvature flow where both interior and exterior non-collapsing hold, we are able to deduce directly that for uniformly convex hypersurfaces all principal curvatures are comparable, implying a simple proof of the Huisken and Gage-Hamilton theorems on the asymptotic behaviour for convex solutions [Huiskenconvex, GH]. If only one-sided non-collapsing holds then we cannot immediately conclude such a strong result, but nevertheless the convergence arguments in the convex case become rather easy: For example, in the case where $F$ is convex, we have $\underline{Z}(x,t)\geq\varepsilon F(x,t)\geq\varepsilon\kappa_{\max}(x,t)$, from which it follows that the circumradius (bounded by the reciprocal of $\underline{Z}(x,t)$ for any $x$) is bounded by $\varepsilon^{-1}$ times the inradius. No such result holds in the case where $F$ is concave, however — this should not be surprising since there are examples of concave, homogeneous degree one functions $F$ such that convex hypersurfaces can evolve to be non-convex under equation (1) (see [AMZConvexHypersurfaces]*Example 1).
(3).
As in the case of mean curvature flow, analogues of Corollary 3 hold with $F$ replaced by any positive solution of the linearized flow (3). In particular we can allow star-shaped initial hypersurfaces even if $F$ is not positive, by using the solution
$\langle X,\nu\rangle+2tF$ of (3).
References
On Iwahori-Hecke algebras for $p$-adic loop groups: double coset basis and Bruhat order
Dinakar Muthiah
Abstract.
We study the $p$-adic loop group Iwahori-Hecke algebra $\mathcal{H}(G^{+},I)$ constructed by Braverman, Kazhdan, and Patnaik in [bkp] and give positive answers to two of their conjectures. First, we algebraically develop the “double coset basis” of $\mathcal{H}(G^{+},I)$ given by indicator functions of double cosets. We prove a generalization of the Iwahori-Matsumoto formula, and as a consequence, we prove that the structure coefficients of the double coset basis are polynomials in the order of the residue field. The basis is naturally indexed by a semi-group $\mathcal{W}_{\mathcal{T}}$ on which Braverman, Kazhdan, and Patnaik define a preorder. Their preorder is a natural generalization of the Bruhat order on affine Weyl groups, and they conjecture that the preorder is a partial order. We define another order on $\mathcal{W}_{\mathcal{T}}$ which is graded by a length function and is manifestly a partial order. We prove the two definitions coincide, which implies a positive answer to their conjecture. Interestingly, the length function seems to naturally take values in $\mathbb{Z}\oplus\mathbb{Z}\varepsilon$ where $\varepsilon$ is “infinitesimally” small.
1. Introduction
Let $\mathbf{G}$ be a Kac-Moody group equipped with a choice of positive Borel subgroup $\mathbf{B}$. Let $F$ be a local field, $\mathcal{O}$ be its ring of integers, $\pi$ be a choice of uniformizer, and $k$ be the residue field of $\mathcal{O}$. Let $G=\mathbf{G}(F)$, let $K=\mathbf{G}(\mathcal{O})$, and let the Iwahori subgroup $I$ be those elements of $K$ that lie in $\mathbf{B}(k)$ modulo the uniformizer.
When $\mathbf{G}$ is finite-dimensional, the Iwahori-Hecke algebra $\mathcal{H}(G,I)$ is defined to be the set of complex valued functions on $G$ that are $I$-biinvariant and supported on finitely many $I$ double cosets. The multiplication in $\mathcal{H}(G,I)$ is given by convolution. The $I$ double cosets of $G$ are indexed by the affine Weyl group. The “double coset basis” of $\mathcal{H}(G,I)$ is given by indicator functions of $I$ double cosets, and the structure coefficients of this basis are given by the Iwahori-Matsumoto presentation of the algebra. Alternatively, Bernstein gave another presentation of $\mathcal{H}(G,I)$ by making use of the principal series representation of $G$. In this presentation, $\mathcal{H}(G,I)$ is generated by a finite Hecke algebra and the group algebra of the coweight lattice of $G$.
In the case when $\mathbf{G}$ is an untwisted affine Kac-Moody group, i.e. a “loop group”, the definition of Iwahori-Hecke algebra due to Braverman, Kazhdan and Patnaik [bkp] is more subtle. An initial issue is that the Cartan decomposition no longer holds. To handle this, let $G^{+}$ be the subset of $G$ where the Cartan decomposition does hold, and restrict attention to only those $I$-biinvariant functions whose support is contained in $G^{+}$. Then one can prove that $G^{+}$ is in fact a semi-group, and therefore the condition of having support in $G^{+}$ is preserved under convolution. Moreover, they prove that the convolution is well defined, i.e. the structure coefficients are finite. Also, the condition of being supported on finitely many cosets is preserved under convolution (i.e. one does not need to pass to a completion as one does in the spherical case [bk, gr]). Let us write $\mathcal{H}(G^{+},I)$ for the $p$-adic loop group Iwahori-Hecke algebra consisting of the set of complex-valued functions on $G^{+}$ supported on finitely many $I$ double cosets.
Additionally, Braverman, Kazhdan, and Patnaik prove that $\mathcal{H}(G^{+},I)$ has a Bernstein-type presentation under which it is generated by an affine Hecke algebra and the semi-group algebra of the Tits cone of $\mathbf{G}$.
In this way, they show that $\mathcal{H}(G^{+},I)$ is a version of Cherednik’s DAHA (see [cher]). The only difference is that Cherednik’s DAHA arises when one uses the coroot lattice of $G$ instead of the Tits cone in the Bernstein presentation (see Section 2.4.3).
1.1. The double coset basis
There is another basis of $\mathcal{H}(G^{+},I)$: the “double coset basis” given by indicator functions of $I$ double cosets. The $I$ double cosets contained in $G^{+}$ are naturally indexed by the semi-group $\mathcal{W}_{\mathcal{T}}$, which is the semi-direct product of the Weyl group of $G^{+}$ with the Tits cone. The structure coefficients of this basis are given by the cardinalities of certain finite sets (see Theorem 2.45) that arise from $p$-adic integration and are mysterious from an algebraic perspective. Braverman, Kazhdan, and Patnaik conjecture [bkp, Section 1.2.4] that there should be a combinatorial way to develop this basis; in particular, the structure constants of this basis should be polynomials in $q$, the order of the residue field $k$.
In this paper, we give a way to combinatorially develop the coset basis of $\mathcal{H}(G^{+},I)$. To do this, we prove a generalization of the Iwahori-Matsumoto relation (see Theorem 3.1 and its left-handed variation 3.15) that holds in the case of loop groups.
Combining this with the algorithm developed in [bkp, Section 6.2] for writing the generators of the Bernstein presentation in terms of the coset basis we give a positive answer to the conjecture of Braverman, Kazhdan, and Patnaik:
Theorem 1.1.
The structure constants of the double coset basis of $\mathcal{H}(G^{+},I)$ are polynomials in $q$, the order of the residue field $k$.
1.2. The Bruhat order
In the second part of this paper, we study candidates for the Bruhat order on $\mathcal{W}_{\mathcal{T}}$, which is the semi-group indexing the double coset basis of $\mathcal{H}(G^{+},I)$. One candidate is proposed in [bkp, Section B.2]. The authors define the notion of a double affine root, and associated to such a root $\beta$, they define a reflection $s_{\beta}$. If $w,ws_{\beta}\in\mathcal{W}_{\mathcal{T}}$, they declare that $w<ws_{\beta}$ if $w(\beta)$ is positive, and $w>ws_{\beta}$ otherwise. The Bruhat preorder is then defined to be the preorder generated by such inequalities. This definition generalizes a similar characterization of the Bruhat order for Weyl groups. However in the case of $\mathcal{W}_{\mathcal{T}}$, it is not at all clear that this preorder is in fact a partial order. Braverman, Kazhdan and Patnaik conjecture [bkp, Section B.2] that this preorder is a partial order.
We propose another candidate for the Bruhat order. The first ingredient is a new length function on $\mathcal{W}_{\mathcal{T}}$ whose definition (see Definition 4.14) is inspired by our generalized Iwahori-Matsumoto formula. We define an order generated by double affine reflections as above, but we say that $w<ws_{\beta}$ if the length of $ws_{\beta}$ is greater than the length of $w$. This order is manifestly a partial
order because it is graded by a length function. We then prove the following.
Theorem 1.2.
The two notions of Bruhat order agree. In particular, the Bruhat preorder considered by Braverman, Kazhdan, and Patnaik is in fact a partial order. This gives a positive answer to their conjecture.
However, we must note that the length function used in the second partial ordering is not a naive generalization of the Coxeter length function, which takes values in $\mathbb{N}$. Instead, our length function takes values in $\mathbb{Z}\oplus\mathbb{Z}\varepsilon$ ordered lexicographically (i.e. so that $\varepsilon$ is “infinitesimally small” compared to an integer).
Because the ordinary length function for Weyl groups of Kac-Moody groups records the dimension of Schubert varieties, this seems to indicate that the dimensions of Schubert varieties in the “double affine flag variety” may naturally take values in $\mathbb{Z}\oplus\mathbb{Z}\varepsilon$. We do not currently have a good explanation for why this should be true geometrically, but it seems to indicate some very interesting phenomena.
1.3. Towards Kazhdan-Lusztig theory
The longer term goal is to use the algebraic theory of $\mathcal{H}(G^{+},I)$ to understand the geometry of the yet-to-be-defined double affine flag variety.
Thus to develop Kazhdan-Lusztig theory we need to accomplish the following tasks:
(1)
Explicitly understand the double coset basis. In particular, show that the structure constants depend polynomially on $q$.
(2)
Develop the strong Bruhat order.
(3)
Define and develop the Kazhdan-Lusztig involution.
In this paper, we have made progress towards the first two tasks. What remains is to explicitly understand the Kazhdan-Lusztig involution, which we plan to address in a future paper. In finite type, this reduces to understanding $SL_{2}$, where the flag variety is $\mathbb{P}^{1}$. However, in the double affine case we do not have such a simplification, essentially because the double affine Weyl group is far from being a Coxeter group.
1.4. The work of Bardy-Panse, Gaussent, Rousseau and generalizations
We should mention that at the same time as this work, independent work by Bardy-Panse, Gaussent and Rousseau has appeared [bgr], which defines Iwahori-Hecke algebras in the general Kac-Moody case. Their main technical tool is the notion of a hovel, a generalization of the notion of the affine building to Kac-Moody groups. We do not use hovels; instead, we make repeated use of the familiar Iwahori factorization to carry out our computations. We both produce the same generalization of the Iwahori-Matsumoto formula [bgr, Proposition 4.1]. They do not, however, study the Bruhat order.
Our results in Section 3 are currently stated only in the untwisted affine case, but the methods of proof are not specific to this case. We have restricted to this case because we use the results of Braverman, Kazhdan, and Patnaik for reasons related to the well-definedness of the algebra. If one knows well-definedness more generally, our proofs should work without modification.
1.5. Acknowledgements
I thank Alexander Braverman, Manish Patnaik, and Anna Puskás for numerous fruitful conversations.
2. Preliminaries
2.1. Kac-Moody root data
The following definitions are standard, and we refer the reader to [kac, kumar, tits] for more details. Because we will mostly be working with coweights and coroots, we use the superscript $\vee$ to refer to weights and roots, which is opposite to the usual convention.
Let $P$ be a finite-rank lattice, i.e. a finite-rank free abelian group, and let $P^{\vee}$ be its dual lattice. We will call $P$ the coweight lattice and $P^{\vee}$ the weight lattice. Let $I$ be a finite indexing set, and suppose we are given two embeddings
(2.1)
$$\displaystyle\alpha:I\hookrightarrow P$$
(2.2)
$$\displaystyle\alpha^{\vee}:I\hookrightarrow P^{\vee}$$
For $i\in I$, we will follow the usual notation and write $\alpha_{i}$ (resp. $\alpha_{i}^{\vee}$) for $\alpha(i)$ (resp. $\alpha^{\vee}(i)$). Let $\Pi=\{\alpha_{i}|i\in I\}$ and $\Pi^{\vee}=\{\alpha^{\vee}_{i}|i\in I\}$. We call $\Pi$ (resp. $\Pi^{\vee}$) the set of simple coroots (resp. simple roots). A Kac-Moody root datum $D$ is a tuple $(P,P^{\vee},I,\alpha,\alpha^{\vee})$ as above such that the matrix $A=(\langle\alpha_{i},\alpha^{\vee}_{j}\rangle)_{i,j\in I}$ is a generalized Cartan matrix. Let $Q$ be the sublattice of $P$ generated by $\Pi$, and let $Q^{\vee}$ be the sublattice of $P^{\vee}$ generated by $\Pi^{\vee}$. We call $Q$ (resp. $Q^{\vee}$) the coroot lattice (resp. the root lattice).
To each $i\in I$, we let $s_{i}$ be the linear automorphism of $P$ given by the following formula.
(2.3)
$$\displaystyle s_{i}(\mu)=\mu-\langle\mu,\alpha_{i}^{\vee}\rangle\alpha_{i}$$
The Weyl group $W$ of the root datum is defined to be the subgroup of $GL(P)$ generated by $\{s_{i}\mid i\in I\}$. It is known that $W$ is a Coxeter group.
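As a concrete sanity check of formula (2.3), the simple reflections can be implemented in a few lines. This is our own illustration, not part of the paper: we take $P=\mathbb{Z}^{2}$ spanned by the simple coroots and the affine $SL_{2}$ generalized Cartan matrix, so all numerical data below are our choices.

```python
# Sanity check of the simple-reflection formula (2.3):
#   s_j(mu) = mu - <mu, alpha_j^vee> * alpha_j.
# Toy datum (ours): affine SL2, P = Z^2 with basis the simple coroots,
# so alpha_j is the j-th standard basis vector and
# <alpha_i, alpha_j^vee> = A[i][j].

A = [[2, -2],
     [-2, 2]]  # generalized Cartan matrix of affine SL2

def pair(mu, j):
    """<mu, alpha_j^vee> for mu = mu[0]*alpha_0 + mu[1]*alpha_1."""
    return sum(mu[i] * A[i][j] for i in range(2))

def s(j, mu):
    """Simple reflection s_j acting on P."""
    c = pair(mu, j)
    return tuple(mu[i] - (c if i == j else 0) for i in range(2))

mu = (3, 1)
# Each simple reflection is an involution:
assert s(0, s(0, mu)) == mu and s(1, s(1, mu)) == mu
# In affine type the product s_0 s_1 is a translation, so iterating it
# keeps moving mu instead of returning to it:
x = mu
for _ in range(6):
    x = s(0, s(1, x))
assert x != mu
```

Here $W$ is infinite (the affine Weyl group of $SL_{2}$), as expected outside finite type.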
There is an obvious notion of direct sum of root data, and we say that a root datum is irreducible if it cannot be written as a direct sum of non-trivial root data.
2.1.1. Fundamental (co)weights and $\rho^{\vee}$
The dominant cone $P^{++}\subset P$ is defined by the following.
(2.4)
$$\displaystyle P^{++}=\{\lambda\in P\mid\langle\lambda,\alpha^{\vee}_{i}\rangle\geq 0\text{ for all }i\in I\}$$
The Tits cone $\mathcal{T}\subset P$ is defined as $\mathcal{T}=\bigcup_{w\in W}w(P^{++})$.
We say that a set $\{\Lambda_{i}\mid i\in I\}$ of coweights indexed by $I$ is a set of fundamental coweights if the following holds for all $i,j\in I$.
(2.5)
$$\displaystyle\langle\Lambda_{i},\alpha^{\vee}_{j}\rangle=\delta_{i,j}$$
Similarly we say that a set of weights $\{\Lambda^{\vee}_{i}\mid i\in I\}$ is a set of fundamental weights if the analogous statement holds with the positions of the superscript $\vee$ reversed.
Let us choose fundamental coweights and fundamental weights, and let us define
(2.6)
$$\displaystyle\rho^{\vee}=\sum\Lambda^{\vee}_{i}$$
Note that unlike in the case of a finite-dimensional Kac-Moody algebra (i.e. a semi-simple Lie algebra), in general we must make a choice to define the fundamental coweights and weights, because the simple coroots (resp. roots) do not form a basis of the coweight (resp. weight) space.
2.2. Kac-Moody groups
To a Kac-Moody root datum $D$, Tits [tits] associates a group functor $\mathbf{G}_{D}$ on the category of commutative rings called the Kac-Moody group functor associated to $D$. Unless the generalized Cartan matrix of $D$ is finite-type, this group functor is infinite-dimensional and will not be representable by a scheme. However, $\mathbf{G}_{D}$ is representable by an affine group ind-scheme of ind-finite type (see [mathieu]). We will only refer to a single root datum at a time, so we will drop the subscript $D$.
The group $\mathbf{G}$ comes equipped with a pair of Borel subgroups $\mathbf{B}^{+}$ and $\mathbf{B}^{-}$. The subgroups $\mathbf{U}^{+}$ and $\mathbf{U}^{-}$ are their respective unipotent radicals, and the subgroup $\mathbf{A}=\mathbf{B}^{+}\cap\mathbf{B}^{-}$ is a finite-dimensional split torus. Note that we work with the minimal Kac-Moody group, so neither $\mathbf{B}^{+}$ nor $\mathbf{B}^{-}$ are completed.
We have natural identifications:
(2.7)
$$\displaystyle P=\operatorname{Hom}(\mathbb{G}_{m},\mathbf{A})$$
and
(2.8)
$$\displaystyle P^{\vee}=\operatorname{Hom}(\mathbf{A},\mathbb{G}_{m})$$
Moreover, we can identify $\mathbf{A}=\mathbf{B}^{+}/\mathbf{U}^{+}=\mathbf{B}^{-}/\mathbf{U}^{-}$.
2.2.1. Roots and inversion sets.
If we take points over $\mathbb{C}$ (any characteristic zero field would do), we can look at $\mathbf{A}(\mathbb{C})$ acting on $\mathfrak{u}^{+}=\text{Lie}(\mathbf{U}^{+})$ via the adjoint action. The set of positive roots $\Delta_{+}$ is the set of weights for this action. Similarly, the negative roots $\Delta_{-}$ are the weights for the action on $\mathfrak{u}^{-}=\text{Lie}(\mathbf{U}^{-})$. By the construction of $\mathbf{G}$, one sees that the simple roots are positive roots, and the set of real roots is defined to consist of those roots obtained by translating simple roots by the Weyl group.
Let us write $\Delta_{\text{re}}$ for the set of real roots, $\Delta_{+,\text{re}}$ for the set of positive real roots, and $\Delta_{-,\text{re}}$ for the set of negative real roots.
For each $w\in W$, define the inversion set $\Delta(w,-)$ by
(2.9)
$$\displaystyle\Delta(w,-)=\{\beta^{\vee}\in\Delta_{+}\mid w(\beta^{\vee})\in\Delta_{-}\}$$
2.2.2. Lifting Weyl group elements
For each $i\in I$, we have an $\mathbf{SL_{2}}$-root subgroup
(2.10)
$$\displaystyle\varphi_{i}:\mathbf{SL_{2}}\hookrightarrow\mathbf{G}$$
Let us define
(2.11)
$$\displaystyle\overline{s_{i}}=\varphi_{i}\left(\begin{bmatrix}0&-1\\
1&0\\
\end{bmatrix}\right)$$
The map
(2.12)
$$\displaystyle s_{i}\mapsto\overline{s_{i}}$$
is a homomorphism from the braid group corresponding to $W$ to $\mathbf{G}$. In particular, we can define $\overline{w}=\overline{s_{i_{1}}}\cdots\overline{s_{i_{k}}}\in\mathbf{G}$ where $w=s_{i_{1}}\cdots s_{i_{k}}$ is a reduced decomposition. To declutter the notation we will omit the overline, and simply write $w\in\mathbf{G}$ to denote $\overline{w}$.
2.2.3. Steinberg relations
To each real root $\beta$, there is an associated one-parameter subgroup
(2.13)
$$\displaystyle\ x_{\beta}:\mathbb{G}_{a}\rightarrow\mathbf{G}$$
If $\beta$ is positive, this morphism factors through $\mathbf{U}^{+}$, and if $\beta$ is negative it factors through $\mathbf{U}^{-}$.
Following [tits] and [bkp, Section 2.2.1], we say that a set $\Psi\subset\Delta_{\text{re}}$ of real roots is pre-nilpotent if there exist $w,w^{\prime}\in W$ such that
(2.14)
$$\displaystyle w\Psi\subset\Delta_{+,\text{re}}$$
(2.15)
$$\displaystyle w^{\prime}\Psi\subset\Delta_{-,\text{re}}$$
Given a pre-nilpotent pair $\{\alpha,\beta\}$, set $\theta(\alpha,\beta)=(\mathbb{N}\alpha+\mathbb{N}\beta)\cap\Delta_{\text{re}}$. Then for any total order on $\theta(\alpha,\beta)-\{\alpha,\beta\}$, there exists a unique set of integers $k(\alpha,\beta;\gamma)$ such that for any ring $S$ we have
(2.16)
$$\displaystyle x_{\alpha}(u)x_{\beta}(\widetilde{u})x_{\alpha}(-u)x_{\beta}(-\widetilde{u})=\prod_{\gamma=m\alpha+n\beta\in\theta(\alpha,\beta)-\{\alpha,\beta\}}x_{\gamma}(k(\alpha,\beta;\gamma)u^{m}{\widetilde{u}}^{n})$$
for all $u,\widetilde{u}\in S$.
2.2.4. Affine Kac-Moody group
For our purposes, an untwisted affine Kac-Moody group is a Kac-Moody group whose generalized Cartan matrix appears in the classification given in [kac, Chapter 4, Table Aff 1]. These groups are of central interest because they can be constructed from the loop groups of finite-type Kac-Moody groups. This relationship is well documented, so we refer the reader to [kac, kumar, bkp] for details. There is a more general notion of affine Kac-Moody group that includes twisted loop groups. We do not address this case, so from now on we will simply write “affine” to mean “untwisted affine”. Below we recall a few relevant facts about affine Kac-Moody root data.
There is a canonical central cocharacter $\delta\in Q$, and a canonical imaginary root $\delta^{\vee}\in Q^{\vee}$. We get a natural map:
(2.17)
$$\displaystyle P\rightarrow\mathbb{Z},\ \mu\mapsto\langle\mu,\delta^{\vee}\rangle$$
This is called the level of the coweight.
We write $P_{k}$ for the level-$k$ elements of $P$. In the affine case, we can describe the Tits cone explicitly:
(2.18)
$$\displaystyle\mathcal{T}=\mathcal{T}_{0}\sqcup\bigsqcup_{k>0}P_{k}$$
where $\mathcal{T}_{0}=\{r\delta\mid r\in\mathbb{Z}\}$.
2.2.5. “Affine” Weyl groups
Because the Weyl group $W$ acts on the abelian group $P$ by automorphisms, we can form the semi-direct product
(2.19)
$$\displaystyle\mathcal{W}_{P}=W\ltimes P$$
For $\mu\in P$, we denote by $\pi^{\mu}$ the corresponding element of $\mathcal{W}_{P}$. The pair $(w,\mu)\in\mathcal{W}_{P}$ will be written $w\pi^{\mu}$.
One can easily verify that $Q\subset P$ and $\mathcal{T}\subset P$ are each preserved under the Weyl group action. So we can also form:
(2.20)
$$\displaystyle\mathcal{W}_{Q}=W\ltimes Q$$
and
(2.21)
$$\displaystyle\mathcal{W}_{\mathcal{T}}=W\ltimes\mathcal{T}$$
Because $\mathcal{T}$ is not closed under subtraction, $\mathcal{W}_{\mathcal{T}}$ is only a semi-group.
When $\mathbf{G}$ is a simply-connected finite-type Kac-Moody group, we have
(2.22)
$$\displaystyle\mathcal{W}_{Q}=\mathcal{W}_{P}=\mathcal{W}_{\mathcal{T}}$$
but in general we have
(2.23)
$$\displaystyle\mathcal{W}_{Q}\subset\mathcal{W}_{P}\supset\mathcal{W}_{\mathcal%
{T}}$$
In general, $\mathcal{W}_{Q}$ and $\mathcal{W}_{\mathcal{T}}$ are not comparable.
When $\mathbf{G}$ is affine type, $\mathcal{W}_{\mathcal{T}}$ has a natural “level” grading by non-negative integers, where $\left(\mathcal{W}_{\mathcal{T}}\right)_{n}=\{w\pi^{\mu}\in\mathcal{W}_{\mathcal{T}}\mid\text{level}(\mu)=n\}$. For each non-negative integer $n$, we say that $(\mathcal{W}_{\mathcal{T}})_{n}$ is the set of elements in $\mathcal{W}_{\mathcal{T}}$ of level $n$.
2.3. Taking $p$-adic points
2.3.1. Non-archimedean local fields
Let $F$ be a non-archimedean local field. This means that $F$ is either isomorphic to the field of Laurent series over a finite field, or it is isomorphic to a finite extension of the field $\mathbb{Q}_{p}$ of $p$-adic numbers.
Let $\mathcal{O}$ be the ring of integers in $F$, let $\pi\in\mathcal{O}$ be a uniformizing element, and let $k$ be the residue field of $\mathcal{O}$. We let $q$ denote the cardinality of $k$.
2.3.2. Various subgroups of the $p$-adic group.
We write $G=\mathbf{G}(F)$. Abusing terminology, we call $G$ a $p$-adic group even if $F$ has positive characteristic.
We write $K=\mathbf{G}(\mathcal{O})$. We write $U^{+}_{\mathcal{O}}=\mathbf{U}^{+}(\mathcal{O})$, $U^{-}_{\mathcal{O}}=\mathbf{U}^{-}(\mathcal{O})$, $A_{\mathcal{O}}=\mathbf{A}(\mathcal{O})$, and $U^{-}_{\pi}=\{u\in\mathbf{U}^{-}(\mathcal{O})\mid u\equiv 1\mod\pi\}$.
The Iwahori subgroup $I$ is defined as
(2.24)
$$\displaystyle I=\{i\in K\mid i\in\mathbf{B}^{+}(k)\mod\pi\}$$
We then have the following group decomposition known as the Iwahori factorization (see [im, Section 2] and [bkp, Section 3.1.2]).
Proposition 2.25.
(2.26)
$$\displaystyle I=U^{+}_{\mathcal{O}}\cdot U^{-}_{\pi}\cdot A_{\mathcal{O}}$$
This also holds if we reorder the three factors in any way.
We will also need the following lemma.
Lemma 2.27.
(2.28)
$$\displaystyle\left(\mathbf{U}^{+}(F)\mathbf{U}^{-}(F)\right)\cap\mathbf{G}(\mathcal{O})=\mathbf{U}^{+}(\mathcal{O})\mathbf{U}^{-}(\mathcal{O})$$
Proof.
When $\mathbf{G}$ is untwisted affine (which is the only case where we will actually need the lemma), this lemma is [bkp, Appendix A.7] (see also [bgkp, Lemma 3.3]).
We give another argument that works in general. We claim the product $\mathbf{U}^{+}\mathbf{U}^{-}\subset\mathbf{G}$ is a closed sub-indscheme defined over $\mathbb{Z}$. For each antidominant weight $\lambda$, consider the integrable representation $L(\lambda)$ of lowest weight $\lambda$. It is known that this representation is defined over $\mathbb{Z}$. Let $v_{\lambda}$ be a lowest weight vector generating the lowest weight line, and let $v_{\lambda}^{*}$ be the covector in the dual representation of weight $-\lambda$ such that $\langle v_{\lambda}^{*},v_{\lambda}\rangle=1$. We consider the following function on $\mathbf{G}$
(2.29)
$$\displaystyle\Delta_{\lambda}:g\mapsto\langle v_{\lambda}^{*},gv_{\lambda}\rangle$$
Using the Bruhat decomposition one can verify that, up to nilpotents, $\mathbf{U}^{+}\mathbf{U}^{-}$ is cut out by the equations $\Delta_{\lambda}=1$ as $\lambda$ varies over all anti-dominant weights.
In particular, regardless of nilpotents, we see that $\mathbf{U}^{+}\mathbf{U}^{-}$ is a closed subscheme defined over $\mathbb{Z}$.
We appeal to the following general fact: Let $B$ be a commutative ring, and let $A$ be a subring of $B$. Suppose $\mathbf{Y}$ is an affine scheme defined over $A$, and suppose $\mathbf{X}\subset\mathbf{Y}$ is a closed subscheme defined over $A$. Then we have
(2.30)
$$\displaystyle\mathbf{X}(A)=\mathbf{X}(B)\cap\mathbf{Y}(A)$$
In particular, this also applies when $\mathbf{X}$ and $\mathbf{Y}$ are ind-affine ind-schemes. As $\mathbf{G}$ is an ind-affine ind-scheme, we apply this in the case of $\mathbf{U}^{+}\mathbf{U}^{-}\subset\mathbf{G}$.
∎
2.3.3. Failure of the Cartan and Iwahori decomposition
Recall that we identified $P$ with the cocharacter lattice of algebraic homomorphisms from $\mathbb{G}_{m}$ to $\mathbf{A}$. Taking $F$-points, for each $\mu\in P$, we obtain a group homomorphism
(2.31)
$$\displaystyle F^{*}\rightarrow\mathbf{A}(F)$$
We denote the image of $\pi$ under this map by $\pi^{\mu}$.
If $\mathbf{G}$ is finite-type, i.e. it is a split semi-simple group, then we have the Cartan decomposition.
(2.32)
$$\displaystyle G=\bigsqcup_{\lambda\in P^{++}}K\pi^{\lambda}K$$
However, Garland observed [garland] that this is no longer true when $\mathbf{G}$ is infinite-type. In this case, we define the following subset of $G$.
Definition 2.33.
(2.34)
$$\displaystyle G^{+}=\bigsqcup_{\lambda\in P^{++}}K\pi^{\lambda}K$$
Theorem 2.35.
[bk, garland],[bkp, Appendix A]
If $\mathbf{G}$ is an untwisted affine Kac-Moody group, then $G^{+}$ is a sub-semi-group of $G$.
If $\mathbf{G}$ is finite-type, we also have the following Iwahori decomposition.
(2.36)
$$\displaystyle G=\bigsqcup_{w\pi^{\mu}\in\mathcal{W}_{P}}Iw\pi^{\mu}I$$
Again this fails in infinite-type, but in affine type we have the following.
Proposition 2.37.
[bkp, Proposition 3.4.2]
Suppose $\mathbf{G}$ is untwisted affine type. Then we have
(2.38)
$$\displaystyle G^{+}=\bigsqcup_{w\pi^{\mu}\in\mathcal{W}_{P}}Iw\pi^{\mu}I$$
2.3.4. The $p$-adic loop group Iwahori-Hecke algebra.
When $\mathbf{G}$ is finite-type, the group $G$ acquires a natural topology under which it is locally compact. In particular, we can choose the Haar measure normalized so that $I$ has measure $1$. In this case, the Iwahori-Hecke algebra $\mathcal{H}(G,I)$ is the space of compactly-supported complex-valued functions on $G$ that are biinvariant under $I$. The multiplication is convolution.
However, looking carefully at the definition, one sees that the existence of a Haar measure is not necessary in order to define the convolution structure on $\mathcal{H}(G,I)$. The compact-support condition is exactly the condition that a function be supported on finitely many $I$-double cosets, and the well-definedness of the multiplication corresponds exactly to the finiteness of certain sets. The following is an easy exercise in $p$-adic integration (see, for example, [im, Section 3.1]).
Proposition 2.39.
Let $\mathbf{G}$ be a finite-type Kac-Moody group. For all $x\in\mathcal{W}$, let $T_{x}$ be the indicator function of $IxI$ in $\mathcal{H}(G,I)$ and write
(2.40)
$$\displaystyle T_{x}T_{y}=\sum_{z\in\mathcal{W}}a^{z}_{x,y}T_{z}$$
then,
(2.41)
$$\displaystyle a^{z}_{x,y}=|I\backslash\left(Ix^{-1}Iz\cap IyI\right)|$$
In particular, the set of double cosets $IzI$ such that
(2.42)
$$\displaystyle I\backslash\left(Ix^{-1}Iz\cap IyI\right)\neq\varnothing$$
is finite.
When $\mathbf{G}$ is of affine type, one uses (2.41) as the definition of the convolution product. However, to obtain a well-defined multiplication, one needs to restrict to functions supported on $G^{+}$.
Definition 2.43.
Let $\mathbf{G}$ be an untwisted affine Kac-Moody group, and let $G$ be the corresponding $p$-adic group. Then the Iwahori-Hecke algebra (for the $p$-adic loop group $G$) $\mathcal{H}(G^{+},I)$ is the vector space of complex-valued functions on $G^{+}$ that are supported on finitely-many double cosets.
For all $x\in\mathcal{W}_{\mathcal{T}}$, let $T_{x}$ be the indicator function of $IxI$. Then it is clear that
(2.44)
$$\displaystyle\{T_{x}\mid x\in\mathcal{W}_{\mathcal{T}}\}$$
is a basis for $\mathcal{H}(G^{+},I)$. We call this the double coset basis of $\mathcal{H}(G^{+},I)$.
One of the main results of [bkp] is the following theorem, which says that $\mathcal{H}(G^{+},I)$ has an algebra structure coming from convolution.
Theorem 2.45.
[bkp, Theorem 5.2.1]
Let $\mathbf{G}$ be an untwisted affine Kac-Moody group, and let $x,y\in\mathcal{W}_{\mathcal{T}}$. Then for all $z\in\mathcal{W}_{\mathcal{T}}$, the set
(2.46)
$$\displaystyle I\backslash\left(Ix^{-1}Iz\cap IyI\right)$$
is finite. Let $a^{z}_{x,y}$ be the cardinality of this set. For all but finitely many $z\in\mathcal{W}_{\mathcal{T}}$, we have $a^{z}_{x,y}=0$, and the formula
(2.47)
$$\displaystyle T_{x}T_{y}=\sum_{z\in\mathcal{W}_{\mathcal{T}}}a^{z}_{x,y}T_{z}$$
defines an associative algebra structure on $\mathcal{H}(G^{+},I)$.
2.4. Various versions of the Double Affine Hecke Algebra.
2.4.1. Coxeter-Hecke Algebras
Let $W$ be a Coxeter group with simple reflections $\{s_{i}\mid i\in I\}$ where $I$ is some indexing set. To $W$ we can associate a corresponding Hecke algebra $\mathcal{H}_{W}$, which is the algebra over $R=\mathbb{C}[v,v^{-1}]$ generated by symbols $T_{w}$ for $w\in W$ subject to the following relations.
•
$T_{w_{1}}T_{w_{2}}=T_{w_{1}w_{2}}$ if $\ell(w_{1}w_{2})=\ell(w_{1})+\ell(w_{2})$ where $\ell$ is the usual length function on a Coxeter group.
•
$(T_{s_{i}}+1)(T_{s_{i}}-v^{2})=0$ for all simple reflections $s_{i}$, $i\in I$.
We will follow the usual convention and write $T_{i}$ for $T_{s_{i}}$ when $i\in I$.
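These two relations already determine the multiplication. As a minimal executable sketch (ours, not from the paper), here is the Hecke algebra of the smallest Coxeter group $W=\{e,s\}$, with elements stored as dictionaries of Laurent polynomials in $v$; it verifies the quadratic relation directly.

```python
# Toy model (ours) of the Coxeter-Hecke relations for W = {e, s}.
# An element of H_W is a dict {w: poly}; a poly in Z[v, v^{-1}] is a
# dict {exponent of v: integer coefficient}.

def padd(p, q):
    r = dict(p)
    for k, c in q.items():
        r[k] = r.get(k, 0) + c
    return {k: c for k, c in r.items() if c}

def pscale(p, k0, c0):
    """Multiply a Laurent polynomial by c0 * v^{k0}."""
    return {k + k0: c * c0 for k, c in p.items()}

def pmul(p, q):
    r = {}
    for k1, c1 in p.items():
        for k2, c2 in q.items():
            r[k1 + k2] = r.get(k1 + k2, 0) + c1 * c2
    return {k: c for k, c in r.items() if c}

def mul(a, b):
    """Product in H_W.  The only nontrivial case is the quadratic
    relation in the form T_s T_s = (v^2 - 1) T_s + v^2 T_e."""
    out = {'e': {}, 's': {}}
    for w1, p in a.items():
        for w2, q in b.items():
            pq = pmul(p, q)
            if w1 == 's' and w2 == 's':
                out['s'] = padd(out['s'], padd(pscale(pq, 2, 1), pscale(pq, 0, -1)))
                out['e'] = padd(out['e'], pscale(pq, 2, 1))
            else:
                w12 = 's' if 's' in (w1, w2) else 'e'
                out[w12] = padd(out[w12], pq)
    return {w: p for w, p in out.items() if p}

Te, Ts = {'e': {0: 1}}, {'s': {0: 1}}
# The quadratic relation (T_s + 1)(T_s - v^2) = 0:
lhs = mul({'s': {0: 1}, 'e': {0: 1}},   # T_s + 1
          {'s': {0: 1}, 'e': {2: -1}})  # T_s - v^2
assert lhs == {}
```

In particular $T_{s}$ is invertible, with $T_{s}^{-1}=v^{-2}T_{s}+(v^{-2}-1)T_{e}$; this is what makes expressions such as $T_{i}^{-1}$ in Section 3 meaningful.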
2.4.2. The Garland-Gronowski DAHA
Let $R=\mathbb{C}[v,v^{-1}]$. Consider the following $R$-module.
(2.48)
$$\displaystyle\mathbb{H}=\mathcal{H}_{W}\otimes_{R}R[P]$$
For $\mu\in P$, let us write $\Theta_{\mu}$ for the element $1\otimes\mu\in\mathbb{H}$.
Then following Garland-Gronowski [gg] and [bkp, Section 5.1] we give an algebra structure to $\mathbb{H}$ by requiring that
•
$\mathcal{H}_{W}\otimes 1$ be a copy of the Coxeter-Hecke algebra,
•
$1\otimes R[P]$ be a copy of the group algebra $R[P]$,
•
the Bernstein relation:
(2.49)
$$\displaystyle T_{i}\Theta_{\mu}-\Theta_{s_{i}(\mu)}T_{i}=(v^{-2}-1)\frac{\Theta_{\mu}-\Theta_{s_{i}(\mu)}}{1-\Theta_{-\alpha_{i}}}$$
When $\mathbf{G}$ is affine, $\mathbb{H}$ carries a natural $\mathbb{Z}$ grading where $\mathcal{H}_{W}$ has degree $0$, and $\deg\Theta_{\mu}=\text{level}(\mu)$.
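Note that the fraction on the right-hand side of (2.49) is a genuine element of $R[P]$, since $\Theta_{\mu}-\Theta_{s_{i}(\mu)}$ is divisible by $1-\Theta_{-\alpha_{i}}$. Here is a rank-one numerical check of this divisibility (our own illustration, encoding $\Theta_{n\alpha}$ as $x^{n}$, so that $s(n\alpha)=-n\alpha$):

```python
# Rank-one divisibility check behind the Bernstein relation (2.49),
# with Theta_{n*alpha} encoded as x^n (a dict {exponent: coefficient}):
# (x^n - x^{-n}) should be (1 - x^{-1}) times a Laurent polynomial.

def lmul(p, q):
    """Product of Laurent polynomials in one variable."""
    r = {}
    for a, c in p.items():
        for b, d in q.items():
            r[a + b] = r.get(a + b, 0) + c * d
    return {k: c for k, c in r.items() if c}

denominator = {0: 1, -1: -1}                 # 1 - Theta_{-alpha}
for n in range(1, 8):
    numerator = {n: 1, -n: -1}               # Theta_{n*alpha} - Theta_{s(n*alpha)}
    # claimed quotient: x^n + x^{n-1} + ... + x^{-(n-1)}  (a telescoping sum)
    quotient = {n - k: 1 for k in range(2 * n)}
    assert lmul(quotient, denominator) == numerator
```

So for $\mu=n\alpha$ the right-hand side of (2.49) equals $(v^{-2}-1)\left(\Theta_{n\alpha}+\Theta_{(n-1)\alpha}+\dots+\Theta_{-(n-1)\alpha}\right)$, visibly an element of $R[P]$.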
2.4.3. Cherednik’s DAHA and Tits DAHA
The subspace $\mathbb{H}_{Q}=\mathcal{H}_{W}\otimes_{R}R[Q]\subset\mathbb{H}$ is a subalgebra.
When $\mathbf{G}$ is untwisted affine, then $\mathbb{H}_{Q}=\mathbb{H}_{0}$ (the degree-$0$ part of $\mathbb{H}$ under the level grading) is naturally isomorphic to Cherednik’s double affine Hecke algebra [cher] (Cherednik’s parameter $t$ corresponds to $v^{-2}$, and the parameter $q$ corresponds to the central element $\Theta_{\delta}$).
We can also form the subalgebra
(2.50)
$$\displaystyle\mathbb{H}_{\mathcal{T}}=\mathcal{H}_{W}\otimes_{R}R[\mathcal{T}]$$
We propose that, when $\mathbf{G}$ is affine, this algebra be called the Tits DAHA.
2.4.4. The relationship with $\mathcal{H}(G^{+},I)$.
The following result is due to Braverman, Kazhdan, and Patnaik.
Theorem 2.51.
[bkp, Theorem 5.34]
When $\mathbf{G}$ is affine, there is an algebra isomorphism
between the Tits DAHA specialized at $v=q^{-1/2}$ and the $p$-adic loop group Iwahori-Hecke algebra.
(2.52)
$$\displaystyle\varphi:{\mathbb{H}_{\mathcal{T}}}|_{v=q^{-1/2}}\rightarrow\mathcal{H}(G^{+},I)$$
We recall the main properties of this isomorphism. First, for $w\in W$, we have $\varphi(T_{w})=T_{w}$. Second, for $\lambda$ dominant $\varphi(\Theta_{\lambda})=q^{\langle\rho^{\vee},\lambda\rangle}T_{\pi^{\lambda}}$.
Finally, for general $\mu\in\mathcal{T}$ the authors provide an explicit algorithm for writing $\varphi(\Theta_{\mu})$ in terms of the double coset basis with coefficients that are Laurent polynomials in $q$ with integer coefficients (see [bkp, Section 6.2]).
2.5. Preorders and partial order.
Recall that a preorder on a set $X$ is a binary relation $\leq$ on $X$ satisfying the following properties.
•
For all $x\in X$, $x\leq x$.
•
If $x\leq y$ and $y\leq z$, then $x\leq z$.
We write $x<y$ to mean $x\leq y$ and $x\neq y$.
We furthermore say that $\leq$ is a partial order if the following property holds: suppose $x,y\in X$ are such that $x\leq y$ and $y\leq x$, then $x=y$.
Suppose $X$ and $Y$ are both preordered sets. Then we say that a map
(2.53)
$$\displaystyle\ell:X\rightarrow Y$$
is a grading if
(2.54)
$$\displaystyle\ell(x_{1})<\ell(x_{2})\text{ whenever }x_{1}<x_{2}$$
We then have the following lemma.
Lemma 2.55.
Suppose that $X$ is a preordered set, that $Y$ is a partially ordered set, and that $\ell:X\rightarrow Y$ is a grading. Then the preorder on $X$ is a partial order. Indeed, if $x\leq y$ and $y\leq x$ with $x\neq y$, then $\ell(x)<\ell(y)$ and $\ell(y)<\ell(x)$, which is impossible in a partially ordered set.
3. The double coset basis
The algebra $\mathcal{H}(G^{+},I)$ has two natural bases: the “Bernstein basis” $\{\Theta_{\mu}T_{w}\mid\pi^{\mu}w\in\mathcal{W}_{\mathcal{T}}\}$ and the double coset basis $\{T_{\pi^{\mu}w}\mid\pi^{\mu}w\in\mathcal{W}_{\mathcal{T}}\}$.
Using the Bernstein relation (2.49), it is easy to see that the structure coefficients of the Bernstein basis are Laurent polynomials in $q$. Furthermore, Braverman, Kazhdan, and Patnaik [bkp, Section 6.2] provide an algorithm to write the Bernstein basis in terms of the double coset basis.
From this algorithm, one can see that the coefficients of the Bernstein basis vectors, when written in the double coset basis, are Laurent polynomials in $q$.
One of the results of this section is an inverse algorithm. We will develop the double coset basis combinatorially, and as a consequence we will see that the coefficients of the double coset basis when written in the Bernstein basis are Laurent polynomials in $q$. As a corollary, we see that the structure coefficients of the double coset basis are Laurent polynomials in $q$. Because these structure coefficients are known to be integers for all $q$ that are prime powers, we can conclude that the structure coefficients are in fact ordinary polynomials in $q$.
3.1. The Iwahori-Matsumoto relation
In $\mathcal{H}(G^{+},I)$, we have the following relations.
Theorem 3.1.
Let $\mu\in\mathcal{T}$ be a Tits coweight, $w\in W$ be an element of the single affine Weyl group, and let $i\in I$ be a node of the single affine Dynkin diagram. Then:
(3.2)
$$\displaystyle T_{\pi^{\mu}ws_{i}}=\begin{cases}T_{\pi^{\mu}w}T_{i}&\text{if }\langle\mu,w(\alpha_{i})\rangle>0,\text{ or }\langle\mu,w(\alpha_{i})\rangle=0\text{ and }w(\alpha_{i})>0\\ T_{\pi^{\mu}w}T_{i}^{-1}&\text{if }\langle\mu,w(\alpha_{i})\rangle<0,\text{ or }\langle\mu,w(\alpha_{i})\rangle=0\text{ and }w(\alpha_{i})<0\end{cases}$$
Proposition 3.3.
Let us suppose the setup of the above theorem.
If $\langle\mu,w(\alpha_{i})\rangle>0$ or if $\langle\mu,w(\alpha_{i})\rangle=0\text{ and }w(\alpha_{i})>0$, then
(3.4)
$$\displaystyle I\pi^{\mu}wIs_{i}I=I\pi^{\mu}ws_{i}I$$
If $\langle\mu,w(\alpha_{i})\rangle<0$ or if $\langle\mu,w(\alpha_{i})\rangle=0\text{ and }w(\alpha_{i})<0$, then
(3.5)
$$\displaystyle I\pi^{\mu}ws_{i}Is_{i}I=I\pi^{\mu}wI$$
Proof.
For the first equation, we calculate:
(3.6)
$$\displaystyle I\pi^{\mu}wIs_{i}I=I\pi^{\mu}w\cdot x_{\alpha_{i}}(\mathcal{O})\cdot s_{i}I=I\pi^{\mu}x_{w(\alpha_{i})}(\mathcal{O})ws_{i}I=$$
(3.7)
$$\displaystyle Ix_{w(\alpha_{i})}(\pi^{\langle\mu,w\alpha_{i}\rangle}\mathcal{O})\pi^{\mu}ws_{i}I=I\pi^{\mu}ws_{i}I$$
The first equality comes from the Iwahori factorization and Bruhat decompositions. The last equality follows from the assumption that $\langle\mu,w(\alpha_{i})\rangle>0$, or $\langle\mu,w(\alpha_{i})\rangle=0$ and $w(\alpha_{i})>0$.
For the second equation, we calculate:
(3.8)
$$\displaystyle I\pi^{\mu}ws_{i}Is_{i}I=I\pi^{\mu}ws_{i}x_{\alpha_{i}}(\mathcal{O})s_{i}I=I\pi^{\mu}x_{-w\alpha_{i}}(\mathcal{O})wI=$$
(3.9)
$$\displaystyle Ix_{-w(\alpha_{i})}(\pi^{\langle\mu,-w\alpha_{i}\rangle}\mathcal{O})\pi^{\mu}wI=I\pi^{\mu}wI$$
The last equality follows from our assumptions on $\mu$, $w$, and $\alpha_{i}$.
∎
This proves (3.2) up to a constant, so it remains to show that the constant is $1$.
Proof of Theorem 3.1 .
Let $\mu\in\mathcal{T}$, $w\in W$, $i\in I$. Let us consider the case when $w(\alpha_{i})$ is positive and $\langle w^{-1}(\mu),\alpha_{i}\rangle\geq 0$. The other cases are similar.
It suffices to show that
(3.10)
$$\displaystyle I\backslash(Iw^{-1}\pi^{-\mu}I\pi^{\mu}ws_{i}\cap Is_{i}I)$$
is a point.
On the one hand, we have: $Is_{i}I=Is_{i}x_{\alpha_{i}}(\mathcal{O})$. By the Iwahori factorization, we have
(3.11)
$$\displaystyle Iw^{-1}\pi^{-\mu}I\pi^{\mu}ws_{i}=Iw^{-1}\pi^{-\mu}U_{\mathcal{O}}U^{-}_{\pi}\pi^{\mu}ws_{i}$$
We need to consider all $i\in I$, $u_{+}\in U_{\mathcal{O}}$, $u_{-}\in U^{-}_{\pi}$, and $f\in\mathcal{O}$ such that
(3.12)
$$\displaystyle iw^{-1}\pi^{-\mu}u_{+}u_{-}\pi^{\mu}ws_{i}=s_{i}x_{\alpha_{i}}(f)$$
Because $I\backslash Is_{i}x_{\alpha_{i}}(\pi\mathcal{O})$ is a point, it will suffice to show that $f\in\pi\mathcal{O}$. Also note that $\pi^{-\mu}u_{+}\pi^{\mu}\in U_{\mathcal{O}}$ and $\pi^{-\mu}u_{-}\pi^{\mu}\in U^{-}_{\mathcal{O}}$ by Lemma 2.27.
Moreover, we can factorize $u_{+}=u_{1}u_{2}$ where $w^{-1}u_{1}w\in U^{+}$ and $w^{-1}u_{2}w\in U^{-}$. In particular, $\pi^{-w^{-1}(\mu)}w^{-1}u_{1}w\pi^{w^{-1}(\mu)}\in I$. We can also factorize $u_{-}=u_{3}u_{4}$ where $s_{i}w^{-1}u_{3}ws_{i}\in U^{-}$ and $s_{i}w^{-1}u_{4}ws_{i}\in U^{+}$. We can further factorize $u_{4}=u_{5}x_{-w(\alpha_{i})}(g)$ where $w^{-1}u_{5}w\in U^{+}$ and $g\in\pi\mathcal{O}$ (it is here that we use the assumption that $w(\alpha_{i})$ is positive).
So we then have the following.
(3.13)
$$\displaystyle\left(\pi^{-w^{-1}(\mu)}w^{-1}u_{2}w\pi^{w^{-1}(\mu)}\right)\left(\pi^{-w^{-1}(\mu)}w^{-1}u_{3}w\pi^{w^{-1}(\mu)}\right)\left(\pi^{-w^{-1}(\mu)}w^{-1}u_{5}w\pi^{w^{-1}(\mu)}\right)x_{-\alpha_{i}}(\pi^{\langle w^{-1}(\mu),\alpha_{i}\rangle}g-f)\in I$$
By the Steinberg relations (2.16), when we commute
$\left(\pi^{-w^{-1}(\mu)}w^{-1}u_{5}w\pi^{w^{-1}(\mu)}\right)$ past $\left(x_{-\alpha_{i}}(\pi^{\langle w^{-1}(\mu),\alpha_{i}\rangle}g-f)\right)$, we only get terms in $U^{+}_{\mathcal{O}}$. In particular, they lie in $I$. So we see the following.
(3.14)
$$\displaystyle\left(\pi^{-w^{-1}(\mu)}w^{-1}u_{2}w\pi^{w^{-1}(\mu)}\right)\left(\pi^{-w^{-1}(\mu)}w^{-1}u_{3}w\pi^{w^{-1}(\mu)}\right)x_{-\alpha_{i}}(\pi^{\langle w^{-1}(\mu),\alpha_{i}\rangle}g-f)\in I$$
Because the first two terms lie in $\{u\in U^{-}\mid s_{i}us_{i}\in U^{-}\}$, we must have $\pi^{\langle w^{-1}(\mu),\alpha_{i}\rangle}g-f\in\pi\mathcal{O}$. As we have assumed ${\langle w^{-1}(\mu),\alpha_{i}\rangle}\geq 0$, we must have $f\in\pi\mathcal{O}$.
∎
We also have the following left-hand version of the Iwahori-Matsumoto formula, whose proof is analogous to the right-hand version.
Theorem 3.15.
(Left-handed version of Theorem 3.1)
Let $\mu\in\mathcal{T}$ be a Tits coweight, $w\in W$ be an element of the single affine Weyl group, and let $i\in I$ be a node of the single affine Dynkin diagram. Then:
(3.16)
$$\displaystyle T_{s_{i}\pi^{\mu}w}=\begin{cases}T_{i}T_{\pi^{\mu}w}&\text{if }\langle\mu,\alpha_{i}\rangle>0,\text{ or }\langle\mu,\alpha_{i}\rangle=0\text{ and }w^{-1}(\alpha_{i})>0\\ T_{i}^{-1}T_{\pi^{\mu}w}&\text{if }\langle\mu,\alpha_{i}\rangle<0,\text{ or }\langle\mu,\alpha_{i}\rangle=0\text{ and }w^{-1}(\alpha_{i})<0\end{cases}$$
With these formulas, we deduce the following formula for the double coset basis elements corresponding to arbitrary coweights in the Tits cone.
Corollary 3.17.
Let $w\in W$, and let $\lambda$ be a dominant coweight, then we have
(3.18)
$$\displaystyle T_{\pi^{\lambda}}T_{w^{-1}}=T_{\pi^{\lambda}w^{-1}}=T_{w^{-1}}T_{\pi^{w\left(\lambda\right)}}$$
In particular, this implies
(3.19)
$$\displaystyle T_{\pi^{w\left(\lambda\right)}}=T_{w^{-1}}^{-1}T_{\pi^{\lambda}}T_{w^{-1}}$$
Recalling that $\varphi(\Theta_{\lambda})=q^{\langle\rho^{\vee},\lambda\rangle}T_{\pi^{\lambda}}$ for dominant coweights $\lambda$, we see that when one writes double coset basis elements in terms of the Bernstein basis, the coefficients are Laurent polynomials in $q$. Therefore, as discussed at the beginning of this section, we can conclude that the structure coefficients for the double coset basis are Laurent polynomials in $q$. Because these structure coefficients specialize to non-negative integers whenever $q$ is a prime power, we can in fact conclude the following.
Theorem 3.20.
The structure coefficients of the double coset basis are polynomials in $q$.
4. Bruhat orders and the enhanced length function
The results of this section hold for any Kac-Moody group $\mathbf{G}$, but we will be most interested in the case when $\mathbf{G}$ is affine type. We use the adjective “double-affine” to refer to many of the concepts considered in this section, but we caution that this terminology is only really appropriate when $\mathbf{G}$ is affine-type.
4.1. Double affine roots and reflections
Let us consider the space $Q^{\vee}\oplus\mathbb{Z}\pi$, which we can think of as the “double affine root lattice”. We say an element $\beta^{\vee}+n\pi$ is a (real) double affine root if $\beta^{\vee}\in\Delta_{\text{re}}$ is a real affine root. We say that $\beta^{\vee}+n\pi$ is a positive double affine real root if either $\beta^{\vee}>0$ and $n\geq 0$, or $\beta^{\vee}<0$ and $n>0$.
Definition 4.1.
Let $\beta^{\vee}+n\pi$ be a positive double affine root. We define the associated reflection as follows
(4.2)
$$\displaystyle s_{\beta^{\vee}+n\pi}=\begin{cases}\pi^{n\beta}s_{\beta}&\text{if }\beta^{\vee}>0\\ \pi^{-n\beta}s_{\beta}&\text{if }\beta^{\vee}<0\end{cases}$$
Note that this element lies in the double affine Weyl group $\mathcal{W}_{Q}$, but in general not in the Tits semi-group $\mathcal{W}_{\mathcal{T}}$.
We define an action of $\mathcal{W}_{P}$ on $Q^{\vee}\oplus\mathbb{Z}\pi$ as follows:
(4.3)
$$\displaystyle\pi^{\mu}w(\gamma+n\pi)=\pi^{\mu}(w(\gamma)+n\pi)=w(\gamma)+(n+\langle\mu,w(\gamma)\rangle)\pi$$
Remark 4.4.
This definition is a verbatim generalization of the notion of affine real roots and affine reflections when $\mathbf{G}$ is a finite-type Kac-Moody group.
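One can machine-check the action property on a toy example. The sketch below is our own rank-one model, with the composite reading of (4.3) in which $\pi^{\mu}w$ first applies $w$ and then translates, i.e. $\gamma+n\pi\mapsto w(\gamma)+(n+\langle\mu,w(\gamma)\rangle)\pi$; conventions for this pairing vary, so treat the signs here as an assumption of the sketch. We take $W=\{e,s\}$, $P=\mathbb{Z}\alpha$, roots $m\alpha^{\vee}$, and $\langle c\alpha,m\alpha^{\vee}\rangle=2cm$.

```python
import itertools

# Rank-one toy model (ours) of the action (4.3).  An element pi^{c*alpha} w
# of W ⋉ P is the pair (c, w) with w in {'e', 's'}; a double affine root
# m*alpha^vee + n*pi is the pair (m, n).  Convention (an assumption of this
# sketch): pi^mu w sends gamma + n*pi to w(gamma) + (n + <mu, w(gamma)>)*pi.

def w_on_P(w, c):
    return -c if w == 's' else c

def act(g, root):
    c, w = g
    m, n = root
    wm = -m if w == 's' else m          # w(gamma), since s(alpha^vee) = -alpha^vee
    return (wm, n + 2 * c * wm)         # <c*alpha, wm*alpha^vee> = 2*c*wm

def compose(g1, g2):
    """Semidirect product: pi^{c1}w1 * pi^{c2}w2 = pi^{c1 + w1(c2)} w1w2."""
    (c1, w1), (c2, w2) = g1, g2
    w12 = 's' if (w1 == 's') != (w2 == 's') else 'e'
    return (c1 + w_on_P(w1, c2), w12)

# (g1 * g2) . root == g1 . (g2 . root) over a grid of test cases:
group_elts = [(c, w) for c in (-2, 0, 1, 3) for w in ('e', 's')]
roots = [(1, 0), (1, 3), (-1, 2), (-1, -4)]
for g1, g2, root in itertools.product(group_elts, group_elts, roots):
    assert act(compose(g1, g2), root) == act(g1, act(g2, root))

# With this convention, the reflection s_{alpha^vee + n*pi} = pi^{n*alpha} s
# negates its own root, as a reflection should:
for n in range(-3, 4):
    assert act((n, 's'), (1, n)) == (-1, -n)
```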
4.2. The Bruhat preorder defined by Braverman, Kazhdan, and Patnaik
In [bkp, Section B.2], the authors define a preorder on $\mathcal{W}_{\mathcal{T}}$ as follows. Let $x,y\in\mathcal{W}_{\mathcal{T}}$, and suppose that there is a positive double affine root $\beta^{\vee}+n\pi$ such that
(4.5)
$$\displaystyle x=ys_{\beta^{\vee}+n\pi}$$
and
(4.6)
$$\displaystyle y(\beta^{\vee}+n\pi)\text{ is positive}$$
Then we say that $y\leq x$, and we define the (first) Bruhat preorder $\leq$ on $\mathcal{W}_{\mathcal{T}}$ to be the preorder generated by all such inequalities. It is not clear from the definition that this preorder is in fact a partial order, but the authors of [bkp] conjecture it to be so.
Remark 4.7.
The definition above is slightly different from the one given by Braverman, Kazhdan, and Patnaik. They define a preorder on all of $\mathcal{W}_{P}$ using the above formulas, and restrict this preorder to $\mathcal{W}_{\mathcal{T}}$. The most interesting situation is for elements of strictly positive level; there the two orders coincide because the positive-level elements of $\mathcal{W}_{P}$ and $\mathcal{W}_{\mathcal{T}}$ coincide. For elements of level zero, however, it is not clear whether the two orders coincide.
We believe, however, that the definition given by working in $\mathcal{W}_{P}$ and then restricting to $\mathcal{W}_{\mathcal{T}}$ is unnatural. The set of level-zero elements of $\mathcal{W}_{\mathcal{T}}$ forms a group isomorphic to the product of $W$ with a copy of $\mathbb{Z}$ corresponding to the central cocharacter. In this case, we would expect the Bruhat order on each subset $W\times\{n\}$ to be isomorphic to the Bruhat order on $W$, and the definition above gives exactly this order for level-zero elements.
Remark 4.8.
The definition considered in [bkp] also involves a right action of $\mathcal{W}_{\mathcal{T}}$ on double affine roots, but it is easy to check that it is equivalent to the one we consider.
4.3. Length function and another Bruhat order
Let us define the length function $\ell$ as follows. Lengths take values in $\mathbb{Z}\oplus\mathbb{Z}\varepsilon$, which we order lexicographically. Here $\varepsilon$ is a formal symbol which we can think of as being infinitesimally smaller than one, i.e. we have $n\varepsilon<1$ for any integer $n$.
When $\lambda$ is dominant, we define:
(4.9)
$$\displaystyle\ell(\pi^{\lambda})=2\langle\lambda,\rho^{\vee}\rangle$$
For general $\mu\in\mathcal{T}$, pick $w\in W$ so that $w(\mu)$ is dominant. Then we make the following definition.
(4.10)
$$\displaystyle\ell(\pi^{\mu})=2\langle w(\mu),\rho^{\vee}\rangle$$
The next proposition follows immediately from the definition of $\ell$ and the property that for all $w\in W$ we have:
(4.11)
$$\displaystyle w(\rho^{\vee})=\rho^{\vee}-\sum_{\beta^{\vee}\in\Delta(w,-)}\beta^{\vee}$$
Proposition 4.12.
For any Tits coweight $\mu$, we have:
(4.13)
$$\displaystyle\ell(\pi^{\mu})=\max_{w\in W}2\langle w(\mu),\rho^{\vee}\rangle$$
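In a rank-one toy model (illustrative only: the coweight $\mu$ is an integer, $W=\{\pm 1\}$ acts by sign, and $\langle\mu,\rho^{\vee}\rangle=\mu$), the maximum in (4.13) can be computed directly, giving $\ell(\pi^{\mu})=2|\mu|$:

```python
# Rank-one toy for Proposition 4.12: W = {+1, -1} acts on a coweight mu by
# sign, and we take <mu, rho^vee> = mu, so ell(pi^mu) = max_w 2<w(mu), rho^vee>.
W = (1, -1)

def ell_big(mu: int) -> int:
    """The length of the lattice element pi^mu in this toy model."""
    return max(2 * (w * mu) for w in W)
```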
Definition 4.14.
We define the length function $\ell:\mathcal{W}_{\mathcal{T}}\rightarrow\mathbb{Z}\oplus\mathbb{Z}\varepsilon$ as follows. For $\mu\in\mathcal{T}$, we define $\ell(\pi^{\mu})$ using the above formulas, and for general elements $\pi^{\mu}w\in\mathcal{W}_{\mathcal{T}}$ we define:
(4.15)
$$\displaystyle\ell(\pi^{\mu}w)=\ell(\pi^{\mu})+\varepsilon\cdot\left(|\{\beta^{\vee}\in\Delta(w^{-1},-):\langle\mu,\beta^{\vee}\rangle\geq 0\}|-|\{\beta^{\vee}\in\Delta(w^{-1},-):\langle\mu,\beta^{\vee}\rangle<0\}|\right)$$
Definition 4.16.
Suppose $\pi^{\mu}w\in\mathcal{W}_{\mathcal{T}}$. Then we can write $\ell(\pi^{\mu}w)=\ell_{\text{big}}(\pi^{\mu}w)+\ell_{\text{small}}(\pi^{\mu}w)\varepsilon$. We call $\ell_{\text{big}}$ the big length and $\ell_{\text{small}}$ the small length.
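Values in $\mathbb{Z}\oplus\mathbb{Z}\varepsilon$ with the lexicographic order can be modeled as integer pairs $(\ell_{\text{big}},\ell_{\text{small}})$; for instance, Python compares tuples lexicographically, so the defining property $n\varepsilon<1$ for every integer $n$ holds automatically:

```python
# A length value big + small*eps in Z + Z*eps is modeled as the tuple
# (big, small); Python's lexicographic tuple comparison is exactly the
# order on Z + Z*eps in which the big part dominates.
def lt(x, y):
    """Strict comparison of length values given as (big, small) tuples."""
    return x < y
```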
Lemma 4.17.
The length function satisfies the following recursive relation.
(4.18)
$$\displaystyle\ell(\pi^{\mu}ws_{i})=\begin{cases}\ell(\pi^{\mu}w)+\varepsilon&\text{if }\langle\mu,w(\alpha_{i})\rangle>0\text{, or if }\langle\mu,w(\alpha_{i})\rangle=0\text{ and }w(\alpha_{i})>0\,,\\ \ell(\pi^{\mu}w)-\varepsilon&\text{if }\langle\mu,w(\alpha_{i})\rangle<0\text{, or if }\langle\mu,w(\alpha_{i})\rangle=0\text{ and }w(\alpha_{i})<0\,.\end{cases}$$
Note that the dichotomy of this recurrence is precisely the dichotomy of the generalized Iwahori-Matsumoto relations for the Tits DAHA that we produced in the previous section.
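In the rank-one toy model (again purely illustrative: $W=\{\pm 1\}$ acting by sign, one positive root $\alpha$ with $\langle\mu,\alpha\rangle=2\mu$, so that $\Delta(w^{-1},-)=\{\alpha\}$ for $w=s$), Definition 4.14 and the recursion of Lemma 4.17 can be checked against each other mechanically:

```python
# Rank-one toy model: W = {1, s} encoded as +1/-1 acting by sign, with the
# single positive root alpha paired against a coweight m by <m, alpha> = 2m.
def length(m, w):
    """Length of pi^m w as a (big, small) pair (Definitions 4.14 and 4.16)."""
    big = 2 * abs(m)
    if w == 1:
        return (big, 0)
    # Delta(w^{-1}, -) = {alpha} and <m, alpha> = 2m
    return (big, 1 if 2 * m >= 0 else -1)

def length_after_right_s(m, w):
    """Right-hand recursion of Lemma 4.17 applied to pi^m w and s."""
    big, small = length(m, w)
    pairing = 2 * m * w  # <m, w(alpha)> = 2*m*w in this toy
    if pairing > 0 or (pairing == 0 and w > 0):  # w(alpha) > 0 iff w = +1
        return (big, small + 1)
    return (big, small - 1)
```

Looping over small $m$ confirms that the recursion reproduces the length of $\pi^{m}ws$ computed directly from the definition.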
We also have the following left-hand version of the above recursion relation.
Lemma 4.19.
The length function satisfies the following recursive relation.
(4.20)
$$\displaystyle\ell(s_{i}\pi^{\mu}w)=\begin{cases}\ell(\pi^{\mu}w)+\varepsilon&\text{if }\langle\mu,\alpha_{i}\rangle>0\text{, or if }\langle\mu,\alpha_{i}\rangle=0\text{ and }w^{-1}(\alpha_{i})>0\,,\\ \ell(\pi^{\mu}w)-\varepsilon&\text{if }\langle\mu,\alpha_{i}\rangle<0\text{, or if }\langle\mu,\alpha_{i}\rangle=0\text{ and }w^{-1}(\alpha_{i})<0\,.\end{cases}$$
Definition 4.21.
Let $x,y\in\mathcal{W}_{\mathcal{T}}$, and suppose that there is a positive double affine root $\beta^{\vee}+n\pi$ such that
(4.22)
$$\displaystyle x=ys_{\beta^{\vee}+n\pi}$$
and
(4.23)
$$\displaystyle\ell(x)>\ell(y)$$
Then we write $y\prec x$, and we say the (second) Bruhat order $\prec$ on $\mathcal{W}_{\mathcal{T}}$ is the partial order generated by such inequalities. Unlike in the case of the first Bruhat preorder, it is manifestly clear that $\prec$ is a partial order, because it is graded by the length function.
4.4. Proving that the two Bruhat orders coincide
The rest of this section is devoted to proving the following theorem.
Theorem 4.24.
The two orders $<$ and $\prec$ coincide.
Remark 4.25.
In particular, we see that the preorder $<$ is a partial order, which gives a positive answer to a conjecture of Braverman, Kazhdan, and Patnaik [bkp, Section B.2].
Lemma 4.26.
Let $\nu\in\mathcal{T}$, let $\beta^{\vee}$ be a positive real root, and let $\beta$ be the corresponding coroot. Suppose $\langle\nu,\beta^{\vee}\rangle>0$. Then for integers $m$ such that $0<m<\langle\nu,\beta^{\vee}\rangle$ and $\nu-m\beta\in\mathcal{T}$, we have:
(4.27)
$$\displaystyle\ell(\pi^{\nu-m\beta})<\ell(\pi^{\nu})$$
Proof.
Because the length function is invariant for lattice elements under conjugation by $W$, we can assume $\beta$ is a simple coroot $\alpha_{i}$ (choose $w$ that sends $\beta$ to $\alpha_{i}$, and replace $\nu$ by $w(\nu)$).
Then we claim
(4.28)
$$\displaystyle\langle\nu-m\alpha_{i},v(\rho^{\vee})\rangle<\langle\nu,v(\rho^{\vee})\rangle$$
for all $v\in W$.
There are two cases, depending on whether $v^{-1}(\alpha_{i})$ is positive or negative.
If $v^{-1}(\alpha_{i})$ is positive, then applying $v^{-1}$ and using the fact that $m>0$, we have the inequality.
If $v^{-1}(\alpha_{i})$ is negative, then we can write $v=s_{i}u$, where $u^{-1}(\alpha_{i})$ is positive. In this case:
(4.29)
$$\displaystyle\langle\nu-m\alpha_{i},v(\rho^{\vee})\rangle=\langle\nu-m\alpha_{%
i},s_{i}u(\rho^{\vee})\rangle=\langle\nu+(m-\langle\nu,\alpha_{i}\rangle)%
\alpha_{i},u(\rho^{\vee})\rangle$$
Because $(m-\langle\nu,\alpha_{i}\rangle)<0$, we can argue as we did in the first case.
∎
Lemma 4.30.
Let $\mu\in\mathcal{T}$, let $\beta^{\vee}$ be a positive affine real root with corresponding coroot $\beta$, and suppose $\langle\mu,\beta^{\vee}\rangle\neq 0$. Suppose $k\in\mathbb{Z}$ is such that $\mu-k\beta\in\mathcal{T}$.
Let $t=\frac{k}{\langle\mu,\beta^{\vee}\rangle}$, which is the unique real number satisfying
(4.31)
$$\displaystyle\mu-k\beta=(1-t)\cdot\mu+t\cdot s_{\beta}(\mu)$$
If $0<t<1$, then we have:
(4.32)
$$\displaystyle\ell(\pi^{\mu-k\beta})<\ell(\pi^{\mu})$$
If $t<0$ or $t>1$, then we have:
(4.33)
$$\displaystyle\ell(\pi^{\mu-k\beta})>\ell(\pi^{\mu})$$
Of course, if $t=0$ or $t=1$, we have:
(4.34)
$$\displaystyle\ell(\pi^{\mu-k\beta})=\ell(\pi^{\mu})$$
Proof.
The various cases can be handled by applying Lemma 4.26 using the following particular choices of $\nu$ and $m$.
•
If $\langle\mu,\beta^{\vee}\rangle>0$ and $0<k<\langle\mu,\beta^{\vee}\rangle$, use $\nu=\mu$ and $m=k$.
•
If $\langle\mu,\beta^{\vee}\rangle>0$ and $k>\langle\mu,\beta^{\vee}\rangle$, use $\nu=s_{\beta}(\mu)+k\beta$ and $m=k-\langle\mu,\beta^{\vee}\rangle$.
•
If $\langle\mu,\beta^{\vee}\rangle<0$ and $0>k>\langle\mu,\beta^{\vee}\rangle$, use $\nu=s_{\beta}(\mu)$ and $m=k-\langle\mu,\beta^{\vee}\rangle$.
•
If $\langle\mu,\beta^{\vee}\rangle<0$ and $k<\langle\mu,\beta^{\vee}\rangle$, use $\nu=\mu-k\beta$ and $m=-k$.
∎
Lemma 4.35.
Let $\beta^{\vee}$ be a positive (single affine) real root, and let $\mu$ be a coweight. Then
(4.36)
$$\displaystyle\left|\{\gamma^{\vee}\in\Delta(s_{\beta},-):\langle\mu,\gamma^{\vee}\rangle\geq 0\}\right|-\left|\{\gamma^{\vee}\in\Delta(s_{\beta},-):\langle\mu,\gamma^{\vee}\rangle<0\}\right|$$
is strictly positive if and only if
(4.37)
$$\displaystyle\langle\mu,\beta^{\vee}\rangle\geq 0$$
Proof.
Consider the involution $\iota$ of $\Delta(s_{\beta},-)$ defined by the following formula.
(4.38)
$$\displaystyle\iota(\gamma^{\vee})=-s_{\beta}(\gamma^{\vee})$$
It is easy to see that the only fixed point of $\iota$ is $\beta^{\vee}$.
In particular, we see that $\Delta(s_{\beta},-)$ has odd order.
Suppose $\langle\mu,\beta^{\vee}\rangle\geq 0$.
Let $\gamma^{\vee}\in\Delta(s_{\beta},-)$.
Then we must have $\langle\beta,\gamma^{\vee}\rangle\neq 0$, and we also have
(4.39)
$$\displaystyle\langle\mu,\gamma^{\vee}\rangle+\langle\mu,\iota(\gamma^{\vee})\rangle=\langle\beta,\gamma^{\vee}\rangle\langle\mu,\beta^{\vee}\rangle$$
In particular, at least one of $\langle\mu,\gamma^{\vee}\rangle$ or $\langle\mu,\iota(\gamma^{\vee})\rangle$ must have the same sign as $\langle\mu,\beta^{\vee}\rangle$ (where we interpret zero to be positive for this purpose). So a majority of the elements $\gamma^{\vee}\in\Delta(s_{\beta},-)$ must have the property that $\langle\mu,\gamma^{\vee}\rangle\geq 0$. The other case follows similarly.
∎
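The counting argument can be verified exhaustively in a small example. In type $A_2$, for the reflection in the highest root $\beta^{\vee}=\alpha_{1}^{\vee}+\alpha_{2}^{\vee}$, the inversion set $\Delta(s_{\beta},-)$ consists of all three positive roots; in the sketch below (illustrative conventions: a root $x\alpha_{1}^{\vee}+y\alpha_{2}^{\vee}$ is the pair $(x,y)$ and a coweight $(a,b)$ pairs with it as $ax+by$), the count difference is strictly positive exactly when $\langle\mu,\beta^{\vee}\rangle\geq 0$:

```python
# Type A2 check of Lemma 4.35 for beta^vee = alpha1 + alpha2 (highest root).
# A root x*alpha1 + y*alpha2 is the pair (x, y); a coweight (a, b) pairs
# with it coordinatewise. The inversion set of s_beta is all three positive roots.
INVERSIONS = [(1, 0), (0, 1), (1, 1)]
BETA = (1, 1)

def pair(mu, root):
    return mu[0] * root[0] + mu[1] * root[1]

def count_difference(mu):
    """|{gamma : <mu, gamma> >= 0}| - |{gamma : <mu, gamma> < 0}| over the inversion set."""
    nonneg = sum(1 for g in INVERSIONS if pair(mu, g) >= 0)
    return 2 * nonneg - len(INVERSIONS)
```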
Proof of Theorem 4.24.
Let $\pi^{\mu}w\in\mathcal{W}_{\mathcal{T}}$, let $\beta^{\vee}+n\pi$ be a positive double affine root, and suppose furthermore that $\pi^{\mu}ws_{\beta^{\vee}+n\pi}\in\mathcal{W}_{\mathcal{T}}$. Then we need to show that if
(4.40)
$$\displaystyle\pi^{\mu}w(\beta^{\vee}+n\pi)>0$$
then,
(4.41)
$$\displaystyle\ell(\pi^{\mu}ws_{\beta^{\vee}+n\pi})>\ell(\pi^{\mu}w)$$
and the analogous statement with the inequality signs reversed.
Let us consider the case when
•
$\beta^{\vee}>0$
•
$\pi^{\mu}w(\beta^{\vee}+n\pi)>0$
In this case,
(4.42)
$$\displaystyle s_{\beta^{\vee}+n\pi}=\pi^{n\beta}s_{\beta}$$
We have
(4.43)
$$\displaystyle\pi^{\mu}w(\beta^{\vee}+n\pi)=w(\beta^{\vee})+(n+\langle\mu,w(\beta^{\vee})\rangle)\pi$$
and we also have
(4.44)
$$\displaystyle\pi^{\mu}ws_{\beta^{\vee}+n\pi}=\pi^{\mu+nw(\beta)}ws_{\beta}$$
Therefore, we have
(4.45)
$$\displaystyle n+\langle\mu,w(\beta^{\vee})\rangle\geq 0$$
If the inequality is strict, using Lemma 4.30 we compute that $\ell_{\text{big}}(\pi^{\mu}ws_{\beta^{\vee}+n\pi})>\ell_{\text{big}}(\pi^{\mu}w)$, which implies our desired result.
So all that remains is the case where
(4.46)
$$\displaystyle n=-\langle\mu,w(\beta^{\vee})\rangle$$
In this case, we have
(4.47)
$$\displaystyle\pi^{\mu}w(\beta^{\vee}+n\pi)=w(\beta^{\vee})$$
In particular, we have $w(\beta^{\vee})>0$, and we have
(4.48)
$$\displaystyle\pi^{\mu}ws_{\beta^{\vee}+n\pi}=\pi^{s_{w(\beta)}(\mu)}ws_{\beta}=s_{w(\beta)}\pi^{\mu}w$$
By repeated use of Lemma 4.19, we see that
(4.49)
$$\displaystyle\ell(s_{w(\beta)}\pi^{\mu}w)=\ell(\pi^{\mu}w)+\varepsilon\cdot\left(\left|\{\gamma^{\vee}\in\Delta(s_{w(\beta)},-):\langle\mu,\gamma^{\vee}\rangle\geq 0\}\right|-\left|\{\gamma^{\vee}\in\Delta(s_{w(\beta)},-):\langle\mu,\gamma^{\vee}\rangle<0\}\right|\right)$$
By Lemma 4.35, we know that the sign of
(4.50)
$$\displaystyle\left(\left|\{\gamma^{\vee}\in\Delta(s_{w(\beta)},-):\langle\mu,\gamma^{\vee}\rangle\geq 0\}\right|-\left|\{\gamma^{\vee}\in\Delta(s_{w(\beta)},-):\langle\mu,\gamma^{\vee}\rangle<0\}\right|\right)$$
is the same as the sign of $\langle\mu,w(\beta^{\vee})\rangle$, which is positive (recall here that when $\langle\mu,w(\beta^{\vee})\rangle=0$ we also say it has “positive” sign for this purpose). So we have that
(4.51)
$$\displaystyle\ell(\pi^{\mu}ws_{\beta^{\vee}+n\pi})>\ell(\pi^{\mu}w)$$
as desired. The other cases follow by a similar argument.
∎
5. Some remarks when $\mathbf{G}$ is finite-type
In this section, we consider the case when $\mathbf{G}$ is of finite type and compare the usual development of the Bruhat order and length function with the proofs given in the previous section. For simplicity, let us additionally assume that $\mathbf{G}$ is simply connected. In this case, $\mathcal{W}_{P}=\mathcal{W}_{Q}=\mathcal{W}_{\mathcal{T}}$. This group is usually denoted $W_{\text{aff}}$, the (single) affine Weyl group, and notably $W_{\text{aff}}$ is a Coxeter group. There is a general Coxeter-group notion of Bruhat order on $W_{\text{aff}}$, which is graded by the usual length function $\ell_{\text{cox}}$ taking values in the non-negative integers.
Moreover, we can study the Iwahori-Hecke algebra of functions on $G(F)$ that are biinvariant for the action of the Iwahori subgroup and supported on finitely many double cosets. We have the basis given by indicator functions of double cosets $\{T_{x}\mid x\in W_{\text{aff}}\}$, and we have the usual Iwahori-Matsumoto relations.
Proposition 5.1.
[im, Corollary 3.6]
(5.2)
$$\displaystyle T_{\pi^{\mu}ws_{i}}=\begin{cases}T_{\pi^{\mu}w}T_{i}&\text{if }\ell(\pi^{\mu}ws_{i})=\ell(\pi^{\mu}w)+1\,,\\ T_{\pi^{\mu}w}T_{i}^{-1}&\text{if }\ell(\pi^{\mu}ws_{i})=\ell(\pi^{\mu}w)-1\,.\end{cases}$$
We also have the following well-known facts.
Proposition 5.3.
Let $\lambda$ be a dominant coweight. Then we have
(5.4)
$$\displaystyle\ell_{\text{cox}}(\pi^{\lambda})=\langle 2\rho^{\vee},\lambda\rangle$$
In addition, for $w\in W$, we have
(5.5)
$$\displaystyle\ell_{\text{cox}}(\pi^{w(\lambda)})=\ell_{\text{cox}}(\pi^{\lambda})$$
In addition to this classical story, the methods of section 4 apply when $\mathbf{G}$ is finite-type. In particular, by the Braverman-Kazhdan-Patnaik definition of the Bruhat order, we see that the Bruhat order considered in section 4 agrees with the general Coxeter-group definition of Bruhat order on $W_{\text{aff}}$. As a consequence, the length function $\ell:W_{\text{aff}}\rightarrow\mathbb{Z}\oplus\mathbb{Z}\varepsilon$ also gives a grading of the Bruhat order.
Let $t\in\mathbb{R}$, and let $\ell_{t}:W_{\text{aff}}\rightarrow\mathbb{R}$ be the composed map $W_{\text{aff}}\rightarrow\mathbb{Z}\oplus\mathbb{Z}\varepsilon\rightarrow\mathbb{R}$, where the first map is $\ell$ and the second map is given by setting $\varepsilon$ equal to $t$. Looking at the usual Iwahori-Matsumoto relation, we see that $\ell_{1}=\ell_{\text{cox}}$, the usual Coxeter length function on $W_{\text{aff}}$. From this we obtain the following fact.
Proposition 5.6.
For all $t\in(0,1]$, $\ell_{t}$ is a grading for the Bruhat order on $W_{\text{aff}}$.
Proof.
Suppose $x\in W_{\text{aff}}$, that $r\in W_{\text{aff}}$ is a reflection corresponding to a real affine root, and that
(5.7)
$$\displaystyle\ell_{1}(xr)>\ell_{1}(x)$$
By the results of section 4, we also have that
(5.8)
$$\displaystyle\ell(xr)>\ell(x)$$
Then we need to prove that $\ell_{t}(xr)>\ell_{t}(x)$ for all $t\in(0,1]$.
By (5.7), we have
(5.9)
$$\displaystyle\ell_{\text{big}}(xr)-\ell_{\text{big}}(x)>-\left(\ell_{\text{small}}(xr)-\ell_{\text{small}}(x)\right)$$
There are two cases.
•
$\ell_{\text{small}}(xr)-\ell_{\text{small}}(x)>0$:
In this case, (5.8) and the lexicographic order give $\ell_{\text{big}}(xr)-\ell_{\text{big}}(x)\geq 0$, while $t\cdot(\ell_{\text{small}}(xr)-\ell_{\text{small}}(x))>0$ for all $t\in(0,1]$; adding these gives our desired result.
•
$\ell_{\text{small}}(xr)-\ell_{\text{small}}(x)\leq 0$:
In this case, by (5.9), we have $\ell_{\text{big}}(xr)-\ell_{\text{big}}(x)>-(\ell_{\text{small}}(xr)-\ell_{\text{small}}(x))\geq-t\cdot(\ell_{\text{small}}(xr)-\ell_{\text{small}}(x))$ for all $t\in(0,1]$, which again gives $\ell_{t}(xr)>\ell_{t}(x)$.
∎
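The two cases above can also be checked by brute force. Writing $db=\ell_{\text{big}}(xr)-\ell_{\text{big}}(x)$ and $ds=\ell_{\text{small}}(xr)-\ell_{\text{small}}(x)$, the hypotheses are that $\ell$ increases lexicographically and that $\ell_{1}$ increases, and the conclusion is that $db+t\cdot ds>0$ for all $t\in(0,1]$:

```python
# Brute-force check of Proposition 5.6, phrased in terms of the increments
# db = ell_big(xr) - ell_big(x) and ds = ell_small(xr) - ell_small(x).
def ell_t_increases(db: int, ds: int, t: float) -> bool:
    """Whether ell_t(xr) > ell_t(x), given the increments db, ds."""
    return db + t * ds > 0

def hypotheses(db: int, ds: int) -> bool:
    """ell increases lexicographically, and ell_1 increases."""
    lex = db > 0 or (db == 0 and ds > 0)
    return lex and db + ds > 0
```

Looping over a grid of increments and of $t\in(0,1]$ confirms the grading claim on every pair satisfying the hypotheses.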
We can ask how much of this carries through when $\mathbf{G}$ is infinite-type.
Question 5.10.
Let $\mathbf{G}$ be an infinite-type Kac-Moody group. Is $\ell_{1}$ a grading for the Bruhat order on $\mathcal{W}_{\mathcal{T}}$?
It would not be very surprising if the answer to this is no. In that case, we can still ask the following weaker question.
Question 5.11.
Let $\mathbf{G}$ be an infinite-type Kac-Moody group. Can the Bruhat order on $\mathcal{W}_{\mathcal{T}}$ be graded by $\mathbb{Z}$?
References
[bgr] N. Bardy-Panse, S. Gaussent, and G. Rousseau, Iwahori-Hecke algebras for Kac-Moody groups over local fields, arXiv:1412.7503, 2014.
[bb] A. Björner and F. Brenti, Combinatorics of Coxeter groups, Graduate Texts in Mathematics 231, Springer, New York, 2005.
[bgkp] A. Braverman, H. Garland, D. Kazhdan, and M. Patnaik, An affine Gindikin-Karpelevich formula, in Perspectives in representation theory, Contemp. Math. 610, Amer. Math. Soc., Providence, RI, 2014, pp. 43–64.
[bk] A. Braverman and D. Kazhdan, The spherical Hecke algebra for affine Kac-Moody groups I, Ann. of Math. (2) 174 (2011), no. 3, 1603–1642.
[bkp] A. Braverman, D. Kazhdan, and M. Patnaik, Iwahori-Hecke algebras for p-adic loop groups, arXiv:1403.0602, 2014.
[cher] I. Cherednik, Double affine Hecke algebras and Macdonald's conjectures, Ann. of Math. (2) 141 (1995), no. 1, 191–216.
[garland] H. Garland, A Cartan decomposition for $p$-adic loop groups, Math. Ann. 302 (1995), no. 1, 151–175.
[gr] S. Gaussent and G. Rousseau, Spherical Hecke algebras for Kac-Moody groups over local fields, Ann. of Math. (2) 180 (2014), no. 3, 1051–1087.
[gg] H. Garland and I. Grojnowski, Affine Hecke algebras associated to Kac-Moody groups, arXiv:9508019, 1995.
[im] N. Iwahori and H. Matsumoto, On some Bruhat decomposition and the structure of the Hecke rings of $p$-adic Chevalley groups, Inst. Hautes Études Sci. Publ. Math. 25 (1965), 5–48.
[kac] V. G. Kac, Infinite-dimensional Lie algebras, 3rd ed., Cambridge University Press, Cambridge, 1990.
[kumar] S. Kumar, Kac-Moody groups, their flag varieties and representation theory, Progress in Mathematics 204, Birkhäuser Boston, Boston, MA, 2002.
[mathieu] O. Mathieu, Construction d'un groupe de Kac-Moody et applications, Compositio Math. 69 (1989), no. 1, 37–60.
[tits] J. Tits, Uniqueness and presentation of Kac-Moody groups over fields, J. Algebra 105 (1987), no. 2, 542–573.
Modeling Multi-Wavelength Stellar Astrometry. II. Determining Absolute Inclinations, Gravity Darkening Coefficients, and Spot Parameters of Single Stars with SIM Lite
Jeffrey L. Coughlin1,3, Thomas E. Harrison1, & Dawn M. Gelino2
1 Department of Astronomy, New Mexico State University, P.O. Box 30001, MSC 4500, Las Cruces, New Mexico 88003-8001; [email protected]
2 NASA Exoplanet Science Institute, California Institute of Technology, Pasadena, CA 91125
3 NSF Graduate Research Fellow
Abstract
We present a novel technique to determine the absolute inclination of single stars using multi-wavelength sub-milliarcsecond astrometry. The technique exploits the effect of gravity darkening, which causes a wavelength-dependent astrometric displacement parallel to a star’s projected rotation axis. We find this effect is clearly detectable using SIM Lite for various giant stars and rapid rotators, and present detailed models for multiple systems using the reflux code. We also explore the multi-wavelength astrometric reflex motion induced by spots on single stars. We find that it should be possible to determine spot size, relative temperature, and some positional information for both giant and nearby main-sequence stars utilizing multi-wavelength SIM Lite data. This data will be extremely useful in stellar and exoplanet astrophysics, as well as supporting the primary SIM Lite mission through proper multi-wavelength calibration of the giant star astrometric reference frame, and reduction of noise introduced by starspots when searching for extrasolar planets.
Subject headings: astrometry — stars: fundamental parameters
1. Introduction
SIM Lite is currently expected to have $\sim$80 spectral channels (Davidson et al., 2009), spanning 450 to 900 nm, thus allowing multi-wavelength microarcsecond astrometry, which no current or planned ground- or space-based astrometric project (GAIA, CHARA, VLT/PRIMA, etc.) is able to match. We showed in our first paper (Coughlin et al., 2010), hereafter referred to as Paper I, the implications multi-wavelength microarcsecond astrometry has for interacting binary systems. In this paper, we discuss an interesting effect we encountered while modeling binary systems, namely that gravity darkening in stars produces a wavelength-dependent astrometric offset from the center of mass that increases with decreasing wavelength. It is possible to use this effect to derive both the inclination and gravity darkening exponent of a star in certain cases.
Determining the absolute inclination of a given star has many practical applications. There is much interest in the formation of binary stars, where whether or not the spin axis of each star is aligned with the orbital axis provides insight into the formation history of the system (Turner et al., 1995). The mutual inclination between the stellar spin axes and orbital axis can greatly affect the rate of precession, which is used to probe stellar structure and test general relativity (Sterne, 1939a, b, c; Kopal, 1959; Jeffery, 1984). Albrecht et al. (2009) recently reconciled a 30-year-old discrepancy between the observed and predicted precession rate of DI Herculis through observations which showed the stellar spin axes were nearly perpendicular to the orbital axis. Along similar lines, extrasolar planets discovered via the radial velocity technique only yield the planetary mass as a function of the inclination of the orbit (Mayor & Queloz, 1995; Noyes et al., 1997; Marcy & Butler, 2000), and thus, if one assumes the planetary orbit and stellar rotation axes are nearly parallel, determining the absolute inclination of the host star yields the absolute mass of the planet. If the stellar spin axis is found not to be parallel to the planetary orbital axis, this provides valuable insights into the planet’s formation, migration, and tidal evolution histories (Winn et al., 2006; Fabrycky & Winn, 2009). A final example is the study of whether or not the spin axes of stars in clusters are aligned, which both reveals insight into their formation processes, as well as significantly affects the determination of the distances to those clusters (Jackson & Jeffries, 2010).
Our proposed technique can also be used in conjunction with other methods of determining stellar inclination to yield more precise inclination values and other stellar parameters of interest. Gizon & Solanki (2003) and Ballot et al. (2006) have shown that one can derive the inclination of the rotation axis for a given star using the techniques of asteroseismology given high-precision photometry with continuous coverage over a long baseline, such as that provided by the CoRoT and Kepler missions. This technique is sensitive to rotation rates as slow as the Sun’s, but becomes easier with faster rotation rates. Domiciano de Souza et al. (2004) discuss how spectro-interferometry can yield both the inclination angle and amount of differential rotation for a star, parameterized by $\alpha$. For both eclipsing binaries and transiting planets, the observation of the Rossiter-McLaughlin (RM) effect can yield the relative co-inclination between the two components (Winn et al., 2006; Albrecht et al., 2009; Fabrycky & Winn, 2009). The technique we propose in this paper would be complementary to these techniques in several ways. First, it would provide an independent check on the derived inclination axis from each method, confirming or refuting the asteroseismic models and spectro-interferometric and RM techniques. Second, in principle the asteroseismic technique is not dependent on the gravity darkening coefficient $\beta_{1}$, and the spectro-interferometric technique is correlated with the value for $\alpha$; combining techniques would yield direct and robust observationally determined values for $i$, $\alpha$, and $\beta_{1}$. Finally, the accurate, observational determination of $\alpha$ and $\beta_{1}$, (along with stellar limb-darkening), is critical to accurately deriving the co-inclination from the RM effect, as well as other quantities in stellar and exoplanet astrophysics.
In this paper, we also present models for and discuss the determination of spot location, temperature, and size on single stars, which produce a wavelength-dependent astrometric signature as they rotate in and out of view. Star spots are regions on the stellar surface where magnetic flux emerges from bipolar magnetic regions, which blocks convection and thus heat transport, effectively cooling the enclosed gas, and thus are fundamental indicators of stellar magnetic activity and the internal dynamos that drive it. Işik et al. (2007) discuss how the observation of spot location, duration, stability, and temperature can probe the stellar interior and constrain models of magnetic flux transport. Through the observation of the rotation rates of starspots at varying latitudes, one is able to derive the differential rotation rate of the star (Collier Cameron, 2002), which may be directly related to the frequency of starspot cycles. Mapping spots in binary star systems provides insight into the interaction between the magnetic fields of the two components, which can cause orbital period changes (Applegate, 1992), radii inflation (López-Morales, 2007; Morales et al., 2008), and may possibly explain the $\sim$2-3 hour period gap in cataclysmic variable systems (Watson et al., 2007). Detecting and characterizing star spots via multi-wavelength astrometry would be complementary to other existing techniques, namely optical interferometry (Wittkowski et al., 2002), tomographic imaging (Donati et al., 2006; Aurière et al., 2008), photometric monitoring (Alekseev, 2004; Mosser et al., 2009), and in the future, microlensing (Hwang & Han, 2010).
We present the details of our modeling code, reflux, in §2, discuss the inclination effect and present models for multiple stars in §3, discuss the spot effects and present models in §4, and present our conclusions in §5.
2. The reflux Code
reflux is a code that computes the flux-weighted astrometric reflex motion of binary systems. (reflux can be run via a web interface from http://astronomy.nmsu.edu/jlcough/reflux.html, where additional details as to how to set up a model are presented.) We discussed the code in detail in Paper I, but in short, it utilizes the Eclipsing Light Curve (ELC) code, which was written to compute light curves of eclipsing binary systems (Orosz & Hauschildt 2000). The ELC code represents the surfaces of two stars as a grid of individual luminosity points, and calculates the resulting light curve given the provided systemic parameters. ELC includes the dominant physical effects that shape a binary’s light curve, such as non-spherical geometry due to rotation, gravity darkening, limb darkening, mutual heating, reflection effects, and the inclusion of hot or cool spots on the stellar surface. For the work in this paper we have simply turned off one of the stars, thus allowing us to probe the astrometric effects of a single star. To compute intensity, ELC can either use a blackbody formula or interpolate from a large grid of NextGen model atmospheres (Hauschildt et al. 1999). For all the simulations in this paper, we have used the model atmosphere option, and will note now, and discuss more in detail later, that the calculation of limb-darkening is automatically included in NextGen model atmospheres. These artificially derived limb-darkening coefficients have recently been shown to be in error by as much as $\sim$10-20% in comparison to observationally derived values (Claret, 2008), and thus their uncertainties must be included, although for this work, due to symmetry, we find the introduced error is negligible. For all our simulations, we model the U, B, V, R, I, J, H, and K bands for completeness and comparison to future studies, though we note that SIM Lite will not be able to observe in the U, J, H, or K bandpasses.
3. Inclination and Rotation
The astrophysical phenomenon of gravity darkening, also sometimes referred to as gravity brightening, is the driving force behind the ability to determine the inclination of a single star using multi-wavelength astrometry. A rotating star is geometrically distorted into an oblate spheroid, such that its equatorial radius is greater than its polar radius, and thus the poles have a higher surface gravity, and the equator a lower surface gravity, than a non-rotating star with the same mass and average radius. This increased surface gravity, $g$, at the poles results in a higher effective temperature, T${}_{\rm eff}$, and thus luminosity; decreased $g$ at the equator results in a lower T${}_{\rm eff}$ and luminosity. This temperature and luminosity differential causes the star’s center of light, or photocenter, to be shifted towards the visible pole, away from the star’s gravitational center of mass. Since the inclination determines how much of the pole is visible, the amount of displacement between the photocenter and the center of mass is directly related to the inclination. Furthermore, since the luminosity difference effectively results from a ratio of blackbody luminosities of differing temperatures, the effect is wavelength dependent, with shorter wavelengths shifted more than longer wavelengths. Thus, the amount of displacement between the measured photocenter in two or more wavelengths is directly related to the inclination. See Figure 1 for an illustration of the effect.
An additional complicating factor is the exact dependence of temperature on local gravity. von Zeipel (1924) was the first to derive the quantitative relationship between them, showing that $T_{\rm eff}^{4}\propto g^{\beta_{1}}$, where $\beta_{1}$ is referred to as the gravity darkening exponent. The value of $\beta_{1}$ has been a subject of much study and debate; for a complete review, see Claret (2000), who presents both an excellent discussion of past studies, as well as new, detailed computations of $\beta_{1}$ using modern models of stellar atmospheres and internal structure that encompass stars from 0.08 to 40 M${}_{\sun}$. Since the value of $\beta_{1}$ affects the temperature differential between equator and pole, the multi-wavelength displacement will also be dependent on the value of $\beta_{1}$. The total amplitude of the effect will be scaled by the angular size of the star, which depends on both its effective radius and distance. Thus, in total, the components of this inclination effect are the effective stellar radius, distance, effective temperature, rotation rate, $\beta_{1}$, and inclination of the star. In principle, one is able to determine the effective stellar radius, effective temperature, rotation rate, and distance of a target star using ground-based spectroscopy and space-based parallax measurements, including from SIM Lite. Thus, when modeling the multi-wavelength displacement of the stellar photocenter, the only two components that need to be solved for are the inclination and $\beta_{1}$, with $\beta_{1}$ already having some constraints from theory.
A good trio of stars for modeling and testing this inclination effect consists of the components of the binary system Capella (Aa and Ab) and the single star Vega. Torres et al. (2009) have very recently published an extremely detailed analysis of both the binary orbit of Capella and the physical and evolutionary states of the individual components, providing new observations as well as drawing from the previous observations and analyses of Hummel et al. (1994) and Strassmeier et al. (2001). Vega, in addition to being one of the most well-studied stars in the sky, has recently been discovered to be a very rapid rotator seen nearly pole-on (Aufdenberg et al., 2006; Peterson et al., 2006; Hill et al., 2010). In total, these three stars represent both slow and rapid rotators among giant and main-sequence stars at a range of temperatures, as Capella Aa is a slow-rotating K-type giant, Capella Ab is a fast-rotating G-type giant, and Vega is a very fast-rotating A-type main-sequence star. With many ground-based interferometric observations to compare with, and being bright and nearby, these stars also present excellent targets for SIM Lite.
We use the reflux code to generate models of the astrometric displacement from U-band to H-band, with respect to the K-band photocenter, for inclinations from 0 to 90$\arcdeg$, for each star, as shown in Figures 2, 3, and 4. We use systemic parameters given by Torres et al. (2009) for Capella Aa and Ab, and by Aufdenberg et al. (2006) and Peterson et al. (2006) for Vega, listed in Tables 1, 2, and 3 respectively. We employ the model atmospheres incorporated into the ELC code, as well as automatically chosen values for $\beta_{1}$ based on Figure 1 of Claret (2000). Additionally, in each figure we show a dashed line to indicate the effect of decreasing the gravity darkening coefficient by 10% to simulate the uncertainty of the models (Claret, 2000) and explore the correlation with other parameters.
As can be seen from these models, we find that the effect is quite large for a Capella Ab-like or Vega-like fast rotator, but only marginally detectable for a slower-rotating system like Capella Aa. This also implies that this effect would not be detectable for a slow-rotating, main-sequence star like our Sun. Our modeling confirms this, showing a total U-K amplitude of $\ll$ 0.1 $\mu$as for a 1.0 M${}_{\sun}$, 1.0 R${}_{\sun}$ star with a rotation period of 30.0 days at 10.0 parsecs. These conclusions on detectability assume that, for bright stars like these, SIM Lite can achieve its microarcsecond benchmark. We show this is possible in narrow-angle (NA) mode by employing the SIM Differential Astrometry Performance Estimator (DAPE) (Plummer, 2009). For a target star with magnitude V$=$5, and a single comparison star with V$=$10 located within a degree of it on the sky, by integrating 15 seconds on the target and 30 seconds on the reference, for 10 visits at 5 chop cycles each, a final precision of $\pm$1.01 $\mu$as is achieved in only 1.04 hours of total mission time. For a fainter target with V$=$10, this precision is only reduced to $\pm$1.32 $\mu$as in the same amount of mission time. In utilizing NA mode, one must be careful in choosing the reference star(s), to ensure that they are not stars with a substantially wavelength-dependent centroid. Given that the only constraints on reference stars are that they have V $\gtrsim$ 10 and lie within one degree on the sky, one could easily choose a slow-rotating, main-sequence star, determined as such via ground-based observations, as a wavelength-independent astrometric reference star. We also note that wide-angle SIM Lite measurements, with a precision of $\sim$5 $\mu$as, may not detect the wavelength-dependent photocenter of a star like Capella Aa, but will have no difficulty detecting it in stars like Capella Ab or Vega.
The effect of decreasing the gravity darkening exponent is to decrease the total amplitude of the effect at each wavelength, with shorter wavelengths affected more than longer wavelengths. Thus, the choice of gravity darkening exponent is intimately tied to the derived inclination. If one were to model observed data with a gravity darkening exponent that was $\sim$10% different from the true value, one would derive an inclination that was also $\sim$10% different from the true inclination. However, the two combinations of inclination and gravity darkening exponent do not produce identical results, and can be distinguished with sufficient precision at a number of wavelengths. For example, if one were to adopt the nominal value for $\beta_{1}$ and derive an inclination of 40 degrees for a Vega-like star, then adopt a $\beta_{1}$ value that was 10% lower, one would derive an inclination of 43 degrees, a 7.5% change. In this case though, with the lower $\beta_{1}$ value, the measured photocenter in the U, B, V, R, I, J, and H bandpasses, with respect to the K-band photocenter, would differ from the nominal $\beta_{1}$ model by $\sim$0.5, -1.0, -2.0, -2.0, -1.6, -1.0, and 0.2 $\mu$as respectively. Note that for B, V, R, and I, where SIM Lite can observe, these discrepancies, on the order of $\sim$1.0 $\mu$as, should be large enough to be distinguished in NA mode. Thus, a unique solution exists for the values of $i$ and $\beta_{1}$ if the photocenter is measured in three or more wavelengths. (The photocenter at one wavelength is used as a base measurement with respect to which the photocenters at other wavelengths are measured; we have chosen the K band as the base measurement in our models. With the photocenter measured in three or more wavelengths total, there are two or more photocenter difference measurements for the two unknown variables to be solved for.) Another complication is the possibility of equally well-fitting high and low solutions for $i$.
For example, if one observed and determined a best-fit inclination of 70 degrees for a Vega-like star, one could obtain a reasonably good fit at 46 degrees as well (see Figure 4). However, just as in the case of the uncertainty in the value of $\beta_{1}$, discernible discrepancies would exist. In this case, the discrepancies in the measured photocenter in the U, B, V, R, I, J, and H bandpasses, with respect to the K-band photocenter, would be $\sim$0.1, -9.0, -2.0, 1.5, 6.0, 1.0, and 0.2 $\mu$as respectively. Just as in the case of the uncertainty in the value of $\beta_{1}$, this discrepancy between equally well-fitting high and low inclination solutions can be resolved if one has three or more wavelengths obtained in NA mode.
As mentioned in §2, we note that the limb-darkening function, which is automatically chosen by the ELC code as incorporated into the model atmospheres, can differ from actual observed values by $\sim$10% (Claret, 2008). We have tested how changing the limb-darkening coefficients by 10% affects the resulting astrometric displacements, and find that the effect is less than 0.5% at all wavelengths, and thus is negligible in the modeling. The reason is that limb darkening is symmetric: while increased limb darkening dims the visible pole, it also dims the rest of the star, and thus the relative brightness between regions is maintained.
Additionally, this inclination technique yields the orientation of the projected stellar rotation axis on the sky, which is parallel to the wavelength dispersion direction. When coupled with the derived inclination, this technique thus yields the full 3-dimensional orientation of the rotation axis. This could be a powerful tool in determining the overall alignment of stellar axes in the local neighborhood and in nearby clusters.
4. Star Spots
Another area of astrophysical interest to which multi-wavelength astrometric measurements from SIM Lite can contribute is the study of star spots. Because star spots are caused by intense magnetic fields at the photosphere, they are typically found in stars with convective envelopes, especially rapidly rotating stars. Thus, both low-mass, main-sequence K and M dwarfs, as well as rapidly rotating giant and sub-giant stars, are known to host large spots on their surfaces. The study of the distribution, relative temperature, and size of these spots would greatly contribute to the study of magnetic field generation in stellar envelopes. A starspot that rotates in and out of view will cause a shift of the photocenter of a single star, which has been a subject of much recent discussion in the literature (e.g. Hatzes, 2002; Unwin, 2005; Eriksson & Lindegren, 2007; Catanzarite et al., 2008; Makarov et al., 2009; Lanza et al., 2008), especially in light of its potential to mimic, or introduce noise when characterizing, an extrasolar planet. However, there has been no mention in the literature of the multi-wavelength astrometric signature of stellar spots, where, just as in the case of the gravity darkening inclination effect, we are looking at essentially two blackbodies with differing temperatures, and thus shorter wavelengths will be more affected by a spot than longer wavelengths.
To characterize the multi-wavelength astrometric signature of stellar spots, we model two spotty systems, again using the reflux code. We model Capella Ab, which shows evidence of large spots and is suspected of being an RS CVn variable (Hummel et al., 1994), and a typical main-sequence K dwarf. For Capella Ab, we use the parameters listed in Table 2, along with the star’s determined inclination of 42.788$\arcdeg$ (Torres et al., 2009), and add a cool spot with a temperature that is 60% of the average surface temperature, located at the equator at a longitude such that it is seen directly at phase 270$\arcdeg$, and with an angular size of 10$\arcdeg$ (where 90$\arcdeg$ would cover exactly one half of the star). For the K dwarf system, we use the physical parameters listed in Table 4, simulating a typical K dwarf at 10 parsecs, and add a cool spot with the same parameters as for Capella Ab. Additionally, to investigate the effects of cool versus hot spots or flares, we also run a model with a hot spot by changing the spot temperature to be 40% greater than the average surface temperature. We present our models in Figures 5, 6, and 7.
As can be seen for Capella Ab, the gravity darkening inclination effect presented in §3 dominates the spread of colors in the y-direction, the direction parallel to the star’s projected rotation axis. However, the amplitude of the spot motion is quite large, with a total amplitude of $\sim$40 $\mu$as in all bandpasses, which would be easily detectable by SIM Lite. For the K dwarf with a cool spot, we see a much smaller, but still detectable, shift of amplitude $\sim$5-8 $\mu$as, depending on the wavelength. In the case of a hot spot or flare, we see a much larger displacement, on the order of $\sim$10-200 $\mu$as, depending on the wavelength, which would be easily detectable by SIM and provide extremely precise values in deriving the spot parameters.
In general, the temperature of the spot, in relation to the mean stellar surface temperature, is related to the spread in observed wavelengths, with a larger spread indicating a larger temperature difference. The duration of the astrometric displacement in phase, coupled with the overall amplitude of the astrometric displacement, yields the size of the spot, as larger spots will cause larger displacements and be visible for a larger amount of rotational phase. The latitude of the spot can also affect the total duration. Finally, the amplitude of the astrometric displacement in the x versus the y direction is dependent on both the latitude of spot as well as the inclination of the star. Thus, when modeled together, one is able to recover these parameters. This work can also be combined with our work in Paper I to derive the location of spots in binary systems, as the astrometric signature of the spot is simply added to the astrometric signature of the binary system.
The astrometric motion induced upon a parent star by an orbiting planet does not have a wavelength dependence. Spots, however, as we have shown via our modeling, have a clear wavelength dependence. Thus, if one has a candidate planetary signal from astrometry, but it shows a wavelength-dependent motion, it must then be a false positive introduced by star spots at the rotation period of the star (assuming that the planet’s emitted flux is negligible compared to that of the star). Furthermore, when SIM is launched, there will likely be many cases where a marginally detectable signal due to a planetary companion is found at a very different period than the rotation period of the star. However, starspots will still introduce extra astrometric jitter that will degrade the signal from the planetary companion. Multi-wavelength astrometric data can be used to model and remove the spots, which will have a wavelength dependence, and thus strengthen the planetary signal, which will not.
5. Discussion and Conclusion
We have presented detailed models of the multi-wavelength astrometric displacement that SIM Lite will observe due to gravity darkening and stellar spots using the reflux code. We find that SIM Lite observations, especially when combined with other techniques, will be able to determine the absolute inclination, gravity darkening exponent, and 3-dimensional orientation of the rotational axis for fast- and slow-rotating giant stars, and fast-rotating main-sequence stars. This technique will be especially useful in probing binary star and exoplanet formation and evolution, as well as the physics of star-forming regions. Direct observational determination of the gravity darkening exponent has direct applications in both stellar and exoplanet astrophysics. This technique is also relatively inexpensive in terms of SIM Lite observing time, as one need only observe a given star once, as opposed to binary stars and planets, which require constant monitoring over an entire orbit. It should be noted that this effect should be taken into account when constructing the SIM Lite astrometric reference frame: fast-rotating giants should be excluded so as not to produce a wavelength-dependent astrometric reference frame.
We also have presented models of star spots on single stars, and find that SIM Lite should be able to discern their location, temperature, and size. Combined with other techniques, this will provide great insight into stellar differential rotation, magnetic cycles and underlying dynamos, and magnetic interaction in close binaries. From this modeling, it should especially be noted that multi-wavelength astrometry is a key tool in the hunt for extrasolar planets, either by ruling out false signals created by spots, or simply removing extra astrometric jitter introduced by spots. Thus, it remains critical that SIM Lite maintains a multi-wavelength astrometric capability in its final design.
This work was sponsored in part by a SIM Science Study (PI: D. Gelino) from the National Aeronautics and Space Administration through a contract with the Jet Propulsion Laboratory, California Institute of Technology. J.L.C. acknowledges additional support from a New Mexico Space Grant Consortium Fellowship. We thank the referee for comments which greatly helped to improve this manuscript, particularly in making the presentation of our ideas much clearer.
References
Albrecht, S., Reffert, S., Snellen, I. A. G., & Winn, J. N. 2009, Nature, 461, 373
Alekseev, I. Y. 2004, Sol. Phys., 224, 187
Applegate, J. H. 1992, ApJ, 385, 621
Aufdenberg, J. P., et al. 2006, ApJ, 645, 664
Aurière, M., et al. 2008, A&A, 491, 499
Ballot, J., García, R. A., & Lambert, P. 2006, MNRAS, 369, 1281
Catanzarite, J., Law, N., & Shao, M. 2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7013
Claret, A. 2000, A&A, 359, 289
Claret, A. 2008, A&A, 482, 259
Collier Cameron, A. 2002, Astronomische Nachrichten, 323, 336
Coughlin, J. L., et al. 2010, ApJ, 717, 776
Davidson, J., Edberg, S., Danner, R., Nemati, B., & Unwin, S., eds. 2009, SIM Lite: Astrometric Observatory (National Aeronautics and Space Administration)
Domiciano de Souza, A., Zorec, J., Jankov, S., Vakili, F., Abe, L., & Janot-Pacheco, E. 2004, A&A, 418, 781
Donati, J., Forveille, T., Cameron, A. C., Barnes, J. R., Delfosse, X., Jardine, M. M., & Valenti, J. A. 2006, Science, 311, 633
Eriksson, U., & Lindegren, L. 2007, A&A, 476, 1389
Fabrycky, D. C., & Winn, J. N. 2009, ApJ, 696, 1230
Gizon, L., & Solanki, S. K. 2003, ApJ, 589, 1009
Hatzes, A. P. 2002, Astronomische Nachrichten, 323, 392
Hill, G., Gulliver, A. F., & Adelman, S. J. 2010, ApJ, 712, 250
Hummel, C. A., Armstrong, J. T., Quirrenbach, A., Buscher, D. F., Mozurkewich, D., Elias, II, N. M., & Wilson, R. E. 1994, AJ, 107, 1859
Hwang, K., & Han, C. 2010, ApJ, 709, 327
Işik, E., Schüssler, M., & Solanki, S. K. 2007, A&A, 464, 1049
Jackson, R. J., & Jeffries, R. D. 2010, MNRAS, 402, 1380
Jeffery, C. S. 1984, MNRAS, 207, 323
Kopal, Z. 1959, Close Binary Systems
Lanza, A. F., De Martino, C., & Rodonò, M. 2008, New Astronomy, 13, 77
López-Morales, M. 2007, ApJ, 660, 732
Makarov, V. V., Beichman, C. A., Catanzarite, J. H., Fischer, D. A., Lebreton, J., Malbet, F., & Shao, M. 2009, ApJ, 707, L73
Marcy, G. W., & Butler, R. P. 2000, PASP, 112, 137
Mayor, M., & Queloz, D. 1995, Nature, 378, 355
Morales, J. C., Ribas, I., & Jordi, C. 2008, A&A, 478, 507
Mosser, B., Baudin, F., Lanza, A. F., Hulot, J. C., Catala, C., Baglin, A., & Auvergne, M. 2009, A&A, 506, 245
Noyes, R. W., Jha, S., Korzennik, S. G., Krockenberger, M., Nisenson, P., Brown, T. M., Kennelly, E. J., & Horner, S. D. 1997, ApJ, 483, L111
Peterson, D. M., et al. 2006, Nature, 440, 896
Plummer, K. 2009, TaPE Webtool User Guide 1.1
Sterne, T. E. 1939a, MNRAS, 99, 451
Sterne, T. E. 1939b, MNRAS, 99, 662
Sterne, T. E. 1939c, MNRAS, 99, 670
Strassmeier, K. G., Reegen, P., & Granzer, T. 2001, Astronomische Nachrichten, 322, 115
Torres, G., Claret, A., & Young, P. A. 2009, ApJ, 700, 1349
Turner, J. A., Chapman, S. J., Bhattal, A. S., Disney, M. J., Pongracic, H., & Whitworth, A. P. 1995, MNRAS, 277, 705
Unwin, S. C. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 338, Astrometry in the Age of the Next Generation of Large Telescopes, ed. P. K. Seidelmann & A. K. B. Monet, 37–45
von Zeipel, H. 1924, MNRAS, 84, 665
Watson, C. A., Steeghs, D., Shahbaz, T., & Dhillon, V. S. 2007, MNRAS, 382, 1105
Winn, J. N., et al. 2006, ApJ, 653, L69
Wittkowski, M., Schöller, M., Hubrig, S., Posselt, B., & von der Lühe, O. 2002, Astronomische Nachrichten, 323, 241
Bayesian Model Selection Methods for Mutual and Symmetric $k$-Nearest Neighbor Classification
Hyun-Chul Kim
H.-C. Kim is with R${}^{2}$ Research,
Seoul, South Korea.
E-mail: [email protected]
Abstract
The $k$-nearest neighbor classification method ($k$-NNC) is one of the simplest nonparametric classification methods. The mutual $k$-NN classification method (M$k$NNC) is a variant of $k$-NNC based on mutual neighborship. We propose another variant of $k$-NNC, the symmetric $k$-NN classification method (S$k$NNC), based on both mutual neighborship and one-sided neighborship. The performance of M$k$NNC and S$k$NNC depends on the parameter $k$, as that of $k$-NNC does. We propose ways to perform M$k$NN and S$k$NN classification based on Bayesian mutual and symmetric $k$-NN regression methods, together with selection schemes for the parameter $k$. Bayesian mutual and symmetric $k$-NN regression methods are based on Gaussian process models, and it turns out that they can perform M$k$NN and S$k$NN classification with new encodings of target values (class labels). The simulation results show that the proposed methods are better than or comparable to $k$-NNC, M$k$NNC and S$k$NNC with the parameter $k$ selected by the leave-one-out cross-validation method, not only for an artificial data set but also for real-world data sets.
$k$-NN classification, mutual $k$-NN classification, symmetric $k$-NN classification, Selecting $k$ in $k$-NN, symmetric $k$-NN regression, Bayesian symmetric $k$-NN regression, Gaussian processes, Bayesian model selection
I Introduction
One of the well-known nonparametric classification methods is the $k$-nearest neighbor ($k$-NN) classification method [1, 2, 3]. It uses one of the simplest rules among nonparametric classification methods: it assigns to a given test data point the most frequent class label appearing in the set of the $k$ data points nearest to the test data point. The performance of $k$-NN classifiers is influenced by the distance measure [4] and the parameter $k$ [5, 6]. (More accurately, $k$ should be called a hyperparameter since $k$-NN is a nonparametric method, but in this paper we also call it the parameter, following convention.) It is therefore an important issue to select the best distance measure and the best parameter $k$ in $k$-NN classification. In this paper we focus on the selection of the best parameter $k$ in the variants of $k$-NN classification, although the selection of the best distance measure is also important.
[5] proposed an approximate Bayesian approach to $k$-NN classification, where a single parameter $k$ was not selected but its posterior distribution was estimated. It was not exactly probabilistic because of the absence of a proper normalization constant in the model, as mentioned in [7]. It provided the class probability for a test data point and the approximate distribution of the parameter $k$ by MCMC methods. It was followed by an alternative model with likelihood-based inference and a method to select the best parameter $k$ based on the BIC (Bayesian information criterion) [8]. While those models are not fully probabilistic, [7] proposed a fully Bayesian probabilistic model for $k$-NN classification based on symmetrized Boltzmann modelling, with various kinds of sampling methods including perfect sampling. Due to the symmetrized modification, their model no longer fully reflects $k$-NN classification (e.g., it lacks the asymmetry present in $k$-NN classification). So the most probable $k$ in their model may not be optimal for $k$-NN classification.
[6] proposed another method to select the optimal parameter $k$ based on the approximate Bayes risk. They modelled class probabilities for each training data point based on $k$-NN density estimation in the leave-one-out manner. Starting from those class probabilities and applying Bayes’ rule, they obtain the accuracy index $\alpha(k)$. They proved that $1-\alpha(k)$ asymptotically converges to the optimal Bayes risk. Their simulation results showed that their proposed methods were better than cross-validation and likelihood cross-validation techniques.
Mutual $k$-NN (M$k$NN) classification is a variant of $k$-NN classification based on mutual neighborship rather than one-sided neighborship. The M$k$NN concept was first applied to clustering tasks [9, 10]. More recently, M$k$NN methods have been applied to classification [11], outlier detection [12], object retrieval [13], clustering of interval-valued symbolic patterns [14], and regression [15]. [16] applied the M$k$NN concept to semi-supervised classification of natural language data and showed that using the M$k$NN concept consistently outperforms using the $k$-NN concept.
We propose another variant of $k$-NN classification, symmetric $k$-NN (S$k$NN) classification, motivated by the symmetrized modelling used in [7]. S$k$NN considers neighbors with both mutual neighborship and one-sided neighborship. In S$k$NN classification, one-sided neighbors contribute to the decision in the same way as in $k$-NN classification, while mutual $k$-nearest neighbors contribute twice as much as one-sided $k$-nearest neighbors.
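The S$k$NN voting rule just described can be sketched as follows (our own illustrative implementation, assuming Euclidean distance; not code from the paper):

```python
import numpy as np

def knn_indices(X, x, k):
    """Indices of the k training points nearest to x (Euclidean distance)."""
    return np.argsort(np.linalg.norm(X - x, axis=1))[:k]

def sknn_classify(X, y, x, k):
    """Symmetric k-NN vote: each k-nearest neighbor of x contributes one
    vote, plus a second vote if x would in turn be among its own k nearest
    neighbors (mutual neighborship)."""
    votes = {}
    for i in knn_indices(X, x, k):
        # neighbors of x_i within (D_n \ {x_i}) ∪ {x}; x sits at the last row
        Xi = np.vstack([np.delete(X, i, axis=0), x])
        mutual = (len(Xi) - 1) in knn_indices(Xi, X[i], k)
        votes[y[i]] = votes.get(y[i], 0) + (2 if mutual else 1)
    return max(votes, key=votes.get)

X = np.array([[0.0], [1.0], [2.0], [10.0]])
y = np.array([0, 0, 1, 1])
print(sknn_classify(X, y, np.array([0.5]), k=2))  # -> 0
```

With `k` votes of weight 1 or 2, the rule reduces to ordinary $k$-NN classification whenever every neighbor relation is mutual.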
We propose Bayesian methods to select the parameter $k$ for M$k$NN and S$k$NN classification. (To our knowledge, no Bayesian model selection method for M$k$NN classification has previously been proposed.)
This paper does not propose Bayesian probabilistic models for M$k$NN and S$k$NN classification; rather, model selection methods for them in the Bayesian evidence framework are proposed. The methods are based on Bayesian M$k$NN and S$k$NN regression methods, with which M$k$NN and S$k$NN classification can be done. The model selection method for S$k$NN classification is related to the estimation of the parameter $k$ in [7], because the model proposed in [7] can be regarded as a Bayesian probabilistic model for S$k$NN classification.
The paper is organized as follows. In Section II we describe mutual $k$-NN and symmetric $k$-NN regression, and their Bayesian extensions with the selection method for the parameter $k$. In Section III we explain how M$k$NN and S$k$NN classification can be done with the Bayesian M$k$NN and S$k$NN regression methods, with their own selection schemes for the parameter $k$. In Section IV we show simulation results for an artificial data set and real-world data sets. Finally, a conclusion is drawn.
II Bayesian Mutual and Symmetric $k$-Nearest Neighbor Regression
II-A Mutual and Symmetric $k$-Nearest Neighbor Regression
Let $\mathcal{N}_{k}({\rm\bf{x}})$ be the set of the $k$ nearest neighbors of ${\rm\bf{x}}$ in $\mathcal{D}_{n}$, $\mathcal{N}^{\prime}_{k}({\rm\bf{x}}_{i})$ the set of $k$ nearest neighbors of ${\rm\bf{x}}_{i}$ in $(\mathcal{D}_{n}\backslash\{{\rm\bf{x}}_{i}\})\cup\{{\rm\bf{x}}\}.$ The set of mutual nearest neighbors of ${\rm\bf{x}}$ is defined as
$$\mathcal{M}_{k}({\rm\bf{x}})=\{{\rm\bf{x}}_{i}\in\mathcal{N}_{k}({\rm\bf{x}}):{\rm\bf{x}}\in\mathcal{N}^{\prime}_{k}({\rm\bf{x}}_{i})\}.$$
(1)
Then, the mutual $k$-nearest neighbor regression estimate is defined as
$$m^{\mbox{M$k$NNR}}_{n}({\rm\bf{x}})=\left\{\begin{array}{ll}\frac{1}{M_{k}({\rm\bf{x}})}\sum_{i:\,{\rm\bf{x}}_{i}\in\mathcal{M}_{k}({\rm\bf{x}})}y_{i}&\mbox{if $M_{k}({\rm\bf{x}})\neq 0$};\\ 0&\mbox{if $M_{k}({\rm\bf{x}})=0$},\end{array}\right.$$
(2)
where $M_{k}({\rm\bf{x}})=|\mathcal{M}_{k}({\rm\bf{x}})|$ [15].
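Eq. (2) can be sketched directly in code (an illustrative implementation assuming Euclidean distance, not code accompanying the paper):

```python
import numpy as np

def mknn_regress(X, y, x, k):
    """Mutual k-NN regression estimate of Eq. (2): average the targets of
    the k-nearest neighbors of x that also count x among their own k
    nearest neighbors; return 0 when the mutual neighbor set is empty."""
    nn = np.argsort(np.linalg.norm(X - x, axis=1))[:k]     # N_k(x)
    mutual = []
    for i in nn:
        Xi = np.vstack([np.delete(X, i, axis=0), x])       # (D_n \ {x_i}) ∪ {x}
        di = np.linalg.norm(Xi - X[i], axis=1)
        if (len(Xi) - 1) in np.argsort(di)[:k]:            # x ∈ N'_k(x_i)
            mutual.append(i)
    return float(np.mean(y[mutual])) if mutual else 0.0

X = np.array([[0.0], [1.0], [2.0], [10.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])
print(mknn_regress(X, y, np.array([0.5]), k=2))  # -> 1.5
```

A far-away query point can have no mutual neighbors at all, in which case the estimate falls back to the value 0 prescribed by Eq. (2).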
Motivated by the symmetrised modelling for the $k$-NN classification in [7], we define the symmetric $k$-nearest neighbor regression estimate as
$$m^{\mbox{S$k$NNR}}_{n}({\rm\bf{x}})=\frac{1}{N_{k}({\rm\bf{x}})+N^{\prime}_{k}({\rm\bf{x}})}\sum_{i:\,{\rm\bf{x}}_{i}\in\mathcal{N}_{k}({\rm\bf{x}})}\left(\delta_{{\rm\bf{x}}_{i}\in\mathcal{N}_{k}({\rm\bf{x}})}+\delta_{{\rm\bf{x}}\in\mathcal{N}^{\prime}_{k}({\rm\bf{x}}_{i})}\right)y_{i}.$$
(3)
where $N_{k}({\rm\bf{x}})=|\mathcal{N}_{k}({\rm\bf{x}})|$ and $N^{\prime}_{k}({\rm\bf{x}})=|\mathcal{N}^{\prime}_{k}({\rm\bf{x}})|$.
II-B Bayesian Mutual and Symmetric $k$-NN Regression via Gaussian Processes
II-B1 Gaussian Process Regression
Assume that we have a data set $D$ of data points ${\rm\bf{x}}_{i}$ with continuous target values $y_{i}$: $D=\{({\rm\bf{x}}_{i},y_{i})\,|\,i=1,2,\ldots,n\}$, $X=\{{\rm\bf{x}}_{i}\,|\,i=1,2,\ldots,n\}$, ${\rm\bf{y}}=[y_{1},y_{2},\ldots,y_{n}]^{T}$. We assume that the observations of the target values are noisy, and set $y_{i}=f({\rm\bf{x}}_{i})+\epsilon_{i}$, where $f(\cdot)$ is the target function to be estimated and $\epsilon_{i}\sim\mathcal{N}(0,v_{1})$. The function $f(\cdot)$ to be estimated given $D$ is assumed to have a Gaussian process prior, which means that any collection of its function values is assumed to be multivariate Gaussian.
The prior for the function values ${\rm\bf{f}}=[f({\rm\bf{x}}_{1})\,f({\rm\bf{x}}_{2})\,\ldots\,f({\rm\bf{x}}_{n})]^{T}$ is assumed to be Gaussian:
$$p({\rm\bf{f}}|X,\Theta_{f})=\mathcal{N}({\rm\bf{0}},{\rm\bf{C}}_{f}).$$
(4)
Then the density function for the target values can be described as follows:
$$p({\rm\bf{y}}|X,\Theta)=\mathcal{N}({\rm\bf{0}},{\rm\bf{C}}_{f}+v_{1}{\rm\bf{I}})$$
(5)
$$=\mathcal{N}({\rm\bf{0}},{\rm\bf{C}}),$$
(6)
where ${\rm\bf{C}}$ is the matrix whose element $C_{ij}$ is the covariance function value $c({\rm\bf{x}}_{i},{\rm\bf{x}}_{j})$ of ${\rm\bf{x}}_{i}$ and ${\rm\bf{x}}_{j}$, and $\Theta$ is the set of hyperparameters of the covariance function.
It can be shown that GPR provides the following predictive distribution of the target value $f_{\mathrm{new}}=f({\rm\bf{x}}_{\mathrm{new}})$ given a test data point ${\rm\bf{x}}_{\mathrm{new}}$:
$$\displaystyle p(f_{\mathrm{new}}|{\rm\bf{x}}_{\mathrm{new}},D,\Theta)=\mathcal{N}({\rm\bf{k}}^{T}{\rm\bf{C}}^{-1}{\rm\bf{y}},\kappa-{\rm\bf{k}}^{T}{\rm\bf{C}}^{-1}{\rm\bf{k}}),$$
(7)
where ${\rm\bf{k}}=[c({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{1})\ldots c({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{n})]^{T}$ and $\kappa=c({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{\mathrm{new}})$. The variance of the target value $f_{\mathrm{new}}$ is related to the degree of its uncertainty. We can select a proper $\Theta$ by
maximizing the marginal likelihood $p({\rm\bf{y}}|X,\Theta)$
[17, 18, 19],
or we can average
over the hyperparameters with MCMC methods [17, 20].
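As a concrete reference point for Eq. (7), the following sketch implements standard GPR prediction under an assumed RBF covariance (the kernel choice and all names are ours; the paper does not fix a covariance function in this subsection):

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Assumed RBF covariance c(a, b) = exp(-|a - b|^2 / (2 ell^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gpr_predict(X, y, x_new, ell=1.0, v1=0.1):
    """Eq. (7): predictive mean k^T C^{-1} y and variance
    kappa - k^T C^{-1} k, with C = C_f + v1 * I."""
    C = rbf(X, X, ell) + v1 * np.eye(len(X))
    k = rbf(x_new[None, :], X, ell)[0]
    kappa = 1.0                     # c(x_new, x_new) = 1 for this kernel
    mean = k @ np.linalg.solve(C, y)
    var = kappa - k @ np.linalg.solve(C, k)
    return mean, var
```

With a small noise variance the mean interpolates the training targets near the data, and the predictive variance grows toward $\kappa$ far from the data, which is the uncertainty behavior the text refers to.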
II-B2 Laplacian-based Covariance Matrix
The combinatorial Laplacian ${\rm\bf{L}}$ is defined as follows.
$$\displaystyle{\rm\bf{L}}={\rm\bf{D}}-{\rm\bf{W}},$$
(8)
where ${\rm\bf{W}}$ is an $N\times N$ edge-weight matrix whose entry $w_{ij}=w({\rm\bf{x}}_{i},{\rm\bf{x}}_{j})$ is the edge weight between the two points ${\rm\bf{x}}_{i}$ and ${\rm\bf{x}}_{j}$, and ${\rm\bf{D}}=\mathrm{diag}(d_{1},\ldots,d_{N})$ is a diagonal matrix with diagonal entries $d_{i}=\sum_{j}w_{ij}$.
Similarly to [21], to avoid the singularity we set the Laplacian-based covariance matrix as
$$\displaystyle{\rm\bf{C}}=({\rm\bf{L}}+\sigma^{2}{\rm\bf{I}})^{-1}=\tilde{{\rm\bf{C}}}^{-1}.$$
(9)
Then we have the Gaussian process prior
$$\displaystyle p({\rm\bf{y}}|X,\Theta)=\mathcal{N}({\rm\bf{0}},{\rm\bf{C}}).$$
(10)
The predictive distribution for $y_{\mathrm{new}}$ is as follows (see [22] for the detailed derivation).
$$\displaystyle p(y_{\mathrm{new}}|{\rm\bf{y}},X,{\rm\bf{x}}_{\mathrm{new}},\Theta)\propto\mathcal{N}\left(-\frac{1}{\tilde{\kappa}}\tilde{{\rm\bf{k}}}^{T}{\rm\bf{y}},\frac{1}{\tilde{\kappa}}\right),$$
(11)
where
$$\displaystyle\tilde{\kappa}=\sum_{i=1}^{N}w({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{i})+\sigma^{2},$$
(12)
$$\displaystyle\tilde{{\rm\bf{k}}}^{T}=-[w({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{1}),w({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{2}),\ldots,w({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{N})].$$
(13)
The mean and variance of $y_{\mathrm{new}}$ are represented as
$$\displaystyle\mu_{y_{\mathrm{new}}}=-\frac{1}{\tilde{\kappa}}\tilde{{\rm\bf{k}}}^{T}{\rm\bf{y}}=\frac{\sum_{i=1}^{N}w({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{i})y_{i}}{\sum_{i=1}^{N}w({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{i})+\sigma^{2}},$$
(14)
$$\displaystyle\sigma^{2}_{y_{\mathrm{new}}}=\frac{1}{\tilde{\kappa}}=\frac{1}{\sum_{i=1}^{N}w({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{i})+\sigma^{2}}.$$
(15)
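Eqs. (14)-(15) say that the Laplacian-based predictive mean is a weight-normalized average of the targets and that the variance is the reciprocal of the total weight plus $\sigma^{2}$; a direct sketch (our own helper, taking the weights $w({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{i})$ as precomputed):

```python
import numpy as np

def laplacian_gp_predict(w_new, y, sigma2):
    """Eqs. (14)-(15): mean is the weight-normalized average of the targets,
    variance is 1 / (total weight + sigma^2)."""
    kappa_t = w_new.sum() + sigma2          # tilde-kappa of Eq. (12)
    return (w_new @ y) / kappa_t, 1.0 / kappa_t
```

Points carrying zero weight contribute nothing to the mean, and a larger total weight (more, or stronger, graph connections to the query) shrinks the predictive variance, exactly as Eq. (15) states.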
II-B3 Bayesian Mutual and Symmetric $k$-NN Regression
First, we describe the Bayesian mutual $k$-NN regression proposed in [22]. When we replace $w_{ij}=w({\rm\bf{x}}_{i},{\rm\bf{x}}_{j})$ with the function
$$\displaystyle w_{\mathrm{M}\it{k}\mathrm{NN}}({\rm\bf{x}}_{i},{\rm\bf{x}}_{j})=\sigma_{0}\delta_{{\rm\bf{x}}_{j}\sim_{k}{\rm\bf{x}}_{i}}\cdot\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{j}},$$
(16)
where the relation $\sim_{k}$ is defined as
$$\displaystyle{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{j}=\left\{\begin{array}[]{ll}T&\mbox{if $j\neq i$ and ${\rm\bf{x}}_{j}$ is a $k$-nearest neighbor of ${\rm\bf{x}}_{i}$};\\
F&\mbox{otherwise,}\end{array}\right.$$
(17)
and apply Eq (14), we get the Bayesian mutual $k$-NN regression estimate given a new data point ${\rm\bf{x}}_{\mathrm{new}}$ as follows.
$$\displaystyle m^{\mbox{BM}k\mbox{NNR}}_{n}({\rm\bf{x}}_{\mathrm{new}})=\mu_{f_{\mathrm{new}},\mathrm{M}\it{k}\mathrm{NN}},$$
(18)
where
$$\displaystyle\mu_{f_{\mathrm{new}},\mathrm{M}\it{k}\mathrm{NN}}=\frac{\sum_{i=1}^{N}w_{\mathrm{M}\it{k}\mathrm{NN}}({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{i})y_{i}}{\sum_{i=1}^{N}w_{\mathrm{M}\it{k}\mathrm{NN}}({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{i})+\sigma^{2}}=\frac{\sum_{i=1}^{N}\delta_{{\rm\bf{x}}_{\mathrm{new}}\sim_{k}{\rm\bf{x}}_{i}}\cdot\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{\mathrm{new}}}y_{i}}{\sum_{i=1}^{N}\delta_{{\rm\bf{x}}_{\mathrm{new}}\sim_{k}{\rm\bf{x}}_{i}}\cdot\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{\mathrm{new}}}+\sigma^{2}/\sigma_{0}}.$$
(19)
We have the following two theorems about the validity of the covariance matrix with $w_{\mathrm{M}\it{k}\mathrm{NN}}({\rm\bf{x}}_{i},{\rm\bf{x}}_{j})$ and the asymptotic property of the regression estimate. (See [22] for the proofs.)
Theorem 1.
The covariance matrix $\tilde{{\rm\bf{C}}}$ with $w_{ij}=w_{\mathrm{M}\it{k}\mathrm{NN}}({\rm\bf{x}}_{i},{\rm\bf{x}}_{j})$ is valid for Gaussian processes if $\sigma^{2}>0$.
Theorem 2.
$\mu_{f_{\mathrm{new}},{\mathrm{M}{\it k}\mathrm{NN}}}=-\frac{1}{\tilde{\kappa}}\tilde{{\rm\bf{k}}}^{T}{\rm\bf{y}}$ converges to the mutual $k$-$\mathrm{NN}$ regression estimate as $\sigma^{2}/\sigma_{0}$ approaches $0$.
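A sketch of the estimate in Eq. (19) (our code; mutual-neighbor membership is decided by comparing the query's distance with each training point's $k$-th neighbor distance, a strict-inequality convention, since the paper leaves tie handling unspecified):

```python
import numpy as np

def bayes_mutual_knn_predict(X, y, x_new, k, sigma0=1.0, sigma2=1e-6):
    """Eq. (19): predictive mean with the w_MkNN weights. As sigma2/sigma0
    goes to 0 this tends to the mutual k-NN estimate (Theorem 2)."""
    d_new = np.linalg.norm(X - x_new, axis=1)
    # delta_{x_new ~k x_i}: x_i is among the k nearest training points to x_new
    in_nk_of_new = np.zeros(len(X), dtype=bool)
    in_nk_of_new[np.argsort(d_new)[:k]] = True
    # delta_{x_i ~k x_new}: x_new is closer to x_i than x_i's k-th neighbor
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    kth = np.sort(D, axis=1)[:, k - 1]
    new_in_nk_of_i = d_new < kth
    w = (in_nk_of_new & new_in_nk_of_i).astype(float)
    return (w @ y) / (w.sum() + sigma2 / sigma0)
```

Only points that are neighbors of the query in both directions get a nonzero weight, so isolated queries are pulled toward zero by the $\sigma^{2}/\sigma_{0}$ term in the denominator.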
Now, corresponding to symmetric $k$-NN regression, we propose Bayesian symmetric $k$-NN regression with the weight function
$$\displaystyle w_{\mathrm{S}\it{k}\mathrm{NN}}({\rm\bf{x}}_{i},{\rm\bf{x}}_{j})=\sigma_{0}(\delta_{{\rm\bf{x}}_{j}\sim_{k}{\rm\bf{x}}_{i}}+\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{j}}).$$
(20)
Similarly to Eq (14), the Bayesian symmetric $k$-NN regression estimate is obtained as follows.
$$\displaystyle m^{\mbox{BS}k\mbox{NNR}}_{n}({\rm\bf{x}}_{\mathrm{new}})=\mu_{f_{\mathrm{new}},\mathrm{S}\it{k}\mathrm{NN}},$$
(21)
where
$$\displaystyle\mu_{f_{\mathrm{new}},\mathrm{S}\it{k}\mathrm{NN}}=\frac{\sum_{i=1}^{N}w_{\mathrm{S}\it{k}\mathrm{NN}}({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{i})y_{i}}{\sum_{i=1}^{N}w_{\mathrm{S}\it{k}\mathrm{NN}}({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{i})+\sigma^{2}}$$
(22)
$$\displaystyle=\frac{\sum_{i=1}^{N}(\delta_{{\rm\bf{x}}_{\mathrm{new}}\sim_{k}{\rm\bf{x}}_{i}}+\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{\mathrm{new}}})y_{i}}{\sum_{i=1}^{N}(\delta_{{\rm\bf{x}}_{\mathrm{new}}\sim_{k}{\rm\bf{x}}_{i}}+\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{\mathrm{new}}})+\sigma^{2}/\sigma_{0}}.$$
(23)
The symmetric $k$-NN regression estimate in Eq (3) can be described as follows.
$$\displaystyle m_{n}^{\mathrm{S{\it k}NNR}}({\rm\bf{x}})=\frac{\sum_{i=1}^{N}w_{\mathrm{S}\it{k}\mathrm{NN}}({\rm\bf{x}},{\rm\bf{x}}_{i})y_{i}}{\sum_{i=1}^{N}w_{\mathrm{S}\it{k}\mathrm{NN}}({\rm\bf{x}},{\rm\bf{x}}_{i})}.$$
(24)
We have the following two theorems about the validity of the covariance matrix with $w_{\mathrm{S}\it{k}\mathrm{NN}}({\rm\bf{x}}_{i},{\rm\bf{x}}_{j})$ and the asymptotic property of the regression estimate. (See Appendices A and B for the proofs.)
Theorem 3.
The covariance matrix $\tilde{{\rm\bf{C}}}$ with $w_{ij}=w_{\mathrm{S}\it{k}\mathrm{NN}}({\rm\bf{x}}_{i},{\rm\bf{x}}_{j})$ is valid for Gaussian processes if $\sigma^{2}>0$.
Theorem 4.
$\mu_{f_{\mathrm{new}},{\mathrm{S}{\it k}\mathrm{NN}}}=-\frac{1}{\tilde{\kappa}}\tilde{{\rm\bf{k}}}^{T}{\rm\bf{y}}$ converges to the symmetric $k$-$\mathrm{NN}$ regression estimate as $\sigma^{2}/\sigma_{0}$ approaches $0$.
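The symmetric weights of Eq. (20) and the estimate of Eqs. (22)-(23) can be sketched the same way (our code; neighbor ties are broken by a strict-inequality convention). For a tiny $\sigma^{2}$ the estimate essentially reproduces Eq. (24), which is the content of Theorem 4:

```python
import numpy as np

def symmetric_knn_weights(X, x_new, k, sigma0=1.0):
    """Eq. (20): sigma0 * (delta_{x_new ~k x_i} + delta_{x_i ~k x_new})."""
    d_new = np.linalg.norm(X - x_new, axis=1)
    in_nk_of_new = np.zeros(len(X))          # x_i among x_new's k nearest
    in_nk_of_new[np.argsort(d_new)[:k]] = 1.0
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    kth = np.sort(D, axis=1)[:, k - 1]       # distance of x_i's k-th neighbor
    new_in_nk_of_i = (d_new < kth).astype(float)
    return sigma0 * (in_nk_of_new + new_in_nk_of_i)

def bayes_symmetric_knn_predict(X, y, x_new, k, sigma0=1.0, sigma2=1e-10):
    w = symmetric_knn_weights(X, x_new, k, sigma0)
    return (w @ y) / (w.sum() + sigma2)      # Eq. (22)
```

Unlike the mutual weights, these weights sum to at least $\sigma_{0}k$ for every query, so the Bayesian symmetric estimate never collapses to the zero-vote case.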
II-B4 Hyperparameter Selection
We describe the hyperparameter selection method for Bayesian M$k$NN regression proposed in [22]. It can also be used for the hyperparameter selection of the Bayesian S$k$NN regression proposed in this paper. The set of hyperparameters is $\Theta=\{k,\sigma_{0},\sigma\}$, where $k$ is an integer greater than $0$. These hyperparameters can be selected through the Bayesian evidence framework by maximizing the log of the marginal likelihood [18] as follows.
$$\displaystyle\Theta^{*}=\mbox{argmax}_{\Theta}\mathcal{L}(\Theta),$$
(25)
where
$$\displaystyle\mathcal{L}(\Theta)=\log p({\rm\bf{y}}|\Theta)$$
(26)
$$\displaystyle=\log\{|2\pi{\rm\bf{C}}|^{-\frac{1}{2}}\exp(-\frac{1}{2}{\rm\bf{y}}^{T}{\rm\bf{C}}^{-1}{\rm\bf{y}})\}$$
(27)
$$\displaystyle=\frac{1}{2}\log|\tilde{{\rm\bf{C}}}|-\frac{1}{2}{\rm\bf{y}}^{T}\tilde{{\rm\bf{C}}}{\rm\bf{y}}-\frac{N}{2}\log 2\pi,$$
(28)
where $\tilde{{\rm\bf{C}}}={\rm\bf{L}}+\sigma^{2}{\rm\bf{I}}$.
For the continuous hyperparameters (e.g., $\sigma$, $\sigma_{0}$), the derivative can be used to optimize $\mathcal{L}$ with respect to $\Theta$, where the derivative of $\mathcal{L}$ with respect to $\theta$ is given by
$$\displaystyle\frac{\partial\mathcal{L}}{\partial\theta}=\frac{1}{2}\mathrm{trace}\left(\tilde{{\rm\bf{C}}}^{-1}\frac{\partial\tilde{{\rm\bf{C}}}}{\partial\theta}\right)-\frac{1}{2}{\rm\bf{y}}^{T}\frac{\partial\tilde{{\rm\bf{C}}}}{\partial\theta}{\rm\bf{y}},$$
(29)
The discrete hyperparameter $k$ can be selected based on the value of $\mathcal{L}$ as
$$\displaystyle k^{*}=\mbox{argmax}_{k}\mathcal{L}(\{k,\sigma,\sigma_{0}\}).$$
(30)
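The evidence computation of Eq. (28) and the discrete search of Eq. (30) are a few lines each; a sketch (ours), taking the graph Laplacian for each candidate $k$ as given:

```python
import numpy as np

def log_evidence(y, L, sigma2):
    """Eq. (28): 0.5 log|C~| - 0.5 y^T C~ y - (N/2) log 2pi,
    with C~ = L + sigma^2 I. Note the inverse covariance form avoids
    ever inverting C~ explicitly."""
    Ct = L + sigma2 * np.eye(len(y))
    _, logdet = np.linalg.slogdet(Ct)
    return 0.5 * logdet - 0.5 * y @ Ct @ y - 0.5 * len(y) * np.log(2 * np.pi)

def select_k(y, laplacians, sigma2):
    """Eq. (30): pick the candidate k whose Laplacian maximizes the evidence."""
    return max(sorted(laplacians), key=lambda k: log_evidence(y, laplacians[k], sigma2))
```

Because the model is parameterized by the precision $\tilde{{\rm\bf{C}}}$ rather than the covariance, both the log determinant and the quadratic form are cheap, which is what makes scanning over the discrete values of $k$ practical.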
On the other hand, the posterior distributions of the hyperparameters given the data can be inferred by Bayesian methods via Markov chain Monte Carlo, similarly to [20, 17], and the regression estimate can be averaged over the hyperparameters rather than obtained from one fixed set of hyperparameters. This would produce better results but cost more computational power. This approach has not been taken in this paper.
III Mutual and Symmetric $k$-NN Classification and Bayesian Selection Methods for $k$
III-A Mutual and Symmetric $k$-Nearest Neighbor Classification
Let us assume we have the data set $\mathcal{D}_{n}=\{({\rm\bf{x}}_{1},y_{1}),\ldots,({\rm\bf{x}}_{n},y_{n})\}$, where ${\rm\bf{x}}_{i}\in\mathbf{R}^{d}$ and $y_{i}\in\{C_{1},C_{2},\ldots,C_{J}\}$. We describe the mutual and symmetric $k$-NN classification methods with the notations $\mathcal{N}_{k}({\rm\bf{x}})$, $\mathcal{N}^{\prime}_{k}({\rm\bf{x}}_{j})$, and $\mathcal{M}_{k}({\rm\bf{x}})$ used for mutual and symmetric $k$-nearest neighbor regression in Section II-A. The mutual $k$-NN classification method is described as
$$\displaystyle m^{\mbox{M$k$NNC}}_{n}({\rm\bf{x}})=C_{\mathrm{argmax}_{c}|\{{\rm\bf{x}}_{j}\in\mathcal{M}_{k}({\rm\bf{x}})|y_{j}=c\}|}.$$
(31)
Motivated by the symmetrised modelling used in [7], we describe the symmetric $k$-NN classification method as
$$\displaystyle m^{\mbox{S$k$NNC}}_{n}({\rm\bf{x}})=C_{\mathrm{argmax}_{c}[|\{{\rm\bf{x}}_{j}\in\mathcal{N}_{k}({\rm\bf{x}})|y_{j}=c\}|+|\{{\rm\bf{x}}\in\mathcal{N}^{\prime}_{k}({\rm\bf{x}}_{j})|y_{j}=c\}|]}.$$
(32)
It is trivial to show that the class label that the model in [7] estimates with the highest class probability is the same as the one that the above method produces. The model proposed in [7] can thus be regarded as a full Bayesian model for the symmetric $k$-NN classification described above.
III-B Bayesian Selection Methods for $k$
In Section II-B we described the Bayesian mutual and symmetric $k$-NN regression methods with selection schemes for the hyperparameters, including $k$. We show that mutual and symmetric $k$-NN classification can be done with the Bayesian mutual and symmetric $k$-NN regression methods if the target values of the data set are encoded properly from the class labels. We describe how this can be done for binary-class and multi-class (more than two classes) classification.
III-B1 Binary-class Classification
In the case of binary-class classification, we set a new training data set $\mathcal{D}^{\mathrm{CR}}_{n}=\{({\rm\bf{x}}_{1},y^{\mathrm{NE}}_{1}),\ldots,({\rm\bf{x}}_{n},y^{\mathrm{NE}}_{n})\}$ with new class label encodings, where
$$\displaystyle y^{\mathrm{NE}}_{i}=\left\{\begin{array}[]{ll}-1&\mbox{if $y_{i}=C_{1}$};\\
1&\mbox{if $y_{i}=C_{2}$}.\end{array}\right.$$
(33)
Now, given a new test data point ${\rm\bf{x}}_{\mathrm{new}}$, we apply Bayesian M$k$NN regression to the new training data set $\mathcal{D}^{\mathrm{CR}}_{n}$, and then we have the M$k$NN classification method based on the result of Bayesian M$k$NN regression:
$$\displaystyle y_{\mathrm{new}}^{\mathrm{M}{\it k}\mathrm{NN},\mathrm{NE}}=\mathrm{sgn}(\mu_{f_{\mathrm{new}},{\mathrm{M}{\it k}\mathrm{NN}}})$$
(34)
$$\displaystyle=\mathrm{sgn}(\sum_{i=1}^{N}\delta_{{\rm\bf{x}}_{\mathrm{new}}\sim_{k}{\rm\bf{x}}_{i}}\cdot\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{\mathrm{new}}}y_{i}^{\mathrm{NE}})$$
(35)
$$\displaystyle=\mathrm{sgn}(-|\{{\rm\bf{x}}_{j}\in\mathcal{M}_{k}({\rm\bf{x}}_{\mathrm{new}})|y_{j}=C_{1}\}|+|\{{\rm\bf{x}}_{j}\in\mathcal{M}_{k}({\rm\bf{x}}_{\mathrm{new}})|y_{j}=C_{2}\}|).$$
(36)
It is also trivial to show that
$$\displaystyle y_{\mathrm{new}}^{\mathrm{M}{\it k}\mathrm{NN}}=C_{(y_{\mathrm{new}}^{\mathrm{M}{\it k}\mathrm{NN},\mathrm{NE}}+3)/2}=m^{\mbox{M$k$NNC}}_{n}({\rm\bf{x}}).$$
(37)
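A sketch of the binary pipeline of Eqs. (33)-(37) (our code; `mutual_flags` stands for the precomputed products $\delta_{{\rm\bf{x}}_{\mathrm{new}}\sim_{k}{\rm\bf{x}}_{i}}\cdot\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{\mathrm{new}}}$, and a zero vote falls to the lowest class index, consistent with the tie handling of the Bayesian methods noted in Section III-B):

```python
import numpy as np

def encode_binary(labels, c1, c2):
    """Eq. (33): class c1 -> -1, class c2 -> +1."""
    return np.where(np.asarray(labels) == c1, -1.0, 1.0)

def mknn_classify_binary(mutual_flags, y_ne):
    """Eqs. (34)-(37): sign of the mutual-neighbor vote, mapped back to a
    class index via (sign + 3)/2."""
    s = np.sign(mutual_flags @ y_ne)
    return int((s + 3) // 2)          # 1 -> C1, 2 -> C2
```

The mapping $(\mathrm{sgn}+3)/2$ sends $-1\mapsto 1$ and $+1\mapsto 2$, which is exactly the index arithmetic of Eq. (37).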
For the symmetric $k$-NN classification, we apply Bayesian S$k$NN regression to the new training data set $\mathcal{D}^{\mathrm{CR}}_{n}$, and then we have the S$k$NN classification method based on the result of Bayesian S$k$NN regression:
$$\displaystyle y_{\mathrm{new}}^{\mathrm{S}{\it k}\mathrm{NN},\mathrm{NE}}=\mathrm{sgn}(\mu_{f_{\mathrm{new}},{\mathrm{S}{\it k}\mathrm{NN}}})$$
(38)
$$\displaystyle=\mathrm{sgn}(\sum_{i=1}^{N}(\delta_{{\rm\bf{x}}_{\mathrm{new}}\sim_{k}{\rm\bf{x}}_{i}}+\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{\mathrm{new}}})y_{i}^{\mathrm{NE}})$$
(39)
$$\displaystyle=\mathrm{sgn}[-[|\{{\rm\bf{x}}_{j}\in\mathcal{N}_{k}({\rm\bf{x}}_{\mathrm{new}})|y_{j}=C_{1}\}|+|\{{\rm\bf{x}}_{\mathrm{new}}\in\mathcal{N}^{\prime}_{k}({\rm\bf{x}}_{j})|y_{j}=C_{1}\}|]+[|\{{\rm\bf{x}}_{j}\in\mathcal{N}_{k}({\rm\bf{x}}_{\mathrm{new}})|y_{j}=C_{2}\}|+|\{{\rm\bf{x}}_{\mathrm{new}}\in\mathcal{N}^{\prime}_{k}({\rm\bf{x}}_{j})|y_{j}=C_{2}\}|]].$$
(40)
It is also trivial to show that
$$\displaystyle y_{\mathrm{new}}^{\mathrm{S}{\it k}\mathrm{NN}}=C_{(y_{\mathrm{new}}^{\mathrm{S}{\it k}\mathrm{NN},\mathrm{NE}}+3)/2}=m^{\mbox{S$k$NNC}}_{n}({\rm\bf{x}}).$$
(41)
The hyperparameters including $k$ can be selected by the methods described in Section II-B4.
III-B2 Multi-class Classification
For the multi-class classification (with more than 2 classes), we present two kinds of methods.
First, we use the traditional formulation (formulation I) used in multi-class Gaussian process classification [23]. We consider Bayesian mutual and symmetric $k$-NN regression with $J$ outputs when we have $J$ classes. The outputs are expressed as $f^{1},f^{2},\ldots,f^{J}$. We assume that the $Jn\times Jn$ covariance matrix
of the prior of ${\rm\bf{f}}$ is
$$\displaystyle{\rm\bf{C}}=\left[\begin{array}[]{llll}{\rm\bf{C}}_{f^{1}}&{\rm\bf{0}}&\ldots&{\rm\bf{0}}\\
{\rm\bf{0}}&{\rm\bf{C}}_{f^{2}}&\ldots&{\rm\bf{0}}\\
{\rm\bf{0}}&{\rm\bf{0}}&\ldots&{\rm\bf{C}}_{f^{J}}\end{array}\right],$$
(42)
with the covariance function $\mathrm{Cov}(f_{i}^{j},f_{k}^{l})=\delta(j,l)c({\rm\bf{x}}_{i},{\rm\bf{x}}_{k})$. Then we have
$$\displaystyle{\rm\bf{C}}=({\rm\bf{D}}-{\rm\bf{W}}+\sigma^{2}{\rm\bf{I}}_{Jn\times Jn})^{-1},$$
(43)
$$\displaystyle{\rm\bf{C}}_{f^{l}}=({\rm\bf{D}}_{f^{l}}-{\rm\bf{W}}_{f^{l}}+\sigma^{2}{\rm\bf{I}}_{n\times n})^{-1},$$
(44)
$$\displaystyle{\rm\bf{W}}=\left[\begin{array}[]{llll}{\rm\bf{W}}_{f^{1}}&{\rm\bf{0}}&\ldots&{\rm\bf{0}}\\
{\rm\bf{0}}&{\rm\bf{W}}_{f^{2}}&\ldots&{\rm\bf{0}}\\
{\rm\bf{0}}&{\rm\bf{0}}&\ldots&{\rm\bf{W}}_{f^{J}}\end{array}\right],$$
(45)
where $[{\rm\bf{W}}_{f^{l}}]_{ij}=w_{\mathrm{M}\it{k}\mathrm{NN}}({\rm\bf{x}}_{i},{\rm\bf{x}}_{j})$ for the M$k$NN case, or $w_{\mathrm{S}\it{k}\mathrm{NN}}({\rm\bf{x}}_{i},{\rm\bf{x}}_{j})$ for the S$k$NN case. When we set a new encoding for a target value as
$$\displaystyle y^{\mathrm{NE}_{2}}_{il}=\left\{\begin{array}[]{ll}1&\mbox{if $y_{i}=C_{l}$};\\
0&\mbox{otherwise,}\end{array}\right.$$
(46)
given a new test data point ${\rm\bf{x}}_{\mathrm{new}}$, we have the predictive mean
$$\displaystyle\mu_{f^{l}_{\mathrm{new}},\mathrm{M}\it{k}\mathrm{NN}}=\frac{\sum_{i=1}^{N}w_{\mathrm{M}\it{k}\mathrm{NN}}({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{i})y^{\mathrm{NE}_{2}}_{il}}{\sum_{i=1}^{N}w_{\mathrm{M}\it{k}\mathrm{NN}}({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{i})+\sigma^{2}}$$
(47)
$$\displaystyle=\frac{\sum_{i=1}^{N}\delta_{{\rm\bf{x}}_{\mathrm{new}}\sim_{k}{\rm\bf{x}}_{i}}\cdot\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{\mathrm{new}}}y^{\mathrm{NE}_{2}}_{il}}{\sum_{i=1}^{N}\delta_{{\rm\bf{x}}_{\mathrm{new}}\sim_{k}{\rm\bf{x}}_{i}}\cdot\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{\mathrm{new}}}+\sigma^{2}/\sigma_{0}}.$$
(48)
Then we have the classification method based on the result of multivariate Bayesian M$k$NN regression:
$$\displaystyle y_{\mathrm{new}}^{\mathrm{M}{\it k}\mathrm{NN},\mathrm{MUL-I}}=C_{\mathrm{argmax}_{l}\mu_{f^{l}_{\mathrm{new}},\mathrm{M}\it{k}\mathrm{NN}}}$$
(49)
$$\displaystyle=C_{\mathrm{argmax}_{l}\sum_{i=1}^{N}\delta_{{\rm\bf{x}}_{\mathrm{new}}\sim_{k}{\rm\bf{x}}_{i}}\cdot\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{\mathrm{new}}}y^{\mathrm{NE}_{2}}_{il}}$$
(50)
$$\displaystyle=C_{\mathrm{argmax}_{c}|\{{\rm\bf{x}}_{j}\in\mathcal{M}_{k}({\rm\bf{x}}_{\mathrm{new}})|y_{j}=c\}|}$$
(51)
$$\displaystyle=m^{\mbox{M$k$NNC}}_{n}({\rm\bf{x}}).$$
(52)
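In formulation I the per-class predictive means of Eq. (48) share one denominator, so the argmax of Eqs. (49)-(52) reduces to the largest per-class vote count; a sketch (our code, with integer labels $0,\ldots,J-1$ and the mutual-neighbor flags precomputed):

```python
import numpy as np

def one_hot(labels, n_classes):
    """Eq. (46): y^{NE2}_{il} = 1 iff y_i = C_l."""
    Y = np.zeros((len(labels), n_classes))
    Y[np.arange(len(labels)), labels] = 1.0
    return Y

def mknn_classify_multi(mutual_flags, labels, n_classes, ratio=1e-8):
    """Eqs. (47)-(52): the class-l predictive means share one denominator,
    so the argmax is just the largest mutual-neighbor vote count per class."""
    means = (mutual_flags @ one_hot(labels, n_classes)) / (mutual_flags.sum() + ratio)
    return int(np.argmax(means))
```

`np.argmax` returns the first maximal index, which matches the lowest-class-index tie handling of the Bayesian methods discussed at the end of this section.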
Similarly, for symmetric $k$-NN classification we have the classification method based on the result of multivariate Bayesian S$k$NN regression
$$\displaystyle y_{\mathrm{new}}^{\mathrm{S}{\it k}\mathrm{NN},\mathrm{MUL-I}}=C_{\mathrm{argmax}_{l}\mu_{f^{l}_{\mathrm{new}},\mathrm{S}\it{k}\mathrm{NN}}}$$
(53)
$$\displaystyle=C_{\mathrm{argmax}_{l}\sum_{i=1}^{N}(\delta_{{\rm\bf{x}}_{\mathrm{new}}\sim_{k}{\rm\bf{x}}_{i}}+\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{\mathrm{new}}})y^{\mathrm{NE}_{2}}_{il}}$$
(54)
$$\displaystyle=C_{\mathrm{argmax}_{c}[|\{{\rm\bf{x}}_{j}\in\mathcal{N}_{k}({\rm\bf{x}}_{\mathrm{new}})|y_{j}=c\}|+|\{{\rm\bf{x}}_{\mathrm{new}}\in\mathcal{N}^{\prime}_{k}({\rm\bf{x}}_{j})|y_{j}=c\}|]}$$
(55)
$$\displaystyle=m^{\mbox{S$k$NNC}}_{n}({\rm\bf{x}}),$$
(56)
where
$$\displaystyle\mu_{f^{l}_{\mathrm{new}},\mathrm{S}\it{k}\mathrm{NN}}=\frac{\sum_{i=1}^{N}w_{\mathrm{S}\it{k}\mathrm{NN}}({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{i})y^{\mathrm{NE}_{2}}_{il}}{\sum_{i=1}^{N}w_{\mathrm{S}\it{k}\mathrm{NN}}({\rm\bf{x}}_{\mathrm{new}},{\rm\bf{x}}_{i})+\sigma^{2}}=\frac{\sum_{i=1}^{N}(\delta_{{\rm\bf{x}}_{\mathrm{new}}\sim_{k}{\rm\bf{x}}_{i}}+\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{\mathrm{new}}})y^{\mathrm{NE}_{2}}_{il}}{\sum_{i=1}^{N}(\delta_{{\rm\bf{x}}_{\mathrm{new}}\sim_{k}{\rm\bf{x}}_{i}}+\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}_{\mathrm{new}}})+\sigma^{2}/\sigma_{0}}.$$
(57)
As in [24], we use another formulation (formulation II) to avoid a redundancy in the traditional formulation pointed out by [20]. We use only $J-1$ outputs without redundancy, namely $J-1$ differences among $\{f^{1},f^{2},\ldots,f^{J}\}$. We define $g_{i}^{y_{i},j}=f_{i}^{y_{i}}-f_{i}^{j}$ for $j\neq y_{i}$, set ${\rm\bf{g}}_{i}=[g_{i}^{y_{i},1},\ldots,g_{i}^{y_{i},y_{i}-1},g_{i}^{y_{i},y_{i}+1},\ldots,g_{i}^{y_{i},J}]^{T}$, and set ${\rm\bf{g}}=[{\rm\bf{g}}_{1}^{T},{\rm\bf{g}}_{2}^{T},\ldots,{\rm\bf{g}}_{n}^{T}]^{T}$.
For ${\rm\bf{g}}$ we have the $(J-1)n\times(J-1)n$ covariance matrix ${\rm\bf{C}}^{\mathrm{MUL}}$ with the covariance function $\mathrm{Cov}(g_{i}^{y_{i},j},g_{k}^{y_{k},l})=(\delta(y_{i},y_{k})-\delta(y_{i},l)-\delta(y_{k},j)+\delta(j,l))c({\rm\bf{x}}_{i},{\rm\bf{x}}_{k})$ for $y_{i}\neq j$ and $y_{k}\neq l$. (For the derivation, see [24].)
Given a new test data point ${\rm\bf{x}}_{\mathrm{new}}$, we have the multiple outputs $g_{\mathrm{new}}^{y_{\mathrm{new}},l}$ for $y_{\mathrm{new}}\neq l$. For simplicity we estimate $g_{\mathrm{new}}^{1,l}$ ($l\neq 1$). For mutual $k$-NN classification, we have the predictive mean, as in typical GP regression, as follows.
$$\displaystyle\mu_{g^{1,l}_{\mathrm{new},\mathrm{M}k\mathrm{NN}}}=\mu_{f^{1}_{\mathrm{new}},\mathrm{M}\it{k}\mathrm{NN}}-\mu_{f^{l}_{\mathrm{new}},\mathrm{M}\it{k}\mathrm{NN}}$$
(58)
$$\displaystyle=-\frac{1}{\kappa_{l,\mathrm{M}k\mathrm{NN}}}{\rm\bf{k}}_{l,\mathrm{M}k\mathrm{NN}}^{T}{\rm\bf{1}},$$
(59)
where $\kappa_{l,\mathrm{M}k\mathrm{NN}}$ and ${\rm\bf{k}}_{l,\mathrm{M}k\mathrm{NN}}$ are obtained from ${\rm\bf{C}}^{\mathrm{MUL}}$
with the function $w_{\mathrm{M}\it{k}\mathrm{NN}}$ and ${\rm\bf{C}}^{\mathrm{MUL}}_{J(n-1)\times J(n-1)+1}$ with one additional $g^{1,l}_{\mathrm{new},\mathrm{M}k\mathrm{NN}}$ [18, 19].
Based on $\{\mu_{g_{\mathrm{new},\mathrm{M}k\mathrm{NN}}^{1,l}}|l\neq 1\}$, we have the classification method based on the result of multivariate Bayesian M$k$NN regression:
$$\displaystyle y_{\mathrm{new}}^{\mathrm{M}{\it k}\mathrm{NN},\mathrm{MUL-II}}=\left\{\begin{array}[]{ll}C_{1}&\mbox{if $\mu_{g^{1,l}_{\mathrm{new},\mathrm{M}k\mathrm{NN}}}>0$ for all $l\neq 1$};\\
C_{\mathrm{argmin}_{l}~{}\mu_{g^{1,l}_{\mathrm{new},\mathrm{M}k\mathrm{NN}}}}&\mbox{otherwise}\end{array}\right.$$
(60)
$$\displaystyle=C_{\mathrm{argmax}_{l}\mu_{f^{l}_{\mathrm{new}},\mathrm{M}\it{k}\mathrm{NN}}}$$
(61)
$$\displaystyle=y_{\mathrm{new}}^{\mathrm{M}{\it k}\mathrm{NN},\mathrm{MUL-I}}.$$
(62)
Similarly, for symmetric $k$-NN classification we have the predictive mean as in typical GP regression, as follows.
$$\displaystyle\mu_{g^{1,l}_{\mathrm{new},\mathrm{S}k\mathrm{NN}}}=\mu_{f^{1}_{\mathrm{new}},\mathrm{S}\it{k}\mathrm{NN}}-\mu_{f^{l}_{\mathrm{new}},\mathrm{S}\it{k}\mathrm{NN}}$$
(63)
$$\displaystyle=-\frac{1}{\kappa_{l,\mathrm{S}k\mathrm{NN}}}{\rm\bf{k}}_{l,\mathrm{S}k\mathrm{NN}}^{T}{\rm\bf{1}},$$
(64)
where $\kappa_{l,\mathrm{S}k\mathrm{NN}}$ and ${\rm\bf{k}}_{l,\mathrm{S}k\mathrm{NN}}$ are obtained from ${\rm\bf{C}}^{\mathrm{MUL}}$
with the function $w_{\mathrm{S}\it{k}\mathrm{NN}}$ and ${\rm\bf{C}}^{\mathrm{MUL}}_{J(n-1)\times J(n-1)+1}$ with one additional $g^{1,l}_{\mathrm{new},\mathrm{S}k\mathrm{NN}}$ [18, 19].
Based on $\{\mu_{g_{\mathrm{new},\mathrm{S}k\mathrm{NN}}^{1,l}}|l\neq 1\}$, we have the classification method based on the result of multivariate Bayesian S$k$NN regression:
$$\displaystyle y_{\mathrm{new}}^{\mathrm{S}{\it k}\mathrm{NN},\mathrm{MUL-II}}=\left\{\begin{array}[]{ll}C_{1}&\mbox{if $\mu_{g^{1,l}_{\mathrm{new},\mathrm{S}k\mathrm{NN}}}>0$ for all $l\neq 1$};\\
C_{\mathrm{argmin}_{l}~{}\mu_{g^{1,l}_{\mathrm{new},\mathrm{S}k\mathrm{NN}}}}&\mbox{otherwise}\end{array}\right.$$
(65)
$$\displaystyle=C_{\mathrm{argmax}_{l}\mu_{f^{l}_{\mathrm{new}},\mathrm{S}\it{k}\mathrm{NN}}}$$
(66)
$$\displaystyle=y_{\mathrm{new}}^{\mathrm{S}{\it k}\mathrm{NN},\mathrm{MUL-I}}.$$
(67)
This latter formulation (formulation II) reduces exactly to the binary classification formulation described in Section III-B1 when $J=2$.
As can be seen in Eq (62) and Eq (67), both formulations produce the same classification results when they have the same hyperparameters. The hyperparameters, including $k$, can be selected by the methods described in Section II-B4. However, the hyperparameters selected with formulations I and II can differ, because the two formulations use different covariance matrices in the marginal likelihood. (The former uses the $Jn\times Jn$ covariance matrix and the latter uses the $(J-1)n\times(J-1)n$ covariance matrix.)
In the computer simulations, even with identical $k$ there can be cases where the classification results by M$k$NN (or S$k$NN), those based on the Bayesian M$k$NN (or S$k$NN) regression methods with formulation I, and those based on the Bayesian M$k$NN (or S$k$NN) regression methods with formulation II differ. One reason is that the matrix calculation is approximate. Another is that they differ in how they handle vote-tie cases: when vote ties occur, M$k$NN (or S$k$NN) assigns the class label of the nearest neighbor among the tied mutual (or symmetric) neighbors, whereas the methods based on Bayesian M$k$NN (or S$k$NN) regression assign the class label with the lowest index, because nearest-neighbor information is not available in them.
IV Simulation Results
To demonstrate the proposed methods, we first ran simulations on an artificial data set, generated with the sinc function $\mathrm{sinc}(x)=\frac{\sin(\pi x)}{\pi x}$. We took points equally spaced with interval 0.17 between $-5$ and $5$, and assigned class labels 1, 2, 3 to these points according to which of the intervals $(-\infty,0)$, $[0,0.2)$, $[0.2,\infty)$ the function values at these points belong to. We made up the training set with these points as inputs and the assigned labels as target values. The data set is plotted in Figure 1. We call it the Sinc3C data set.
We applied the M$k$NN and S$k$NN classification methods based on the Bayesian M$k$NN and S$k$NN regression methods, using both formulation I, which requires $J$ outputs, and formulation II, which requires $J-1$ outputs. We ran the simulation repeatedly with different initial values for $\sigma_{0}$ and $\sigma$, and found that one of the lowest marginal likelihoods is reached with the initial values 300 and 3. We also applied the M$k$NN and S$k$NN classification methods with $k$ selected by the proposed methods for each formulation. For comparison, we also applied the $k$-NN (when vote ties occur, in $k$-NN classification the class label of the nearest neighbor among the tied neighbors is assigned), M$k$NN, and S$k$NN classification methods with the parameter $k$ selected by leave-one-out cross-validation.
Figure 2 shows the leave-one-out errors of the $k$-NN, M$k$NN, and S$k$NN classification methods for the Sinc3C training set according to the parameter $k$. Figure 3 shows the log evidence of the BM$k$NN and BS$k$NN regression models with the multi-class formulations I and II for the Sinc3C training set according to the parameter $k$. BM$k$NN-I and BS$k$NN-I represent M$k$NN and S$k$NN classification with formulation I based on Bayesian M$k$NN and S$k$NN regression, respectively. Likewise, BM$k$NN-II and BS$k$NN-II represent M$k$NN and S$k$NN classification with formulation II based on Bayesian M$k$NN and S$k$NN regression, respectively.
Table I shows the classification error rates and $k$ selected for all the methods applied to the Sinc3C data set. M$k$NN (B-I $k$) and S$k$NN (B-I $k$) represent M$k$NN and S$k$NN classification with the parameter $k$ selected in BM$k$NN-I, and BS$k$NN-I, respectively.
M$k$NN (B-II $k$) and S$k$NN (B-II $k$) represent M$k$NN and S$k$NN classification with the parameter $k$ selected in BM$k$NN-II, and BS$k$NN-II, respectively. As can be seen in Table I, BM$k$NN-II, BS$k$NN-II, M$k$NN (B-I $k$), and S$k$NN (B-II $k$) perform significantly better than all the other methods.
We applied the proposed methods and all the other methods to two real-world data sets. As the first, we used the Pima data set (available from https://www.stats.ox.ac.uk/pub/PRNN/; we used only the training set), which has 200 instances, 7 real-valued attributes, and 2 classes. We performed 10-fold cross-validation to evaluate the performance of all the methods on this data set. The best results were obtained when the initial values of $(\sigma_{0},\sigma^{2})$ were set to $(1,10^{-6})$ for BS$k$NN, and when $(\sigma_{0},\sigma^{2})$ was fixed to $(150,1.5)$ for BM$k$NN.
Table II shows the parameter $k$'s selected by the various methods, such as cross-validation for the $k$-NN, M$k$NN, and S$k$NN classifications, and the proposed methods (binary-class case) for the M$k$NN and S$k$NN classifications. The abbreviations are the same as in Table I, except that this case has only the binary-class formulation rather than two formulations. Table III shows the means and standard deviations of the mean squared errors (for 10-fold cross-validation) for the Pima data set. As can be seen in Table III, M$k$NN and BM$k$NN perform better than all the other methods.
To show how the methods work on a real-world data set with more than two classes, we used the New Thyroid data set [25], which has 215 instances, 5 real-valued attributes, and 3 classes. We performed 10-fold cross-validation to evaluate the performance of all the methods on this data set. The best results were obtained when the initial values of $(\sigma_{0},\sigma^{2})$ were set to $(100,1)$, $(1,0.01)$, and $(100,1)$ for BS$k$NN-I, BM$k$NN-II, and BS$k$NN-II, respectively. In the case of BM$k$NN-I, $(\sigma_{0},\sigma^{2})$ was fixed to $(1,0.0001)$.
Table IV shows the parameter $k$'s selected by the various methods, such as cross-validation for the $k$-NN, M$k$NN, and S$k$NN classifications, and the proposed methods (formulations I and II) for the M$k$NN and S$k$NN classifications. The abbreviations are the same as in Table I. Table V shows the means and standard deviations of the mean squared errors (for 10-fold cross-validation) for the New Thyroid data set. As can be seen in Table V, BS$k$NN-I performs better than all the other methods.
V Conclusion
We have proposed the symmetric $k$-NN classification method, another variant of the $k$-NN classification method. We have also proposed methods to select the parameter $k$ in the mutual and symmetric $k$-NN classification methods. The selection problems boil down to selecting $k$ in the Bayesian mutual and symmetric $k$-NN regression methods, because mutual and symmetric $k$-NN classification can be done by Bayesian mutual and symmetric $k$-NN regression with new multiple-output encodings of the target values. For that purpose two kinds of encodings were proposed. The simulation results showed that the proposed methods are comparable to or better than selection by the leave-one-out cross-validation method.
Appendix A Proof of Theorem 3
The proof of Theorem 3 proceeds similarly to the proof of Theorem 1 in [22], as follows.
(1) Since the Laplacian matrix ${\rm\bf{L}}={\rm\bf{D}}-{\rm\bf{W}}$ is positive semidefinite [26], $\tilde{{\rm\bf{C}}}={\rm\bf{L}}+\sigma^{2}{\rm\bf{I}}$ is positive definite for $\sigma^{2}>0$.
(2) Since $\tilde{{\rm\bf{C}}}^{T}=({\rm\bf{D}}-{\rm\bf{W}}+\sigma^{2}{\rm\bf{I}})^{T}={\rm\bf{D}}^{T}-{\rm\bf{W}}^{T}+\sigma^{2}{\rm\bf{I}}^{T}={\rm\bf{D}}-{\rm\bf{W}}+\sigma^{2}{\rm\bf{I}}=\tilde{{\rm\bf{C}}}$, $\tilde{{\rm\bf{C}}}$ is symmetric.
From (1) and (2), by Theorem 7.5 in [27], $\tilde{{\rm\bf{C}}}$ is a valid covariance matrix. QED.
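The argument can also be checked numerically: build $\tilde{{\rm\bf{C}}}={\rm\bf{D}}-{\rm\bf{W}}+\sigma^{2}{\rm\bf{I}}$ from a symmetric weight matrix and verify symmetry and positive definiteness (our check, not part of the proof; it also shows the condition $\sigma^{2}>0$ cannot be dropped):

```python
import numpy as np

def is_valid_covariance(W, sigma2):
    """Theorem 3 check: C~ = D - W + sigma^2 I must be symmetric and
    positive definite (all eigenvalues strictly positive)."""
    D = np.diag(W.sum(axis=1))
    Ct = D - W + sigma2 * np.eye(len(W))
    return bool(np.allclose(Ct, Ct.T) and np.linalg.eigvalsh(Ct).min() > 0)
```

For any symmetric nonnegative $\rm\bf{W}$, such as $\sigma_{0}$ times the 0/1/2-valued symmetric $k$-NN weights of Eq. (20), the Laplacian part is positive semidefinite, so any $\sigma^{2}>0$ makes $\tilde{{\rm\bf{C}}}$ positive definite.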
Appendix B Proof of Theorem 4
In case $\sum_{i=1}^{N}(\delta_{{\rm\bf{x}}\sim_{k}{\rm\bf{x}}_{i}}+\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}})=0$, the statement is trivial by Eqs (3) and (23).
Otherwise, take a small positive $\epsilon<m^{\mbox{S}k\mbox{NNR}}_{n}({\rm\bf{x}})$ and set
$$\delta=\Big[\sum_{i=1}^{N}(\delta_{{\rm\bf{x}}\sim_{k}{\rm\bf{x}}_{i}}+\delta_{{\rm\bf{x}}_{i}\sim_{k}{\rm\bf{x}}})\Big]\Big/\Big(\frac{m^{\mbox{S}k\mbox{NNR}}_{n}({\rm\bf{x}})}{\epsilon}-1\Big).$$
Then, if $|\sigma^{2}/\sigma_{0}|<\delta$, we have $|\mu_{f_{\mathrm{new}},\mathrm{S}{\it k}\mathrm{NN}}-m^{\mbox{S}k\mbox{NNR}}_{n}({\rm\bf{x}})|<\epsilon$. By the $(\epsilon,\delta)$ definition of the limit of a function, we get the statement of the theorem.
QED.
QED.
References
[1]
E. Fix and J. L. Hodges Jr, “Discriminatory analysis-nonparametric
discrimination: consistency properties,” DTIC Document, Tech. Rep., 1951.
[2]
E. Fix and J. Hodges, “Discriminatory analysis: small sample performance,”
1952.
[3]
T. Cover and P. Hart, “Nearest neighbor pattern classification,” IEEE
Transactions on Information Theory, vol. 13, no. 1, pp. 21–27, 1967.
[4]
R. Short and K. Fukunaga, “The optimal distance measure for nearest neighbor
classification,” IEEE Transactions on Information Theory, vol. 27,
no. 5, pp. 622–627, 1981.
[5]
C. C. Holmes and N. M. Adams, “A probabilistic nearest neighbour method for
statistical pattern recognition,” J. R. Stat. Soc. Ser. B Stat.
Methodol., vol. 64, no. 2, pp. 295–306, 2002.
[6]
A. K. Ghosh, “On optimum choice of k in nearest neighbor classification,”
Computational Statistics & Data Analysis, vol. 50, no. 11, pp.
3113–3123, 2006.
[7]
L. Cucala, J.-M. Marin, C. P. Robert, and D. M. Titterington, “A Bayesian
Reassessment of Nearest-Neighbor Classification,” Journal of the
American Statistical Association, vol. 104, no. 485, pp. 263–273, 2009.
[8]
C. C. Holmes and N. M. Adams, “Likelihood inference in nearest-neighbour
classification models,” Biometrika, vol. 90, no. 1, pp. 99–112,
2003.
[9]
K. C. Gowda and G. Krishna, “Agglomerative clustering using the concept of
mutual nearest neighbourhood,” Pattern recognition, vol. 10, no. 2,
pp. 105–112, 1978.
[10]
——, “The condensed nearest neighbor rule using the concept of mutual
nearest neighborhood,” IEEE Transactions on Information Theory,
vol. 25, no. 4, pp. 488–490, 1979.
[11]
H. Liu, S. Zhang, J. Zhao, X. Zhao, and Y. Mo, “A new classification algorithm
using mutual nearest neighbors,” in 9th International Conference on
Grid and Cooperative Computing (GCC). IEEE, 2010, pp. 52–57.
[12]
V. Hautamäki, I. Kärkkäinen, and P. Fränti, “Outlier detection
using k-nearest neighbour graph.” in ICPR (3), 2004, pp. 430–433.
[13]
H. Jegou, C. Schmid, H. Harzallah, and J. Verbeek, “Accurate image search
using the contextual dissimilarity measure,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 2–11, 2010.
[14]
D. Guru and H. Nagendraswamy, “Clustering of interval-valued symbolic patterns
based on mutual similarity value and the concept of k-mutual nearest
neighborhood,” in Computer Vision–ACCV 2006. Springer, 2006, pp. 234–243.
[15]
A. Guyader and N. Hengartner, “On the mutual nearest neighbors estimate in
regression,” Journal of Machine Learning Research, vol. 14, pp.
2361–2376, 2013.
[16]
K. Ozaki, M. Shimbo, M. Komachi, and Y. Matsumoto, “Using the mutual k-nearest
neighbor graphs for semi-supervised classification of natural language
data,” in Proceedings of the fifteenth conference on computational
natural language learning. Association for Computational Linguistics, 2011, pp. 154–162.
[17]
C. K. I. Williams and C. E. Rasmussen, “Gaussian processes for regression,”
in Advances in Neural Information Processing Systems, vol. 8, 1995.
[18]
M. Gibbs and D. J. MacKay, “Efficient implementation of gaussian processes,”
Tech. Rep., 1997.
[19]
C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine
Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005.
[20]
R. Neal, “Regression and classification using gaussian process priors,”
Bayesian Statistics, vol. 6, pp. 475–501, 1997.
[21]
X. Zhu, J. D. Lafferty, and Z. Ghahramani, “Semi-supervised learning: From
Gaussian fields to Gaussian processes,” 2003.
[22]
H.-C. Kim, “Bayesian Kernel and Mutual $k$-Nearest Neighbor Regression,”
ArXiv e-prints, Aug. 2016.
[23]
C. K. Williams and D. Barber, “Bayesian classification with Gaussian
processes,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 20, no. 12, pp. 1342–1351, 1998.
[24]
H.-C. Kim and Z. Ghahramani, “Bayesian Gaussian Process Classification
with the EM-EP Algorithm,” IEEE Transactions On Pattern Analysis
and Machine Intelligence, vol. 28, no. 12, pp. 1948–1959, 2006.
[25]
M. Lichman, “UCI machine learning repository,” 2013. [Online]. Available:
http://archive.ics.uci.edu/ml
[26]
R. Merris, “Laplacian matrices of graphs: a survey,” Linear algebra and
its applications, vol. 197, pp. 143–176, 1994.
[27]
D. Stefanica, A Linear Algebra Primer for Financial Engineering. FE Press, 2014.
A chemical evolution model for galaxy clusters
L. Portinari
A. Moretti and C. Chiosi
Abstract
We develop a toy–model for the chemical evolution of the intra–cluster
medium, polluted by the galactic winds from elliptical galaxies.
The model follows the “galaxy formation history” of cluster
galaxies, constrained by the observed luminosity function.
Theoretical Astrophysics Center, Juliane Maries Vej 30,
DK-2100 Copenhagen Ø, Denmark
Dipartimento di Astronomia, Vicolo dell’Osservatorio 2,
I-35122 Padova, Italy
1. Introduction
To account for the large amount of metals observed in the intra–cluster
medium (ICM), some non-standard stellar Initial Mass Function (IMF)
has been often invoked for cluster ellipticals,
such as a more top–heavy IMF than the Salpeter one (Matteucci & Gibson 1995;
Gibson & Matteucci 1997ab; Loewenstein & Mushotzky 1996), or
a bimodal IMF with an early generation of massive stars
(Arnaud et al. 1992; Elbaz, Arnaud & Vangioni-Flam 1995);
see also the review by Matteucci (this conference).
A non–standard IMF in ellipticals has been suggested
also on the basis of other arguments:
a top–heavy IMF best reproduces their photometric properties
(Arimoto & Yoshii 1987),
and systematic variations of the IMF in ellipticals of increasing mass might
explain the observed trend $M/L\propto L$
(Larson 1986, Renzini & Ciotti 1993, Zepf & Silk 1996).
Chiosi et al. (1998) developed chemo-spectrophotometric models for
elliptical galaxies with galactic winds, adopting the variable IMF
by Padoan, Nordlund & Jones (1997, hereinafter PNJ) which is naturally skewed
toward more massive stars in the early galactic phases, in more massive
galaxies and for higher redshifts of formation (see also Chiosi 2000).
These galactic models were successful at reproducing a number of
spectro–photometric features of observed ellipticals;
now, an immediate question is: what do these galactic models predict for
the pollution of the ICM through galactic winds (GWs)?
2. Galactic wind ejecta: PNJ vs. Salpeter IMF
To address this issue, Chiosi (2000) calculated multi–zone chemical models
of elliptical galaxies with the PNJ IMF, together with models with
the standard Salpeter IMF for the sake of comparison.
Before discussing the resulting global chemical evolution of the ICM,
let’s inspect the different GW ejecta of the model ellipticals
when the two IMFs are adopted in turn.
In Fig. 1 (left panels) we compare the mass fraction of gas ejected in the GW,
and the complementary mass fraction locked into stars, for galactic models
with the variable PNJ IMF and for models with the Salpeter IMF
(thick and thin lines, respectively). Mass fractions refer to
the total initial baryonic mass of the galaxy. The amount
of ejected gas is larger in the case of the PNJ IMF, since in the early
galactic phases this IMF is skewed toward more massive stars and less
mass remains locked into long–lived, low-mass stars.
The difference with the Salpeter case gets sharper
for larger galactic masses, and for higher redshifts of formation.
Models with the Salpeter IMF evidently bear no dependence on the
redshift of formation.
The rightmost panel in Fig. 1 shows the iron abundances in the gas ejected
as GW, again comparing the Salpeter IMF and the PNJ IMF case.
In most cases, the galactic ejecta in the PNJ models are more metal–rich
than in the Salpeter case, up to a factor of 5 or more for the
more massive galaxies, and for high redshifts of formation.
In the PNJ models, in fact, more gas in the galaxy gets recycled
through massive stars, effective metal contributors, while less gas
gets locked into low–mass stars.
From the trends described above, we expect galactic models
with the PNJ IMF to predict, for the ICM, a more efficient metal pollution
and a higher fraction of the gas originating from GWs,
with respect to “standard” models. The first results in this respect
are discussed in Chiosi (2000).
3. The chemical evolution of the ICM: a toy model
Since the GW ejecta of ellipticals modelled with the PNJ IMF are sensitive
to the detailed redshift of formation of the individual galaxies,
to predict the chemical enrichment of the ICM we need to model
the history of galaxy formation in the cluster.
To this aim, we developed a global, self-consistent
chemical model for the cluster, which can follow the
simultaneous evolution of all its components: the galaxies, the primordial
gas, and the gas processed and re-ejected via GWs (Moretti et al. 2001).
Our chemical model for clusters is developed in analogy with the usual
chemical models for galaxies, as illustrated in the scheme below.
galaxy model: primordial gas $\Rightarrow$ SFR, IMF $\Rightarrow$ stars $\Rightarrow$ stellar yields $\Rightarrow$ enriched gas $\Rightarrow$ ISM
cluster model: primordial gas $\Rightarrow$ GFR, GIMF $\Rightarrow$ galaxies $\Rightarrow$ GW yields $\Rightarrow$ enriched gas $\Rightarrow$ ICM
As the interstellar medium (ISM) is polluted by stars, the ICM
is polluted by galaxies.
The primordial gas in the ICM gets consumed in time by
some prescribed Galactic Formation Rate (GFR); at each time
galaxies form distributed
in mass according to a Galactic Initial Mass Function (GIMF), derived from the
Press-Schechter mass function suited to that redshift.
Through GWs, galaxies return chemically enriched gas,
which mixes with the overall ICM; the latter consists of the primordial
gas not yet consumed by galaxy formation (if any) and of the gas re-ejected
by galaxies up to the present age.
Model equations parallel those of galactic chemical models, with the
substitutions SFR $\rightarrow$ GFR,
IMF $\rightarrow$ Press-Schechter GIMF,
stellar yields $\rightarrow$ GW yields.
Model parameters are calibrated so that the resulting
galaxy formation history matches the observed present–day
luminosity function (LF) at the end of the simulation.
For all details, see Moretti et al. (2001).
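As a purely illustrative sketch of this bookkeeping, a one-zone version of the scheme above can be coded in a few lines. The galaxy formation rate, the gas-return fraction $R$, and the instantaneous-recycling shortcut below are invented placeholders, not the calibrated ingredients of Moretti et al. (2001):

```python
import numpy as np

# Toy one-zone cluster model: primordial gas is consumed by a prescribed GFR;
# a fraction R of each galaxy's mass is returned immediately to the ICM as
# GW-enriched gas (instantaneous recycling).  All numbers are placeholders.
dt, T = 0.1, 13.0                       # time step and cluster age [Gyr]
R = 0.6                                 # GW return fraction (placeholder)
primordial, galaxies, processed = 1.0, 0.0, 0.0   # baryonic mass fractions

for t in np.arange(0.0, T, dt):
    gfr = 0.3 * np.exp(-t / 3.0)        # placeholder galaxy formation rate
    dM = min(primordial, gfr * dt)      # gas consumed by galaxy formation
    primordial -= dM
    galaxies += (1.0 - R) * dM          # stars "left over" after the GW
    processed += R * dM                 # enriched gas re-ejected into the ICM

total_gas = primordial + processed      # present-day ICM gas
assert abs(primordial + galaxies + processed - 1.0) < 1e-12  # mass conserved
print(round(processed / galaxies, 2))   # -> 1.5: re-ejected vs. locked mass
```

With $R=0.6$ the ratio of re-ejected gas to mass locked in galaxies is $R/(1-R)=1.5$, chosen here only to echo the value found for the PNJ "best case" of §4; the real model derives this ratio from the GW yields and the calibrated galaxy formation history.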
4. The “best case” models
In Fig. 2 we show our case of “best match” with the observed LF
in the B–band (Trentham 1998, top panels). The left panels refer
to the case when galactic models with the PNJ IMF are adopted;
the right panels display results for the same cluster parameters
(i.e. same galaxy formation history),
but adopting ellipticals with the Salpeter IMF.
The Salpeter case predicts somewhat more galaxies in the high–luminosity
bins, due to the fact that for massive galaxies a larger mass fraction
remains locked into stars in the Salpeter case than with the PNJ IMF
(cf. Fig. 1). Nevertheless, the LF is still in agreement
with the observed one within errors.
Although the predicted LF is virtually the same in the two models,
strong differences are found in the predicted gas and metallicity content
in the ICM. The mid panels in Fig. 2 show the predicted abundance
evolution in the ICM.
Adopting galactic models with the PNJ IMF clearly improves
predictions about the metallicity of the ICM.
The bottom panels in Fig. 2 show the evolution of the mass fraction
of the various components of the cluster: the primordial gas,
which gets consumed by galaxy formation; the processed gas,
namely the gas that has been involved in galaxy formation
and then re-ejected as GW; the total gas, sum of the primordial
and of the processed gas; the mass in galaxies, that is in the stellar
component we see today, “left over” after the GW. While in the Salpeter
case (right panel) the overall mass that remains locked into galaxies
(long–dashed line) is larger than the mass ejected in the GWs
(dotted line), the opposite is true for the cluster model
with the PNJ galaxies (left panel), as qualitatively expected from § 2.
In the latter case, the mass of the re-ejected
gas is $\sim 1.5$ times larger than that locked into galaxies. Although
this is not enough to account for the whole of the observed intra-cluster gas
(with a mass 2–5 times larger than that in galaxies, Arnaud et al. 1992),
the amount of gas re-ejected by galaxies is expected
to make up for a remarkable fraction of the overall ICM.
5. Open issues and future perspectives
Once the model is calibrated to reproduce the observed LF in the B–band
(Fig. 2, top panels), it turns out that the match with LFs in redder bands
is not as good. Fig. 3 (left panel) shows the comparison to the observed LF
in the R–band: the “best–case” model calibrated on the B–band seems to
underestimate the number of luminous galaxies with a red stellar population.
A similar effect is seen for the LF in the K–band. We used
the B–band LF for the calibration, as it offered the deepest and most
extensive dataset, but the LF in the red bands is probably a better track
of the old stellar population responsible for the bulk of the metal enrichment,
while the B–band might be sensitive to recent minor bursts of star formation.
Hence, calibrating the model over the red stellar population should provide
a better estimate of the galaxy formation history and of the consequent
chemical enrichment of the ICM.
In particular, a larger number of old giant galaxies will help
to obtain higher values of the Iron Mass to Light Ratio
(IMLR, for a definition see Renzini 1997 and references therein),
closer to the very high IMLRs measured in real clusters ($\geq 0.01$,
Finoguenov, David & Ponman 2000).
To illustrate this point, in the right panel of Fig. 3 we plot
the present–day IMLR of individual ellipticals modelled with the PNJ IMF,
as a function of the initial
mass of the galaxy and for different redshifts of formation.
Higher values of the IMLR pertain to more massive and older galaxies.
With the PNJ IMF in fact, galaxies which are more massive and/or formed
at higher redshifts, store less mass in the stellar component, while ejecting
more, and more metal–rich, gas in the GW (Fig. 1 and § 2);
both effects tend to enhance the corresponding IMLR.
Hence, with a galaxy formation history producing more red giant galaxies
in the “cluster mixture”, as suggested by the LFs in the red bands,
we expect to predict high values for the overall IMLR in the cluster.
Acknowledgments
LP and AM are grateful to F. Matteucci and to the organizers of this conference
for giving them the opportunity to participate and contribute.
References
Arimoto N., Yoshii Y., 1987, A&A 173, 23
Arnaud M., Rothenflug R., Boulade O.,
et al., 1992, A&A 254, 49
Chiosi C., 2000, A&A 364, 423
Chiosi C., Bressan A., Portinari L., Tantalo R., 1998, A&A 339, 355
Driver S.P., Couch W.J., Phillips S., 1998, MNRAS 301, 369
Elbaz D., Arnaud M., Vangioni–Flam E., 1995, A&A 303, 345
Finoguenov A., David L.P., Ponman T.J., 2000, ApJ 544, 188
Fukazawa Y., Makishima K., Tamura T.,
et al., 1998, PASJ 50, 187
Gibson B.K., Matteucci F., 1997a, ApJ 475, 47
Gibson B.K., Matteucci F., 1997b, MNRAS 291, L8
Larson R.B., 1986, MNRAS 218, 409
Loewenstein M., Mushotzky R.F., 1996, ApJ 466, 695
Matsumoto H., Tsuru T.G., Fukazawa Y.,
et al., 2000, PASJ 52, 153
Matteucci F., Gibson B.K., 1995, A&A 304, 11
Moretti A., Portinari L., Chiosi C., 2001, in preparation
Mushotzky R., Loewenstein M., 1997, ApJ 481, L63
Padoan P., Nordlund A.P., Jones B.J.T., 1997, MNRAS 288, 145 (PNJ)
Renzini A., 1997, ApJ 488, 35
Renzini A., Ciotti L., 1993, ApJ 416, L49
Trentham N., 1998, MNRAS 294, 193
Zepf S.E., Silk J., 1996, ApJ 466, 114
ATLAS: A High-Cadence All-Sky Survey System
J. L. Tonry
Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822
[email protected]
L. Denneau
Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822
[email protected]
A. N. Heinze
Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822
[email protected]
B. Stalder
LSST, 950 N. Cherry Ave, Tucson, AZ 85719
[email protected]
K. W. Smith
Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast, Belfast, BT7 1NN, UK
[email protected]
S. J. Smartt
Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast, Belfast, BT7 1NN, UK
[email protected]
C. W. Stubbs
Department of Physics, Harvard University, Cambridge, MA 02138, USA
[email protected]
H. J. Weiland
Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822
[email protected]
A. Rest
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA
[email protected]
Abstract
Technology has advanced to the point that it is possible to image the
entire sky every night and process the data in real time. The sky is
hardly static: many interesting phenomena occur, including variable
stationary objects such as stars or QSOs, transient stationary objects
such as supernovae or M dwarf flares, and moving objects such as
asteroids and the stars themselves. Funded by NASA, we have designed
and built a sky survey system for the purpose of finding dangerous
near-Earth asteroids (NEAs). This system, the “Asteroid Terrestrial-impact Last Alert
System” (ATLAS), has been optimized to produce the best survey
capability per unit cost; as a result, ATLAS
is an efficient and competitive system not only for finding potentially
hazardous asteroids (PHAs) but also for tracking variables and finding transients.
While carrying out its NASA mission, ATLAS now discovers the greatest number of bright ($m<19$)
supernova candidates of any ground-based survey, frequently detecting
supernovae candidates of any ground based survey, frequently detecting
very young explosions due to its 2 day cadence. ATLAS discovered the afterglow
of a gamma-ray burst independent of the high energy trigger and has
released a variable star catalogue of 5$\times 10^{6}$ sources.
This, the first of a
series of articles describing ATLAS, is devoted to the design and
performance of the ATLAS system. Subsequent articles will describe in more detail the software, the survey strategy, ATLAS-derived NEA population statistics, transient detections, and the first
data release of variable stars and transient lightcurves.
surveys; minor planets, asteroids: general; stars: variables: general; supernovae: general
1 Introduction
The remarkable progress of silicon technology in recent decades has
made it possible to examine the entire sky for moving, variable, or
transient objects every night to a meaningful depth. Optimizing
survey performance is a complex task, however. Resources need to be
divided between the cost of a facility to protect the system from the elements;
a telescope, a mount, and a detector to collect the light; and computers, operations, and software to run the
survey and process the results. Any of these features can limit
performance.
Combining the detector technology advances with venerable Schmidt telescopes
or newly designed wide-field facilities has rapidly changed the astronomical survey landscape in the last few years.
The ambitious Pan-STARRS1 survey (PS1; Chambers et al., 2016)
has mapped 3$\pi$ steradians of the sky (30,000 square degrees) in 6 wavebands and is
having a major impact — not only in transients and moving objects
(Hsieh et al., 2012; Rest et al., 2014)
but from low mass stars (Liu et al., 2013), through Milky Way stellar populations
(Laevens et al., 2015)
to the highest redshift quasars (Bañados et al., 2014).
The Palomar Transient Factory (PTF; Law et al., 2009)
reinvigorated the scientific capability of the Palomar Schmidt telescope producing
a wide range of discoveries of novel objects
(e.g. Quimby et al., 2011; Gal-Yam et al., 2011; Nugent et al., 2011).
The QUEST camera was installed on the Schmidt telescope at La Silla
to run the La Silla QUEST survey, (LSQ; Baltay et al., 2013) which
combined with the Public ESO Spectroscopic Survey
of Transient Objects (PESSTO Smartt et al., 2015) for spectroscopic follow-up, again producing a range of
discoveries (e.g. Nicholl et al., 2014, 2015).
The Catalina Real Time Survey (CRTS; Drake et al., 2009) is a very successful
time-domain survey which has influenced survey science from the solar system through
supernovae and AGN variability.
The SkyMapper survey (Keller et al., 2007) is
now producing its first public data products, completing the multi-color coverage of the
whole sky (Wolf et al., 2018).
Other surveys on large aperture telescopes such as the Dark Energy Survey
(Dark Energy Survey Collaboration et al., 2016)
and HyperSuprimeCam
(Moriya et al., 2018)
are now playing a major role with exceptional depth and photometric
performance over smaller sky areas.
At the other end, novel use and
fast processing of data from small 14cm lens systems by the
All-Sky Automated Survey for Supernovae (ASAS-SN; Holoien et al., 2017)
have been impressively productive, providing some rare and surprising finds
(Dong et al., 2016).
From the tens of centimeters to 10m sized apertures,
survey astronomy truly has changed in the last few
years; a revolution that has made it into orbit with
ESA’s Gaia facility using its scanning capability to produce transient
alerts (Walton et al., 2015; Gaia Collaboration et al., 2016).
ATLAS was proposed as a replicable system that NASA could use to find dangerous
asteroids, and optimization for the NASA mission opens synergistic
opportunities for many other types of science
(Tonry, 2011).
Predicting
asteroid collisions with Earth places constraints on system capability,
for example, warning of at least one day for a $\sim$1 Mton explosion
requires all-sky monitoring at a sensitivity of $m>19$. Funded in
2013, ATLAS achieved first light in June 2015 and now consists of two
independent units, one on Haleakala (HKO), and one on Mauna Loa (MLO) in
the Hawai‘ian islands.
A number of papers have been written about sky survey design and optimization
including
Tonry (2011),
Terebizh (2011, 2016),
and
Bellm (2016).
Tonry (2011)
summarized survey performance in terms of a “survey speed”
that expresses the rate at which objects can be observed
to a limiting magnitude $m$ with signal to noise ratio SNR. In the
background-limited Poisson regime, for objects randomly distributed on the sky, this becomes
$$SS=\frac{A\;\Omega_{0}\;\epsilon\;\delta}{\omega}\;10^{+0.4(\mu+m_{0})}=\frac{\hbox{SNR}^{2}\;\Omega}{t_{cad}}\;10^{+0.8m}$$
(1)
where $A$ [m${}^{2}$] is the collecting area, $\Omega_{0}$ [deg${}^{2}$] is the
solid angle covered by the detector, $\epsilon$ is the efficiency for
light to be detected (relative to a fiducial $m_{0}=25.10$ that
provides 1 photon per second per m${}^{2}$ per 0.2 in natural log of
bandpass), $\delta$ is the duty cycle over cadence time $t_{cad}$ [sec]
that the shutter is open, $\omega$ [arcsec${}^{2}$] is the point spread function (PSF) noise
footprint solid angle (essentially $3.5d^{2}$ where $d$ is the PSF
FWHM), $\mu$ is the sky brightness [mag/arcsec${}^{2}$], and $\Omega$
[deg${}^{2}$] is net solid angle surveyed during $t_{cad}$. In effect,
the left hand side of equation 1 describes a survey in design,
how well it ought to perform, the right hand side describes a survey
in operation, how well it actually performs.
Because this equation describes an extensible quantity, it is possible
to examine tradeoffs, such as doubling the collecting area or building
two identical systems in order to double the rate at which objects can
be found. Less obvious trades that double the rate include halving
the PSF footprint solid angle $\omega$, looking for objects that are
0.4 mag brighter, or lowering the SNR requirement.
Since the bottom line for many surveys is how many objects can be
surveyed per unit time and the bottom line for any project is how
productive it is per unit resource, the metric by which a survey
project should be judged is survey speed per unit cost.
Tonry (2011)
showed that an array of 0.25 m astrographs could
inexpensively observe all sky each night to $m\sim 19$.
The actual implementation of the
funded ATLAS program employs a single 0.5 m Schmidt telescope rather
than an array of smaller telescopes. This modification resulted from
a coupling of the optimization between telescope and detector cost — in the
original design the marginal cost of detector improvement (prior to the availability of the latest generation of 10k CCDs) was far
higher than that of telescope improvement.
As we will describe in this paper, for one ATLAS unit the relevant design numbers are
$A=0.14$ m${}^{2}$ (including vignetting),
$\Omega_{0}=29$ deg${}^{2}$,
$\epsilon=1.25$ for transmission through atmosphere, optics,
detector QE, and $o$ filter bandpass width,
$\delta=0.75$,
$\omega=52$ arcsec${}^{2}$ for 2 pixel FWHM PSF,
$\mu=20.7$ mag/arcsec${}^{2}$, for a predicted speed of $\log(SS)=17.15$.
What an ATLAS unit actually achieves in a 30 sec
exposure and 40 sec cadence is $m_{5\sigma}\sim 19.7$, corresponding to a survey speed of
$\log(SS)=17.02$, a bit less than Eq. 1 predicts because Eq. 1 does
not include the terms for read noise or dark current.
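Both sides of Eq. 1 can be evaluated directly from the numbers above. The short check below is a sketch: it assumes SNR = 5 (the $5\sigma$ limit) and a net $\Omega=\Omega_{0}$ surveyed per cadence, readings of the text rather than values stated as such:

```python
import math

# Design side of Eq. 1: SS = (A * Omega0 * eps * delta / omega) * 10**(0.4*(mu + m0))
A, Omega0, eps, delta = 0.14, 29.0, 1.25, 0.75   # m^2, deg^2, throughput, duty cycle
omega, mu, m0 = 52.0, 20.7, 25.10                # arcsec^2, mag/arcsec^2, fiducial mag
ss_design = (A * Omega0 * eps * delta / omega) * 10 ** (0.4 * (mu + m0))

# Achieved side: SS = SNR^2 * Omega / t_cad * 10**(0.8*m), 30 s exposure / 40 s cadence
snr, Omega, t_cad, m = 5.0, 29.0, 40.0, 19.7     # assumed SNR and per-cadence area
ss_achieved = snr ** 2 * Omega / t_cad * 10 ** (0.8 * m)

print(round(math.log10(ss_design), 2))    # close to the quoted log(SS) = 17.15
print(round(math.log10(ss_achieved), 2))  # -> 17.02
```

The achieved-side value reproduces the quoted $\log(SS)=17.02$ exactly; the design-side value lands within rounding of 17.15, the small residual presumably reflecting rounding in the tabulated inputs.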
Autonomous operation is another requirement for a cost efficient
survey, as well as enabling the low latency processing and discovery
essential for impending impacts. For the NASA mission, ATLAS has built
a system that consists of summit operations, reduction pipeline, and a
science client that processes the output for moving objects. The
summit operations automatically close, open, and observe when
possible, following an automatic schedule. The reduction pipeline
calibrates the images, subtracts from them a static sky image, and
produces a table of detections of sources that have changed from the static sky.
The moving object science client waits until multiple observations
arrive for a given area on the sky and then links detections into
plausible asteroid tracklets. Candidate unknown asteroids are screened by
a human for accuracy and are then posted to the Minor
Planet Center, which coordinates followup of unknown near-Earth asteroids.
Other science clients also tap the results from the sky subtracted
images. Using computer resources at Queen’s University Belfast (QUB), we
search the same detection tables for
stationary transients. These stationary transients are spatially matched against star, galaxy,
active galactic nuclei, and QSO catalogues. The variable objects are filtered out, leaving
supernova candidates which are automatically reported publicly to the International Astronomical
Union (IAU) Transient Name Server (Smith et al. in prep.). We also mirror all the ATLAS raw data at QUB as a safe,
off-site backup. ATLAS is designed and operated to be optimal for asteroid discovery,
but the sky survey synergistically contributes significant results in many other science areas.
•
Among regionally dangerous ($>30\mathrm{m}$) asteroids detected during very close approaches ($<0.01\mathrm{AU})$ to the Earth, ATLAS detects as many or more than any other asteroid survey, demonstrating its successful optimization as a ‘Last Alert’ system for potential impactors.
•
Up to the end of 2017, ATLAS reported 1175 candidate supernovae to the IAU Transient Name Server 111https://wis-tns.weizmann.ac.il. Notable discoveries include detection of the shock break-out signature of SN2016gkg (Arcavi et al., 2017a) and the discovery of the unusual interacting type Ic supernova SN2017dio (Kuncarayakti et al., 2017).
•
Since 1 Jan 2016, ATLAS has discovered the most spectroscopically classified transient objects (302, compared to ASAS-SN’s 280), as reported in the TNS. This is enabling a host of ongoing science projects. For example, through an alliance with the Public ESO Spectroscopic Survey for Transient Objects
(PESSTO; Smartt et al., 2015), ATLAS provides young supernovae for the Foundation Supernova Survey (Foley et al., 2018), working to create a definitive low redshift type Ia supernova sample to anchor cosmological analyses.
•
The large nightly ATLAS sky footprint has allowed searches for counterparts of gravitational wave sources from the LIGO - Virgo collaboration. During the first two observing runs ATLAS was a signatory to the agreement to share triggers. We searched for possible bright counterparts to binary black hole (BBH) mergers and discovered the afterglow of a gamma ray burst (GRB) before the high energy source was localized on the sky (Stalder et al., 2017). This object, ATLAS17aeu, was discovered within the sky map of GW170104 (Abbott et al., 2017a), but is likely an unrelated GRB exploding 24hrs after the gravitational wave trigger.
This is only the third GRB afterglow detected independently of a high energy trigger (the others discovered by Cenko et al., 2013, 2015).
•
A merging neutron star system produced the source GW170817 (Abbott et al., 2017b, c) and was accompanied by the discovery of an optical and near-infrared bright kilonova. It was discovered in NGC4993 at a distance of only 40 Mpc by several telescopes as soon as night fell in Chile (Arcavi et al., 2017b; Coulter et al., 2017; Lipunov et al., 2017; Soares-Santos et al., 2017; Tanvir et al., 2017; Valenti et al., 2017). ATLAS had been continually observing NGC4993 until 16 days before GW170817, and we showed it was not a variable source over the previous 601 days in (Smartt et al., 2017, see also Valenti et al. 2017).
ATLAS will provide meaningful limits
on the rate of kilonovae (irrespective of GW triggers) within 60 Mpc (Scolnic et al., 2018, and Coughlin et al. in prep).
•
During its first two years ATLAS observed 140 million stars hundreds of times and
has detected variability (pulsation, rotation, occultations, outbursts) in 5 million
objects (Heinze et al. in prep.). We will be releasing these lightcurves through the Mikulski Archive for Space Telescopes (MAST). The ensuing 6 months of observations have doubled the number of detections and increased the number of stars to 240 million, and there will be periodic data releases and updates.
•
Asteroid characterization: color, rotation, volatile emission, and collisions are all
measurable in the ATLAS lightcurves.
•
ATLAS regularly detects satellites in geosynchronous orbit and beyond, and our multiple
observations allow us to determine accurate 3D positions and velocities.
The NEA optimised survey strategy employed by the first two ATLAS units is equally good for transients and variables. We view the ATLAS unit as an inexpensive, reproducible system that could be deployed at sites judiciously separated in latitude and longitude to give 24hr, all sky coverage to $m\sim 20$ with a 1-day multi-exposure cadence.
This paper is the first in a series describing the ATLAS hardware and software systems. It gives a broad overview of all the components that make the survey functional; more specialised papers are in preparation giving details of the subsystems.
2 Enclosure
We considered a number of possible enclosures for ATLAS including
traditional Ash domes with an over-the-top shutter, clamshell designs
such as Astrohaven, and enclosures with roll-off roofs.
We even designed “ATLAS-in-a-can”, an ATLAS unit in a
commodity truck with a fold-off roof and a hole in the floor so the
mount could be lowered onto a solid pedestal. As far as we know,
“ATLAS-in-a-can” would perform well; with a very compliant shipping
truss holding the ATLAS telescope and mount within the truck, plus
the truck’s suspension, transportation would be simple and safe.
Concerns over wind buffeting, water leakage in severe storms, ambient
light and overall reliability led us to choose Ash domes as the ATLAS
enclosure. Because we must operate autonomously, reliability is
an extremely important consideration.
We do create unusual stress on the Ash dome by rotating every
$\sim$40 sec, which leads to bolts loosening, so threadlocking adhesive on most fasteners is a required upgrade from the nominal Ash construction.
The dome is a standard 16.5 foot diameter half-sphere over an 8 foot tall cylinder. The pier is offset south from the center by 14 inches and is 41 inches tall and 30 inches diameter. Steel rebar is epoxied into the slab to provide stiffening and support for the concrete pier.
We modified the Ash dome by adding I-beam mount points for a 500 pound rated chain hoist and by putting a Canarm 20-inch exhaust fan in
the wall which has motorized louvers that close when not in operation.
During daylight hours in good weather we open the dome slit by 6 inches for air intake and run the exhaust fan continuously. This keeps the interior of the dome close to ambient temperature. Without the fan, the sun beating down on the aluminum would greatly increase the air temperature inside the dome, resulting in bad seeing during the early part of the night and unnecessary thermal stress on the equipment.
The Ash domes use a servo loop controlled stepper motor, and therefore
have precise acceleration and movement. The absolute zero position is
set by a switch that is engaged at a particular position and the position thereafter is known by counts. The domes use slip rings to
get power to the shutter motor, so have no limitations on rotation.
We have some concerns that our $\sim$900 dome moves each night may be
causing inordinate wear, but so far our monthly maintenance has
revealed no more than bolts vibrating loose and needing to be retightened.
The dome angular velocity is about 3.75 deg/sec and about 2 sec is
spent in acceleration, so the dome can move $\sim$25${}^{\circ}$ in the
$\sim$9 sec of CCD readout and shutter overhead.
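The quoted travel can be checked with a simple trapezoidal velocity profile; the symmetric $\sim$2 s deceleration ramp is our assumption, so this is an estimate rather than the measured motion.

```python
# Estimate how far the dome can rotate during CCD readout, assuming a
# symmetric trapezoidal velocity profile (speeds and times from the text;
# the 2 s deceleration phase mirroring the acceleration is an assumption).
V_MAX = 3.75   # deg/s, dome angular velocity
T_RAMP = 2.0   # s, acceleration (and assumed deceleration) time
T_TOTAL = 9.0  # s, CCD readout plus shutter overhead

def dome_travel(t_total, v_max=V_MAX, t_ramp=T_RAMP):
    """Angle covered in t_total seconds with linear ramps at both ends."""
    cruise = max(t_total - 2.0 * t_ramp, 0.0)
    # the two ramps together cover v_max * t_ramp degrees
    return v_max * t_ramp + v_max * cruise

print(dome_travel(T_TOTAL))  # ~26 deg, consistent with the quoted ~25 deg
```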
In addition to the normal electrical wiring required, we built a
“mezzanine loft” and stairs that permit easy access to the telescope. The following equipment
supports the operations of mount, telescope, and camera:
•
Switch and fiber connections to our “computer room”.
•
Various “low power” industrial computers: one running
Windows to interface to the DFM telescope (see Section 4), another running Linux with a dedicated Ethernet link to the camera controller.
•
Various Raspberry Pi computers to provide IP network interfacing with individual devices (dome, Canon cameras, etc.)
•
Thermotek T-200 water chiller for CCD cooling.
•
Puregas CDA-10 dehumidifier for mitigating moisture condensation on the camera window.
•
A pair of webcams, a microphone, and an iPod to monitor the inside of the dome and communicate with personnel working at the summit.
•
Keyboard and monitor for on-site manual control and system monitoring.
•
A “fail safe” Raspberry Pi that uses a Hydreon rain sensor
and monitors the electrical power to the dome. If the power fails or the Hydreon reports
rain or mist this computer closes the dome immediately.
•
Various UPS units to provide temporary power while the observatory safely shuts down automatically.
•
A fisheye camera and meteorology box located nearby provides environmental telemetry (see Section 7).
3 Mount
We considered an equatorial mount to be
essential, since an az-alt mount and image rotator adds significant risk of technical failure.
Furthermore,
the degraded performance near the
zenith for an az-alt telescope causes problems for an all-sky survey.
The ATLAS mount is a German equatorial mount (GEM) built by APM Telescopes
of Saarbruecken Germany. We considered a fork mount, but for a
telescope of this size a GEM is simpler, less expensive, and the advantage of cable routing for a fork mount
is not a significant factor at this scale. The APM mount has 7 large counterweights
of about 35 kg apiece to counterbalance the mass of the
telescope.
This mount is very fast. The slew velocity is 15 deg/sec,
and for moves smaller than 45 deg the time to slew and resume tracking
is $6.5\pm 0.8$ sec, comfortably less than the CCD readout time. A
meridian flip requires a rotation of $\sim$180${}^{\circ}$ in both axes and
typically takes 25 sec. Our scheduling software is mindful of the
cost of a meridian flip and minimizes their number.
There are small issues with servo loop stability that occasionally
cause some image elongation. Re-tuning the servo parameters
cures this, but we do not fully understand why it returns. Winds higher than
40 km h${}^{-1}$ can buffet the large shutter at the top of the telescope depending on
dome position, elongating the images.
(The wind speed is measured by a Boltwood sensor; ATLAS is allowed to open at speeds below 30 km h${}^{-1}$ and is required to close at speeds above 60 km h${}^{-1}$.)
The APM mount can track in both axes, and we have an elaborate and
accurate mount model. Since our exposures are short, we do not
need to have perfect polar alignment.
4 Telescope
Our telescopes were designed and built by DFM Engineering of Longmont, Colorado. They are a
variant of a “Wright Schmidt”, comprising a 0.5 m Schmidt
corrector, a 0.65 m spherical primary mirror, a three element field
corrector, a filter, the cryostat window, and the detector.
These are illustrated in Figure 2.
The overall focal length is 1.0 m, for a system $f$-ratio of f/2.0. The
optics perform well over a field diameter of about 7.5${}^{\circ}$, and are
designed to have modest chromatic aberration over the broad cyan bandpass ($c$, covering
420-650 nm) used by ATLAS, but images are distinctly sharper
in our redder survey bandpass, called orange
($o$, 560-820 nm).
The oversized primary mirror minimizes vignetting. The field corrector assembly and camera housing
shown in Figure 2 has a maximum diameter of
250 mm, which determines the amount of central pupil obscuration.
The first telescope was installed on Haleakala in Jun 2015 and the
second in Feb 2017 on Mauna Loa. The figure on the Schmidt correctors
was not perfect however, and the delivered image
quality on the focal plane for the first
telescope was about 3.8 pixels FWHM (7${}^{\prime\prime}$)
when initially installed. Efforts to improve
the second corrector at DFM were not successful so we initiated a
contract with Coherent Technologies (Tinsley) for a pair of corrector
lenses. These were installed in May 2017 on both ATLAS units and the telescopes were
collimated. The optics on Haleakala do an excellent job, producing
images slightly better than 2.0 pixels (3.5${}^{\prime\prime}$), but there is some
residual astigmatism in the corrector on Mauna Loa, so the best images
are about 2.8 pixels (5${}^{\prime\prime}$). The astigmatism rotates with the
Schmidt corrector, but was not apparent in the Tinsley test results so
we are currently puzzled about its origin.
Collimation of the telescope is accomplished by adjusting push-pull screws
attached to the four ends of the spider assembly. We
have written ray tracing software that calculates out-of-focus donuts
as a function of screw turns, so collimation proceeds by taking an
out-of-focus image, assembling a mosaic of donuts across the field of
view, judging from ray traces how many screw turns are required to
correct the donuts, and iterating. We believe it prudent to keep
human judgement in this collimation loop. Once collimated the
telescopes seem to hold their adjustment very well.
The focus is performed using an absolute encoder that seems very
accurate, and the telescope has an athermal design so there is
extremely little focus shift as a function of temperature. The final
adjustment is the tip-tilt of the detector with respect to the focal
surface, and this is adjusted using the motors within the cryostat.
In-focus image elongation is a sensitive diagnostic of detector tilt.
DFM also designed and provided a full aperture shutter and a filter
changer. The shutter uses bi-parting blades that are carefully balanced
to exert no force or torque on the telescope. At this time the
shutter on Haleakala has operated nearly a million times and shows no
sign of wear or degradation. The DFM filter changer comprises a
cassette that holds 8 filters in frames and lifts them to an insertion
mechanism that advances them into a slot between the last field corrector
lens and the camera.
ATLAS filters are 125 mm square and 9 mm thick. We use broad band
filters for our normal asteroid search, a “cyan” ($c$) band from
420–650 nm, an “orange” ($o$) band from 560–820 nm, and a
“tomato” ($t$) band from 560–975 nm intended to be differentially
sensitive to the silicate band of stony asteroids relative to $o$
band. Haleakala normally switches between $c$ and $o$ during survey operations in a
lunation, whereas Mauna Loa stays in $o$ or $t$.
Table 1 provides details of our primary filters as best we
currently know them.
We also have a set of filters in standard bandpasses, including one set of
Johnson/Cousins filters $B$, $V$, $R_{c}$, $I_{c}$,
and one set of $g$, $r$, $i$, $z$ which are similar to
SDSS and Pan-STARRS1 (Fukugita et al., 1996; Tonry et al., 2012).
ATLAS also has Skymapper-like ultra-violet filters
$u$ and $v$ (Bessell et al., 2011)
and two narrow band filters centered to trace $H\alpha$, and
[O iii]. Discussions with the Skymapper team led us to
adjust the center and widths of $u$, $v$ and $g$ to ensure
better delineation than those of Bessell et al. (2011).
The $o$, $c$, Johnson, and $H\alpha$ filters were
provided by Materion (Barr), and the rest by Asahi. Details
found in Table 1 and Table 2 are calculated from manufacturer’s curves
for the filters, AR coatings, 1.2 airmasses of atmosphere, 0.92
reflectivity of overcoated aluminum, and the measured detector QE.
The bandpasses have been adjusted for the ATLAS f/2 beam using an
effective index of $n=2$, but no in-situ measurements have been made.
Note that the $u$, $v$, and $z$ filters have low
transmission because the field corrector AR coatings are very
reflective outside of 380–850 nm. Should more ATLAS units be built
we intend to open up the IR and UV transmission of the optics.
Table 2 summarizes the parameters of all the ATLAS bandpasses.
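The effective-index adjustment mentioned above can be illustrated with the standard tilt formula for interference filters, which blueshifts the central wavelength of rays arriving off-axis in the fast f/2 beam. The 600 nm reference wavelength and the use of the marginal-ray angle are illustrative choices, not the manufacturer's actual bandpass model.

```python
import math

def shifted_wavelength(lam0_nm, theta_deg, n_eff=2.0):
    """Central wavelength of an interference filter at incidence theta,
    using the standard formula lam = lam0 * sqrt(1 - (sin(theta)/n)^2)."""
    s = math.sin(math.radians(theta_deg)) / n_eff
    return lam0_nm * math.sqrt(1.0 - s * s)

# Marginal ray half-angle of an f/2 beam: arctan(1 / (2 * 2)) ~ 14 deg
theta = math.degrees(math.atan(1.0 / 4.0))
print(shifted_wavelength(600.0, theta))  # ~595.6 nm, a ~0.7% blueshift
```

Averaging such shifts over the full f/2 cone is what motivates quoting a single effective index $n=2$ for the in-beam bandpass.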
5 Camera
The ATLAS camera had to satisfy a number of requirements that could not
be met by any commercial product, so we designed and built it in house.
The requirements we imposed included:
•
Fill as much of the 130 mm diameter optical field of view as
possible, but sample the expected PSF of 7 $\mu$m RMS with pixels no
larger than 10 $\mu$m.
•
Read out in less than 10 sec so that the duty cycle for a
30 sec exposure is no worse than 0.75, with a read noise of no
more than 10 e${}^{-}$.
•
Camera diameter can be no larger than 200 mm, length no
longer than 200 mm, mass no more than 7 kg.
•
The cryostat window is no more than 10 mm thick and the distance to the detector
surface is 6 mm.
•
The detector must be colder than $-50$ C for dark
current to be negligible compared to the sky background.
•
There must be a means to remotely tip and tilt the detector
to align with the f/2 focal surface.
•
All connections to the camera must pass along a 3/4-inch
channel on a spider vane, and the detector controller may be as distant as 0.5 m.
•
The cryostat must be capable of maintaining vacuum of
1 mtorr or less for at least a year (preferably much longer) because
of the difficulty in extracting the camera from the center of the
telescope.
After a competitive procurement we selected STA (Semiconductor Technology Associates of San Clemente,
California) as the vendor for the
CCDs, and chose their STA-1600 as the ATLAS detector. This is a
monolithic CCD with 10560$\times$10560 9 $\mu$m pixels, thinned,
passivated, and AR coated by the Imaging Technology Laboratory of the University of Arizona. STA also provided the controllers
and cables. We collaborated closely with STA, both on the mechanical
and electrical interfaces of fitting the CCD inside the cryostat as
well as tasking STA to build a custom board for ATLAS auxiliary
functions such as temperature and pressure monitoring, thermoelectric
cooling, and piezoelectric motor operation.
The cryostat consists of a 6-inch “bell jar” that has a 3/8-inch
fused silica window brazed on one end, and a standard CF-8 flange on
the other end which is the solid base plate on which the internals are
mounted. Outside the baseplate are a metal seal vacuum valve, an MKS
micro-Pirani vacuum gauge, a Modion ion pump, a warm zeolite getter,
and connectors for cooling water.
Inside the cryostat are a pair of water-fed heat exchangers,
a pair of two stage thermoelectric coolers (TEC), and a pair of pyrolytic
graphite cold straps to a cold plate that carries the CCD. The heat
exchangers have bistable thermal switches to disconnect TEC power at
+50 C if water flow is interrupted. The cold
plate is mounted on top of a flexure and is pulled down by a set of
springs and pushed up by a trio of “picomotors”, vacuum rated units
that use piezoelectric slabs to turn a fine pitch screw with nanometer
precision. They have an enormous travel, 12 mm in our case, and are
electrically inert when not in use. The position of the cold plate is
monitored using linear Hall sensors that measure the radial field near
the center line of a cylindrical magnet. Although we close the
position loop by observing stars, we calibrate these Hall sensors in
the lab using a microscope to observe the cold detector through the
window, and we achieve absolute accuracies of about 1 $\mu$m.
A pair of printed circuit boards from STA are mounted within the
cryostat and are cooled by the heat exchangers. These buffer drive signals to
the CCD and convert the CCD output to true differential signals that
pass along the cable to the controller. This provides us with
excellent noise immunity and we see no interference of any sort, even
when the ion pump or picomotors are active.
The CCDs are set to a gain of about 2 e${}^{-}$/ADU, and they have a full
well in excess of 80,000 $e^{-}$. We read out at 1 MHz through 16
amplifiers so the total read time is about 9 sec. With our normal
30 sec exposure this gives a shutter-open duty cycle of about
75%. The read noise is about 11 e${}^{-}$ and in a 30 second exposure
the typical moonless sky background is about 300 e${}^{-}$ in $c$ band and
350 e${}^{-}$ in $o$ band, so the read noise degrades our SNR by about
16-18%.
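The quoted 16-18% degradation follows directly from adding the read noise in quadrature with Poisson sky noise on a per-pixel basis; a minimal check using the numbers above:

```python
import math

def snr_degradation(sky_e, read_noise_e):
    """Fractional loss in per-pixel SNR relative to a noiseless readout,
    assuming Poisson sky noise adds in quadrature with read noise."""
    return math.sqrt((sky_e + read_noise_e ** 2) / sky_e) - 1.0

for band, sky in (("c", 300.0), ("o", 350.0)):
    print(band, round(100.0 * snr_degradation(sky, 11.0), 1))
# c -> ~18.5%, o -> ~16.0%, matching the quoted 16-18% range
```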
The quantum efficiency and total throughput through our two primary
filters are illustrated in Figure 3.
The dark current in our STA1600 CCDs has quite a bit of pattern to it,
particularly between 8 horizontal bands. The average is about
0.8 e${}^{-}$/pix/sec at a CCD temperature of $-50$ C, rising a factor of 2
every 5 K. The thermoelectric coolers keep the CCDs at a temperature
of about $-53$ C, so dark current is not a significant contributor to
noise in our wide 250 nm $c$ and $o$ filters, but is a concern for
our 10 nm wide narrow band filters or our UV filters. The water to
the heat exchangers in the cryostat is cooled to $+6$ C by a Thermotek
T-200 chiller, and the detectors cool by 2 K for each 3 K decrease in
water temperature, so it is feasible to lower the dark current to
about 0.2 e${}^{-}$/pix/sec (below the sky rate) if needed.
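A short sketch of the stated scaling (0.8 e${}^{-}$/pix/s at $-50$ C, doubling every 5 K) shows the operating-point dark current and the temperature needed for the 0.2 e${}^{-}$/pix/s floor:

```python
def dark_current(temp_c, d_ref=0.8, t_ref=-50.0, doubling_k=5.0):
    """Dark current in e-/pix/s, doubling every 5 K (values from the text)."""
    return d_ref * 2.0 ** ((temp_c - t_ref) / doubling_k)

print(dark_current(-53.0))  # ~0.53 e-/pix/s at the normal operating point
print(dark_current(-60.0))  # 0.2 e-/pix/s, the quoted achievable floor
```

Reaching $-60$ C requires a further 7 K of detector cooling, i.e. about 10.5 K of additional water cooling at the stated 2 K per 3 K ratio.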
Detector flatness is a concern in our fast, f/2 beam.
Scans of the surfaces of Acam1 (HKO) and Acam2 (MLO) through the cryostat window
when cold reveal RMS deviations from flatness of 4 $\mu$m for Acam1 and 9 $\mu$m
for Acam2, and peak-to-peak deviations of 18 $\mu$m and 36 $\mu$m respectively.
Although Acam2 is a factor of 2 worse than Acam1, the maximum blur
circle is only 9 $\mu$m (1.8${}^{\prime\prime}$) and the RMS is half of that.
These CCDs are cosmetically quite good: Acam1 has a region with bad
CTE of about 0.2% of the area and Acam2 has a few blocked columns.
The thinning leaves some artifacts but these flatten quite well; about
1% of the area is lost to the thinning border. Both of these CCDs
have peculiar, low level flattening artifacts that are particularly
evident in some of the horizontal sections. Acam1 has little
“dipoles”, where charge from one pixel is borrowed by the one
immediately below it, and Acam2 has a “bamboo forest”, where
adjacent columns seem to exchange charge in a wavy pattern. It is
believed that these are the result of CCD manufacturing masks with
insufficient resolution, and more recent CCDs are better. These artifacts
flatten quite well so for our purposes they are not
important.
Fast readout confers other problems such as bias levels that have a
“plaid” pattern and must be corrected by using both serial and
parallel overclocks, and cross-talk. Our $16\times 16$ cross-talk
matrix shows values ranging between $2\times 10^{-4}$ to
$3\times 10^{-5}$, with prominent correlations between amplifiers and
video boards. The cross-talk diminishes dramatically with slower
clocking and we had some success in reducing it by carefully selecting
pedestal and signal samples. We mark our output mask for each image
with a bit indicating pixels that might be under the influence of
cross-talk from some bright star, and we subtract a fraction of the
image from itself according to the cross-talk matrix, but
the cross-talk is not a serious issue.
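The matrix-based correction described above can be sketched as follows, assuming the image is handled as a stack of per-amplifier subimages; the actual ATLAS amplifier geometry and bookkeeping will differ.

```python
import numpy as np

def correct_crosstalk(amps, xtalk):
    """Subtract cross-talk ghosts from a stack of amplifier subimages.

    amps  : (n_amp, ny, nx) array of amplifier readouts
    xtalk : (n_amp, n_amp) matrix; xtalk[i, j] is the fraction of
            amplifier j's signal leaking into amplifier i
    """
    m = xtalk.copy()
    np.fill_diagonal(m, 0.0)  # an amplifier does not ghost itself
    # victim i receives sum_j m[i, j] * amps[j]; remove that estimate
    ghosts = np.tensordot(m, amps, axes=([1], [0]))
    return amps - ghosts

# Toy example: amp 0 leaks 2e-4 of its signal into amp 1
amps = np.zeros((2, 4, 4))
amps[0] += 10000.0
amps[1] += 2.0
xt = np.zeros((2, 2))
xt[1, 0] = 2e-4
clean = correct_crosstalk(amps, xt)
print(clean[1, 0, 0])  # ~0: the 2 e- ghost is removed
```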
With no elastomer seals, and thanks to the ion pumps, the cryostats
hold vacuum very well; the cryostat on HKO was last pumped three years ago
and shows no sign of leakage.
6 Computers and software
Because we want our system to be able to run autonomously even when
there is no internet connection to the observatory, we maintain a rack
of computers on each summit.
This rack is in a “computer room” (in
fact just a reasonably dry and safe place separate from the dome), and
carries an ethernet switch and six 1U “Supermicro” server computers. One is
dedicated to be our gateway, one is an “admin” computer that is
devoted to running the camera, and four are general purpose compute
nodes. These computers each have about 12 TB of RAID1 disk, 24 cores,
and 128 GB of memory, so they are very capable and each can save
approximately a month’s worth of observations. They provide
substantial redundancy in case of failure.
A normal night of observation produces approximately 900 images from
each of the main cameras, the auxiliary 35mm cameras, and the fisheye
35mm cameras, for a total of about 150 GB of raw, compressed data per
night.
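The nightly data budget is straightforward to estimate; the 16-bit raw pixel size and the $\sim$2$\times$ lossless compression factor below are our assumptions, while the 900 images per night are from the text.

```python
# Rough nightly data budget for one ATLAS unit's main camera.
NPIX = 10560                      # STA-1600 is 10560 x 10560
BYTES_PER_PIX = 2                 # assumed 16-bit raw pixels
raw_mb = NPIX * NPIX * BYTES_PER_PIX / 1e6
nightly_gb = 900 * raw_mb / 1e3   # uncompressed main-camera total
compressed_gb = nightly_gb / 2.0  # assumed ~2x lossless compression
print(round(raw_mb), round(nightly_gb), round(compressed_gb))
# ~223 MB/image, ~200 GB/night raw, ~100 GB compressed before the
# auxiliary and fisheye cameras, consistent with the quoted ~150 GB total
```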
To date our bandwidth has been adequate, so our minimum latency
strategy has been to perform the first reduction steps of computation
at the observatory and the image subtraction and science processing at our
“base cluster”. This strategy doubles the volume of data that
needs to be stored at the observatory and copied, but it allows us to
operate when bandwidth is poor or non-existent, and it avoids a future
bottleneck if ATLAS units were to proliferate. We have a hard requirement
of a latency of
less than an hour from shutter close to final results because that is
the interval between the first and last observations of each field on
a given night (we observe 4 or 5 times at each position across a 1 hour
period). If bandwidth becomes an issue it is simple to carry out all the reductions at
the observatory and copy only the final detection tables in real time,
using the daylight hours to copy the images.
Our “base cluster” in Honolulu
currently consists of 16 of these 1U Supermicro
compute nodes and five 4U storage computers. The storage computers carry
24 disks apiece with hardware RAID6, and each provides 120 TB (160 TB
with more modern 8 TB disks) of storage.
A 1U node costs less than $3,000, and a 4U storage computer
costs about $12,000. This is far less expensive than
cloud storage and computation, and although we foresee eventually
moving our data and processing to the cloud, it is not cost effective
to do it at present.
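The cost comparison reduces to a simple per-terabyte figure, using the prices quoted above:

```python
# Back-of-the-envelope cost of local storage (prices from the text).
storage_cost_usd = 12000.0  # one 4U storage computer
usable_tb = 120.0           # RAID6-protected capacity per computer
print(storage_cost_usd / usable_tb)  # $100 per usable, redundant TB
```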
We have adopted a philosophy of “less is better” as regards
scripting languages, so we avoid high-level languages such as Perl, Python, Java, or other
variants with complex dependencies so that we may provide a simplified computing environment. We have
instead restricted our diversity of languages to C, bash and the usual
Unix tools, and Google’s Go language (https://golang.org/) for the TCS. This confers
significant benefit in terms of stability, computation efficiency, and
most critically comprehensibility between the various software
developers and users. By judicious creation of “Unix tools”,
meaning programs that are designed to be used like any other Unix
utility and have a man page, we have not only managed to get efficient
code written efficiently, we have also created a system that is more
portable, agile, and less complex than the usual GUI-oriented, scripting
language rat’s nest.
Our software systems are described in detail elsewhere (Denneau et al. in prep.), but broadly
speaking our components are:
•
A telescope control system (TCS) deployed as a collection of lightweight executables written in Go that share system state through a Redis database.
•
A scheduler that creates the desired pointings for each
night and executes them.
•
A reduction pipeline that is responsible for converting raw
camera files into flattened, calibrated images.
•
An image subtraction pipeline that matches all-sky reference images (that we internally
call “wallpaper”)
to each image, subtracts it, and finds all the remaining sources. This employs
a modified version of hotpants for image subtraction (Becker, 2015).
•
“Science clients”, specialized pieces of software that
receive final images or tables and perform object selection and scientific analysis.
•
A post-processing pipeline that executes processing that is
important but not time critical e.g. final photometry of all stars in a
reduced image.
Our primary science client is an adapted version of the Pan-STARRS Moving Object Pipeline System
(MOPS; Denneau et al. (2013)), that links detections from different images into a plausible
moving object and reports observations to the IAU Minor Planet Center (MPC).
Typical execution times for a 10k$\times$10k observation to be processed from telescope pointing to MPC reporting
are shown in Table 3.
The photometry and detection programs
dophot
(Schechter et al., 1993; Alonso-García et al., 2012)
and tphot
(described in a future paper; performance reported by Sonnett et al. (2013))
are multithreaded, so the elapsed time is less than the CPU time.
Thus the total processing time, from the moment the telescope settles
on the field center and the shutter opens
to the delivery of an object catalogue with calibrated photometry and astrometry, is
approximately 40 minutes
of CPU time and 25 minutes of real time. The code could be optimized
for somewhat better performance, but right now we are focused on higher priority
development such as better science processing, better scheduling, and
documentation.
The deep
photometry of all stars in the reduced image is an example of
post-processing that is not required for our time critical science
client, and this runs on some of the redundant summit computers.
A secondary science client is the ATLAS Transient Server that runs on
a computer cluster at Queen’s University and links individual stationary detections
into objects and reports supernova candidates to the IAU Transient Name Server
(Smith et al, in prep.). It runs after the final detection table is created and
uses the same input files as the moving object pipeline.
7 System Features
We have implemented a number of features to support autonomous
operations and to ensure high quality data. For meteorological
information we have a small “metfish” that consists of a Boltwood
CloudSensor system, a Garmin GPS, and a Canon 10mm f/4 fisheye lens on a Canon
5DIII body. This “metfish” is a watertight box that uses an AR-coated
glass dome intended for underwater diving to protect the lens. The
Boltwood system reports temperature, wind speed, humidity, rain, and
uses a thermal IR pixel to examine the sky for clouds on a 1 second
cadence. The fisheye camera in our “metfish” is similar to commercial
cameras (http://www.alcor-system.com/new/index.html),
but with additional functionality and software. A “metfish” costs
about $10,000. We use the GPS to synchronize a Stratum 1 time server, and
Network Time Protocol (NTP) brings
the rest of our computers to absolute time with an accuracy of a few
microseconds, regardless of network connectivity to the external world.
We use the geosynchronous satellite Galaxy-15 to calibrate our shutter
latency. Because Galaxy-15 broadcasts a GPS augmentation (WAAS) signal, its
position is tracked at the centimeter level by JPL and published by
the FAA’s National Satellite Test Bed web site (http://www.nstb.tc.faa.gov/rt_waassatellitestatus.htm). We find that the
time between initiation of a shutter movement command and when the
shutter blade is half way across the aperture is $0.281\pm 0.017$ sec
(this uncertainty arises for each Galaxy-15 observation;
the repeatability suggests that the average delay is known better).
Obviously our absolute exposure accuracy is limited to a few
milliseconds, regardless of computer clocks that are accurate to the
nearest microsecond.
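The measured latency feeds directly into exposure timestamp bookkeeping. A minimal sketch follows; the exact ATLAS convention for recording exposure times is not described here, so this accounting is illustrative.

```python
# Mid-exposure time of an image, folding in the measured shutter latency
# (0.281 s from command to half-open blade, from the text).
SHUTTER_LATENCY = 0.281  # s

def mid_exposure(t_command, exptime=30.0, latency=SHUTTER_LATENCY):
    """Seconds (same clock as t_command) of the exposure midpoint."""
    return t_command + latency + exptime / 2.0

print(mid_exposure(0.0))  # 15.281 s after the shutter-open command
```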
The fisheye camera takes images every 5 minutes during the day, but at
night switches to 32 second exposures on a 40 second cadence.
The fisheye cameras at HKO and MLO are staggered by 20 seconds
so that one shutter is always open in Hawaii at night. These
images are treated as scientific data and subjected to the same
rigorous flattening, astrometric, and photometric calibration. We
achieve astrometric residuals of about 0.1 pixel, primarily limited by
the undersampled PSF and co-adding the color pixels into monochrome
super-pixels, and we can achieve photometric accuracy of 0.02 mag
for suitably bright stars.
The 5$\sigma$ limiting magnitude for a single fisheye image is about
$m=7$ (depending on declination and the degree of trailing), but we
have found that the sensitivity improves as $N^{1/2}$ for $N$ much larger
than 1000. Thus a source which has average magnitude over 40 seconds
of $m=7$ is detectable, likewise $m=9.5$ over an hour or $m=12$ over
an entire night. The fisheye pixels are about 4.3 arcmin, so very
faint sources are confused in stacked images, but image subtraction is
very effective at removing the non-varying background.
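Under the stated $N^{1/2}$ scaling, the stacked depth gain is $1.25\log_{10}N$ magnitudes. A quick check for the one-hour case, where $N\approx 90$ frames at the 40 sec cadence is our assumption:

```python
import math

def stacked_limit(m_single, n):
    """5-sigma limit after co-adding n frames, assuming pure sqrt(n)
    depth gain: m_lim = m_single + 2.5 * log10(sqrt(n))."""
    return m_single + 1.25 * math.log10(n)

# one hour at the 40 s cadence is ~90 frames (our assumption)
print(round(stacked_limit(7.0, 90), 1))  # ~9.4, close to the quoted 9.5
```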
Each fisheye image provides us with extinction measurements
for up to 10,000 stars and it is easy to quantify where clouds are,
how opaque they are, and how they are moving. We intend to develop
a scheduler that is responsive to this information.
Figure 4 illustrates the utility of the fisheye on a
night when partial clouds interrupted otherwise clear sky.
The fisheye
images are visible from our public website fallingstar.com.
Each dome is equipped with a Hydreon rain sensor, and each dome is powered
by a small 1350 VA UPS that has enough power to close the shutter
on a power failure. A Raspberry Pi monitors the Hydreon and the AC
power and closes the dome when necessary, regardless of what the rest
of the system is doing.
Finally, we mount a Canon 5DIII and Canon 135mm f/2 lens on
the telescope and take images synchronized with the main science
camera. The field of view is 15${}^{\circ}$$\times$10${}^{\circ}$, easily
encompassing our main field of view, and the 5$\sigma$ limiting
magnitude is about $m\sim 14$
in a 25 sec exposure. These data are processed exactly
like any other, and therefore the combination of the three optical
instruments allows us to monitor the sky over $0<m<20$.
A summary of the ATLAS camera system is in Table 4.
8 Performance Results
Excluding the borders of the CCDs, our net field of view on the ATLAS Acam
cameras is
5.375${}^{\circ}$ square for a total of 28.9 deg${}^{2}$, and the
900 exposures taken during a night cover 26,000 deg${}^{2}$. The
declination range $-45<\delta<+90$ encompasses
85% of the sky (35,065 deg${}^{2}$) and
25% of the sky lies within 60${}^{\circ}$ of the Sun, which is essentially
unobservable. This leaves about 24,500 deg${}^{2}$ of sky accessible on a given night.
A single ATLAS unit could therefore cover the entire accessible sky in
one night with a single 30 second exposure at each pointing.
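The coverage arithmetic above can be reproduced directly; the simple subtraction of the solar-exclusion cap ignores its overlap with the declination cut, so it gives $\sim$24,750 deg${}^{2}$, consistent with the quoted $\sim$24,500 deg${}^{2}$.

```python
# Nightly sky-coverage arithmetic (input numbers from the text).
fov_deg = 5.375              # net field-of-view side
area = fov_deg ** 2          # ~28.9 deg^2 per exposure
nightly = 900 * area         # ~26,000 deg^2 of pointings per night
sky = 41253.0                # whole sky in deg^2
north = 0.85 * sky           # Dec range -45..+90 -> ~35,065 deg^2
sun_excl = 0.25 * sky        # cap within 60 deg of the Sun
accessible = north - sun_excl
print(round(area, 1), round(nightly), round(accessible))
```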
Our mission for NASA requires us to distinguish moving objects from
stationary transients, to provide a meaningful trajectory for moving
objects, and to have minimal false alarms. We therefore observe each
field four times in a given night and reduce our Dec coverage for each
unit so as to cover 1/4 of the visible sky. With two ATLAS units, our
four exposure coverage is 1/2 of the visible sky each night.
We therefore cover the entire accessible sky with a cadence of 2 days, with
four exposures (over a 1 hour interval) reaching
$o\sim 19.5$ in each individual frame ($\sim$20.5 for a 4 exposure co-add to
find stationary transients)
when the sky is dark and seeing good.
Figure 7 shows the ATLAS sky coverage during a recent set of
four nights.
There were many motivations to site both of the first two ATLAS units
in Hawaii on Haleakala and Mauna Loa, one of which is weather
diversity. Over the 122 days of Jun–Sep 2017, there were only 10
nights when both summits produced fewer than 450 successfully
differenced images, i.e. 92% of nights were at least half workable
for at least one summit. For each individual summit about 80% of
nights are at least half productive, illustrating a degree of
decorrelation of the weather. Similarly the fraction of nights
during these 122 days for which the median zeropoint was within 0.1
mag of clear was 69% on Haleakala and 89% on Mauna Loa, and 92% of
nights were clear on at least one summit.
We normally observe in $c$ band during dark time on Haleakala and $o$
band during bright time, and we have always observed in $o$ band on Mauna
Loa but will soon start using $t$. This provides color information for all the asteroids in the sky
as well as other transients and variables, without compromising our
sensitivity to find asteroids.
Currently Haleakala can achieve a 1.9 pixel FWHM in $o$ band and has a
median of about 2.1 pixels. This degrades to 2.4 and 2.5 pixels in
$c$ band, presumably from chromatic aberration, but the darker sky
compensates and the zeropoint is about the same. The best 5$\sigma$
limiting magnitude we achieve in
a 30 second exposure
is 19.8 (a zeropoint of 19.5 is common), and the
median over all lunations and sky conditions is 19.12. On Mauna Loa
the sensitivity is just about 0.3 magnitudes worse because of the
degraded PSF, and we can achieve 19.4, we often achieve 19.2, and the
median is 18.83.
The FWHM and limiting magnitude performance over a recent,
representative 4 month period (Jun-Sep 2017) are shown in
Figure 6. It is an important priority for us to
understand and improve the PSF in order to realize the full potential
of our system: we are making progress but work remains.
To disentangle seeing effects from defocus or mount shake we
have installed a 125 mm, f/20 refractor on MLO and we have started
collecting observations simultaneous with every science exposure.
Figure 7 shows the number of times each spot on the
sky has been observed by ATLAS to date. These include both
$c$ and $o$ filters, but only observations that are photometrically
calibratable. By tying to Pan-STARRS1 reference stars
(Magnier et al., 2016)
most
observations north of Dec $-30$ should have photometric errors at the
0.01 mag level or lower. South of Dec $-30$, APASS (Henden et al., 2012) photometry is good
to about 0.05 mag in systematics. We are in the process of gathering
our own photometry to fill in $g$, $r$, and $i$ reference stars down
to Dec $-50$. Using Gaia astrometry, there should be negligible
systematic error in
the positions, and the median RMS astrometric error for stars
brighter than $m=17$ is about 70 milliarcsec; for fainter objects the error increases
inversely with SNR. The SkyMapper First Data Release (Wolf et al., 2018)
provides another source of calibration for us to use below Dec=$-30$.
As of the end of January 2018 ATLAS has discovered 125 NEAs, 16 PHAs, and (somewhat
surprisingly) 9 comets, and we have submitted 5.5 million
observations of 128,000 distinct asteroids to the MPC.
This capability is a direct result of efforts to
improve sensitivity and characterization for trailed asteroids on the
ATLAS detectors. ATLAS participated in the October 2017 flyby exercise of asteroid 2012 TC${}_{4}$
and was able to detect TC${}_{4}$ during routine operations three days
before its close approach, despite the asteroid being very close to
the waning full moon. ATLAS observations were submitted normally,
posted on the MPC confirmation page, and even flagged as “very
close” by the JPL Scout hazard assessment service (https://cneos.jpl.nasa.gov/scout/) before it was confirmed to be TC${}_{4}$.
One of the more important features of ATLAS for the overall NASA NEOO
program is the ability to find very nearby objects, follow them, and
report them quickly. Figure 8 shows the
“candle flame” volume accessible to ATLAS for an asteroid of diameter
30 m and ATLAS discovery statistics as a function of Earth distance
at time of discovery. Depending on whether the near Earth population is normalized by
Earth impact rates estimated by
Brown et al. (2002) or Brown et al. (2013), the expectation for the number
of asteroids in this volume at any given instant is either 2 or 9, and
the refresh rate is approximately 5 days. Given full sensitivity to
streaked detections and allowing for moon and clouds, ATLAS should be
able to see about 10–50 NEAs of size 30 m per year, depending on
which of these normalizations is correct. The probability of entering
this volume is 80,000 times greater than the probability of striking
the Earth.
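The turnover arithmetic behind these numbers can be sketched as follows. This is a back-of-envelope estimate only: the instantaneous populations (2 or 9) and the 5-day refresh time are the figures quoted above, while the conversion to an annual rate is our own naive assumption.

```python
# Rough turnover arithmetic for the 30 m "candle flame" volume described above.
# The instantaneous population and refresh time come from the text; treating
# annual passes as (instantaneous population) x (365 / refresh time) is a
# deliberately crude assumption of ours, not a statement from the paper.

REFRESH_DAYS = 5.0  # approximate time for the volume's population to turn over
INSTANT_POP = {"Brown2002": 2, "Brown2013": 9}  # expected asteroids in volume

def annual_passes(n_instant, refresh_days=REFRESH_DAYS):
    """Naive number of distinct 30 m asteroids entering the volume per year."""
    return n_instant * 365.0 / refresh_days

rates = {name: annual_passes(n) for name, n in INSTANT_POP.items()}
# Roughly 150-650 passes per year; with losses to moon, clouds, and imperfect
# streak sensitivity, the text expects ATLAS to recover about 10-50 of these.
```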
NEA detection and discovery statistics shown in Figure 9 illustrate ATLAS’ successful optimization for its mission of detecting asteroids passing near the Earth. While other surveys with larger telescopes and greater sensitivity discover many more asteroids per year, ATLAS’ very rapid sky coverage gives it a unique ability to detect almost all the asteroids that brighten past its sensitivity limit in any given interval of time. During the period covered by Figure 9 (June-September 2017), ATLAS detected 75% of all NEAs determined by after-the-fact ephemerides to have brightened past magnitude 19.0 at declination north of -35 degrees and solar elongation greater than 90 degrees. The 287 NEAs used to calculate this statistic include challenging objects that remained above the 19th magnitude detection threshold for less than one day; became observable only near full Moon; or were moving as fast as 50 deg/day at the moment of their closest approach. Of the 75% of sufficiently bright NEAs that were detected by ATLAS, 19% (40 NEAs) were ATLAS discoveries, and most of these would not even be known to exist or to have passed Earth apart from ATLAS. The 25% of potentially detectable NEAs that ATLAS missed includes some that were discovered earlier and not recovered by any survey during their 2017 apparition. Thus, within a generous range of parameters that includes very difficult cases, NEAs passing Earth had only a 25% chance of escaping ATLAS’ net in late 2017.
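The quoted statistics can be cross-checked for internal consistency. The numbers (287 sufficiently bright NEAs, a 75% detection fraction, 40 ATLAS discoveries said to be 19% of those detected) are from the text; the arithmetic check is ours.

```python
# Consistency check of the detection statistics quoted above (figures from the
# text; the check itself is ours).

total_neas = 287                       # NEAs that brightened past m = 19.0
detected = 0.75 * total_neas           # ~215 detected by ATLAS
atlas_discoveries = 40                 # of which 40 were ATLAS discoveries
discovery_fraction = atlas_discoveries / detected   # should be ~19%
```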
This 25% statistic does not, of course, include an unknown number of NEAs that passed by Earth without being discovered by any survey. The total number of small yet dangerous NEAs that inhabit the Solar System is still uncertain to within a factor of a few – but ATLAS is poised to measure it much more accurately over the coming years.
In this context, we can consider ATLAS’ ability to detect asteroids in the size range of the object that likely produced the Tunguska explosion in 1908, which devastated more than 2000 km${}^{2}$ of Siberian forest. This object likely had an absolute magnitude of $H\sim 25$, which corresponds to a diameter of 60m for a 5% albedo. We like to use the term ‘Tunguska-level near miss’ to describe cases when asteroids at least this large are discovered during Earth encounters that bring them closer than 0.01 AU. Neglecting gravitational focusing, there should be one actual Tunguska-like impact for every 55,000 Tunguska-level near misses. During the June-September 2017 period covered by Figure 9, there were seven known Tunguska-level near misses, which may be naively translated into an impact rate of one per 3000 years. Of the seven near-misses, ATLAS discovered three and Catalina and Pan-STARRS each discovered two. Thus, although the larger telescopes of the other surveys enable them to discover far more NEAs than ATLAS overall (including larger, globally hazardous asteroids passing Earth at larger distances), in the specific case of regionally dangerous asteroids passing very close to the Earth, ATLAS is competitive with far more expensive surveys. We note also that six out of the seven Tunguska-level near misses were detected by only one of the major surveys. This lack of overlap suggests that only a minority of the actual Tunguska-level near misses are detected by any survey – a gap in our current planetary defence that can be most efficiently filled by building more ATLAS units. When the NEA population statistics are finally determined, the rate of Tunguska-like impacts will likely turn out to be considerably higher than the naive estimate quoted above.
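The naive impact-rate estimate above follows from simple arithmetic. Both input figures (seven near misses in the roughly four-month window, one impact per 55,000 near misses) are from the text; extrapolating a four-month window to a yearly rate is the same crude step the text itself flags as naive.

```python
# Naive Tunguska impact-rate arithmetic (inputs from the text; the
# extrapolation from a 4-month window to a steady yearly rate is crude).

MISSES_PER_IMPACT = 55000     # Tunguska-level near misses per actual impact
window_misses = 7             # known near misses, June-September 2017
window_years = 4.0 / 12.0     # length of that window in years

misses_per_year = window_misses / window_years          # ~21 per year
years_per_impact = MISSES_PER_IMPACT / misses_per_year  # ~2600 years,
# i.e. the "one per 3000 years" naive figure quoted in the text.
```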
Figure 10 shows lightcurves for a typical bright asteroid
and a typical bright variable star.
Figure 11 shows lightcurves for an asteroid that is
15 times smaller and a variable star that is 100 times fainter.
ATLAS has analyzed 2-year lightcurves
with at least 100 points for about 20,000 asteroids and 140 million
stars between $-30<\delta<+60$ and $m<18$, and has detected 5 million
stars that show variability (Heinze et al. in prep.).
These lightcurves will be available from MAST at the Space Telescope Science Institute.
Continued observation has since increased the number of stars to 240 million between $-45<\delta<+90$.
Our improved photometry enables light curve science with ATLAS data.
We have observations of $\sim$300,000 numbered asteroids, and as of 2017
(prior to the replacement of the Schmidt correctors) we were able to assign
a period to 20,000 of them using standard period searches followed
by machine-learning classification of the candidate lightcurves, implemented
in R. This dataset alone is as large as the
current published asteroid lightcurve database and includes
thousands of previously unknown asteroid lightcurves. Continued ATLAS
observations will only improve the number and quality of asteroid
lightcurves in the ATLAS dataset.
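The kind of "standard period search" referred to above can be illustrated with a minimal phase-dispersion-minimization sketch on synthetic data. This is our own toy implementation, not the ATLAS pipeline, and every number in it is invented for illustration.

```python
# Toy phase-dispersion-minimization (PDM) period search on a synthetic
# asteroid lightcurve. Illustrative only; not the ATLAS pipeline.
import math
import random

def pdm_statistic(times, mags, period, nbins=10):
    """Fold the lightcurve at a trial period and return the ratio of
    within-phase-bin variance to total variance (small at the true period)."""
    phases = [(t / period) % 1.0 for t in times]
    bins = [[] for _ in range(nbins)]
    for ph, m in zip(phases, mags):
        bins[min(int(ph * nbins), nbins - 1)].append(m)
    mean = sum(mags) / len(mags)
    total = sum((m - mean) ** 2 for m in mags)
    within = 0.0
    for b in bins:
        if len(b) > 1:
            bm = sum(b) / len(b)
            within += sum((m - bm) ** 2 for m in b)
    return within / total

def best_period(times, mags, trial_periods):
    return min(trial_periods, key=lambda p: pdm_statistic(times, mags, p))

# Synthetic lightcurve: a 0.3-day rotation period, sparse random sampling
# over a 30-day baseline, small Gaussian noise (all values invented).
random.seed(1)
times = sorted(random.uniform(0, 30) for _ in range(120))
mags = [17.0 + 0.4 * math.sin(2 * math.pi * t / 0.3) + random.gauss(0, 0.02)
        for t in times]
trials = [0.2 + 0.0005 * i for i in range(400)]   # 0.2-0.4 day grid
p = best_period(times, mags, trials)              # recovers ~0.3 days
```

In practice a Lomb-Scargle periodogram or similar is a more common first pass; PDM is shown here because it fits in a few lines and handles non-sinusoidal asteroid lightcurves.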
ATLAS has been a partner of the LIGO-Virgo Consortium, searching for optical
counterparts to gravitational-wave events
(Abbott et al., 2016, 2017c; Stalder et al., 2017).
The ATLAS survey is particularly
well-suited to this task because it covers such a large sky area that we
do not require targeted scheduling
to cover areas of interest when LIGO events occur. We can provide
the history of variability or transient activity in a LIGO-Virgo skymap
to reduce astrophysical false positives and reject spurious associations.
The one electromagnetic counterpart discovered so far, AT2017gfo
(associated with GW170817), was remarkably bright when discovered at 0.47 days
after the GW trigger. From the early $gri$ photometry of Arcavi et al. (2017b); Drout et al. (2017); Coulter et al. (2017), we estimate its peak
brightness within the first day
was $c\sim 17.4$ and $o\sim 17.3$, comfortably above the routine ATLAS survey
limits. We were unfortunate with the sky placement, since we had stopped observing
that RA range just 16 days before. Up to that point ATLAS had taken 601 individual images
of NGC4993, providing the simple but important statement that no variable or transient object
had ever been detected at that position, as well as a temporal constraint on
the 4-dimensional probability (three dimensions of space and one of time)
of coincidence of the optical transient and the gravitational wave (as discussed in Smartt et al., 2017).
With our large sky-area coverage, 2-day cadence, and rapid data processing, ATLAS will play
an important role in the detection of future electromagnetic counterparts in LIGO-Virgo’s O3 run
beginning in 2018, and will also provide an independent search for
kilonovae without a GW trigger, within a volume of about 60 Mpc.
Figure 12 shows the earliest ATLAS magnitudes of all
2476
candidate extragalactic transients we have identified since
21 Dec 2015, the overwhelming majority
of which are supernovae.
In this figure we do not distinguish
between those supernovae that were discovered (i.e. first report
to the TNS) by other surveys and those that were exclusively
discovered first by ATLAS. The reason for doing this is to highlight the ATLAS detection performance, rather than speed and competitiveness of reporting. At $m=19\pm 0.2$ we are roughly 50% complete assuming that supernovae are isotropically distributed in the local volume. We also show an example of the lightcurve of an ATLAS supernova (ATLAS17mgh, recorded as SN2017hjw on the TNS, also seen by Gaia as Gaia17crm) with
forced photometry (flux vs. epoch). This is a type Ia supernova in UGC03245
at $z=0.016161$, and a spectrum was
taken about 1 week before maximum light by the
Asiago Transient Classification Program
(Tomasella et al., 2014; Tomasella et al., 2017), enabled by the immediate, public discovery announcement by ATLAS. The secondary peak in the lightcurve is nicely visible in the $o$-band filter. We also show the
redshift distribution of the 654 spectroscopically classified
supernovae in Figure 12.
A full description of the ATLAS transient pipeline and early science results will be discussed in Smith et al. (in prep).
9 Conclusions
ATLAS represents another step in the relentless march toward increased time-domain coverage of the sky. With ATLAS, the complete northern sky is now observed every two days to fainter than $m=19$. The system routinely and automatically executes its mission of surveying for dangerous NEAs. The low cost and reproducibility of an ATLAS unit means that the system capability is relatively easy to extend.
ATLAS stands on the shoulders of many other broad advances—Gaia for exquisite astrometry, Pan-STARRS for photometric calibration and its MOPS pipeline, astrometry.net for blind astrometric reduction, to name several—all facilitated by inexpensive, high-performance computers. Although we are not outfitted to serve ATLAS data products to the community, ATLAS data are available to any institution able to receive our data, with no proprietary period.
Prospects for extending the ATLAS system to cover the remaining southern sky are excellent – there are plans awaiting funding to construct two ATLAS units in the southern hemisphere longitudinally opposite Hawai‘i. These additional units will allow the entire system to re-observe the entire sky every 24 hours (and every 12 hours for some of the sky). The weather diversity and continuous sky coverage of such a configuration will improve the ATLAS dataset in areas where it already excels:
•
Tightening the net for NEA discovery. While the current all-sky NEA surveys (Pan-STARRS 1, the Catalina Sky Survey, and ATLAS) continue to increase their discovery rates, many detectable NEAs still go undiscovered each lunation. A full-sky, nightly ATLAS system will reduce the number of undetected NEAs that sneak by the Earth, and the well-calibrated and characterized ATLAS system will help quantify global NEA survey effectiveness.
•
Denser coverage for transients. Detectable supernovae will be observed within 12-24 hours of explosion, increasing the likelihood of seeing shock breakout and obtaining followup spectra during the very interesting early stages of the explosion.
•
New variable stars in challenging classes. These include stars with very small amplitudes; variables currently difficult to measure due to frequency aliasing such as RR Lyrae stars and contact binaries with periods near 0.5 or 1.0 sidereal days; and extremely long-period Mira stars and other pulsating supergiants. ATLAS will double its catalog of 5 million candidate variable stars.
•
Immediate followup of any LIGO/Virgo transients. The flexible asteroid survey can rearrange its target list such that an ATLAS telescope can be pointed at a candidate LIGO/Virgo event within 60 seconds.
10 Acknowledgements
We acknowledge useful discussions with Gaspar Bakos, Klaus Hodapp, Robert Jedicke, Eileen Ryan, Tim Spahr and Richard Wainscoat. Support for this work was provided by NASA grant
NN12AR55G under the guidance of Lindley Johnson and Kelly Fast. We acknowledge support for transient science exploitation from the EU FP7/2007-2013 ERC Grant agreement n${}^{\rm o}$ [291222], STFC Grants ST/P000312/1, ST/N002520/1 and support from the QUB Kelvin HPC cluster,
and the QUB International Engagement Fund. We thank Mike Bessell for discussion on the
design of our filter set and sharing the Skymapper experience.
References
Abbott et al. (2016)
Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2016,
ApJ,
826, L13
Abbott et al. (2017a)
—. 2017a,
Physical
Review Letters, 118, 221101
Abbott et al. (2017b)
—. 2017b,
Physical
Review Letters, 119, 161101
Abbott et al. (2017c)
—. 2017c,
ApJ, 848,
L12
Alonso-García et al. (2012)
Alonso-García, J., Mateo, M., Sen, B., et al. 2012,
AJ, 143,
70
Arcavi et al. (2017a)
Arcavi, I., Hosseinzadeh, G., Brown, P. J., et al. 2017a,
ApJ, 837,
L2
Arcavi et al. (2017b)
Arcavi, I., Hosseinzadeh, G., Howell, D. A., et al. 2017b,
Nature, 551, 64
Bañados et al. (2014)
Bañados, E., Venemans, B. P., Morganson, E., et al. 2014,
AJ, 148,
14
Baltay et al. (2013)
Baltay, C., Rabinowitz, D., Hadjiyska, E., et al. 2013,
PASP, 125, 683
Becker (2015)
Becker, A. 2015, HOTPANTS: High Order Transform of PSF ANd Template
Subtraction, Astrophysics Source Code Library,
ascl:1504.004
Bellm (2016)
Bellm, E. C. 2016,
PASP,
128, 084501
Bessell et al. (2011)
Bessell, M., Bloxham, G., Schmidt, B., et al. 2011,
PASP, 123, 789
Bessell & Murphy (2012)
Bessell, M., & Murphy, S. 2012,
PASP, 124, 140
Brown et al. (2002)
Brown, P., Spalding, R. E., ReVelle, D. O., Tagliaferri, E., &
Worden, S. P. 2002,
Nature, 420, 294
Brown et al. (2013)
Brown, P. G., Assink, J. D., Astiz, L., et al. 2013,
Nature, 503, 238
Cenko et al. (2013)
Cenko, S. B., Kulkarni, S. R., Horesh, A., et al. 2013,
ApJ,
769, 130
Cenko et al. (2015)
Cenko, S. B., Urban, A. L., Perley, D. A., et al. 2015,
ApJ,
803, L24
Chambers et al. (2016)
Chambers, K. C., Magnier, E. A., Metcalfe, N., et al. 2016,
ArXiv e-prints,
arXiv:1612.05560
[astro-ph.IM]
Coulter et al. (2017)
Coulter, D. A., Foley, R. J., Kilpatrick, C. D., et al. 2017,
Science,
358, 3
Dark Energy Survey Collaboration et al. (2016)
Dark Energy Survey Collaboration, Abbott, T., Abdalla, F. B., et al.
2016, MNRAS,
460, 1270
Denneau et al. (2013)
Denneau, L., Jedicke, R., Grav, T., et al. 2013,
PASP, 125, 357
Dong et al. (2016)
Dong, S., Shappee, B. J., Prieto, J. L., et al. 2016,
Science, 351,
257
Drake et al. (2009)
Drake, A. J., Djorgovski, S. G., Mahabal, A., et al. 2009,
ApJ,
696, 870
Drout et al. (2017)
Drout, M. R., Piro, A. L., Shappee, B. J., et al. 2017,
Science, 358, 1570
Foley et al. (2018)
Foley, R. J., Scolnic, D., Rest, A., et al. 2018,
MNRAS, 475,
193
Fukugita et al. (1996)
Fukugita, M., Ichikawa, T., Gunn, J. E., et al. 1996,
AJ, 111, 1748
Gaia Collaboration et al. (2016)
Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016,
A&A,
595, A1
Gal-Yam et al. (2011)
Gal-Yam, A., Kasliwal, M. M., Arcavi, I., et al. 2011,
ApJ,
736, 159
Henden et al. (2012)
Henden, A. A., Levine, S. E., Terrell, D., Smith, T. C., & Welch, D.
2012, Journal of the American Association of Variable Star
Observers (JAAVSO), 40, 430
Holoien et al. (2017)
Holoien, T. W.-S., Stanek, K. Z., Kochanek, C. S., et al. 2017,
MNRAS, 464,
2672
Hsieh et al. (2012)
Hsieh, H. H., Yang, B., Haghighipour, N., et al. 2012,
ApJ,
748, L15
Keller et al. (2007)
Keller, S. C., Schmidt, B. P., Bessell, M. S., et al. 2007,
PASA, 24, 1
Kuncarayakti et al. (2017)
Kuncarayakti, H., Maeda, K., Ashall, C. J., et al. 2017,
ArXiv e-prints,
arXiv:1712.00027
[astro-ph.SR]
Laevens et al. (2015)
Laevens, B. P. M., Martin, N. F., Bernard, E. J., et al. 2015,
ApJ, 813,
44
Lang et al. (2010)
Lang, D., Hogg, D. W., Mierle, K., Blanton, M., & Roweis, S. 2010,
AJ,
139, 1782
Law et al. (2009)
Law, N. M., Kulkarni, S. R., Dekany, R. G., et al. 2009,
PASP, 121, 1395
Lipunov et al. (2017)
Lipunov, V. M., Gorbovskoy, E., Kornilov, V. G., et al. 2017,
ApJ, 850,
L1
Liu et al. (2013)
Liu, M. C., Magnier, E. A., Deacon, N. R., et al. 2013,
ApJ,
777, L20
Magnier et al. (2016)
Magnier, E. A., Schlafly, E. F., Finkbeiner, D. P., et al. 2016,
ArXiv e-prints,
arXiv:1612.05242
[astro-ph.IM]
Moriya et al. (2018)
Moriya, T. J., Tanaka, M., Yasuda, N., et al. 2018, ArXiv
e-prints, arXiv:1801.08240
[astro-ph.HE]
Nicholl et al. (2014)
Nicholl, M., Smartt, S. J., Jerkstrand, A., et al. 2014,
MNRAS, 444,
2096
Nicholl et al. (2015)
—. 2015,
ApJ,
807, L18
Nugent et al. (2011)
Nugent, P. E., Sullivan, M., Cenko, S. B., et al. 2011,
Nature, 480, 344
Quimby et al. (2011)
Quimby, R. M., Kulkarni, S. R., Kasliwal, M. M., et al. 2011,
Nature, 474, 487
Rest et al. (2014)
Rest, A., Scolnic, D., Foley, R. J., et al. 2014,
ApJ, 795,
44
Schechter et al. (1993)
Schechter, P. L., Mateo, M., & Saha, A. 1993,
PASP, 105, 1342
Scolnic et al. (2018)
Scolnic, D., Kessler, R., Brout, D., et al. 2018,
ApJ, 852,
L3
Smartt et al. (2015)
Smartt, S. J., Valenti, S., Fraser, M., et al. 2015,
A&A,
579, A40
Smartt et al. (2017)
Smartt, S. J., Chen, T.-W., Jerkstrand, A., et al. 2017,
Nature, 551, 75
Soares-Santos et al. (2017)
Soares-Santos, M., Holz, D. E., Annis, J., et al. 2017,
ApJ, 848,
L16
Sonnett et al. (2013)
Sonnett, S., Meech, K., Jedicke, R., et al. 2013,
PASP, 125, 456
Stalder et al. (2017)
Stalder, B., Tonry, J., Smartt, S. J., et al. 2017,
ApJ, 850,
149
Tanvir et al. (2017)
Tanvir, N. R., Levan, A. J., González-Fernández, C., et al.
2017, ApJ,
848, L27
Terebizh (2011)
Terebizh, V. Y. 2011,
Astronomische
Nachrichten, 332, 714
Terebizh (2016)
—. 2016,
AJ, 152,
121
Tomasella et al. (2014)
Tomasella, L., Benetti, S., Cappellaro, E., et al. 2014,
Astronomische
Nachrichten, 335, 841
Tomasella et al. (2017)
Tomasella, L., Benetti, S., & Cappellaro, E. 2017, The
Astronomer’s Telegram, 10863
Tonry (2011)
Tonry, J. L. 2011,
PASP, 123, 58
Tonry et al. (2012)
Tonry, J. L., Stubbs, C. W., Kilic, M., et al. 2012,
ApJ, 745,
42
Valenti et al. (2017)
Valenti, S., Sand, D. J., et al. 2017,
ApJ, 848,
L24
Walton et al. (2015)
Walton, N., Hodgkin, S., & van Leeuwen, F. 2015, IAU
General Assembly, 22, 2257872
Wolf et al. (2018)
Wolf, C., Onken, C. A., Luvaul, L. C., et al. 2018, ArXiv
e-prints, arXiv:1801.07834
[astro-ph.IM]
The Residue Determinant
Simon Scott
Abstract
The purpose of this paper is to present the
construction of a canonical determinant functional on elliptic
pseudodifferential operators ($\psi{\rm do}$s) associated to the
Guillemin-Wodzicki residue trace.
The resulting residue determinant functional is multiplicative, a
local invariant, and not defined by a regularization procedure.
The residue determinant is consequently a quite different object
to the zeta function determinant, which is non-local and
non-multiplicative. Indeed, the residue determinant does not arise
as the derivative of a trace on the complex power operators, and
does not depend on a choice of spectral cut. The identification of
a certain residue determinant with the index of an elliptic $\psi{\rm do}$
shows the residue determinant to be topologically significant.
This work arose following conversations with Steve
Rosenberg concerning higher Chern-Weil invariants; my thanks to
him for his support and interest. I am also indebted to Kate
Okikiolu for a helpful suggestion; the essential role of
[Ok1, Ok2] in the current work is evident. I am grateful to
Gerd Grubb for helpful comments and for pointing out a number of
technical improvements. My thanks to Sylvie Paycha for interesting
discussions and, in particular, for pointing out the fact in
Remark 1.11.
1. Definition and Properties of the Residue Determinant
Let $A$ be a $\psi{\rm do}$ of order $\alpha\in\mathbb{R}$ acting on the space of
smooth sections $C^{\infty}(E)$ of a rank $N$ vector bundle $E$ over a
compact boundaryless manifold $M$ of dimension $n$. This means
that in each local trivialization of $E$ over $U\times\mathbb{R}^{N}$,
with $U$ an open subset of $M$ identified with an open set in
$\mathbb{R}^{n}$, and for any smooth functions $\phi,\psi$ with ${\rm supp}(\phi),\ {\rm supp}(\psi)\subset U$, one has for $x\in U$ and
$f\in C^{\infty}_{c}(U,\mathbb{R}^{N})$
(1.1)
$$(\phi A\psi)f(x)=\frac{1}{(2\pi)^{n}}\int_{\mathbb{R}^{n}}\int_{U}e^{i(x-y)\cdot\xi}\ \textsf{a}(x,\xi)f(y)\ dy\,d\xi\ ,$$
where $\textsf{a}\in S^{\alpha}(U)$. We may write $A={\rm OP}(\textsf{a})$ on $U$.
Here, $S^{\alpha}(U)$ is the symbol space of functions
$\textsf{a}(x,\xi)\in C^{\infty}(U\times\mathbb{R}^{n},(\mathbb{R}^{N})^{*}\otimes\mathbb{R}^{N})$ with
values in $N\times N$ matrices such that for all multi-indices
$\mu,\nu\in\mathbb{N}^{n}$,
$\partial_{x}^{\mu}\partial_{\xi}^{\nu}\textsf{a}(x,\xi)$ is $O(\ (1+|\xi|)^{\alpha-|\nu|}\ )$, uniformly in $\xi$, and, on compact subsets of $U$,
uniformly in $x$. Write $S(U)$ for $\cup_{\alpha\in\mathbb{R}}S^{\alpha}(U)$ and
$S^{-\infty}(U)$ for $\cap_{\alpha\in\mathbb{R}}S^{\alpha}(U)$. Symbols $\textsf{a},\textsf{b}\in S(U)$ are said to be equivalent if $\textsf{a}-\textsf{b}\in S^{-\infty}(U)$,
written $\textsf{a}\sim\textsf{b}$.
A symbol $\textsf{a}\in S^{\alpha}(U)$ is classical (1-step polyhomogeneous)
of degree $\alpha$ if there is a sequence $\textsf{a}_{0},\textsf{a}_{1},\textsf{a}_{2},\ldots$ with $\textsf{a}_{j}\in C^{\infty}(U\times\mathbb{R}^{n}\backslash\{0\},(\mathbb{R}^{N})^{*}\otimes\mathbb{R}^{N})$ homogeneous in $\xi$ of degree $\alpha-j$
for $|\xi|\geq 1$ such that $\textsf{a}(x,\xi)\sim\sum_{j=0}^{\infty}\textsf{a}_{j}(x,\xi);$ thus, $\textsf{a}_{j}(x,t\xi)=t^{\alpha-j}\textsf{a}_{j}(x,\xi)$ for
$t\geq 1,|\xi|\geq 1$, and
$$\textsf{a}(x,\xi)-\sum_{j=0}^{J-1}\textsf{a}_{j}(x,\xi)\in S^{\alpha-J}(U)\ .$$
We may then write $\textsf{a}\sim(\textsf{a}_{0},\textsf{a}_{1},\ldots)$. A symbol
$\textsf{b}\in S(U)$ is called logarithmic of type $c\in\mathbb{R}$ if it
has the form
$$\textsf{b}(x,\xi)\sim c\,\log[\xi]\,I+\textsf{q}(x,\xi)\ ,$$
where $\textsf{q}\sim(\textsf{q}_{0},\textsf{q}_{1},\ldots)\in S^{0}(U)$ is a degree $0$
classical symbol, and $[\ ]:\mathbb{R}^{n}\rightarrow\mathbb{R}_{+}$ is a strictly
positive function with $[\xi]=|\xi|$ for $|\xi|\geq 1$.
A $\psi{\rm do}$ $A$ on $C^{\infty}(E)$ is classical of degree $\alpha$ (resp.
logarithmic of type $c$) if the local symbol of $A$ in each local
trivialization of $E$ is classical of degree $\alpha$ (resp.
logarithmic of type $c$). A logarithmic $\psi{\rm do}$ has order $\varepsilon$ for
any $\varepsilon>0$. We denote the space of classical $\psi{\rm do}$s of order $\alpha$
(resp. less than $\alpha$) by $\Psi^{\alpha}(E)$ (resp.
$\Psi^{<\alpha}(E)$), and the algebra of all integer order classical
$\psi{\rm do}$s by $\Psi^{\mathbb{Z}}(E)$.
The various homogeneous terms $\textsf{a}_{j}(x,\xi)$ (resp.
$\textsf{q}_{j}(x,\xi)$) in the local symbol of a classical (resp.
logarithmic) $\psi{\rm do}$ do not, in general, have a global invariant
meaning as bundle endomorphisms over $T^{*}M$. However, it was
observed by Guillemin [Gu] and Wodzicki [Wo2] for
classical $\psi{\rm do}$s, and extended to logarithmic operators by
Okikiolu [Ok2], that if $\sigma(A)_{-n}(x,\xi)$ is the term of
homogeneity $-n$ (so $\sigma(A)_{-n}(x,\xi)=a_{\alpha+n}(x,\xi)$ if $A$
is classical of degree $\alpha$, while if $A$ is logarithmic
$\sigma(A)_{-n}(x,\xi)=q_{n}(x,\xi)$), then
$$\frac{1}{(2\pi)^{n}}\left(\int_{|\xi|=1}\sigma(A)_{-n}(x,\xi)\ dS(\xi)\right)\,dx$$
with $dS(\xi)$ the sphere measure on $S^{n-1}$, defines a global
density on $M$. The number
(1.2)
$$\mbox{\rm res}(A)=\frac{1}{(2\pi)^{n}}\int_{M}\int_{|\xi|=1}\mbox{\rm tr\,}(\,\sigma(A)_{-n}(x,\xi)\,)\,dS(\xi)\,dx$$
is the Guillemin-Wodzicki residue trace of the $\psi{\rm do}$ $A$.
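For orientation, here is a small worked example of (1.2). It is our own illustration under the conventions just stated; the operator and its symbol are chosen purely for simplicity.

```latex
% Worked example (ours): M = S^1, so n = 1, with E the trivial line bundle.
% Let A be a psido of order -1 whose local symbol satisfies
% a(x,xi) = |xi|^{-1} for |xi| >= 1, so that sigma(A)_{-1}(x,xi) = |xi|^{-1}.
% The "sphere" |xi| = 1 in R consists of the two points xi = +1 and xi = -1,
% so the inner integral contributes 2, and (1.2) gives
\[
  \mbox{res}(A)
  \;=\; \frac{1}{2\pi}\int_{S^{1}}\int_{|\xi|=1}|\xi|^{-1}\,dS(\xi)\,dx
  \;=\; \frac{1}{2\pi}\cdot 2\pi\cdot 2
  \;=\; 2\,.
\]
```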
Evidently, if $A$ is a classical $\psi{\rm do}$ of order $\alpha$, then
(1.3)
$$\alpha\notin\mathbb{Z}\hskip 19.916929pt{\rm or}\hskip 19.916929pt\alpha<-n\ \ \ \Rightarrow\ \ \ \mbox{\rm res}(A)=0\ ,$$
and res drops down to a map on the quotient algebra
(1.4)
$$\mbox{\rm res}:\Psi^{\mathbb{Z}}(E)/\Psi^{-\infty}(E)\longrightarrow\mathbb{C}\ .$$
The following linearity properties of the residue trace are
immediate from its definition.
Lemma 1.1.
Let $A,B$ be $\psi{\rm do}$s. Suppose that $A$ and $B$ are both logarithmic; or that
$A$ is classical of order $\alpha\in\mathbb{Z}$ and $B$ is logarithmic; or that
$A$ is classical of order $\alpha\in\mathbb{R}$ and $B$ is classical of
order $\beta\in\mathbb{R}$ such that $\alpha-\beta\in\mathbb{Z}$.
(If $A$ and $B$ are classical
and $\alpha-\beta\notin\mathbb{Z}$, then $A+B$ is not a classical $\psi{\rm do}$:
the symbol is then not 1-step, since its expansion does not drop in
integer orders. Consequently, though $\Psi^{\mathbb{Z}}(E)$ is an
algebra, the space $\Psi(E)=\cup_{\alpha\in\mathbb{R}}\Psi^{\alpha}(E)$ is not,
but forms a semigroup with respect to the usual composition
product. This is relevant for linearity properties of traces on
classical $\psi{\rm do}$s, and is the reason why $\sigma(\log A)(x,\xi)=-(d/ds)|_{s=0}\sigma(A^{-s})(x,\xi)$, as a limit of differences of
classical symbols, is not quite classical.)
Then
(1.5)
$$\mbox{\rm res}(A+B)=\mbox{\rm res}(A)+\mbox{\rm res}(B)\ .$$
The characterizing tracial property of res is due to Guillemin,
Wodzicki and extended to include logarithmic operators by
Okikiolu:
Proposition 1.2.
[Wo2],[Gu],[Ok2].
Let $A,B$ be classical or logarithmic $\psi{\rm do}$s. Then
(1.6)
$$\mbox{\rm res}([A,B])=0\ .$$
It follows that (1.4) is a trace
functional. It is, moreover, projectively unique.
This has consequences for determinants.
Let $A$ be a classical $\psi{\rm do}$ of order $\alpha$ admitting a
principal angle $\theta$, meaning that the principal symbol
$\textsf{a}_{0}(A)(x,\xi)$, considered as a bundle endomorphism over
$T^{*}M\backslash 0$, has no eigenvalue on the spectral cut $R_{\theta}=\{re^{i\theta}\ |\ r\geq 0\}$; in particular, $A$ is elliptic.
Then, as recalled below, the functional calculus constructs the
log symbol $(\log_{\theta}\textsf{a})_{-n}(x,\xi)$ of homogeneity $-n$ and
$$\frac{1}{(2\pi)^{n}}\left(\int_{|\xi|=1}(\log_{\theta}\textsf{a})_{-n}(x,\xi)\ dS(\xi)\right)\,dx$$
defines a global density on $M$ [Ok2]. If $\alpha>0$, then
$\log_{\theta}A$ exists as a logarithmic $\psi{\rm do}$ of type $\alpha$ and
$(\log_{\theta}\textsf{a})_{-n}(x,\xi)=\sigma(\log_{\theta}A)_{-n}(x,\xi)$. We
can hence define canonically the following determinant functional
on classical $\psi{\rm do}$s.
Definition 1.3.
The residue determinant $\det_{\rm res}A$ of a classical $\psi{\rm do}$
$A$ with principal angle $\theta$ is the complex number
(1.7)
$$\log{\rm det}_{\rm res}A:=\mbox{\rm res}(\log A)\ ,$$
that is,
(1.8)
$$\log{\rm det}_{{\rm res}}A:=\frac{1}{(2\pi)^{n}}\int_{M}\int_{|\xi|=1}\mbox{\rm tr\,}\left((\log_{\theta}\textsf{a})_{-n}(x,\xi)\right)\ dS(\xi)\,dx\ .$$
Remark 1.4.
From Lemma 1.1, one has the linearity
(1.9)
$$\mbox{\rm res}(\log A+\log B)=\mbox{\rm res}(\log A)+\mbox{\rm res}(\log B)\ $$
for all classical $\psi{\rm do}$s $A$ and $B$, of any real orders
$\alpha,\beta\in\mathbb{R}$.
The properties of the residue determinant are as follows.
Proposition 1.5.
The residue determinant ${\rm det}_{\mbox{\rm res}}A$ is a local
invariant, depending only on the first $n+1$ homogeneous terms
$\textsf{a}_{0},\textsf{a}_{1},\ldots,\textsf{a}_{n}$ in the local symbol of $A$, and is
independent of the choice of principal angle $\theta$ used to define
$\log_{\theta}A$.
The subscript $\theta$ is therefore omitted in the notation
(1.7).
Corollary 1.6.
Let $A\in\Psi^{\alpha}(E),\ S\in\Psi^{\sigma}(E)$ with $\alpha>\sigma+n$.
Then
(1.10)
$${\rm det}_{{\rm res}}(A+S)={\rm det}_{{\rm res}}(A)\ .$$
The residue determinant is multiplicative:
Theorem 1.7.
Let $A,B$ be classical
$\psi{\rm do}$s of order $\alpha,\beta\in\mathbb{R}$, and suppose that $A,B,AB$ admit
principal angles. Then
(1.11)
$${\rm det}_{{\rm res}}(AB)={\rm det}_{{\rm res}}(A)\cdot{\rm det}_{{\rm res}}(B%
)\ .$$
The residue determinant does not, however, vanish on non-invertible
operators.
In fact, since an elliptic operator is invertible modulo smoothing
operators, the above properties imply that ${\rm det}_{\rm res}$ never vanishes.
If $A\in\Psi(E)$ has order $\alpha>0$ and is invertible, then
$\zeta(A,0)|^{{\rm mer}}$, the meromorphically continued spectral
zeta-function of $A$ evaluated at $s=0$ (see Sect.2), is known to
have the properties in Proposition 1.5; the relation with
$\det_{\rm res}A$ is the following.
Theorem 1.8.
Let $A$ be a classical $\psi{\rm do}$ of order $\alpha>0$ with principal
angle. If $A$ is invertible, then
(1.12)
$${\rm det}_{{\rm res}}(A)=e^{-\alpha\,\zeta(A,0)|^{{\rm mer}}}\ .$$
When $A$ is not invertible,
(1.13)
$$\hskip 34.143307pt{\rm det}_{{\rm res}}(A)=e^{-\alpha\,(\zeta(A,0)|^{{\rm mer}}\,+\ {\rm h}_{0}(A))}\ ,$$
where $\mbox{\rm h}_{0}(A)=\mbox{\rm Tr\,}(\Pi_{0}(A))$ and $\Pi_{0}(A)$ is a
projection onto the finite-dimensional generalized $0$-eigenspace
$E_{0}(A)=\{\tau\in C^{\infty}(E)\ |\ A^{N}\tau=0\ {\rm for\ some}\ N\in\mathbb{N}\}$ (the projection $\Pi_{0}(A)$ is defined in
(3.33)). If $\mbox{\rm Ker}(A^{2})=\mbox{\rm Ker}(A)$, in particular for $A$
self-adjoint, then $\mbox{\rm h}_{0}(A)=\dim\mbox{\rm Ker}(A)$.
The additional ${\rm h}_{0}(A)$ term on the right-hand side of
(1.13) thus corrects for the discontinuities of
$\zeta(A,0)|^{{\rm mer}}$ at non-invertible $A$. Notice, also, that
$\zeta(A,0)|^{{\rm mer}}$ is locally determined only if $\mbox{\rm h}_{0}(A)=0$, otherwise it is $\zeta(A,0)|^{{\rm mer}}\,+\ {\rm h}_{0}(A)$
which is local; this follows from Proposition 1.5 and
(1.12), and is seen directly in the proof.
Remark 1.9.
[1] The number $\zeta(A,0)|^{{\rm mer}}:=\mbox{\rm Tr\,}(I\cdot A^{-s})|^{{\rm mer}}_{s=0}$ defines a quasi- (or weighted-)
trace of the identity operator $I$ on $C^{\infty}(E)$ and hence
(1.12) associates $\det_{\rm res}$ with a notion
of regularized dimension, rather than regularized volume.
[2] The continuity of $\det_{\rm res}$ on families of
admissible operators in $\Psi(E)$ contrasts with the
$\zeta$-determinant, which is continuous only on families of
invertible operators.
Remark 1.10.
Subsequently two proofs of (1.13) of
independent interest have been given. In [Pa] (see also
[PaSc]) the identity is proved using a microlocal result of
[KoVi]. In [Gr2] a proof is obtained via the resolvent
trace $\mbox{\rm Tr\,}((A-\lambda I)^{-k})$.
Remark 1.11.
Since $\log_{\theta}A$ is ‘almost’ in the subalgebra
$\Psi^{\leq 0}(E)\cap\Psi^{\mathbb{Z}}(E)$ on which the residue trace is
not the unique trace [PaRo], ${\rm det}_{{\rm res}}$ is not
quite the unique multiplicative functional on elliptic $\psi{\rm do}$s.
Indeed ${\rm det}_{0}$ defined by $\log{\rm det}_{0}(A)=({\rm vol}_{\sigma}(S^{*}M))^{-1}\int_{S^{*}M}\log{\rm det}(a_{0}(x,\xi))\ d\sigma$, where $a_{0}(x,\xi)$ is the leading symbol of $A$ and $d\sigma$
is a volume form on the cosphere bundle $S^{*}M$, is
multiplicative.
Example. Let $\Sigma$ be a closed Riemann surface
and $E$ a complex vector bundle of degree ${\rm deg}(E)=\int_{\Sigma}c_{1}(E)$. Let $\overline{\partial}_{\Sigma}:C^{\infty}(E)\longrightarrow C^{\infty}(E\otimes T^{0,1}\Sigma)$ be an invertible
$\overline{\partial}$-operator; thus locally, $\overline{\partial}_{\Sigma}=(\partial_{\overline{z}}+a(z))d\overline{z}$. Then, from
(1.12) and [Gi], Thm. 4.1.6 (see also
[Bo], §1.5), we have
(1.14)
$${\rm det}_{{\rm res}}(\overline{\partial}_{\Sigma}^{\,*}\overline{\partial}_{\Sigma})=\exp\left(-\,{\rm deg}(E)-\frac{\chi(\Sigma)\,{\rm rk}(E)}{3}\right)\ ,$$
with $\chi(\Sigma)$ the Euler number, and ${\rm rk}(E)$ the rank
of $E$.
This is independent of $\overline{\partial}_{\Sigma}^{\,*}\overline{\partial}_{\Sigma}$, but in general the residue determinant of a
second-order differential operator over a surface will depend on
the complete symbol. In the case of an invertible operator
$\Delta_{g}$ of Laplace-type one has for $t\in\mathbb{R}$
(1.15)
$${\rm det}_{{\rm res}}(\Delta_{g}+tI)=\exp\left(\frac{A_{g}(\Sigma)\,{\rm rk}(E)}{2\pi}\ t-\frac{1}{2\pi}\int_{\Sigma}\mbox{\rm tr\,}(\varepsilon_{x}(\Delta_{g}))\,dx-\frac{\chi(\Sigma)\,{\rm rk}(E)}{3}\right)\ ,$$
where $A_{g}(\Sigma)$ is the surface area of $\Sigma$ with
respect to a Riemannian metric $g$, and $\varepsilon(\Delta_{g})\in C^{\infty}(\mbox{\rm End}(E))$ and the
connection $\nabla$ are the unique pair such that $\Delta_{g}=-\sum_{i,j}g^{ij}(x)\nabla_{i}\nabla_{j}+\varepsilon_{x}(\Delta_{g})$. On the
other hand, if $(M,g)$ is a 4-manifold and $\Delta_{g}$ the
Laplace-Beltrami operator then
(1.16)
$${\rm det}_{{\rm res}}(\Delta_{g}+tI)=\exp\left(-\frac{\mbox{\rm vol}_{g}(M)}{1%
6\pi^{2}}\ t^{2}+\frac{1}{24\pi^{2}}\int_{M}\kappa_{M}\,dx\ t\right)\,{\rm det%
}_{{\rm res}}(\Delta_{g})\ ,$$
with $\mbox{\rm vol}_{g}(M)$ the Riemannian volume, $\kappa_{M}$ the scalar
curvature. More generally, if $(M,g)$ is a Riemannian manifold of
dimension $2m$, $E$ a vector bundle, and $\Delta_{g}$ an operator on
$C^{\infty}(E)$ of Laplace-type, meaning $\Delta_{g}$ is a second-order
differential operator with scalar principal symbol
(1.17)
$$\sigma(\Delta_{g})_{\,2}(x,\xi)=|\xi|_{g(x)}^{2}\,I:=\sum_{i,j=1}^{2m}g^{ij}(x)\xi_{i}\xi_{j}\,I\ ,$$
where $\xi=(\xi_{1},\ldots,\xi_{2m})\in\mathbb{R}^{2m}$, then for $\varepsilon>0$ the heat operator $e^{-\varepsilon\Delta_{g}}$ is a smoothing operator
with heat trace expansion as $\varepsilon\rightarrow 0+$
(1.18)
$$\mbox{\rm Tr\,}(e^{-\varepsilon\Delta_{g}})=\frac{c_{-m}(\Delta_{g})}{%
\varepsilon^{m}}+\ldots+\frac{c_{-1}(\Delta_{g})}{\varepsilon}+c_{0}(\Delta_{g%
})+O(\varepsilon)$$
with locally determined coefficients $c_{j}(\Delta_{g})$;
specifically, (1.17) easily implies
(1.19)
$$c_{-m}(\Delta_{g})=\frac{\mbox{\rm vol}_{g}(M)\,{\rm rk}(E)}{(4\pi)^{m}}\ ,$$
while, by standard transition formulae (see for example [GrSe],
[Gr]), (1.13) becomes
(1.20)
$${\rm det}_{{\rm res}}(\Delta_{g})=e^{-2\,c_{0}(\Delta_{g})}\ .$$
(If $M$ is odd-dimensional ${\rm det}_{{\rm res}}(\Delta_{g})=1$.)
On the other hand, for $t\in\mathbb{R}$ one has
$$\mbox{\rm Tr\,}(e^{-\varepsilon(\Delta_{g}+tI)})=e^{-\varepsilon\,t}\,\mbox{%
\rm Tr\,}(e^{-\varepsilon\Delta_{g}})\ ,$$
and so from
(1.18)
(1.21)
$$c_{0}(\Delta_{g}+tI)=\frac{(-1)^{m}}{m!}\,c_{-m}(\Delta_{g})\ t^{m}+\frac{(-1)%
^{m-1}}{(m-1)!}\,c_{-m+1}(\Delta_{g})\ t^{m-1}+\ldots+c_{0}(\Delta_{g})\ .$$
Thus (1.19) and (1.20) yield
$$\displaystyle{\rm det}_{{\rm res}}(\Delta_{g}+tI)$$
$$\displaystyle=$$
$$\displaystyle\exp\left(\frac{2(-1)^{m+1}}{(4\pi)^{m}\,m!}\mbox{\rm vol}_{g}(M)%
\,{\rm rk}(E)\ t^{m}\ \ +\right.$$
$$\displaystyle \left.2\sum_{j=1}^{m-1}\frac{(-1)^{m-j+1}}{(m-j)!}\,%
c_{-m+j}(\Delta_{g})\ t^{m-j}\right)\,{\rm det}_{{\rm res}}(\Delta_{g})\ ,$$
the specific formulas (1.15),
(1.16) now following from [Gi]
Thm(4.1.6) and [McSi].
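The $\varepsilon^{0}$-coefficient extraction behind (1.21) can be replayed symbolically, e.g. for $m=2$ (a sympy sketch; the coefficient symbols $c2,c1,c0$ standing for $c_{-2},c_{-1},c_{0}$ are ours):

```python
import sympy as sp

eps, t = sp.symbols('epsilon t')
c2, c1, c0 = sp.symbols('c2 c1 c0')   # stand-ins for c_{-2}, c_{-1}, c_0

# (1.18) for a Laplace-type operator on a 4-manifold (m = 2)
heat = c2/eps**2 + c1/eps + c0

# Tr(e^{-eps(Delta+t)}) = e^{-eps t} Tr(e^{-eps Delta}); the eps^0
# coefficient of the shifted expansion gives c_0(Delta + t)
shifted = sp.expand(sp.series(sp.exp(-eps*t), eps, 0, 3).removeO() * heat)
c0_shifted = shifted.coeff(eps, 0)
print(c0_shifted)  # equals c0 - c1*t + c2*t**2/2, matching (1.21) for m = 2
```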
Remark 1.12.
Let $\omega\in C^{\infty}(M)$ and let $g_{\omega}=e^{2\omega}g$. A
consequence of (1.20) is that the Generalized
Polyakov Formula of [BrOr] for the relative zeta-determinant
of $\Delta_{g_{\omega}}$ may be stated as
$$\log\frac{{\rm det}_{\zeta}(\Delta_{g_{\omega}})}{{\rm det}_{\zeta}(\Delta_{g_%
{0}})}=\int_{0}^{1}\log{\rm det}_{{\rm res}}(\Delta_{g_{\epsilon\omega}})\ d%
\epsilon\ .$$
Turning matters around, (1.13) can be used
to deduce properties of $\zeta(A,0)|^{{\rm mer}}$.
Since the proof of Theorem 1.8 demonstrates that the
equality in (1.13) holds logarithmically
(1.22)
$${\rm res}(\,\log A\,)=-\alpha\,\left(\zeta(A,0)|^{{\rm mer}}+\mbox{\rm h}_{0}(%
A)\right)\ ,$$
(in fact, it holds pointwise on $M$ as an equality between
densities, as shown in the proof of Theorem 1.8), then, combined
with (1.11), which also holds logarithmically (but not as a
pointwise identity of densities), we have the following.
Corollary 1.13.
Let $A$, $B$ be classical $\psi{\rm do}$s admitting
principal angles and having positive orders $\alpha,\beta\in\mathbb{R}_{+}$. Then the function $Z(\sigma(A)):=-\alpha\,\left(\zeta(A,0)|^{{\rm mer}}+\mbox{\rm h}_{0}(A)\right)$ is additive:
(1.23)
$$Z(\sigma(AB))=Z(\sigma(A))+Z(\sigma(B))\ .$$
Remark 1.14.
[1] The additivity
(1.23) is referred
to in [Ka] and in the introduction to [KoVi] (whose
notation is respected here) as a property known to Wodzicki,
though no proof appeared. On the other hand, Wodzicki defined in
[Wo3] a determinant for order zero $\psi{\rm do}$s path connected to
the identity which, for such operators, coincides with ${\rm det}_{{\rm res}}$.
[2] In [PaSc] it is shown that
(1.22) extends as an exact splitting formula
into local and global components of the generalized zeta function
$\zeta_{\theta}(A,Q,s)|^{{\rm mer}}=\mbox{\rm Tr\,}(AQ^{-s})|^{{\rm mer}}$. On the
other hand, Grubb [Gr2] has recently shown using resolvent
methods that (1.22) extends to certain classes
of boundary value problems.
Conversely, (1.13),
(1.23) combine to prove (1.11)
when $\alpha,\beta\in\mathbb{R}_{+}$.
From the Atiyah-Bott-Seeley $\zeta$-function index formula (see for
example [Sh]), a further immediate consequence of
(1.12) is the following ‘super’ residue
determinant formula for the index ${\rm Index}(\textsf{D})=\dim\mbox{\rm Ker}(\textsf{D})-\dim\mbox{\rm Ker}(%
\textsf{D}^{*})$ of a general elliptic operator
$\textsf{D}:C^{\infty}(E^{+})\longrightarrow C^{\infty}(E^{-})$ of order $d>0$.
Corollary 1.15.
(1.24)
$$\frac{{\rm det}_{{\rm res}}(\textsf{D}^{*}\textsf{D}+I)}{{\rm det}_{{\rm res}}%
(\textsf{D}\textsf{D}^{*}+I)}=e^{-2d\,{\rm Index}(\textsf{D})}\ .$$
Equivalently,
(1.25)
$${\rm Index}(\textsf{D})=\frac{1}{2d}\left({\rm res}\log(\textsf{D}\textsf{D}^{%
*}+I)-{\rm res}\log(\textsf{D}^{*}\textsf{D}+I)\right)\ .$$
Remark 1.16.
In contrast, there is no formula for the index using the
residue trace of a classical (non-logarithmic) $\psi{\rm do}$. The
identities (1.13) and (1.25)
lead to an alternative elementary proof of the local Atiyah-Singer
index theorem [ScZa].
For order zero operators we have:
Theorem 1.17.
If $A$ has order $\alpha=0$ and has the form $A=I+\textsf{Q}$ with $\textsf{Q}$
a classical $\psi{\rm do}$ of negative integer order $k<0$, then
(1.26)
$${\rm det}_{{\rm res}}(I+\textsf{Q})=\prod_{j=1}^{\left[\frac{n}{|k|}\right]}e^%
{\frac{(-1)^{j+1}}{j}\,\mbox{\rm res}(\textsf{Q}^{j})}\ .$$
Remark 1.18.
[1] For order zero operators the zeta-function at zero
in (1.13) is thus replaced by a relative zeta
function at zero (see Theorem 1.20, and comments
around (2.17)).
[2] Generalizing the Fredholm determinant ($p=1$),
there is a well-known notion of the $p$-determinant $\det_{p}(I+Q)$
for $Q$ in the $p^{th}$ Schatten ideal $L_{p}$, which is not
multiplicative for $p>1$; but for $Q\in L_{1}$ the following formula
holds [Si], in analogy to (1.26):
$$\frac{{\rm det}_{p}(I+Q)}{{\rm det}_{1}(I+Q)}=\prod_{j=1}^{p-1}e^{\frac{(-1)^{%
j}}{j}\mbox{\rm Tr\,}(Q^{j})}\ .$$
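For $p=2$ the quoted relation reduces to ${\rm det}_{2}(I+Q)={\rm det}_{1}(I+Q)\,e^{-\mbox{\rm Tr\,}(Q)}$, with ${\rm det}_{2}(I+Q)=\det((I+Q)e^{-Q})$ the Hilbert-Carleman determinant; in finite dimensions this is immediate from $\det(e^{-Q})=e^{-{\rm Tr}(Q)}$, and can be checked numerically (a sketch; the test matrix is an arbitrary choice of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
Q = 0.2 * rng.standard_normal((5, 5))   # arbitrary small test matrix
I5 = np.eye(5)

# exp(-Q) via its power series
E = np.zeros((5, 5)); P = np.eye(5)
for j in range(30):
    E += P
    P = P @ (-Q) / (j + 1)

det1 = np.linalg.det(I5 + Q)        # det_1 = Fredholm determinant
det2 = np.linalg.det((I5 + Q) @ E)  # det_2(I+Q) = det((I+Q) e^{-Q})
print(det2 / det1, np.exp(-np.trace(Q)))  # the two agree
```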
Theorem 1.17 evidently implies:
Corollary 1.19.
(1.27)
$${\rm det}_{{\rm res}}(I+\textsf{Q})=1\hskip 28.452756pt{\rm if}\hskip 11.38110%
2pt\mbox{\rm ord}(\textsf{Q})<-n\ ,\ \mbox{\rm ord}(\textsf{Q})\in\mathbb{Z}\ .$$
Hence ${\rm det}_{\mbox{\rm res}}$ drops down to a multiplicative function
on the ‘determinant Lie group’ $\widetilde{G}=\Psi^{*}(E)/(I+\Psi^{-\infty}(E)),$ of Kontsevich-Vishik [KoVi], where
$\Psi^{*}(E)$ is the group of invertible elliptic $\psi{\rm do}$s.
((1.27) also follows from Corollary 1.6.)
Example. Let $\Delta_{g}$ be an invertible
generalized Laplacian on a closed Riemannian manifold $(M,g)$ of
dimension $2m$. Thus $\Delta_{g}$ has principal symbol as in
(1.17), and with that notation, we therefore have
$\sigma(\Delta_{g}^{-m})_{-2m}(x,\xi)=|\xi|_{g(x)}^{-2m}I$. Whence,
with $S^{2m-1}$ the Euclidean $(2m-1)$-sphere,
(1.28)
$$\displaystyle\mbox{\rm res}(\Delta_{g}^{-m})$$
$$\displaystyle=$$
$$\displaystyle\int_{M}\frac{1}{(2\pi)^{2m}}\left(\int_{|\xi|_{g(x)}=1}\mbox{\rm
tr%
\,}(I)\ dS_{g(x)}(\xi)\right)dx$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{(2\pi)^{2m}}\int_{M}\sqrt{{\rm det}(g(x))}\left(\int_{S^%
{2m-1}}\ dS(\xi)\right)dx\ {\rm rk}(E)$$
$$\displaystyle=$$
$$\displaystyle\frac{\mbox{\rm vol}(S^{2m-1})}{(2\pi)^{2m}}\,\mbox{\rm vol}_{g}(%
M)\,{\rm rk}(E)\ .$$
Since $\mbox{\rm vol}(S^{2m-1})=2\pi^{m}/(m-1)!$ we therefore have from
(1.26)
(1.29)
$${\rm det}_{{\rm res}}(I+\Delta^{-m})=\exp\left(\frac{\mbox{\rm vol}_{g}(M)\,{%
\rm rk}(E)}{2^{2m-1}(m-1)!\,\pi^{m}}\right)\ .$$
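The passage from (1.28) to (1.29) is a short arithmetic check, which can be verified exactly (a sympy sketch, ours; it also confirms that half the coefficient in (1.28) equals $1/((4\pi)^{m}(m-1)!)$, consistent with $c_{-m}=\mbox{\rm vol}_{g}(M)\,{\rm rk}(E)/(4\pi)^{m}$):

```python
import sympy as sp

# Coefficient of vol_g(M) rk(E) in (1.28): vol(S^{2m-1})/(2 pi)^{2m},
# with vol(S^{2m-1}) = 2 pi^m / (m-1)!
def coeff_128(m):
    volS = 2 * sp.pi**m / sp.factorial(m - 1)
    return volS / (2*sp.pi)**(2*m)

# Coefficient claimed in (1.29): 1/(2^{2m-1} (m-1)! pi^m)
def coeff_129(m):
    return sp.Integer(1) / (2**(2*m - 1) * sp.factorial(m - 1) * sp.pi**m)

checks = [sp.simplify(coeff_128(m) - coeff_129(m)) for m in range(1, 6)]
# Half of (1.28) should be 1/((4 pi)^m (m-1)!), the k = m case of (1.35)
halves = [sp.simplify(coeff_128(m)/2 - 1/((4*sp.pi)**m * sp.factorial(m - 1)))
          for m in range(1, 6)]
print(checks, halves)  # all entries zero
```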
Example. It is instructive to check how the
multiplicativity property of ${\rm det}_{\mbox{\rm res}}$ works for this
class of $\psi{\rm do}$s. As a simple case, consider $\psi{\rm do}$s $\textsf{Q}_{1},\textsf{Q}_{2}$ for which $3\,\mbox{\rm ord}(\textsf{Q}_{i})<-n\leq 2\,\mbox{\rm ord}(\textsf{Q}_{i})<0$ –
for example, operators of order $-2$ on a $4$-manifold. Then according
to Theorem 1.17
(1.30)
$$\log{\rm det}_{\mbox{\rm res}}(I+\textsf{Q}_{i})=\sum_{p=1}^{2}\frac{(-1)^{p+1%
}}{p}\ \mbox{\rm res}(\textsf{Q}_{i}^{p})=\mbox{\rm res}(\textsf{Q}_{i})-\frac%
{1}{2}\,\mbox{\rm res}(\textsf{Q}_{i}^{2})\ .$$
So
(1.31)
$$\log{\rm det}_{\mbox{\rm res}}(I+\textsf{Q}_{1})+\log{\rm det}_{\mbox{\rm res}%
}(I+\textsf{Q}_{2})$$
$$=\mbox{\rm res}(\textsf{Q}_{1})+\mbox{\rm res}(\textsf{Q}_{2})-\frac{1}{2}\,%
\mbox{\rm res}(\textsf{Q}_{1}^{2})-\frac{1}{2}\,\mbox{\rm res}(\textsf{Q}_{2}^%
{2})\ .$$
On the other hand,
$(I+\textsf{Q}_{1})(I+\textsf{Q}_{2})=I+(\textsf{Q}_{1}+\textsf{Q}_{2}+\textsf{%
Q}_{1}\textsf{Q}_{2})$ and
(1.30) applies with $\textsf{Q}_{i}$ replaced by $\textsf{Q}_{1}+\textsf{Q}_{2}+\textsf{Q}_{1}\textsf{Q}_{2}$ so that
(1.32)
$$\log{\rm det}_{\mbox{\rm res}}((I+\textsf{Q}_{1})(I+\textsf{Q}_{2}))$$
$$=\mbox{\rm res}(\textsf{Q}_{1}+\textsf{Q}_{2}+\textsf{Q}_{1}\textsf{Q}_{2})-%
\frac{1}{2}\,\mbox{\rm res}\left((\textsf{Q}_{1}+\textsf{Q}_{2}+\textsf{Q}_{1}%
\textsf{Q}_{2})^{2}\right)\ .$$
Since the $\textsf{Q}_{i}$ have integer order
$$\mbox{\rm res}(\textsf{Q}_{1}+\textsf{Q}_{2}+\textsf{Q}_{1}\textsf{Q}_{2})=%
\mbox{\rm res}(\textsf{Q}_{1})+\mbox{\rm res}(\textsf{Q}_{2})+\mbox{\rm res}(%
\textsf{Q}_{1}\textsf{Q}_{2})$$
while by the tracial property (1.6) of
res
$$\displaystyle\mbox{\rm res}\left((\textsf{Q}_{1}+\textsf{Q}_{2}+\textsf{Q}_{1}%
\textsf{Q}_{2})^{2}\right)$$
$$\displaystyle=$$
$$\displaystyle\mbox{\rm res}(\textsf{Q}_{1}^{2}+\textsf{Q}_{2}^{2}+\textsf{Q}_{%
1}\textsf{Q}_{2}+\textsf{Q}_{2}\textsf{Q}_{1}+\ {\rm terms\ of\ order<-n})$$
$$\displaystyle=$$
$$\displaystyle\mbox{\rm res}(\textsf{Q}_{1}^{2})+\mbox{\rm res}(\textsf{Q}_{2}^%
{2})+2\,\mbox{\rm res}(\textsf{Q}_{1}\textsf{Q}_{2})\ .$$
Hence
(1.31) and (1.32) are equal.
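The bookkeeping in this example — linearity and cyclicity of res, and its vanishing on words of three or more $\textsf{Q}$-factors (order $<-n$) — can be automated. Below is a sympy sketch (the modelling of res by formal cyclic words is ours, not the paper's) confirming that (1.31) and (1.32) agree:

```python
import sympy as sp

Q1, Q2 = sp.symbols('Q1 Q2', commutative=False)

def res(expr):
    # Model of the residue trace: linear, cyclic, and vanishing on
    # words of three or more Q-factors (order < -n).
    expr = sp.expand(expr)
    terms = expr.args if isinstance(expr, sp.Add) else (expr,)
    total = sp.S(0)
    for term in terms:
        coeff, word = [], []
        for f in sp.Mul.make_args(term):
            base, power = f.as_base_exp()
            if base.is_commutative:
                coeff.append(f)
            else:
                word.extend([base] * int(power))
        if len(word) >= 3:
            continue                      # res vanishes on order < -n
        rots = [tuple(word[i:] + word[:i]) for i in range(len(word))] or [()]
        canon = min(rots, key=str)        # cyclicity of res
        total += sp.Mul(*coeff) * sp.Symbol('res(%s)' % '*'.join(map(str, canon)))
    return total

lhs = res(Q1) - res(Q1**2)/2 + res(Q2) - res(Q2**2)/2          # (1.31)
S = Q1 + Q2 + Q1*Q2
rhs = res(S) - res(S*S)/2                                      # (1.32)
print(sp.expand(lhs - rhs))  # 0
```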
An application of properties in the previous theorems yields the
following formula.
Theorem 1.20.
Let $A\in\Psi^{\alpha}(E),B_{0}\in\Psi^{\beta_{0}}(E),\ldots,B_{d}\in\Psi^{\beta_{d%
}}(E)$ be classical $\psi{\rm do}$s. Assume $\alpha=\mbox{\rm ord}(A)>0$
and $\alpha-\beta_{j}\in\mathbb{N}\backslash\{0\}$ (strictly positive
integer). For $t\in\mathbb{R}$ the polynomial $B[t]=B_{0}+B_{1}\,t+\ldots+B_{d}\,t^{d}\in\Psi^{\beta}(E)$ is a classical $\psi{\rm do}$ of order
$\beta=\max\{\beta_{j}\ |\ j=0,\ldots,d\}$. Suppose $A$ admits a
principal angle. Then $\zeta(A+B[t],0)|^{{\rm mer}}+\mbox{\rm h}_{0}(A+B[t])$ is a
polynomial of degree $d\,\left[n/(\alpha-\beta)\right]$ in $t$ with
local coefficients. Specifically, let $Q\in\Psi^{-\alpha}(E)$ be a
two-sided parametrix for $A$, then
$$\displaystyle\zeta(A+B[t],0)|^{{\rm mer}}+\mbox{\rm h}_{0}(A+B[t])=\zeta(A,0)|%
^{{\rm mer}}+\mbox{\rm h}_{0}(A)$$
(1.33)
$$\displaystyle-\frac{1}{\alpha}\sum_{k=1}^{\left[\frac{n}{\alpha-\beta}\right]}%
\sum_{I_{k}}\frac{(-1)^{k}}{k}\ \mbox{\rm res}(QB_{i_{1}}QB_{i_{2}}\ldots QB_{%
i_{k}})\ t^{|I_{k}|}\ ,$$
where the inner sum is over $k$-tuples $I_{k}=(i_{1},\ldots,i_{k})$ of $k$ (not necessarily distinct) elements $i_{j}\in\{1,\ldots,d\}$, and $|I_{k}|=i_{1}+\ldots+i_{k}$. If $A$ is
invertible then $Q$ in (1.33) can be replaced
by $A^{-1}$. In particular, if $A$ is invertible
(1.34)
$$\zeta(A+tI,0)|^{{\rm mer}}=\zeta(A,0)|^{{\rm mer}}-\frac{1}{\alpha}\sum_{k=1}^%
{n}\frac{(-1)^{k}}{k}\ \mbox{\rm res}(A^{-k})\ t^{k}\ .$$
Note that (1.33) implies that if
$B\in\Psi^{\beta}(E)$ and $\alpha-\beta-n\in\mathbb{N}\backslash\{0\}$, then
$$\zeta(A+B,0)|^{{\rm mer}}+\mbox{\rm h}_{0}(A+B)=\zeta(A,0)|^{{\rm mer}}+\mbox{%
\rm h}_{0}(A)\ .$$
Equation (1.34) is
equivalent to equation (6.5) of [Wo1].
Example. If we apply (1.34) to a
Laplace-type differential operator $\Delta_{g}$ on a
$2m$-dimensional manifold, then comparing with (1.21)
we infer for $k\geq 1$
(1.35)
$$\frac{1}{2}\ \mbox{\rm res}(\Delta_{g}^{-k})=\frac{c_{-k}(\Delta_{g})}{(k-1)!}\ .$$
Specifically, for $k=m$
$$\frac{1}{2}\ \mbox{\rm res}(\Delta_{g}^{-m})=\frac{\mbox{\rm vol}_{g}(M)\,{\rm
rk%
}(E)}{(4\pi)^{m}\,(m-1)!}$$
which coincides with (1.28), while, for example, for
the Laplace-Beltrami operator on a $4$-manifold with scalar
curvature $\kappa_{M}$
$$\ \mbox{\rm res}(\Delta_{g}^{-2})=\frac{1}{24\pi^{2}}\int_{M}\kappa_{M}\ dx\ .$$
With a little extra work it is easy to see, using these methods,
that for $A\in\Psi^{\alpha}(E)$ with $\alpha\neq 0$, not necessarily positive but
admitting a principal angle,
$$\frac{1}{\alpha}\ \mbox{\rm res}(A^{-z_{0}})={\rm Res}^{\mathbb{C}}_{s=z_{0}}\,\zeta(A,s)|^{{\rm mer}}\ ,$$
where the right side is the usual complex residue of a
meromorphic function. This formula is well-known [Wo2] and
(1.35) is a particular case.
2. Preliminaries
To explain the nature of the residue determinant we
begin with the sub-algebra $\Psi^{0}(E)$ of classical $\psi{\rm do}$s of
order zero. For $A\in\Psi^{0}(E)$ with spectrum disjoint from the
ray $R_{\theta}=\{re^{i\theta}\ |\ r\geq 0\}$ the complex powers for
any $s\in\mathbb{C}$ are defined by
(2.1)
$$A_{\theta}^{-s}=\frac{i}{2\pi}\int_{\Gamma_{\theta}}\lambda_{\theta}^{-s}(A-%
\lambda I)^{-1}\ d\lambda\ \ \in\ \ \Psi^{0}(E)\ ,$$
where $\Gamma_{\theta}$ is a bounded ‘keyhole’ contour enclosing
the spectrum of $A$ but not enclosing any part of $R_{\theta}$ (or
the origin), while the logarithm of $A$ is defined by
(2.2)
$$\log_{\theta}A=\frac{i}{2\pi}\int_{\Gamma_{\theta}}\log_{\theta}\lambda\ (A-%
\lambda I)^{-1}\ d\lambda\ \ \in\ \ \Psi^{0}(E)\ .$$
Here $\lambda_{\theta}^{-s},\ \log_{\theta}\lambda:=-(d/ds)\lambda_{\theta}^{-s}|_{s%
=0}$
are the branches defined by $\lambda_{\theta}^{-s}=|\lambda|^{-s}e^{-is\arg(\lambda)}$ with $\theta-2\pi\leq\arg(\lambda)<\theta$. These
formulas are valid in any Banach algebra.
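Indeed, (2.2) can be tested numerically in the simplest Banach algebra, the $2\times 2$ matrices; the following numpy sketch (matrix and contour are our choices) compares the contour integral, with $\Gamma_{\theta}$ a counterclockwise circle enclosing the spectrum but not the origin, against the logarithm computed by diagonalization:

```python
import numpy as np

# A 2x2 matrix with spectrum {2, 3}, well away from the cut R_pi
A = np.array([[2.0, 1.0], [0.0, 3.0]])

# Reference log via diagonalization
w, V = np.linalg.eig(A)
logA_ref = V @ np.diag(np.log(w)) @ np.linalg.inv(V)

# (2.2): log A = (i/2pi) \oint log(lambda) (A - lambda I)^{-1} dlambda,
# over a counterclockwise circle enclosing spec(A) but not 0 or R_pi
N = 4000
tt = np.linspace(0.0, 2*np.pi, N, endpoint=False)
center, radius = 2.5, 1.5
lam = center + radius*np.exp(1j*tt)
dlam = 1j*radius*np.exp(1j*tt)*(2*np.pi/N)
I2 = np.eye(2)
logA = (1j/(2*np.pi)) * sum(
    np.log(l) * np.linalg.inv(A - l*I2) * d for l, d in zip(lam, dlam))
print(np.max(np.abs(logA - logA_ref)))  # prints the (tiny) quadrature error
```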
Sitting inside $\Psi^{0}(E)$ is the ideal $\Psi^{<-n}(E)$. An
operator $Q\in\Psi^{<-n}(E)$ is trace class with an absolutely
integrable matrix-valued kernel
(2.3)
$$K(Q,x,y)=\frac{1}{(2\pi)^{n}}\int_{\mathbb{R}^{n}}e^{i(x-y).\xi}\ \sigma(Q)(x,%
\xi)\,d\xi$$
over $M\times M$, smooth away from the diagonal (as for all
$\psi{\rm do}$s), and continuous along $\{(x,x)\ |\ x\in M\}$.
Consequently, $Q$ has $L^{2}$ trace
(2.4)
$$\mbox{\rm Tr\,}(Q)=\int_{M}\mbox{\rm tr\,}(\,K(Q,x,x)\,)\,dx\ .$$
This is a non-local spectral invariant; from
(2.3) the trace (2.4) depends on the
complete symbol $\sigma(Q)$, not just on finitely many homogeneous
terms.
Taken together, (2.2) and (2.4) define a
determinant on the ring $I+\Psi^{<-n}(E)$ of $\psi{\rm do}$s which
differ from the identity by an element of $\Psi^{<-n}(E)$
(2.5)
$${\rm det}_{{\rm{\footnotesize Tr}}}:I+\Psi^{<-n}(E)\longrightarrow\mathbb{C},%
\hskip 14.226378pt\log{\rm det}_{{\rm{\footnotesize Tr}}}(A):=\mbox{\rm Tr\,}(%
\log_{\theta}(A))\ .$$
Here, one uses
(2.6)
$$A\in I+\Psi^{<-n}(E)\ \ \ \Rightarrow\ \ \ \log_{\theta}A\in\Psi^{<-n}(E)\ .$$
${\rm det}_{{\rm{\footnotesize Tr}}}$ extends the usual determinant in
finite-dimensions to the group $(I+\Psi^{<-n}(E))_{{\rm inv}}$
of invertible operators in $I+\Psi^{<-n}(E)$:
Lemma 2.1.
On the group $(I+\Psi^{<-n}(E))_{{\rm inv}}$ the determinant
${\rm det}_{{\rm{\footnotesize Tr}}}$ is the Fredholm determinant; it is
independent of the choice of $\theta$ and is multiplicative. ${\rm det}_{{\rm{\footnotesize Tr}}}$ is a non-local spectral invariant, and
has no (multiplicative) extension from $(I+\Psi^{<-n}(E))_{{\rm inv}}$ to the group $\Psi^{*}(E)$ of invertible elliptic $\psi{\rm do}$s.
Proof.
The first sentence follows, for example, from the comments around
(2.17), below, and [Sc] Prop(2.21). Alternatively,
one can prove these properties directly; independence of $\theta$
by the method of proof of Proposition 1.5,
multiplicativity by the Campbell-Hausdorff theorem (cf. proof of
Theorem 1.7). Since the $L^{2}$ trace Tr is non-local and has
no extension to a trace functional on $\Psi(E)$, the second
statement easily follows.
∎
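The first statement of Lemma 2.1 can be seen concretely in finite dimensions, where the Fredholm determinant is the ordinary one: with $\log(I+Q)$ computed from its power series (valid for $\|Q\|<1$), $\exp(\mbox{\rm Tr\,}\log(I+Q))$ reproduces $\det(I+Q)$ (a numpy sketch with an arbitrary small $Q$ of our choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 0.1 * rng.standard_normal((4, 4))   # small norm, model trace-class perturbation

# log(I + Q) via its convergent power series
logIQ = np.zeros((4, 4))
P = np.eye(4)
for j in range(1, 60):
    P = P @ Q
    logIQ += (-1)**(j + 1) * P / j

# det_Tr(I+Q) := exp(Tr log(I+Q)) equals the ordinary determinant
print(np.exp(np.trace(logIQ)), np.linalg.det(np.eye(4) + Q))  # the two agree
```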
Since $\log_{\theta}A$ vanishes on $\mbox{\rm Ker}(A)$, the functional
${\rm det}_{{\rm{\footnotesize Tr}}}$ is discontinuous at non-invertible
elements of the ring $I+\Psi^{<-n}(E)$, and therefore differs
from the Fredholm determinant which is continuous and vanishes on
such elements. On the other hand, from (1.3) and
(2.6), or from (1.10):
Lemma 2.2.
The residue determinant is trivial on $I+\Psi^{<-n}(E)$
(2.7)
$$A\in I+\Psi^{<-n}(E)\ \ \ \Rightarrow\ \ \ {\rm det}_{{\rm\mbox{\rm res}}}A=1\ .$$
The relation of the residue determinant (1.7) to the
classical determinant (2.5) is thus structurally the
same as that of the residue trace to the classical $L^{2}$ operator
trace.
Because adding a smoothing operator to $A\in\Psi(E)$ does not
affect ${\rm det}_{{\rm res}}A$ the invertibility of the operator
is not detected in either (1.26) or
(2.7).
Next, let $A$ be a classical $\psi{\rm do}$ of order $\alpha>0$ with principal angle $\theta$. We assume further for simplicity
that $\theta$ is an Agmon angle, meaning that $A-\lambda I$ is
invertible in a neighborhood of $R_{\theta}$; in particular, $A$ is
elliptic and invertible. Since the $L^{2}$ operator norm of $(A-\lambda I)^{-1}$ is $O(|\lambda|^{-1})$ as $|\lambda|\rightarrow\infty$ one can define for ${\rm Re}(s)>0$
(2.8)
$$A_{\theta}^{-s}=\frac{i}{2\pi}\int_{{\mathcal{C}}_{\theta}}\lambda_{\theta}^{-%
s}(A-\lambda I)^{-1}\ d\lambda\ ,\hskip 28.452756pt{\rm Re}(s)>0\ ,$$
where now ${\mathcal{C}}_{\theta}$ is a contour travelling in along
$R_{\theta}$ from infinity to a small circle around the origin,
clockwise around the circle, and then back out along $R_{\theta}$ to
infinity.
In Seeley’s 1967 paper [Se] on the complex powers
$A^{-s}_{\theta}$ of an elliptic operator, two quite different
extensions of (2.8) to the whole complex plane were
explained.
In the first of these, for $s\in\mathbb{C}$ one chooses $k\in\mathbb{N}$ with
${\rm Re}(s)+k>0$ and sets
(2.9)
$$A_{\theta}^{-s}:=A^{k}A_{\theta}^{-s-k}\ \ \in\ \ \Psi^{-\alpha s}(E)\ ,$$
where $A_{\theta}^{-s-k}$ on the right side is defined by
(2.8); this is independent of $k$, and defines a group of
elliptic classical $\psi{\rm do}$s with
(2.10)
$$A_{\theta}^{0}=I,\ A_{\theta}^{m}=A^{m},\ m\in\mathbb{Z}\ .$$
An important consequence is the construction of the logarithm of
$A$, a logarithmic $\psi{\rm do}$ of type $\alpha$ [Ok1, KoVi], defined by
(2.11)
$$\log_{\theta}A:=-\left.\frac{d}{ds}\right|_{s=0}A_{\theta}^{-s}\ ,$$
and satisfying $\frac{d}{ds}A_{\theta}^{-s}=-\log_{\theta}A\cdot A_{\theta}^{-s}$. Since $\log_{\theta}A$ and $A_{\theta}^{-s}$ for ${\rm Re}(s)<0$
are unbounded (note (2.10)), and hence far from trace
class, this extension has been of little direct interest. It is,
though, precisely the object of interest here, and, with the residue
trace at hand, all that is needed to define the residue
determinant.
Traditionally, however, one does something quite different and
attempts to extend the determinant (2.5) from $I+\Psi^{<-n}(E)$ to $\Psi(E)$ by extending the $L^{2}$ trace from
$\Psi^{<-n}(E)$ to $\Psi(E)$ using spectral zeta-functions. The
latter leads necessarily to a quasi-trace on $\Psi(E)$ (i.e. a
functional which is non-tracial), and hence to a quasi-determinant
(i.e. a functional which is non-multiplicative), the
zeta-determinant. This is achieved through the meromorphic
extension of the complex powers $A_{\theta}^{-s}$ from ${\rm Re}(s)>n/\alpha$,
the second of Seeley’s extensions, and has been the object of
enormous interest. It is, though, quite irrelevant to the
construction of the residue determinant.
Nevertheless, spectral zeta functions have a part to play in what
follows and we need to recall something of these constructions. We
include a $\psi{\rm do}$ coefficient $B$, which for the moment we assume
to be classical of order $\beta$, and we assume $A$ to have order
$\alpha>0$ and to be invertible. The kernel $K(B\,A_{\theta}^{-s},x,y)$ of
$BA_{\theta}^{-s}$, which is continuous in $(x,y)$ and holomorphic in
$s$ for ${\rm Re}(s)>(n+\beta)/\alpha$, has along the diagonal a meromorphic
extension $K(B\,A_{\theta}^{-s},x,x)|^{{\rm mer}}$ to all $s\in\mathbb{C}$
with at most simple poles located on the real axis at the points
indicated in (2.12). Consequently the zeta function
$\zeta_{\theta}(B,A,s):=\mbox{\rm Tr\,}(B\,A_{\theta}^{-s})$ is holomorphic for
${\rm Re}(s)>(n+\beta)/\alpha$ and extends to a meromorphic function
$$\zeta_{\theta}(B,A,s)|^{{\rm mer}}:=\int_{M}\mbox{\rm tr\,}(K(B\,A_{\theta}^{-%
s},x,x)|^{{\rm mer}})\ dx\ \ $$
on $\mathbb{C}$. It has pole structure ([Se], [GrSe], [Gr])
(2.12)
$$\Gamma(s)\,\zeta_{\theta}(B,A,s)|^{{\rm mer}}\sim\sum_{j=0}^{\infty}\frac{c_{j%
}}{s+\frac{j-n-\beta}{\alpha}}+\sum_{k=0}^{\infty}\left(\frac{c^{\prime}_{k}}{%
(s+k)^{2}}+\frac{c^{\prime\prime}_{k}}{s+k}\right)\ ,$$
where the terms $c_{j},c^{\prime}_{k}$ are local, depending on just
finitely many homogeneous terms of the symbols of both $A$ and
$B$, while the $c^{\prime\prime}_{k}$ are global, depending on the
complete symbols. Around zero (2.12) implies a
Laurent expansion
$$\,\zeta_{\theta}(B,A,s)|^{{\rm mer}}=\frac{c^{\prime}_{0}}{s}+(c_{n+\beta}+c^{\prime\prime}_{0})+O(s)\ ,\hskip 28.452756pt{\rm as}\ \ s\longrightarrow 0\ ,$$
the simple pole
determining the zeta-function formula for the residue trace of the
classical $\psi{\rm do}$ $B$
(2.13)
$$\mbox{\rm res}(B)=\alpha\,{\rm Res}^{\mathbb{C}}_{s=0}\,(\,\zeta_{\theta}(B,A,%
s)|^{{\rm mer}}\,)=\alpha\,c^{\prime}_{0}\ ,$$
while the constant term defines the ‘$A$-weighted zeta-trace’
(2.14)
$$\mbox{\rm Tr\,}_{\zeta}^{A}(B)=c_{n+\beta}+c^{\prime\prime}_{0}\ .$$
If $\beta<-n$ one has $\mbox{\rm res}(B)=0$ and so $\zeta_{\theta}(B,A,s)|^{{\rm mer}}$ is then holomorphic near $s=0$, while $\mbox{\rm Tr\,}_{\zeta}^{A}(B)=\zeta_{\theta}(B,A,0)|^{{\rm mer}}$ and is equal to the $L^{2}$ trace
$\mbox{\rm Tr\,}(B)=c^{\prime\prime}_{0}$. If $\beta\in\mathbb{R}\backslash\mathbb{Z}$ then
once more $\mbox{\rm res}(B)=0$ and $\mbox{\rm Tr\,}_{\zeta}^{A}(B)=\zeta_{\theta}(B,A,0)|^{{\rm mer}}$ is independent of $A$ [KoVi],
and again equal to the global term $c^{\prime\prime}_{0}$ [Gr],
and vanishes on commutators for which the sum of operator orders
is non-integral [KoVi]. These properties also hold on the
subalgebra of odd-class $\psi{\rm do}$s [KoVi]. $\mbox{\rm Tr\,}_{\zeta}^{A}$ is a
quasi-trace in so far as it is not tracial on the full algebra
$\Psi(E)$, but does extend the $L^{2}$ trace to a trace on the above
subclasses.
If $B$ is a logarithmic $\psi{\rm do}$ then $\zeta_{\theta}(B,A,s)|^{{\rm mer}}$ again extends meromorphically to $\mathbb{C}$ but now with $\beta=0$ in (2.12) and with possible additional poles
$c_{j,1}(s+\frac{j-n}{\alpha})^{-2}$ [Gr]. When $B=\log_{\theta}A$ there is no pole at $s=0$ and a (quasi-) determinant,
the zeta determinant ${\rm det}_{\zeta,\theta}(A)$, may be defined by
taking the zeta trace of $\log_{\theta}A$. (For $L$ a logarithmic $\psi{\rm do}$, such
as $\log_{\theta}A$, the dependence on the choice of regularizing
operator, $\mbox{\rm Tr\,}_{\zeta}^{A_{1}}(L)-\mbox{\rm Tr\,}_{\zeta}^{A_{2}}(L)=\mbox{\rm Tr\,}(L(A_{1}^{-s}-A_{2}^{-s}))|_{s=0}^{{\rm mer}}$, is computed in
[KoVi], [Ok2], [PaSc] as a residue trace, leading to
the multiplicative anomaly formula for the zeta-determinant.)
(2.15)
$$\displaystyle\log{\rm det}_{\zeta,\theta}(A)$$
$$\displaystyle:=$$
$$\displaystyle\mbox{\rm Tr\,}_{\zeta}^{A}(\log A)$$
$$\displaystyle=$$
$$\displaystyle\zeta_{\theta}(\log A,A,0)|^{{\rm mer}}\ .$$
If $\alpha>0$ and one sets $\zeta(A,s)|^{{\rm mer}}:=\zeta(I,A,s)|^{{\rm mer}}=\mbox{\rm Tr\,}(A^{-s})|^{{%
\rm mer}}$, then
equivalently
(2.16)
$$\log{\rm det}_{\zeta,\theta}(A)=-\frac{d}{ds}\zeta_{\theta}(A,s)|_{s=0}^{{\rm
mer%
}}\ ,\hskip 28.452756pt\alpha>0\ .$$
If $\alpha=0$ then from (2.6) and the comment
following (2.14), the determinant (2.15)
is defined on $I+\Psi^{<-n}(E)$ and equal there to the Fredholm
determinant. On the other hand, $\zeta_{\theta}(A,s)$, and hence the
right side of (2.16), is then not defined for any $s$.
Nevertheless, the relative zeta function
(2.17)
$$\zeta_{\theta}^{{\rm rel}}[A,B](s):=\mbox{\rm Tr\,}(A_{\theta}^{-s}-B_{\theta}%
^{-s})$$
is defined on $I+\Psi^{<-n}(E)$, and one has there
$\mbox{\rm Tr\,}\log_{\theta}(A)=-\frac{d}{ds}\zeta_{\theta}^{{\rm rel}}[A,I](s%
)|_{s=0}$. For elliptic $A_{0},A_{1}$ of non-zero order
$\zeta_{\theta}^{{\rm rel}}[A_{0},A_{1}](s)=\zeta_{\theta}(A_{0},s)-\zeta_{%
\theta}(A_{1},s)$; relative determinants are studied in [Mu],
[Sc].
The zeta determinant ${\rm det}_{\zeta,\theta}(A)$ is non-local,
depends on the spectral cut $R_{\theta}$ [Wo1] and is not
multiplicative [Ok2, KoVi].
The residue determinant has a quite different nature.
First, the construction of the residue determinant takes
place completely independently of spectral zeta-functions, and it
is in this distinction that its non-triviality lies.
Specifically, using the spectral $\zeta$-function formula
(2.13) with $B=\log_{\theta}A$ to define a putative
residue determinant, rather than the symbolic definition
(1.7), leads to a trivial determinant (equal to $1$);
the triviality is equivalent to $\zeta(A,s)|^{{\rm mer}}$ being
holomorphic at zero, and thus (2.15) being defined.
Further, since the residue zeta function $\zeta_{\mbox{\rm res}}(A,s):=\mbox{\rm res}(A^{-s})$ is highly discontinuous – for, from
(1.3), $\mbox{\rm res}(A^{-s})$ can be non-zero only for $s\cdot\mbox{\rm ord}(A)\in\mathbb{Z}\cap(\,-\infty\ ,\,n\,]$ – there is no residue
analogue of (2.16).
The residue trace (1.7) of $\log_{\theta}A$ thus does not
arise as a complex residue, but it does generalize the integral
formula which for classical $\psi{\rm do}$s coincides with the complex
residue (2.13). However, if $A,B$ have residue
determinants and are of the same order then the difference
$\mbox{\rm res}(\log A)-\mbox{\rm res}(\log B)$ is given as a complex residue.
The residue determinant ${\rm det}_{\mbox{\rm res}}(A)$ is local,
independent of the spectral cut $R_{\theta}$ and multiplicative.
3. Proofs
Let $\textsf{a}\sim(\textsf{a}_{0},\textsf{a}_{1},\ldots)\in S^{\alpha}(U),\ %
\textsf{b}\sim(\textsf{b}_{0},\textsf{b}_{1},\ldots)\in S^{\beta}(U)$ be local classical (1-step
polyhomogeneous) symbols of respective degrees $\alpha,\beta\in\mathbb{R}$.
Then a product structure is defined on $S(U)$
$$\textsf{a}\circ\textsf{b}\sim((\textsf{a}\circ\textsf{b})_{0},(\textsf{a}\circ%
\textsf{b})_{1},\ldots\ )\in S^{\alpha+\beta}(U)\ ,$$
with
(3.1)
$$(\textsf{a}\circ\textsf{b})_{j}=\sum_{|\mu|+k+l=j}\frac{1}{\mu!}\partial_{\xi}%
^{\mu}(\textsf{a}_{k})\,D_{x}^{\mu}(\textsf{b}_{l})\ ,$$
and multiplicative identity element
$$\textsf{I}=(I,0,0,\ldots)\ .$$
At the $\psi{\rm do}$ level this represents the operator product modulo
smoothing operators; thus if in local coordinates $\sigma(A),\sigma(B)$
are symbols equivalent to $\textsf{a},\textsf{b}$ respectively, then $\sigma(AB)$
is equivalent to $\textsf{a}\circ\textsf{b}$. This is all that is needed to
compute local quantities such as the residue determinant.
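In one variable the product (3.1) is straightforward to implement; the following sympy sketch (with the convention $\sigma(D_{x})=\xi$, $D_{x}=-i\partial_{x}$) checks it against the exact operator product for $A=B=xD_{x}$, for which $AB=x^{2}D_{x}^{2}-ixD_{x}$:

```python
import sympy as sp

x, xi = sp.symbols('x xi')

def compose(a, b, j):
    """j-th term of the symbol product (3.1) in one variable:
    (a o b)_j = sum_{mu+k+l=j} (1/mu!) d_xi^mu(a_k) D_x^mu(b_l),
    with D_x = -i d/dx and a, b lists of homogeneous terms."""
    total = sp.S(0)
    for mu in range(j + 1):
        for k in range(j - mu + 1):
            l = j - mu - k
            if k < len(a) and l < len(b):
                total += (sp.diff(a[k], xi, mu) * (-sp.I)**mu
                          * sp.diff(b[l], x, mu) / sp.factorial(mu))
    return sp.expand(total)

# Sanity check: with A = B = x*D_x (symbol x*xi), AB = x^2 D_x^2 - i x D_x,
# so sigma(AB) has homogeneous terms x^2 xi^2 and -i x xi.
a = [x*xi]      # single homogeneous term of degree 1
b = [x*xi]
print(compose(a, b, 0), compose(a, b, 1))  # x**2*xi**2 and -I*x*xi
```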
To do so for a classical elliptic $\psi{\rm do}$ $A$ of order $\alpha$,
standard methods [Gi, Se, Sh] construct a parametrix for $A-\lambda I$ by inverting locally at the symbolic level. We consider a
finite open cover of $M$ by coordinate patches $U_{i},i\in I=\{1,\ldots,m\}$, over each of which $E$ is trivialized, with a
subordinate partition of unity $\phi_{i}\in C^{\infty}(U_{i})$ such that for
$i,j\in I$ there is an $l_{ij}\in I$ with $\mbox{\rm supp}(\phi_{i})\cup\mbox{\rm supp}(\phi_{j})\subset U:=U_{l_{ij}}$. Then
(3.2)
$$A=\sum_{i,j}\phi_{i}A\phi_{j}$$
with each summand a $\psi{\rm do}$ acting in a single coordinate patch,
and it will be enough to work with a symbol $\textsf{a}=\sigma(\phi_{i}A\phi_{j})\sim(\textsf{a}_{0},\textsf{a}_{1},\ldots)%
\in S^{\alpha}(U)$ of each such
local operator. A local resolvent symbol
$$\textsf{r}(\lambda)\sim(\textsf{r}(\lambda)_{0},\textsf{r}(\lambda)_{1},\ldots%
\ )\in S^{-\alpha}(U_{\lambda})$$
is defined over $U_{\lambda}=\{x\in U\ |\ \lambda\notin{\rm spec}(\textsf{a}_{0}(x,\xi))\ \,\forall\,\xi\in\mathbb{R}^{n}\}$ by the inductive formulae
(3.3)
$$\textsf{r}(\lambda)_{0}(x,\xi)=(\textsf{a}_{0}(x,\xi)-\lambda\textsf{I})^{-1}\ ,$$
(3.4)
$$\textsf{r}(\lambda)_{j}(x,\xi)=-\,\textsf{r}(\lambda)_{0}(x,\xi)\sum_{%
\stackrel{{\scriptstyle|\mu|+k+l=j}}{{l<j}}}\frac{1}{\mu!}\partial_{\xi}^{\mu}%
\textsf{a}_{k}(x,\xi)\,D_{x}^{\mu}\textsf{r}(\lambda)_{l}(x,\xi)\ .$$
For $|\xi|\geq 1,t\geq 1$, each $\textsf{r}(\lambda)_{j}(x,\xi)$ has the
quasi-homogeneity property
$$\textsf{r}(t^{\alpha}\lambda)_{j}\,(x,t\,\xi)=t^{-\alpha-j}\,\textsf{r}(%
\lambda)_{j}(x,\xi)\ ,$$
and by construction
(3.5)
$$\textsf{r}(\lambda)\circ(\textsf{a}-\lambda\textsf{I})\sim(\textsf{a}-\lambda%
\textsf{I})\circ\textsf{r}(\lambda)\sim\textsf{I}\ .$$
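The recursion (3.3)-(3.4) and the parametrix property (3.5) can be exercised directly in the scalar one-variable case; in the sympy sketch below the model symbol $\textsf{a}\sim(\xi^{2},\sin(x)\xi,\cos(x))$ is an arbitrary choice of ours for the test, and $D_{x}=-i\partial_{x}$:

```python
import sympy as sp

x, xi, lam = sp.symbols('x xi lam')

# Arbitrary scalar order-2 model symbol a ~ (a0, a1, a2) in one variable
a = [xi**2, sp.sin(x)*xi, sp.cos(x)]

def compose_term(p, q, j):
    # j-th term of the symbol product (3.1) in one variable, D_x = -i d/dx
    total = sp.S(0)
    for mu in range(j + 1):
        for k in range(j - mu + 1):
            l = j - mu - k
            if k < len(p) and l < len(q):
                total += (sp.diff(p[k], xi, mu) * (-sp.I)**mu
                          * sp.diff(q[l], x, mu) / sp.factorial(mu))
    return total

# Resolvent symbol recursion (3.3)-(3.4), scalar case
r = [1 / (a[0] - lam)]
for j in range(1, 3):
    term = sp.S(0)
    for mu in range(j + 1):
        for k in range(j - mu + 1):
            l = j - mu - k
            if l < j and k < len(a):
                term += (sp.diff(a[k], xi, mu) * (-sp.I)**mu
                         * sp.diff(r[l], x, mu) / sp.factorial(mu))
    r.append(sp.simplify(-r[0] * term))

# Check (3.5): ((a - lam I) o r)_j = delta_{j,0} for j = 0, 1, 2
am = [a[0] - lam, a[1], a[2]]
print([sp.simplify(compose_term(am, r, j)) for j in range(3)])  # [1, 0, 0]
```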
Consequently, if $A$ has principal angle $\theta$, then $\log_{\theta}A$
and $A_{\theta}^{-s}$ are represented by symbols
(3.6)
$$\log_{\theta}\textsf{a}\sim(\textsf{q}_{\theta,0},\textsf{q}_{\theta,1},\ldots%
\ )\ ,\hskip 28.452756pt\textsf{a}_{\theta}^{-s}\sim(\textsf{a}_{\theta,0}^{-s%
},\textsf{a}_{\theta,1}^{-s},\ldots\ )\ ,$$
where, with $\Gamma_{\theta}=\Gamma_{\theta}(x,\xi)$ a closed
contour as in (2.1) chosen to enclose the spectrum of
$\textsf{a}_{0}(x,\xi)$ avoiding the spectral cut and the origin,
(3.7)
$$\textsf{q}_{\theta,j}(x,\xi)=\frac{i}{2\pi}\int_{\Gamma_{\theta}}\log_{\theta}%
\lambda\ \textsf{r}(\lambda)_{j}(x,\xi)\ d\lambda\ ,$$
(3.8)
$$\textsf{a}_{\theta,j}^{-s}(x,\xi)=\frac{i}{2\pi}\int_{\Gamma_{\theta}}\lambda_%
{\theta}^{-s}\ \textsf{r}(\lambda)_{j}(x,\xi)\ d\lambda\ .$$
We may also write
$$(\log\textsf{a})_{\theta,j}(x,\xi):=\textsf{q}_{\theta,j}(x,\xi)\ .$$
For $|\xi|\geq 1$, it follows for $j\geq 0$ that
$\textsf{a}_{\theta,j}^{-s}\in S^{-\alpha s-j}(U)$ is homogeneous of degree $-\alpha s-j$, and for $j\geq 1$ that $\textsf{q}_{\theta,j}\in S^{-j}(U)$ is
homogeneous of degree $-j$. Since
$$\textsf{q}_{\theta,j}(x,\xi)=-\partial_{s}|_{s=0}\textsf{a}_{\theta,j}^{-s}(x,\xi)$$
and
$$\textsf{a}_{\theta,j}^{-s}(x,\xi)|_{s=0}=\delta_{j,0}\textsf{I}\ ,$$
then
(3.9)
$$\textsf{a}_{\theta}^{0}=\textsf{I}=(I,0,0,\ldots)$$
and
(3.10)
$$\textsf{q}_{\theta,0}(x,\xi)=\alpha\log[\xi]I+\textsf{p}_{\theta,0}(x,\xi)$$
with $\textsf{p}_{\theta,0}\in S^{0}(U)$ a classical symbol of degree $0$.
This means that the $\psi{\rm do}$s $\log_{\theta}A$ and $A_{\theta}^{-s}$ can be
approximated modulo smoothing operators as
(3.11)
$$\log_{\theta}A\sim\sum_{j=0}^{\infty}(\log_{\theta}A)_{[j]}\ ,\hskip 28.452756%
ptA_{\theta}^{-s}\sim\sum_{j=0}^{\infty}A_{\theta,j}^{(-s)}$$
with $(\log_{\theta}A)_{[j]}={\rm OP}(\textsf{q}_{\theta,j})$, an operator in
$\Psi^{-j}(U,E)$ for $j\geq 1$, and $A_{\theta,j}^{(-s)}={\rm OP}(\textsf{a}_{\theta,j}^{-s})\in\Psi^{-s\alpha-j}(U%
,E)$ for $j\geq 0$. In
particular, the local residue density associated to
$\textsf{q}_{\theta,n}(x,\xi)$ is defined independently of the choice of
local coordinates and
(3.12)
$$\log{\rm det}_{\mbox{\rm res}}A:=\frac{1}{(2\pi)^{n}}\int_{M}\int_{|\xi|=1}%
\mbox{\rm tr\,}(\textsf{q}_{\theta,n}(x,\xi))\ dS(\xi)\,dx\ .$$
Proof of Proposition 1.5
Proof.
From (3.4) we have that $\textsf{r}(\lambda)_{n}$ is computed
only using the first $n+1$ homogeneous terms $\textsf{a}_{0},\ldots,\textsf{a}_{n}$. Consequently, by (3.7) and (3.12)
the same is true for the residue determinant.
Let $\theta,\phi\in\mathbb{R}$ be two choices of principal angle with $(\theta-\phi)/2\pi\in\mathbb{R}\backslash\mathbb{Z}$. Then
(3.13)
$$\textsf{q}_{\phi,j}(x,\xi)-\textsf{q}_{\theta,j}(x,\xi)=2\pi i\,m\,\textsf{I}_%
{j}\ +\ \int_{\Gamma_{\phi,\theta}}\textsf{r}(\lambda)_{j}(x,\xi)\ d\lambda\ ,$$
where $m=\pm[(\theta-\phi)/2\pi]\in\mathbb{Z}$ and the bounded contour
$\Gamma_{\phi,\theta}=\Gamma_{\phi,\theta}(x,\xi)$ can be taken of the form
$$\{\rho e^{i\theta}\ |\ R\geq\rho\geq r\}\cup\{r\,e^{it}\ |\ \phi\geq t\geq\theta\}\cup\{\rho e^{i(\theta-2\pi)}\ |\ r\leq\rho\leq R\}\cup\{R\,e^{it}\ |\ \phi\leq t\leq\theta\}$$
enclosing an annular region between the cuts $R_{\theta}$ and
$R_{\phi}$ and the circles of radii $r<R$. This follows by a similar
analysis to [Wo1]§3 for the symbols of the complex powers.
On the other hand, the contour integral on the right-hand side of
(3.13) is $-2\pi i$ times the homogeneous component of
degree $-j$ of the local symbol of a $\psi{\rm do}$ projection
$P_{\theta,\phi}(A)$ whose range contains the direct sum of those
generalized eigenspaces of $A$ with eigenvalues contained in
$\Gamma_{\phi,\theta}$, and is zero if $(\theta-\phi)/2\pi\in\mathbb{Z}$ (see
[Bu], [Po]). Consequently, taking $j=n$,
(3.12) and (3.13) imply
$${\rm res}(\log_{\theta}A)-{\rm res}(\log_{\phi}A)=-2\pi i\,{\rm res}(P_{\theta,\phi}(A))\ .$$
Since the residue trace of any $\psi{\rm do}$ projection is zero
[Wo2], we infer that $\det_{\rm res}$ is independent of the
choice of principal angle.
∎
Remark 3.1.
The vanishing of res on $\psi{\rm do}$ projections is shown in
[Wo2] to be equivalent to $\zeta_{\theta}(A,0)|^{{\rm mer}}$ being
independent of $\theta$.
Proof of Theorem 1.7
Proof.
From (3.12) and
(3.1), $\mbox{\rm res}(\log A)$ is seen to depend on only
the first $n+1$ homogeneous terms in the local symbol expansion of
$A$, and finitely many of their derivatives, while
$(\textsf{a}\circ\textsf{b})_{n}$ is determined using only $\textsf{a}_{0},\ldots,\textsf{a}_{n},\textsf{b}_{0},\ldots,\textsf{b}_{n}$. The demonstration of multiplicativity can
therefore be reduced to a certain finite-dimensional symbol
algebra, introduced by Okikiolu [Ok1]§3, where the following
standard Banach algebra version of the Campbell-Hausdorff Theorem
[Ok1, Ja] can be applied.
Theorem. Let ${\mathcal{B}}$ be a Banach
algebra with norm $\|\cdot\|$ and identity $I$. For invertible
elements $a,b\in{\mathcal{B}}$ and a choice of Agmon angles one can define,
using (2.2), elements $\log(a),\,\log(b)$ and $\log(ab)$ in ${\mathcal{B}}$. Then for sufficiently small real $s,t>0$
(3.14)
$$\log(a^{s}\,b^{t})=s\,\log(a)+t\,\log(b)+\sum_{k=1}^{\infty}C^{(k)}(s\,\log(a),t\,\log(b))\ ,$$
where $C^{(k)}(s\,\log(a),t\,\log(b))$ is the element of ${\mathcal{B}}$
(3.15)
$$\sum_{j=1}^{\infty}\frac{(-1)^{j+1}}{j+1}\sum\frac{{\rm Ad}(s\,\log(a))^{n_{1}}\,{\rm Ad}(t\,\log(b))^{m_{1}}\cdots{\rm Ad}(s\,\log(a))^{n_{j}}\,{\rm Ad}(t\,\log(b))^{m_{j}}\,t\,\log(b)}{(1+\sum_{i=1}^{j}m_{i})\,n_{1}!\cdots n_{j}!\,m_{1}!\cdots m_{j}!}$$
and the inner sum is over $j$-tuples of pairs $(n_{i},m_{i})$ such
that $n_{i}+m_{i}>0$ and $\sum_{i=1}^{j}n_{i}+m_{i}=k$. For $c\in{\mathcal{B}}$ the operator ${\rm Ad}(c)$ acts by ${\rm Ad}(c)(c')=[c,c']$.
We denote by $\textsf{S}_{[n]}(U)$ the algebra of finite-symbol sequences
of length $n$, introduced in [Ok1]. An element of
$\textsf{S}_{[n]}(U)$ is an $(n+1)$-tuple $\textsf{p}=(\textsf{p}_{0},\ldots,\textsf{p}_{n})$
of polynomials
(3.16)
$$\textsf{p}_{j}:U\times\mathbb{R}^{n}\longrightarrow\mbox{\rm End}(\mathbb{R}^{N})\ ,\qquad\textsf{p}_{j}(x,\xi)=\sum_{|\mu|+|\nu|\leq n-1}p_{j,\mu,\nu}\,x^{\mu}\xi^{\nu}\ ,$$
with $p_{j,\mu,\nu}\in\mbox{\rm End}(\mathbb{R}^{N})$. $\textsf{S}_{[n]}(U)$ is a
finite-dimensional vector space which, relative to a fixed point
$(x_{0},\xi_{0})\in U\times\mathbb{R}^{n}$, can be endowed with an associative
product structure, defined for $\textsf{p},\widetilde{\textsf{p}}\in\textsf{S}_{[n]}(U)$ by
(3.17)
$$(\textsf{p}\circ\widetilde{\textsf{p}})_{j}=\pi_{n-j}\left(\sum_{|\mu|+k+l=j}\frac{1}{\mu!}\partial_{\xi}^{\mu}(\textsf{p}_{k})\,D_{x}^{\mu}(\widetilde{\textsf{p}}_{l})\right)\ ,$$
where for a smooth function $f$ defined in a neighborhood of
$(x_{0},\xi_{0})\in U\times\mathbb{R}^{n}$
(3.18)
$$\pi_{m}(f)=\sum_{|\mu|+|\nu|\leq m}\frac{1}{\mu!\nu!}\partial_{x}^{\mu}\partial_{\xi}^{\nu}(f)(x_{0},\xi_{0})\,(x-x_{0})^{\mu}\,(\xi-\xi_{0})^{\nu}$$
is the Taylor expansion of $f$ around $(x_{0},\xi_{0})$ to order $m$.
Endowed with this product, relative to $(x_{0},\xi_{0})$,
$\textsf{S}_{[n]}(U)$ becomes an algebra which we denote by
$$\textsf{S}_{[n]}(U)(x_{0},\xi_{0})\ .$$
The map from the symbol space $S(U)$ to symbols of length $n$
(3.19)
$$\pi:S(U)\longrightarrow\textsf{S}_{[n]}(U)(x_{0},\xi_{0})\ ,\qquad\pi(\textsf{a}):=(\pi_{n}(\textsf{a}_{0}),\pi_{n-1}(\textsf{a}_{1}),\ldots,\pi_{0}(\textsf{a}_{n}))\ ,$$
where $\textsf{a}=(\textsf{a}_{0},\textsf{a}_{1},\ldots)$, is an algebra homomorphism,
so that
(3.20)
$$\left(\pi(\textsf{a}\circ\textsf{b})\right)_{j}=\left(\pi(\textsf{a})\circ\pi(\textsf{b})\right)_{j}\ ,$$
while, from (3.18), evaluation at the point
$(x_{0},\xi_{0})$ gives
(3.21)
$$\left(\pi(\textsf{a})\right)_{j}(x_{0},\xi_{0})=\textsf{a}_{j}(x_{0},\xi_{0})\ ,\qquad j\leq n\ .$$
The logarithm of an element $\textsf{p}=(\textsf{p}_{0},\ldots,\textsf{p}_{n})\in\textsf{S}_{[n]}(U)(x_{0},\xi_{0})$ admitting a principal angle can be
defined by the procedure used in $S(U)$: consider, by inclusion,
$\textsf{p}$ as an element $\tilde{\textsf{p}}$ of $S(U)$. If $\lambda\notin{\rm spec}(\textsf{p}_{0}(x,\xi))$ then $\tilde{\textsf{p}}$ has a resolvent
$\textsf{r}(\lambda)\in S(U_{\lambda})$ given by (3.4), while
(3.22)
$$\textsf{r}_{\pi}(\lambda):=\pi(\textsf{r}(\lambda))\in\textsf{S}_{[n]}(U)(x_{0},\xi_{0})$$
inverts $\textsf{p}-\lambda\textsf{I}_{n}$ in $\textsf{S}_{[n]}(U)(x_{0},\xi_{0})$; that is,
since $\pi(\tilde{\textsf{p}})=\textsf{p}$, applying $\pi$ to
(3.5) and using (3.20) we have
(3.23)
$$\textsf{r}_{\pi}(\lambda)\circ(\textsf{p}-\lambda\textsf{I}_{n})=(\textsf{p}-\lambda\textsf{I}_{n})\circ\textsf{r}_{\pi}(\lambda)=\textsf{I}_{n}\ ,$$
where $\textsf{I}_{n}=(I,0,\ldots,0)$ is the identity symbol in
$\textsf{S}_{[n]}(U)(x_{0},\xi_{0})$. Set
(3.24)
$$(\log_{\theta}\textsf{p})_{j}(x,\xi)=\frac{i}{2\pi}\int_{\Gamma_{\theta}}\log_{\theta}\lambda\ \textsf{r}_{\pi}(\lambda)_{j}(x,\xi)\ d\lambda\ .$$
Since the entries of $\textsf{r}_{\pi}(\lambda)$ are finite Taylor
expansions of $\textsf{r}(\lambda)_{j}(x,\xi)$ around $(x_{0},\xi_{0})$, the only
logarithmic term is a $\log|\xi_{0}|$; there is no $\log|\xi|$ term.
It follows that (3.24) is an element of
$\textsf{S}_{[n]}(U)(x_{0},\xi_{0})$. Moreover, it is clear ([Ok1] Lemma
3.6) that for $\textsf{a}\in S(U)$
(3.25)
$$\left(\pi(\log_{\theta}\textsf{a})\right)_{j}=\left(\log_{\theta}(\pi(\textsf{a}))\right)_{j}\ .$$
Likewise, $\textsf{p}_{\theta,j}^{-s}(x,\xi)=\frac{i}{2\pi}\int_{\Gamma_{\theta}}\lambda^{-s}_{\theta}\,\textsf{r}_{\pi}(\lambda)_{j}(x,\xi)\ d\lambda\in\textsf{S}_{[n]}(U)(x_{0},\xi_{0})$ if
$\textsf{p}\in\textsf{S}_{[n]}(U)(x_{0},\xi_{0})$, and we find for $\textsf{a}\in S(U)$
(3.26)
$$\left(\pi(\textsf{a}_{\theta}^{-s})\right)_{j}=\left(\,(\pi(\textsf{a}))_{\theta}^{-s}\,\right)_{j}\ .$$
Now for $s,t\in[0,1]$ and $\textsf{a},\textsf{b}\in S(U)$, (3.20),
(3.25) and (3.26) give
(3.27)
$$\left(\,\pi(\log_{\theta}(\textsf{a}^{s}\circ\textsf{b}^{t}))\,\right)_{n}=\left(\,\log_{\theta}(\pi(\textsf{a})^{s}\circ\pi(\textsf{b})^{t})\,\right)_{n}\ ,$$
omitting the $\theta$ subscript. Since $\textsf{S}_{[n]}(U)(x_{0},\xi_{0})$ is a
finite-dimensional algebra, for $s,t\geq 0$ sufficiently small we
have from (3.14), with respect to the induced norm,
$$\left(\,\pi(\log_{\theta}(\textsf{a}^{s}\circ\textsf{b}^{t}))\,\right)_{n}=s\,(\log\pi(\textsf{a}))_{n}+t\,(\log\pi(\textsf{b}))_{n}+\sum_{k=1}^{\infty}\left(C^{(k)}(s\,\log\pi(\textsf{a}),\ t\,\log\pi(\textsf{b}))\right)_{n}$$
$$=s\,(\pi(\log\textsf{a}))_{n}+t\,(\pi(\log\textsf{b}))_{n}+\sum_{k=1}^{\infty}\left(\pi\left(C^{(k)}(s\,\log\textsf{a},\ t\,\log\textsf{b})\right)\,\right)_{n}\ ,$$
and so, evaluating at the point $(x_{0},\xi_{0})$,
(3.21) implies
$$\log_{\theta}(\textsf{a}^{s}\circ\textsf{b}^{t})_{n}(x_{0},\xi_{0})=s\,(\log\textsf{a})_{n}(x_{0},\xi_{0})+t\,(\log\textsf{b})_{n}(x_{0},\xi_{0})+\sum_{k=1}^{\infty}\left(C^{(k)}(s\,\log\textsf{a},\ t\,\log\textsf{b})\right)_{n}(x_{0},\xi_{0})\ .$$
Since all terms in the preceding display lie in the symbol class $S(U)$,
with uniformly continuous derivatives of all orders on compact
subsets of $U\times\mathbb{R}^{n}$, the convergence as
$N\rightarrow\infty$ of
$$\log_{\theta}(\textsf{a}^{s}\circ\textsf{b}^{t})_{n}-s\,(\log\textsf{a})_{n}-t\,(\log\textsf{b})_{n}-\sum_{k=1}^{N}\left(C^{(k)}(s\,\log\textsf{a},\ t\,\log\textsf{b})\right)_{n}$$
and all its derivatives at the point $(x_{0},\xi_{0})$ is also
uniform in $(x_{0},\xi_{0})\in U_{c}\times S^{n-1}$ for compact subsets
$U_{c}\subset U$. Hence, taking a partition of unity we can
interchange the sum with integration over $S^{*}M$ to get
(3.30)
$$\int_{M}\int_{|\xi|=1}\mbox{\rm tr\,}(\log(\textsf{a}^{s}\circ\textsf{b}^{t})_{n}(x,\xi))\ dS(\xi)\,dx=$$
$$s\int_{M}\int_{|\xi|=1}\mbox{\rm tr\,}((\log\textsf{a})_{n}(x,\xi))\ dS(\xi)\,dx+t\int_{M}\int_{|\xi|=1}\mbox{\rm tr\,}((\log\textsf{b})_{n}(x,\xi))\ dS(\xi)\,dx$$
$$+\sum_{k=1}^{\infty}\int_{M}\int_{|\xi|=1}\mbox{\rm tr\,}(C^{(k)}(s\,\log\textsf{a},\ t\,\log\textsf{b})_{n}(x,\xi))\ dS(\xi)\,dx\ .$$
But $C^{(k)}(s\,\log A,\ t\,\log B)$ is a classical $\psi{\rm do}$
of order $0$ with symbol
$$\sigma(C^{(k)}(s\,\log A,\ t\,\log B))\sim C^{(k)}(s\,\log\textsf{a},\ t\,\log\textsf{b})\ ,$$
and
so, in particular,
$$\sigma(C^{(k)}(s\,\log A,\ t\,\log B))_{n}=C^{(k)}(s\,\log\textsf{a},\ t\,\log\textsf{b})_{n}\ .$$
It is, furthermore, by definition a commutator of logarithmic
$\psi{\rm do}$s, and hence by Proposition 1.2
$$\frac{1}{(2\pi)^{n}}\int_{M}\int_{|\xi|=1}\mbox{\rm tr\,}(C^{(k)}(s\,\log\textsf{a},\ t\,\log\textsf{b})_{n}(x,\xi))\ dS(\xi)\,dx=\mbox{\rm res}\left(C^{(k)}(s\,\log A,\ t\,\log B)\,\right)=0\ .$$
Thus
(3.30) says that for sufficiently small $s,t\in[0,1]$
(3.31)
$$\mbox{\rm res}(\log(A^{s}B^{t}))=s\,\mbox{\rm res}(\log A)+t\,\mbox{\rm res}(\log B)\ .$$
But both sides of (3.31) are analytic in $s,t$, and so the identity holds for
all $s,t\in[0,1]$. Evaluating at $s=t=1$ completes the proof.
∎
Proof of Theorem 1.8
Proof.
Let $\textsf{a}(x,\xi)=\sigma(A)(x,\xi)\in S^{\alpha}(U)$ be the symbol of $A$
localized over $U$, as above. The complex powers
$A^{-s}_{\theta}\in\Psi^{-\alpha s}(E)$ are classical $\psi{\rm do}$s defined in
the half-plane ${\rm Re}(s)>0$ by (2.8), and elsewhere by (2.9), with
local symbol
(3.32)
$$\sigma(A^{-s}_{\theta})(x,\xi)\ \sim\ \sum_{j\geq 0}\textsf{a}_{\theta,j}^{-s}(x,\xi)\ .$$
If $A$ is not invertible, then for $s\neq 0$ (2.10)
remains unchanged, while
(3.33)
$$A_{\theta}^{0}=I-\Pi_{0}(A)\ ,$$
with $\Pi_{0}(A)$ an (in general non-self-adjoint) projection onto
the generalized eigenspace $E_{0}(A)$ in the statement of
Theorem 1.8.
The symbol $\sigma(A^{-s}_{\theta})(x,\xi)$ is integrable in $\xi$ for
${\rm Re}(s)>n/\alpha$ and, for such $s$, $K(A^{-s}_{\theta},x,x)\,dx$ defines a
global $C^{\infty}$ $n$-form on $M$ with values in $\mbox{\rm End}(E)$.
For ${\rm Re}(s)>n/\alpha$ and any $J\in\mathbb{N}$ we have with
$\hat{d}\xi:=(2\pi)^{-n}d\xi$
(3.34)
$$K(A^{-s}_{\theta},x,x)=\int_{\mathbb{R}^{n}}\sigma(A^{-s}_{\theta})(x,\xi)\ \hat{d}\xi=\int_{\mathbb{R}^{n}}\left(\sigma(A^{-s}_{\theta})(x,\xi)-\sum_{j=0}^{J-1}\textsf{a}_{\theta,j}^{-s}(x,\xi)\right)\hat{d}\xi\ +\ \sum_{j=0}^{J-1}\int_{\mathbb{R}^{n}}\textsf{a}_{\theta,j}^{-s}(x,\xi)\ \hat{d}\xi\ .$$
With $A^{-s}_{\theta}$ defined for all $s\in\mathbb{C}$ by (2.9), the
difference
$$\sigma(A^{-s}_{\theta})(x,\xi)-\sum_{j=0}^{J-1}\textsf{a}_{\theta,j}^{-s}(x,\xi)\ \in\ S^{-\alpha{\rm Re}(s)-J}(U)$$
is integrable
in $\xi$ for
(3.35)
$${\rm Re}(s)>\frac{n-J}{\alpha}\ ,$$
and so the first integral on the right-hand side of (3.34)
extends holomorphically to the half-plane
(3.35). Hence, choosing $J=n+1$ (or any $J>n$), we can set $s=0$ in that integral to get, using
(3.9) and (3.33),
(3.36)
$$\left.\int_{\mathbb{R}^{n}}\left(\sigma(A^{-s}_{\theta})(x,\xi)-\sum_{j=0}^{n}\textsf{a}_{\theta,j}^{-s}(x,\xi)\right)\hat{d}\xi\right|^{{\rm mer}}_{s=0}=\int_{\mathbb{R}^{n}}\left(\sigma(A^{0}_{\theta})(x,\xi)-\sum_{j=0}^{n}\textsf{a}_{\theta,j}^{0}(x,\xi)\right)\ \hat{d}\xi$$
$$=\int_{\mathbb{R}^{n}}\left(\sigma(I-\Pi_{0}(A))(x,\xi)-\sum_{j=0}^{n}\textsf{I}_{\theta,j}(x,\xi)\right)\ \hat{d}\xi=-\int_{\mathbb{R}^{n}}\sigma(\Pi_{0}(A))(x,\xi)\ \hat{d}\xi\ .$$
The remaining objects of interest, then, are the local kernels
along the diagonal
$$\textsf{K}_{j}^{-s}(x)=\int_{\mathbb{R}^{n}}\textsf{a}_{j}^{-s}(x,\xi)\ \hat{d}\xi\ .$$
Splitting the integral into two parts we have
(3.37)
$$\left.\textsf{K}_{j}^{-s}(x)\right|^{{\rm mer}}=\left.\int_{|\xi|\leq 1}\textsf{a}_{j}^{-s}(x,\xi)\ \hat{d}\xi\right|^{{\rm mer}}+\left.\int_{|\xi|\geq 1}\textsf{a}_{j}^{-s}(x,\xi)\ \hat{d}\xi\right|^{{\rm mer}}\ .$$
We deal first with the second term on the right-hand side of
(3.37), for which the symbol is homogeneous in
$|\xi|$ and hence leads only to local poles (for any $s$). Changing to
polar coordinates and using the homogeneity of $\textsf{a}_{j}^{-s}(x,\xi)$,
we have for ${\rm Re}(s)>(n-j)/\alpha$
(3.38)
$$\int_{|\xi|\geq 1}\textsf{a}_{j}^{-s}(x,\xi)\ \hat{d}\xi=\int_{1}^{\infty}r^{-\alpha s-j+n-1}dr\int_{|\xi|=1}\textsf{a}_{j}^{-s}(x,\xi)\ \hat{d}S(\xi)$$
(3.39)
$$=\frac{1}{(\alpha s+j-n)}\int_{|\xi|=1}\textsf{a}_{j}^{-s}(x,\xi)\ \hat{d}S(\xi)\ .$$
The meromorphic extension of the left-hand side of
(3.38) is defined by (3.39).
For $j\neq n$, (3.39) is holomorphic
around $s=0$ and so from (3.9)
(3.40)
$$\left.\int_{|\xi|\geq 1}\textsf{a}_{j}^{-s}(x,\xi)\ \hat{d}\xi\right|_{s=0}^{{\rm mer}}=0\ ,\qquad j\neq 0,\,n\ ,$$
(3.41)
$$\left.\int_{|\xi|\geq 1}\textsf{a}_{0}^{-s}(x,\xi)\ \hat{d}\xi\right|_{s=0}^{{\rm mer}}=-\,\frac{1}{(2\pi)^{n}}\cdot\frac{{\rm vol}(S^{n-1})}{n}\ .$$
For $j=n$ we use [Ok2, Lemma 2.1], which states that there is
an equality
$$\textsf{a}_{j}^{-s}(x,\xi)=\sum_{k=0}^{\infty}\frac{(-s)^{k}}{k!}((\log\textsf{a})^{k})_{j}(x,\xi)\ ,$$
where $(\log\textsf{a})^{k}:=\log\textsf{a}\circ\log\textsf{a}\circ\cdots\circ\log\textsf{a}$
($k$ times) and the right-hand side is convergent as a function of
$(s,x,\xi)$ in the standard Fréchet topology on $C^{\infty}(\mathbb{C}\times U,(\mathbb{R}^{N})^{*}\otimes\mathbb{R}^{N})$. So we obtain
$$\left.\int_{|\xi|\geq 1}\textsf{a}_{n}^{-s}(x,\xi)\ \hat{d}\xi\right|^{{\rm mer}}=\frac{1}{\alpha s}\int_{|\xi|=1}\left(\ \textsf{I}_{n}(x,\xi)-s\,(\log\textsf{a})_{n}(x,\xi)+o(s)\ \right)\ \hat{d}S(\xi)$$
$$=-\frac{1}{\alpha}\int_{|\xi|=1}(\log\textsf{a})_{n}(x,\xi)\,\hat{d}S(\xi)+o(s^{0})\ .$$
Hence
(3.42)
$$\left.\int_{|\xi|\geq 1}\textsf{a}_{n}^{-s}(x,\xi)\ \hat{d}\xi\right|_{s=0}^{{\rm mer}}=-\frac{1}{\alpha}\int_{|\xi|=1}(\log\textsf{a})_{n}(x,\xi)\ \hat{d}S(\xi)\ .$$
For general $s\in\mathbb{C}$ the first (non-homogeneous) term on the
right-hand side of (3.37) is a more complicated expression
leading to global poles. However, at $s=0$ it is local and, from
(3.9), given by
$$\left.\int_{|\xi|\leq 1}\textsf{a}_{j}^{-s}(x,\xi)\ \hat{d}\xi\right|^{{\rm mer}}_{s=0}=\int_{|\xi|\leq 1}\textsf{I}_{j}(x,\xi)\ \hat{d}\xi\ .$$
Hence
(3.43)
$$\left.\int_{|\xi|\leq 1}\textsf{a}_{j}^{-s}(x,\xi)\ \hat{d}\xi\right|_{s=0}^{{\rm mer}}=0\ ,\qquad j\neq 0\ ,$$
(3.44)
$$\left.\int_{|\xi|\leq 1}\textsf{a}_{0}^{-s}(x,\xi)\ \hat{d}\xi\right|_{s=0}^{{\rm mer}}=\frac{1}{(2\pi)^{n}}\cdot{\rm vol}(B^{n})\ ,$$
where $B^{n}$ is the $n$-ball.
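The constants from (3.41) and (3.44) cancel in the computation that follows precisely because ${\rm vol}(B^{n})={\rm vol}(S^{n-1})/n$. A quick numerical check of this classical identity (an aside, using the standard Gamma-function formulas for the two volumes):

```python
import math

def vol_sphere(n):
    """Surface volume of the unit sphere S^(n-1) in R^n: 2 pi^(n/2) / Gamma(n/2)."""
    return 2.0 * math.pi ** (n / 2.0) / math.gamma(n / 2.0)

def vol_ball(n):
    """Volume of the unit ball B^n in R^n: pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2.0) / math.gamma(n / 2.0 + 1.0)

for n in range(1, 8):
    print(n, vol_ball(n), vol_sphere(n) / n)  # the two columns agree
```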
Thus from (3.34), (3.36),
(3.37), (3.40), (3.41),
(3.42), (3.43),
(3.44) we have
$$\left.K(A_{\theta}^{-s},x,x)\right|_{s=0}^{{\rm mer}}=-\int_{\mathbb{R}^{n}}\sigma(\Pi_{0}(A))(x,\xi)\ \hat{d}\xi-\frac{1}{\alpha}\int_{|\xi|=1}(\log\textsf{a})_{n}(x,\xi)\,\hat{d}S(\xi)-\,\frac{1}{(2\pi)^{n}}\cdot\frac{{\rm vol}(S^{n-1})}{n}+\frac{1}{(2\pi)^{n}}\cdot{\rm vol}(B^{n})$$
$$=-\int_{\mathbb{R}^{n}}\sigma(\Pi_{0}(A))(x,\xi)\ \hat{d}\xi-\frac{1}{\alpha}\int_{|\xi|=1}(\log\textsf{a})_{n}(x,\xi)\,\hat{d}S(\xi)\ .$$
Hence
$$\int_{M}\int_{|\xi|=1}\mbox{\rm tr\,}((\log\textsf{a})_{n}(x,\xi))\,\hat{d}S(\xi)\,dx=$$
$$-\alpha\left(\int_{M}\left.\mbox{\rm tr\,}(K(A_{\theta}^{-s},x,x))\right|_{s=0}^{{\rm mer}}\,dx+\int_{M}\int_{\mathbb{R}^{n}}\mbox{\rm tr\,}(\sigma(\Pi_{0}(A))(x,\xi))\ \hat{d}\xi\,dx\right)\ ,$$
that is,
$$\mbox{\rm res}(\log A)=-\alpha\,\left(\,\zeta(A,0)|^{{\rm mer}}+\mbox{\rm Tr\,}(\Pi_{0}(A))\,\right)\ .$$
∎
Proof of Theorem 1.17
Proof.
We have,
$$\log(I+\textsf{Q})=\frac{i}{2\pi}\int_{\Gamma_{\theta}}\log\lambda\ (I+\textsf{Q}-\lambda I)^{-1}\ d\lambda\ ,$$
where the finite contour $\Gamma_{\theta}$ encloses, in particular,
$1$. Iterating
$$(I+\textsf{Q}-\lambda I)^{-1}=(1-\lambda)^{-1}I\ -(1-\lambda)^{-1}\textsf{Q}(I+\textsf{Q}-\lambda I)^{-1}$$
yields
$$(I+\textsf{Q}-\lambda I)^{-1}=\sum_{j=0}^{m}(-1)^{j}(1-\lambda)^{-j-1}\textsf{Q}^{j}+(-1)^{m+1}(1-\lambda)^{-m-1}\textsf{Q}^{m+1}(I+\textsf{Q}-\lambda I)^{-1}\ ,$$
and so
(3.45)
$$\log(I+\textsf{Q})=\sum_{j=0}^{m}(-1)^{j}\,\textsf{Q}^{j}\,\frac{i}{2\pi}\int_{\Gamma_{\theta}}\log\lambda\,(1-\lambda)^{-j-1}\ d\lambda+R(\textsf{Q},m)\ ,$$
where
$$R(\textsf{Q},m)=(-1)^{m+1}\textsf{Q}^{m+1}\ \frac{i}{2\pi}\int_{\Gamma_{\theta}}\log\lambda\ (1-\lambda)^{-m-1}(I+\textsf{Q}-\lambda I)^{-1}\ d\lambda\ .$$
$R(\textsf{Q},m)$ is a classical $\psi{\rm do}$ of order $(m+1)k$ and so for any
positive integer $m$ with $mk<-n$ we have $\mbox{\rm res}(R(\textsf{Q},m))=0$.
All operators in (3.45) are integer order and so we can
use the linearity of res in Lemma 1.1 to find
$$\mbox{\rm res}(\log(I+\textsf{Q}))=\sum_{j=1}^{\left[\frac{n}{|k|}\right]}(-1)^{j}\,\mbox{\rm res}(\textsf{Q}^{j})\,\frac{i}{2\pi}\int_{\Gamma_{\theta}}\log\lambda\,(1-\lambda)^{-j-1}\ d\lambda\ ,$$
the summation beginning now from $j=1$, since $\mbox{\rm res}(I)=0$ and the
contour integral is zero for $j=0$. Since $\Gamma_{\theta}$
encloses $1$, then for $j\geq 1$
$$\frac{i}{2\pi}\int_{\Gamma_{\theta}}\log\lambda\ (1-\lambda)^{-j-1}\ d\lambda=-\frac{1}{j}\ \frac{i}{2\pi}\int_{\Gamma_{\theta}}\lambda^{-1}\ (1-\lambda)^{-j}\ d\lambda=-\frac{1}{j}$$
and we reach the conclusion.
∎
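The contour-integral evaluation at the end of the proof can also be checked numerically. In the sketch below (an independent illustration; the circle of radius $1/2$ about $\lambda=1$, traversed counterclockwise, is a hypothetical choice of contour avoiding the logarithmic cut), a plain Riemann sum approximates $\frac{i}{2\pi}\int\log\lambda\,(1-\lambda)^{-j-1}\,d\lambda$ for small $j$:

```python
import cmath
import math

def log_coefficient(j, n_pts=4096, radius=0.5):
    """(i/2pi) * contour integral of log(lam) * (1 - lam)^(-j-1) over a
    counterclockwise circle |lam - 1| = radius.  The integrand is smooth
    and periodic in the angle, so a plain Riemann sum converges rapidly."""
    total = 0j
    for k in range(n_pts):
        t = 2.0 * math.pi * k / n_pts
        lam = 1.0 + radius * cmath.exp(1j * t)
        dlam = radius * 1j * cmath.exp(1j * t) * (2.0 * math.pi / n_pts)
        total += cmath.log(lam) * (1.0 - lam) ** (-j - 1) * dlam
    return (1j / (2.0 * math.pi)) * total

for j in range(1, 5):
    print(j, log_coefficient(j))  # real part close to -1/j, imaginary part close to 0
```

The principal branch of the logarithm suffices here since the circle stays in the right half-plane, away from the standard cut.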
Proof of Theorem 1.20
Proof.
Let $A\in\Psi^{\alpha}(E)$ be an elliptic $\psi{\rm do}$, admitting a
principal angle. Let $Q$ be a parametrix for $A$, so that
(3.46)
$$AQ-I=s_{\infty}\in\Psi^{-\infty}(E)\ ,\qquad QA-I=\tilde{s}_{\infty}\in\Psi^{-\infty}(E)$$
are smoothing operators. For any smoothing operator $\kappa_{\infty}\in\Psi^{-\infty}(E)$ one has by Corollary 1.6
(3.47)
$${\rm det}_{{\rm res}}(A+\kappa_{\infty})={\rm det}_{{\rm res}}(A)\ .$$
Let $B\in\Psi^{\beta}(E)$ with $\alpha-\beta\in\mathbb{N}$. Then by
(1.11), which from the proof of
Theorem 1.11 is seen to hold logarithmically,
$$\log{\rm det}_{\mbox{\rm res}}(A+B)=\log{\rm det}_{\mbox{\rm res}}(AQ-s_{\infty})(A+B)=\log{\rm det}_{\mbox{\rm res}}(AQA+AQB+t_{\infty})$$
$$=\log{\rm det}_{\mbox{\rm res}}(AQA+AQB)=\log{\rm det}_{\mbox{\rm res}}A+\log{\rm det}_{\mbox{\rm res}}(QA+QB)=\log{\rm det}_{\mbox{\rm res}}A+\log{\rm det}_{\mbox{\rm res}}(I+QB)\ ,$$
where $t_{\infty}\in\Psi^{-\infty}(E)$ and for the final equality we
use (3.46) and (3.47). Rewriting in
terms of (1.13) and (1.26) this
reads
$$-\alpha\,(\,\zeta(A+B,0)|^{{\rm mer}}+\mbox{\rm h}_{0}(A+B)\,)=-\alpha\,(\,\zeta(A,0)|^{{\rm mer}}+\mbox{\rm h}_{0}(A)\,)+\sum_{j=1}^{M}\frac{(-1)^{j+1}}{j}\,\mbox{\rm res}\,\left(\,(QB)^{j}\,\right)\ .$$
The sum terminates when $\mbox{\rm ord}(QB)\cdot j<-n$, so we may take
$$M=\left[\frac{n}{\alpha-\beta}\right]\ .$$
Replacing $B$ by $B[t]=B_{0}+B_{1}\,t+\ldots+B_{d}\,t^{d}$ now
proves (1.33).
∎
References
[Bo]
J. Bost, Fibrés déterminants, déterminants régularisés et
mesures sur les espaces de modules de courbes complexes, Astérisque
152 (1988), 113–149.
[BrOr]
T. Branson and B. Ørsted, Conformal geometry and global
invariants, Diff. Geom. Appl. 1 (1991), 279–308.
[Bu]
T. Burak, On spectral projections of elliptic operators,
Ann. Scuola Norm. Sup. Pisa 24 (1970), 209–230.
[Gi]
P. Gilkey, Invariance Theory, the heat
equation and the Atiyah-Singer Index Theorem. 2nd Edition, CRC
Press, 1995.
[Gr]
G. Grubb, A resolvent approach to traces
and zeta Laurent expansions, AMS Contemp. Math. Proc., vol. 366,
2005, pp.67–93, arXiv: math.AP/0311081.
[Gr2]
G. Grubb, On the logarithmic component
in trace defect formulas, preprint 2004, arXiv: math.AP/0411483.
[GrSe]
G. Grubb and R. Seeley, Weakly parametric
pseudodifferential operators and Atiyah-Patodi-Singer boundary
problems, Invent. Math. 121 (1995), 481-529.
[Gu]
V. Guillemin,
A new proof of Weyl’s formula on the asymptotic distribution
of eigenvalues, Adv. Math. 102 (1985), 184–201.
[Ja]
N. Jacobson, Lie Algebras, Interscience
Tracts in Pure and Appl. Math. 10 1962. Wiley, New York.
[Ka]
C. Kassel, Le résidu
non-commutatif [d'après M. Wodzicki], Astérisque
177-178 (1989), 199–229.
[KoVi]
M. Kontsevich and S. Vishik,
Determinants of elliptic pseudodifferential operators arXiv:
hep-th/9404046 (1994); Geometry of determinants of elliptic
operators, Funct. Anal. on the Eve of the 21st Century 1,
Birkhauser, Progr. Math. 131, 1995, pp.173–197.
[McSi]
H. McKean and I. Singer,
Curvature and the eigenvalues of the Laplacian, J. Diff.
Geom. 1 (1967), 43–69.
[Mu]
W. Müller,
Relative zeta functions, relative determinants and scattering
theory, Comm. Math. Phys. 192 (1998), 309–347.
[Ok1]
K. Okikiolu, The Campbell-Hausdorff theorem
for elliptic operators and a related trace formula, Duke Math. J.
79 (1995), 687–722.
[Ok2]
K. Okikiolu, The multiplicative anomaly for
determinants of elliptic operators, Duke Math. J. 79
(1995), 723–750.
[Pa]
S. Paycha, Anomalies and regularization techniques in mathematics and
physics, preprint (Colombia) 2004.
[PaRo]
S. Paycha and S. Rosenberg,
Traces and characteristic classes on loop groups, in
Infinite dimensional groups and manifolds, ed. V.Turavev,T.
Wurzbacher, de Gruyter, Berlin, 2002.
[PaSc]
S. Paycha and S. Scott, The Laurent expansion
for regularized integrals of holomorphic symbols, preprint 2004.
[Po]
R. Ponge, Spectral asymmetry, zeta functions, and the noncommutative residue,
Preprint, arXiv: math.DG/0310102 (2005).
[Sc]
S. Scott, Zeta determinants on manifolds with
boundary, J. Funct. Anal. 192 (2002), 112–185.
[ScZa]
S. Scott and D. Zagier, A symbol proof of the
local index theorem, preprint 2004.
[Se]
R. T. Seeley, Complex powers of an elliptic
operator, AMS Proc. Symp. Pure Math. X, 1966, AMS Providence,
1967, pp. 288–307.
[Sh]
M. A. Shubin, Pseudodifferential Operators
and Spectral Theory, Springer, 1987.
[Si]
B. Simon, Trace Ideals and their Applications,
LMS Lecture Notes 35, CUP, 1979.
[Wo1]
M. Wodzicki, Spectral asymmetry and zeta functions,
Invent. Math. 66 (1982), 115–135.
[Wo2]
M. Wodzicki, Local invariants of spectral
asymmetry, Invent. Math. 75 (1984), 143–178.
[Wo3]
M. Wodzicki, Non-commutative residue,
Chapter I. Fundamentals, K-Theory, Arithmetic and Geometry,
Springer Lecture Notes 1289, 1987, pp.320–399.
King’s College, London. Email: [email protected]
Bag-of-Features Image Indexing and Classification in Microsoft SQL Server Relational Database
Marcin Korytkowski, Rafał Scherer, Paweł Staszewski, Piotr Woldan
Institute of Computational Intelligence
Czȩstochowa University of Technology
al. Armii Krajowej 36, 42-200 Czȩstochowa, Poland
Email: [email protected], [email protected]
Abstract
This paper presents a novel relational database architecture aimed at visual object classification and retrieval. The framework is based on the bag-of-features image representation model combined with Support Vector Machine classification and is integrated into a Microsoft SQL Server database.
content-based image processing, relational databases, image classification
††publicationid: pubid: 978-1-4799-8322-3/15/$31.00 © 2015 IEEE
I Introduction
Thanks to content-based image retrieval (CBIR) [1][2][3][4][5][6][7][8] we are able to search for similar images and classify them [9][10][11][12][13]. Images can be analyzed based on color representation [14][15][16], textures [17][18][19][20], shape [21][22][23] or edge detectors [24]. Recently, local invariant features have gained a wide popularity [25][26][27][28][29].
The most popular local keypoint detectors and descriptors are SURF [30], SIFT [25] or ORB [31].
To find images similar to a query image, we need to compare all feature descriptors of all images, usually by some distance measure. Such comparison is enormously time-consuming and there is ongoing worldwide research to speed up the process. Yet, the current state of the art in the case of high-dimensional computer vision applications is not fully satisfactory. The literature presents countless methods and variants utilizing, e.g., a voting scheme or histograms of clustered keypoints.
They are mostly based on some form of approximate search.
Recently, the bag-of-features (BoF) approach [32][33][29][34][35] has gained in popularity. In the BoF method, clustered vectors of image features are collected and sorted by the count of occurrence (histograms). All individual descriptors, or approximations of sets of descriptors presented in the histogram form, must be compared. Such calculations are computationally expensive. Moreover, the BoF approach requires redesigning the classifiers when new visual classes are added to the system.
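The BoF pipeline just described (cluster the training descriptors into visual words, then represent each image by its histogram of word occurrences) can be sketched as follows. This is a toy illustration with made-up 2-D "descriptors" and a naive k-means, not the SIFT descriptors or the clustering used in the actual system:

```python
import random

def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def nearest(centers, p):
    return min(range(len(centers)), key=lambda i: sqdist(centers[i], p))

def kmeans(points, k, iters=20):
    # Farthest-point initialization, then Lloyd iterations.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(sqdist(c, p) for c in centers)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[nearest(centers, p)].append(p)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else c
                   for g, c in zip(groups, centers)]
    return centers

def bof_histogram(centers, descriptors):
    # One image -> histogram of visual-word occurrences.
    hist = [0] * len(centers)
    for d in descriptors:
        hist[nearest(centers, d)] += 1
    return hist

rng = random.Random(1)
words = [(0.0, 0.0), (5.0, 5.0), (0.0, 5.0)]  # hypothetical "visual words"
train = [[w[0] + rng.gauss(0, 0.3), w[1] + rng.gauss(0, 0.3)]
         for w in words for _ in range(50)]
dictionary = kmeans(train, k=3)               # the BoF dictionary
query = [[5.0 + rng.gauss(0, 0.3), 5.0 + rng.gauss(0, 0.3)] for _ in range(10)]
hist = bof_histogram(dictionary, query)
print(hist)  # all 10 query descriptors land in a single bin
```

In the real system, such histograms (rather than the raw descriptors) are what the classifier compares, which is why adding a new visual class forces the dictionary and classifiers to be rebuilt.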
The paper deals with a visual query-by-example problem in relational databases. Namely, we developed a system based on Microsoft SQL Server which is able to classify a sample image or to return images similar to it.
Storing huge amounts of undefined and unstructured binary data, and searching and retrieving it quickly and efficiently, is the main challenge for database designers. Examples of such data are images, video files etc. Users of the world's most popular relational database management systems (RDBMS), such as Oracle, MS SQL Server and IBM DB2 Server, are not encouraged to store such data directly in the database files. An example of such an approach is Microsoft SQL Server, where binary data is stored outside the RDBMS and only the information about the data location is stored in the database tables. MS SQL Server utilizes a special field type called FileStream, which integrates the SQL Server database engine with the NTFS file system by storing binary large object (BLOB) data as files in the file system. Microsoft SQL dialect (Transact-SQL) statements can insert, update, query, search, and back up FileStream data. The application programming interface provides streaming access to the data. FileStream uses the operating system cache for caching file data, which helps to reduce any negative effects that FileStream data might have on RDBMS performance.
The FileStream data type is stored as a varbinary(max) column with a pointer to the actual data, which is stored as BLOBs in the NTFS file system. By setting the FileStream attribute on a column and consequently storing BLOB data in the file system, we achieve the following advantages:
•
performance is the same as that of the NTFS file system, and the SQL Server cache is not burdened with the FileStream data,
•
standard SQL statements such as SELECT, INSERT, UPDATE, and DELETE work with FileStream data, while the associated files can still be treated as standard NTFS files.
In the proposed system, large image files are stored in a FileStream field. Unfortunately, despite this technique, existing relational database management systems offer no technology for fast and efficient retrieval of images based on their content.
Standard SQL does not contain commands for handling multimedia, large text objects, and spatial data.
We designed a special type of field, in which a set of keypoints can be stored in an optimal way, as a so-called User-Defined Type (UDT). Along with defining the new type of field, it is necessary to implement methods to compare its content. When designing a UDT, various features must also be implemented, depending on whether the UDT is implemented as a class or a structure, as well as on the format and serialization options. This can be done using one of the supported .NET Framework programming languages, and the UDT can be implemented as a dynamic-link library (DLL) loaded in MS SQL Server. Another major challenge is to create a special database indexing algorithm which would significantly speed up answering SQL queries based on the newly defined field.
As aforementioned, standard SQL does not contain commands for handling multimedia, large text objects and spatial data. Thus, communities that create software for processing such specific data types began to draw up SQL extensions, but these turned out to be incompatible with each other. That problem led to the abandonment of task-specific SQL extensions, and a new concept prevailed, based on the SQL99 libraries of object types intended for specific data-processing applications. The new standard, known as SQL/MM (full name: SQL Multimedia and Application Packages), is based on objects, so the programming library functionality is naturally available in SQL queries by calling library methods. SQL/MM consists of several parts: framework (a general-purpose library), full text (data types for storing and searching large amounts of text), spatial (for processing geospatial data), still image (types for processing images) and data mining (data exploration).
There are also attempts to create SQL extensions using fuzzy logic for building flexible queries. In [36] possibilities of creating flexible queries and queries based on user's examples are presented. It should be emphasized that the literature shows little effort toward creating a general way of querying multimedia data.
The main contribution and novelty of the paper is as follows:
•
We present a novel system for content-based image classification built into a Microsoft SQL Server database,
•
We created a special database indexing algorithm which significantly speeds up answering visual query-by-example SQL queries in relational databases.
The paper is organized as follows. Section II describes the proposed database system. Section III provides simulation results on the PASCAL Visual Object Classes (VOC) 2012 dataset [37].
II System Architecture and Relational Database Structure
Our system, and BoF in general, can work with various image features. In this paper we use SIFT features as an example. To calculate SIFT keypoints we used the OpenCV library. We did not use functions from this library as user-defined functions (UDFs) directly in the database environment because:
1.
User-defined functions can be written only in the .NET Framework version that the MS SQL Server instance itself is built on (e.g. an MS SQL Server release based on .NET 4.0),
2.
Calculations used to find image keypoints are very complex, thus running such computations directly on the database server causes the database engine to become unresponsive.
For the above-mentioned reasons, similarly to the Full Text Search technology, the most time-consuming computations are moved to the operating system as background services based on WCF (Windows Communication Foundation).
The WCF Data Service follows the REST (Representational State Transfer) architectural style, introduced by Roy T. Fielding in his PhD thesis [38]. Thanks to WCF technology, it is relatively easy to expose the proposed solution over the Internet.
To store image local keypoints in the database, we created a user-defined type (the sift_keypoints column in the SIFTS table). These values are not used when classifying new query images; they are stored so that, if we later need to identify a new class of objects in the existing images, we do not have to generate the keypoint descriptors again. The new type was implemented in C# as a CLR class, and only its serialized form is stored in the database.
The database also stores Support Vector Machine classifier parameters in the SVMConfigs table. This approach allows the service to be restarted at any time with the learned parameters: starting the service in the operating system causes the SVM classifiers to be read from the database. The Stats table collects algorithm statistics, the most important being the execution times of consecutive stages of the algorithm. The Images table stores the membership of images in visual classes. The Dictionaries table stores keypoint cluster data; the cluster parameters are kept in the DictionaryData field of a UDT type:
public struct DictionaryData :
    INullable, IBinarySerialize
{
    private bool _null;                     // null flag required by INullable
    public int WordsCount { get; set; }     // number of words in the BoF dictionary
    public int SingleWordSize { get; set; } // descriptor length (128 for SIFT)
    public double[][] Values { get; set; }  // cluster centers
    public override string ToString()
    …
}
The WordsCount variable stores the number of words in the BoF dictionary; the SingleWordSize value depends on the algorithm used to generate image keypoint descriptors and, in the case of the SIFT algorithm, equals 128. The two-dimensional matrix Values stores the cluster centers.
The system operates in two modes:
II-1 Learning Mode
Image keypoint descriptors are clustered by the $k$-means algorithm to build a bag-of-features dictionary. The cluster parameters are stored in DictionaryData variables. Next, image descriptors are created for subsequent images; they can be regarded as histograms of membership of the image's local keypoints to the words of the dictionary. We use the SIFTDetector class from the Emgu CV (http://www.emgu.com) library, calling the method with the following signature: ComputeDescriptorsRaw(Image<Gray, byte> grayScaleImage, Image<Gray, byte> mask, VectorOfKeypoint keypoints).
The obtained descriptors are then stored in the Descriptors table in a field of the following UDT type:
public struct DescriptorData :
    INullable, IBinarySerialize
{
    private bool _null;                  // null flag required by INullable
    public int WordsCount { get; set; }  // number of dictionary words
    public double[] Values { get; set; } // histogram of word memberships
    …
}
Records from this table are used to generate the learning datasets for the SVM classifiers that recognize the various visual classes. After the training phase, the classifier parameters are stored in the SVMConfigs table.
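The learning-mode pipeline described above — a $k$-means dictionary followed by per-image membership histograms — can be sketched as follows. This is illustrative Python with NumPy standing in for the actual C#/OpenCV implementation; the array shapes and the plain k-means loop are assumptions, not the production code:

```python
import numpy as np

def build_dictionary(keypoint_descriptors, words_count, iters=20, seed=0):
    """Cluster local descriptors (n x dim, e.g. 128 for SIFT) into
    `words_count` visual words with a plain k-means loop."""
    rng = np.random.default_rng(seed)
    centers = keypoint_descriptors[
        rng.choice(len(keypoint_descriptors), words_count, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        d = np.linalg.norm(keypoint_descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # recompute centers (keep the old center if a cluster goes empty)
        for k in range(words_count):
            if np.any(labels == k):
                centers[k] = keypoint_descriptors[labels == k].mean(axis=0)
    return centers

def image_descriptor(keypoint_descriptors, centers):
    """Histogram of memberships of local keypoints to dictionary words --
    the per-image descriptor stored in the Descriptors table."""
    d = np.linalg.norm(keypoint_descriptors[:, None] - centers[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centers))
    return hist / hist.sum()  # normalize so keypoint-rich images compare fairly
```

The normalized histogram plays the role of the Values array in DescriptorData, with WordsCount equal to `len(centers)`.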
II-2 Classification Mode
In the classification phase, the proposed system works fully automatically. After an image file is sent to the Images_FT table, a service generating local interest points is launched; in the proposed approach we use SIFT descriptors. Next, the visual descriptors are assigned to the clusters stored in the Dictionaries table, and on this basis the histogram descriptor is created. To determine membership in a visual class, this vector is used as the input for all SVM classifiers obtained in the learning phase. For classification purposes, we extended SQL by defining the GetClassOfImage() method in C# and adding it to the set of user-defined functions. The argument of this method is the file identifier from the FileTable table.
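The final decision step — feeding the histogram descriptor to all trained SVM classifiers and picking the winning class — can be sketched as follows. This is illustrative Python; the linear form of the classifiers and the toy parameter values are assumptions (the actual system reads trained parameters from the SVMConfigs table):

```python
import numpy as np

def classify(descriptor, svm_params):
    """One-vs-rest decision: evaluate every per-class SVM on the
    histogram descriptor and return the class with the highest score.
    `svm_params` maps class name -> (weights w, bias b) of a linear SVM,
    whose decision value is w . x + b."""
    scores = {cls: float(np.dot(w, descriptor) + b)
              for cls, (w, b) in svm_params.items()}
    return max(scores, key=scores.get)

# hypothetical trained parameters for the three VOC classes used in the paper
params = {
    "Bus":   (np.array([ 1.0, -0.5, 0.0]), -0.1),
    "Cat":   (np.array([-0.5,  1.0, 0.2]),  0.0),
    "Train": (np.array([ 0.0,  0.1, 1.0]), -0.2),
}
print(classify(np.array([0.7, 0.2, 0.1]), params))  # → Bus
```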
Microsoft SQL Server constrains the total size of indexed columns to 900 bytes. Therefore, it was not possible to create an index directly on the columns constituting visual descriptors. To allow fast image search in the Descriptors table, we created a comparative_descriptor field that stores the descriptor value hashed with the MD5 algorithm. This allowed creating an index on the new column, so the time to find an image matching the query image was reduced substantially.
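The hashing trick can be sketched as follows. This is illustrative Python; the exact serialization format of the descriptor bytes is an assumption — the point is that a variable-length descriptor maps to a fixed 16-byte key that fits comfortably under the 900-byte index limit and supports exact-match lookup:

```python
import hashlib
import struct

def comparative_descriptor(values):
    """Serialize the histogram descriptor to bytes and hash it with MD5,
    producing a short fixed-length key suitable for an index column."""
    raw = struct.pack(f"<{len(values)}d", *values)  # little-endian doubles
    return hashlib.md5(raw).hexdigest()

# identical descriptors produce identical keys, enabling indexed equality search
key = comparative_descriptor([0.25, 0.5, 0.25])
assert key == comparative_descriptor([0.25, 0.5, 0.25])
assert len(key) == 32  # 16 bytes, hex-encoded
```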
III Numerical Simulations
We tested the proposed method on three classes of visual objects taken from the PASCAL Visual Object Classes (VOC) dataset [37]: Bus, Cat, and Train. We divided these three classes into learning and testing examples; the testing set consists of 15% of the images in the whole dataset. Before the learning procedure, we generated local keypoint vectors for all images from the PASCAL VOC dataset using the SIFT algorithm.
All simulations were performed on a Hyper-V virtual machine running the MS Windows operating system (8 GB RAM, Intel Xeon X5650, 2.67 GHz). The testing set contained only images that had never been presented to the system during the learning process.
The bag-of-features image representation model combined with Support Vector Machine (SVM) classification was run five times for each of the following dictionary sizes: 40, 50, 80, 100, 130, and 150 words. The BoF dictionaries were created in C++, based on the OpenCV library [39].
The results of the BoF and SVM classification on the testing data are presented in Table I. The SQL query responses are nearly real-time even for relatively large image datasets.
IV Conclusion
We presented a method that integrates a relatively fast content-based image classification algorithm with a relational database management system. Namely, we used bag of features, Support Vector Machine classifiers, and special Microsoft SQL Server features, such as user-defined types and CLR methods, to classify and retrieve visual data. Moreover, we created indexes to search for the same query image in large sets of visual records. The described framework allows automatically searching for and retrieving images on the basis of their content using SQL. The SQL responses are nearly real-time even on relatively large image datasets.
The system can be extended to use different visual features or to have a more flexible SQL querying command set.
Acknowledgment
This work was supported by the Polish National Science Centre (NCN) under project number DEC-2011/01/D/ST6/06957.
References
[1]
D. C. Guimarães Pedronette and R. da S. Torres, “A scalable
re-ranking method for content-based image retrieval,” Information
Sciences, vol. 265, pp. 91 – 104, 2014.
[2]
P. Drozda, K. Sopyla, and P. Górecki, “Online crowdsource system
supporting ground truth datasets creation,” in Artificial Intelligence
and Soft Computing - 12th International Conference, ICAISC 2013, Zakopane,
Poland, June 9-13, 2013, Proceedings, Part I, 2013, pp. 532–539.
[3]
T. Kanimozhi and K. Latha, “An integrated approach to region based image
retrieval using firefly algorithm and support vector machine,”
Neurocomputing, vol. 151, Part 3, no. 0, pp. 1099 – 1111, 2015.
[4]
E. Karakasis, A. Amanatiadis, A. Gasteratos, and S. Chatzichristofis, “Image
moment invariants as local features for content based image retrieval using
the bag-of-visual-words model,” Pattern Recognition Letters, vol. 55,
no. 0, pp. 22 – 27, 2015.
[5]
C.-H. Lin, H.-Y. Chen, and Y.-S. Wu, “Study of image retrieval and
classification based on adaptive features using genetic algorithm feature
selection,” Expert Systems with Applications, vol. 41, no. 15, pp.
6611 – 6621, 2014.
[6]
G.-H. Liu and J.-Y. Yang, “Content-based image retrieval using color
difference histogram,” Pattern Recognition, vol. 46, no. 1, pp. 188
– 198, 2013.
[7]
S. Liu and X. Bai, “Discriminative features for image classification and
retrieval,” Pattern Recognition Letters, vol. 33, no. 6, pp. 744 –
751, 2012.
[8]
E. Rashedi, H. Nezamabadi-pour, and S. Saryazdi, “A simultaneous feature
adaptation and feature selection method for content-based image retrieval
systems,” Knowledge-Based Systems, vol. 39, no. 0, pp. 85 – 94,
2013.
[9]
M. Bazarganigilani, “Optimized image feature selection using pairwise
classifiers,” Journal of Artificial Intelligence and Soft Computing
Research, vol. 1, no. 2, pp. 147–153, 2011.
[10]
Y. Chang, Y. Wang, C. Chen, and K. Ricanek, “Improved image-based automatic
gender classification by feature selection,” Journal of Artificial
Intelligence and Soft Computing Research, vol. 1, no. 3, pp. 241–253, 2011.
[11]
B. Karimi and A. Krzyzak, “A novel approach for automatic detection and
classification of suspicious lesions in breast ultrasound images,”
Journal of Artificial Intelligence and Soft Computing Research,
vol. 3, no. 4, pp. 265–276, 2013.
[12]
N. Shrivastava and V. Tyagi, “Content based image retrieval based on relative
locations of multiple regions of interest using selective regions matching,”
Information Sciences, vol. 259, no. 0, pp. 212 – 224, 2014.
[13]
J. Yang, K. Yu, Y. Gong, and T. Huang, “Linear spatial pyramid matching using
sparse coding for image classification,” in Computer Vision and
Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, June 2009, pp.
1794–1801.
[14]
J. Huang, S. Kumar, M. Mitra, W.-J. Zhu, and R. Zabih, “Image indexing using
color correlograms,” in Computer Vision and Pattern Recognition, 1997.
Proceedings., 1997 IEEE Computer Society Conference on, Jun 1997, pp.
762–768.
[15]
S. Kiranyaz, M. Birinci, and M. Gabbouj, “Perceptual color descriptor based on
spatial distribution: A top-down approach,” Image Vision Comput.,
vol. 28, no. 8, pp. 1309–1326, Aug. 2010.
[16]
G. Pass and R. Zabih, “Histogram refinement for content-based image
retrieval,” in Applications of Computer Vision, 1996. WACV ’96.,
Proceedings 3rd IEEE Workshop on, Dec 1996, pp. 96–102.
[17]
T. Chang and C.-C. Kuo, “Texture analysis and classification with
tree-structured wavelet transform,” Image Processing, IEEE
Transactions on, vol. 2, no. 4, pp. 429–441, Oct 1993.
[18]
J. Francos, A. Meiri, and B. Porat, “A unified texture model based on a 2-d
wold-like decomposition,” Signal Processing, IEEE Transactions on,
vol. 41, no. 8, pp. 2665–2678, Aug 1993.
[19]
A. K. Jain and F. Farrokhnia, “Unsupervised texture segmentation using gabor
filters,” Pattern Recognition, vol. 24, no. 12, pp. 1167 – 1186,
1991.
[20]
J. Śmietański, R. Tadeusiewicz, and E. Łuczyńska, “Texture
analysis in perfusion images of prostate cancer – a case study,”
International Journal of Applied Mathematics and Computer Science,
vol. 20, no. 1, pp. 149–156, 2010.
[21]
H. V. Jagadish, “A retrieval technique for similar shapes,” SIGMOD
Rec., vol. 20, no. 2, pp. 208–217, Apr. 1991.
[22]
H. Kauppinen, T. Seppanen, and M. Pietikainen, “An experimental comparison of
autoregressive and fourier-based descriptors in 2d shape classification,”
Pattern Analysis and Machine Intelligence, IEEE Transactions on,
vol. 17, no. 2, pp. 201–207, Feb 1995.
[23]
R. C. Veltkamp and M. Hagedoorn, “State of the art in shape matching,” in
Principles of Visual Information Retrieval, M. S. Lew, Ed. London, UK, UK: Springer-Verlag, 2001, pp.
87–119.
[24]
C. Zitnick and P. Dollar, “Edge boxes: Locating
object proposals from edges,” in
Computer Vision – ECCV 2014, ser.
Lecture Notes in Computer Science, D. Fleet, T. Pajdla, B. Schiele, and
T. Tuytelaars, Eds. Springer
International Publishing, 2014, vol. 8693, pp. 391–405.
[25]
D. G. Lowe, “Distinctive image features from scale-invariant keypoints,”
Int. J. Comput. Vision, vol. 60, no. 2, pp. 91–110, Nov. 2004.
[26]
J. Matas, O. Chum, M. Urban, and T. Pajdla, “Robust wide-baseline stereo from
maximally stable extremal regions,” Image and Vision Computing,
vol. 22, no. 10, pp. 761 – 767, 2004, British Machine Vision Computing 2002.
[27]
K. Mikolajczyk and C. Schmid, “Scale and affine
invariant interest point detectors,”
International Journal of Computer
Vision, vol. 60, no. 1, pp. 63–86, 2004.
[28]
D. Nister and H. Stewenius, “Scalable recognition with a vocabulary tree,” in
Proceedings of the 2006 IEEE Computer Society Conference on Computer
Vision and Pattern Recognition - Volume 2, ser. CVPR ’06. Washington, DC, USA: IEEE Computer Society, 2006, pp.
2161–2168.
[29]
J. Sivic and A. Zisserman, “Video google: a text retrieval approach to object
matching in videos,” in Computer Vision, 2003. Proceedings. Ninth IEEE
International Conference on, Oct 2003, pp. 1470–1477 vol.2.
[30]
H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-up robust features
(surf),” Comput. Vis. Image Underst., vol. 110, no. 3, pp. 346–359,
Jun. 2008.
[31]
E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “Orb: An efficient
alternative to sift or surf,” in Computer Vision (ICCV), 2011 IEEE
International Conference on, Nov 2011, pp. 2564–2571.
[32]
K. Grauman and T. Darrell, “Efficient image matching with distributions of
local invariant features,” in Computer Vision and Pattern Recognition,
2005. CVPR 2005. IEEE Computer Society Conference on, vol. 2, June 2005, pp.
627–634 vol. 2.
[33]
J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman, “Object retrieval
with large vocabularies and fast spatial matching,” in Computer Vision
and Pattern Recognition, 2007. CVPR ’07. IEEE Conference on, June 2007, pp.
1–8.
[34]
S. Voloshynovskiy, M. Diephuis, D. Kostadinov, F. Farhadzadeh, and T. Holotyak,
“On accuracy, robustness, and security of bag-of-word search systems,” in
IS&T/SPIE Electronic Imaging. International Society for Optics and Photonics, 2014, pp. 902 807–902 807.
[35]
J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid, “Local features and
kernels for classification of texture and object categories: A comprehensive
study,” in Computer Vision and Pattern Recognition Workshop, 2006.
CVPRW ’06. Conference on, June 2006, pp. 13–13.
[36]
D. Dubois, H. Prade, and F. Sedes, “Fuzzy logic techniques in multimedia
database querying: a preliminary investigation of the potentials,”
Knowledge and Data Engineering, IEEE Transactions on, vol. 13, no. 3,
pp. 383–392, May 2001.
[37]
M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The
pascal visual object classes (voc) challenge,” International Journal
of Computer Vision, vol. 88, no. 2, pp. 303–338, Jun. 2010.
[38]
R. T. Fielding, “Architectural styles and the design of network-based software
architectures,” Ph.D. dissertation, University of California, Irvine, 2000.
[39]
G. Bradski, “The opencv library,” Doctor Dobbs Journal, vol. 25,
no. 11, pp. 120–126, 2000. |
On the correspondence between D-branes and stationary supergravity solutions
of type II Calabi-Yau compactifications (talk presented
at the Workshop on Strings, Duality and Geometry, CRM Montreal,
March 2000).
Frederik Denef
Department of Mathematics, Columbia University
New York, NY 10027, USA
[email protected]
Abstract:
In this talk, I review how four dimensional stationary
supergravity solutions that are more general than spherically
symmetric black holes emerge naturally in the low energy
description of BPS states in type II Calabi-Yau compactifications.
An explicit construction of multicenter solutions using single
center attractor flows as building blocks is presented, and some
interesting properties of these solutions are examined. We end
with a brief remark on non-BPS configurations.
1 Introduction
Calabi-Yau compactifications of type II string theory, which have
${\mathcal{N}}=2$ residual supersymmetry in four dimensions, are known
to have a moduli dependent spectrum of wrapped BPS D-branes; such
branes, observed as BPS particles in four dimensions, can for
example decay at so-called surfaces of marginal stability, similar
to the well known decays of BPS particles in Seiberg-Witten theory
[1]. In type IIB theory, when the geometric D-brane picture
can be trusted, the mathematical equivalent of existence of a BPS
brane in a certain homology class, is the existence of a special
lagrangian submanifold in that class. In recent years, the latter
problem got quite some attention, though it turned out to be an
extremely difficult issue [2, 3], and the general
existence question remains largely unsolved. In type IIA theory,
the mathematical equivalent is the existence of certain
holomorphic bundles on holomorphic submanifolds, again when the
geometric D-brane picture can be trusted. This problem is better
under control, though our understanding is far from complete.
Existence of BPS states in stringy regimes of the moduli space,
such as the Gepner point of the Quintic, can in favorable
circumstances be tackled from a pure conformal field theory
perspective, building on the work of [4]. Substantial work
in this context has been pioneered in [5] and
significantly extended in [6, 7, 8, 9, 10, 11, 12, 13, 14].
From a complementary, four dimensional low energy point of view,
one expects to have BPS solutions to the supergravity equations of
motion for any BPS state in the spectrum, at least if supergravity
can be trusted. The simplest solutions of this kind are
spherically symmetric black holes. Those were first studied in
${\mathcal{N}}=2$ theories in [15], where it was shown that they
exhibit a remarkable “attractor” feature: the values of the
moduli at the horizon are fixed by the charge of the black hole,
in the sense that they are invariant under continuous changes of
the moduli at infinity. In [16], where this property was
linked to a vast and still largely unexplored treasure of
arithmetical properties, it was noted that the existence of these
solutions is a nontrivial problem, depending strongly on the value
of the charges and vacuum moduli. It is therefore natural to
conjecture [16] a correspondence between the existence of
those supergravity solutions and the existence of BPS D-brane
states in the full string theory.
However, as pointed out in [6, 17], this conjecture
fails in a number of established cases, where the state is known
to exist in string theory, but the corresponding black hole
solution does not exist in the supergravity theory, even in
regimes where supergravity can clearly be trusted. Obviously, from
the physics point of view, this is a major consistency problem.
The solution to this paradox was discovered in [17]:
the restriction to spherically symmetric black holes turned out to
be too narrow. For one, solutions corresponding to branes wrapped
around conifold cycles, though still spherically symmetric, are
not black holes, but rather “empty holes”, as a result of
a mechanism reminiscent of the enhançon mechanism
of [18], but with a core carrying a massless hypermultiplet
instead of a massless vector multiplet with enhanced gauge
symmetry. But more importantly, ${\mathcal{N}}=2$ supergravity
allows for solutions involving mutually nonlocal charges at rest
at a finite equilibrium distance from each other. Those solutions
are in general stationary but non-static, as they can carry a
(quantized) intrinsic angular momentum, much like the
monopole-electron system in ordinary Maxwell theory. And as it
happens, some of the BPS states found in string theory can only be
realized in the low energy theory as such multi-center solutions.
General stationary solutions of four dimensional ${\mathcal{N}}=2$
supergravity were first explored in [19], further analyzed
from a geometrical point of view in [17], and given
a rigorous, systematic treatment, including $R^{2}$ corrections, in
[20].
The multicenter configurations also give a beautiful low energy
picture of what happens at marginal stability: the state can
literally be seen to decay smoothly into its constituents.
Furthermore, Joyce’s stability criterion for special lagrangian
submanifolds [2] is elegantly recovered and generalized,
and a strong similarity to the general $\Pi$-stability criterion
proposed in [10] emerges.
This opens up the exciting possibility that the existence
conjecture of [16], suitably generalized to include
multicenter solutions, should be taken seriously. The consequences
for mathematics and string theory of this correspondence, provided
it is true, are clearly far reaching. For example, it would enable
us to study the D-brane spectrum of compact Calabi-Yau
compactifications (and its mathematical equivalents), in a quite
systematic way, a problem that has been pretty elusive thus far
using other approaches. This issue, for type IIA theory on the
Quintic, will be addressed in a forthcoming paper [21],
mainly from a numerical perspective.
In this talk, I will review how solutions more general than
spherically symmetric black holes arise in the low energy
description of BPS states, focusing on the solutions to the BPS
equations rather than on the technicalities of their derivation,
for which we refer to [17]. A detailed construction
of multicenter solutions from single center flows, a closer
examination of some properties of these solutions and a brief
digression on non-BPS states, extend the results of this
reference.
2 Geometry of IIB/CY compactifications
To establish our notation and setup, let us briefly review the low
energy geometry of type-IIB string theory compactified on a
Calabi-Yau 3-fold. We will always work in the type IIB framework,
but the equivalence with type IIA through mirror symmetry will be
implicitly assumed in the presentation of our examples.
We will follow the manifestly duality invariant formalism
of [16]. Consider type-IIB string theory compactified on a
Calabi-Yau manifold $X$. The four-dimensional low energy theory is
${\mathcal{N}}=2$ supergravity coupled to $n_{v}=h^{1,2}$ massless abelian
vectormultiplets and $n_{h}=h^{1,1}+1$ massless hypermultiplets,
where the $h^{i,j}$ are the Hodge numbers of $X$. The
hypermultiplet fields will play no role in the following and are
set to zero.
The vectormultiplet scalars are given by the complex structure
moduli of $X$, and the lattice of electric and magnetic charges is
identified with $H^{3}(X,\mathbb{Z})$, the lattice of integral harmonic
$3$-forms on $X$. The “total” electromagnetic field strength
${\mathcal{F}}$ is (up to normalisation convention) equal to the type-IIB
self-dual five-form field strength, and is assumed to have values
in $\Omega^{2}(M_{4})\otimes H^{3}(X,\mathbb{Z})$, where $\Omega^{2}(M_{4})$
denotes the space of 2-forms on the four-dimensional spacetime
$M_{4}$. The usual components of the field strength are retrieved by
picking a symplectic basis $\{\alpha^{I},\beta_{I}\}$ of $H^{3}(X,\mathbb{Z})$:
$${\mathcal{F}}=F^{I}\otimes\beta_{I}-G_{I}\otimes\alpha^{I}\,.$$
(1)
A 3-brane wrapped around a cycle Poincaré dual to $\Gamma\in H^{3}(X,\mathbb{Z})$ has electric and magnetic charges equal to its
components with respect to this basis. The total field strength
satisfies the self-duality constraint: ${\mathcal{F}}=*_{10}{\mathcal{F}}\ $,
which translates to electric-magnetic duality in the four
dimensional theory.
The geometry of the vector multiplet moduli space, parametrized
with $n_{v}$ coordinates $z^{a}$, is special Kähler [22]. The
(positive definite) metric
$$g_{a\bar{b}}=\partial_{a}\bar{\partial}_{\bar{b}}{\mathcal{K}}$$
(2)
is derived from the Kähler potential
$${\mathcal{K}}=-\ln\left(i\int_{X}\Omega_{0}\wedge\bar{\Omega}_{0}\right),$$
(3)
where $\Omega_{0}$ is the holomorphic $3$-form on $X$, depending
holomorphically on the complex structure moduli. It is convenient
to introduce also the normalized 3-form
$$\Omega=e^{{\mathcal{K}}/2}\,\Omega_{0}\,.$$
(4)
The “central charge” of $\Gamma\in H^{3}(X,\mathbb{Z})$ is given by
$$Z(\Gamma)\equiv\int_{X}\Gamma\wedge\Omega\equiv\int_{\Gamma}\Omega\,,$$
(5)
where we denoted, by slight abuse of notation, the cycle
Poincaré dual to $\Gamma$ by the same symbol $\Gamma$. Note that
$Z(\Gamma)$ has nonholomorphic dependence on the moduli through
the Kähler potential.
We will make use of the (antisymmetric, topological, moduli
independent) intersection product:
$$\langle\Gamma_{1},\Gamma_{2}\rangle=\int_{X}\Gamma_{1}\wedge\Gamma_{2}=\#(\Gamma_{1}\cap\Gamma_{2})\,.$$
(6)
With this notation, we have for a symplectic basis $\{\alpha^{I},\beta_{I}\}$ by definition $\langle\alpha^{I},\beta_{J}\rangle=\delta^{I}_{J}$. Integrality of this intersection product is
equivalent to Dirac quantization of electric and magnetic charges.
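For instance, expanding two charges in such a basis (the component labels below are purely illustrative),
$$\Gamma_{i}=q_{iI}\,\alpha^{I}+p_{i}^{I}\,\beta_{I}\,,\qquad i=1,2\,,$$
the relation $\langle\alpha^{I},\beta_{J}\rangle=\delta^{I}_{J}$ gives
$$\langle\Gamma_{1},\Gamma_{2}\rangle=q_{1I}\,p_{2}^{I}-p_{1}^{I}\,q_{2I}\,,$$
which is the familiar Dirac pairing of electric and magnetic charge vectors.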
3 BPS equations of motion
3.1 The static, single center, spherically symmetric case
The BPS equations of motion for the static, spherically symmetric
case were derived in [15], and cast in the form of first
order flow equations on moduli space in [23]. We assume a
charge $\Gamma\in H^{3}(X,\mathbb{Z})$ is located at the origin of space.
The spacetime metric is of the form
$$ds^{2}=-e^{2U}dt^{2}+e^{-2U}dx^{i}dx^{i}\,,$$
(7)
with $U$ a function of the radial coordinate distance
$r=|\bf{x}|$, or equivalently of the inverse radial coordinate
$\tau=1/r$. The BPS equations of motion for $U(\tau)$ and the
moduli $z^{a}(\tau)$ are:
$$\partial_{\tau}U=-e^{U}|Z|\,,$$
(8)
$$\partial_{\tau}z^{a}=-2e^{U}g^{a\bar{b}}\,\bar{\partial}_{\bar{b}}|Z|\,,$$
(9)
where $Z=Z(\Gamma)$ is as in (5) and $g_{a\bar{b}}$ as in
(2). A closed expression for the electromagnetic
field, given the solutions of these flow equations, can be found
e.g. in [17].
This is the form of the BPS equations found in [23]. An
alternative form of the equations, essentially equivalent to those
found in [15], is:
$$2\,\partial_{\tau}\left[e^{-U}\mathop{\rm Im}\nolimits\left(e^{-i\alpha}\Omega\right)\right]=-\Gamma\,,$$
(10)
where $\alpha=\arg Z$, which can be shown to be the phase of the
conserved supersymmetry (see e.g. [20]). Note that this
nice compact equation actually has $2n_{v}+2$ real components,
corresponding to taking intersection products with the $2n_{v}+2$
elements of a basis $\{C_{L}\}_{L}$ of $H^{3}(X,\mathbb{Z})$:
$$2\,\partial_{\tau}\left[e^{-U}\mathop{\rm Im}\nolimits\left(e^{-i\alpha}Z(C_{L})\right)\right]=-\langle C_{L},\Gamma\rangle\,,$$
(11)
One component is redundant, since taking the intersection product
of (10) with $\Gamma$ itself produces trivially $0=0$.
This leaves $2n_{v}+1$ independent equations, matching the number
of real variables $\{U,\mathop{\rm Re}\nolimits z^{a},\mathop{\rm Im}\nolimits z^{a}\}$.
Note that alternatively, we could have left $\alpha$ as an
arbitrary field instead of putting it equal to $\arg Z$. Then the
previously redundant component of the equation gives $\mathop{\rm Im}\nolimits(e^{-i\alpha}Z)=0$, hence $\alpha=\arg Z$ or $\alpha=\arg(-Z)$. The
latter possibility is automatically excluded however, since it
gives rise to a highly singular, unphysical solution. Indeed, this
case corresponds to (8)-(9) with the sign of the
right hand sides reversed. Then $|Z|$ and $e^{U}$ would be
increasing functions in $\tau$, with $e^{U}$ satisfying the estimate
$e^{U}\geq\frac{e^{U(\tau_{0})}}{1-e^{U(\tau_{0})}|Z(\tau_{0})|(\tau-\tau_{0})}$ for any $\tau_{0}$ and $\tau>\tau_{0}$. Since this
diverges at finite $\tau$, the solution breaks down. Note that
this candidate solution in any case would have had negative ADM
mass and be gravitationally repulsive — physically quite
undesirable properties. So only the possibility $\alpha=\arg Z$
remains, bringing us back to the original setup of the equations.
Since the right hand side of (11) consists of
$\tau$-independent integer charges, (10) readily
integrates to
$$2\,e^{-U}\mathop{\rm Im}\nolimits\left(e^{-i\alpha}\Omega\right)=-\Gamma\,\tau+2\mathop{\rm Im}\nolimits\left(e^{-U}e^{-i\alpha}\Omega\right)_{\tau=0}.$$
(12)
For asymptotically flat space, $U_{\tau=0}=U_{r=\infty}=0$.
In contrast to [15] and most of the older attractor
literature, we prefer to work with the normalized periods
and an explicit phase factor $e^{i\alpha}$. The difference
amounts to nothing more than a normalization gauge choice, but it
proves to be conceptually more transparent, and
numerically far more convenient, to make the above
choice.
The result (12) is very powerful, as it solves
in principle the equations of motion. Of course, finding the
explicit flows in moduli space from (12) requires
inversion of the periods to the moduli, which in general is not
feasible analytically. However, in large complex structure
approximations or numerically for e.g. the quintic, this turns
out to be possible.
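The qualitative behavior of the flow equations (8)–(9) can be illustrated with a toy one-modulus model. The following Python sketch is purely illustrative: the "central charge" $|Z(z)|=\sqrt{|z-a|^{2}+m^{2}}$ and the flat moduli-space metric ($g=1$) are ad hoc assumptions, not derived from any Calabi-Yau, chosen only so that $|Z|$ has a nonzero minimum at $z=a$:

```python
import numpy as np

def attractor_flow(z0, a=0.0, m=0.5, tau_max=50.0, dt=0.01):
    """Euler-integrate the toy flow
        dU/dtau = -e^U |Z|,   dz/dtau = -e^U (z - a)/|Z|,
    where |Z(z)| = sqrt(|z-a|^2 + m^2) plays the role of the central
    charge and the moduli-space metric is taken flat (g = 1)."""
    z, U = complex(z0), 0.0
    zs, Zs = [z], [np.hypot(abs(z - a), m)]
    for _ in range(int(tau_max / dt)):
        Zabs = np.hypot(abs(z - a), m)
        U += dt * (-np.exp(U) * Zabs)          # eq. (8)
        z += dt * (-np.exp(U) * (z - a) / Zabs)  # eq. (9) for this toy |Z|
        zs.append(z)
        Zs.append(np.hypot(abs(z - a), m))
    return zs, Zs

zs, Zs = attractor_flow(1.0 + 1.0j)
# |Z| decreases monotonically along the flow toward its minimum m,
# and the modulus is attracted toward z = a regardless of z0
assert all(b <= x + 1e-12 for x, b in zip(Zs, Zs[1:]))
assert abs(zs[-1]) < abs(zs[0])
```

The run exhibits the attractor mechanism discussed below: $|Z|$ flows monotonically to its minimum, and the endpoint in moduli space is independent of small changes in the initial value $z_{0}$.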
There is one catch to (12) though, namely, as shown
in [17], it is not valid for a vanishing cycle
$\Gamma$, at values of the moduli where it vanishes. Indeed,
looking for example at a one-modulus case (where we take
the modulus $z$ to be the unnormalized holomorphic $\Omega_{0}$
period, say) near a conifold point, we see that while
(8)-(9) allows for solutions with constant $z$ and
$U$ at the conifold point (since the inverse metric becomes zero
there), this is not the case for (12). The correct
equation in this case is (8)-(9). This subtlety is
important in the discussion of “empty hole” solutions (see
below), and it thus eliminates some confusion in the older
attractor literature, where solutions with charge corresponding to
a conifold cycle seemed to emerge that were very unphysical (naked
curvature singularities, gravitational repulsion, etc.). Such
pathological behavior is indeed what one gets when naively
applying (12) to those cases.
Finally, it was also observed in [17] that the BPS
equations of motion can be interpreted as geodesic equations for
stretched strings with varying tension in a certain curved
background, making contact, at least in a rigid (gravity
decoupling) limit of the theory, with the “3-1-7” brane picture
of BPS states in ${\mathcal{N}}=2$ quantum field theory
[24, 25].
3.2 The general stationary case
The BPS equations of motion for the general case, though of course
more complicated, are quite similar in structure to those of the
single center case. For a derivation, we refer to
[17] and [20]. In the latter reference, the
equations below were shown to describe the most general stationary
BPS solutions, provided a certain ansatz was made for the
embedding of the residual supersymmetry. More general solutions
could exist, but a fully general analysis proved to be too
cumbersome to be carried out thus far.
The metric will be of the form
$$ds^{2}=-e^{2U}\left(dt+\omega_{i}dx^{i}\right)^{2}+e^{-2U}dx^{i}dx^{i}\,,$$
(13)
where $U$ and $\omega$, together with the moduli fields $z^{a}$, are
time-independent solutions of the following BPS equations,
elegantly generalizing (12):
$$2\,e^{-U}\mathop{\rm Im}\nolimits\left(e^{-i\alpha}\Omega\right)=H\,,$$
(14)
$$\boldsymbol{*}d\omega=\langle dH,H\rangle\,,$$
(15)
with $H({\bf x})$ an $H^{3}(X)$-valued harmonic function (on flat
coordinate space $\mathbb{R}^{3}$), and $\boldsymbol{*}$ the flat Hodge star operator
on $\mathbb{R}^{3}$. For $N$ charges $\Gamma_{p}$ located at coordinates
${\bf x}_{p}$, $p=1,\ldots,N$, in asymptotically flat space, one has:
$$H=-\sum_{p=1}^{N}\Gamma_{p}\,\tau_{p}\,\,+\,2\mathop{\rm Im}\nolimits\left(e^{-i\alpha}\Omega\right)_{r=\infty}\,,$$
(16)
with $\tau_{p}=1/|{\bf x}-{\bf x}_{p}|$.
Note that at large $r$, equation (14) reduces to lowest
order in $1/r$ to the spherically symmetric case
(12). The phase $\alpha$ in (14) is to be
considered as an a priori independent unknown field here. However,
a reasoning similar to the comments under (11) shows
that it must be asymptotically equal to the phase of $Z(\sum_{p}\Gamma_{p})$ for $|{\bf x}|\to\infty$ (this will be made more
explicit in section 4.3).
The electromagnetic field is again determined algebraically from
the solutions of the BPS equations for $U$, $\omega$ and $z^{a}$. We
refer to [17] (or [20]) for the explicit
expressions.
4 Solutions
4.1 Static single center case: black, empty and no holes
In asymptotically flat space (which we will assume unless stated
otherwise), all well-behaved solutions to
(8)–(9) saturate the BPS bound $M_{ADM}=|Z_{\tau=0}|$. Another simple universal characteristic is that the
“size” of the solutions scales proportional to the charge
number, due to a trivial rescaling symmetry of the equations of
motion. An important and less trivial general property is the
following. Equation (9) implies $\partial_{\tau}|Z|=-4e^{U}g^{a\bar{b}}\,\partial_{a}|Z|\,\bar{\partial}_{\bar{%
b}}|Z|\leq 0$. Therefore the flows
in moduli space for increasing $\tau$, given by the BPS equations,
will converge to minima of $|Z|$, and the corresponding moduli
values are generically invariant under continuous deformations of
the moduli at spatial infinity, so they only depend on the charge
$\Gamma$, a phenomenon referred to as the attractor mechanism. One
distinguishes three cases [16], depending on the value of the
minimum of $|Z|$ (zero or nonzero) and its position in moduli
space (at singular or regular point):
1.
nonzero minimal $|Z|$. This yields a regular BPS black
hole. The near-horizon geometry is $AdS_{2}\times S^{2}$, and the
horizon area equals $4\pi|Z|^{2}_{min}$. From the BPS equations
of motion, one directly deduces that at the horizon,
$2\mathop{\rm Im}\nolimits(\bar{Z}\Omega)=-\Gamma$. This equation, determining
(locally) the position of the attractor point in moduli space, is
often called the attractor equation.
2.
zero minimal $|Z|$ at a regular point in moduli space. In
this case, the BPS equations do not have a solution [16]:
$Z=0$ will be reached by the flow at a finite radius, beyond which the
solution cannot be continued. This is consistent with physical
expectations: in a vacuum close to a regular zero of $Z$ in moduli
space, the state cannot exist, as it would imply the existence of
a massless particle at the zero locus, which in turn should create a
singularity in moduli space [26], in contradiction with the
supposed regularity of the point under consideration.
3.
zero minimal $|Z|$ at a singular or boundary
point in moduli space. In
this case, the BPS equations may or may not have a solution. In
the case of a conifold cycle for example, the equations do have a
solution, describing an “empty hole” [17]: again,
the zero of $Z$ (i.e. the conifold locus in moduli space)
is reached at finite radius, but now, as mentioned at the end of
section 3.1, the solution can be
continued in a once continuously differentiable ($C^{1}$) way,
simply as flat space (i.e. constant $U$) with the
moduli fixed at the conifold locus. This is illustrated in fig. 1. Inside this core, a test
conifold particle would be massless, and the charge source
becomes completely delocalized. The latter is illustrated for
example by the fact that the core radius of an empty hole
increases when the background moduli approach the conifold point
(since the flow reaches the attractor point “earlier” in
$\tau$). Therefore, if we let another conifold particle approach our
initial empty hole, the radius of that particle will increase,
eventually smoothly “melting” into the original core (this process
can in fact be analyzed quantitatively by considering multicenter
solutions with parallel charges).
All this is quite similar to the
enhançon mechanism of [18], the main difference
being that it is a hypermultiplet becoming massless in the core
here instead of a vector multiplet. The analog of the
unphysical, naive “repulson” solution of [18] is the
naive (and wrong) solution one would get by continuing
(12) inside the core.
Of course, from the full
string theoretic point of view, we cannot necessarily
trust the usual low energy supergravity lagrangian all the way close to
the conifold
locus, because in principle there is an additional (nearly) massless field to
be included. However, within the four dimensional supergravity
theory by itself, the empty hole solutions are perfectly well behaved, and
exhibit some properties that are physically very pleasing for such
states [17], such as the absence of a horizon,
and slow motion scattering (probably) without the coalescence effect typical
of black holes [27]. Indeed, one expects 3-branes wrapped
around a conifold cycle to behave like elementary particles that
can be consistently decoupled from gravity, rather than as black
holes, and one does not expect bound states of several copies of
such branes [26]. It is quite nice to see this emerging from the low
energy description here.
It would perhaps be interesting to find out whether empty holes, like
their black hole cousins [28], also have some sort of
a Maldacena dual [29] QFT description.
4.2 Configurations with uniformly charged spherical shells
4.2.1 Static equilibrium and marginal stability surfaces
There is a relatively simple but nontrivial generalization of
these spherically symmetric solutions, which in the end will
provide one way out of the paradoxes mentioned in the
introduction, namely configurations involving one or more
uniformly charged spherical shells (see fig. 2). In
[17] it was explained how such configurations can
arise naturally in a physical process.
The BPS solutions of the field equations between two shells are
identical to the usual spherically symmetric ones, with the
appropriate enclosed charge substituted, and the solutions in
adjacent regions matched by continuity. However, to get a complete
(and stable) BPS configuration, the various energy contributions
(energy stored in fields and “bare” mass of the shells) should
precisely add up to $|Z(\Gamma)|_{r=\infty}$, with $\Gamma$ the
total charge. Let us for example consider the one shell case of
fig. 2. Denote the radius of the shell by
$r_{\mathrm{ms}}$. The energy in the bulk fields outside the
$\Gamma_{2}$-shell can be seen to be
$E_{\mathrm{out}}=|Z(\Gamma)|_{r=\infty}-(e^{U}|Z(\Gamma)|)_{r=r_{\mathrm{ms}}}$, with
$\Gamma=\Gamma_{1}+\Gamma_{2}$. The bare energy of the shell itself is
$E_{\mathrm{shell}}=(e^{U}|Z(\Gamma_{2})|)_{r_{\mathrm{ms}}}$. The
energy inside the shell is $E_{\mathrm{in}}=(e^{U}|Z(\Gamma_{1})|)_{r_{\mathrm{ms}}}$. So the total energy is
$$E_{\mathrm{tot}}=|Z(\Gamma)|_{\infty}+\left(e^{U}(|Z(\Gamma_{1})|+|Z(\Gamma_{2%
})|-|Z(\Gamma_{1})+Z(\Gamma_{2})|)\right)_{r_{\mathrm{ms}}}.$$
(17)
To saturate the BPS bound, the second term must vanish. This is
the case if and only if the phases of $Z(\Gamma_{1})$ and
$Z(\Gamma_{2})$ are equal for the values of the moduli at
$r=r_{\mathrm{ms}}$, that is, if the flow in moduli space given by
the solution crosses a surface of $(\Gamma_{1},\Gamma_{2})$ marginal
stability at $r=r_{\mathrm{ms}}$ (explaining the subscript “ms”
for this radius).
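The vanishing condition on the second term of (17) can be made concrete with a small numerical sketch (Python, with hypothetical values of $e^{U}$ and of the central charges at the shell):

```python
def shell_energy_excess(eU_ms, Z1_ms, Z2_ms):
    """Second term of (17): e^U (|Z1| + |Z2| - |Z1 + Z2|), evaluated at
    r = r_ms.  By the triangle inequality it is >= 0, and it vanishes
    exactly when Z1 and Z2 have equal phases there, i.e. at marginal
    stability.  All inputs are hypothetical illustrative values."""
    return eU_ms * (abs(Z1_ms) + abs(Z2_ms) - abs(Z1_ms + Z2_ms))
```

With aligned phases (e.g. $Z_{1}=1+i$, $Z_{2}=2+2i$) the excess vanishes and the BPS bound is saturated; any phase mismatch makes it strictly positive.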
Because of the BPS condition, these configurations can be expected
to be stable. To verify this, one can compute the force potential
$W$ on a test particle of charge $\Gamma_{t}$ at rest in the
background (BPS) field of a charge $\Gamma_{0}$, starting from the
DBI+WZ action for a D-brane in an external field
[17]. The result is:
$$W(r)=\left.2\,e^{U}\,|Z(\Gamma_{t})|\,\sin^{2}(\frac{\alpha_{t}-\alpha_{0}}{2}%
)\,\,\right|_{r}\,,$$
(18)
where $\alpha_{i}=\arg Z(\Gamma_{i})$. This potential is everywhere
nonnegative, and acquires a zero minimum when
$\alpha_{t}(r)=\alpha_{0}(r)$, that is, indeed, at marginal stability.
A specific example is shown in fig. 3.
4.2.2 Closed expression for equilibrium distance, and existence of solutions
A closed expression for the equilibrium radius $r_{\mathrm{ms}}$
can be extracted from the integrated flow
equation (12) for the fields outside the shell.
Taking the intersection product of $\Gamma_{1}$ with this equation
gives, denoting $Z(\Gamma_{i})$ in short as $Z_{i}$:
$$2\mathop{\rm Im}\nolimits(e^{-U}e^{-i\alpha}Z_{1})=-\langle\Gamma_{1},\Gamma%
\rangle\,\tau+2\,\mathop{\rm Im}\nolimits(e^{-i\alpha}Z_{1})_{\tau=0}\,.$$
(19)
At $1/\tau=r=r_{\mathrm{ms}}$, the left-hand side is zero, so
$$r_{\mathrm{ms}}=\frac{\langle\Gamma_{1},\Gamma\rangle}{2\,\mathop{\rm Im}%
\nolimits(e^{-i\alpha}Z_{1})_{r=\infty}}\,.$$
(20)
Using $e^{i\alpha}=Z/|Z|$ with $Z=Z_{1}+Z_{2}$ and $\langle\Gamma_{1},\Gamma\rangle=\langle\Gamma_{1},\Gamma_{2}\rangle$, this
can be written more symmetrically as
$$r_{\mathrm{ms}}=\frac{1}{2}\langle\Gamma_{1},\Gamma_{2}\rangle\left.\frac{|Z_{%
1}+Z_{2}|}{\mathop{\rm Im}\nolimits(\bar{Z_{2}}Z_{1})}\right|_{r=\infty}.$$
(21)
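Evaluating (21) numerically is immediate; the sketch below (Python, with made-up asymptotic central charges and intersection number) also illustrates that the expression is symmetric under exchange of the two constituents:

```python
def r_ms(I12, Z1, Z2):
    """Equilibrium radius (21):
       r_ms = (1/2) <Gamma_1, Gamma_2> |Z1 + Z2| / Im(conj(Z2) * Z1),
    with Z1, Z2 the central charges at spatial infinity and
    I12 = <Gamma_1, Gamma_2> the intersection product.  The numerical
    inputs used below are hypothetical."""
    return 0.5 * I12 * abs(Z1 + Z2) / (Z2.conjugate() * Z1).imag
```

Exchanging the constituents flips the signs of both the intersection product and the imaginary part, so $r_{\mathrm{ms}}$ is unchanged; existence of the composite requires the result to be positive.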
Such composite configurations will not exist for all possible
charge combinations ($\Gamma_{1},\Gamma_{2}$) in a given vacuum. For
example, a necessary condition for existence is obviously
$r_{\mathrm{ms}}>0$, with $r_{\mathrm{ms}}$ given by charge and
vacuum data as in (21).$^{4}$ ($^{4}$In particular, $\Gamma_{1}$ and $\Gamma_{2}$ should be mutually nonlocal.)
This is not a sufficient condition however. For
example, the flow could hit a zero before it reaches the surface
of $(\Gamma_{1},\Gamma_{2})$ marginal stability. Or the unique point
in the flow where the left-hand side of (19) is zero,
could correspond to a point in moduli space where $Z_{1}$ and $Z_{2}$
have opposite instead of equal phases. Furthermore, of
course, $\Gamma_{2}$ has to exist as a BPS state for the values of
the moduli at $r_{\mathrm{ms}}$, and so should the solution inside the
shell.
4.3 General, multicenter, stationary case
The above spherical solutions, while interesting and suggestive,
are still not at the same level as genuine black hole soliton
solutions of supergravity, in the sense that we explicitly added a
smeared out charge source with a nonvanishing bare mass
contribution to the total energy. On the other hand, the
expression (18) for the potential of a test particle
suggests the existence of truly solitonic BPS solutions with only
point charges, located at equilibrium distance ($r_{\mathrm{ms}}$
in fig. 2) from each other. Such solutions, in the
limit of a large number of charges positively proportional to
$\Gamma_{2}$, evenly distributed over a sphere at
$r=r_{\mathrm{ms}}$ from a black hole center with charge
$\Gamma_{1}$, can be expected to approach the spherically symmetric
case away from $r=r_{\mathrm{ms}}$. Sufficiently close to the
$\Gamma_{2}$ charges on the other hand, the solution can be expected
to approach a pure $\Gamma_{2}$ black hole solution.
To address this problem quantitatively, we should look for solutions
to the general multicenter BPS equations given by
(14)–(16). Because of the remark in footnote
4, we expect the relevant solution to involve mutually
nonlocal charges. The latter complicates the situation
considerably, since such configurations will in general not be
static, because the right hand side of (15) is nonvanishing
and hence $\omega$ cannot be gauged away.
4.3.1 Properties of multicenter solutions with mutually nonlocal charges
Assuming we have a solution to those equations, let us see what
properties we can deduce. A first observation is that there will
be constraints on the positions of the charges. Indeed, acting
with ${\bf d}\boldsymbol{*}$ on equation (15) gives
$$0=\langle\Delta H,H\rangle\,,$$
(22)
with $\Delta$ the (flat) Laplacian on $\mathbb{R}^{3}$, so,
using (16) and $\Delta\tau_{p}=-4\pi\delta^{3}({\bf x}-{\bf x}_{p})$, we find that for all $p=1,\dots,N$:
$$\sum_{q=1}^{N}\frac{\langle\Gamma_{p},\Gamma_{q}\rangle}{|{\bf x}_{p}-{\bf x}_%
{q}|}=2\,\mathop{\rm Im}\nolimits\left(e^{-i\alpha}Z(\Gamma_{p})\right)_{r=%
\infty}.$$
(23)
Note that the full moduli space of solutions to the constraints
(23) will have a fairly complicated structure.
However, in the particular case of one source with charge
$\Gamma_{1}$ at ${\bf x}=0$ and $M$ sources with charges positively
proportional to $\Gamma_{2}$ at positions ${\bf x}_{p}$, the constraints
simplify to
$$|{\bf x}_{p}|=\frac{\langle\Gamma_{2},\Gamma_{1}\rangle}{2\,\mathop{\rm Im}%
\nolimits(e^{-i\alpha}Z(\Gamma_{2}))_{r=\infty}}\,,$$
(24)
which is, as expected, precisely the equilibrium distance
$r_{\mathrm{ms}}$ found in the spherical shell picture,
equation (21).
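The constraints (23) are straightforward to test numerically. In the Python sketch below, the charge lattice, symplectic pairing, and right-hand-side data are all hypothetical; for two mutually nonlocal centers the constraint simply fixes the separation, cf. (24):

```python
import numpy as np

def sympl(G1, G2):
    """Intersection product on H^3(X) in a (hypothetical) Darboux basis:
    <(a, b), (c, d)> = a.d - b.c."""
    n = len(G1) // 2
    return float(np.dot(G1[:n], G2[n:]) - np.dot(G1[n:], G2[:n]))

def constraint_residuals(Gammas, centers, rhs):
    """LHS minus RHS of (23) at each center p, where
    rhs[p] = 2 Im(e^{-i alpha} Z(Gamma_p)) at infinity; an allowed
    configuration makes every residual vanish."""
    res = []
    for p, (Gp, xp) in enumerate(zip(Gammas, centers)):
        lhs = sum(sympl(Gp, Gq) / np.linalg.norm(xp - xq)
                  for q, (Gq, xq) in enumerate(zip(Gammas, centers)) if q != p)
        res.append(float(lhs - rhs[p]))
    return res

# Two mutually nonlocal centers with <G1,G2> = 1 and rhs = (1/2, -1/2):
# (23) then fixes the separation to |x1 - x2| = 2, cf. (24).
G1, G2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x1, x2 = np.zeros(3), np.array([2.0, 0.0, 0.0])
print(constraint_residuals([G1, G2], [x1, x2], [0.5, -0.5]))  # -> [0.0, 0.0]
```

Moving the second center off the equilibrium separation makes the residuals nonzero, reflecting that (23) is a genuine constraint on the positions.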
Incidentally, by summing equation (23) over all $p$,
one gets $\mathop{\rm Im}\nolimits(e^{-i\alpha}Z(\Gamma))_{\infty}=0$, with
$\Gamma=\sum_{p}\Gamma_{p}$. On the other hand, by taking the
intersection product of (14) with any $\Gamma_{p}$, and using
(23), one sees that in the limit ${\bf x}\to{\bf x}_{p}$,
that is, $\tau_{p}\to\infty$ and $\tau_{q}\to 1/|{\bf x}_{p}-{\bf x}_{q}|$ for
$q\neq p$, one has $\mathop{\rm Im}\nolimits(e^{-i\alpha}Z(\Gamma_{p}))\to 0$.
Therefore we have
$$\alpha_{|{\bf x}|=\infty}=\arg Z(\Gamma)_{\infty}\qquad\mbox{ and }\qquad%
\alpha_{{\bf x}={\bf x}_{p}}=\arg Z(\Gamma_{p})_{{\bf x}_{p}}\,.$$
(25)
As argued under (11), the opposite sign for
$e^{i\alpha}$ is to be excluded, as it gives an unphysical and
severely singular negative mass solution, corresponding to a flow
in the “wrong direction” in moduli space [17].
Note that this also implies that very far from all charges, as
well as very close (in terms of coordinate distance) to any one of
them, the solution will approach the single center case. Thus, if
the solution exists, we can expect its image in moduli space to
look like a fattened, “split” flow, as sketched in fig. 4. We will come back to this point in much more
detail in section 4.3.2.
A second property that can be deduced directly from the equations
is the total angular momentum of the solution. It is well known
from ordinary Maxwell electrodynamics that multicenter
configurations with mutually non-local charges (e.g. the
monopole-electron system) can have intrinsic angular momentum even
when the particles are at rest. The same turns out to be true
here.
We define the angular momentum vector ${\bf J}$ from the asymptotic
form of the metric (more precisely of $\omega$) as [31]
$$\omega_{i}=2\,\epsilon_{ijk}\,J^{j}\,\frac{x^{k}}{r^{3}}+O\left(\frac{1}{r^{3}%
}\right)\quad\mbox{for }r\longrightarrow\infty\,.$$
(26)
Plugging this expression in (15) and using (16)
and (23), we find, after some work,
$${\bf J}=\frac{1}{2}\sum_{p<q}\langle\Gamma_{p},\Gamma_{q}\rangle\,{{\bf e}}_{%
pq}\,,$$
(27)
where ${{\bf e}}_{pq}$ is the unit vector pointing from ${\bf x}_{q}$ to
${\bf x}_{p}$:
$${{\bf e}}_{pq}=\frac{{\bf x}_{p}-{\bf x}_{q}}{|{\bf x}_{p}-{\bf x}_{q}|}\,.$$
(28)
Just like in ordinary electrodynamics, this is a “topological”
quantity: it is invariant under continuous deformations of the
solution, and quantized in half-integer units (more precisely,
when all charges are on the $z$-axis, $2J_{z}\in\mathbb{Z}$). The
appearance of intrinsic configurational angular momentum implies
that quantization of these composites will have some non-trivial
features. In particular, when many particles are involved, the
ground state will presumably be highly degenerate.
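Formula (27) can be evaluated directly; the following Python sketch uses a hypothetical rank-two charge lattice and reproduces the familiar half-integer angular momentum of a monopole-electron-like pair on the $z$-axis:

```python
import numpy as np

def sympl(G1, G2):
    """Hypothetical symplectic intersection pairing:
    <(a, b), (c, d)> = a*d - b*c."""
    return float(G1[0] * G2[1] - G1[1] * G2[0])

def angular_momentum(Gammas, centers):
    """Intrinsic angular momentum (27)-(28):
    J = (1/2) sum_{p<q} <Gamma_p, Gamma_q> e_pq."""
    J = np.zeros(3)
    for p in range(len(Gammas)):
        for q in range(p + 1, len(Gammas)):
            e_pq = centers[p] - centers[q]
            J += 0.5 * sympl(Gammas[p], Gammas[q]) * e_pq / np.linalg.norm(e_pq)
    return J

# "Monopole" and "electron" on the z-axis, <G1,G2> = 1: J_z = 1/2.
G1, G2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
J = angular_momentum([G1, G2], [np.array([0.0, 0.0, 1.0]), np.zeros(3)])
```

Moving the centers along the axis leaves $J$ unchanged, illustrating the topological nature of (27); with both charges on the $z$-axis, $2J_{z}=\langle\Gamma_{1},\Gamma_{2}\rangle\in\mathbb{Z}$.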
4.3.2 Construction and existence of solutions
For simplicity, we will focus here on configurations with only two
different kinds of charges $\Gamma_{1},\Gamma_{2}$, each distributed
over $N$ centers ${\bf x}_{1,a}$, resp. ${\bf x}_{2,a}$, $a=1,\ldots,N$. We
take $\Gamma_{1}$ and $\Gamma_{2}$ to be mutually nonlocal, $\langle\Gamma_{1},\Gamma_{2}\rangle\neq 0$. Centers of equal charge may
coincide.
Then we can write the harmonic function $H$ of (16) as
$$H=-\Gamma_{1}\,V_{1}({\bf x})-\Gamma_{2}\,V_{2}({\bf x})\,\,+\,2\mathop{\rm Im%
}\nolimits\left(e^{-i\alpha}\Omega\right)_{r=\infty}\,,$$
(29)
with
$$V_{i}({\bf x})=\sum_{a=1}^{N}\frac{1}{|{\bf x}-{\bf x}_{i,a}|}\,,$$
(30)
and (23) becomes:
$$V_{12}\equiv V_{1}({\bf x}_{2,b})=V_{2}({\bf x}_{1,a})=\frac{2}{\langle\Gamma_%
{1},\Gamma_{2}\rangle}\left(\frac{\mathop{\rm Im}\nolimits(Z_{1}\bar{Z_{2}})}{%
|Z_{1}+Z_{2}|}\right)_{r=\infty}.$$
(31)
Taking the intersection product of (14) with the source
charges $\Gamma_{1}$, $\Gamma_{2}$ and with a basis
$\{\Gamma^{\perp}_{L}\}_{L}$ of the vector space spanned by the elements
of $H^{3}(X,\mathbb{Z})$ which are local w.r.t. $\Gamma_{1},\Gamma_{2}$ (i.e.,
they have zero intersection with both $\Gamma_{1}$ and $\Gamma_{2}$),
and using (31), we get the equations
$$\displaystyle 2\,e^{-U}\mathop{\rm Im}\nolimits[e^{-i\alpha}Z_{1}]$$
$$\displaystyle=$$
$$\displaystyle-\langle\Gamma_{1},\Gamma_{2}\rangle\,(V_{2}({\bf x})-V_{12})\,,$$
(32)
$$\displaystyle 2\,e^{-U}\mathop{\rm Im}\nolimits[e^{-i\alpha}Z_{2}]$$
$$\displaystyle=$$
$$\displaystyle-\langle\Gamma_{2},\Gamma_{1}\rangle\,(V_{1}({\bf x})-V_{12})\,,$$
(33)
$$\displaystyle 2\,e^{-U}\mathop{\rm Im}\nolimits[e^{-i\alpha}Z^{\perp}_{L}]$$
$$\displaystyle=$$
$$\displaystyle 2\mathop{\rm Im}\nolimits[e^{-i\alpha}Z^{\perp}_{L}]_{r=\infty}%
\,(=\mbox{const.})\,.$$
(34)
This is a set of $2n+2$ independent equations, equivalent to
(14), for $2n+2$ variables, where $n$ is the number of
moduli.
Similarly, the second BPS equation (15) becomes
$$\displaystyle\boldsymbol{*}{\bf d}\,\omega$$
$$\displaystyle=$$
$$\displaystyle\langle\Gamma_{1},\Gamma_{2}\rangle[(V_{2}-V_{12}){\bf d}V_{1}-(V%
_{1}-V_{12}){\bf d}V_{2}]$$
(35)
$$\displaystyle=$$
$$\displaystyle\langle\Gamma_{1},\Gamma_{2}\rangle\,(V_{1}-V_{12})(V_{2}-V_{12})%
\,{\bf d}\,\ln\left(\frac{V_{1}-V_{12}}{V_{2}-V_{12}}\right)\,.$$
(36)
We define two (local) space coordinate functions, $t$ and
$\theta$, as follows:
$$V_{1}({\bf x})-V_{12}=t\cos\theta\quad;\quad V_{2}({\bf x})-V_{12}=t\sin\theta\,,$$
(37)
with $t>0$. So
$$\displaystyle t$$
$$\displaystyle=$$
$$\displaystyle\sqrt{(V_{1}-V_{12})^{2}+(V_{2}-V_{12})^{2}}\,,$$
(38)
$$\displaystyle\tan\theta$$
$$\displaystyle=$$
$$\displaystyle\frac{V_{2}-V_{12}}{V_{1}-V_{12}}\,.$$
(39)
To get a full (local) coordinate system, one of course has to
choose a third coordinate function, but we leave this choice
arbitrary here. Note that at spatial infinity, $t=\sqrt{2}\,V_{12}$ and
$\theta=-3\pi/4$; at any of the $\Gamma_{1}$-charged centers,
$t=\infty$ and $\theta=0$; and at any of the $\Gamma_{2}$-charged
centers, $t=\infty$ and $\theta=\pi/2$. Generically, the range of
$t$ on a surface of constant $\theta$ is finite and
$\theta$-dependent. An example (with $N=1$) is shown in fig. 5.
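The coordinates (38)–(39) amount to a polar decomposition of the pair $(V_{1}-V_{12},\,V_{2}-V_{12})$; a minimal Python sketch:

```python
import math

def t_theta(V1, V2, V12):
    """Invert (37): V1 - V12 = t cos(theta), V2 - V12 = t sin(theta),
    i.e. (38)-(39), with the quadrant resolved by atan2."""
    a, b = V1 - V12, V2 - V12
    return math.hypot(a, b), math.atan2(b, a)
```

At spatial infinity $V_{1}=V_{2}=0$, giving $t=\sqrt{2}\,V_{12}$ and $\theta=-3\pi/4$; near a $\Gamma_{1}$-center $V_{1}\to\infty$, so $t\to\infty$ and $\theta\to 0$, matching the limits quoted in the text.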
We also introduce a $\theta$-dependent “effective charge”
$\Gamma_{\theta}$:
$$\Gamma_{\theta}\equiv\cos\theta\,\Gamma_{1}+\sin\theta\,\Gamma_{2}\,.$$
(40)
Then we can rewrite equations (32)–(34) on a
surface of fixed $\theta$ as:
$$\displaystyle 2\,e^{-U}\mathop{\rm Im}\nolimits[e^{-i\alpha}Z_{1}]$$
$$\displaystyle=$$
$$\displaystyle-\langle\Gamma_{1},\Gamma_{\theta}\rangle\,t\,,$$
(41)
$$\displaystyle 2\,e^{-U}\mathop{\rm Im}\nolimits[e^{-i\alpha}Z_{2}]$$
$$\displaystyle=$$
$$\displaystyle-\langle\Gamma_{2},\Gamma_{\theta}\rangle\,t\,,$$
(42)
$$\displaystyle 2\,e^{-U}\mathop{\rm Im}\nolimits[e^{-i\alpha}Z^{\perp}_{L}]$$
$$\displaystyle=$$
$$\displaystyle 2\mathop{\rm Im}\nolimits[e^{-i\alpha}Z^{\perp}_{L}]_{r=\infty}%
\,(=\mbox{const.})\,,$$
(43)
or, going back to the compact form of the equations:
$$2\,\partial_{t}\left[e^{-U}\mathop{\rm Im}\nolimits(e^{-i\alpha}\Omega)\right]%
=-\Gamma_{\theta}.$$
(44)
This, together with the asymptotics of $\alpha$ at spatial
infinity in (25), implies
$$\alpha=\arg Z(\Gamma_{\theta})\quad\mbox{or}\quad\alpha=\arg[-Z(\Gamma_{\theta%
})]\,.$$
(45)
Thus, comparing with (11), we see that if $\alpha=\arg Z(\Gamma_{\theta})$, (44) describes nothing but
(part of) an ordinary single center flow for a charge
$\Gamma_{\theta}$, while in the other case, $\alpha=\arg[-Z(\Gamma_{\theta})]$, it describes part of an inverted$^{5}$
single center flow for a charge $\Gamma_{\theta}$.
($^{5}$An inverted flow is a flow with reversed flow evolution
parameter, here $\partial_{t}\to-\partial_{t}$. As noted under
(11), if the flow parameter gets too large, a solution
corresponding to an inverted flow always blows up into a very
unphysical singularity [17]. However, here $t$ is
generically bounded, leaving the possibility to have indeed a well
behaved solution involving (partial) inverted effective subflows.) The only
difference with (11) is the spatial parametrization of
the flow. In the original case we had $\tau=1/r$ going from $0$ to
$\infty$, here we have $t$ as in (38), which has a
$\theta$-dependent range.
An important question is which of the two possibilities in
(45) is satisfied at a given point ${\bf x}$. This is
going to be ${\bf x}$-dependent, since the asymptotic conditions
(25) imply $\alpha=\arg[-Z_{\theta}]$ at spatial
infinity, and $\alpha=\arg[Z_{\theta}]$ when approaching any of
the centers. On the other hand, since $\alpha$ and $Z_{\theta}$ have
to be continuous functions, the spatial surface on which the
solution flips from one possibility to the other must have
$Z_{\theta}=0$, that is, $Z_{2}/Z_{1}=-\tan\theta\in\mathbb{R}$; in other
words, the moduli are at $(\Gamma_{1},\Gamma_{2})$- or at
$(\Gamma_{1},-\Gamma_{2})$-marginal stability, for $\tan\theta\leq 0$ resp. $\tan\theta\geq 0$. The converse statement is also
true, provided $t\neq 0$: if the moduli are at $(\Gamma_{1},\pm\Gamma_{2})$-marginal stability at a certain point ${\bf x}$ and $t\neq 0$, then $Z_{\theta}=0$. This follows directly from equations
(42)–(43).
Because a surface of marginal stability has real codimension one
in moduli space, and because of the asymptotics of
(25), a surface of $(\Gamma_{1},\pm\Gamma_{2})$-marginal stability will in any case split the image of
the solution in moduli space in two parts, as depicted in fig. 4: a region connected to the moduli at spatial
infinity, where $\alpha=\arg[-Z_{\theta}]$, and a region connected to
the moduli at the centers, where $\alpha=\arg[Z_{\theta}]$. The
corresponding regions in space are separated by a surface
$\Sigma_{0}$ where $Z_{\theta}=0$ for some value of $\theta$.
Furthermore, we can assume the marginal stability surface under
consideration to be of $(\Gamma_{1},\Gamma_{2})$ type (so $\tan\theta\leq 0$ on $\Sigma_{0}$ if $t\neq 0$), as it is not possible to
have only a $(\Gamma_{1},-\Gamma_{2})$-marginal stability surface
intersecting the image in moduli space. This will become clear
further on, but can be seen indirectly by imagining to vary
continuously the moduli at spatial infinity to let them approach
the intersecting surface of marginal stability; according to
(31), the configuration will then degenerate to one
with infinitely separated $\Gamma_{1}$- and $\Gamma_{2}$-centers, and
the total mass will simply equal the sum of the BPS masses of the
individual charges in the given vacuum. This total mass can only
saturate the BPS bound if the marginal stability surface under
consideration is of $(\Gamma_{1},\Gamma_{2})$-type.
Note that the solution will not necessarily exist: as noted under
(11), complete inverted flows break down at a finite
value of the flow parameter. Therefore, the $\Gamma_{\theta}$-flows
for which $\alpha=\arg(-Z_{\theta})$ should have a range of $t$ that
is not too large. We will study this problem in more detail below.
To show how the partition in effective single center flows can be
used to explicitly construct a (possibly numerical) solution, and
thus to establish its existence, let us focus on the $N=1$ case
shown in fig. 5. There exists a circle in space where
$t=0$. From this locus, effective (generically partial, and
possibly inverted) $\Gamma_{\theta}$-flows start for arbitrary
values of $\theta$, together covering all of space. On the surface
$\theta=-3\pi/4$, running to spatial infinity, the effective flow
corresponds to an inverted flow with charge
$\Gamma_{\theta}=-\frac{1}{\sqrt{2}}(\Gamma_{1}+\Gamma_{2})$, whose
image in moduli space is identical to that of a
$\Gamma_{1}+\Gamma_{2}$-flow. When $\theta=0$, we have a pure,
complete $\Gamma_{1}$-flow, and when $\theta=\pi/2$ a pure, complete
$\Gamma_{2}$-flow. At $t=0$, the moduli must be at
$(\Gamma_{1},\Gamma_{2})$-marginal stability, with $\alpha=\arg Z_{1}=\arg Z_{2}$ (this follows directly from
(42)-(43) plus the asymptotic conditions
(25)). Hence this point in moduli space is
determined as the intersection of the $\Gamma_{1}+\Gamma_{2}$ flow
starting from the moduli at spatial infinity with the surface of
marginal stability. The $\theta=-3\pi/4$, $\theta=0$ and
$\theta=\pi/2$ flows together form a “split flow”, as shown for
a specific (numerically computed) quintic example in fig. 6. The (partial) flows for the other values of $\theta$
will fatten this split flow to something like fig. 7 (b). A subset of the flows with $\tan\theta<0$
will cross a zero of $Z_{\theta}$, namely where the surface of
marginal stability is crossed in moduli space, as explained
earlier and illustrated in fig. 7. Note that this
does not lead to a breakdown of the solution, since at the same
time, we jump from $\alpha=\arg(-Z_{\theta})$ to $\alpha=\arg Z_{\theta}$, such that the inverted $\Gamma_{\theta}$-flow we have up
to the zero gets smoothly connected to an ordinary
$\Gamma_{\theta}$-flow starting from the zero.
This construction shows that a multicenter solution for given
center locations satisfying the constraint (31),
will indeed exist, if each of the $\Gamma_{\theta}$ flows exists.
The latter will be the case provided none of the $\Gamma_{\theta}$
flows crosses the $(\Gamma_{1},-\Gamma_{2})$-marginal stability
surface (where $\tan\theta\geq 0$, $Z_{\theta}$ vanishes and the
$\alpha=\arg(\pm Z_{\theta})$ condition flips), or, if some do,
provided their $t$ range is not too large.$^{6}$
($^{6}$The solution to the second BPS equation (15) does not present
further obstacles to the existence of the solution, and is discussed
below.) It is quite plausible that this will be satisfied if and
only if the “skeleton” split flow exists, though we will not try
to prove this here. What could go wrong for example is that the
partial $\Gamma_{\theta}$-flow for $\theta=\pi/4$, which finitely
extends the incoming $\Gamma_{1}+\Gamma_{2}$-branch of the split flow,
could hit a regular zero and have a maximal $t$ beyond the point
where the inverted flow beyond the zero blows up. A necessary
condition for this to happen is of course that the complete
$\Gamma_{1}+\Gamma_{2}$-flow hits a regular zero. Such cases (as in
the example of fig. 6) are quite interesting on their
own, as they correspond to states that can only be realized as a
multicenter solution; in particular they cannot be realized
as a regular black hole.$^{7}$
($^{7}$It is not that easy to find such
examples with regular black holes as constituents, because if the
charges $\Gamma_{1}\,(=\Gamma_{\theta=0})$ and $\Gamma_{2}\,(=\Gamma_{\theta=\pi/2})$ flow to a nonzero minimal $|Z|$, then
$\Gamma_{1}+\Gamma_{2}\,(=\sqrt{2}\,\Gamma_{\theta=\pi/4})$
usually flows to a nonzero minimum as well. One basically needs an
obstruction to smooth $\theta:0\to\pi/2$ interpolation between
the $\Gamma_{1}$ and $\Gamma_{2}$ attractor flows. In the example of
fig. 6, the obstruction occurs due to the conifold
points “inside” the split. This observation presumably also
explains the apparent absence of such examples for e.g. the one
modulus $T^{6}$ compactification of [16].)
Existence of the full multicenter solution will certainly be
implied by existence of the skeleton split flow for many-center
configurations approximating the idealized, uniformly charged
spherical shell of fig. 2. Indeed, when we uniformly
distribute an enormous number $N$ of $\Gamma_{2}$-centers on a
sphere at equilibrium distance $r=r_{\mathrm{ms}}$ from a black
hole center with charge $N\Gamma_{1}$, the corresponding fattened
flow given by the multicenter solution will in fact be very thin,
staying everywhere very close to the skeleton split flow. In the
limit $N\to\infty$, the fattened flow becomes infinitely thin
and reduces to the split flow itself. The $\Gamma_{1}+\Gamma_{2}$
branch corresponds to the solution outside$^{8}$ the shell,
the $\Gamma_{1}$ branch to the solution
inside the shell, and the $\Gamma_{2}$ branch to the solution on
(and near) the shell.
($^{8}$The shell will have a radius $r_{\mathrm{ms}}$ proportional to $N$ in the
${\bf x}$-coordinates. The typical distance $\ell$ between centers on
the sphere is of order $\sqrt{N}$. So the discrete structure of
the charge becomes visible at a distance of order $\ell\sim r_{\mathrm{ms}}/\sqrt{N}$ from the shell. By “outside” or
“inside” the shell, we mean being at a much greater distance
from it than $\ell$, such that the discrete structure is
essentially invisible.) This can be deduced directly from the above
equations. It is plausible because away from the shell, in the
limit $N\to\infty$ and on the scale of $r_{\mathrm{ms}}$, the
situation reduces effectively to the idealized one depicted in
fig. 2. On the other hand, when zooming in to the
natural ${\bf x}$-coordinate scale close to the shell, one sees a
number of $\Gamma_{2}$ centers, with separations of order $\sqrt{N}\to\infty$, floating around in a background with moduli value at
$z^{a}_{\mathrm{ms}}$. Around each of those centers, we should
therefore have a moduli flow corresponding to the
$\Gamma_{2}$-branch of the split flow.
Actually, to get the full solution, we should also construct
$\omega$ from (36). We did not do this explicitly, but
because the position constraint (31) is in fact the
integrability condition for this equation, a solution should
certainly exist. In terms of $t$ and $\theta$, (36)
reads, rather nicely:
$$\boldsymbol{*}d\,\omega=-\langle\Gamma_{1},\Gamma_{2}\rangle\,t^{2}\,d\theta\,.$$
(46)
Here $\boldsymbol{*}d\,\omega$ is actually the invariant local quantity
(because $\omega$ can always be shifted to $\omega+d\lambda$ by an ${\bf x}$-dependent time coordinate transformation),
and hence of more physical significance than $\omega$ itself.
Equation (46) is quite useful to visualize this
quantity in specific configurations.
So our conclusion is: (at least some) regular multicenter BPS
black hole solutions exist if and only if the corresponding split
flow solution exists.
A final remark we want to make is that for multicenter BPS
solutions involving also empty holes, this conclusion apparently
does not hold: even when the split flow exists (with at least one
branch, say $\theta=0$, ending on e.g. a conifold point) — and
consequently also an idealized spherical shell solution — a
corresponding multicenter solution (with point sources) does not
seem to exist, because any $\Gamma_{\theta}$-flow with $\theta$
sufficiently small but nonzero, will hit a regular zero, and has
at the same time an arbitrarily large maximal $t$, leading to a
breakdown of the solution. This might be related to the
delocalized nature of the source for empty holes, as discussed in
section 4.1, but at this point, we do not have a concrete
proposal for a resolution of this puzzle.
5 Composite configurations and existence of BPS states in string theory
5.1 A modified correspondence conjecture
As already mentioned in the introduction, it turns out that there
are quite a few examples of BPS states known to exist in certain
Calabi-Yau compactifications of type II string theory, which do
not have a corresponding single center BPS black (or empty) hole
solution, not even for large $N$. An example is given in fig. 8. Other examples are the higher dyons and the W-boson
in Seiberg-Witten theory.
Strictly speaking, this disproves the correspondence conjecture of
[16]. In [17], this puzzle and related paradoxes
were studied in detail, and the necessity of considering more
general stationary (multicenter) solutions in this context was
demonstrated. Thus we are brought to the following adaptation of
the correspondence conjecture, in its strongest form: a BPS
state of a given charge exists in the full string theory if and
only if a single or (possibly multi-) split attractor flow
corresponding to that charge exists.$^{9}$
($^{9}$In fact, this
statement of the conjecture is probably too strong, as it might
happen that a certain split flow exists but ceases to do so after
continuous variation of the moduli, without actually
crossing a surface of marginal stability. In that case, one does
not expect the original state to exist as a BPS state in the full
quantum theory. This is similar to a phenomenon occurring in the
context of 3-pronged strings [25], where the
existence criterion for certain BPS states needs to be refined
accordingly. We will discuss this issue in more detail in
[21].) Note that often, a BPS state can
have several different realizations in the four dimensional low
energy effective supergravity theory, either as an ordinary BPS
black hole, corresponding to a single flow, or as one or more
multicenter solutions (or spherical shell solutions),
corresponding to one or more split flows. In fig. 9,
it is shown how this modification of the conjecture indeed
resolves the paradoxes as presented by the examples mentioned in
the previous paragraph.
A much more detailed study of the BPS spectrum of the quintic from
this effective field theory point of view will be presented in
[21].
5.2 Marginal stability, Joyce transitions and
$\Pi$-stability
From (21) or (23) or
(31), it follows that when the moduli at infinity
approach the surface of $(\Gamma_{1},\Gamma_{2})$-marginal
stability, the equilibrium distance between the $\Gamma_{1}$- and
$\Gamma_{2}$-sources will diverge, eventually reaching infinity at
marginal stability. Beyond the surface, the realization of this
charge as a BPS $(\Gamma_{1},\Gamma_{2})$-composite no longer exists.
This gives a nicely continuous four dimensional spacetime picture
for the decay of the state when crossing a surface of marginal
stability.
Furthermore, these formulae tell us at which side of the marginal
stability surface the composite state can actually exist: since
$r_{\mathrm{ms}}>0$, it is the side satisfying
$$\langle\Gamma_{1},\Gamma_{2}\rangle\,\sin(\alpha_{1}-\alpha_{2})>0\,,$$
(47)
where $\alpha_{i}=\arg Z(\Gamma_{i})_{r=\infty}$. Sufficiently close
to marginal stability, this reduces to
$$\langle\Gamma_{1},\Gamma_{2}\rangle\,(\alpha_{1}-\alpha_{2})>0\,,$$
(48)
which is precisely the stability condition for “bound states” of
special lagrangian 3-cycles found by Joyce in a purely
geometrical Calabi-Yau context (under more specific conditions,
which we will not give here) [2, 14]!
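The passage from (47) to (48) is simply the leading-order Taylor expansion of the sine: at marginal stability the central charge phases align, so $\alpha_{1}-\alpha_{2}\to 0$ and

```latex
\sin(\alpha_{1}-\alpha_{2})
  =(\alpha_{1}-\alpha_{2})-\frac{(\alpha_{1}-\alpha_{2})^{3}}{6}+\dots
  \;\approx\;\alpha_{1}-\alpha_{2}\,,
```

so sufficiently close to the marginal stability surface the sign of $\langle\Gamma_{1},\Gamma_{2}\rangle\sin(\alpha_{1}-\alpha_{2})$ agrees with that of $\langle\Gamma_{1},\Gamma_{2}\rangle(\alpha_{1}-\alpha_{2})$.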
Note also that, since the right-hand side of (19) can only
vanish for one value of $\tau$, the composite configurations we
are considering here will actually satisfy
$$|\alpha_{1}-\alpha_{2}|<2\pi\,.$$
(49)
If the constituent $\Gamma_{i}$ of the composite configuration for
which $\langle\Gamma,\Gamma_{i}\rangle>0$ can be identified with
a “subobject” of the state as defined in [10], the above
conditions imply that the phases satisfy the $\Pi$-stability
criterion introduced in that reference. Though this similarity is
interesting, it is not clear how far it extends. $\Pi$-stability
seems to be considerably more subtle than what emerges here. It
would be interesting to explore this connection further.
5.3 Non-BPS composites
From the discussion of section 4.2.1, one could also
contemplate the existence of non-BPS composites. For example, what
happens when we try to throw up to $N$ particles of charge
$\Gamma_{2}$ into a black hole of charge $N\Gamma_{1}$, if we know
that a composite BPS $(N\Gamma_{1},N\Gamma_{2})$-configuration does
not exist? What does the ground state of this system look like in
the low energy effective supergravity theory? One possibility is
that a stable, stationary, non-BPS composite develops. Another
possibility is that at a certain point, we simply cannot throw any
$\Gamma_{2}$-particle into the black hole anymore, because it is
repelled from it all the way up to spatial infinity. As a first
approach to study non-BPS composites, one can look for nonzero
minima of the force potential $W$ for a test particle in the field
of a (BPS) black hole, equation (18). A configuration
with this particle in its equilibrium position can then be
expected to exist as a non-BPS solution.
A quintic example illustrating the possibility of having such a
nonzero minimum is given in fig. 10.
6 Conclusions
We have proposed a modified version of the correspondence
conjecture of [16] between BPS states in Calabi-Yau
compactifications of type II string theory on the one hand, and
four dimensional stationary ${\mathcal{N}}=2$ supergravity solutions on
the other hand, and established a link between these solutions and
split attractor flows in moduli space. Some interesting
connections emerged, to the enhançon mechanism, the 3-pronged
string picture of QFT BPS states, $\Pi$-stability and Joyce
transitions of special lagrangian manifolds.
The most prominent open question is of course whether the
conjecture (perhaps in a more refined form, as outlined in
footnote 9) actually works. A more systematic
comparison with known string theory results would be the obvious
strategy for verifying this, but this is considerably complicated
by the fact that the number of cases accessible
in both approaches at the same time is rather limited. Some steps
in that direction will be presented in [21].
Other interesting open questions are: How to find a proper
“multicenter” description of composite configurations involving
empty holes? Is there a connection between D-brane moduli spaces
and supergravity solution moduli spaces? Can those solution moduli
spaces teach us something about the entropy of these states? What
is the precise relation with $\Pi$-stability? Is there a link
between the emergence of spatially extended configurations here
and in the context of noncommutative brane effects, as for example
in [32]? And who’s going to win the Subway Series, the
Yankees or the Mets?
Hopefully some of these questions will get an answer in the near
future.
Acknowledgments. I would like to thank Mike Douglas, Tomeu Fiol, Brian Greene, Greg
Moore, Mark Raugas and Christian Römelsberger for useful
discussions and correspondence, and the conference organizers Eric
D’Hoker, D.H. Phong and S.T. Yau for their hard work and
patience. Part of this work was done in collaboration with Brian
Greene and Mark Raugas.
References
[1]
N. Seiberg and E. Witten, Electric-magnetic duality,
monopole condensation and confinement in $N=2$ supersymmetric
Yang-Mills theory, Nucl. Phys. B 426 (1994) 19 [hep-th/9407087].
[2]
D. Joyce, On counting special lagrangian homology
3-spheres, hep-th/9907013.
[3]
N. Hitchin, The moduli space of special lagrangian
submanifolds, dg-ga/9711002.
[4]
A. Recknagel and V. Schomerus, D-branes in Gepner models,
Nucl. Phys. B 531 (1998) 185 [hep-th/9712186].
[5]
I. Brunner, M.R. Douglas, A. Lawrence and C. Römelsberger,
D-branes on the quintic, J. High Energy Phys. 08 (2000) 015
[hep-th/9906200].
[6]
M.R. Douglas, Topics in D-geometry, Class. and Quant. Grav. 17 (2000) 1057
[hep-th/9910170].
[7]
D.-E. Diaconescu and C. Romelsberger, D-branes and bundles
on
elliptic fibrations, Nucl. Phys. B 574 (2000) 245 [hep-th/9910172].
[8]
E. Scheidegger, D-branes on some one- and two-parameter
Calabi-Yau hypersurfaces, J. High Energy Phys. 04 (2000) 003 [hep-th/9912188].
[9]
I. Brunner and V. Schomerus, D-branes at singular curves of
Calabi-Yau compactifications, J. High Energy Phys. 04 (2000) 020
[hep-th/0001132].
[10]
M.R. Douglas, B. Fiol and C. Romelsberger, Stability and BPS
branes, hep-th/0002037.
[11]
M.R. Douglas, B. Fiol and C. Romelsberger, The spectrum of
BPS
branes on a noncompact Calabi-Yau, hep-th/0003263.
[12]
D.E. Diaconescu and M.R. Douglas, D-branes on stringy
Calabi-Yau manifolds, hep-th/0006224.
[13]
B. Fiol and M. Marino, BPS states and algebras from
quivers, J. High Energy Phys. 07 (2000) 031 [hep-th/0006189].
[14]
S. Kachru and J. McGreevy, Supersymmetric three-cycles and
(super)symmetry breaking, Phys. Rev. D 61 (2000) 026001 [hep-th/9908135].
[15]
S. Ferrara, R. Kallosh and A. Strominger, $N=2$ extremal
black
holes, Phys. Rev. D 52 (1995) 5412 [hep-th/9508072].
[16]
G. Moore, Arithmetic and attractors, hep-th/9807087;
Attractors and arithmetic, hep-th/9807056.
[17]
F. Denef, Supergravity flows and D-brane stability,
J. High Energy Phys. 08 (2000) 050 [hep-th/0005049].
[18]
C.V. Johnson, A.W. Peet and J. Polchinski, Gauge theory and
the
excision of repulson singularities, Phys. Rev. D 61 (2000) 086001
[hep-th/9911161].
[19]
K. Behrndt, D. Lüst and W.A. Sabra, Stationary solutions
of
$N=2$ supergravity, Nucl. Phys. B 510 (1998) 264 [hep-th/9705169].
[20]
G.L. Cardoso, B. de Wit, J. Käppeli and T. Mohaupt,
Stationary BPS Solutions in N=2 Supergravity with
$R^{2}$-Interactions, hep-th/0009234.
[21]
F. Denef, B. Greene, M. Raugas, Type IIA D-branes on the
Quintic from a four dimensional supergravity perspective, to
appear.
[22]
B. de Wit, P.G. Lauwers and A.V. Proeyen, Lagrangians of
$N=2$
supergravity-matter systems, Nucl. Phys. B 255 (1985) 569;
B. Craps, F. Roose, W. Troost and A.V. Proeyen, What is
special
kaehler geometry?, Nucl. Phys. B 503 (1997) 565 [hep-th/9703082].
[23]
S. Ferrara, G.W. Gibbons and R. Kallosh, Black holes and
critical points in moduli space, Nucl. Phys. B 500 (1997) 75
[hep-th/9702103].
[24]
A. Sen, BPS states on a three brane probe,
Phys. Rev. D 55 (1997) 2501 [hep-th/9608005].
[25]
M.R. Gaberdiel, T. Hauer and B. Zwiebach, Open string-string
junction transitions, Nucl. Phys. B 525 (1998) 117 [hep-th/9801205];
O. Bergman and A. Fayyazuddin, String junctions and BPS
states
in Seiberg-Witten theory, Nucl. Phys. B 531 (1998) 108 [hep-th/9802033];
A. Mikhailov, N. Nekrasov and S. Sethi, Geometric
realizations
of BPS states in $N=2$ theories, Nucl. Phys. B 531 (1998) 345
[hep-th/9803142];
O. DeWolfe, T. Hauer, A. Iqbal and B. Zwiebach, Constraints
on
the BPS spectrum of $N=2$, $D=4$ theories with ADE flavor
symmetry, Nucl. Phys. B 534 (1998) 261 [hep-th/9805220].
[26]
A. Strominger, Massless black holes and conifolds in string
theory, Nucl. Phys. B 451 (1995) 96 [hep-th/9504090].
[27]
R. Ferell and D. Eardley, Slow-motion scattering and
coalescence of maximally charged black holes, Phys. Rev. 59 (1987) 1617.
[28]
J. Maldacena, J. Michelson and A. Strominger, Anti-de Sitter
fragmentation, J. High Energy Phys. 02 (1999) 011 [hep-th/9812073].
[29]
J. Maldacena, The large-$N$ limit of superconformal field
theories and supergravity, Adv. Theor. Math. Phys. 2 (1998) 231 [hep-th/9711200];
S.S. Gubser, I.R. Klebanov and A.M. Polyakov, Gauge theory
correlators from non-critical string theory, Phys. Lett. B 428 (1998) 105
[hep-th/9802109];
E. Witten, Anti-de Sitter space and holography,
Adv. Theor. Math. Phys. 2 (1998) 253 [hep-th/9802150].
[30]
B.R. Greene and C.I. Lazaroiu, Collapsing D-branes in
Calabi-Yau
moduli space, 1, hep-th/0001025.
[31]
C. Misner, K. Thorne and J.A. Wheeler, Gravitation, Freeman
and Co. 1973, chapter 21.
[32]
R.C. Myers, Dielectric-branes, J. High Energy Phys. 12 (1999) 022
[hep-th/9910053];
N.R. Constable, R.C. Myers and O. Tafjord,
The noncommutative
bion core, Phys. Rev. D 61 (2000) 106009 [hep-th/9911136]. |
Boundary De Giorgi-Ladyzhenskaya classes and their application to regularity of swirl of Navier-Stokes
Jan Burczak
[email protected]
Institute of Mathematics, Polish Academy of Sciences, Śniadeckich 8, 00-950 Warsaw.
Abstract
An embedding theorem of space-boundary-type De Giorgi-Ladyzhenskaya parabolic classes into Hölder spaces is presented, which is useful for regularity considerations for parabolic boundary value problems. Additionally, an application of this theory to the swirl of the Navier-Stokes equations is presented.
keywords:
DeGiorgi classes, swirl of Navier-Stokes, regularity of parabolic systems
††journal: arXiv
1 Introduction
We present a unified treatment of embeddings of boundary-type De Giorgi-Ladyzhenskaya parabolic classes into Hölder spaces. This result serves regularity studies of certain PDEs. We therefore restrict ourselves to the case of the space boundary and do not consider the time boundary, since in the class of PDEs which can be tackled by this theory, local-in-time smoothness is standard. Generally we follow the ideas of lsu , where the case of boundary regularity is briefly mentioned. Here we provide clear and complete proofs and improve the original result qualitatively by obtaining better Hölder exponents, in the spirit of zaj . Finally, an application of this theory to the swirl of the Navier-Stokes equations is presented.
2 Notation and preliminary results
We work with the following geometric objects:
1.
$\Omega_{T}$ denotes the space-time cylinder $\Omega\times[-T,0]$ with a domain $\Omega$ as its base,
2.
$\Gamma\subset\partial\Omega\times(-T,0)$ is the open part of the space boundary of $\Omega_{T}$ in whose vicinity we are interested in boundary regularity (in the case of Dirichlet data we need a certain regularity of the boundary data on $\Gamma$),
3.
$Q(\rho,\tau)$ denotes, for a fixed point $(x_{0},t_{0})\in\Gamma$, a boundary cylinder $B_{\rho}(x_{0})\times(t_{0}-\tau;t_{0})$, small enough to satisfy $\partial\Omega_{T}\cap Q(\rho,\tau)=\Gamma\cap Q(\rho,\tau)$.
We will also use the following notation:
$$|f|_{V(\Omega_{T})}\equiv\sup_{t\in[-T,0]}|f(t)|_{2,\Omega}+|\nabla f|_{2,\Omega_{T}}$$
(2.1)
$$V(\Omega_{T})\equiv\{f\in L^{2}(\Omega_{T}):|f|_{V(\Omega_{T})}<\infty\}$$
(2.2)
where $|f|_{2,U}\equiv\int_{U}|f|^{2}$ and $\nabla$ denotes the spatial gradient. Observe that here we assume that $|f(t)|_{2,\Omega}<\infty$ for every $t$. Let
$$\mathop{\text{osc}}_{U}f=\max_{U}f-\min_{U}f$$
(2.3)
$$A^{f}_{k,\rho}(t)\equiv\{x\in B_{\rho}(x_{0})\cap\Omega:f(x,t)>k\}$$
(2.4)
$$\mu_{k,\rho,\tau}\equiv\int_{t_{0}-\tau}^{t_{0}}\mu^{\frac{r}{q}}(A^{f}_{k,\rho}(t))dt$$
(2.5)
$$f^{(k)}\equiv(f-k)^{+}$$
(2.6)
We now introduce the classes $B_{N},B_{D}$, dependent on parameters specified further on. The former is useful for showing boundary regularity for Neumann problems, the latter for Dirichlet problems. Let us define the formal inequality:
$$\displaystyle\int_{B_{\rho}(x_{0})\cap\Omega}|w^{(k)}(x,t_{0})\xi(x,t_{0})|^{2}dx+\int_{Q_{\rho,\tau}\cap\Omega_{T}}|\nabla w^{(k)}(x,t)\xi(x,t)|^{2}dxdt$$
(2.7)
$$\displaystyle\leq\int_{B_{\rho}(x_{0})\cap\Omega}|w^{(k)}(x,t_{0}-\tau)\xi(x,t_{0}-\tau)|^{2}dx+$$
$$\displaystyle\gamma\left[\int_{Q_{\rho,\tau}\cap\Omega_{T}}(|\nabla\xi|^{2}+\xi|\xi,_{t}|)|w^{(k)}|^{2}+\left(\int_{t_{0}-\tau}^{t_{0}}\left(\int_{A_{k,\rho}(t)}\xi(x,t)dx\right)^{\frac{r}{q}}dt\right)^{\frac{2(1+\kappa)}{r}}\right]$$
Definition 2.1.
$u\in B_{N}(\Omega_{T},M,\gamma,r,\delta,\kappa)$ iff
(i)
$u$ is a pointwise-defined representative of a function in $V(\Omega_{T})\cap L^{\infty}(\Omega_{T})$ and $|u|_{\infty,\Omega_{T}}\leq M$.
(ii)
Inequality (2.7) with $w\equiv\pm u$ and $\frac{1}{r}+\frac{n}{2q}=\frac{n}{4}$ holds for any $k\geq\mathop{\text{ess\>sup}}_{Q_{\rho,\tau}}w-\delta$ and $\xi\in C(Q_{\rho,\tau}),\;0\leq\xi\leq 1,\xi\equiv 0$ on $\partial B_{\rho}(x_{0})\times(t_{0}-\tau,t_{0})$.
Definition 2.2.
$u\in B_{D}(\Omega_{T},M,\gamma,r,\delta,\kappa;\Gamma,c_{\Gamma},\beta)$ iff
(i)
$u$ is a pointwise-defined representative of a function in $V(\Omega_{T})\cap L^{\infty}(\Omega_{T})$ and $|u|_{\infty,\Omega_{T}}\leq M$.
(ii)
Inequality (2.7) with $w\equiv\pm u$ and $\frac{1}{r}+\frac{n}{2q}=\frac{n}{4}$ holds for any $k\geq\max(\mathop{\text{ess\>sup}}_{Q_{\rho,\tau}}w-\delta,\;\max_{\Gamma\cap Q_{\rho,\tau}}w)$ and $\xi\in C(Q_{\rho,\tau}),\;0\leq\xi\leq 1,\xi\equiv 0$ on $\partial B_{\rho}(x_{0})\times(t_{0}-\tau,t_{0})$.
(iii)
for the open set $\Gamma\subset\partial\Omega\times(-T,0)$ one has $\mathop{\text{osc}}_{Q(\rho,\rho^{2})}u_{|\Gamma}\leq c_{\Gamma}\rho^{\beta}$.
Remark 2.0.
It is important that $\xi$ does not have to vanish on $\Gamma$.
Remark 2.0.
One can inessentially generalize Definitions 2.1 and 2.2 by demanding that (2.7) holds merely for functions $\xi$ which cut off certain cylinders $Q_{\rho,\tau}$.
We end this section by quoting a few well-known results.
Lemma 2.1.
For nonnegative $h\in W^{1,1}(B_{\rho})$, vanishing on a set $U_{0}$ of positive Lebesgue measure, the following generalized Poincaré inequality holds:
$$\int_{U}h\eta\leq K_{P}\rho^{n}\frac{\mu^{\frac{1}{n}}(U)}{\mu(U_{0})}\int_{B_{\rho}}|\nabla h|\eta$$
(2.8)
where $\eta\equiv\eta(|x|)\in[0,1]$ and $\eta_{|U_{0}}\equiv 1$, $K_{P}=2^{n}\left(\frac{1}{n}+\omega_{n}\right)$ .
Lemma 2.2.
Assume that $\Omega$ is convex. For nonnegative $h\in W^{1,1}(B_{\rho}\cap\Omega)$, vanishing on a set $U_{0}$ of positive Lebesgue measure, the following generalized Poincaré inequality holds:
$$\int_{U}h\eta\leq\tilde{K}_{P}\frac{\rho^{n+1}}{\mu(U_{0})}\int_{B_{\rho}\cap\Omega}|\nabla h|\eta$$
(2.9)
where $\eta\equiv\eta(|x|)\in[0,1]$ and $\eta_{|U_{0}}\equiv 1$, $\tilde{K}_{P}=$ .
A suggestion of proof can be found in lsu , p. 92.
3 Results
The following conditions, which exclude cusps of $\Omega$, are needed for the validity of the results:
$$\mathop{\text{\LARGE$\exists$}}_{\theta_{0}>0,\rho_{0}>0}\mathop{\text{\LARGE$\forall$}}_{\rho\leq\rho_{0},(x,t)\in\Gamma}\mu(B_{\rho}(x)\cap\Omega^{c})\geq\theta_{0}\mu(B_{\rho}(x)),$$
(3.10)
$$\mathop{\text{\LARGE$\exists$}}_{\theta_{0}>0,\rho_{0}>0}\mathop{\text{\LARGE$\forall$}}_{\rho\leq\rho_{0},(x,t)\in\Gamma}\mu(B_{\rho}(x)\cap\Omega)\geq\theta_{0}\mu(B_{\rho}(x));$$
(3.11)
the former allows for a substantial simplification of the result concerning the Dirichlet boundary case and is referred to as the anti-outer-cusp condition in what follows. The latter plays a role in the Neumann boundary case and is referred to as the anti-inner-cusp condition.
Theorem 3.1.
Assume that the anti-outer-cusp condition (3.10) holds.
Take $u\in B_{D}(\Omega_{T},M,\gamma,r,\delta,\kappa;\Gamma,c_{\Gamma},\beta)$ with
$$r>2\;\text{ for }\;n=2\quad\text{ and }\quad r\geq 2\;\text{ for }\;n>2$$
$$M<\infty,\;\gamma>0,\;\delta>0,\;\kappa>0,\;c_{\Gamma}<\infty,\;\beta>0$$
Then $u$ is Hölder continuous in a vicinity of $\Gamma$.
More precisely: take any $\sigma\in(1,2],\;\theta\in(0,1]$ and a boundary cylinder $Q(\tilde{\rho}_{0},\theta\tilde{\rho}_{0}^{2})$ where $\tilde{\rho}_{0}\leq\rho_{0}$, with $\rho_{0}$ from (3.10), such that $\partial\Omega_{T}\cap Q(\tilde{\rho}_{0},\theta\tilde{\rho}_{0}^{2})=\Gamma\cap Q(\tilde{\rho}_{0},\theta\tilde{\rho}_{0}^{2})$. We have for $\rho\leq\sigma^{-2}\tilde{\rho}_{0}$
$$\mathop{\text{osc}}_{Q(\rho,\theta{\rho}^{2})\cap(\Omega_{T}\cup\Gamma)}u\leq C\rho^{\alpha}$$
(3.12)
with
$$\alpha=\min\left(-\log_{\sigma^{2}}(1-2^{-s}),\beta,\frac{n\kappa}{2}\right)\qquad C=\max\left((\sfrac{\sigma^{2}}{\tilde{\rho}_{0}})^{\alpha}\max\left(\mathop{\text{osc}}_{Q_{\tilde{\rho}_{0}}}u,2^{s}\sigma^{\frac{n\kappa}{2}}\tilde{\rho}_{0}^{\frac{n\kappa}{2}}\right),c_{\Gamma}\right)$$
(3.13)
and $s$ satisfying
$$s\geq 1+\max\left[\left\lceil\log_{2}\frac{2M}{\delta}\right\rceil+\theta\left(4\left(\frac{1}{n}+\omega_{n}\right)^{2}\omega_{n}^{\frac{2}{n}}\gamma\frac{2^{3n+2+2\max\left(1,\frac{1+\kappa}{r}\right)}}{\eta^{2}(\sigma-1)\theta_{0}^{2}}\right),\;\log_{2}(2c_{\Gamma}\sigma^{\beta})\right]$$
(3.14)
Theorem 3.2.
Assume that the anti-inner-cusp condition (3.11) holds.
Take $u\in B_{N}(\Omega_{T},M,\gamma,r,\delta,\kappa)$ with
$$r>2\;\text{ for }\;n=2\quad\text{ and }\quad r\geq 2\;\text{ for }\;n>2$$
$$M<\infty,\;\gamma>0,\;\delta>0,\;\kappa>0$$
Then $u$ is Hölder continuous in a vicinity of $\Gamma$.
More precisely: take any $\sigma\in(1,2]$,
$$\theta\leq\min\left(1,\frac{\theta_{0}}{2304\gamma},\;\left(\frac{\omega^{-\frac{2(1+\kappa)-q}{q}}_{n}}{128\gamma}\right)^{\frac{r}{2(1+\kappa)}}\right)$$
(3.15)
and a boundary cylinder $Q(\tilde{\rho}_{0},\theta\tilde{\rho}_{0}^{2})$ where $\tilde{\rho}_{0}\leq\rho_{0}$, with $\rho_{0}$ from (3.11), such that $\partial\Omega_{T}\cap Q(\tilde{\rho}_{0},\theta\tilde{\rho}_{0}^{2})=\Gamma\cap Q(\tilde{\rho}_{0},\theta\tilde{\rho}_{0}^{2})$. We have for $\rho\leq\sigma^{-2}\tilde{\rho}_{0}$
$$\mathop{\text{osc}}_{Q(\rho,\theta{\rho}^{2})\cap(\Omega_{T}\cup\Gamma)}u\leq C\rho^{\alpha}$$
(3.16)
with
$$\alpha=\min\left(-\log_{\sigma^{2}}(1-2^{-s}),\frac{n\kappa}{2}\right)\qquad C=(\sfrac{\sigma^{2}}{\tilde{\rho}_{0}})^{\alpha}\max\left(\mathop{\text{osc}}_{Q_{\tilde{\rho}_{0}}}u,2^{s}\sigma^{\frac{n\kappa}{2}}\tilde{\rho}_{0}^{\frac{n\kappa}{2}}\right)$$
(3.17)
and any $s$ satisfying
$$s\geq\left\lceil\log_{2}\frac{2M}{\delta}\right\rceil+(72\omega_{n}\tilde{K}_{P})^{2}\gamma\theta\frac{2^{n+2+2\max\left(1,\frac{1+\kappa}{r}\right)}}{\eta^{2}(\sigma-1)}$$
(3.18)
As an example of an application of the above theory, we present the proof of a result on the swirl of the axially symmetric Navier-Stokes flow in a cylinder. Before stating the result, let us introduce some quantities.
For $v_{r},\;v_{\phi},\;v_{z}$ being the cylindrical components of the three-dimensional vector field $v$, introduce the quantity $u=rv_{\phi}$, called the swirl. Let $v$ be a (weak) solution of the Navier-Stokes system in a cylinder $\Omega_{T}$ with radius $R$:
$$\displaystyle v,_{t}+v\cdot\nabla v-\nu\Delta v=0$$
$$\displaystyle\text{ in }\;\Omega_{T}$$
(3.19)
$$\displaystyle\text{div }v=0$$
$$\displaystyle\text{ in }\;\Omega_{T}$$
$$\displaystyle v\cdot n=0,\quad n\cdot{\mathbb{D}}(v)\cdot\tau_{i}=0$$
$$\displaystyle\text{ on }\;S_{1}^{T}$$
$$\displaystyle v\cdot n=0$$
$$\displaystyle\text{ on }\;S_{2}^{T}$$
$$\displaystyle v_{|t=0}=v_{0}$$
$$\displaystyle\text{ in }\;\Omega$$
where $S_{1}$ denotes the curved part of the boundary of the cylinder and $S_{2}$ its (two-component) flat part. Consequently, $u$ solves the following equation:
$$\displaystyle u,_{t}+v\cdot\nabla u-\nu\Delta u+\nu\frac{u,_{r}}{r}=0$$
$$\displaystyle\text{ in }\;\Omega_{T}$$
(3.20)
$$\displaystyle u,_{r}=\frac{2}{R}u$$
$$\displaystyle\text{ on }\;S_{1}^{T}$$
$$\displaystyle u\cdot n=0$$
$$\displaystyle\text{ on }\;S_{2}^{T}$$
$$\displaystyle u_{|t=0}=u_{0}$$
$$\displaystyle\text{ in }\;\Omega$$
Theorem 3.3.
Assume that $u\leq M$ satisfies (3.20), with the corresponding solution $v$ of (3.19) satisfying $v\in L^{r^{\prime}}(0,T;L^{q^{\prime}})$, $\frac{3}{q^{\prime}}+\frac{2}{r^{\prime}}=1-\frac{3}{2}\kappa$. Then $u\in B_{N}(\Omega_{T},M,\gamma,r,\delta,\kappa)$ with any $\gamma\in(0,\nu),\delta\in\mathbb{R}$.
As a corollary let us formulate
Theorem 3.4.
Assume that an axially symmetric solution $v$ of (3.19) satisfies $v_{r},v_{z}\in L^{10}(\Omega_{T})$, that $rv_{0}$ is bounded, and that in a vicinity of the axis of symmetry $u_{0}$ is Hölder continuous with Hölder exponent $\frac{3}{2}\kappa$, $\kappa\in(0,\frac{1}{3}]$. Then $u\in C^{\alpha}(\Omega_{T})$.
For the entire section, fix $x_{0},t_{0}$ and a supercylinder $Q(\min(\rho_{0},1),1)$ containing all further cylinders, where $\rho_{0}$ comes from the anti-cusp condition. Denote the boundary cylinder $Q(\sigma\rho,\theta(\sigma\rho)^{2})$ by $Q_{\sigma\rho}$ and
$$\overline{m}\equiv\max_{Q_{\sigma\rho}\cap(\Omega_{T}\cup\Gamma)}u,\quad\underline{m}\equiv\min_{Q_{\sigma\rho}\cap(\Omega_{T}\cup\Gamma)}u,\quad\omega\equiv\mathop{\text{osc}}_{Q_{\sigma\rho}\cap(\Omega_{T}\cup\Gamma)}u\;(=\overline{m}-\underline{m})$$
(3.21)
Lemma 3.5 (Trichotomy for $B_{D}$).
Take $u\in B_{D}$. For any fixed $\eta>0$ and $\sigma\in(1,2]$ there exists $s=s(\eta,\sigma)$ for which the following trichotomy holds for every time-contraction parameter $\theta\leq 1$:
$$\displaystyle\text{either}\quad(T1)$$
$$\displaystyle\qquad\omega\leq 2^{s}\rho^{\min(\beta,\frac{n\kappa}{2})}$$
(3.22)
$$\displaystyle\text{or}\quad(T2)$$
$$\displaystyle\qquad\mu(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)>\overline{m}-2^{-s+1}\omega\})\leq\eta\rho^{n+2}$$
$$\displaystyle\text{or}\quad(T2^{\prime})$$
$$\displaystyle\qquad\mu(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)<\underline{m}+2^{-s+1}\omega\})\leq\eta\rho^{n+2}$$
This lemma asserts quantitatively the following observation: for a function in $B_{D}$, either (T1) we control the oscillation on $Q_{\sigma\rho}$, or (T2)/(T2$^{\prime}$) on a considerable fraction (in terms of Lebesgue measure) of the slightly smaller cylinder $Q_{\rho}$ the function $u$ is bounded away from its maximum or minimum over the bigger cylinder.
Define
$$\lceil x\rceil=\min\{c\in\mathbb{N}\cup\{0\}:c\geq x\}$$
Proof.
Assume that (T1) fails.
Therefore
$$\frac{\omega}{2}>2^{s-1}\rho^{\min(\beta,\frac{n\kappa}{2})}\geq 2^{s-1}\rho^{\beta}\geq c_{\Gamma}(\sigma\rho)^{\beta}\geq\mathop{\text{osc}}_{Q(\sigma\rho,(\sigma\rho)^{2})}u_{|\Gamma}$$
(3.23)
where the last-but-one inequality is given by the definition of $s$, i.e. (3.43), and the last one by the definition of $B_{D}$, point (iii).
Inequality (3.23) implies that
$$\text{either }\;\max_{Q(\sigma\rho,(\sigma\rho)^{2})}u_{|\Gamma}<\overline{m}-\frac{\omega}{4}\quad\text{ or }\;\min_{Q(\sigma\rho,(\sigma\rho)^{2})}u_{|\Gamma}>\underline{m}+\frac{\omega}{4}.$$
(3.24)
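A sketch of why (3.23) forces (3.24): if both alternatives in (3.24) failed, i.e. $\max_{\Gamma}u_{|\Gamma}\geq\overline{m}-\omega/4$ and $\min_{\Gamma}u_{|\Gamma}\leq\underline{m}+\omega/4$ on $Q(\sigma\rho,(\sigma\rho)^{2})$, then

```latex
\mathop{\text{osc}}_{Q(\sigma\rho,(\sigma\rho)^{2})}u_{|\Gamma}
  \;\geq\;\Bigl(\overline{m}-\frac{\omega}{4}\Bigr)-\Bigl(\underline{m}+\frac{\omega}{4}\Bigr)
  \;=\;\omega-\frac{\omega}{2}\;=\;\frac{\omega}{2}\,,
```

contradicting the strict inequality (3.23).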
Assume that the former holds. Define
$$\displaystyle r_{0}$$
$$\displaystyle\equiv\left\lceil\log_{2}\frac{2M}{\delta}\right\rceil$$
(3.25)
$$\displaystyle k_{r}$$
$$\displaystyle\equiv\overline{m}-2^{-r}\omega\quad\text{ for }r\geq r_{0}$$
where $M,\delta$ are parameters of $B_{D}$.
Observe that (3.25) and the assumption that the first possibility in (3.24) holds imply, for $r_{0}\geq 2$,
$$k_{r}\geq\max\left(\max_{\Gamma\cap Q_{\sigma\rho}}u,\;\mathop{\text{ess\>sup}}_{Q_{\sigma\rho}\cap(\Omega_{T}\cup\Gamma)}u-\delta\right),$$
(3.26)
so the levels $k_{r}$ are admissible in (2.7). We show that (T2) is valid. For clarity, the following main part of the proof is divided into a few steps.
(i)
Define a function in $Q_{\rho}$
$$h(x,t)\equiv\begin{cases}\begin{aligned} \displaystyle k_{r+1}-k_{r}&\displaystyle\quad\{(x,t)\in Q_{\rho}\cap(\Omega_{T}\cup\Gamma):u(x,t)>k_{r+1}\}\\
\displaystyle u(x,t)-k_{r}&\displaystyle\quad\{(x,t)\in Q_{\rho}\cap(\Omega_{T}\cup\Gamma):k_{r}<u(x,t)\leq k_{r+1}\}\\
\displaystyle 0&\displaystyle\quad\text{otherwise}\end{aligned}\end{cases}$$
(3.27)
Both $u\in B_{D}$ and (3.26), which gives $u_{|\Gamma\cap Q_{\rho}}(x,t)-k_{r}\leq 0$, imply that $h(\cdot,t)\in W^{1,1}(B_{\rho})$. Hence one can use Lemma 2.1, choosing $\eta\equiv 1$:
$$\int_{B_{\rho}}h(t)\leq K_{P}\rho^{n}\frac{\mu^{\frac{1}{n}}(B_{\rho})}{\mu_{n}(\{x\in B_{\rho}:h(x,t)=0\})}\int_{B_{\rho}}|\nabla h(t)|$$
(3.28)
By definition, $h=0$ outside $\Omega$. Using this and the Chebyshev inequality, one obtains from (3.28)
$$(k_{r+1}-k_{r})\mu_{n}(A_{k_{r+1},\rho}(t))\leq K_{P}\frac{{\omega_{n}}^{\frac{1}{n}}\rho^{n+1}}{\mu(B_{\rho}\cap\Omega^{c})}\int_{A_{k_{r},\rho}(t)\backslash A_{k_{r+1},\rho}(t)}|\nabla h(t)|$$
(3.29)
where definition (2.4) is used.
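The Chebyshev step here is simply that $h(\cdot,t)=k_{r+1}-k_{r}$ on $A_{k_{r+1},\rho}(t)$, so restricting the left-hand side of (3.28) to this set gives

```latex
\int_{B_{\rho}}h(t)
  \;\geq\;\int_{A_{k_{r+1},\rho}(t)}h(t)
  \;=\;(k_{r+1}-k_{r})\,\mu_{n}(A_{k_{r+1},\rho}(t))\,,
```

while on the right-hand side $\nabla h(t)$ is supported in $A_{k_{r},\rho}(t)\backslash A_{k_{r+1},\rho}(t)$, where $h=u-k_{r}$.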
In view of the anti-cusp condition (3.10), (3.29) yields
$$\omega 2^{-(r+1)}\mu_{n}(A_{\overline{m}-\omega 2^{-(r+1)},\rho}(t))\leq\rho\frac{K_{P}{\omega_{n}}^{\frac{1-n}{n}}}{\theta_{0}}\int_{A_{k_{r},\rho}(t)\backslash A_{k_{r+1},\rho}(t)}|\nabla u^{(k_{r})}(t)|$$
(3.30)
Integrate (3.30) over $[t_{0}-\theta\rho^{2},t_{0}]$
$$\displaystyle\omega 2^{-(r+1)}\mu(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)>\overline{m}-2^{-{(r+1)}}\omega\})\leq\\
\displaystyle\rho\frac{K_{P}{\omega_{n}}^{\frac{1-n}{n}}}{\theta_{0}}\int_{t_{0}-\theta\rho^{2}}^{t_{0}}\int_{A_{k_{r},\rho}(t)\backslash A_{k_{r+1},\rho}(t)}|\nabla u^{(k_{r})}|$$
(3.31)
Squaring this and using the Cauchy-Schwarz inequality, one has
$$\displaystyle\omega^{2}4^{-(r+1)}\mu^{2}(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)>\overline{m}-2^{-{(r+1)}}\omega\})\leq\\
\displaystyle\rho^{2}\frac{(K_{P}{\omega_{n}}^{\frac{1-n}{n}})^{2}}{\theta_{0}^{2}}\left[\int_{t_{0}-\theta\rho^{2}}^{t_{0}}\mu_{n}(A_{k_{r},\rho}(t)\backslash A_{k_{r+1},\rho}(t))\right]\left[\int_{Q_{\rho}\cap\Omega_{T}}|\nabla u^{(k_{r})}|^{2}\right]$$
(3.32)
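The passage from (3.31) to (3.32) is the Cauchy-Schwarz inequality applied on the space-time strip $D_{r}\equiv\{(x,t)\in Q_{\rho}\cap\Omega_{T}:k_{r}<u(x,t)\leq k_{r+1}\}$ (a shorthand introduced only here):

```latex
\left(\int_{D_{r}}|\nabla u^{(k_{r})}|\right)^{2}
  \;\leq\;\mu(D_{r})\int_{D_{r}}|\nabla u^{(k_{r})}|^{2}
  \;\leq\;\left[\int_{t_{0}-\theta\rho^{2}}^{t_{0}}
     \mu_{n}\bigl(A_{k_{r},\rho}(t)\backslash A_{k_{r+1},\rho}(t)\bigr)\,dt\right]
     \int_{Q_{\rho}\cap\Omega_{T}}|\nabla u^{(k_{r})}|^{2}\,,
```

since $\mu(D_{r})=\int_{t_{0}-\theta\rho^{2}}^{t_{0}}\mu_{n}(A_{k_{r},\rho}(t)\backslash A_{k_{r+1},\rho}(t))\,dt$.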
(ii)
To estimate the term $\int_{Q_{\rho}\cap\Omega_{T}}|\nabla u^{(k_{r})}|^{2}$ in (3.32), we use the definition of $B_{D}$. Observe that (3.26) implies that $k_{r}$ with $w=+u$ is admissible in (2.7) in $Q_{\sigma\rho}$. This with
$$\xi(x,t)=\begin{cases}\begin{aligned} \displaystyle 1&\displaystyle\text{in}\quad\overline{Q_{\rho}}\\
\displaystyle 0&\displaystyle\text{outside}\quad\overline{Q_{\sigma\rho}}\;\text{ and for }\;t=t_{0}-\theta(\sigma\rho)^{2}\\
\displaystyle\text{affine}&\displaystyle\text{otherwise}\end{aligned}\end{cases}$$
(3.33)
$$|\nabla\xi|\leq|\rho(\sigma-1)|^{-1},\quad|\xi,_{t}|\leq|\theta\rho^{2}(\sigma^{2}-1)|^{-1}$$
(3.34)
produces
$$\displaystyle\gamma^{-1}\int_{Q_{\rho}\cap\Omega_{T}}|\nabla u^{(k_{r})}|^{2}\leq\\
\displaystyle(|\rho(\sigma-1)|^{-2}+|\theta\rho^{2}(\sigma^{2}-1)|^{-1})\int_{Q_{\sigma\rho}\cap\Omega_{T}}|u^{(k_{r})}|^{2}+\left(\int_{t_{0}-\theta(\sigma\rho)^{2}}^{t_{0}}\mu_{n}^{\frac{r}{q}}({A_{k_{r},\sigma\rho}(t)})dt\right)^{\frac{2(1+\kappa)}{r}}$$
(3.35)
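The bounds (3.34) on the affine cutoff follow from the geometry of the interpolation: $\xi$ drops from $1$ to $0$ over a spatial shell of width $\sigma\rho-\rho$ and over a time interval of length $\theta(\sigma\rho)^{2}-\theta\rho^{2}$, so

```latex
|\nabla\xi|\;\leq\;\frac{1}{\sigma\rho-\rho}\;=\;\frac{1}{(\sigma-1)\rho}\,,
\qquad
|\xi,_{t}|\;\leq\;\frac{1}{\theta(\sigma\rho)^{2}-\theta\rho^{2}}
  \;=\;\frac{1}{\theta\rho^{2}(\sigma^{2}-1)}\,.
```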
We have
$$\int_{Q_{\sigma\rho}\cap\Omega_{T}}|u^{(k_{r})}|^{2}=\int_{Q_{\sigma\rho}\cap\Omega_{T}}|(u-\overline{m}+2^{-r}\omega)^{+}|^{2}\leq 4^{-r}\omega^{2}\omega_{n}\theta(\sigma\rho)^{n+2}$$
$$\left(\int_{t_{0}-\theta(\sigma\rho)^{2}}^{t_{0}}\mu_{n}^{\frac{r}{q}}({A_{k_{r},\sigma\rho}(t)})dt\right)^{\frac{2(1+\kappa)}{r}}\leq\omega_{n}\left(\theta(\sigma\rho)^{2+\frac{nr}{q}}\right)^{\frac{2(1+\kappa)}{r}}=\omega_{n}\theta^{\frac{2(1+\kappa)}{r}}(\sigma\rho)^{n(1+\kappa)}$$
where the definition of $u^{(k_{r})}$, the inclusion $A_{k_{r},\sigma\rho}(t)\subset B_{\sigma\rho}$ and the relation $\frac{1}{r}+\frac{n}{2q}=\frac{n}{4}$ (see the definition of $B_{D}$) are used. In view of the above two inequalities, (3.35) implies
$$\displaystyle\int_{Q_{\rho}\cap\Omega_{T}}|\nabla u^{(k_{r})}|^{2}\leq\\
\displaystyle\gamma\omega_{n}\left[(|\rho(\sigma-1)|^{-2}+|\theta\rho^{2}(\sigma^{2}-1)|^{-1})4^{-r}\omega^{2}\theta(\sigma\rho)^{n+2}+\theta^{\frac{2(1+\kappa)}{r}}(\sigma\rho)^{n(1+\kappa)}\right]\\
\displaystyle\leq\gamma\omega_{n}\left[4^{-r}\omega^{2}\theta(1+\theta)\rho^{n}\frac{\sigma^{n+2}}{\sigma-1}+\theta^{\frac{2(1+\kappa)}{r}}(\sigma\rho)^{n(1+\kappa)}\right]\leq\rho^{n}K_{\ref{tri7.5}}\left[4^{-r}\omega^{2}+\rho^{n\kappa}\right].$$
(3.36)
since, by the assumption $\theta\leq 1$, one can take
$$K_{\ref{tri7.5}}=\omega_{n}\gamma\frac{2^{n+2+2\max\left(1,\frac{1+\kappa}{r}\right)}}{\sigma-1}$$
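The exponent arithmetic behind the bound on the time integral of $\mu_{n}^{r/q}(A_{k_{r},\sigma\rho}(t))$ above can be checked directly from the scaling relation in the definition of $B_{D}$:

```latex
\frac{1}{r}+\frac{n}{2q}=\frac{n}{4}
\;\Longrightarrow\;
\frac{nr}{q}=\frac{nr}{2}-2
\;\Longrightarrow\;
\left(2+\frac{nr}{q}\right)\frac{2(1+\kappa)}{r}
  =\frac{nr}{2}\cdot\frac{2(1+\kappa)}{r}
  =n(1+\kappa)\,,
```

which is exactly the exponent of $(\sigma\rho)^{n(1+\kappa)}$.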
As $\rho^{n\kappa}=\rho^{\min(2\beta,n\kappa)+(n\kappa-2\beta)^{+}}\leq\rho^{(n\kappa-2\beta)^{+}}4^{-s}\omega^{2}$ holds by the assumption that (T1) fails, one has for $r\in[r_{0},s]$ and $\rho\leq 1$
$$\int_{Q_{\rho}\cap\Omega_{T}}|\nabla u^{(k_{r})}|^{2}\leq K_{\ref{tri7.5}}\rho^{n}4^{-r}\omega^{2}$$
(3.37)
(iii)
Use (3.37) in (3.32) to get for $r\in[r_{0},s]$
$$\displaystyle\omega^{2}4^{-(r+1)}\mu^{2}(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)>\overline{m}-2^{-{(r+1)}}\omega\})\leq\\
\displaystyle 4^{-r}\omega^{2}K_{\ref{lem73D}}\rho^{n+2}\left[\int_{t_{0}-\theta\rho^{2}}^{t_{0}}\mu_{n}(A_{k_{r},\rho}(t)\backslash A_{k_{r+1},\rho}(t))\right]$$
(3.38)
for
$$K_{\ref{lem73D}}=\frac{(K_{P}{\omega_{n}}^{\frac{1-n}{n}})^{2}}{\theta_{0}^{2}}K_{\ref{tri7.5}}=\left(\frac{1}{n}+\omega_{n}\right)^{2}\omega_{n}^{\frac{2-n}{n}}\gamma\frac{2^{3n+2+2\max\left(1,\frac{1+\kappa}{r}\right)}}{(\sigma-1)\theta_{0}^{2}}$$
(3.39)
Divide (3.38) by $\omega^{2}4^{-(r+1)}$; use $\overline{m}-2^{-{(r+1)}}\omega\leq\overline{m}-2^{-{(s-1)}}\omega$ for $r\in[r_{0},s-2]$ and the definition of $A_{k,\rho}(t)$:
$$\displaystyle\mu^{2}(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)>\overline{m}-2^{-{(s-1)}}\omega\})\leq\\
\displaystyle 4K_{\ref{lem73D}}\rho^{n+2}\mu(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)\in(k_{r},k_{r+1}]\}).$$
(3.40)
To enable further control of the constant, sum (3.40) over $r\in[r_{0},s-2]$:
$$\displaystyle\mu^{2}(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)>\overline{m}-2^{-{(s-1)}}\omega\})\leq\\
\displaystyle\frac{4K_{\ref{lem73D}}}{s-1-r_{0}}\rho^{n+2}\mu(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)\in(k_{r_{0}},k_{s-1}]\})\leq\frac{4\omega_{n}K_{\ref{lem73D}}\theta}{s-1-r_{0}}\rho^{2(n+2)}$$
(3.41)
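The gain of the factor $1/(s-1-r_{0})$ in (3.41) comes from the disjointness of the level strips: the sets $\{u\in(k_{r},k_{r+1}]\}$ for distinct $r$ are pairwise disjoint subsets of $Q_{\rho}\cap\Omega_{T}$, so

```latex
\sum_{r=r_{0}}^{s-2}
  \mu\bigl(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)\in(k_{r},k_{r+1}]\}\bigr)
  \;\leq\;\mu(Q_{\rho}\cap\Omega_{T})
  \;\leq\;\omega_{n}\theta\rho^{n+2}\,,
```

and dividing the summed inequality (3.40) by the number $s-1-r_{0}$ of terms yields (3.41).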
So the main part of the proof results in
$$\mu(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)>\overline{m}-2^{-{(s-1)}}\omega\})\leq\sqrt{\frac{4\theta\omega_{n}K_{\ref{lem73D}}}{s-1-r_{0}}}\rho^{(n+2)}$$
(3.42)
The proof concludes with a proper choice of $s$ satisfying:
$$\sqrt{\frac{4\theta\omega_{n}K_{\ref{lem73D}}}{s-1-r_{0}}}\leq\eta,\qquad 2c_{\Gamma}\sigma^{\beta}\leq 2^{s}$$
(3.43)
The first inequality gives (T2) from (3.42), while the second allows for (3.23). Recall (3.24); its second alternative
is treated analogously to the above case, with $w=-u$ instead of $+u$, and yields (T2$^{\prime}$).
∎
Performing a computation based on conditions (3.43), one obtains
Remark 3.0.
In Lemma 3.5 any
$$s\geq 1+\max\left[\left\lceil\log_{2}\frac{2M}{\delta}\right\rceil+\theta\left(4\left(\frac{1}{n}+\omega_{n}\right)^{2}\omega_{n}^{\frac{2}{n}}\gamma\frac{2^{3n+2+2\max\left(1,\frac{1+\kappa}{r}\right)}}{\eta^{2}(\sigma-1)\theta_{0}^{2}}\right),\;\log_{2}(2c_{\Gamma}\sigma^{\beta})\right]$$
(3.44)
is admissible. One can choose $\theta$ small enough to shrink
$$\theta\left(4\left(\frac{1}{n}+\omega_{n}\right)^{2}\omega_{n}^{\frac{2}{n}}%
\gamma\frac{2^{3n+2+2\max\left(1,\frac{1+\kappa}{r}\right)}}{\eta^{2}(\sigma-1%
)\theta_{0}^{2}}\right)$$
as needed. Recalling that $\sigma\leq 2$, a sufficient condition for $s$ reads
$$s>\left\lceil\log_{2}\frac{2Mc_{\Gamma}}{\delta}\right\rceil+2+\beta$$
(3.45)
Below we state, for $B_{N}$, an analogue of Lemma 3.22. Recall that $Q_{\sigma\rho}\equiv Q(\sigma\rho,\theta(\sigma\rho)^{2})$. We define the respective quantities without resorting to the boundary values, which are now unknown:
$$\overline{m}\equiv\max_{Q_{\sigma\rho}\cap\Omega_{T}}u,\quad\underline{m}%
\equiv\min_{Q_{\sigma\rho}\cap\Omega_{T}}u,\quad\omega\equiv\mathop{\text{osc}%
}_{Q_{\sigma\rho}\cap\Omega_{T}}u\;(=\overline{m}-\underline{m})$$
(3.46)
Lemma 3.6 (Trichotomy for $B_{N}$).
Assume that anti-inner-cusp condition (3.11) holds. Take $u\in B_{N}$ and a cylinder $Q_{\rho\sigma}$ with the time contraction parameter satisfying
$$\theta\leq\min\left(1,\frac{\theta_{0}}{2304\gamma},\;\left(\frac{\omega^{-%
\frac{2(1+\kappa)-q}{q}}_{n}}{128\gamma}\right)^{\frac{r}{2(1+\kappa)}}\right)$$
(3.47)
Then for any fixed $\eta>0$ and $\sigma\in(1,2]$ there exists $s=s(\eta,\sigma)$ for which the following trichotomy holds:
$$\displaystyle\text{either}\quad(T1)$$
$$\displaystyle\qquad\omega\leq 2^{s}\rho^{\frac{n\kappa}{2}}$$
(3.48)
$$\displaystyle\text{or}\quad(T2)$$
$$\displaystyle\qquad\mu(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)>\overline{m}-2%
^{-s+1}\omega\})\leq\eta\rho^{n+2}$$
$$\displaystyle\text{or}\quad(T2^{\prime})$$
$$\displaystyle\qquad\mu(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)<\underline{m}+%
2^{-s+1}\omega\})\leq\eta\rho^{n+2}$$
An attempt to rewrite the proof of Lemma (3.22) fails at obtaining (3.29) from (3.28). Extending a truncated $u$ by zero outside $\Omega_{T}$, as in (3.27), no longer produces a Sobolev function $h$, because the boundary values of $u$ are not known. Thus one may either extend $u$ and define $h$ on $Q_{\rho}$, or restrict the definition of $h$ to $Q_{\rho}\cap\Omega_{T}$. In both cases we lose an easy way to control $\mu_{n}(\{h(t)=0\})$. Regaining this control is the main new point in the proof of Lemma (3.48). In the proof below we focus on this problem and only sketch the part which overlaps with the previous proof.
Recall that
$$\lceil x\rceil=\min\{c\in\mathbb{N}\cup\{0\}\,:\,c\geq x\}$$
Proof.
Introduce
$$\displaystyle r_{0}$$
$$\displaystyle\equiv\left\lceil log_{2}\frac{2M}{\delta}\right\rceil,$$
(3.49)
$$\displaystyle k_{r}$$
$$\displaystyle\equiv\overline{m}-2^{-r}\omega\quad\text{ for }r\geq r_{0},$$
the levels $k_{r}$ are admissible in (2.7).
By the definitions of $A,\omega,\overline{m},\underline{m}$, either
$$A^{u}_{\overline{m}-\frac{\omega}{2},\rho}(t_{0}-\theta\rho^{2})\leq\frac{1}{2%
}\mu_{n}(B_{\rho}\cap\Omega)$$
(3.50)
or
$$A^{-u}_{\underline{m}-\frac{\omega}{2},\rho}(t_{0}-\theta\rho^{2})\leq\frac{1}%
{2}\mu_{n}(B_{\rho}\cap\Omega)$$
(3.51)
Consider the case when (3.50) holds (the other is treated analogously, with $-u$ in place of $u$); it implies, for $r\geq 1$,
$$A^{u}_{\overline{m}-\frac{\omega}{2^{r}},\rho}(t_{0}-\theta\rho^{2})\leq\frac{%
1}{2}\mu_{n}(B_{\rho}\cap\Omega).$$
(3.52)
One can assume both that
$$\max_{Q_{\rho}\cap\Omega_{T}}u>\overline{m}-2^{-s}\omega$$
(3.53)
holds (otherwise (T2) holds with $\eta=0$) and that (T1) fails:
$$\omega>2^{s}\rho^{\frac{n\kappa}{2}}$$
(3.54)
The essential part of the proof that follows is divided into a few steps.
(i)
Define
$$h(x,t)\equiv\begin{cases}\begin{aligned} \displaystyle k_{r+1}-k_{r}&\displaystyle\quad\text{if }(x,t)\in Q_{\rho}\cap\Omega_{T},\ u(x,t)>k_{r+1},\\
\displaystyle u(x,t)-k_{r}&\displaystyle\quad\text{if }(x,t)\in Q_{\rho}\cap\Omega_{T},\ k_{r}<u(x,t)\leq k_{r+1},\\
\displaystyle 0&\displaystyle\quad\text{otherwise}\end{aligned}\end{cases}$$
(3.55)
As $h(\cdot,t)\in W^{1,1}(B_{\rho}\cap\Omega)$, Lemma 2.2 applies; choosing $\eta\equiv 1$ in it, one has
$$\int_{B_{\rho}\cap\Omega}h(t)\leq\tilde{K}_{P}\frac{\rho^{n+1}}{\mu_{n}(\{x\in B_{\rho}\cap\Omega:h(x,t)=0\})}\int_{B_{\rho}\cap\Omega}|\nabla h(t)|$$
(3.56)
which implies
$$\displaystyle(k_{r+1}-k_{r})\mu_{n}(A^{u}_{k_{r+1},\rho}(t))\leq\\
\displaystyle\tilde{K}_{P}\frac{\rho^{n+1}}{\mu_{n}(\{x\in B_{\rho}(x_{0})\cap%
\Omega:u(x,t)\leq k_{r}\})}\int_{A^{u}_{k_{r},\rho}(t)\backslash A^{u}_{k_{r+1%
},\rho}(t)}|\nabla u^{(k_{r})}(t)|$$
(3.57)
As already remarked just before the proof, in (3.57) we need an estimate of the following type:
$$\mu_{n}(\{x\in B_{\rho}(x_{0})\cap\Omega:u(x,t)\leq k_{r}\})\geq\chi\omega_{n}%
\rho^{n};$$
(3.58)
for some nonzero $\chi$. Such a lower bound is obtained in the next step.
(ii)
In (2.7) take a function $\eta(x)$ cutting off between $B_{\sfrac{\rho}{\lambda}}$ and $B_{\rho}$ with $\lambda>1$:
$$\eta(x,t)=\begin{cases}\begin{aligned} \displaystyle 1&\displaystyle\text{in}%
\quad\overline{B_{\sfrac{\rho}{\lambda}}}\\
\displaystyle 0&\displaystyle\text{outside}\quad\overline{B_{\rho}}\\
\displaystyle\text{affine}&\displaystyle\text{otherwise}\end{aligned}\end{cases}$$
(3.59)
$$|\nabla\eta|^{2}\leq\frac{\lambda^{2}}{\rho^{2}(\lambda-1)^{2}}$$
(3.60)
to obtain for $k_{r}$, admissible in view of (3.49),
$$\displaystyle\max_{t_{0}-\theta\rho^{2}\leq t\leq t_{0}}\int_{B_{\sfrac{\rho}{%
\lambda}}\cap\Omega}|u^{(k_{r})}(t)|^{2}\leq\\
\displaystyle\int_{B_{{\rho}}\cap\Omega}|u^{(k_{r})}(t_{0}-\theta\rho^{2})|^{2%
}+\gamma\left[\frac{\lambda^{2}}{\rho^{2}(\lambda-1)^{2}}\int_{Q_{{\rho,\tau}}%
\cap\Omega}|u^{(k_{r})}|^{2}+(\theta\rho^{2})^{\frac{2(1+\kappa)}{r}}\mu^{%
\frac{2(1+\kappa)}{q}}_{n}(B_{{\rho}}\cap\Omega)\right].$$
(3.61)
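The coefficient $\frac{\lambda^{2}}{\rho^{2}(\lambda-1)^{2}}$ appearing in (3.61) reflects the affine profile of the cutoff (3.59): the transition annulus has width $\rho-\sfrac{\rho}{\lambda}$, so

```latex
$$|\nabla\eta|\leq\frac{1}{\rho-\sfrac{\rho}{\lambda}}=\frac{\lambda}{\rho(\lambda-1)},
\qquad|\nabla\eta|^{2}\leq\frac{\lambda^{2}}{\rho^{2}(\lambda-1)^{2}},$$
```

and it is the square of the gradient that enters the energy estimate.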
Estimate the first summand on the right-hand side of (3.61) by (3.52); since the left-hand side satisfies, for $l>0$,
$$\int_{B_{\sfrac{\rho}{\lambda}}\cap\Omega}|u^{(k_{r})}(t)|^{2}=\int_{A_{k_{r},%
\sfrac{\rho}{\lambda}}(t)}|u^{(k_{r})}(t)|^{2}\geq\int_{A_{k_{r}+l,\sfrac{\rho%
}{\lambda}}(t)}|u^{(k_{r})}(t)|^{2}\geq l^{2}\mu_{n}(A_{k_{r}+l,\sfrac{\rho}{%
\lambda}}(t))$$
we have
$$\displaystyle l^{2}\mu_{n}(A_{k_{r}+l,\sfrac{\rho}{\lambda}}(t))\leq\frac{1}{2%
}|u^{(k_{r})}(t_{0}-\theta\rho^{2})|_{L^{\infty}(B_{{\rho}}\cap\Omega)}^{2}\mu%
_{n}(B_{\rho}\cap\Omega)+\\
\displaystyle\gamma\frac{\lambda^{2}}{\rho^{2}(\lambda-1)^{2}}\max_{t\in[t_{0}%
-\theta\rho^{2},t_{0}]}|u^{(k_{r})}(t)|_{L^{\infty}(B_{{\rho}}\cap\Omega)}^{2}%
\theta\rho^{2}\mu_{n}(B_{\rho}\cap\Omega)+\gamma(\theta\rho^{2})^{\frac{2(1+%
\kappa)}{r}}\mu^{\frac{2(1+\kappa)}{q}}_{n}(B_{{\rho}}\cap\Omega)$$
(3.62)
Define $H\equiv\max_{t\in[t_{0}-\theta\rho^{2},t_{0}]}|u^{(k_{r})}(t)|_{L^{\infty}(B_{{\rho}}\cap\Omega)}$ and take $l=H\psi$. Hence dividing (3.62) by $l^{2}$ and using the estimate $\mu_{n}(B_{\rho}\cap\Omega)\leq\mu_{n}(B_{\rho})$ in its last summand yield
$$\displaystyle\mu_{n}(A_{k_{r}+H\psi,\sfrac{\rho}{\lambda}}(t))\\
\displaystyle\leq\frac{\mu_{n}(B_{\rho}\cap\Omega)}{\psi^{2}}\left[\frac{1}{2}%
+\gamma\left[\frac{\lambda^{2}\theta\rho^{2}}{\rho^{2}(\lambda-1)^{2}}+\frac{(%
\theta\rho^{2})^{\frac{2(1+\kappa)}{r}}\omega^{\frac{2(1+\kappa)-q}{q}}_{n}%
\rho^{n\frac{2(1+\kappa)-q}{q}}}{H^{2}}\right]\right]$$
(3.63)
Assume
$$r\leq s-1$$
(3.64)
this, together with the definitions of $H,k_{r},\overline{m}$, with (3.53), and with $s\geq r_{0}$, gives
$$\displaystyle H\psi$$
$$\displaystyle=\left(\max_{Q_{\rho}\cap\Omega_{T}}u-[\overline{m}-2^{-r}\omega]%
\right)\psi\leq 2^{-r}\omega\psi$$
(3.65)
$$\displaystyle H$$
$$\displaystyle=\max_{Q_{\rho}\cap\Omega_{T}}u-[\overline{m}-2^{-r}\omega]>2^{-r%
}\omega-2^{-s}\omega\geq 2^{-s}\omega\geq\rho^{\frac{n\kappa}{2}}$$
The last inequality stems from (3.54). Inserting (3.65) into (3.63) yields
$$\displaystyle\mu_{n}(A_{\overline{m}-2^{-r}\omega(1-\psi),\sfrac{\rho}{\lambda%
}}(t))$$
$$\displaystyle\leq\mu_{n}(A_{k_{r}+H\psi,\sfrac{\rho}{\lambda}}(t))$$
(3.66)
$$\displaystyle\mu_{n}(A_{k_{r}+H\psi,\sfrac{\rho}{\lambda}}(t))$$
$$\displaystyle\leq\frac{\mu_{n}(B_{\rho}\cap\Omega)}{\psi^{2}}\left[\frac{1}{2}%
+\gamma\left[\frac{\lambda^{2}\theta}{(\lambda-1)^{2}}+\theta^{\frac{2(1+%
\kappa)}{r}}\omega^{\frac{2(1+\kappa)-q}{q}}_{n}\rho^{\nu}\right]\right]$$
where
$$\nu=n\frac{2(1+\kappa)-q}{q}+{\frac{4(1+\kappa)}{r}}-n\kappa=4(1+\kappa)\left[%
\frac{1}{r}+\frac{n}{2q}\right]-n(1+\kappa)=0$$
because by the definition of $B_{N}$ one has $\frac{1}{r}+\frac{n}{2q}=\frac{n}{4}$; hence for $\psi=\sfrac{3}{4}$
$$\mu_{n}(A_{\overline{m}-2^{-(r+2)}\omega,\sfrac{\rho}{\lambda}}(t))\leq\mu_{n}%
(B_{\rho}\cap\Omega)\left[\frac{8}{9}+\frac{16\gamma}{9}\left[\frac{\lambda^{2%
}\theta}{(\lambda-1)^{2}}+\theta^{\frac{2(1+\kappa)}{r}}\omega^{\frac{2(1+%
\kappa)-q}{q}}_{n}\right]\right]$$
(3.67)
Combine (3.67) with
$$\mu_{n}(A_{k,\rho}(t))\leq\mu_{n}(A_{k,\sfrac{\rho}{\lambda}}(t))+\omega_{n}%
\left(\frac{\lambda-1}{\lambda}\right)^{n}\rho^{n}\leq\mu_{n}(A_{k,\sfrac{\rho%
}{\lambda}}(t))+\frac{1}{18}\mu_{n}(B_{\rho}\cap\Omega),$$
(3.68)
where the last inequality holds for
$$\left(\frac{\lambda-1}{\lambda}\right)^{n}=\frac{\theta_{0}}{18}$$
(3.69)
thanks to the anti-inner-cusp condition (3.11), to get
$$\mu_{n}(A_{\overline{m}-2^{-(r+2)}\omega,\rho}(t))\leq\mu_{n}(B_{\rho}\cap%
\Omega)\left[\frac{17}{18}+\frac{16\gamma}{9}\left[\frac{\lambda^{2}\theta}{(%
\lambda-1)^{2}}+\theta^{\frac{2(1+\kappa)}{r}}\omega^{\frac{2(1+\kappa)-q}{q}}%
_{n}\right]\right]$$
(3.70)
Recall that estimating (3.58) is the aim of this step of the proof; (3.70) implies
$$\displaystyle\mu_{n}(\{x\in B_{\rho}(x_{0})\cap\Omega:u(x,t)\leq k_{r+2}\})%
\geq\\
\displaystyle\mu_{n}(B_{\rho}\cap\Omega)\left[\frac{1}{18}-\frac{16\gamma}{9}%
\left[\frac{\lambda^{2}\theta}{(\lambda-1)^{2}}+\theta^{\frac{2(1+\kappa)}{r}}%
\omega^{\frac{2(1+\kappa)-q}{q}}_{n}\right]\right]\geq\\
\displaystyle\theta_{0}\omega_{n}\rho^{n}\left[\frac{1}{18}-\frac{16\gamma}{9}%
\left[\theta\left(\frac{18}{\theta_{0}}\right)^{\frac{2}{n}}+\theta^{\frac{2(1%
+\kappa)}{r}}\omega^{\frac{2(1+\kappa)-q}{q}}_{n}\right]\right],$$
(3.71)
where the last inequality results from the anti-inner-cusp condition (3.11) and (3.69). Therefore, for (3.58) to hold, one needs
$$\chi\equiv\theta_{0}\left[\frac{1}{18}-\frac{16\gamma}{9}\left[\theta\left(%
\frac{18}{\theta_{0}}\right)^{\frac{2}{n}}+\theta^{\frac{2(1+\kappa)}{r}}%
\omega^{\frac{2(1+\kappa)-q}{q}}_{n}\right]\right]>0.$$
(3.72)
For any $\theta$ satisfying (3.47) one computes $\chi\geq\sfrac{1}{36}$. Summing up, estimate (3.58) indeed holds in the following form:
$$\mu_{n}(\{x\in B_{\rho}(x_{0})\cap\Omega:u(x,t)\leq k_{r+2}\})\geq\frac{\omega%
_{n}\rho^{n}}{36}$$
(3.73)
provided (3.64) holds, i.e. $r\leq s-1$.
(iii)
Using (3.73) in (3.57) gives, for ${t\in[t_{0}-\theta\rho^{2},t_{0}]}$,
$$(k_{r+1}-k_{r})\mu_{n}(A^{u}_{k_{r+1},\rho}(t))\leq\frac{36}{\omega_{n}}\tilde%
{K}_{P}\rho\int_{A^{u}_{k_{r},\rho}(t)\backslash A^{u}_{k_{r+1},\rho}(t)}|%
\nabla u^{(k_{r})}(t)|$$
(3.74)
which is an exact analogue of (3.30) in the proof of Lemma 3.22. We only sketch the remainder of the proof, as from (3.74) onwards it proceeds along the lines of Lemma 3.22. Estimate (3.74) implies an analogue of (3.32):
$$\displaystyle\omega^{2}4^{-(r+1)}\mu^{2}(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x%
,t)>\overline{m}-2^{-{r+1}}\omega\})\leq\\
\displaystyle\rho^{2}\left(\frac{36}{\omega_{n}}\tilde{K}_{P}\right)^{2}\left[%
\int_{t_{0}-\theta\rho^{2}}^{t_{0}}\mu_{n}(A_{k_{r},\rho}(t)\backslash A_{k_{r%
+1},\rho}(t))\right]\left[\int_{Q_{\rho}\cap\Omega_{T}}|\nabla u^{(k_{r})}|^{2%
}\right]$$
(3.75)
This, combined with the computation carried over from point (ii) of Lemma 3.22, implies for $r\in[r_{0},s]$ a counterpart of (3.38):
$$\displaystyle\omega^{2}4^{-(r+1)}\mu^{2}(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x%
,t)>\overline{m}-2^{-{r+1}}\omega\})\leq\\
\displaystyle 4^{-r}\omega^{2}K_{\ref{lem73N}}\rho^{n+2}\left[\int_{t_{0}-%
\theta\rho^{2}}^{t_{0}}\mu_{n}(A_{k_{r},\rho}(t)\backslash A_{k_{r+1},\rho}(t)%
)\right]$$
(3.76)
with
$$K_{\ref{lem73N}}=(36\tilde{K}_{P})^{2}\omega_{n}\gamma\frac{2^{n+2+2\max\left(%
1,\frac{1+\kappa}{r}\right)}}{\sigma-1}$$
(3.77)
As in the $B_{D}$ case, summing (3.76) over $r\in[r_{0},s-2]$ gives
$$\mu(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)>\overline{m}-2^{-{(s-1)}}\omega\}%
)\leq\sqrt{\frac{4\omega_{n}\theta K_{\ref{lem73N}}}{s-1-r_{0}}}\rho^{(n+2)}$$
(3.78)
Estimate (3.78) implies (T2) provided
$$\sqrt{\frac{4\theta\omega_{n}K_{\ref{lem73N}}}{s-1-r_{0}}}\rho^{(n+2)}\leq\eta$$
(3.79)
∎
As before, we formulate
Remark 3.0.
In Lemma 3.48 any
$$s\geq\left\lceil\log_{2}\frac{2M}{\delta}\right\rceil+(72\omega_{n}\tilde{K}_{P})^{2}\gamma\theta\frac{2^{n+2+2\max\left(1,\frac{1+\kappa}{r}\right)}}{\eta^{2}(\sigma-1)}$$
(3.80)
is admissible. Therefore, taking $\theta$ such that
$$(72\omega_{n}\tilde{K}_{P})^{2}\gamma\theta\frac{2^{n+2+2\max\left(1,\frac{1+%
\kappa}{r}\right)}}{\eta^{2}(\sigma-1)}\leq 1$$
we see that any
$$s\geq\max\left(3,\left\lceil\log_{2}\frac{2M}{\delta}\right\rceil+1\right)$$
(3.81)
is admissible.
In the following two lemmas we elaborate the case when alternatives (T2), (T2’) of Lemmas 3.22 and 3.48 hold.
Lemma 3.7 (Vanishing measure).
Take $u\in B_{D}$ or $u\in B_{N}$. There exists $\eta_{0}>0$ such that for any level $k$ admissible in (2.7) and any boundary cylinder $Q_{\rho}\subset\Omega_{T}$, the inequality
$$\mu(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)>k\})\leq\eta_{0}\rho^{n+2}$$
(3.82)
implies that
$$\text{either }\max_{Q_{\rho}}u-k<\rho^{\frac{n\kappa}{2}}\quad\text{ or }\quad%
\mu\left(\left\{(x,t)\in Q_{\sfrac{\rho}{\sigma}}\cap\Omega_{T}:u(x,t)>\frac{k%
+\max_{Q_{\rho}}u}{2}\right\}\right)=0$$
(3.83)
Lemma 3.8.
We are now ready to present the key results enabling quantitative control of oscillations.
Lemma 3.9 (Oscillation control for $B_{D}$).
Assume that the anti-outer-cusp condition (3.10) holds. Take a cylinder $Q_{\rho\sigma}$ with the time contraction parameter $\theta\leq 1$ and $u\in B_{D}$. Then for any fixed $\sigma\in(1,2]$ there exists $s=s(\sigma)$ for which either
$$\displaystyle\mathrm{(O1)}\qquad\mathop{\text{osc}}_{Q_{\sfrac{\rho}{\sigma}}%
\cap\Omega_{T}}u\leq 2^{s}\rho^{\frac{n\kappa}{2}}$$
(3.84)
or
$$\displaystyle\mathrm{(O2)}\qquad\mathop{\text{osc}}_{Q_{\sfrac{\rho}{\sigma}}%
\cap\Omega_{T}}u\leq(1-2^{-s})\mathop{\text{osc}}_{Q_{\rho\sigma}\cap\Omega_{T%
}}u$$
for every $\rho\leq\frac{\rho_{0}}{\sigma}$, where $\rho_{0}$ stems from anti-outer-cusp condition (3.10).
To be precise, $s=s(\eta_{0},\sigma)$ from Lemma 3.22 with $\eta_{0}$ fixed by Lemma 3.7.
Lemma 3.10 (Oscillation control for $B_{N}$).
Assume that the anti-inner-cusp condition (3.11) holds. Take $u\in B_{N}$ and a cylinder $Q_{\rho\sigma}$ with the time contraction parameter satisfying (3.47).
Then for any fixed $\sigma\in(1,2]$ there exists $s=s(\sigma)$ for which either
$$\displaystyle\mathrm{(O1)}\qquad\mathop{\text{osc}}_{Q_{\sfrac{\rho}{\sigma}}%
\cap\Omega_{T}}u\leq 2^{s}\rho^{\frac{n\kappa}{2}}$$
(3.85)
or
$$\displaystyle\mathrm{(O2)}\qquad\mathop{\text{osc}}_{Q_{\sfrac{\rho}{\sigma}}%
\cap\Omega_{T}}u\leq(1-2^{-s})\mathop{\text{osc}}_{Q_{\rho\sigma}\cap\Omega_{T%
}}u$$
for every $\rho\leq\frac{\rho_{0}}{\sigma}$, where $\rho_{0}$ stems from the anti-inner-cusp condition (3.11).
To be precise, $s=s(\eta_{0},\sigma)$ from Lemma 3.48 with $\eta_{0}$ fixed by Lemma 3.8.
Having Lemmas 3.48, 3.8 and 3.22, 3.7, one shows Lemmas 3.10, 3.9 as Lemma 7.4 from [lsu], Chapter II.7 (for some more details in the case of cylinders scaled by a factor $\sigma$ instead of $2$ as in [lsu], compare Lemma 5.3 from [zaj]). Because the argument is straightforward, we present it below for the reader’s convenience.
Proof of Lemmas 3.9, 3.10.
As we consider both the $B_{N}$ and the $B_{D}$ case, for convenience we refer to either Lemma 3.22 or 3.48 as the trichotomy lemma and to Lemma 3.7 or 3.8 as the vanishing measure lemma. Fix $\sigma_{0}\in(1,2]$. Set the smallness parameter $\eta$ equal to $\eta_{0}$ from the vanishing measure lemmas. Take $s_{0}=s(\eta_{0},\sigma_{0})$ from the trichotomy lemmas. Suppose (O1) does not hold. Then, in view of the trichotomy lemmas, (T2) or (T2’) is valid. Focus on the case when (T2) holds (again, the other is treated in the same way, with $-u$ instead of $u$):
$$\mu(\{(x,t)\in Q_{\rho}\cap\Omega_{T}:u(x,t)>k_{s_{0}-1}\})\leq\eta_{0}\rho^{n%
+2}\text{ \quad with \quad}k_{s_{0}-1}=\overline{m}-2^{-(s_{0}-1)}\omega$$
In view of the definition of $s_{0}$, the level $k_{s_{0}-1}$ is admissible in (2.7). Observe that for a fixed $\eta_{0}$ the thesis of the vanishing measure lemmas holds for every level $k$ admissible in (2.7), independently of $s$, so there is no circularity in the above choice of parameters.
This allows us, via the vanishing measure lemmas, to state
that either
$$\max_{Q_{\rho}}u-k_{s_{0}-1}<\rho^{\frac{n\kappa}{2}}$$
(3.86)
or
$$\mu\left(\left\{(x,t)\in Q_{\sfrac{\rho}{\sigma}}\cap\Omega_{T}:u(x,t)>\frac{k%
_{s_{0}-1}+\max_{Q_{\rho}}u}{2}\right\}\right)=0$$
(3.87)
In view of the definition of $k_{s_{0}-1}$ and the assumption that (O1) fails, (3.86) yields
$$\max_{Q_{\sfrac{\rho}{\sigma}}\cap\Omega_{T}}u\leq\max_{Q_{\rho}\cap\Omega_{T}%
}u<k_{s_{0}-1}+\rho^{\frac{n\kappa}{2}}<\overline{m}-2^{-(s_{0}-1)}\omega+2^{-%
s_{0}}\mathop{\text{osc}}_{Q_{\sfrac{\rho}{\sigma}}\cap\Omega_{T}}u\leq%
\overline{m}-2^{-s_{0}}\omega.$$
(3.88)
If instead (3.87) holds, then
$$\max_{Q_{\sfrac{\rho}{\sigma}}\cap\Omega_{T}}u\leq\frac{\overline{m}-2^{-(s_{0%
}-1)}\omega+\max_{Q_{\rho}}u}{2}\leq\overline{m}-2^{-s_{0}}\omega$$
(3.89)
Therefore (3.88) and (3.89) imply the thesis, because
$$\mathop{\text{osc}}_{Q_{\sfrac{\rho}{\sigma}}\cap\Omega_{T}}u=\max_{Q_{\sfrac{%
\rho}{\sigma}}\cap\Omega_{T}}u-\min_{Q_{\sfrac{\rho}{\sigma}}\cap\Omega_{T}}u<%
\overline{m}-2^{-s_{0}}\omega-\min_{Q_{\sfrac{\rho}{\sigma}}\cap\Omega_{T}}u%
\leq(1-2^{-s_{0}})\mathop{\text{osc}}_{Q_{\rho\sigma}\cap\Omega_{T}}u.$$
(3.90)
∎
We are now ready to derive the Hölder regularity result from the oscillation control lemmas formulated above, i.e. Lemmas 3.9 and 3.10. We use the fact below, where as usual $Q_{\rho}$ denotes $Q(\rho,\theta\rho^{2})$.
Lemma 3.11.
Fix $\theta$. If a measurable, bounded $u:Q_{\rho_{0}}\cap\Omega_{T}\rightarrow\mathbb{R}$ satisfies, for some $\eta<1$, $b>1$,
$$\text{either }\mathop{\text{osc}}_{Q_{{\rho}}\cap\Omega_{T}}u\leq\eta\mathop{\text{osc}}_{Q_{b\rho}\cap\Omega_{T}}u\quad\text{or}\quad\mathop{\text{osc}}_{Q_{{\rho}}\cap\Omega_{T}}u\leq c_{1}\rho^{\delta}$$
(3.91)
then $u$ is Hölder continuous; more precisely, for every $\rho\leq b^{-1}\rho_{0}$ one has
$$\mathop{\text{osc}}_{Q_{\rho}\cap\Omega_{T}}u\leq C\rho^{\alpha}$$
with
$$\alpha=\min(-\log_{b}\eta,\delta),\quad C=(\sfrac{b}{\rho_{0}})^{\alpha}\max\left(\mathop{\text{osc}}_{Q_{\rho_{0}}}u,c_{1}\rho_{0}^{\delta}\right)$$
The proof of the above Lemma 3.11 can be found in [lsu], Chapter II.5. Finally, we prove the main theorems.
Proof of Theorems 3.14, 3.18.
Combining the oscillation control lemmas, i.e. Lemmas 3.9 and 3.10, with Lemma 3.11 gives the main result. Exact estimates for the quantity $s$ are given in the two Remarks above.
∎
Proof of Theorems 3.3, 3.4.
Having the theory of boundary De Giorgi–Ladyzhenskaya classes, we carry out these proofs exactly as the proofs of Lemma 3.2 and the Main Theorem in [zaj2].
∎
References
(1)
Ladyzhenskaya, O. A., Solonnikov V. A., Uraltseva, N. N. Linear and quasilinear equations of parabolic type,
(2)
Zajaczkowski, W. M. Global regular axially symmetric solutions to the Navier-Stokes equations in a periodic cylinder,
(3)
Zajaczkowski, W. M. The Hölder continuity of the swirl for the Navier-Stokes motions,
Invertible harmonic mappings beyond Kneser theorem and quasiconformal harmonic mappings
David Kalaj
University of Montenegro, Faculty of Natural Sciences and Mathematics,
Cetinjski put b.b. 81000, Podgorica, Montenegro
[email protected]
Abstract.
In this paper we extend the Rado–Choquet–Kneser theorem to mappings
with Lipschitz boundary data and essentially positive Jacobian at
the boundary, without any restriction on the convexity of the image domain.
The proof is based on a recent extension of the Rado–Choquet–Kneser
theorem by Alessandrini and Nesi [2] and uses an
approximation scheme. Some applications to the family of
quasiconformal harmonic mappings between Jordan domains are given.
Key words and phrases:Planar harmonic mappings, Quasiconformal mapping,
Convex domains, Rado-Kneser-Choquet theorem
1. Introduction and statement of the main result
Harmonic mappings in the plane are univalent complex-valued harmonic
functions of a complex variable. Conformal mappings are a special
case where the real and imaginary parts are conjugate harmonic
functions, satisfying the Cauchy-Riemann equations. Harmonic
mappings were studied classically by differential geometers because
they provide isothermal (or conformal) coordinates for minimal
surfaces. More recently they have been actively investigated by
complex analysts as generalizations of univalent analytic functions,
or conformal mappings. For the background to this theory we refer to
the book of Duren [6]. If $w$ is a univalent complex-valued
harmonic function, then by Lewy’s theorem (see [24]), $w$ has a
non-vanishing Jacobian and consequently, by the inverse
mapping theorem, $w$ is a diffeomorphism.
Moreover, if $w$ is a harmonic mapping of the unit disk $\mathbf{U}$
onto a convex Jordan domain $\Omega$, mapping the boundary $\mathbf{T}=\partial\mathbf{U}$ onto $\partial\Omega$ homeomorphically, then
$w$ is a diffeomorphism. This is the celebrated theorem of Rado, Kneser
and Choquet ([21]). This theorem has been extended in various
directions (see for example [11], [3], [36] and
[35]). One of the recent extensions is the following
proposition, due to Alessandrini and Nesi, which is one of the main
tools in proving our main result.
Proposition 1.1.
[2] Let $F:\mathbf{T}\to\gamma\subset\mathbf{C}$ be an orientation preserving diffeomorphism
of class $C^{1}$ onto a simple closed curve of the complex plane
$\mathbf{C}$. Let $D$ be a bounded domain such that $\partial D=\gamma$. Let $w=P[F]\in C^{1}(\overline{\mathbf{U}};\mathbf{C}),$ where
$P[f]$ is the Poisson extension of $F$. The mapping $w$ is a
diffeomorphism of $\overline{\mathbf{U}}$ onto $\overline{D}$ if and
only if
(1.1)
$$J_{w}(e^{it})>0\text{ everywhere on}\ \mathbf{T},$$
where $J_{w}(e^{it}):=\lim_{r\to 1^{-}}J_{w}(re^{it})$, and $J_{w}(re^{it})$ is the Jacobian
of $w$ at $re^{it}$.
In this paper we generalize Rado-Kneser-Choquet theorem as follows.
Theorem 1.2 (The main result).
Let $F:\mathbf{T}\to\gamma\subset\mathbf{C}$ be an orientation preserving Lipschitz weak
homeomorphism of the unit circle $\mathbf{T}$ onto a $C^{1,\alpha}$
smooth Jordan curve. Let $D$ be a bounded domain such that $\partial D=\gamma$, and let $w=P[F]$. Then $J_{w}(e^{it})/|F^{\prime}(t)|$ exists a.e. in $\mathbf{T}$
and has a continuous extension $T_{w}(e^{it})$ to $\mathbf{T}$. If
(1.2)
$$T_{w}(e^{it})>0\ \ \text{ everywhere on}\ \ \mathbf{T},$$
then the mapping $w=P[F]$ is a diffeomorphism of $\mathbf{U}$ onto
$D$.
In order to compare this statement with Kneser’s Theorem, it is
worth noticing that when $D$ is convex, then by Remark 3.2
the condition (1.2) is automatically satisfied.
It follows from Theorem 1.2 that, under its conditions, the
Jacobian $J_{w}$ of $w$ has a continuous extension to the boundary
provided that $F\in C^{1}(\mathbf{T})$. It should be noticed that
this does not mean that the partial derivatives of $w$
necessarily have a continuous extension to the boundary (see e.g.
[27] for a counterexample).
Note that in Theorem 1.2, which is proved in Section 3, we impose no
restriction on the convexity of the image domain.
Using this theorem, in Section 4 we characterize all quasiconformal
harmonic mappings between the unit disk $\mathbf{U}$ and a smooth
Jordan domain $D$ in terms of boundary data (see
Theorem 4.1); this can be considered a variation of
Proposition 1.1.
2. Preliminaries
2.1. Arc length parameterization of a Jordan curve
Suppose
that $\gamma$ is a rectifiable Jordan curve in the complex plane
$\mathbf{C}$. Denote by $l$ the length of $\gamma$ and let
$g:[0,l]\mapsto\gamma$ be an arc length parameterization of
$\gamma$, i.e. a parameterization satisfying the condition:
$$|g^{\prime}(s)|=1\text{ for all $s\in[0,l]$}.$$
We will say that $\gamma$ is of class $C^{1,\alpha}$, $0<\alpha\leq 1$, if $g$ is of class $C^{1}$ and
$$\sup_{t,s}\frac{|g^{\prime}(t)-g^{\prime}(s)|}{|t-s|^{\alpha}}<\infty.$$
Definition 2.1.
Let $l=|\gamma|$.
We will say that a surjective function $F=g\circ f:\mathbf{T}\to\gamma$ is a weak homeomorphism, if $f:[0,2\pi]\to[0,l]$ is a
nondecreasing surjective function.
Definition 2.2.
Let $f:[a,b]\to\mathbf{C}$ be a continuous function. The modulus of
continuity of $f$ is
$$\omega(t)=\omega_{f}(t)=\sup_{|x-y|\leq t}|f(x)-f(y)|.\,$$
The function $f$ is called Dini continuous if
(2.1)
$$\int_{0^{+}}\frac{\omega_{f}(t)}{t}\,dt<\infty.$$
Here
$\int_{0^{+}}:=\int_{0}^{k}$ for some positive constant $k$. A smooth
Jordan curve $\gamma$ of length $l=|\gamma|$ is said to be
Dini smooth if $g^{\prime}$ is Dini continuous. Observe that every
$C^{1,\alpha}$ Jordan curve is Dini smooth.
Let
(2.2)
$$K(s,t)=\text{Re}\,[\overline{(g(t)-g(s))}\cdot ig^{\prime}(s)]$$
be a function defined on $[0,l]\times[0,l]$. By
$K(s\pm l,t\pm l)=K(s,t)$ we extend it to $\mathbf{R}\times\mathbf{R}$. Note that $ig^{\prime}(s)$ is the inner unit normal vector of $\gamma$
at $g(s)$ and therefore, if $\gamma$ is convex then
(2.3)
$$K(s,t)\geq 0\text{ for every $s$
and $t$}.$$
Suppose now that $F:\mathbf{R}\mapsto\gamma$
is an arbitrary $2\pi$ periodic Lipschitz function such that
$F|_{[0,2\pi)}:[0,2\pi)\mapsto\gamma$ is an orientation preserving
bijective function. Then there exists an increasing continuous
function $f:[0,2\pi]\mapsto[0,l]$ such that
(2.4)
$$F(\tau)=g(f(\tau)).$$
In the remainder of this paper we will identify $[0,2\pi)$ with the
unit circle $\mathbf{T}$, and $F(s)$ with $F(e^{is})$. In view of
the previous convention we have for a.e. $e^{i\tau}\in\mathbf{T}$
that
$$F^{\prime}(\tau)=g^{\prime}(f(\tau))\cdot f^{\prime}(\tau),$$
and therefore
$$|F^{\prime}(\tau)|=|g^{\prime}(f(\tau))|\cdot|f^{\prime}(\tau)|=f^{\prime}(%
\tau).$$
Along with the function $K$ we will also consider the function $K_{F}$
defined by
$$K_{F}(t,\tau)=\text{Re}\,[\overline{(F(t)-F(\tau))}\cdot iF^{\prime}(\tau)].$$
It is easy to see that
(2.5)
$$K_{F}(t,\tau)=f^{\prime}(\tau)K(f(t),f(\tau)).$$
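Indeed, by (2.4) and the chain rule $F^{\prime}(\tau)=g^{\prime}(f(\tau))f^{\prime}(\tau)$; since $f^{\prime}(\tau)\geq 0$ is real, it factors out of the real part:

```latex
$$K_{F}(t,\tau)=\text{Re}\,[\overline{(g(f(t))-g(f(\tau)))}\cdot i\,g^{\prime}(f(\tau))f^{\prime}(\tau)]
=f^{\prime}(\tau)\,\text{Re}\,[\overline{(g(f(t))-g(f(\tau)))}\cdot i\,g^{\prime}(f(\tau))]
=f^{\prime}(\tau)K(f(t),f(\tau)).$$
```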
Lemma 2.3.
If $\gamma$ is Dini smooth, and $\omega$ is the modulus of continuity of
$g^{\prime}$, then
(2.6)
$$|K(s,t)|\leq\int_{0}^{\min\{|s-t|,l-|s-t|\}}\omega(\tau)d\tau.$$
Proof.
Note that
$$\begin{split}\displaystyle K(s,t)&\displaystyle=\text{Re}[\overline{(g(t)-g(s)%
)}\cdot ig^{\prime}(s)]\\
&\displaystyle=\text{Re}\left[\overline{(g(t)-g(s))}\cdot i\left(g^{\prime}(s)%
-\dfrac{g(t)-g(s)}{t-s}\right)\right],\end{split}$$
and
$$g^{\prime}(s)-\dfrac{g(t)-g(s)}{t-s}=\int_{s}^{t}\frac{g^{\prime}(s)-g^{\prime%
}(\tau)}{t-s}d\tau.$$
Therefore
$$\begin{split}\displaystyle\left|g^{\prime}(s)-\dfrac{g(t)-g(s)}{t-s}\right|&%
\displaystyle\leq\int_{s}^{t}\frac{|g^{\prime}(s)-g^{\prime}(\tau)|}{t-s}d\tau%
\\
&\displaystyle\leq\int_{s}^{t}\frac{\omega(\tau-s)}{t-s}d\tau\\
&\displaystyle=\frac{1}{t-s}{\int_{0}^{t-s}\omega(\tau)d\tau}.\end{split}$$
On the other hand
$$|\overline{g(t)-g(s)}|\leq\sup_{s\leq x\leq t}|g^{\prime}(x)|(t-s)=(t-s).$$
It follows that
(2.7)
$$|K(s,t)|\leq\int_{0}^{|s-t|}\omega(\tau)d\tau.$$
Since $K(s\pm l,t\pm l)=K(s,t)$ according to (2.7) we obtain (2.6).
∎
Lemma 2.4.
If $\omega:[0,l]\to[0,\infty)$, $\omega(0)=0$, is a bounded
function satisfying $\int_{0^{+}}\omega(x)dx/x<\infty$, then for
every constant $a$, we have $\int_{0^{+}}\omega(ax)dx/x<\infty$.
Moreover, for every $0<y\leq l$ the following formula holds:
(2.8)
$$\int_{0+}^{y}\frac{1}{x^{2}}\int_{0}^{x}\omega(at)dtdx=\int_{0+}^{y}\frac{%
\omega(ax)}{x}-\frac{\omega(ax)}{y}dx.$$
Proof.
The first statement of the lemma is immediate.
Taking the substitutions $u=\int_{0}^{x}\omega(at)dt$ and $dv=x^{-2}dx$, and using the fact that
$$\lim_{\alpha\to 0}\frac{\int_{0}^{\alpha}\omega(at)dt}{\alpha}=\omega(0)=0$$
we
obtain:
$$\begin{split}\displaystyle\int_{0+}^{y}\frac{1}{x^{2}}\int_{0}^{x}\omega(at)%
dtdx&\displaystyle=\lim_{\alpha\to 0+}\int_{\alpha}^{y}\frac{1}{x^{2}}\int_{0}%
^{x}\omega(at)dtdx\\
&\displaystyle=-\lim_{\alpha\to 0+}\left.\frac{\int_{0}^{x}\omega(at)dt}{x}%
\right|_{\alpha}^{y}+\lim_{\alpha\to 0+}\int_{\alpha}^{y}\frac{\omega(ax)}{x}%
dx\\
&\displaystyle=\int_{0+}^{y}\frac{\omega(ax)}{x}-\frac{\omega(ax)}{y}dx.\end{split}$$
∎
A function $\varphi:A\to B$ is called $(\ell,\mathcal{L})$
bi-Lipschitz, where $0<\ell<\mathcal{L}<\infty$, if $\ell|x-y|\leq|\varphi(x)-\varphi(y)|\leq\mathcal{L}|x-y|$ for $x,y\in A$.
Lemma 2.5.
If $\varphi:\mathbf{R}\to\mathbf{R}$ is an $(\ell,\mathcal{L})$
bi-Lipschitz mapping (respectively an $\mathcal{L}$-Lipschitz weak homeomorphism)
such that $\varphi(x+a)=\varphi(x)+b$ for some $a$ and $b$ and
every $x$, then there exists a sequence of $(\ell,\mathcal{L})$
bi-Lipschitz diffeomorphisms (respectively a sequence of
diffeomorphisms) $\varphi_{n}:\mathbf{R}\to\mathbf{R}$ such that
$\varphi_{n}$ converges uniformly to $\varphi$ and $\varphi_{n}(x+a)=\varphi_{n}(x)+b$.
Proof.
We introduce appropriate mollifiers: fix a smooth function
$\rho:{\mathbb{R}}\to[0,1]$ which is compactly supported on the interval
$(-1,1)$ and satisfies $\int_{\mathbb{R}}\rho=1$. For ${\varepsilon}=1/n$ consider the
mollifier
(2.9)
$$\rho_{\varepsilon}(t):=\frac{1}{{\varepsilon}}\,\rho\left(\frac{t}{{%
\varepsilon}}\right).$$
It is compactly supported in the interval $(-{\varepsilon},{\varepsilon})$ and
satisfies $\int_{\mathbb{R}}\rho_{\varepsilon}=1$. Define
$$\varphi_{\varepsilon}(x)=\varphi\ast\rho_{\varepsilon}=\int_{\mathbf{R}}%
\varphi(y)\frac{1}{\varepsilon}\rho\left(\frac{x-y}{\varepsilon}\right)dy=\int%
_{\mathbf{R}}\varphi(x-\varepsilon z)\rho(z)dz.$$
Then
$$\varphi^{\prime}_{\varepsilon}(x)=\int_{\mathbf{R}}\varphi^{\prime}(x-%
\varepsilon z)\rho(z)dz.$$
It follows that
$$\ell\int_{\mathbf{R}}\rho(z)dz=\ell\leq|\varphi^{\prime}_{\varepsilon}(x)|\leq%
\mathcal{L}\int_{\mathbf{R}}\rho(z)dz=\mathcal{L}.$$
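The periodicity required in the statement also passes to the mollification: since $\varphi(x+a)=\varphi(x)+b$ and $\int_{\mathbf{R}}\rho=1$,

```latex
$$\varphi_{\varepsilon}(x+a)=\int_{\mathbf{R}}\varphi(x+a-\varepsilon z)\rho(z)\,dz
=\int_{\mathbf{R}}\bigl(\varphi(x-\varepsilon z)+b\bigr)\rho(z)\,dz
=\varphi_{\varepsilon}(x)+b.$$
```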
The fact that $\varphi_{\varepsilon}$ converges uniformly to
$\varphi$ follows by the Arzelà–Ascoli theorem.
To treat the case when $\varphi$ is an $\mathcal{L}$-Lipschitz weak
homeomorphism, we make use of the following simple fact. Since
$\varphi$ is $\mathcal{L}$-Lipschitz, the function
$$\varphi_{m}(x)=\frac{mb}{mb+a}(\varphi(x)+x/m)$$
is
$(\ell_{m},\mathcal{L}_{m})$ bi-Lipschitz, for some
$\ell_{m},\mathcal{L}_{m}$, with $\varphi_{m}(x+a)=\varphi_{m}(x)+b$, and
$\varphi_{m}$ converges uniformly to $\varphi$. By the previous case,
we can choose a diffeomorphism
(2.10)
$$\psi_{m}=\varphi_{m}\ast\rho_{\varepsilon_{m}}=\frac{mb}{mb+a}\left(\varphi%
\ast\rho_{\varepsilon_{m}}+\frac{x}{m}\right),$$
such that $\|\psi_{m}-\varphi_{m}\|_{\infty}\leq 1/m$. Thus
$$\lim_{n\to\infty}\|\psi_{n}-\varphi\|_{\infty}=0.$$
The proof is completed.
∎
2.2. Harmonic functions and Poisson integral
The function
$$P(r,t)=\frac{1-r^{2}}{2\pi(1-2r\cos t+r^{2})},\ \ \ 0\leq r<1,\ \ t\in[0,2\pi]$$
is called the Poisson
kernel. The Poisson integral of a complex function $F\in L^{1}(\mathbf{T})$ is a complex harmonic function given by
(2.11)
$$w(z)=u(z)+iv(z)=P[F](z)=\int_{0}^{2\pi}P(r,t-\tau)F(e^{it})dt,$$
where $z=re^{i\tau}\in\mathbf{U}$. We refer to the book of Axler,
Bourdon and Ramey [4] for a good account of harmonic functions.
The Hilbert transformation of a function $\chi\in{\rm L}^{1}(\mathbf{T})$
is defined by the formula
$$\tilde{\chi}(\tau)=H(\chi)(\tau)=-\frac{1}{\pi}\int_{0+}^{\pi}\frac{\chi(\tau+%
t)-\chi(\tau-t)}{2\tan(t/2)}\mathrm{d}t.$$
This integral is improper and converges for a.e.
$\tau\in[0,2\pi]$; this and other facts concerning the operator $H$
used in this paper can be found in the book of Zygmund
[41, Chapter VII]. If $f$ is a harmonic function then a
harmonic function $\tilde{f}$ is called the harmonic conjugate of $f$
if $f+i\tilde{f}$ is an analytic function. Let $\chi,\tilde{\chi}\in L^{1}(\mathbf{T})$. Then
(2.12)
$$P[\tilde{\chi}]=\widetilde{P[\chi]},$$
where $\tilde{k}(z)$ is the harmonic
conjugate of $k(z)$ (see e.g. [33, Theorem 6.1.3]).
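As an illustrative aside, the defining singular integral of $H$ can be checked on the simplest example: the conjugate function of $\cos\tau$ is $\sin\tau$. The crude discretization below is an assumption of the sketch, not a method from the text.

```python
import math

def hilbert(chi, tau, n=20000):
    # H(chi)(tau) = -(1/pi) * int_0^pi [chi(tau+t) - chi(tau-t)] / (2 tan(t/2)) dt
    # midpoint rule; for smooth chi the integrand stays bounded near t = 0
    h = math.pi / n
    s = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        s += (chi(tau + t) - chi(tau - t)) / (2 * math.tan(t / 2))
    return -s * h / math.pi

# the conjugate function of cos is sin
for tau in (0.3, 1.0, 2.5):
    assert abs(hilbert(math.cos, tau) - math.sin(tau)) < 1e-4
```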
Assume that $z=x+iy=re^{i\tau}\in\mathbf{U}$. The complex
derivatives of a differentiable mapping $w:\mathbf{U}\to\mathbf{C}$ are
defined as follows:
$$w_{z}=\frac{1}{2}\left(w_{x}-iw_{y}\right)\quad\text{and}\quad w_{\bar{z}}=\frac{1}{2}\left(w_{x}+iw_{y}\right).$$
The derivatives of $w$ in polar coordinates can be expressed as
$$w_{\tau}(z):=\frac{\partial w(z)}{\partial\tau}=i(zw_{z}-\bar{z}w_{\bar{z}})$$
and
$$w_{r}(z):=\frac{\partial w(z)}{\partial r}=e^{i\tau}w_{z}+e^{-i\tau}w_{\bar{z}}.$$
The
Jacobian determinant of $w$ is expressed in polar coordinates as
(2.13)
$$J_{w}(z)=|w_{z}|^{2}-|w_{\bar{z}}|^{2}=\frac{1}{r}\mathrm{Im}(w_{\tau}\overline{w_{r}})=\frac{1}{r}\mathrm{Re}(iw_{r}\overline{w}_{\tau}).$$
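These polar-coordinate identities are easy to confirm numerically. The sketch below (purely illustrative; the sample harmonic map $w(z)=z+0.3\,\bar z^{2}$ and step size are assumptions) checks them by central finite differences.

```python
import cmath

def w(z):
    # sample harmonic map: analytic part z plus anti-analytic part 0.3*conj(z)^2
    return z + 0.3 * z.conjugate() ** 2

def wz(z):     # w_z
    return 1.0 + 0j

def wzbar(z):  # w_{zbar}
    return 0.6 * z.conjugate()

r, tau = 0.5, 0.7
z = r * cmath.exp(1j * tau)
h = 1e-6

# central differences in the polar variables
w_tau = (w(r * cmath.exp(1j * (tau + h))) - w(r * cmath.exp(1j * (tau - h)))) / (2 * h)
w_r = (w((r + h) * cmath.exp(1j * tau)) - w((r - h) * cmath.exp(1j * tau))) / (2 * h)

# w_tau = i(z w_z - conj(z) w_zbar)  and  w_r = e^{i tau} w_z + e^{-i tau} w_zbar
assert abs(w_tau - 1j * (z * wz(z) - z.conjugate() * wzbar(z))) < 1e-6
assert abs(w_r - (cmath.exp(1j * tau) * wz(z) + cmath.exp(-1j * tau) * wzbar(z))) < 1e-6

# Jacobian identity (2.13): |w_z|^2 - |w_zbar|^2 = (1/r) Im(w_tau * conj(w_r))
J = abs(wz(z)) ** 2 - abs(wzbar(z)) ** 2
assert abs(J - (w_tau * w_r.conjugate()).imag / r) < 1e-6
```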
Assume that $w=P[F](z)$ is a
harmonic function defined on the unit disk $\mathbf{U}$. Then there
exist two analytic functions $h$ and $k$ defined in the unit disk
such that $w=h+\overline{k}$. Moreover $w_{\tau}=i(zh^{\prime}(z)-\bar{z}\overline{k^{\prime}(z)})$ is a harmonic function and $rw_{r}=zh^{\prime}(z)+\bar{z}\overline{k^{\prime}(z)}$ is its harmonic conjugate.
It follows from (2.11) that $w_{\tau}$ equals the
Poisson–Stieltjes integral of $F^{\prime}$:
$$\begin{split}w_{\tau}(re^{i\tau})&=\int_{0}^{2\pi}\partial_{\tau}P(r,\tau-t)F(t)dt\\
&=-\int_{0}^{2\pi}\partial_{t}P(r,\tau-t)F(t)dt\\
&=-P(r,\tau-t)F(t)|_{t=0}^{2\pi}+\int_{0}^{2\pi}P(r,\tau-t)dF(t)\\
&=\int_{0}^{2\pi}P(r,\tau-t)dF(t).\end{split}$$
Hence, by Fatou’s theorem, the radial limits of $w_{\tau}$ exist a.e.
and
$$\lim_{r\to 1^{-}}w_{\tau}(re^{i\tau})=F_{0}^{\prime}(\tau),\ \ a.e.,$$
where
$F_{0}$ is the absolutely continuous part of $F$.
As $rw_{r}$ is the harmonic conjugate of $w_{\tau}$, it turns out that
if $F$ is absolutely continuous, then
(2.14)
$$\lim_{r\to 1^{-}}w_{r}(re^{i\tau})=H(F^{\prime})(\tau)\,\,(a.e.),$$
and
(2.15)
$$\lim_{r\to 1^{-}}w_{\tau}(re^{i\tau})=F^{\prime}(\tau)\ \ \ (a.e).$$
3. The proof of the main theorem
The aim of this chapter is to prove Theorem 1.2. We will
construct a suitable sequence $w_{n}$ of univalent harmonic mappings
converging almost uniformly to $w=P[F]$. In order to do so, we will
mollify the boundary function $F$ by a sequence of diffeomorphisms
$F_{n}$ and take the Poisson extensions $w_{n}=P[F_{n}]$. We will show that,
under the condition of Theorem 1.2, for large $n$ the mapping $w_{n}$
satisfies the conditions of the theorem of Alessandrini and Nesi. By a
result of Hengartner and Schober [9], the limit function $w$
of a locally uniformly convergent sequence of univalent harmonic
mappings $w_{n}$ is univalent, provided that $F$ is a surjective
mapping.
We begin with the following lemma.
Lemma 3.1.
Let $\gamma$ be a Dini smooth Jordan curve, denote by $g$ its
arc-length parameterization and assume that $F(t)=g(f(t))$ is a
Lipschitz weak homeomorphism from the unit circle onto $\gamma$. If
$w(z)=u(z)+iv(z)=P[F](z)$ is the Poisson extension of $F$, then for
almost every $\tau\in[0,2\pi]$ the limit
$$J_{w}(e^{i\tau}):=\lim_{r\to 1^{-}}J_{w}(re^{i\tau})$$
exists and there holds the formula
(3.1)
$$J_{w}(e^{i\tau})=f^{\prime}(\tau)\int_{0}^{2\pi}\frac{\mathrm{Re}\,[\overline{(g(f(t))-g(f(\tau)))}\cdot ig^{\prime}(f(\tau))]}{2\sin^{2}\frac{t-\tau}{2}}\frac{dt}{2\pi}.$$
Proof.
Let $z=re^{i\tau}$. Since $F$ is Lipschitz, it is
absolutely continuous, and by (2.15) and (2.14) the radial limits
of $w_{\tau}$ and $w_{r}$ exist for a.e. $e^{i\tau}\in\mathbf{T}$.
By Fatou's theorem (see e.g. [4, Theorem 6.39]; cf.
(2.15)), we have
(3.2)
$$\lim_{r\to 1^{-}}w_{\tau}(re^{i\tau})=F^{\prime}(\tau)$$
for almost every $e^{i\tau}\in\mathbf{T}$.
Further, for a.e. $\tau\in[0,2\pi]$, by the mean value theorem we
have
$$\frac{u(e^{i\tau})-u(re^{i\tau})}{1-r}=u_{r}(pe^{i\tau}),\ \ \ r<p<1$$
and
$$\frac{v(e^{i\tau})-v(re^{i\tau})}{1-r}=v_{r}(qe^{i\tau}),\ \ \ r<q<1.$$
It follows that for a.e. $\tau\in[0,2\pi]$
(3.3)
$$\lim_{r\to 1^{-}}\frac{u(e^{i\tau})-u(re^{i\tau})}{1-r}=\lim_{r\to 1^{-}}u_{r}(re^{i\tau})$$
and
(3.4)
$$\lim_{r\to 1^{-}}\frac{v(e^{i\tau})-v(re^{i\tau})}{1-r}=\lim_{r\to 1^{-}}v_{r}(re^{i\tau})$$
and consequently for a.e. $\tau\in[0,2\pi]$
(3.5)
$$\lim_{r\to 1^{-}}\frac{w(e^{i\tau})-w(re^{i\tau})}{1-r}=\lim_{r\to 1^{-}}w_{r}(re^{i\tau}).$$
By using the previous facts and the formulas
$$w(e^{i\tau})-w(re^{i\tau})=\int_{0}^{2\pi}[F(\tau)-F(t)]P(r,\tau-t)dt$$
and (2.13) we obtain:
(3.6)
$$\begin{split}\displaystyle\lim_{r\to 1^{-}}J_{w}(re^{i\tau})&\displaystyle=%
\lim_{r\to 1^{-}}\frac{\mathrm{Re}[iw_{r}(re^{i\tau})\overline{w_{\tau}(re^{i%
\tau})}]}{r}\\
&\displaystyle=\lim_{r\to 1^{-}}\frac{\mathrm{Re}[i(w(e^{i\tau})-w(re^{i\tau})%
)\overline{w_{\tau}(re^{i\tau})}]}{(1-r)r}\\
&\displaystyle=\lim_{r\to 1^{-}}\frac{1}{1-r}\int_{0}^{2\pi}P(r,\tau-t)\mathrm%
{Re}[i(F(\tau)-F(t))\overline{F^{\prime}(\tau)}]dt\\
&\displaystyle=\lim_{r\to 1^{-}}\int_{-\pi}^{\pi}K_{F}(t+\tau,\tau)\frac{P(r,t%
)}{1-r}dt,\ \ a.e.\end{split}$$
where
(3.7)
$$K_{F}(t,\tau)=f^{\prime}(\tau)\mathrm{Re}\,[\overline{(g(f(t))-g(f(\tau)))}\cdot ig^{\prime}(f(\tau))],$$
and $P(r,t)$ is the Poisson kernel. We
refer to [23, Eq. 5.6] for a similar approach, but for some
other purpose.
To continue, observe first that
$$\frac{P(r,t)}{1-r}=\frac{1+r}{2\pi(1+r^{2}-2r\cos t)}\leq\frac{1}{\pi((1-r)^{2%
}+4r\sin^{2}t/2)}\leq\frac{\pi}{4rt^{2}}$$
for
$0<r<1$ and $t\in[-\pi,\pi]$, because $|\sin(t/2)|\geq|t|/\pi$.
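This chain of elementary bounds can be verified on a grid; the short sketch below is purely illustrative.

```python
import math

def kernel_ratio(r, t):
    # P(r, t) / (1 - r) = (1 + r) / (2*pi*(1 - 2 r cos t + r^2))
    return (1 + r) / (2 * math.pi * (1 - 2 * r * math.cos(t) + r * r))

# verify P(r, t)/(1 - r) <= pi/(4 r t^2) on a grid of 0 < r < 1, 0 < t <= pi;
# the bound is even in t, so positive t suffices
for i in range(1, 100):
    r = i / 100
    for j in range(1, 201):
        t = j * math.pi / 200
        assert kernel_ratio(r, t) <= math.pi / (4 * r * t * t) + 1e-12
```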
On the other hand by (2.6) and (3.7), for
$$\sigma=\min\{|f(t+\tau)-f(\tau)|,l-|f(t+\tau)-f(\tau)|\}$$
we obtain
$$\mathopen{|}K_{F}(t+\tau,\tau)\mathclose{|}\leq\|F^{\prime}\|_{\infty}\int_{0}%
^{\sigma}\omega(u)du,$$
where $\omega$ is the
modulus of continuity of $g^{\prime}$. Therefore for $r\geq 1/2$,
(3.8)
$$\begin{split}\displaystyle\mathopen{|}K_{F}(t+\tau,\tau)\frac{P(r,t)}{1-r}%
\mathclose{|}&\displaystyle\leq\frac{\|F^{\prime}\|_{\infty}\pi}{4rt^{2}}\int_%
{0}^{\sigma}\omega(u)du\\
&\displaystyle\leq\frac{\sigma}{t}\frac{\|F^{\prime}\|_{\infty}\pi}{4rt^{2}}%
\int_{0}^{t}\omega\left(\frac{\sigma}{t}u\right)du\\
&\displaystyle\leq\frac{\pi\|F^{\prime}\|_{\infty}^{2}}{2}\frac{1}{t^{2}}\int_%
{0}^{t}\omega(\|F^{\prime}\|_{\infty}u)du:=Q(t).\end{split}$$
Thus $Q(t)$ is a dominant for the expression
$$\mathopen{|}K_{F}(t+\tau,\tau)\frac{P(r,t)}{1-r}\mathclose{|},$$
for $r\geq 1/2$. Having
in mind the equation (2.8), we obtain
$$\begin{split}\int_{-\pi}^{\pi}\mathopen{|}Q(t)\mathclose{|}dt&\leq\frac{2\pi\|F^{\prime}\|_{\infty}^{2}}{2}\int_{0}^{\pi}\frac{1}{t^{2}}\int_{0}^{t}\omega(\|F^{\prime}\|_{\infty}u)\,du\,dt\\
&={\pi\|F^{\prime}\|_{\infty}^{2}}\int_{0}^{\pi}\left(\frac{\omega(\|F^{\prime}\|_{\infty}u)}{u}-\frac{\omega(\|F^{\prime}\|_{\infty}u)}{\pi}\right)du\\
&<M<\infty.\end{split}$$
According to the
Lebesgue Dominated Convergence Theorem, taking the limit under the
integral sign in the last integral in
(3.6) we obtain (3.1).
∎
For a
Lipschitz non-decreasing function $f$ and an arc-length
parametrization $g$ of a Dini-smooth curve $\gamma$, we define
the operator $T$ as follows:
(3.9)
$$T[f](\tau)=\int_{0}^{2\pi}\frac{\mathrm{Re}\,[\overline{(g(f(t))-g(f(\tau)))}\cdot ig^{\prime}(f(\tau))]}{2\sin^{2}\frac{t-\tau}{2}}\frac{dt}{2\pi},\qquad\tau\in[0,2\pi].$$
According to Lemma 3.1, this integral
converges. Notice that if $\gamma$ is a convex Jordan curve then
$\mathrm{Re}\,[\overline{(g(f(t))-g(f(\tau)))}\cdot ig^{\prime}(f(\tau))]\geq 0$, and therefore $T[f]>0$. In the next proof, we
will show that under the integral condition $T[f]>0$ the harmonic
extension of a bi-Lipschitz mapping is a diffeomorphism, regardless
of any convexity assumption.
Proof of Theorem 1.2.
Assume for simplicity that $|\gamma|=2\pi$; the general case
follows by normalization. Let $g:[0,2\pi]\to\gamma$ be an arc-length
parametrization of $\gamma$. Then $F(e^{it})=g(f(t))$, where
$f:\mathbf{R}\to\mathbf{R}$ is a Lipschitz weak homeomorphism such
that $f(t+2\pi)=f(t)+2\pi$. From (3.9) we have
$$\begin{split}T[f](\tau)&=\lim_{\epsilon\to 0^{+}}\int_{\epsilon}^{\pi}\frac{\text{Re}\,[\overline{(g(f(t+\tau))-g(f(\tau)))}\cdot ig^{\prime}(f(\tau))]}{2\sin^{2}\frac{t}{2}}\frac{dt}{2\pi}\\
&+\lim_{\epsilon\to 0^{+}}\int_{-\pi}^{-\epsilon}\frac{\text{Re}\,[\overline{(g(f(t+\tau))-g(f(\tau)))}\cdot ig^{\prime}(f(\tau))]}{2\sin^{2}\frac{t}{2}}\frac{dt}{2\pi}.\end{split}$$
Assume that $\beta:[0,2\pi]\to\mathbf{R}$ is a continuous function
such that
(3.10)
$$g^{\prime}(s)=e^{i\beta(s)},\ \ \beta(0)=\beta(2\pi).$$
Then
(3.11)
$$|g^{\prime}(s)-g^{\prime}(t)|=2\mathopen{|}\sin\frac{\beta(t)-\beta(s)}{2}%
\mathclose{|}.$$
Let
$\omega_{\beta}$ be the modulus of continuity of $g^{\prime}$. Then
(3.12)
$$\omega_{\beta}(\rho)=\max_{|t-s|\leq\rho}2\mathopen{|}\sin\frac{\beta(t)-\beta%
(s)}{2}\mathclose{|}.$$
Since
$\gamma\in C^{1,\alpha}$,
(3.13)
$$\omega_{\beta}(\rho)\leq c(\gamma)\rho^{\alpha}.$$
Further from (3.10), we have
$$\begin{split}\frac{\text{Re}\,[\overline{(g(f(t+\tau))-g(f(\tau)))}\cdot ig^{\prime}(f(\tau))]}{2\sin^{2}\frac{t}{2}}&=\frac{\text{Re}\,[\overline{\int_{f(\tau)}^{f(t+\tau)}g^{\prime}(s)ds}\cdot ig^{\prime}(f(\tau))]}{2\sin^{2}\frac{t}{2}}\\
&=\frac{\text{Re}\,[\overline{\int_{f(\tau)}^{f(t+\tau)}e^{i\beta(s)}ds}\cdot ie^{i\beta(f(\tau))}]}{2\sin^{2}\frac{t}{2}}\\
&=\frac{-\text{Im}\,[\overline{\int_{f(\tau)}^{f(t+\tau)}e^{i(\beta(s)-\beta(f(\tau)))}ds}]}{2\sin^{2}\frac{t}{2}}\\
&=\frac{\int_{f(\tau)}^{f(t+\tau)}\sin[\beta(s)-\beta(f(\tau))]ds}{2\sin^{2}\frac{t}{2}}.\end{split}$$
Taking
$$dU=\frac{1}{2\sin^{2}\frac{t}{2}}dt\text{ \ and \ \ }V=\int_{f(\tau)}^{f(t+%
\tau)}\sin[\beta(s)-\beta(f(\tau))]ds,$$
we
obtain that
$$U=-\cot\frac{t}{2}\ \ \text{and}\ \ dV=f^{\prime}(t+\tau)\sin[\beta(f(t+\tau))%
-\beta(f(\tau))]dt.$$
To continue
recall that $f$ is Lipschitz with a Lipschitz constant $L$. Thus
$$\begin{split}\left|\lim_{\epsilon\to 0^{+}}U(t)V(t)|_{\epsilon}^{\pi}\right|&=\left|\lim_{\epsilon\to 0^{+}}\cot\frac{\epsilon}{2}\int_{f(\tau)}^{f(\epsilon+\tau)}\sin[\beta(s)-\beta(f(\tau))]ds\right|\\
&\leq\lim_{\epsilon\to 0^{+}}\cot\frac{\epsilon}{2}\,\omega_{\beta}(|f(\epsilon+\tau)-f(\tau)|)\,|f(\epsilon+\tau)-f(\tau)|\\
&\leq\lim_{\epsilon\to 0^{+}}L\epsilon\cot\frac{\epsilon}{2}\,\omega_{\beta}(L\epsilon)=0.\end{split}$$
Similarly we have
$$\lim_{\epsilon\to 0^{+}}U(t)V(t)|_{-\pi}^{-\epsilon}=0.$$
Integrating by parts we obtain
$$\begin{split}\displaystyle T[f](\tau)&\displaystyle=\lim_{\epsilon\to 0^{+}}%
\left(UV|_{\epsilon}^{\pi}+\int_{\epsilon}^{\pi}f^{\prime}(t+\tau)\cdot\sin[%
\beta(f(t+\tau))-\beta(f(\tau))]\cot\frac{t}{2}\frac{dt}{2\pi}\right)\\
&\displaystyle+\lim_{\epsilon\to 0^{+}}\left(UV|_{-\pi}^{-\epsilon}+\int_{-\pi%
}^{-\epsilon}f^{\prime}(t+\tau)\cdot\sin[\beta(f(t+\tau))-\beta(f(\tau))]\cot%
\frac{t}{2}\frac{dt}{2\pi}\right)\\
&\displaystyle=\int_{-\pi}^{\pi}f^{\prime}(t+\tau)\cdot\sin[\beta(f(t+\tau))-%
\beta(f(\tau))]\cot\frac{t}{2}\frac{dt}{2\pi}.\end{split}$$
Hence
$$T[f](\tau)=\int_{-\pi}^{\pi}f^{\prime}(t+\tau)\cdot\sin[\beta(f(t+\tau))-\beta%
(f(\tau))]\cot\frac{t}{2}\frac{dt}{2\pi}.$$
By using Lemma 2.5, we can choose a family
of diffeomorphisms $f_{n}$ converging uniformly to $f$. Then
$$T[f_{n}](\tau)=\int_{-\pi}^{\pi}f_{n}^{\prime}(t+\tau)\cdot\sin[\beta(f_{n}(t+%
\tau))-\beta(f_{n}(\tau))]\cot\frac{t}{2}\frac{dt}{2\pi}.$$
We are going to show that $T[f_{n}]$ converges
uniformly to $T[f]$. In order to do this, we apply the Arzelà–Ascoli
theorem.
First of all
$$\begin{split}\displaystyle|T[f_{n}](\tau)|&\displaystyle\leq\frac{1}{\pi}\|f_{%
n}^{\prime}\|_{\infty}\int_{0}^{\pi}\omega_{\beta}(\|f_{n}^{\prime}\|_{\infty}%
t)\cot\frac{t}{2}dt\\
&\displaystyle\leq\frac{1}{\pi}\|f^{\prime}\|_{\infty}\int_{0}^{\pi}\omega_{%
\beta}(\|f^{\prime}\|_{\infty}t)\cot\frac{t}{2}dt=C(f,\gamma)<\infty.\end{split}$$
We now prove that $\{T[f_{n}]\}$ is an
equicontinuous family of functions. We have to estimate the
quantity
$$|T[f_{n}](\tau)-T[f_{n}](\tau_{0})|.$$
Assume without loss
of generality that $\tau_{0}=0$. Then
$$\begin{split}\displaystyle|T[f_{n}](\tau)-T[f_{n}](0)|&\displaystyle=\left|%
\int_{-\pi}^{\pi}f_{n}^{\prime}(t+\tau)\cdot\sin[\beta(f_{n}(t+\tau))-\beta(f_%
{n}(\tau))]\cot\frac{t}{2}\frac{dt}{2\pi}\right.\\
&\displaystyle-\left.\int_{-\pi}^{\pi}f_{n}^{\prime}(t)\cdot\sin[\beta(f_{n}(t%
))-\beta(f_{n}(0))]\cot\frac{t}{2}\frac{dt}{2\pi}\right|\leq A+B,\end{split}$$
where
$$A=\left|\int_{-\pi}^{\pi}(f_{n}^{\prime}(t+\tau)-f_{n}^{\prime}(t))\cdot\sin[%
\beta(f_{n}(t+\tau))-\beta(f_{n}(\tau))]\cot\frac{t}{2}\frac{dt}{2\pi}\right|$$
and
$$B=\left|\int_{-\pi}^{\pi}f_{n}^{\prime}(t)\cdot\{\sin[\beta(f_{n}(t))-\beta(f_%
{n}(0))]-\sin[\beta(f_{n}(t+\tau))-\beta(f_{n}(\tau))]\}\cot\frac{t}{2}\frac{%
dt}{2\pi}\right|.$$
Take $r\geq 1$, $p>1$, $q>1$ such
that
$$\frac{1}{p}+\frac{1}{q}=1,$$
and $\delta\in(0,1)$.
In what follows, for a function $g\in L^{p}(\mathbf{T})$ we use the $p$-norm
$$\|g\|_{p}=\left(\int_{0}^{2\pi}\mathopen{|}g(e^{it})\mathclose{|}^{p}\frac{dt}%
{2\pi}\right)^{1/p}.$$
Define $f_{\tau}(x):=f(x+\tau)$. By (2.10) we have
$$f_{n}=\frac{n}{n+1}\left(f\ast\rho_{\varepsilon_{n}}+\frac{x}{n}\right).$$
Thus
(3.14)
$$|f^{\prime}_{n,\tau}-f^{\prime}_{n}|=\frac{n}{n+1}|(f^{\prime}_{\tau}-f^{%
\prime})\ast\rho_{\varepsilon_{n}}|.$$
According to Young’s inequality for convolution
([40, pp. 54-55; 8, Theorem 20.18]), we obtain that
$$\|(f^{\prime}_{\tau}-f^{\prime})\ast\rho_{\varepsilon_{n}}\|_{r}\leq\|f^{%
\prime}_{\tau}-f^{\prime}\|_{r}.$$
In view of
(3.13) and (3.14), for $1<q<\frac{1}{1-\alpha}$,
by Hölder's inequality we have
$$\begin{split}A&\leq\|f_{n}^{\prime}(t+\tau)-f_{n}^{\prime}(t)\|_{p}\cdot\|\sin[\beta(f_{n}(t+\tau))-\beta(f_{n}(\tau))]\cot\frac{t}{2}\frac{1}{2\pi}\|_{q}\\
&\leq\|f^{\prime}(t+\tau)-f^{\prime}(t)\|_{p}\cdot\|\omega_{\beta}(\|f_{n}^{\prime}\|_{\infty}t)\cot\frac{t}{2}\frac{1}{2\pi}\|_{q}\\
&\leq C_{1}(\gamma)\|f^{\prime}\|_{\infty}\|f^{\prime}(t+\tau)-f^{\prime}(t)\|_{p},\end{split}$$
i.e.
(3.15)
$$A\leq C_{1}(\gamma)\|f^{\prime}\|_{\infty}\|f^{\prime}(t+\tau)-f^{\prime}(t)\|%
_{p}.$$
Let us now estimate $B$. First of all,
(3.16)
$$B\leq\|f^{\prime}\|_{\infty}\|\{\sin[\beta(f_{n}(t))-\beta(f_{n}(0))]-\sin[%
\beta(f_{n}(t+\tau))-\beta(f_{n}(\tau))]\}\cot\frac{t}{2}\|_{1}.$$
On the other hand, using Hölder's inequality again, we have
$$\begin{split}\displaystyle\|\{\sin&\displaystyle[\beta(f_{n}(t))-\beta(f_{n}(0%
))]-\sin[\beta(f_{n}(t+\tau))-\beta(f_{n}(\tau))]\}\cot\frac{t}{2}\|_{1}\\
&\displaystyle\leq\|\{\sin[\beta(f_{n}(t))-\beta(f_{n}(0))]-\sin[\beta(f_{n}(t%
+\tau))-\beta(f_{n}(\tau))]\}^{\delta}\|_{p}\\
&\displaystyle\times\|\{\sin[\beta(f_{n}(t))-\beta(f_{n}(0))]-\sin[\beta(f_{n}%
(t+\tau))-\beta(f_{n}(\tau))]\}^{1-\delta}\cot\frac{t}{2}\|_{q}.\end{split}$$
Further
$$\begin{split}\displaystyle\|\{\sin&\displaystyle[\beta(f_{n}(t))-\beta(f_{n}(0%
))]-\sin[\beta(f_{n}(t+\tau))-\beta(f_{n}(\tau))]\}^{\delta}\|_{p}\\
&\displaystyle\leq\|\{\mathopen{|}2\sin\frac{\beta(f_{n}(t))-\beta(f_{n}(0))-%
\beta(f_{n}(t+\tau))+\beta(f_{n}(\tau))}{2}\mathclose{|}\}^{\delta}\|_{p}\\
&\displaystyle\leq\|\{\mathopen{|}2\sin\frac{\beta(f_{n}(t+\tau))-\beta(f_{n}(%
t))}{2}\mathclose{|}\}^{\delta}\|_{p}\\
&\displaystyle+\|\{\mathopen{|}2\sin\frac{\beta(f_{n}(\tau))-\beta(f_{n}(0))}{%
2}\mathclose{|}\}^{\delta}\|_{p}\\
&\displaystyle\leq\omega_{\beta}(|f_{n}^{\prime}|_{\infty}\tau)^{\delta}+%
\omega_{\beta}(|f_{n}^{\prime}|_{\infty}\tau)^{\delta}=2\omega_{\beta}(|f_{n}^%
{\prime}|_{\infty}\tau)^{\delta}\leq 2\omega_{\beta}(|f^{\prime}|_{\infty}\tau%
)^{\delta},\end{split}$$
and
$$\begin{split}\displaystyle\|\{\sin[\beta(f_{n}(t))&\displaystyle-\beta(f_{n}(0%
))]-\sin[\beta(f_{n}(t+\tau))-\beta(f_{n}(\tau))]\}^{1-\delta}\cot\frac{t}{2}%
\|_{q}\\
&\displaystyle\leq\|(2\omega_{\beta}(|f_{n}^{\prime}|_{\infty}t)^{1-\delta}%
\cot\frac{t}{2}\|_{q}.\end{split}$$
Choose $q$ and $\delta$ such that
$$(\alpha-\alpha\delta-1)q>-1.$$
Then the integral
$$\|2\omega_{\beta}(|f_{n}^{\prime}|_{\infty}t)^{1-\delta}\cot\frac{t}{2}\|_{q}$$
converges and is at most
$$C(\gamma)\|f_{n}^{\prime}\|_{\infty}^{1-\delta}\leq C(\gamma)\|f^{\prime}\|_{\infty}^{1-\delta}.$$
Therefore
(3.17)
$$B\leq 2\|f^{\prime}\|_{\infty}C(\gamma)\|f^{\prime}\|_{\infty}^{1-\delta}%
\omega_{\beta}(\|f^{\prime}\|_{\infty}\tau)^{\delta}.$$
Since translation is continuous in $L^{p}$ (see [37, Theorem 9.5]),
(3.15) and (3.17) imply that the family $\{T[f_{n}]\}$ is
equicontinuous. By the Arzelà–Ascoli theorem it follows that
$$\lim_{n\to\infty}\|T[f_{n}]-T[f]\|_{\infty}=0.$$
Thus $T[f]$ is
continuous.
Moreover, since $f_{n}$ is a diffeomorphism and $T[f_{n}]\to T[f]>0$
uniformly, for $n$ sufficiently large there holds the inequality
$$J_{w_{n}}(e^{i\tau})=f_{n}^{\prime}(\tau)T[f_{n}](\tau)>0,\qquad e^{i\tau}\in\mathbf{T}.$$
Since $f_{n}\in C^{\infty}$, it follows that
$$w_{n}=P[F_{n}]\in C^{1}(\overline{\mathbf{U}}).$$
Therefore all the
conditions of Proposition 1.1 are satisfied. This means that
$w_{n}$ is a harmonic diffeomorphism of the unit disk onto the domain
$D$.
Since, by a result of Hengartner and Schober [9], the limit
function $w$ of a locally uniformly convergent sequence of univalent
harmonic mappings $w_{n}$ on $\mathbf{U}$ is either univalent on
$\mathbf{U}$, constant, or a mapping whose image lies on a straight line, we
obtain that $w=P[F]$ is univalent. The proof is completed.
∎
Remark 3.2.
If $\gamma$ is a $C^{1,\alpha}$ convex curve, then
$\mathrm{Re}\,[\overline{(g(f(t))-g(f(\tau)))}\cdot ig^{\prime}(f(\tau))]\geq 0$ and therefore $T[f](\tau)>0$. By the proof of
Theorem 1.2, $\tau\to T[f](\tau)$ is continuous. Therefore
$\min_{\tau\in[0,2\pi]}T[f](\tau)=\delta>0$.
4. Quasiconformal harmonic mappings
An injective harmonic mapping $w=u+iv$ is called
$K$-quasiconformal ($K$-q.c.), $K\geq 1$, if
(4.1)
$$|w_{\bar{z}}|\leq k|w_{z}|$$
on its domain, where $k={(K-1)}/{(K+1)}$.
Here
$$w_{z}:=\frac{1}{2}\left(w_{x}-iw_{y}\right)\text{ and }w_{\bar{z}}:=\frac{1}{2%
}\left(w_{x}+iw_{y}\right).$$
Notice that, since $|\nabla w(z)|:=\max\{|\nabla w(z)h|:|h|=1\}=|w_{z}|+|w_{\bar{z}}|$ and $l(\nabla w(z)):=\min\{|\nabla w(z)h|:|h|=1\}=||w_{z}|-|w_{\bar{z}}||$, the condition (4.1) is
equivalent to
(4.2)
$$|\nabla w(z)|\leq Kl(\nabla w(z)).$$
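For the affine harmonic map $w(z)=z+k\bar z$ (a standard illustration, not taken from the text) the two formulations agree: $|\nabla w|=1+k$, $l(\nabla w)=1-k$, so $w$ is $K$-q.c. with $K=(1+k)/(1-k)$. A numerical sketch:

```python
import math
import cmath

k = 0.25                   # dilatation of w(z) = z + k*conj(z)
K = (1 + k) / (1 - k)      # the corresponding quasiconformality constant

def directional(h):
    # derivative of w in direction h: w_z h + w_zbar conj(h), with w_z = 1, w_zbar = k
    return h + k * h.conjugate()

# max/min of |∇w h| over the unit circle of directions h
n = 3600
mags = [abs(directional(cmath.exp(1j * 2 * math.pi * m / n))) for m in range(n)]
grad_norm, l_grad = max(mags), min(mags)

assert abs(grad_norm - (1 + k)) < 1e-6   # |grad w| = |w_z| + |w_zbar|
assert abs(l_grad - (1 - k)) < 1e-6      # l(grad w) = ||w_z| - |w_zbar||
assert abs(grad_norm / l_grad - K) < 1e-5
```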
For a general definition of quasiregular mappings and quasiconformal mappings we
refer to the book of Ahlfors [1]. In this section we apply
Theorem 1.2 to the class of q.c. harmonic mappings. The study of
quasiconformal harmonic mappings is a very active
area of research. For background on this theory we
refer to [10], [12]-[26], [27], [31],
[32] and [38]. In this section we obtain some new results
concerning a characterization of this class. We will restrict
ourselves to the class of q.c. harmonic mappings $w$ between the
unit disk $\mathbf{U}$ and a Jordan domain $D$. The unit disk is
taken for simplicity: if $w:\Omega\to D$ is q.c.
harmonic and $a:\mathbf{U}\to\Omega$ is conformal, then $w\circ a$
is also q.c. harmonic. However, the image domain $D$ cannot be
replaced by the unit disk.
The case when $D$ is a convex domain is treated in detail by the
author and others in the papers cited above. In this section we use
our main result to give a characterization of quasiconformal
harmonic mappings onto a Jordan domain that is not necessarily convex, in
terms of boundary data.
To state the main result of this section, we make use of the Hilbert
transform formalism. It provides a necessary and sufficient
condition for the harmonic extension of a homeomorphism from the
unit circle to a $C^{2}$ Jordan curve $\gamma$ to be a q.c. mapping.
It is an extension of the corresponding result
[15, Theorem 3.1] concerning convex Jordan domains.
Theorem 4.1.
Let $F:\mathbf{T}\to\gamma$ be a sense preserving
homeomorphism of the unit circle onto the Jordan curve
$\gamma=\partial D\in C^{2}$. Then $w=P[F]$ is a quasiconformal
mapping of the unit disk onto $D$ if and only if $F$ is absolutely
continuous and
(4.3)
$$0<l(F):=\mathrm{ess\,inf\,}l(\nabla w(e^{i\tau})),$$
(4.4)
$$\|F^{\prime}\|_{\infty}:=\mathrm{ess\,sup\,}|F^{\prime}(\tau)|<\infty$$
and
(4.5)
$$\|H(F^{\prime})\|_{\infty}:=\mathrm{ess}\sup|H(F^{\prime})(\tau)|<\infty.$$
If $F$ satisfies the conditions (4.3), (4.4) and
(4.5), then $w=P[F]$ is $K$-quasiconformal, where
(4.6)
$$K:=\frac{\sqrt{\|F^{\prime}\|^{2}_{\infty}+\|H(F^{\prime})\|^{2}_{\infty}-l(F)%
^{2}}}{l(F)}.$$
The constant $K$ is
approximately sharp for small values of $K$: if $w$ is the identity
or if it is a mapping close to the identity, then $K=1$ or $K$ is
close to $1$ (respectively).
Proof of necessity.
Suppose that $w=P[F]=g+\overline{h}$ is a $K-$q.c. harmonic
mapping that satisfies the conditions of the theorem. By
[15, Theorem 2.1], we see that $w$ is Lipschitz continuous,
(4.7)
$$L:=\|F^{\prime}\|_{\infty}<\infty$$
and
(4.8)
$$|\nabla w(z)|\leq KL.$$
By [19, Theorem 1.4] we have for
$b=w(0)$
(4.9)
$$|\partial w(z)|-|\bar{\partial}w(z)|\geq C(\Omega,K,b)>0,\,z\in\mathbf{U}.$$
Because of (4.8), the analytic functions $\partial w(z)$ and $\bar{\partial}w(z)$ are bounded, and therefore by
Fatou’s theorem:
(4.10)
$$\lim_{r\to 1^{-}}(|\partial w(re^{i\tau})|-|\bar{\partial}w(re^{i\tau})|)=|%
\partial w(e^{i\tau})|-|\bar{\partial}w(e^{i\tau})|\ \ a.e.$$
Combining (4.7), (4.10) and (4.9), we get
(4.3) and (4.4).
Next we prove (4.5). Observe first that $w_{r}=e^{i\tau}w_{z}+e^{-i\tau}w_{\overline{z}}.$ Thus
(4.11)
$$|w_{r}|\leq|\nabla w|\leq KL.$$
Therefore $rw_{r}=P[H(F^{\prime})]$ is a bounded harmonic
function which implies that $H(F^{\prime})\in L^{\infty}(\mathbf{T})$.
Therefore (4.5) holds and the necessity proof is completed.
Proof of sufficiency. We have to prove that under the
conditions (4.3),
(4.4) and
(4.5), $w$ is quasiconformal.
From
$$0<l(F):=\mathrm{ess\,inf\,}l(\nabla w(e^{i\tau}))$$
we obtain that
$$J_{w}(e^{i\tau})=(|w_{z}|+|w_{\bar{z}}|)l(\nabla w(e^{i\tau}))\geq l(F)^{2}\ \ (a.e.).$$
As $rw_{r}$ is a harmonic conjugate of $w_{\tau}$, it turns out that if
$F$ is absolutely continuous, then
(4.12)
$$\lim_{r\to 1^{-}}w_{r}(re^{i\tau})=H(F^{\prime})(\tau)\,\,(a.e.),$$
and
(4.13)
$$\lim_{r\to 1^{-}}w_{\tau}(re^{i\tau})=F^{\prime}(\tau)\ \ (a.e.).$$
As
$$|w_{z}|^{2}+|w_{\bar{z}}|^{2}=\frac{1}{2}\left(|w_{r}|^{2}+\frac{|w_{\tau}|^{2%
}}{r^{2}}\right),$$
it follows
that for a.e. $\tau\in[0,2\pi)$
(4.14)
$$\lim_{r\to 1^{-}}|w_{z}(re^{i\tau})|^{2}+|w_{\bar{z}}(re^{i\tau})|^{2}=|w_{z}(%
e^{i\tau})|^{2}+|w_{\bar{z}}(e^{i\tau})|^{2}\leq\frac{1}{2}(\|F^{\prime}\|^{2}%
_{\infty}+\|H(F^{\prime})\|^{2}_{\infty}).$$
To continue we make use of
(4.3). From (4.14), (4.3) and
(4.2), for a.e. $\tau\in[0,2\pi)$,
(4.15)
$$\frac{|w_{z}(e^{i\tau})|^{2}+|w_{\bar{z}}(e^{i\tau})|^{2}}{(|w_{z}(e^{i\tau})|%
-|w_{\bar{z}}(e^{i\tau})|)^{2}}\leq\frac{\|F^{\prime}\|^{2}_{\infty}+\|H(F^{%
\prime})\|^{2}_{\infty}}{2l(F)^{2}}.$$
Hence
(4.16)
$$|w_{z}(e^{i\tau})|^{2}+|w_{\bar{z}}(e^{i\tau})|^{2}\leq S(|w_{z}(e^{i\tau})|-|%
w_{\bar{z}}(e^{i\tau})|)^{2}\ \ \ (a.e.),$$
where
(4.17)
$$S:=\frac{\|F^{\prime}\|^{2}_{\infty}+\|H(F^{\prime})\|^{2}_{\infty}}{2l(F)^{2}}.$$
According to (4.15), $S\geq 1$. Let
$$\mu(e^{i\tau}):=\left|\frac{w_{\bar{z}}(e^{i\tau})}{w_{z}(e^{i\tau})}\right|.$$
Since
every $C^{2}$ curve is a $C^{1,\alpha}$ curve, Theorem 1.2 shows
that $w=g+\overline{k}$ is univalent, and according to Lewy's theorem
$J_{w}(z)=|g^{\prime}(z)|^{2}-|k^{\prime}(z)|^{2}>0$. Thus $a(z)=\overline{w_{\bar{z}}}/w_{z}=k^{\prime}/g^{\prime}$ is an analytic function bounded by 1. As
$\mu(e^{i\tau})=|a(e^{i\tau})|$, we have $\mu(e^{i\tau})\leq 1$.
Then (4.16) can be written as
$$1+\mu^{2}(e^{i\tau})\leq S(1-\mu(e^{i\tau}))^{2},$$
i.e. if $S=1$, then $\mu(e^{i\tau})=0$ a.e. and if $S>1$, then
(4.18)
$$\mu^{2}(S-1)-2\mu S+S-1=(S-1)(\mu-\mu_{1})(\mu-\mu_{2})\geq 0,$$
where
$$\mu_{1}=\frac{S+\sqrt{2S-1}}{S-1}$$
and
$$\mu_{2}=\frac{S-1}{S+\sqrt{2S-1}}.$$
If $S>1$, then from (4.18) it
follows that $\mu(e^{i\tau})\leq\mu_{2}$ or $\mu(e^{i\tau})\geq\mu_{1}$. But $\mu(e^{i\tau})\leq 1$ and therefore
(4.19)
$$\mu(e^{i\tau})\leq\frac{S-1}{S+\sqrt{2S-1}}\ \ \ \ (a.e.).$$
If $S=1$, then (4.19) clearly holds. Define $\mu(z)=|a(z)|$.
Since $a$ is a bounded analytic function, by the maximum principle
it follows that
$$\mu(z)\leq k:=\mu_{2},$$
for $z\in\mathbf{U}$.
This yields that
$$K(z)\leq K:=\frac{1+k}{1-k}=\frac{2S-1+\sqrt{2S-1}}{\sqrt{2S-1}+1}=\sqrt{2S-1},$$
i.e.
$$K(z)\leq\frac{\sqrt{\|F^{\prime}\|^{2}_{\infty}+\|H(F^{\prime})\|^{2}_{\infty}%
-l(F)^{2}}}{l(F)}$$
which means that $w$ is $K$-quasiconformal with $K=\frac{\sqrt{\|F^{\prime}\|^{2}_{\infty}+\|H(F^{\prime})\|^{2}_{\infty}-l(F)^{2}}}{l(F)}$. The result is
asymptotically sharp because $K=1$ when $w$ is the identity. This
finishes the proof of Theorem 4.1.∎
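The algebra closing the proof can be sanity-checked numerically (an aside; the sample values are assumptions): with $k=\mu_{2}=(S-1)/(S+\sqrt{2S-1})$ one gets $K=(1+k)/(1-k)=\sqrt{2S-1}$, and with $S$ as in (4.17) this is exactly the constant (4.6).

```python
import math

# K = (1 + k)/(1 - k) = sqrt(2S - 1) with k = mu_2 = (S - 1)/(S + sqrt(2S - 1))
for S in (1.0, 1.5, 2.0, 10.0):
    k = (S - 1) / (S + math.sqrt(2 * S - 1))   # mu_2 from the quadratic (4.18)
    K = (1 + k) / (1 - k)
    assert abs(K - math.sqrt(2 * S - 1)) < 1e-12

# with S = (a + b)/(2 l^2) as in (4.17), sqrt(2S - 1) = sqrt(a + b - l^2)/l,
# i.e. the constant of (4.6); here a, b, l stand for ||F'||^2, ||H(F')||^2, l(F)
for a, b, l in ((1.0, 1.0, 1.0), (4.0, 2.0, 1.2)):
    S = (a + b) / (2 * l * l)
    assert abs(math.sqrt(2 * S - 1) - math.sqrt(a + b - l * l) / l) < 1e-12
```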
A conjecture
Let $F:\mathbf{T}\to\gamma\subset\mathbf{C}$ be a homeomorphism
of bounded variation, where $\gamma$ is Dini smooth. Let $D$ be the
bounded domain such that $\partial D=\gamma$. The mapping $w=P[F]$
is a diffeomorphism of $\mathbf{U}$ onto $D$ if and only if
(4.20)
$$\mathrm{ess\,inf}\{J_{w}(e^{it}):t\in[0,2\pi]\}\geq 0.$$
Acknowledgment
I am thankful to the referee for providing very constructive comments and
help in improving the contents of this paper.
References
[1]
L. Ahlfors: Lectures on Quasiconformal Mappings,
Van Nostrand Mathematical Studies, D. Van Nostrand, 1966.
[2]
G. Alessandrini and V. Nesi: Invertible harmonic mappings,
beyond Kneser. Ann. Scuola Norm. Sup. Pisa, Cl. Sci. 5 VIII
(2009), 451-468.
[3]
G. Alessandrini and V. Nesi: Univalent $\sigma$-harmonic
mappings. Arch. Ration. Mech. Anal. 158 (2001), no. 2,
155–171.
[4]
S. Axler, P. Bourdon and W. Ramey: Harmonic function theory,
Springer Verlag New York 1992.
[5]
P. Duren: Theory of $H^{p}$ Spaces. New York and London:
Academic Press, XII, 258 p. (1970).
[6]
P. Duren: Harmonic mappings in the plane. Cambridge University
Press, 2004.
[7]
P. Duren and W. Hengartner:
Harmonic mappings of multiply connected domains. Pacific J.
Math. 180:2 (1997), 201 – 220.
[8]
G. L. Goluzin: Geometric function theory. Nauka Moskva 1966.
[9]
W. Hengartner and G. Schober: Univalent harmonic
mappings. Trans. Amer. Math. Soc. 299 (1987), 1–31.
[10]
W. Hengartner and G. Schober: Harmonic mappings with given
dilatation. J. London Math. Soc. (2) 33 (1986), no. 3,
473–483.
[11]
J. Jost: Univalency of harmonic mappings between surfaces. J.
Reine Angew. Math. 324 (1981), 141–153.
[12]
D. Kalaj and M. Pavlović: On quasiconformal self-mappings of the unit disk satisfying the Poisson differential
equation. Trans. Amer. Math. Soc. 363 (2011) 4043–4061.
[13]
D. Kalaj and M. Pavlović:
Boundary correspondence under harmonic quasiconformal
homeomorphisms of a half-plane. Ann. Acad. Sci. Fenn., Math. 30:1 (2005), 159-165.
[14]
D. Kalaj and M. Mateljević:
Inner estimate and quasiconformal harmonic maps between smooth
domains. Journal d'Analyse Math. 100 (2006), 117–132.
[15]
D. Kalaj: Quasiconformal harmonic mapping
between Jordan domains. Math. Z. 260:2 (2008), 237–252.
[16]
D. Kalaj: Harmonic quasiconformal mappings and Lipschitz
spaces. Ann. Acad. Sci. Fenn., Math. 34:2 (2009), 475–485.
[17]
D. Kalaj: On quasiregular mappings between smooth Jordan
domains, Journal of Mathematical Analysis and Applications, 362:1 (2010), 58-63.
[18]
D. Kalaj: On boundary correspondence of q.c. harmonic mappings
between smooth Jordan domains. To appear in Math. Nachr. DOI
10.1002/mana.200910053 (arXiv:0910.4950v1).
[19]
D. Kalaj: Harmonic mappings and distance function. Ann. Scuola
Norm. Sup. Pisa Cl. Sci. (5) Vol. X (2011),
669–681.(arXiv:1011.3012).
[20]
M. Knežević and M. Mateljević: On the quasi-isometries of
harmonic quasiconformal mappings. Journal of Mathematical Analysis
and Applications, 334: 1 (2007), 404–413.
[21]
H. Kneser: Lösung der Aufgabe 41, Jahresber. Deutsch. Math.-Verein. 35 (1926) 123–124.
[22]
O. Kellogg: Harmonic functions and Green’s
integral. Trans. Amer. Math. Soc. 13 (1912), 109–132.
[23]
R. S. Laugesen: Planar harmonic maps with inner and Blaschke
dilatations. J. Lond. Math. Soc., II. Ser. 56, No.1, 37–48
(1997).
[24]
H. Lewy: On the non-vanishing of the Jacobian in certain
one-to-one mappings. Bull. Amer. Math. Soc. 42 (1936),
689–692.
[25]
F. D. Lesley and S. E. Warschawski: Boundary behavior of the
Riemann mapping function of asymptotically conformal curves. Math.
Z. 179 (1982), 299-323.
[26]
V. Manojlović: Bi-Lipschicity of quasiconformal harmonic
mappings in the plane. Filomat 23:1 (2009), 85–89.
[27]
O. Martio: On harmonic quasiconformal mappings. Ann. Acad.
Sci. Fenn., Ser. A I 425 (1968), 3-10.
[28]
M. Mateljević and M. Vuorinen:
On harmonic quasiconformal quasi-isometries. Journal of
Inequalities and Applications, 2010, Article ID 178732, 19
pages, DOI:10.1155/2010/1787;
[29]
C. Pommerenke: Univalent functions.
Vandenhoeck & Ruprecht, 1975.
[30]
C. Pommerenke and S.E. Warschawski: On the quantitative
boundary behavior of conformal maps. Comment. Math. Helv. 57
(1982), 107–129.
[31]
D. Partyka and K. Sakan:
On bi-Lipschitz type inequalities for quasiconformal harmonic
mappings. Ann. Acad. Sci. Fenn. Math. 32 (2007), 579–594.
[32]
M. Pavlović:
Boundary correspondence under harmonic quasiconformal
homeomorphisms of the unit disc. Ann. Acad. Sci. Fenn., 27
(2002), 365–372.
[33]
M. Pavlović: Introduction to function spaces on the disk.
20. Matematički Institut SANU, Belgrade, 2004. vi+184 pp.
[34]
R. L. Range: On a Lipschitz Estimate for Conformal Maps in the
Plane. Proceedings of the American Mathematical Society, 58:
1, (1976), 375–376.
[35]
F. Schulz: Univalent solutions of elliptic systems of
Heinz-Lewy type. Ann. Inst. H. Poincaré Anal. Non Linéaire 6 (1989), no. 5, 347–361.
[36]
R. Schoen and S-T. Yau: On univalent harmonic maps between
surfaces. Invent. Math. 44 (1978), no. 3, 265–278.
[37]
W. Rudin: Real and complex analysis. Third edition.
McGraw-Hill Book Co., New York, 1987. xiv+416 pp.
[38]
T. Wan: Constant mean curvature surface, harmonic maps, and
universal Teichmüller space. J. Differential Geom. 35: 3
(1992), 643–657.
[39]
S. E. Warschawski: On Differentiability at the Boundary in
Conformal Mapping. Proceedings of the American Mathematical
Society, 12:4 (1961), 614-620.
[40]
A. Weil: L’ Intégration dans les groupes topologiques et ses
applications, Actualités Sci et. Ind. 869, 1145, Hermann et Cie,
1941 and 1951.
[41]
A. Zygmund: Trigonometric Series I. Cambridge University
Press, 1958.
Transport properties of a multichannel Kondo dot in a magnetic field
Christoph B. M. Hörig
Dirk Schuricht
Institute for Theory of Statistical Physics,
RWTH Aachen, 52056 Aachen, Germany
JARA-Fundamentals of Future Information Technology
(November 20, 2020)
Abstract
We study the nonequilibrium transport through a multichannel Kondo quantum dot in
the presence of a magnetic field. We use the exact solution of the two-loop
renormalization group equation to derive analytical results for the $g$ factor,
the spin relaxation rates, the magnetization, and the differential conductance.
We show that the finite magnetization leads to a coupling between the conduction
channels which manifests itself in additional features in the differential conductance.
pacs: 73.63.-b, 71.10.-w,05.70.Ln, 05.10.Cc
Introduction.—The study of a localized spin coupled via an antiferromagnetic
exchange interaction $J$ to $K$ independent electronic reservoirs has a long history in
condensed matter physics [Hewson93]. In the simplest case of $K=1$ the
electron spins completely screen the local spin at low energies and thus lead
to a Fermi liquid. In a renormalization group (RG) analysis this situation is
characterized by the divergence of the renormalized exchange coupling $J(\Lambda)$
at the Kondo scale $\Lambda=T_{\text{K}}$. The situation is completely
changed [NozieresBlandin80] if the spin is coupled to more than one
screening channel ($K>1$). Then the renormalized exchange coupling stays finite and flows
to a non-trivial fixed point [NozieresBlandin80; Gan94] $J^{*}\sim 1/K$ at low energies,
which manifests itself in unusual non-Fermi liquid behavior like a non-integer “ground-state
degeneracy” or characteristic power laws in various
observables [AndreiDestri84prl; AffleckLudwig93].
The recent developments in the ability to engineer devices on the nanoscale lead to
the experimental realization [Potok-07] of two-channel Kondo physics in a
quantum dot set-up [OregGoldhaber-Gordon03]. In this set-up it was possible
to measure the differential conductance and observe universal scaling and
square-root behavior which are characteristic for the two-channel Kondo effect. This
triggered theoretical studies [2CK; MitraRosch11; Eflow] of the transport properties of
multichannel systems using conformal field theory as well as numerical and perturbative
RG methods. The latter uses a perturbative expansion in the renormalized exchange
coupling which is well-controlled provided $K\gg 1$. Specifically,
Mitra and Rosch [MitraRosch11] calculated the differential conductance, the splitting of the
Kondo resonance in the $T$ matrix, and the current-induced decoherence in the absence
of a magnetic field. Recently the spin dynamics was studied [Eflow] in the absence of a
bias voltage and shown to possess pure power-law decay with an exponent $g=4/K$.
In this Brief Report we extend the analysis of Mitra and Rosch [MitraRosch11]
to include an external magnetic field. We perform
a real-time RG (RTRG) analysis [Schoeller09] to derive the renormalized magnetic
field, the spin relaxation rates, the magnetization of the quantum dot, and the current.
In particular, we focus on inelastic cotunneling processes which lead to characteristic features
in the differential conductance whenever one of the applied bias voltages $V_{i}$ equals the
value of the renormalized magnetic field. We further show that the finite magnetization
on the dot leads to a coupling of the conduction channels which results in additional
features in the differential conductance.
Model.—We consider a quantum dot possessing a spin-1/2 degree of freedom,
which is coupled via an exchange interaction $J$ to $K$ independent electronic reservoirs.
At low energies each reservoir constitutes one screening channel for the local spin
thus leading to an over-screened situation for $K>1$. We will thus use the terms reservoir and
channel interchangeably.
Furthermore, the spin-1/2 is subject to an external magnetic field $h_{0}$.
Each reservoir consists of a left ($L$) and right ($R$) lead which are held at chemical
potentials $\mu_{\alpha}^{i}$, $\alpha=L,R$, $i=1,\ldots,K$ (see Fig. 1).
Specifically we consider the system
$$H=\sum_{i\alpha k\sigma}\epsilon_{k}\,c_{i\alpha k\sigma}^{\dagger}c_{i\alpha k\sigma}+h_{0}\,S^{z}+\frac{J_{0}}{2\nu_{0}}\sum_{i\alpha\alpha^{\prime}kk^{\prime}\sigma\sigma^{\prime}}\vec{S}\cdot\vec{\sigma}_{\sigma\sigma^{\prime}}\,c_{i\alpha k\sigma}^{\dagger}c_{i\alpha^{\prime}k^{\prime}\sigma^{\prime}}.$$
(1)
Here $c_{i\alpha k\sigma}^{\dagger}$ and $c_{i\alpha k\sigma}$ create and annihilate
electrons with momentum $k$ and spin $\sigma=\uparrow,\downarrow$ in lead $\alpha$
of the reservoir $i$, $\vec{\sigma}$ denotes the Pauli matrices, and $\vec{S}$ is the spin-1/2
operator on the dot. The antiferromagnetic exchange coupling $J_{0}>0$ is
dimensionless in our convention. We stress that the exchange term does not couple
different reservoirs. The chemical
potentials in the leads are parametrized by $\mu_{L/R}^{i}=\pm V_{i}/2$ thus applying a
bias voltage $V_{i}$ to each reservoir. Furthermore we introduce an ultra-violet cutoff $D$
in each reservoir via the density of states $N(\omega)=\nu_{0}D^{2}/(D^{2}+\omega^{2})$.
In the absence of a magnetic
field the transport properties of the model (1) have been previously studied
in Ref. [MitraRosch11].
RG analysis.—Following Ref. [Schoeller09], we have performed
a two-loop RTRG analysis including a consistent derivation of the relevant
relaxation rates. This method has been successfully applied to study transport
properties of other Kondo-type quantum dots in the past [Schoeller09; PS11].
The starting point is an RG equation for the renormalized exchange coupling
$J(\Lambda)$ obtained by integrating out the high-energy degrees of freedom in
the reservoirs. To accomplish this, one introduces a cutoff $\Lambda$ into the
Fermi function and integrates out the Matsubara poles on the imaginary axis by
decreasing the cutoff from its initial value $\Lambda_{0}\sim D$ down to some physical
energy scale. The RG equation for the $K$-channel model
(1) reads up to two loop
$$\Lambda\frac{d}{d\Lambda}J=\beta(J)=-2J^{2}(1-KJ),$$
(2)
which defines the reference solution $J(\Lambda)$ for our analysis.
The RG flow has the well-known [NozieresBlandin80] non-trivial fixed point $J^{*}=1/K$.
The scaling dimension of the leading irrelevant operator is $\Delta=\beta^{\prime}(J^{*})=2/K$,
which is valid for $K\gg 1$, while the exact result is given by [AffleckLudwig93]
$\Delta=2/(K+2)$. The RG equation possesses the invariant
$$T_{\text{K}}=\Lambda_{0}\left(\frac{eJ_{0}}{J^{*}-J_{0}}\right)^{K/2}e^{-1/(2J_{0})},$$
(3)
which defines the Kondo temperature. With the initial condition $J_{0}=J(\Lambda_{0})$
the solution of the RG equation can be explicitly written as
$$J(\Lambda)=\frac{J^{*}}{1+W(z)},\quad z=\left(\frac{\Lambda}{T_{\text{K}}}\right)^{\Delta}.$$
(4)
Here $W(z)$ denotes the Lambert W function [Corless96] defined by
$z=W(z)e^{W(z)}$, which satisfies $W^{\prime}(z)=W(z)/\bigl[z\,(1+W(z))\bigr]$ for $z\neq 0$. The fixed
point $J^{*}$ is reached for $\Lambda\to 0$ as $W(0)=0$. We note that the solution
(4) is valid for all $J_{0}$ but, of course, the derivation of (2)
requires $J_{0}\ll 1$. If $J_{0}<J^{*}$ we can perform the scaling limit $\Lambda_{0}\to\infty$
and $J_{0}\to 0$ while keeping the Kondo temperature constant. In this limit the solution
for $\Lambda\ll T_{\text{K}}$ simplifies to
$$J(\Lambda)=J^{*}\left[1-\left(\frac{\Lambda}{T_{\text{K}}}\right)^{\Delta}\right],$$
(5)
i.e. there is a characteristic power-law behavior. Observables are calculated in a
systematic expansion around the reference solution and can be expressed
in terms of the renormalized exchange coupling $J_{c}\equiv J(\Lambda_{c})$ at the physical
energy scale [scale] (we consider $T=0$)
$$\Lambda_{c}=\max\{V,h_{0}\}.$$
(6)
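The closed-form solution (4) and its power-law limit (5) are easy to check numerically. Below is a minimal sketch, using a hand-rolled Newton solver for the principal branch of the Lambert W function (scipy.special.lambertw would serve equally well); the values $K=8$, $T_{\text{K}}=1$ used in testing are illustrative only.

```python
import math

def lambert_w(z, tol=1e-12):
    """Principal branch of the Lambert W function for z >= 0,
    via Newton iteration on w*exp(w) = z."""
    w = math.log(1.0 + z)  # decent starting guess for z >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (1.0 + w))
        w -= step
        if abs(step) < tol:
            break
    return w

def J(lam, K, T_K):
    """Renormalized exchange coupling J(Lambda) of Eq. (4)."""
    J_star, Delta = 1.0 / K, 2.0 / K  # fixed point and scaling dimension
    z = (lam / T_K) ** Delta
    return J_star / (1.0 + lambert_w(z))
```

For $\Lambda\to 0$ this reproduces $J\to J^{*}=1/K$ since $W(0)=0$, and for $\Lambda\ll T_{\text{K}}$ it approaches the power-law form (5).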
In contrast to the one-channel Kondo model the existence of the attractive fixed
point $J^{*}=1/K$ implies that this expansion is well-defined for all $\Lambda_{c}$ provided
$K\gg 1$. We have calculated the effective dot Liouvillian and the current kernel
yielding the renormalized magnetic field, the spin relaxation rates, the dot magnetization,
and the current including the leading logarithmic corrections. All calculations follow
Ref. [Schoeller09]; we present here only the results and discuss their
properties.
Identical bias voltages.—First let us assume that all bias voltages are identical,
i.e. $V_{i}=V$. Straightforward calculation yields the renormalized magnetic field
$$h=\bigl{[}1-K(J_{c}-J_{0})\bigr{]}h_{0},$$
(7)
which gives for the $g$ factor in leading order (we consider the scaling limit $J_{0}\to 0$
from now)
$$g=2\frac{\partial h}{\partial h_{0}}=2(1-KJ_{c})\stackrel{\Lambda_{c}\ll T_{\text{K}}}{=}2\left(\frac{\Lambda_{c}}{T_{\text{K}}}\right)^{\Delta}.$$
(8)
The longitudinal and transverse spin relaxation rates are
$$\Gamma_{1}=\pi\left[h+\frac{1}{2}\big(\left|V-h\right|_{2}+V+h\big)\right]KJ_{c}^{2},$$
(9)
$$\Gamma_{2}=\frac{\pi}{2}\left[V+h+\frac{1}{2}\big(\left|V-h\right|_{1}+V+h\big)\right]KJ_{c}^{2},$$
(10)
with
$$\left|x\right|_{l}=\frac{2}{\pi}x\arctan\frac{x}{\Gamma_{l}},\quad l=1,2.$$
(11)
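Since $|x|_{2}$ is cut off by $\Gamma_{2}$, which enters $\Gamma_{1}$, and $|x|_{1}$ by $\Gamma_{1}$, which enters $\Gamma_{2}$, Eqs. (9)–(11) determine the two rates self-consistently. A minimal sketch of a joint fixed-point iteration (parameter values used for testing are illustrative, not from the paper):

```python
import math

def smooth_abs(x, gamma):
    """|x|_l of Eq. (11): absolute value smoothed on the scale gamma."""
    return (2.0 / math.pi) * x * math.atan(x / gamma)

def relaxation_rates(V, h, K, Jc, n_iter=200):
    """Solve Eqs. (9)-(10) self-consistently by fixed-point iteration.

    Gamma_1 contains |V-h|_2 (cut off by Gamma_2) and Gamma_2 contains
    |V-h|_1 (cut off by Gamma_1), so both rates are iterated jointly
    starting from a small positive guess.
    """
    g1 = g2 = 1e-6 * max(V + h, 1.0)
    for _ in range(n_iter):
        g1_new = math.pi * (h + 0.5 * (smooth_abs(V - h, g2) + V + h)) * K * Jc**2
        g2_new = 0.5 * math.pi * (V + h + 0.5 * (smooth_abs(V - h, g1) + V + h)) * K * Jc**2
        g1, g2 = g1_new, g2_new
    return g1, g2
```

At resonance $V=h$ the smoothed absolute values vanish and the iteration terminates immediately at $\Gamma_{1}=2\pi KJ_{c}^{2}h$ and $\Gamma_{2}=\tfrac{3\pi}{2}KJ_{c}^{2}h$.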
We note that for $\Lambda_{c}\gg T_{\text{K}}$ the renormalization of the magnetic field
(7) is much stronger than in the one-channel Kondo model, as the spin on the dot
is coupled to more screening channels.
Explicit formulas for the dot magnetization and the current are given in Eq. (14)
below for the case of different bias voltages. Simplifying to $V_{i}=V$ we obtain the
differential conductance $G_{i}=dI_{i}/dV_{i}$ per channel, which is plotted in
Fig. 2. The conductance sharply increases around
$V=h$ where inelastic cotunneling processes start to contribute. Due to the strong
renormalization of the magnetic field the increase occurs at voltages much smaller than the
applied magnetic field $h_{0}$. Close to the resonance we find
$$G_{i}=\begin{cases}\frac{\pi}{4}J_{c}^{2}\bigl[1+2J_{c}\mathcal{L}_{2}(V-h)\bigr]&\text{for }V<h,\\
\pi J_{c}^{2}\bigl[1+2\pi J_{c}\mathcal{L}_{2}(V-h)\bigr]&\text{for }V>h,\end{cases}$$
(12)
where we assumed $|V-h|\ll h$ and defined
$$\mathcal{L}_{l}(x)=\ln\frac{\Lambda_{c}}{\sqrt{x^{2}+\Gamma_{l}^{2}}},\quad l=1,2.$$
(13)
We note that (12) is formally identical to the differential conductance
in the one-channel Kondo model [Schoeller09] except for the functional form of the
renormalized coupling (5). In particular, there is no explicit dependence
on the number of channels. We further note that close to the fixed point
the conductance (12) is a quantity of order $1/K^{2}$.
The logarithmic divergencies at $V=h$ are cut off by the transverse spin relaxation rate
$\Gamma_{2}$. Thus, as $\Gamma_{2}\sim 1/K$, this feature becomes sharper with increasing
$K$. At small voltages $V<h$ the conductance is independent of the bias voltage and
given by $G_{i}=\pi J_{c}^{2}/4$. In particular, a power-law behavior in the voltage is only
found for vanishing magnetic field [MitraRosch11], while (5)
yields a power-law dependence on the magnetic field. In the limit of a large field
$h\gg T_{\text{K}}$ the linear conductance can be derived using [Corless96]
$W(z)\sim\ln z$ for $z\to\infty$ to be $G_{i}(V=0,h)=\pi/\bigl[16\ln^{2}(h/T_{\text{K}})\bigr]$,
which is identical to the result in the one-channel model.
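As a rough numerical check of this limit (illustrative parameters; the relative deviation between the exact and asymptotic forms decays only logarithmically in $h/T_{\text{K}}$, so only order-of-magnitude agreement is asserted):

```python
import math

def lambert_w(z, tol=1e-12):
    """Principal branch of Lambert W for z >= 0 (Newton iteration)."""
    w = math.log(1.0 + z)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (1.0 + w))
        w -= step
        if abs(step) < tol:
            break
    return w

def G_zero_bias(h, K, T_K):
    """Exact linear conductance G_i(V=0,h) = pi*J_c^2/4 with J_c = J(h) from Eq. (4)."""
    Jc = (1.0 / K) / (1.0 + lambert_w((h / T_K) ** (2.0 / K)))
    return math.pi * Jc**2 / 4.0

def G_asymptotic(h, T_K):
    """Large-field form pi/[16 ln^2(h/T_K)], independent of the channel number K."""
    return math.pi / (16.0 * math.log(h / T_K) ** 2)
```

Both expressions agree to within a factor of order unity already for $h/T_{\text{K}}=10^{6}$, and the conductance decreases monotonically with $h$.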
Different bias voltages.—In the following we relax the condition $V_{i}=V$ and
consider channel-dependent bias voltages $V_{i}$. This introduces further energy
scales which will give rise to additional features in the dot magnetization and thus the
differential conductance. In this general set-up the magnetization
and current through the $i$th channel are given by
$$M=-\frac{f_{1}}{2f_{2}},\quad I_{i}=f_{I}^{i}+2Mf_{M}^{i}$$
(14)
where $f_{1,2}$ denote rates appearing in the Liouvillian and $f^{i}_{I,M}$ are similar
terms in the current kernel. Explicitly these rates read
$$f_{1}=2\pi KJ_{c}^{2}h+4\pi KJ_{c}^{3}h\,\mathcal{L}_{2}(h)-2\pi J_{c}^{3}\sum\nolimits_{j=1}^{K}(V_{j}-h)\,\mathcal{L}_{2}(V_{j}-h),$$
(15)
$$f_{2}=\pi KJ_{c}^{2}h+2\pi KJ_{c}^{3}h\,\mathcal{L}_{2}(h)+\frac{\pi}{2}J_{c}^{2}\sum\nolimits_{j=1}^{K}\big(\left|V_{j}-h\right|_{2}+V_{j}+h\big)+\pi J_{c}^{3}\sum\nolimits_{j=1}^{K}\big(\left|V_{j}-h\right|_{2}-V_{j}+h\big)\,\mathcal{L}_{2}(V_{j}-h),$$
(16)
$$f_{I}^{i}=\frac{3}{4}\pi J_{c}^{2}V_{i}+\pi J_{c}^{3}V_{i}\,\mathcal{L}_{1}(V_{i})+\pi J_{c}^{3}(V_{i}-h)\,\mathcal{L}_{2}(V_{i}-h),$$
(17)
$$f_{M}^{i}=-\frac{\pi}{4}J_{c}^{2}\big(\left|V_{i}-h\right|_{2}-V_{i}-h\big)+\pi J_{c}^{3}V_{i}\,\mathcal{L}_{1}(V_{i})+\pi J_{c}^{3}h\,\mathcal{L}_{2}(h)-\frac{\pi}{2}J_{c}^{3}\left|V_{i}-h\right|_{2}\,\mathcal{L}_{2}(V_{i}-h).$$
(18)
The relaxation rates for the case of different bias
voltages are obtained by straightforward generalization of
Eqs. (9) and (10).
In the derivation of Eqs. (15)–(18)
we have neglected all terms of order $J_{c}^{3}$ that
do not contain logarithms at either $h=0$, $V=0$, or $V=h$. Thus when calculating the
magnetization one has to expand consistently up to this order.
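Eqs. (14)–(18) can be transcribed directly into code. In the sketch below the relaxation rates $\Gamma_{1},\Gamma_{2}$ are treated as fixed infrared cutoffs rather than solved self-consistently, and all parameter values used for testing are illustrative; at zero bias it reproduces full polarization $M\to-1/2$ and vanishing currents.

```python
import math

def magnetization_and_currents(Vs, h, K, Jc, Lam_c, G1=0.01, G2=0.01):
    """Magnetization M and channel currents I_i from Eqs. (14)-(18).

    Vs is the list of channel bias voltages V_1..V_K; Gamma_1, Gamma_2
    enter only as fixed smoothing cutoffs here (an assumption made for
    simplicity; the paper determines them self-consistently).
    """
    def sa(x, g):   # smoothed |x|_l of Eq. (11)
        return (2.0 / math.pi) * x * math.atan(x / g)
    def L(x, g):    # logarithm L_l(x) of Eq. (13)
        return math.log(Lam_c / math.sqrt(x * x + g * g))

    f1 = (2 * math.pi * K * Jc**2 * h + 4 * math.pi * K * Jc**3 * h * L(h, G2)
          - 2 * math.pi * Jc**3 * sum((V - h) * L(V - h, G2) for V in Vs))
    f2 = (math.pi * K * Jc**2 * h + 2 * math.pi * K * Jc**3 * h * L(h, G2)
          + 0.5 * math.pi * Jc**2 * sum(sa(V - h, G2) + V + h for V in Vs)
          + math.pi * Jc**3 * sum((sa(V - h, G2) - V + h) * L(V - h, G2) for V in Vs))
    M = -f1 / (2.0 * f2)                       # Eq. (14)

    Is = []
    for V in Vs:
        fI = (0.75 * math.pi * Jc**2 * V + math.pi * Jc**3 * V * L(V, G1)
              + math.pi * Jc**3 * (V - h) * L(V - h, G2))
        fM = (-0.25 * math.pi * Jc**2 * (sa(V - h, G2) - V - h)
              + math.pi * Jc**3 * V * L(V, G1) + math.pi * Jc**3 * h * L(h, G2)
              - 0.5 * math.pi * Jc**3 * sa(V - h, G2) * L(V - h, G2))
        Is.append(fI + 2.0 * M * fM)           # Eq. (14)
    return M, Is
```

The differential conductance $G_{i}=dI_{i}/dV_{i}$ can then be obtained by numerical differentiation of the returned currents.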
We stress that although the different channels are not directly coupled via exchange
interactions, the finite magnetization $M$ mediates a feedback between them. Consider
for example the differential conductance of channel 1, i.e. the black curve in
Fig. 3. The sharp increase at $V=V_{1}=h$ is again due to
the onset of inelastic cotunneling processes. However, there is a second feature
at $V\approx 2h$ which is caused by the non-trivial voltage dependence of the dot
magnetization. This can be seen in the derivative $\partial M/\partial V$ shown in
Fig. 4, which directly enters the differential conductance
[see (14)]. Similarly, the conductance in channel 6 (green curve in
Fig. 3) possesses a feature at $V=2V_{6}=h$ which is caused by
the effect of the applied voltage in channels 1 to 5 onto $M$, while the increase at
$V_{6}=h$ (i.e. $V=2h$) is due to the onset of inelastic cotunneling in channel 6.
In this way the nonequilibrium magnetization introduces additional features into
the individual conductances. We stress that such coupling effects between the
channels are absent for vanishing magnetic field [MitraRosch11] or if all applied
bias voltages are identical (see Fig. 2).
As a special case of the general set-up with channel-dependent bias voltages we
can recover the experimental situation realized by Potok et al. [Potok-07]
in a semiconductor quantum dot. This is achieved by setting the chemical potentials
to $\mu_{L/R}^{1}=\pm V/2$ and $\mu_{L/R}^{i}=0$ for $i=2,\ldots,K$. In each of the
channels $2,\ldots,K$ we introduce even and odd combinations of the
electron operators, $(c_{iLk\sigma}\pm c_{iRk\sigma})/\sqrt{2}$, such that the
even combinations couple to the spin on the dot with exchange interaction $2J$
while the odd ones decouple completely. The resulting RG equation for $J$ is given
by (2) and thus possesses the fixed point $J^{*}$.
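The even/odd decoupling described above amounts to diagonalizing the $2\times 2$ lead-space coupling matrix $J\left(\begin{smallmatrix}1&1\\ 1&1\end{smallmatrix}\right)$ implied by the structure of Eq. (1), whose eigenvalues are $2J$ (even combination) and $0$ (odd combination). A toy check:

```python
import math

def lead_eigenmodes(J):
    """Eigenmodes of the lead-space coupling matrix [[J, J], [J, J]].

    The exchange term in Eq. (1) couples the L and R leads of one reservoir
    with equal amplitude J. Diagonalizing the symmetric 2x2 matrix shows that
    the even combination (c_L + c_R)/sqrt(2) couples with strength 2J while
    the odd combination decouples (eigenvalue 0).
    """
    a = b = c = J                              # matrix [[a, b], [b, c]]
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr - 4.0 * det)
    lam_plus, lam_minus = (tr + disc) / 2.0, (tr - disc) / 2.0
    even = (1 / math.sqrt(2), 1 / math.sqrt(2))
    odd = (1 / math.sqrt(2), -1 / math.sqrt(2))
    return lam_plus, lam_minus, even, odd
```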
The experimental set-up [Potok-07] is now obtained by specializing to $K=2$.
In the presence of a magnetic field the resulting differential conductance is very
similar to the case of identical bias voltages shown in Fig. 2.
In particular, since the second channel does not provide an additional energy scale
there appear no features in the differential conductance beside the cotunneling peak
at $V=h$. A possible experimental set-up to observe the additional features shown
in Fig. 3 thus requires at least two channels with
non-zero and different bias voltages.
Conclusions.—To sum up, we have studied the nonequilibrium transport properties
of a multichannel Kondo quantum dot in the presence of a magnetic field. We used
the solution of the two-loop RG equation to derive analytical results for the
$g$ factor, the spin relaxation rates, the dot magnetization, and the
differential conductance. The latter shows typical features of inelastic cotunneling.
We showed that the main difference to the previously studied [MitraRosch11] situation
without magnetic field is the appearance of additional features in the
differential conductance, which originate in the feedback between the
channels mediated by the finite dot magnetization.
We thank S. Andergassen, A. Rosch, and H. Schoeller for useful discussions.
This work was supported by the German Research Foundation (DFG) through the
Emmy-Noether Program.
References
(1)
A. C. Hewson, The Kondo Problem to Heavy Fermions (Cambridge University
Press, Cambridge, 1993).
(2)
P. Nozières and A. Blandin, J. Phys. France 41, 193 (1980).
(3)
J. Gan, N. Andrei, and P. Coleman, Phys. Rev. Lett. 70, 686 (1993);
J. Gan, J. Phys.: Condens. Matter 6, 4547 (1994).
(4)
N. Andrei and C. Destri, Phys. Rev. Lett. 52, 364 (1984);
A. M. Tsvelick and P. B. Wiegmann, Z. Phys. B 54, 201 (1984);
I. Affleck and A. W. W. Ludwig, Phys. Rev. Lett. 67, 161 (1991).
(5)
A. W. W. Ludwig and I. Affleck, Phys. Rev. Lett. 67, 3160 (1991);
I. Affleck and A. W. W. Ludwig, Phys. Rev. B 48, 7297 (1993).
(6)
R. M. Potok, I. G. Rau, H. Shtrikman, Y. Oreg, and D. Goldhaber-Gordon, Nature (London)
446, 167 (2007).
(7)
Y. Oreg and D. Goldhaber-Gordon, Phys. Rev. Lett. 90, 136602 (2003).
(8)
A. I. Tóth, L. Borda, J. von Delft, and G. Zaránd, Phys. Rev. B 76, 155318 (2007);
R. Žitko and J. Bonča, ibid. 77, 245112 (2008);
E. Sela and I. Affleck, Phys. Rev. Lett. 102, 047201 (2009);
E. Vernek, C. A. Büsser, G. B. Martins, E. V. Anda, N. Sandler, and S. E. Ulloa,
Phys. Rev. B 80, 035119 (2009);
A. K. Mitchell and D. E. Logan, ibid. 81, 075126 (2010);
E. Sela, A. K. Mitchell, and L. Fritz, Phys. Rev. Lett. 106, 147202 (2011);
A. K. Mitchell, D. E. Logan, and H. R. Krishnamurthy, Phys. Rev. B 84, 035119 (2011);
A. K. Mitchell, E. Sela, and D. E. Logan, Phys. Rev. Lett. 108, 086405 (2012);
A. K. Mitchell and E. Sela, e-print arXiv:1203.4456.
(9)
A. Mitra and A. Rosch, Phys. Rev. Lett. 106, 106402 (2011).
(10)
M. Pletyukhov and H. Schoeller, e-print arXiv:1201.6295.
(11)
H. Schoeller, Eur. Phys. J. Special Topics 168, 179 (2009);
H. Schoeller and F. Reininghaus, Phys. Rev. B 80, 045117 (2009);
ibid. 80, 209901(E) (2009).
(12)
D. Schuricht and H. Schoeller, Phys. Rev. B 80, 075120 (2009);
M. Pletyukhov, D. Schuricht, and H. Schoeller, Phys. Rev. Lett. 104, 106801 (2010);
M. Pletyukhov and D. Schuricht, Phys. Rev. B 84, 041309(R) (2011);
C. B. M. Hörig, D. Schuricht, and S. Andergassen, ibid. 85, 054418 (2012).
(13)
R. M. Corless, G. H. Gonnet, D. E. G. Hare, D. J. Jeffrey, and D. E. Knuth, Adv.
Comput. Math. 5, 329 (1996).
(14)
For the numerical evaluation we use $\Lambda_{c}=\sqrt{V^{2}+h_{0}^{2}}$.
Video Frame Interpolation with Stereo Event and Intensity Cameras
Chao Ding, Mingyuan Lin, Haijian Zhang, Jianzhuang Liu, and Lei Yu
Chao Ding, Mingyuan Lin, Haijian Zhang, and Lei Yu are with the School of Electronic Information, Wuhan University, Wuhan 430072, China. Email: {dingchao, linmingyuan, haijian.zhang, ly.wd}@whu.edu.cn. Jianzhuang Liu is with the Huawei Noah’s Ark Lab, Shenzhen 518000, China. Email: [email protected]. This research was partially supported by the National Natural Science Foundation of China under Grants 62271354 and 61871297. Corresponding authors: Lei Yu and Haijian Zhang.
Abstract
The stereo event-intensity camera setup is widely applied to leverage the advantages of both event cameras with low latency and intensity cameras that capture accurate brightness and texture information. However, such a setup commonly encounters cross-modality parallax that is difficult to eliminate solely with stereo rectification, especially for real-world scenes with complex motions and varying depths, causing artifacts and distortion in existing Event-based Video Frame Interpolation (E-VFI) approaches.
To tackle this problem, we propose a novel Stereo Event-based VFI (SE-VFI) network (SEVFI-Net) to generate high-quality intermediate frames and corresponding disparities from misaligned inputs consisting of two consecutive keyframes and event streams emitted between them.
Specifically, we propose a Feature Aggregation Module (FAM) to alleviate the parallax and achieve spatial alignment in the feature domain. We then exploit the fused features to accomplish accurate optical flow and disparity estimation, and to achieve better interpolation results through flow-based and synthesis-based approaches.
We also build a stereo visual acquisition system composed of an event camera and an RGB-D camera to collect a new Stereo Event-Intensity Dataset (SEID) containing diverse scenes with complex motions and varying depths.
Experiments on public real-world stereo datasets, i.e., DSEC and MVSEC, and our SEID dataset demonstrate that our proposed SEVFI-Net outperforms state-of-the-art methods by a large margin. The code and dataset are available at https://dingchao1214.github.io/web_sevfi/.
Index Terms:
Stereo event-intensity camera, video frame interpolation, stereo matching, stereo event-intensity dataset.
I Introduction
Stereo event-intensity camera setup allows us to fully perceive the dynamic contents in the scene and has been widely applied in existing depth estimation and stereo matching algorithms[1, 2, 3, 4]. In this setup, the event camera records continuous motion information with extremely high temporal resolution and low power consumption, while the intensity camera captures precise scene brightness and texture information.
However, this setup suffers from the cross-modality parallax issue, especially in real-world scenes with complex non-linear motions and varying depths [5, 6]. As a result, it can significantly degenerate the performance of existing Event-based Video Frame Interpolation (E-VFI) approaches, most of which rely on simulation datasets and require per-pixel spatial alignment between events and frames [7, 8, 9, 10, 11, 12], leading to artifacts and distortions with the stereo event-intensity camera setup as shown in Fig. 1.
Achieving per-pixel cross-modal alignment is extremely challenging for the Stereo E-VFI (SE-VFI) task due to the missing inter-frame intensities and the modality differences. Although existing works [7, 5] directly apply global homography and stereo rectification, this is only valid for scenes either with a large depth [5] or within a plane [7]. Complex motions in real-world dynamic scenes often lead to temporally varying depths, violating the planar homography assumption and thus causing cross-modal misalignments. The correspondence between events and frames is therefore necessary to fulfill SE-VFI, which is, however, an ill-posed problem owing to the missing inter-frame intensities. On the other hand, event cameras work in a completely different way from intensity cameras, only responding to brightness changes and asynchronously emitting binary events. Despite the implicit inclusion of scene structure and texture information in events, events are often triggered only at the edges of objects with high intensity contrast. These modality differences bring another challenge to establishing spatial correspondence between events and frames.
To handle these problems, we propose a network called SEVFI-Net for the Stereo Event-based Video Frame Interpolation (SE-VFI) task to fully exploit the potential of E-VFI in real-world scenarios. SE-VFI aims to generate high-quality intermediate frames and cross-modal disparities using spatially misaligned events and frames.
Our SEVFI-Net consists of four main modules, i.e., AlignNet, SynNet, FusionNet, and RefineNet.
Specifically, the AlignNet is built by composing a weighted dual encoder and a Feature Aggregation Module (FAM) to mitigate the modality gap and spatial misalignment between events and frames.
The dual encoder enables us to extract multi-scale features from different modalities separately. To mitigate the spatial misalignment, we employ deformable convolution networks[14] in FAM to estimate the spatial correspondence of different data and obtain aligned features. These features and correspondence enable us to achieve a coarse estimation of motion flows and disparities.
Then, we employ motion flows to warp the boundary frames, resulting in flow-based results. Similarly, we utilize disparities to warp the original events, providing contour constraints for synthesis-based results.
However, it is important to note that these results are only approximations due to the lack of precise inter-frame intensities and the differences in modalities. As a result, the fused result of them may contain overlaps and distortions.
To address these issues, we incorporate a RefineNet to eliminate the defects in the fused results and further optimize the estimations from motion flows and disparities. By doing so, we can effectively handle the challenges posed by modality differences and spatial misalignments.
Additionally, current stereo datasets such as the Stereo Event Camera Dataset for Driving Scenarios (DSEC) [5] and the Multi-Vehicle Stereo Event Camera Dataset (MVSEC) [6] capture scenes of limited diversity,
and hence cannot effectively evaluate the performance of algorithms across various scenes and a wide range of depth variations.
Thus we build a stereo visual acquisition system containing a SilkyEvCam event camera together with an Intel Realsense D455 RGB-D camera and collect a new Stereo Event-Intensity Dataset (SEID). Our SEID captures dynamic scenes with complex non-linear motions and depth variations that pose a challenge for SE-VFI. It provides events with a higher resolution of $640\times 480$ and high-quality RGB frames at the same resolution. Besides, it also provides depth maps synchronized with the RGB frames, which can be used for the stereo-matching task.
The main contributions of this paper are three-fold:
•
We propose a novel SEVFI-Net framework for the task of E-VFI when frames and events are captured with stereo camera settings in spatially misaligned scenes.
•
We collect a new stereo event-intensity dataset, containing high-resolution events, high-quality RGB frames, and synchronous depth maps captured in various scenes with complex motions and varying depths.
•
Extensive experiments show that our SEVFI-Net yields high-quality frames and accurate disparities, and achieves state-of-the-art results on real-world stereo event-intensity datasets.
II Related Work
II-A Frame-based VFI (F-VFI)
Video Frame Interpolation (VFI) is a widely investigated low-level task in computer vision, aiming to restore intermediate frames from the neighboring images in the video sequence [15]. However, it is severely ill-posed and the primary challenge is caused by the missing inter-frame information in terms of motions and textures. To relieve the burden, existing VFI approaches commonly rely on inter-frame motion prediction from neighboring frames [16], and thus can be roughly categorized into warping-based and kernel-based.
Warping-based methods [17, 18, 19, 20, 21] utilize optical flow that perceives motion information between consecutive frames and captures dense correspondences.
High-quality VFI results often depend on precise motion estimation.
Due to the lack of inter-frame motion information, these methods are typically under the assumption of linear motion and brightness constancy between keyframes.
Some techniques and information have been utilized to enhance the interpolation performance, e.g., forward warping[17], transformer[18], context[19], depth[20], patch-based [22, 21] and deformable convolution[23, 24].
But due to the assumption of linear motion, these methods encounter challenges when dealing with complex non-linear motions in real-world scenarios. Some approaches design complex high-order motion models, such as cubic[25, 26] and quadratic[27], to address non-linear motions.
However, they need more neighboring frames as inputs to estimate motion models and still struggle in tackling real-world complex motions.
Kernel-based methods[15, 28, 29] incorporate both motion estimation and frame reconstruction. They utilize the input frames to estimate convolution kernels, which encode local motions across the input keyframes and generate intermediate frames based on these kernels.
Compared to warping-based methods, kernel-based ones are more effective in dealing with occlusion, sudden brightness change, and blurry inputs.
But they incur high computational costs for per-pixel convolution kernels and struggle to capture large motions due to their fixed, small kernel sizes.
The common problem of Frame-based VFI (F-VFI) methods is the missing information between input frames.
Motion models or convolution kernels can therefore only be estimated from consecutive neighboring frames, which limits the types of real-world scenes these methods can cope with.
II-B Event-based VFI (E-VFI)
Benefiting from extremely low latency, the event cameras can supplement the inter-frame information by emitting events asynchronously in response to the brightness change[30, 31]. Many works focus on video reconstruction solely with events, which can be seen as a type of VFI. These methods are mainly based
on RNNs [32, 33, 34] or GANs [35, 36]. However, this task is still ill-posed since events only record brightness changes in the scenes, making it difficult to predict accurate brightness values from them.
Recently, event cameras have been adopted for high-quality VFI [7, 8, 9, 11, 12, 10]. Thanks to the extremely high temporal resolution, events can provide reliable motion prediction for dynamic scenes even with non-linear motions, resulting in better results than conventional F-VFI approaches.
Tulyakov et al. [7] propose a network Time Lens that leverages events to compensate for inter-frame motions and textures in a hybrid manner, i.e., warping plus synthesis.
Further, Time Lens is extended to Time Lens++ where a motion spline estimator is introduced to predict reliable non-linear continuous flow from sparse events [8].
Instead of supervised learning, He et al. [9] design an unsupervised learning framework by applying cycle consistency to bridge the gap between synthesized and real-world events. In addition, Gao et al. [11] propose an SNN-based fast-slow joint synthesis framework, i.e., SuperFast, for the high-speed E-VFI task. Xiao et al. [12] analyze the drawbacks of existing methods and introduce a novel method named EVA${}^{2}$ for E-VFI via cross-modal alignment and aggregation.
However, existing E-VFI approaches largely rely on synthetic data, restricted by per-pixel spatial alignment and ideal imaging without intense motion and sudden brightness change, which is difficult to fulfill in real-world applications since the events and frames are usually captured separately by an event camera and an intensity camera [7, 5].
The misalignment between frames and events cannot be eliminated by global homography or stereo rectification, and inevitably produces cross-modal parallax and poses additional challenges to the VFI task, especially when dynamic scenes with complex motions and varying depths are encountered.
Therefore, our work focuses on handling the parallax between events and frames in SE-VFI, effectively utilizing events to compensate for inter-frame information and achieve high-quality video frame interpolation.
II-C Stereo Matching
Stereo matching is the process of establishing pixel-to-pixel correspondences between two different views on the epipolar line. Many existing stereo matching methods are based on neural networks and leverage several advanced techniques to improve their performance, e.g., encoder-decoder architecture [37], 3D convolution cost-volume module [38, 39, 40, 41], adaptive sample aggregation module [42], attention [43], transformer [44], and unsupervised learning [45]. However, frame-to-frame stereo matching methods are based on the assumption of high-quality imaging under ideal conditions, where there are no fast motion and high-dynamic-range scenes.
In recent works, event cameras with high-speed and high-dynamic-range imaging capabilities have been utilized to improve the performance of stereo-matching algorithms in various complex scenes. Due to the fact that event and intensity cameras perceive the same light field, the edge information extracted from events and intensity images can be correlated to calculate the sparse disparity map [4, 3]. Additionally, some learning-based methods are proposed to achieve dense disparity estimation. Specifically, Zou et al. [2] introduce an hourglass architecture with a pyramid attention module, extracting multi-scale features and performing fusion by using convolutional kernels of different sizes. Gu et al. [1] propose a self-supervised learning framework, achieving data matching and disparity estimation by establishing gradient structure consistency between frames and events.
However, existing stereo-matching methods are limited by the frame rate of the input frame sequence.
In contrast, our method aims to estimate the corresponding disparity map during video frame interpolation, allowing the resulting disparity sequence to be decoupled from the frame rate of the input sequence. This enables a more comprehensive and effective representation of depth variations in the scene.
III Problem Formulation
Conventional Frame-based VFI (F-VFI) methods aim at reconstructing intermediate frames between two keyframes:
$$I_{t}=\operatorname{F-VFI}(t;I_{0},I_{1}),\quad t\in[0,1],$$
(1)
where $I_{0},I_{1}$ are input keyframes and $I_{t}$ is the target intermediate frame at a normalized time $t\in[0,1]$. However, due to the lack of motion information between consecutive input keyframes, most F-VFI methods are built on simplified assumptions, e.g., linear motions [17, 18, 19, 20] or local movements [15, 28], leading to performance degradation in real-world scenarios.
To tackle this problem, E-VFI methods are proposed by predicting inter-frame motion information with events [7, 8, 9, 12, 11], achieving better interpolation performance than F-VFI methods.
They generate intermediate frames with the help of events triggered between input keyframes, and thus can model motions close to the real trajectory:
$$I_{t}^{\Omega_{f}}=\operatorname{E-VFI}(t;I_{0}^{\Omega_{f}},I_{1}^{\Omega_{f}};E_{0\rightarrow 1}^{\Omega_{f}}),\quad t\in[0,1],$$
(2)
where $\Omega_{f}$ represents the intensity camera’s image plane, implying an assumption that the frames and events are aligned at the pixel level. However, this assumption is not always valid in existing stereo frame-event camera setups.
Parallax tends to exist between the two modalities, especially in foreground regions with small depths.
If we mix the unaligned events and frames directly for E-VFI, the parallax between them burdens motion estimation, leading to severe artifacts and distortion in the reconstruction result.
To tackle this problem, we introduce a novel framework for SE-VFI with the misaligned event and frame data:
$$\displaystyle I_{t}^{\Omega_{f}},\mathcal{D}_{t}^{\Omega_{f}\rightarrow\Omega_{e}}=\operatorname{SE-VFI}(t;I_{0}^{\Omega_{f}},I_{1}^{\Omega_{f}};E_{0\rightarrow 1}^{\Omega_{e}}),\quad t\in[0,1]$$
(3)
where $\Omega_{f}$ and $\Omega_{e}$ represent different image planes of intensity and event cameras respectively, and $\mathcal{D}_{t}^{\Omega_{f}\rightarrow\Omega_{e}}$ denotes the disparity from $\Omega_{f}$ to $\Omega_{e}$. The key to SE-VFI is to deal with the parallax between events and frames by establishing a spatial data correlation, complementing the missing motion information between frames using events to realize the reconstruction of intermediate frames, and further estimating the disparity maps between events and frames from the spatial data correlation.
IV Method
In this section, we describe the proposed SEVFI-Net that can generate intermediate frames and disparities from misaligned events and frames.
IV-A Pipeline Overview
Our goal is to generate one or more intermediate frames $I_{t}^{\Omega_{f}}$ from consecutive keyframes $I_{0}^{\Omega_{f}},I_{1}^{\Omega_{f}}$ in one view along with concurrent events $E_{0\rightarrow 1}^{\Omega_{e}}$ in the other.
As shown in Fig. 2, SEVFI-Net contains four subnets: AlignNet, SynNet, FusionNet, and RefineNet. The main issue in SE-VFI is handling the parallax between the input events and frames, so we place AlignNet first: it receives the original data and mitigates the modality gap and spatial misalignment with the help of the key FAM. Specifically, we use the FAM to establish the spatial correlation, represented as an offset between the inputs, and obtain the aligned feature. The aligned feature and the offset are then fed into two decoders to estimate bi-directional optical flow and disparity separately. Afterward, we split the VFI process into two pathways: the flow-based and the syn-based. The flow-based results demonstrate excellent performance in regions with small motions, while the syn-based results excel in regions with sufficient events.
Then these results are input to FusionNet to produce a high-quality output, which is further refined by RefineNet to improve the reconstruction quality.
IV-B AlignNet
Unlike conventional E-VFI methods, which usually concatenate frames and events as input, SE-VFI focuses more on reducing the parallax effect in the data. Considering the modality discrepancy and the misalignment problem, AlignNet is based on an hourglass architecture [46] using two encoders and two decoders with similar structures but unshared weights as illustrated in Fig. 3.
Assuming that we need to reconstruct the image at time $t$, we first divide the event stream into two parts and process them with time shift and polarity reversal [47, 7] to get $E_{t\rightarrow 0}^{\Omega_{e}},E_{t\rightarrow 1}^{\Omega_{e}}$,
which we represent with the voxel grid [48].
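The split-and-reverse step above can be sketched as follows. This is a rough illustration in NumPy, not the authors' implementation; the tuple layout and field ordering are assumptions.

```python
import numpy as np

def split_and_reverse(events, t):
    """Split an event stream at target time t into E_{t->0} and E_{t->1}.

    events: tuple of arrays (x, y, ts, p) with ts normalized to [0, 1] and
    p in {-1, +1}. The earlier half is time-shifted and polarity-reversed so
    that both halves "flow" away from t, following the trick referenced in
    the paper ([47, 7]). The exact data layout here is an assumption.
    """
    x, y, ts, p = events
    before = ts <= t
    after = ~before
    # E_{t->0}: reverse time (t - ts) and flip polarity
    e_t0 = (x[before], y[before], t - ts[before], -p[before])
    # E_{t->1}: shift the time origin to t, keep polarity
    e_t1 = (x[after], y[after], ts[after] - t, p[after])
    return e_t0, e_t1
```

Both halves would then be binned into voxel grids before entering the event encoder.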
The frames and events are initially fed into a Shallow Feature Extractor (SFE) to obtain full-scale features. Subsequently, they are passed through Downsample layers and Resblocks to obtain features at different scales.
However, the pivotal issue is the parallax between the inputs, so we design the FAM to tackle it. Inspired by [14, 49], in FAM (see Fig. 3), we first concatenate the features $F_{i}$ and $F_{e}$, and then input them to an SFE to compute their spatial data correlation, which is represented as the offset. Since deformable convolution can sample flexible locations, we utilize the offset to guide the deformable convolution layer for feature alignment. We set the feature from the image encoder $F_{i}$ as the reference. Then the offset provides the data association between $F_{i}$ and $F_{e}$, guiding $F_{e}$ to align with $F_{i}$. Afterward, we concatenate the deformed feature with the input $F_{i}$ to obtain the aligned feature $F_{f}$ and pass the offset to another SFE to get the new feature $F_{d}$ that is used to estimate disparity.
The flow decoder and the disparity decoder differ only slightly in the prediction layers. At first, the CBAM [50] block is employed to enhance the features from both channel and spatial perspectives. Then the flow decoder uses Tanh as the activation function to be compatible with negative flow values, while the disparity decoder uses sigmoid since disparities are all positive. The features are fed into the two decoders that output bi-directional optical flows $F_{t\rightarrow 0}^{\Omega_{f}},F_{t\rightarrow 1}^{\Omega_{f}}$ and the disparity $\mathcal{D}^{\Omega_{f}\rightarrow\Omega_{e}}_{t}$. In summary, the input and output of AlignNet are represented as:
$$F_{t\rightarrow 0}^{\Omega_{f}},F_{t\rightarrow 1}^{\Omega_{f}},\mathcal{D}^{\Omega_{f}\rightarrow\Omega_{e}}_{t}=\operatorname{AlignNet}(I_{0}^{\Omega_{f}},I_{1}^{\Omega_{f}};E_{t\rightarrow 0}^{\Omega_{e}},E_{t\rightarrow 1}^{\Omega_{e}}).$$
(4)
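The only stated difference between the two decoder heads is the output activation: Tanh for signed flow values, sigmoid for non-negative disparities. A minimal sketch of that design choice (NumPy, illustrative only; the real heads are convolutional layers inside AlignNet):

```python
import numpy as np

def tanh_head(feat):
    """Flow head: tanh keeps outputs signed, so flow vectors can point
    in either direction along each axis."""
    return np.tanh(feat)

def sigmoid_head(feat):
    """Disparity head: sigmoid keeps outputs in (0, 1); since disparities
    are all positive, the output would be rescaled to the valid range."""
    return 1.0 / (1.0 + np.exp(-feat))
```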
IV-C SynNet
We divide our SEVFI-Net into two pathways, i.e., the flow-based and the syn-based. In the flow-based way, we utilize bi-directional flows to warp the boundary keyframes $I_{0}^{\Omega_{f}},I_{1}^{\Omega_{f}}$ separately to the target time $t$ and get the warped images $W_{0\rightarrow t}^{\Omega_{f}}$ and $W_{1\rightarrow t}^{\Omega_{f}}$ as:
$$\displaystyle W_{0\rightarrow t}^{\Omega_{f}}=\operatorname{Flow-Warp}(I_{0}^{\Omega_{f}};F_{t\rightarrow 0}^{\Omega_{f}}),$$
(5)
$$\displaystyle W_{1\rightarrow t}^{\Omega_{f}}=\operatorname{Flow-Warp}(I_{1}^{\Omega_{f}};F_{t\rightarrow 1}^{\Omega_{f}}).$$
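The Flow-Warp operator in Eq. (5) can be sketched as backward warping: each target pixel samples the source frame at its position displaced by the flow. A minimal NumPy version with nearest-neighbour sampling (the network would use differentiable bilinear sampling; this is an illustrative assumption, not the authors' code):

```python
import numpy as np

def flow_warp(img, flow):
    """Backward-warp img with a per-pixel flow field.

    img:  (H, W) source frame, e.g. I_0.
    flow: (H, W, 2) flow from the target time back to the source,
          stored as (dx, dy). Out-of-range samples are clamped to the border.
    """
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return img[src_y, src_x]
```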
However, optical flow estimation relies on the brightness-constancy assumption, which can cause inaccuracies in regions where objects move rapidly. Thus we use the syn-based pathway to compensate.
Since events are emitted upon brightness changes at high temporal resolution, they can record complex motions and help reconstruct intermediate images. But the original events in the stereo camera setup are not aligned with the frames. So we first gather the events $e=\{(x_{i},y_{i},t_{i},p_{i})\}_{i=1}^{N}$ within a time window $\Delta t$ around the target time $t$ and then compress them into a tensor, which can be formulated as:
$$\varepsilon_{t}=\sum_{i\in\{i|t-\frac{\Delta t}{2}\leq t_{i}\leq t+\frac{\Delta t}{2}\}}p_{i}\delta(x-x_{i},y-y_{i}).$$
(6)
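The accumulation in Eq. (6) can be sketched directly in NumPy (illustrative only; argument names are assumptions):

```python
import numpy as np

def event_tensor(xs, ys, ts, ps, t, dt, H, W):
    """Eq. (6): accumulate polarities of events within [t - dt/2, t + dt/2]
    into an H x W tensor. Each selected event adds its polarity p_i at its
    pixel location (y_i, x_i); np.add.at handles repeated pixels correctly."""
    eps = np.zeros((H, W), dtype=np.float32)
    mask = (ts >= t - dt / 2) & (ts <= t + dt / 2)
    np.add.at(eps, (ys[mask], xs[mask]), ps[mask])
    return eps
```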
Subsequently, we use the previously learned disparity to warp the event tensors to get new tensors that match the spatial positions with the frames:
$$\varepsilon_{t}^{w}=\operatorname{Disp-Warp}(\varepsilon_{t};\mathcal{D}_{t}^{\Omega_{f}\rightarrow\Omega_{e}}).$$
(7)
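Since the data are stereo-rectified, Disp-Warp in Eq. (7) reduces to a horizontal shift along the epipolar line by the per-pixel disparity. A minimal sketch (NumPy, nearest-neighbour; the sign convention of the shift is an assumption):

```python
import numpy as np

def disp_warp(eps, disp):
    """Warp the event tensor along the horizontal epipolar line by the
    per-pixel disparity D^{Omega_f -> Omega_e}, so the warped events match
    the spatial positions of the frames. Border samples are clamped."""
    H, W = eps.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs + disp).astype(int), 0, W - 1)
    return eps[ys, src_x]
```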
Then we concatenate the keyframes with the warped event tensors as the input ${Input}_{syn}=[I_{0}^{\Omega_{f}},I_{1}^{\Omega_{f}};\varepsilon_{t}^{w}]$ of the U-Net-like SynNet shown in Fig. 4, and finally we obtain a synthesized result:
$$S_{t}^{\Omega_{f}}=\operatorname{SynNet}({Input}_{syn}).$$
(8)
IV-D FusionNet and RefineNet
The flow-based method performs well in regions with small motions and stable brightness, while the syn-based method relies more on regions with sufficient events and can handle complex scenes. Therefore, we use FusionNet to combine the strengths of both methods and achieve better results. We take the warped and synthesized results together with the optical flows and target time $t$ as input, i.e., ${Input}_{fuse}=[W_{0\rightarrow t}^{\Omega_{f}},W_{1\rightarrow t}^{\Omega_{f}},S_{t}^{\Omega_{f}},F_{t\rightarrow 0}^{\Omega_{f}},F_{t\rightarrow 1}^{\Omega_{f}},t]$, adjusting the proportion of each result by using the optical flow and target time as weights. FusionNet is shown in Fig. 5; we apply the Softmax function at the end to generate three attention maps $\mathcal{M}=\{\mathcal{M}_{0},\mathcal{M}_{1},\mathcal{M}_{2}\}$ and then multiply $\mathcal{M}$ by the previous results $\mathcal{R}=\{W_{0\rightarrow t}^{\Omega_{f}},W_{1\rightarrow t}^{\Omega_{f}},S_{t}^{\Omega_{f}}\}$ to obtain the fused result $\tilde{I}_{t}^{\Omega_{f}}$:
$$\displaystyle\mathcal{M}=\operatorname{FusionNet}({Input}_{fuse}),$$
(9)
$$\displaystyle\tilde{I}_{t}^{\Omega_{f}}=\mathcal{M}\otimes\mathcal{R}=\mathcal{M}_{0}\cdot W_{0\rightarrow t}^{\Omega_{f}}+\mathcal{M}_{1}\cdot W_{1\rightarrow t}^{\Omega_{f}}+\mathcal{M}_{2}\cdot S_{t}^{\Omega_{f}}.$$
(10)
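The attention-weighted blend of Eqs. (9)-(10) can be sketched as a channel-wise softmax over three logit maps followed by a pixel-wise weighted sum (NumPy illustration; the logits would come from FusionNet's last layer):

```python
import numpy as np

def fuse(logits, w0, w1, s):
    """Eqs. (9)-(10): turn three logit maps of shape (3, H, W) into softmax
    attention maps M0, M1, M2 (summing to 1 at every pixel), then blend the
    two warped frames and the synthesized frame pixel-wise."""
    m = np.exp(logits - logits.max(axis=0, keepdims=True))  # stable softmax
    m = m / m.sum(axis=0, keepdims=True)
    return m[0] * w0 + m[1] * w1 + m[2] * s
```

Because the maps sum to one everywhere, the fused image is always a convex combination of the three candidate reconstructions.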
Finally, we use a Residual Dense Network (RDN) [51] as RefineNet, as shown in Fig. 2. The different results are concatenated and passed to it to learn a residual ${res}$, which is added to the fused result $\tilde{I}_{t}^{\Omega_{f}}$ to obtain the final result $\hat{I}_{t}^{\Omega_{f}}$:
$$\displaystyle res=\operatorname{RefineNet}$$
$$\displaystyle(W_{0\rightarrow t}^{\Omega_{f}},W_{1\rightarrow t}^{\Omega_{f}},S_{t}^{\Omega_{f}}),$$
(11)
$$\displaystyle\hat{I}_{t}^{\Omega_{f}}=$$
$$\displaystyle\tilde{I}_{t}^{\Omega_{f}}+res.$$
(12)
IV-E Loss Functions
Our training loss comprises three parts: reconstruction loss $\mathcal{L}_{rec}$, flow loss $\mathcal{L}_{flow}$, and disparity loss $\mathcal{L}_{disp}$.
The reconstruction loss $\mathcal{L}_{rec}$ models the reconstruction quality of the intermediate frames. We denote the ground truth frames by $I_{t}^{gt}$ and utilize the $\mathcal{L}_{1}$ loss to evaluate the similarity between the reconstruction and the ground truth. To achieve better visual quality, we add the perceptual loss [52] to $\mathcal{L}_{rec}$ that is formulated as:
$$\displaystyle\mathcal{L}_{rec}=$$
$$\displaystyle\operatorname{\mathcal{L}_{1}}(S_{t}^{\Omega_{f}},I_{t}^{gt})+\operatorname{\mathcal{L}_{1}}(\tilde{I}_{t}^{\Omega_{f}},I_{t}^{gt})+\operatorname{\mathcal{L}_{1}}(\hat{I}_{t}^{\Omega_{f}},I_{t}^{gt})$$
(13)
$$\displaystyle+0.1\operatorname{\mathcal{L}_{perceptual}}(\hat{I}_{t}^{\Omega_{f}},I_{t}^{gt}).$$
The flow loss $\mathcal{L}_{flow}$ consists of the photometric loss and the smoothness loss used in [53]. The former aims to minimize the difference in intensity between the warped image and the ground truth, while the latter regularizes the output flow by minimizing the flow difference between adjacent pixels in the horizontal, vertical, and diagonal directions.
$\mathcal{L}_{flow}$ is formulated as:
$$\displaystyle\mathcal{L}_{flow}=$$
$$\displaystyle\operatorname{\mathcal{L}_{photometric}}(W_{0\rightarrow t}^{\Omega_{f}},I_{t}^{gt})+0.1\operatorname{\mathcal{L}_{smoothness}}(F_{t\rightarrow 0}^{\Omega_{f}})$$
(14)
$$\displaystyle+\operatorname{\mathcal{L}_{photometric}}(W_{1\rightarrow t}^{\Omega_{f}},I_{t}^{gt})+0.1\operatorname{\mathcal{L}_{smoothness}}(F_{t\rightarrow 1}^{\Omega_{f}}).$$
The disparity loss $\mathcal{L}_{disp}$ models the prediction quality of disparity. We denote the ground truth disparity by $\mathcal{D}_{t}^{gt}$, and first optimize the prediction by the smooth $\mathcal{L}_{1}$ loss since it is more robust, which is defined as:
$$\displaystyle\operatorname{smooth}_{L_{1}}(x)$$
$$\displaystyle=\left\{\begin{array}[]{ll}0.5x^{2}&\text{ if }|x|<1,\\
|x|-0.5&\text{ otherwise. }\end{array}\right.$$
(15)
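Eq. (15) applied elementwise to the disparity error and averaged over pixels can be written as (NumPy sketch):

```python
import numpy as np

def smooth_l1(pred, target):
    """Eq. (15): quadratic for small errors (|x| < 1), linear otherwise,
    applied elementwise to the disparity error and averaged over pixels."""
    x = pred - target
    per_pixel = np.where(np.abs(x) < 1, 0.5 * x ** 2, np.abs(x) - 0.5)
    return per_pixel.mean()
```

The quadratic region gives smooth gradients near zero, while the linear region keeps large outliers from dominating the loss, which is why the paper calls it more robust.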
We also incorporate the edge-aware disparity smoothness loss $\mathcal{L}_{ds}$ used in [54] to promote the local smoothness of disparities by computing the cost using the gradients of both disparities and frames, which is represented as:
$$\mathcal{L}_{ds}=\frac{1}{N}\sum_{i,j}|\partial_{x}\mathcal{D}_{t,i,j}^{\Omega_{f}\rightarrow\Omega_{e}}|e^{-||\partial_{x}I_{i,j}^{gt}||}+|\partial_{y}\mathcal{D}_{t,i,j}^{\Omega_{f}\rightarrow\Omega_{e}}|e^{-||\partial_{y}I_{i,j}^{gt}||},$$
(16)
where $i,j$ are pixel coordinates, $x,y$ represent two directions, and $\partial D,\partial I$ are disparity gradients and image gradients.
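Eq. (16) can be sketched with finite differences: disparity gradients are penalized, but the penalty is down-weighted wherever the image itself has strong edges (NumPy illustration, assuming a single-channel image):

```python
import numpy as np

def edge_aware_smoothness(disp, img):
    """Eq. (16): |grad disp| weighted by exp(-|grad img|), summed over the
    x and y directions. Large image gradients (real edges) suppress the
    smoothness penalty, so disparity discontinuities are allowed there."""
    dx_d = np.abs(np.diff(disp, axis=1))
    dy_d = np.abs(np.diff(disp, axis=0))
    wx = np.exp(-np.abs(np.diff(img, axis=1)))
    wy = np.exp(-np.abs(np.diff(img, axis=0)))
    return (dx_d * wx).mean() + (dy_d * wy).mean()
```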
The disparity loss can be expressed as:
$$\displaystyle\mathcal{L}_{disp}=\operatorname{smooth}_{L_{1}}(\mathcal{D}_{t}^{\Omega_{f}\rightarrow\Omega_{e}},\mathcal{D}_{t}^{gt})+0.1\mathcal{L}_{ds}.$$
(17)
The total training loss combines all of the above terms:
$$\displaystyle\mathcal{L}_{total}$$
$$\displaystyle=\lambda_{r}\mathcal{L}_{rec}+\lambda_{f}\mathcal{L}_{flow}+\lambda_{d}\mathcal{L}_{disp}.$$
(18)
In practice, we adjust the balancing factors for training on different datasets.
V Stereo Event-Intensity Dataset
There are two public stereo event datasets, DSEC[5] and MVSEC[6]. DSEC is a large-scale outdoor stereo event dataset especially for driving scenarios, while MVSEC captures a single indoor scene and multi-vehicle outdoor driving scenes. But for the VFI task, these two datasets have limited scene diversity and cannot effectively evaluate the algorithm’s generalization ability across different scenes.
Additionally, these datasets record depth using LiDAR, which provides only sparse depth values: typically, only around $10\%$ of the depth values are valid, while the remaining $90\%$ are empty [55]. Therefore, using LiDAR data as a reference for VFI evaluation is of limited value.
To address this problem, we build a stereo visual acquisition system containing a SilkyEvCam event camera and an Intel Realsense D455 RGB-D camera with a baseline of 8cm as shown in Fig. 6. We collect a new SEID dataset that captures various indoor and outdoor scenes with varying depths and complex motions.
The RGB-D camera provides frames and depth maps that are synchronized in both temporal and spatial domains. We collect the data from different sensors through the ROS system and use the UTC clock of the control computer to timestamp the data for soft synchronization. In order to eliminate possible time delays, we manually calibrate the timestamps of events to achieve precise time synchronization.
To estimate the intrinsic and extrinsic parameters, we utilize the event-based image reconstruction algorithm E2VID [32] to generate high-quality images for calibration. To achieve good calibration results, we use the MATLAB calibration toolbox to calibrate the intrinsic parameters of the two cameras first, and then use the obtained intrinsic parameters to calibrate the extrinsic parameters between the two cameras. Afterward, we employ the calibrated parameters to perform stereo rectification on the data, eliminating the deviations in the $y$ and $z$ directions. Furthermore, we remove depth values of less than one meter, which are unreliable for the RGB-D camera, and then convert the depths to disparities using the calibrated parameters.
Our SEID dataset contains 34 indoor and outdoor sequences that are summarized in Tab. I. The two cameras share a similar field of view (FoV), and both events and frames have a resolution of $640\times 480$. The dataset features frames at 60 FPS, which is higher than the previous stereo datasets as shown in Tab. II. High spatial resolution helps capture richer texture details, while a high frame rate preserves more temporal and motion information, providing more valid references at various target timestamps for the VFI task. In addition, we use an RGB-D camera to capture the depths of the scenes, which provides dense disparity maps as shown in Fig. 7 different from LiDAR data. This offers more reliable reference values for cross-modal stereo-matching tasks.
Our SEID dataset captures richer motions and varying depths, providing a more diverse range of depth variations and scene dynamics.
We calculate the average values for each disparity map and further compute the average disparity for each sequence. Then, we conduct a statistical analysis of the disparity distributions in the three datasets, as shown in Fig. 8. The DSEC and MVSEC datasets exhibit a concentration of disparity values within a narrow range, indicating their limited scene diversity. In contrast, our SEID dataset captures a wider range of scenes with richer variations in depths, resulting in a broader distribution of disparities. Therefore, SEID can be used to validate the impact of different disparity conditions on the performance of E-VFI. This enables us to fully exploit the potential of event cameras in stereo tasks and harness their capabilities.
VI Experiments and Analysis
In this section, we evaluate and analyze the proposed SEVFI-Net. In Sec. VI-A, we first describe the experimental settings, including datasets, training details, and compared methods. Then we compare
the VFI performance with the state-of-the-art in Sec. VI-B and analyze the impact of disparity on the E-VFI task. After that, we evaluate the performance of stereo matching with existing cross-modal stereo matching methods in Sec. VI-C. Finally, the importance of the subnets in our SEVFI-Net is discussed in Sec. VI-D.
VI-A Experimental Settings
VI-A1 Datasets
We use DSEC[5], MVSEC[6], and our SEID datasets for training and evaluation.
DSEC.
We select 23 daylight event-frame sequences for training and 10 for testing, using the data from the left intensity camera and the right event camera. Both events and frames are rectified to eliminate distortions and deviations, and the frames are resized to 640 $\times$ 480 pixels to keep the same resolution as the events.
MVSEC.
We use the “indoor$\_$flying” scene data to verify the performance of our model. We choose 3 sequences as the training set and 1 sequence as the testing set, also with the left frames and right events. The extrinsic parameters are used to convert the provided depths to disparities, both frames and events are undistorted and rectified with the calibration parameters.
SEID (Ours).
We divide SEID into two parts, i.e., the training set containing 28 sequences and the testing set containing 6 sequences. Each part consists of six subsets as illustrated in Tab. I. The events, frames, and disparities are undistorted and rectified using the calibration parameters.
VI-A2 Training Details
We implement the proposed SEVFI-Net in PyTorch [56] and train three models separately on the DSEC, MVSEC, and SEID datasets. To enhance the robustness of SEVFI-Net, we augment the data by randomly cropping the samples into $224\times 224$ patches. The network is trained using the Adam optimizer [57] with $\beta_{1}=0.9$ and $\beta_{2}=0.999$ for 100 epochs.
The learning rate is initialized at $3\times 10^{-4}$ and decayed by 0.8 every 10 epochs. As different datasets record different scenes with varying disparities, we adjust the balancing parameters in Eq. 18 for training.
We set $[\lambda_{r},~{}\lambda_{f},~{}\lambda_{d}]=[2,0.005,0.008]$ for training on DSEC, $[\lambda_{r},~{}\lambda_{f},~{}\lambda_{d}]=[1,0.01,0.001]$ for MVSEC, and $[\lambda_{r},~{}\lambda_{f},~{}\lambda_{d}]=[2,0.002,0.005]$ for SEID.
Training is performed on 2 NVIDIA TITAN RTX3090 GPUs.
VI-A3 Benchmark
We compare SEVFI-Net with one open-sourced E-VFI approach, i.e., Time Lens [7], and four state-of-the-art F-VFI methods: DAIN [20], RIFE [13], RRIN [16], and Super SloMo [58]. The metrics of Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) [59] are used for quantitative evaluation; higher values of PSNR and SSIM indicate better performance.
Since the datasets have different frame rates, we adaptively evaluate each with different skip values to ensure similar inter-frame motion durations.
Specifically, we implement the experiments under the setting of 1-, 3-, and 5-frame skips for sequences in DSEC, 3-, 5-, and 7-frame skips in MVSEC, and 5-, 7-, and 9-frame skips in SEID.
VI-B Comparisons of Video Frame Interpolation
In this subsection, comparisons of our SEVFI-Net to the state-of-the-art methods are made qualitatively and quantitatively in multi-frame reconstruction. After that, the superiority and robustness of SEVFI-Net compared to other E-VFI methods in dealing with disparity changes are also verified.
VI-B1 Results of F-VFI and E-VFI
In this part, we analyze the limitations of the F-VFI methods and identify the challenges encountered by the E-VFI methods when handling stereo event-frame data. Finally, we highlight the effectiveness of our method. Qualitative and quantitative results are provided to show the superiority of our SEVFI-Net.
Qualitative comparisons.
We select two samples from the DSEC and MVSEC datasets to compare the visual performance of each VFI algorithm as illustrated in Figs. 9 and 10.
As displayed in Figs. 9 and 10, the interpolated frames of the F-VFI algorithms fail in dealing with fast and complex motions, leading to blurred or distorted edges in the results.
Although they can roughly reconstruct the scene, they estimate inter-frame motion using the input keyframes and are only good at handling simple linear motion. When the scene becomes complex or the motion is intense, the F-VFI algorithms often fail and produce unrealistic structures and textures. This can result in significant degradation of the interpolated frames.
Compared to F-VFI methods, existing E-VFI methods can estimate a more precise inter-frame motion model with the help of events. However, the cross-modal misalignment in stereo camera setups severely disturbs their performance and causes significant distortions and artifacts during the frame interpolation process as illustrated in Figs. 9 and 10.
This is because existing E-VFI algorithms often rely on the assumption of pixel-level alignment between events and frames, which poses challenges in the context of stereo camera setups where capturing data from different modalities and aligning them effectively can be difficult.
In contrast, our proposed SEVFI-Net achieves superior visual performance.
Compared to existing methods, our method specifically addresses the challenges posed by stereo camera setups and explicitly handles the misalignment between the two modalities.
Benefiting from the multiple encoders and feature aggregation modules, our SEVFI-Net can process events and frames separately, enabling us to effectively handle biases and discrepancies between the modalities and achieve data fusion, leading to the best visual performance.
Quantitative comparisons.
Quantitative comparisons are given in Tab. III.
We first notice that Time Lens, despite its utilization of events to estimate motion information and the integration of synthesis-based and flow-based approaches, exhibits significant performance degradation compared to F-VFI methods when dealing with real-world stereo datasets. This degradation can be attributed to the absence of effective data alignment in Time Lens.
We then compare our SEVFI-Net with Time Lens. SEVFI-Net achieves significant improvements over Time Lens, with up to 3.98 dB and 0.0712 enhancements in terms of PSNR and SSIM, respectively.
As shown in Figs. 9 and 10, SEVFI-Net showcases superior reconstruction results with fewer artifacts and distortions compared to Time Lens.
This demonstrates the effectiveness of our method in mitigating distractions caused by the cross-modal disparity.
Moreover, we compare the proposed SEVFI-Net with other F-VFI methods not affected by the parallax. The metrics illustrate that the SEVFI-Net outperforms these state-of-the-art F-VFI methods, which means SEVFI-Net removes the artifacts caused by the cross-modal parallax, achieving precise non-linear motion estimation from misaligned events and frames.
VI-B2 Results of Multi-frame Reconstruction
In this part, we compare the performance of our SEVFI-Net with the existing methods in multi-frame reconstruction. We provide qualitative and quantitative results to show the advantages of our method.
SEVFI-Net exhibits unique advantages when handling non-linear motion, as shown in Figs. 11 and 12, giving the best continuous multi-frame reconstruction and excellent ability in reconstructing dynamic scenes, with motion patterns closely resembling the ground truth. We also conduct sequence-interpolation experiments on MVSEC and our SEID and calculate the average PSNR and SSIM at each frame index separately, as shown in Fig. 13; the proposed SEVFI-Net achieves the highest metrics at every frame index.
This is because the high temporal resolution of events enables us to effectively model non-linear and fast motions and generate interpolated frames that are visually appealing while preserving the natural motion characteristics of the scene.
VI-B3 Results of Changing Disparity
In addition, we evaluate the impact of disparity on E-VFI results, as illustrated in Fig. 14. We perform experiments on our SEID dataset with the 5-frame skip for each subset, where seq1-seq6 in the subfigures represent ["Pedestrians", "Basketball", "Cars", "Square", "Checkerboard", "Indoor"] in our SEID, respectively.
We calculate the PSNR improvement of our method over Time Lens under different disparity conditions to demonstrate its stability and robustness against disparity changes. The data misalignment leads to performance degradation for Time Lens, especially as the average disparity value increases. In contrast, our algorithm maintains a relatively stable performance across different disparity values.
Therefore, as the average disparity increases, the improvement in PSNR also shows an overall increasing trend, indicating that our method is more stable and robust in scenes with disparity variations.
Overall, our method not only addresses the challenges of the stereo camera setup and misaligned modalities but also achieves the best results in terms of frame interpolation and maintains robustness even in the presence of large depth variations.
VI-C Comparisons of Stereo Matching
In this subsection, we evaluate the stereo-matching performance of our method with existing cross-modal stereo-matching methods. The qualitative and quantitative experiments demonstrate the superiority of our method.
As formulated in Eq. 4, our SEVFI-Net is capable of not only producing interpolated frames but also estimating the corresponding disparity results. To validate its effectiveness in disparity estimation, we conduct experiments on the DSEC[5] with a 3-frame skip setting and on the MVSEC[6] and SEID with a 5-frame skip setting.
To the best of our knowledge, our SEVFI-Net is the first work that combines stereo matching with video frame interpolation.
In our experimental setup, we only have two keyframes and the event stream between them, which cannot be directly applied to existing event-frame stereo-matching algorithms. Therefore, we first choose two representative VFI algorithms, i.e., RIFE[13] and Time Lens[7] to generate intermediate frames and then input them along with the event stream into two existing stereo matching algorithms, i.e., HSM[3] and SSIE[1], to compare their performance with our method.
We utilize the widely used metrics of end-point error (EPE) and 1-pixel, 2-pixel, and 3-pixel error, where EPE is the mean disparity error in pixels and the $c$-pixel error is the percentage of pixels whose disparity error is larger than $c$ pixels.
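These metrics can be computed as follows (NumPy sketch under the definitions above):

```python
import numpy as np

def disparity_metrics(pred, gt, thresholds=(1, 2, 3)):
    """End-point error (mean absolute disparity error, in pixels) and the
    c-pixel error (percentage of pixels whose error exceeds c pixels)."""
    err = np.abs(pred - gt)
    epe = err.mean()
    pix_err = {c: 100.0 * (err > c).mean() for c in thresholds}
    return epe, pix_err
```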
Compared to the other methods, ours achieves the best visual performance and gains an $86.6\%$ decrease in EPE and a $71.3\%$ decrease in the 3-pixel error, as shown in Tab. IV and Fig. 15.
This is because the combination of the VFI and stereo-matching methods can be influenced by the quality of frame interpolation. The accuracy and smoothness of the interpolated frames play a crucial role in achieving high-quality results. Additionally, existing cross-modal stereo-matching algorithms are typically designed based on the assumption of static scenes and moving cameras, and may struggle with complex scenes, leading to significant performance degradation.
In contrast, our SEVFI-Net excels at matching events and images effectively, allowing for the recovery of accurate depth information in complex scenes. Importantly, our method overcomes the limitations of traditional methods constrained by frame rates by outputting disparities through interpolation. This approach enables a more flexible and continuous representation of depth information, which greatly enhances the overall performance of the system. The metrics shown in Tab. IV demonstrate that our SEVFI-Net outperforms existing methods by a large margin.
VI-D Ablation Study
In this subsection, we investigate the contribution of each subnetwork in our SEVFI-Net to the interpolation results. As illustrated in Tab. V and Fig. 16, we compare the flow-based results, i.e., $W_{0\rightarrow t}^{\Omega_{f}},W_{1\rightarrow t}^{\Omega_{f}}$, the syn-based result $S_{t}^{\Omega_{f}}$, the fused result $\tilde{I}_{t}^{\Omega_{f}}$, and the final refined result $\hat{I}_{t}^{\Omega_{f}}$ on the DSEC, MVSEC, and SEID datasets.
As displayed in Fig. 16(a) and (b), the flow-based results bring artifacts and distortions when the target time is far from the input keyframes,
while the syn-based results can roughly reconstruct inter-frame motion, as shown in Fig. 16(c).
However, due to the limited information in the warped event data, which only includes the surrounding information of the target frame, it is challenging to effectively incorporate the underlying temporal information and achieve a high level of detail in the reconstruction.
The results in Tab. V demonstrate that the utilization of attention-based fusion maps allows us to harness the advantages of both pathways and enhance the quality of the reconstruction. We observe an average improvement of 2.41 dB in terms of PSNR, and the visual quality is also enhanced, as illustrated in Fig. 16(d).
Furthermore, to improve the fused results by addressing artifacts and inconsistencies, we design RefineNet at the end, which computes residual maps capturing the differences between the fused result and the ground truth. Tab. V shows an improvement of 0.69 dB in terms of PSNR, and Fig. 16(e) achieves the best visual quality.
VII Conclusion
In this paper, we tackle the stereo event-based video frame interpolation task with a novel network named SEVFI-Net.
Considering the problems of spatial misalignment and modality differences present in stereo camera setups, we design a core FAM, establishing spatial correspondence and achieving data fusion using features extracted from events and frames separately.
Additionally, we utilize attention maps to combine flow-based and syn-based results for better fusion. Our SEVFI-Net can generate high-quality intermediate frames and cross-modal disparities with spatially misaligned events and frames.
We also build a stereo visual acquisition system and collect a new SEID containing diverse complex scenes.
Extensive experiments demonstrate that the proposed SEVFI-Net achieves state-of-the-art performance on real-world stereo event-intensity datasets.
Vortex shedding and drag in dilute Bose-Einstein condensates
T. Winiecki${}^{1}$, B. Jackson${}^{1}$, J. F. McCann${}^{2}$, and C. S. Adams${}^{1}$
${}^{1}$Dept. of Physics, University of Durham, Rochester Building,
South Road, Durham, DH1 3LE, England. UK
${}^{2}$Dept. of Applied Mathematics and Theoretical Physics,
Queen’s University, Belfast, BT7 1NN, Northern Ireland. UK
(December 8, 2020)
Abstract
Above a critical velocity, the dominant mechanism of energy transfer between
a moving object and a dilute Bose-Einstein condensate is vortex formation.
In this paper, we discuss the critical velocity for vortex formation and
the link between vortex shedding and drag in both homogeneous and
inhomogeneous condensates. We find that at supersonic velocities
sound radiation also contributes significantly to the drag force.
pacs: 03.75.Fi, 67.40.Vs, 67.57.De
1 Introduction
One enticing consequence of the discovery of Bose-Einstein condensation
(BEC) in dilute alkali vapours [1] is the potential for
refining our understanding of quantum fluids. In particular, the dilute Bose
gas provides a near-ideal testing ground for elucidating the role of vortices
in the onset of dissipation in superfluids. Recent experiments
have demonstrated the formation of quantized vortices by rotational
excitation of one [2] and
two-component condensates [3], in analogy with the famous
‘rotating bucket’ experiments in liquid helium [4]. In addition,
by moving a far-off resonant laser beam through a condensate,
Raman et al. [5] measure a heating rate which suggests a
critical velocity for dissipation characteristic of vortex shedding.
The attractive feature of experiments on dilute Bose gases
is that the weakly-interacting limit permits quantitative
comparison between theory and experiment. The theoretical
description is based on the Gross-Pitaevskii equation, a form
of non-linear Schrödinger equation (NLSE) [6]. In the NLSE model,
the critical velocity for vortex shedding by a cylindrical object was
found to be, $v_{\rm c}\sim 0.4c$,
where $c$ is the speed of sound [7]. In a trapped
(inhomogeneous) condensate the critical velocity is
lower due to the reduction in density, and hence
sound speed, towards the edge of the condensate [5, 8].
In this paper, we study vortex shedding due to the motion of
an object through homogeneous and trapped Bose-Einstein condensates
using the NLSE model.
The hydrodynamical properties of a NLSE fluid are reviewed in Sec. 2.
In Sec. 3 and 4 we discuss the critical velocity for vortex nucleation and the link between vortex formation and drag in homogeneous condensates.
In Sec. 5 we consider the properties of trapped
condensates and highlight the differences from the
homogeneous case.
2 Quantum fluid mechanics
At low temperatures and low densities, atoms interact by elastic $s$-wave scattering, and collisions can be parameterised by a single variable, the scattering length, $a$.
For atoms of mass $m$, the wavefunction of the condensate,
$\psi(\mbox{\boldmath$r$},t)$, is given by the
solution of the time-dependent nonlinear Schrödinger (Gross-Pitaevskii) equation:
$$i\hbar\partial_{t}\psi(\mbox{\boldmath$r$},t)=\left[-{\hbar^{2}\over 2m}\nabla^{2}+V(\mbox{\boldmath$r$},t)+g|\psi(\mbox{\boldmath$r$},t)|^{2}\right]\psi(\mbox{\boldmath$r$},t)~{},$$
(1)
where the wavefunction is normalised to the number of atoms, $N$,
the coefficient of the non-linear term, $g=4\pi\hbar^{2}a/m$,
describes the interactions within the fluid, and
$V(\mbox{\boldmath$r$},t)$ represents external potentials arising from the trap
and any moving obstacle.
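As an aside for readers who wish to experiment numerically, equation (1) can be integrated with the standard split-step Fourier (Strang splitting) method. The sketch below is our illustration, not the code used in this paper: it works in 1D, in units where $\hbar=m=1$, and the grid parameters are arbitrary.

```python
import numpy as np

def split_step_gpe(psi, V, g, dx, dt, steps):
    """Evolve the 1D NLSE  i dpsi/dt = [-(1/2) d2/dx2 + V + g|psi|^2] psi
    (units hbar = m = 1) by Strang splitting: half a nonlinear/potential
    phase rotation, a full kinetic step in Fourier space, then another half."""
    k = 2 * np.pi * np.fft.fftfreq(psi.size, d=dx)   # angular wavenumber grid
    kinetic = np.exp(-0.5j * dt * k**2)              # exp(-i dt k^2 / 2)
    for _ in range(steps):
        psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))
        psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))
    return psi

# Illustrative run: a Gaussian wavepacket with no external potential.
x = np.linspace(-20, 20, 256, endpoint=False)
dx = x[1] - x[0]
psi0 = np.exp(-x**2).astype(complex)
psi1 = split_step_gpe(psi0, np.zeros_like(x), g=1.0, dx=dx, dt=0.01, steps=100)
norm0 = np.sum(np.abs(psi0)**2) * dx
norm1 = np.sum(np.abs(psi1)**2) * dx
```

Each sub-step is a pure phase rotation or a unitary Fourier transform, so the particle number $\int|\psi|^{2}\,{\rm d}x$ is conserved to machine precision, which makes norm conservation a convenient correctness check.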
2.1 Fluid equations
The link between the nonlinear Schrödinger equation (1)
and the equivalent equations of fluid mechanics is well known
[4, 9]. However, some points concerning
condensate flow near penetrable objects are less familiar. In this section
we gather some of the key concepts and equations
that figure in the discussion of our simulations.
Classical (isentropic) fluid mechanics is based on two coupled
differential equations: one describing the transport of mass, the other the transport
of momentum [10]. The relevant quantum variables can be constructed from the wavefunction:
the mass density $\rho$ and momentum current density $J$ are defined as,
$$\rho~{}\equiv~{}m\psi^{*}\psi\ \ \ {\rm and}\ \ \ J_{k}\equiv(\hbar/2i)(\psi^{*}\partial_{k}\psi-\psi\partial_{k}\psi^{*})~{},$$
(2)
where the index $k$ denotes the vector component.
The fluid velocity
is defined by $v_{k}\equiv J_{k}/\rho$, or equivalently in terms
of the phase, $\phi$, of the wavefunction,
$v_{k}\equiv(\hbar/m)\partial_{k}\phi$. Clearly, the velocity field is a potential flow; however, it is also compressible and, as will be seen, can support circulation (vorticity).
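The definitions of $\rho$, $J_{k}$ and $v_{k}$ translate directly into a few lines of code. The sketch below (our illustration, in units $\hbar=m=1$) evaluates them with finite differences and recovers the expected $v=\hbar k_{0}/m$ for a plane wave $\psi=\sqrt{n_{0}}\,e^{ik_{0}x}$.

```python
import numpy as np

hbar, m = 1.0, 1.0   # work in units where hbar = m = 1

def fluid_variables(psi, dx):
    """Density rho = m psi* psi, current J = (hbar/2i)(psi* dpsi - psi dpsi*),
    and velocity v = J / rho, for a 1D wavefunction sampled with spacing dx."""
    dpsi = np.gradient(psi, dx)
    rho = m * np.abs(psi)**2
    J = ((hbar / 2j) * (np.conj(psi) * dpsi - psi * np.conj(dpsi))).real
    return rho, J, J / rho

# Plane wave psi = sqrt(n0) exp(i k0 x): expect rho = m n0 and v = hbar k0 / m.
n0, k0 = 2.0, 0.5
x = np.linspace(0, 4 * np.pi, 1000, endpoint=False)
psi = np.sqrt(n0) * np.exp(1j * k0 * x)
rho, J, v = fluid_variables(psi, x[1] - x[0])
```

The small deviations from $\hbar k_{0}/m$ come entirely from the finite-difference derivative and shrink with the grid spacing.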
The conservation of mass (probability), i.e., the
continuity equation, follows from the definition of $\rho$ and
equation (1)
$$\partial_{t}\rho+\partial_{k}J_{k}=0~{}.$$
(3)
The conservation of momentum equation may be found by
considering the rate of change of the momentum current density,
$$\partial_{t}J_{k}+\partial_{j}T_{jk}+\rho\partial_{k}(V/m)=0~{},$$
(4)
where the momentum flux density tensor takes the form [7, 11],
$$T_{jk}=\frac{\hbar^{2}}{4m}(\partial_{j}\psi^{*}\partial_{k}\psi-\psi^{*}\partial_{j}\partial_{k}\psi+{\rm c.c.})+\frac{g}{2}\delta_{jk}|\psi|^{4}~{}.$$
(5)
This can be rewritten as,
$$T_{jk}=\rho v_{j}v_{k}-\sigma_{jk}~{},$$
(6)
where the stress tensor $\sigma_{jk}$ is given by,
$$\sigma_{jk}=-{\textstyle{1\over 2}}\delta_{jk}g(\rho/m)^{2}+(\hbar/2m)^{2}\rho\partial_{j}\partial_{k}\ln\rho~{}.$$
(7)
2.2 Pressure, sound and drag
The form of equations (3), (4), and (6) is identical
to those for classical fluid flow [10], the difference emerges from the
nature of the stress, equation (7). A classical ideal
fluid is characterised by $\sigma_{jk}=0,$ for all $j,k$.
In a viscous fluid, the shear stress ($\sigma_{jk},j\neq k$)
is produced by velocity gradients between neighbouring streams
such that,
$\sigma_{jk}=\eta(\partial_{j}v_{k}+\partial_{k}v_{j})$, where
$\eta$ is the coefficient of viscosity. This creates a frictional force
which gives rise to energy loss. In a pure dilute Bose-Einstein condensate there is no
frictional viscosity, but a shear stress arises from density gradients,
the second term in equation (7).
This property gives rise to the possibility of vortex formation and drag without
viscosity.
The pressure (normal stress, $-\sigma_{jk},j=k$) within the
quantum fluid takes the simple form,
$$p={\textstyle{1\over 2}}g(\rho/m)^{2}-(\hbar/2m)^{2}\rho\nabla^{2}\ln\rho~{}.$$
(8)
The second term, called the quantum pressure, is weak in homogeneous
regions of the fluid, that is, far from obstacles or
boundaries, vortex lines or shocks. The essential difference
between interacting and noninteracting (ideal) fluids is
the existence of interaction pressure which supports sound propagation.
In the bulk of the fluid, where the quantum pressure is negligible,
the speed of sound is [9, 10]
$$c=\sqrt{\partial p/\partial\rho}=(g\rho/m^{2})^{1\over 2}~{}.$$
(9)
The force on an obstacle moving through a condensate can be calculated from the rate
of momentum transfer to the fluid.
By integrating Eq. (4), one finds that
the $k$-th component of the force,
$$F_{k}=\partial_{t}\int_{\Omega}{\rm d}\Omega J_{k}=-\int_{S}{\rm d}S~{}n_{j}T_{jk}-\int_{\Omega}{\rm d}\Omega~{}\rho\partial_{k}(V/m)~{},$$
(10)
where $S$ is the surface of the object or control surface within the
fluid [11], $\Omega$ is the volume enclosed by $S$,
$n_{j}$ is the $j$-component of the normal vector to $S$, and
${\rm d}S$ is a surface element.
The second term on the right-hand side can be likened to the buoyancy
of the fluid. In the case of homogeneous flow past an
impenetrable object (Sec. 3 and 4), the wavefunction
vanishes on the object surface and the potential is uniform elsewhere,
therefore, only the first term contributes. Conversely, for a penetrable
object in a
trapped condensate (Sec. 5), $\Omega$ may be chosen to encompass the
entire fluid, and the first term is negligible compared to the second.
2.3 Quantisation of circulation
The quantum Euler equation follows from combining the equations
describing the conservation of mass
and momentum, (3) and (4), along with the identity,
$$\rho^{-1}\partial_{j}[\rho\partial_{j}\partial_{k}\ln\rho]=2\partial_{k}[\rho^{-{1\over 2}}\partial_{j}\partial_{j}\rho^{1\over 2}]~{},$$
(11)
allowing the momentum equation to be written as,
$$\partial_{t}v_{k}+v_{j}\partial_{j}v_{k}+\partial_{k}[g\rho/m^{2}-(\hbar^{2}/2m)\rho^{-{1\over 2}}\partial_{j}\partial_{j}\rho^{1\over 2}+V/m]=0~{}.$$
(12)
The conservation of energy (Bernoulli equation) then follows
as an integral of Euler’s equation,
or more directly from the real part of equation (1):
$$\hbar\partial_{t}\phi+{\textstyle{1\over 2}}mv^{2}+g\rho/m-(\hbar^{2}/2m)\rho^{-{1\over 2}}\nabla^{2}\rho^{1\over 2}+V=0~{}.$$
(13)
Perhaps the most significant quantum effect on the mechanics of the
fluid is the quantisation of angular momentum.
The circulation is given by,
$$\Gamma=\oint{\rm d}\mbox{\boldmath$r$}\cdot\mbox{\boldmath$v$}=(\hbar/m)2\pi s\ \ \ \ \ s=0,1,2,\dots$$
(14)
where the closed contour joins fluid particles.
The conservation of circulation (Kelvin’s theorem) follows from
Euler’s equation (12) and states that
the circulation around a closed ‘fluid’ contour does not change in time.
This means that within the fluid, vortex lines must be created in pairs
which emerge from a point. The exception is at boundaries, where the wavefunction is clamped
to zero and no closed fluid loop can be drawn, e.g., at the surface
of an impenetrable object [10]
or from the edge of a trapped condensate [13].
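The quantisation (14) is easy to verify numerically: sample the wavefunction around a closed loop, wrap each phase difference into $(-\pi,\pi]$, and sum. This is also a standard way to locate vortices in NLSE simulations. The snippet below is our illustrative sketch (units $\hbar=m=1$), not code from the paper.

```python
import numpy as np

hbar, m = 1.0, 1.0

def circulation(psi_loop):
    """Circulation Gamma = (hbar/m) * sum of phase increments around a closed
    loop of wavefunction samples, with each increment wrapped into (-pi, pi]."""
    phase = np.angle(psi_loop)
    dphi = np.diff(np.concatenate([phase, phase[:1]]))   # close the loop
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi           # unwrap 2*pi jumps
    return (hbar / m) * dphi.sum()

# Singly quantized vortex psi ~ exp(i*theta): expect Gamma = 2*pi*hbar/m (s = 1).
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
gamma = circulation(np.exp(1j * theta))
# Vortex-free state psi = const: expect Gamma = 0 (s = 0).
gamma0 = circulation(np.ones(100, dtype=complex))
```

Because the wrapped phase sum can only change by multiples of $2\pi$, the computed circulation is robust to noise in the samples as long as neighbouring phase jumps stay below $\pi$.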
2.4 Units
For a homogeneous fluid flow, where the external potential is due to the obstacle only,
it is convenient to rescale length and velocity in terms of the healing length $\xi=\hbar/\sqrt{mn_{0}g}$
and the asymptotic speed of sound $c=\sqrt{n_{0}g/m}$, respectively.
In this case, equation (1) becomes
$${\rm i}\partial_{t}\tilde{\psi}(\mbox{\boldmath$r$},t)=\left[-\textstyle{1\over 2}\nabla^{2}+V(\mbox{\boldmath$r^{\prime}$})+|\tilde{\psi}(\mbox{\boldmath$r$},t)|^{2}\right]\tilde{\psi}(\mbox{\boldmath$r$},t)~{},$$
(15)
where $\tilde{\psi}=\psi/\sqrt{n_{0}}$ and $n_{0}$ is the number density far from the
object. The force per unit length is measured in units of $\hbar\sqrt{n_{0}^{3}g/m}$.
Unless otherwise stated we use these units throughout.
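Note that these scalings obey the identity $c\,\xi=\hbar/m$, which provides a quick numerical sanity check. The parameter values below are our own illustrative choices (roughly appropriate for a ${}^{87}$Rb condensate) and are not taken from this paper.

```python
import math

hbar = 1.0545718e-34   # J s
m = 1.443e-25          # kg    (roughly one 87Rb atom; illustrative value)
a = 5.3e-9             # m     (s-wave scattering length; illustrative value)
n0 = 1.0e20            # m^-3  (number density far from the object; illustrative)

g = 4 * math.pi * hbar**2 * a / m      # interaction strength g = 4 pi hbar^2 a / m
xi = hbar / math.sqrt(m * n0 * g)      # healing length  xi = hbar / sqrt(m n0 g)
c = math.sqrt(n0 * g / m)              # bulk sound speed c = sqrt(n0 g / m)
```

For these numbers the healing length comes out at a few tenths of a micron and the sound speed at a few mm/s, the typical scales for dilute alkali condensates.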
For steady flow, in which $v$ and $n$ are independent of time and
$\partial_{t}\phi=-\mu$, where $\mu$ is the chemical potential,
the Bernoulli equation takes the form
$$n-{\textstyle{1\over 2}}(\sqrt{n})^{-1}\nabla^{2}\sqrt{n}+V+{\textstyle{1\over 2}}v^{2}={\rm constant}~{}.$$
(16)
3 The critical velocity
The critical velocity for the breakdown of superfluidity is
often expressed in terms of the Landau condition
[9], $v_{\rm c}=(\epsilon/p)_{\rm min}$,
where $\epsilon$ and $p$ are the energy and momentum of elementary
excitations in the fluid, and $v_{\rm c}$ is the critical flow velocity in the
bulk of the fluid. In the dilute Bose gas, the long-wavelength
elementary excitations are sound waves and the Landau criterion
predicts that $v_{\rm c}=c$. However, for flow past an object,
the local velocity near the obstacle, $v$, can become supersonic even
when the flow velocity, $U$, is subsonic. Consequently, the critical flow
velocity, $v_{\rm c}$, at which laminar flow becomes unstable occurs
at a fraction of the sound speed.
An estimate of $v_{\rm c}$ may be found following the argument suggested by
Frisch et al. [7]. For an incompressible flow past a
solid object, Bernoulli’s equation (16) (neglecting the
quantum pressure) has the simple form,
$$n(v)+{\textstyle{1\over 2}}v^{2}=1+{\textstyle{1\over 2}}U^{2}~{},$$
(17)
where $U$ is the background flow velocity.
The maximum velocity, which occurs at the equator of the object, is
$v={\textstyle{3\over 2}}U$ for a sphere (or $v=2U$ for a cylinder), therefore,
${\textstyle{1\over 2}}({\textstyle{9\over 4}}-1)U^{2}=1-n(v)$.
The critical velocity is reached when the maximum speed, $v={\textstyle{3\over 2}}U$,
is equal to the ‘bulk’ sound speed, $c=\sqrt{n(v)}$, which gives
$v_{\rm c}=U=\sqrt{8/23}\approx 0.59$
(or $\sqrt{2/11}\approx 0.44$ for a cylinder). However, for a compressible
fluid, the equatorial velocity is slightly larger due to pressure effects.
The first-order correction gives
$v={\textstyle{3\over 2}}U+{\textstyle{1\over 2}}U^{3}$ which reduces the
critical velocity to $v_{\rm c}\approx 0.53$.
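The estimate above has the closed form $U_{\rm c}=\sqrt{2/(3\alpha^{2}-1)}$, where $\alpha$ is the equatorial speed-up factor ($\alpha=3/2$ for a sphere, $\alpha=2$ for a cylinder). The short sketch below (ours, not from the original analysis) reproduces these numbers and obtains the first-order compressible value $v_{\rm c}\approx 0.53$ by bisection.

```python
import math

def critical_velocity(alpha):
    """Incompressible estimate: the equatorial speed alpha*U first equals the
    local sound speed sqrt(n), with n = 1 + U^2/2 - (alpha*U)^2/2 from Bernoulli,
    giving U_c = sqrt(2 / (3*alpha^2 - 1)) in units of the bulk sound speed."""
    return math.sqrt(2.0 / (3.0 * alpha**2 - 1.0))

vc_sphere = critical_velocity(1.5)    # alpha = 3/2: sqrt(8/23) ~ 0.59
vc_cylinder = critical_velocity(2.0)  # alpha = 2:   sqrt(2/11) ~ 0.44

def vc_sphere_compressible():
    """First-order compressible correction for the sphere, v = (3/2)U + U^3/2:
    bisect for the U at which (3/2) v^2 = 1 + U^2/2, i.e. v^2 = n(v)."""
    lo, hi = 0.1, critical_velocity(1.5)
    for _ in range(60):
        U = 0.5 * (lo + hi)
        v = 1.5 * U + 0.5 * U**3
        if 1.5 * v**2 > 1.0 + 0.5 * U**2:
            hi = U      # already supersonic at the equator
        else:
            lo = U
    return 0.5 * (lo + hi)
```

The compressible correction necessarily lies below the incompressible value, since the equatorial speed-up is enhanced rather than reduced.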
The exact value of $v_{\rm c}$ may be
found by solving the uniform flow equation [14, 15],
$${\rm i}\partial_{t}\tilde{\psi^{\prime}}(\mbox{\boldmath$r^{\prime}$},t)=\left[-\textstyle{1\over 2}\nabla^{\prime 2}+V(\mbox{\boldmath$r^{\prime}$})+|\tilde{\psi^{\prime}}(\mbox{\boldmath$r^{\prime}$},t)|^{2}+{\rm i}\mbox{\boldmath$v$}\cdot\nabla^{\prime}\right]\tilde{\psi^{\prime}}(\mbox{\boldmath$r^{\prime}$},t)~{},$$
(18)
where $\tilde{\psi^{\prime}}(\mbox{\boldmath$r^{\prime}$},t)=\tilde{\psi}(\mbox{\boldmath$r$},t)$ is the wavefunction in the fluid rest frame written in terms of the object frame coordinates, $\mbox{\boldmath$r^{\prime}$}=\mbox{\boldmath$r$}-\mbox{\boldmath$v$}t$. Stationary solutions of the form $\tilde{\psi^{\prime}}(\mbox{\boldmath$r^{\prime}$},t)=\phi(\mbox{\boldmath$r^{\prime}$}){\rm e}^{-{\rm i}\mu t}$ are
found to exist only for $v\leq v_{\rm c}$, where $v_{\rm c}$ is the critical
velocity for vortex formation.
To illustrate the behaviour of the exact solutions near the critical velocity,
we solve equation (18) in 3D for an impenetrable sphere
with radius $R=50$. The wavefunction, velocity and quantum pressure
term near the object
are shown in Fig. 1. Note that these parameters are related via
the Bernoulli equation (16).
The intersection between the velocity $v$ and wavefunction amplitude, $|\psi|$,
curves defines the position where the velocity is equal to the ‘bulk’ sound speed (9).
Note that close to the object the effective sound speed is increased
by the quantum pressure term in (8); therefore, even though the density is low,
the flow is not ‘supersonic’. The
critical velocity is reached when the flow velocity exceeds
the speed of sound in the bulk of the fluid, i.e., when the intersection
between the velocity and wavefunction curves moves into the
region where the quantum pressure term is zero; see Fig. 1(right).
The complication for trapped condensates is that the speed of sound and hence
the critical velocity depend upon position. In addition,
the object potential is typically penetrable and non-uniform.
We return to these topics in Sec. 5.
4 Vortex shedding, drag, and dissipation
For flow faster than the critical velocity, $v>v_{\rm c}$, vortices are emitted
approximately periodically. A typical vortex stream pattern
for flow past an impenetrable cylinder is shown in Fig. 2.
The background flow is from right
to left, and the vortex–anti-vortex pairs produce a flow pattern which
opposes the background. Consequently, the
vortex trail separates the main flow from an almost stationary wake.
The momentum loss from the fluid is transferred to the object
creating a drag force.
The contribution of vortex shedding to the drag force
can be estimated by considering the momentum transfer
due to vortex emission, i.e.,
$$\mbox{\boldmath$F$}_{\rm v}=f_{\rm v}\mbox{\boldmath$p$}_{\rm v}~{},$$
(19)
where $f_{\rm v}$ is the vortex shedding frequency
and $\mbox{\boldmath$p$}_{\rm v}$ is the momentum of a vortex pair
as it is created at the equatorial plane.
Small fluctuations in the vortex shedding frequency
occur because, as the vortices move
downstream, they interact with each other, perturbing the
flow pattern around the object. The drag force is taken to be
the time-average over many vortex emission cycles.
In addition to vortex shedding, drag may also arise due to sound waves.
The time-average drag, evaluated using equation (10), as a function of velocity
for an impenetrable cylinder is shown in Fig. 3.
The drag is zero up to the critical velocity, then increases
approximately quadratically with $v$ [7, 11].
Also shown is the contribution to the drag force from vortex
shedding alone, i.e., equation (19).
This comparison illustrates that for $v<c$ the drag is produced by vortex
shedding, whereas for $v>c$, an increasingly significant contribution arises from
sound waves. For $v>c$, the
reflected matter waves create a standing wave pattern
in front of the object as shown in Fig. 4.
Similar arguments apply to the energy loss of the flow. Consider now
the condensate at rest and the obstacle moving with speed $v$.
A drag force leads to energy transfer to the condensate. The
energy transfer rate is given by
$$\frac{{\rm d}E}{{\rm d}t}=\mbox{\boldmath$F$}_{\rm drag}\cdot\mbox{\boldmath$v$}~{}.$$
(20)
Eqs. (19) and (20) make the important link between
vortex shedding, drag and energy dissipation.
5 Motion in a trapped condensate
The inhomogeneous density profile and finite size of trapped condensates mean that
a steady, uniform flow is difficult to achieve.
The MIT experiment [5] partially overcame
this problem by sweeping the object back and forth at constant velocity
within the central region of the
condensate, where the density is approximately uniform. In this
case, the object moves through its own low-density wake,
and consequently the drag law
is different from the uniform flow case discussed above.
In Fig. 5 we show the time-averaged drag on a laser beam oscillating
in a trapped condensate. Important differences with the uniform flow case
(Fig. 3) arise at high and low velocities. The drag force tends to saturate
at higher velocities as the object expels fluid from the region of
oscillation and the pressure drops.
Fig. 5 also highlights important differences between two and three
dimensions. Two dimensions corresponds to the limit of a ‘cylindrical’
condensate, where the density is uniform parallel to the axis of the object
(which we define as the $z$-axis). However, in realistic three-dimensional
situations the density is inhomogeneous along $z$, leading to a variation of
the speed of sound which vanishes at the condensate edge. This results in a
lower critical velocity in 3D than in 2D, as apparent in Fig. 5.
In the Thomas-Fermi limit ($ga^{3}\gg 1$) the density profile is parabolic, therefore
the average sound speed along $z$ is a factor of $\pi/4$
smaller than that at the centre. The remaining reduction arises from the fact that
even at very low velocities, vortices tend to be formed where the laser beam intersects the
lower density fluid at the condensate edge.
The lower densities arising in 3D also lead to enhanced sound emission, and hence
enhanced drag at high velocities.
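The $\pi/4$ factor quoted above is simply the average of $\sqrt{1-z^{2}/Z^{2}}$ over the parabolic profile, $\int_{0}^{1}\sqrt{1-u^{2}}\,{\rm d}u=\pi/4$, which a one-line quadrature confirms (our check, not from the paper):

```python
import math

# Average of c(z) proportional to sqrt(1 - z^2/Z^2) over a Thomas-Fermi
# profile, evaluated by the midpoint rule: <c>/c(0) = pi/4 ~ 0.7854.
N = 200_000
avg = sum(math.sqrt(1.0 - ((i + 0.5) / N) ** 2) for i in range(N)) / N
```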
Below the critical velocity, dissipation due to sound emission
occurs at the motion extrema, where the
object accelerates. This is illustrated
in Fig. 6 which shows a cross section through a 2D condensate cut
by a moving laser beam. For constant motion, Fig. 6(a), the fluid is distributed
symmetrically around the object and the drag, which is given by an overlap
integral between the condensate density and the gradient of the object potential (i.e., the second term in equation (10)) is zero.
When the object accelerates, Fig. 6(b), the fluid fails to respond
rapidly enough to the abrupt change in velocity, and
the asymmetry in the overlap between the fluid and the object leads to
a resistance force analogous to dynamic buoyancy. The system
relaxes to the uniform flow case by the emission of a sound wave.
This corresponds to the ‘phonon heating’ process discussed
in [8].
Although the NLSE model describes dissipation in the sense of
energy transfer between a moving object and the fluid, it
does not say anything about how that energy (mostly stored within the vortex core)
may be subsequently converted into heat.
A complete description should include
coupling of the condensate to a thermal cloud, and would describe the damping
of phonon and vortex modes. Recent work on the non-equilibrium dynamics of
the condensate and non-condensate predicts a depletion of the
condensate fraction [16] as observed in the MIT experiment.
6 Summary
The motion of an object through a dilute Bose-Einstein condensate
provides an ideal system to study the fundamental
problem of the onset of dissipation in superfluids.
In this paper we have explained the role of vortex shedding and sound emission
in energy transfer between the object and the
condensate. No energy transfer is observed under the condition of uniform,
steady flow at
speeds below a critical velocity. However, if the object accelerates
there is a small dissipative effect, even below the critical
velocity, due to sound emission. The critical velocity is reached
when the local velocity in the bulk of the
fluid (i.e., where the quantum pressure is zero) exceeds the speed of
sound.
Above the critical velocity vortices are emitted leading to a drag force and
energy transfer to the fluid. Vortex shedding
dominates the energy transfer for intermediate velocities, while sound
emission becomes increasingly important for supersonic motion.
We highlight some important differences
between homogeneous and inhomogeneous trapped condensates.
In particular, the critical velocity is substantially reduced when the object
intersects regions of lower density at the condensate edge, and
the trap inhomogeneity gives rise to an additional term in
the drag force analogous to a buoyancy.
This work is supported by the EPSRC.
References
[1]
See e.g. Bose-Einstein condensation in atomic gases, Proc.
Int. School of Physics Enrico Fermi, eds. M. Inguscio, S. Stringari and C.
Wieman (IOS Press, Amsterdam, 1999).
[2]
K. W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard,
Phys. Rev. Lett. 84, 806 (2000).
[3]
M. R. Matthews, B. P. Anderson, P. C. Haljan, D. S. Hall,
C. E. Wieman, and E. A. Cornell, Phys. Rev. Lett. 83, 2498 (1999).
[4]
R. J. Donnelly, Quantized Vortices in Helium II (CUP,
Cambridge, 1991).
[5]
C. Raman, M. Köhl, R. Onofrio, D. S.
Durfee, C. E. Kuklewicz, Z. Hadzibabic, and W. Ketterle, Phys. Rev. Lett.
83, 2502 (1999).
[6]
V. L. Ginzburg and L. P. Pitaevskii, Sov. Phys. JETP 7, 858 (1958); E.
P. Gross, J. Math. Phys. 4, 195 (1963).
[7]
T. Frisch, Y. Pomeau, and S. Rica, Phys. Rev. Lett. 69,
1644 (1992).
[8]
B. Jackson, J. F. McCann, and C. S. Adams, Phys. Rev. A 61, 051603 (R)
(2000).
[9]
P. Nozières and D. Pines, Theory of Quantum Liquids Vol II
(Addison-Wesley, Redwood City, 1990).
[10]
L. D. Landau and E. M. Lifshitz, Fluid Mechanics (2nd ed.) (Pergamon,
Oxford, 1987).
[11]
T. Winiecki, J. F. McCann, and C. S. Adams, Phys. Rev. Lett. 82, 5186 (1999).
[12]
B. Jackson, J. F. McCann, and C. S. Adams, Phys. Rev. Lett. 80, 3903
(1998).
[13]
B. M. Caradoc-Davies, R. J. Ballagh, and K. Burnett, Phys. Rev. Lett.
83, 895 (1999).
[14]
C. Huepe and M.-É. Brachet, C. R. Acad. Sci. Paris, 325, 195 (1997).
[15]
T. Winiecki, J. F. McCann, and C. S. Adams,
Europhys. Lett. 48, 475 (1999).
[16]
T. Nikuni, E. Zaremba, and A. Griffin, Phys. Rev. Lett. 83, 10 (1999). |
Optimal Investment Stopping Problem
with Nonsmooth Utility in Finite Horizon
Chonghu Guan,
Xun Li,
Zuoquan Xu and
Fahuai Yi
School of Mathematics, Jiaying University, Meizhou 514015, China; Department of Applied Mathematics, Hong Kong Polytechnic University, Hong Kong; Department of Applied Mathematics, Hong Kong Polytechnic University, Hong Kong; School of Mathematical Science, South China Normal University, Guangzhou 510631, China.
The project is supported by
NNSF of China (No.11271143, No.11371155 and No.11471276), University Special Research
Fund for Ph.D. Program of China (20124407110001), and Research Grants Council of Hong Kong under grants 521610 and 519913.
Abstract
In this paper, we investigate an optimal stopping problem mixed with stochastic controls and a nonsmooth utility over a finite time horizon. The paper aims to develop new methodologies, significantly different from those for mixed dynamic optimal control and stopping problems in the existing literature, to characterize a manager’s decision. We formulate our model as a free boundary problem of a fully nonlinear equation. By means of a dual transformation, we then convert this problem into a new free boundary problem of a linear equation. Finally, applying the corresponding inverse dual transformation, we use the theoretical results established for the new free boundary problem to obtain the properties of the optimal strategy and of the optimal stopping time for the original problem over a finite time investment horizon.
Keywords: Parabolic variational inequality; Free boundary; Optimal investment; Optimal stopping; Dual transformation.
Mathematics Subject Classification. 35R35; 60G40; 91B70; 93E20.
1 Introduction
Optimal stopping problems have important applications in many fields such as science,
engineering, economics and, particularly, finance. The theory in this area
has been well developed for stochastic dynamic systems over the past decades.
In the field of financial
investment, however, investors frequently face decisions about when to
stop investing in risky assets so as to maximize their expected
utilities of wealth over a finite time investment
horizon. These optimal stopping problems depend on the underlying
dynamic systems as well as on the investors’ optimization decisions (controls).
This naturally results in a mixed optimal control and stopping problem, and
Ceci-Bassan (2004) is one of the typical representatives along this line of
research. In the general formulation of such models, the decision variable is mixed,
comprising a control and a stopping time. The theory has also been
studied in Bensoussan-Lions (1984), Elliott-Kopp (1999), Yong-Zhou (1999) and Fleming-Soner (2006),
and applied in finance in
Dayanik-Karatzas (2003), Henderson-Hobson (2008), Li-Zhou (2006), Li-Wu
(2008, 2009), Shiryaev-Xu-Zhou (2008) and Jian-Li-Yi (2014).
In the finance field, finding an optimal stopping time point has been
extensively studied for pricing American-style options, which allow option
holders to exercise the options before or at the maturity. Typical examples
that are applicable include, but are not limited to, those presented in
Chang-Pang-Yong (2009), Dayanik-Karatzas (2003) and Rüschendorf-Urusov
(2008). In the mathematical finance literature, choosing an optimal stopping
time point is often related to a free boundary problem for a class of
diffusions (see Fleming-Soner (2006) and Peskir-Shiryaev (2006)).
In many applied areas, especially in more extensive investment
problems, however, one often encounters more general controlled diffusion
processes. In real financial markets, the situation is even more complicated
when investors expect to choose as little time as possible to stop portfolio
selection over a given investment horizon so as to maximize their profits
(see Samuelson (1965), Karatzas-Kou (1998), Karatzas-Sudderth (1999),
Karatzas-Wang (2000), Karatzas-Ocone (2002), Ceci-Bassan (2004), Henderson (2007),
Li-Zhou (2006) and Li-Wu (2008, 2009)).
The initial motivation of this paper comes from our recent studies
on choosing an optimal point at which an investor stops investing
and/or sells all his risky assets (see Carpenter (2000) and Henderson-Hobson (2008)). The objective is to find an
optimization process and stopping time so as to meet certain
investment criteria, such as, the maximum of an expected nonsmooth utility
value before or at the maturity. This is a typical yet important problem in the
area of financial investment. However, there are fundamental
difficulties in handling such mixed control and stopping problems. Firstly, our
investment problem, which is significantly different from classical
American-style options, involves the portfolio process in the objective over the entire
time horizon. Secondly, it
involves the portfolio in the drift and volatility terms of the dynamic system, so that the
problem, which includes multi-dimensional financial assets, is more realistic
than those addressed in the finance literature (see Carpenter (2000)). Therefore, it is difficult to solve
these problems either analytically or numerically using current
methods developed in the framework of studying American-style options.
In our model, the corresponding HJB equation of the problem is formulated into a variational inequality
of a fully nonlinear equation. We make a dual transformation for the
problem to obtain a new free boundary problem with a linear equation. Tackling this new free boundary problem,
we characterize the properties of the free boundary and optimal
strategy for the original problem.
The main innovations of this paper are the following. Firstly, we rigorously prove that the limit of the value function as $t\rightarrow T-$ is the concave hull of the payoff function (not the payoff function itself), i.e.
$$\lim\limits_{t\rightarrow T-}V(x,t)=\varphi(x),$$
where $\varphi(x)$ is the concave hull of the payoff function $g(x)$ (see Lemma 2.1).
Secondly, since the obstacle $\varphi(x)$ in variational inequality is not strictly concave (see Figure 2.1),
the equivalence between the dual problem (3.14) and the original problem (2.11) is not trivial.
However, we prove it in Section 3.
Thirdly, we present a new method to study the free boundary when the exercise region is not connected (see (4.8)-(4.10) and Lemma 4.2),
which sheds light on the monotonicity and differentiability of the free boundaries (see Figures 5.1-5.4) in all parameter cases.
The remainder of the paper is organized as follows. In Section 2, the mathematical formulation of the
model is presented, and the corresponding HJB equation with a certain boundary-terminal condition is posed.
In particular, we show that the value function $V(x,t)$ is not continuous at $t=T$, i.e.
$$\lim\limits_{t\rightarrow T-}V(x,t)\neq V(x,T).$$
In Section 3, we make a dual transformation to convert the free boundary problem of a fully nonlinear PDE (2.11) to
a new free boundary problem of a linear equation (3.14).
Section 4 is devoted to the study of the free boundary of problem (3.14) in different parameter cases. In Section 5, using the corresponding inverse dual transformation, we construct the solution of the original
problem (2.11) and present the properties (including monotonicity and differentiability) of its free boundaries in the cases classified in Section 4.
In Section 6 we present conclusions.
2 Model Formulation
2.1 The manager’s problem
The manager operates in a complete, arbitrage-free, continuous-time
financial market consisting of a riskless asset with instantaneous
interest rate $r$ and $n$ risky assets. The risky asset prices $S_{i}$ are
governed by the stochastic differential equations
$$\displaystyle\frac{dS_{i,t}}{S_{i,t}}=(r+\mu_{i})\,dt+\sigma_{i}\,dW_{t},\quad\mbox{for }i=1,2,\cdots,n,$$
(2.1)
where the interest rate $r$, the excess appreciation rates $\mu_{i}$, and the volatility vectors
$\sigma_{i}$ are constants,
and $W$ is a standard $n$-dimensional Brownian motion.
In addition, the covariance matrix $\sigma\sigma^{\prime}$ is strongly nondegenerate.
A trading strategy for the manager is an $n$-dimensional process $\pi_{t}$ whose $i$-th component $\pi_{i,t}$
is the amount held in the $i$-th risky asset in the
portfolio at time $t$. An admissible trading strategy $\pi_{t}$
must be progressively measurable with respect to $\{\mathcal{F}_{t}\}$ and such that $X_{t}\geq 0$. Note that
$X_{t}=\pi_{0,t}+\sum\limits_{i=1}^{n}\pi_{i,t}$, where $\pi_{0,t}$ is the amount invested
in the money market account.
Hence, the wealth $X_{t}$ evolves according to
$$dX_{t}=(rX_{t}+\mu^{\prime}\pi_{t})dt+\pi_{t}^{\prime}\sigma dW_{t},$$
where the portfolio $\pi_{t}$ is a progressively measurable and square integrable process subject to the constraint $X_{t}\geq 0$ for all $t\geq 0$.
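The wealth dynamics above can be simulated directly. The sketch below is an Euler-Maruyama discretization for a single risky asset with a constant holding $\pi$; the clamp at zero is a crude stand-in for the admissibility constraint $X_{t}\geq 0$, and every parameter value is illustrative rather than taken from the paper.

```python
import random

def simulate_wealth(x0, pi, r=0.03, mu=0.05, sigma=0.2, T=1.0, n=1000, rng=None):
    """Euler-Maruyama for dX = (r X + mu*pi) dt + pi*sigma dW, one asset,
    constant holding pi (an illustrative simplification of the paper's model)."""
    rng = rng or random.Random(0)
    dt = T / n
    x = x0
    for _ in range(n):
        dW = rng.gauss(0.0, dt ** 0.5)
        x = x + (r * x + mu * pi) * dt + pi * sigma * dW
        x = max(x, 0.0)   # crude enforcement of the admissibility constraint X >= 0
    return x
```

With $\pi=0$ the noise term vanishes and the scheme compounds the riskless rate, so the terminal wealth is close to $x_{0}e^{rT}$, as expected.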
The manager’s dynamic problem is to choose an admissible trading strategy $\pi_{t}$ and a stopping time $\tau$ ($t\leq\tau\leq T$) to maximize his expected utility of the exercise wealth before or at the terminal time $T$:
$$\displaystyle V(x,t)=\sup\limits_{\pi,\tau}E_{t,x}[e^{-\beta(\tau-t)}g(X_{\tau%
})]:=\sup\limits_{\pi,\tau}E[e^{-\beta(\tau-t)}g(X_{\tau})|X_{t}=x],$$
(2.2)
where
$$g(x)=\frac{1}{\gamma}[(x-b)^{+}+K]^{\gamma},$$
and $\beta$ is the discount factor.
If $X_{t}=0$, then in order to keep $X_{s}\geq 0$ the only choice of $\pi_{s}$ is $0$, and thus $X_{s}\equiv 0,\;t\leq s\leq T$. Hence
$$V(0,t)=\sup\limits_{\pi,\tau}E[e^{-\beta(\tau-t)}g(0)]=g(0)=\frac{1}{\gamma}K^{\gamma},$$
which means the optimal stopping time $\tau$ is the present moment $t$.
2.2 Discontinuity at the terminal time $T$
From the definition (2.2) we can see that $V(x,T)=g(x)=\frac{1}{\gamma}[(x-b)^{+}+K]^{\gamma}$. Since the portfolio $\pi_{t}$ is unrestricted, $V(x,t)$ may be discontinuous at the terminal time $T$.
Therefore, we should pay attention to $V(x,T-):=\lim\limits_{t\rightarrow T-}V(x,t)$.
Lemma 2.1
The value function $V$ defined in (2.2) is not continuous at the terminal time $T$ and satisfies
$$\lim\limits_{t\rightarrow T-}V(x,t)=\varphi(x),$$
where
$$\varphi(x)=\left\{\begin{array}[]{l}kx+\frac{1}{\gamma}K^{\gamma},\quad 0<x<%
\widehat{x},\\
\frac{1}{\gamma}(x-b+K)^{\gamma},\quad x\geq\widehat{x},\end{array}\right.$$
is the concave hull of $\frac{1}{\gamma}[(x-b)^{+}+K]^{\gamma}$ (see Fig 2.1),
here $k$ and $\widehat{x}$ satisfy
$$\displaystyle\left\{\begin{array}[]{l}k\widehat{x}+\frac{1}{\gamma}K^{\gamma}=%
\frac{1}{\gamma}(\widehat{x}-b+K)^{\gamma},\\
k=(\widehat{x}-b+K)^{\gamma-1}.\end{array}\right.$$
(2.3)
Fig 2.1: the concave hull $\varphi(x)$ of $\frac{1}{\gamma}[(x-b)^{+}+K]^{\gamma}$ (linear with slope $k$ on $(0,\widehat{x})$).
Proof:
We first prove
$$\limsup\limits_{t\rightarrow T-}V(x,t)\leq\varphi(x).$$
Define
$$\zeta_{t}=e^{-(r+\mu^{\prime}(\sigma^{\prime}\sigma)^{-1}\mu)t-\mu^{\prime}%
\sigma^{-1}W_{t}},$$
then
$$d\zeta_{t}=\zeta_{t}[-rdt-\mu^{\prime}\sigma^{-1}dW_{t}]$$
and
$$\displaystyle d(\zeta_{t}X_{t})$$
$$\displaystyle=$$
$$\displaystyle\zeta_{t}dX_{t}+X_{t}d\zeta_{t}+d\zeta_{t}dX_{t}$$
(2.4)
$$\displaystyle=$$
$$\displaystyle\zeta_{t}[(rX_{t}+\mu^{\prime}\pi_{t})dt+\pi_{t}^{\prime}\sigma dW%
_{t}-rX_{t}dt-\mu^{\prime}\sigma^{-1}X_{t}dW_{t}-(\mu^{\prime}\sigma^{-1})(\pi%
_{t}^{\prime}\sigma)^{\prime}dt]$$
$$\displaystyle=$$
$$\displaystyle\zeta_{t}[\pi_{t}^{\prime}\sigma-\mu^{\prime}\sigma^{-1}X_{t}]dW_%
{t}.$$
Thus, $\zeta_{t}X_{t}$ is a martingale. For any $\pi\in{\cal A}$ and stopping time $\tau$ ($t\leq\tau\leq T$), by Jensen’s inequality, we have
$$E_{t,x}\varphi\Big{(}\frac{\zeta_{\tau}}{\zeta_{t}}X_{\tau}\Big{)}\leq\varphi%
\Big{(}E_{t,x}\Big{(}\frac{\zeta_{\tau}}{\zeta_{t}}X_{\tau}\Big{)}\Big{)}=%
\varphi(x).$$
Then
$$\displaystyle\limsup\limits_{t\rightarrow T-}\sup\limits_{\tau,\;\pi}E_{t,x}%
\varphi\Big{(}\frac{\zeta_{\tau}}{\zeta_{t}}X_{\tau}\Big{)}\leq\varphi(x).$$
(2.5)
We now come to prove
$$\displaystyle\lim\limits_{t\rightarrow T-}\sup\limits_{\tau,\;\pi}E_{t,x}\Big{%
|}\varphi(X_{\tau})-\varphi\Big{(}\frac{\zeta_{\tau}}{\zeta_{t}}X_{\tau}\Big{)%
}\Big{|}=0.$$
(2.6)
Indeed, since $\varphi(x)$ is differentiable and, for all $x,\;y\geq\widehat{x}$,
$$|(x-b+K)^{\gamma}-(y-b+K)^{\gamma}|\leq|x-y|^{\gamma},$$
there exists a constant $C>0$ such that for all $x,\;y>0$,
$$|\varphi(x)-\varphi(y)|\leq C|x-y|^{\gamma}.$$
Thus, for any $\pi$ and stopping time $t\leq\tau\leq T$,
$$\displaystyle E_{t,x}\Big{|}\varphi(X_{\tau})-\varphi\Big{(}\frac{\zeta_{\tau}%
}{\zeta_{t}}X_{\tau}\Big{)}\Big{|}\leq CE_{t,x}\Big{(}\Big{(}\frac{\zeta_{\tau%
}}{\zeta_{t}}X_{\tau}\Big{)}^{\gamma}\Big{|}\frac{\zeta_{t}}{\zeta_{\tau}}-1%
\Big{|}^{\gamma}\Big{)}.$$
Using Hölder’s inequality, we obtain
$$\displaystyle E_{t,x}\Big{|}\varphi(X_{\tau})-\varphi\Big{(}\frac{\zeta_{\tau}%
}{\zeta_{t}}X_{\tau}\Big{)}\Big{|}$$
$$\displaystyle\leq$$
$$\displaystyle C\Big{(}E_{t,x}\Big{(}\frac{\zeta_{\tau}}{\zeta_{t}}X_{\tau}\Big%
{)}\Big{)}^{\gamma}\Big{(}E_{t,x}\Big{|}\frac{\zeta_{t}}{\zeta_{\tau}}-1\Big{|%
}^{\frac{\gamma}{1-\gamma}}\Big{)}^{1-\gamma}$$
$$\displaystyle\leq$$
$$\displaystyle Cx^{\gamma}\Big{(}E_{t,x}\sup\limits_{t\leq s\leq T}\Big{|}\frac%
{\zeta_{t}}{\zeta_{s}}-1\Big{|}^{\frac{\gamma}{1-\gamma}}\Big{)}^{1-\gamma}.$$
Hence,
$$\displaystyle\lim\limits_{t\rightarrow T-}\sup\limits_{\tau,\;\pi}E_{t,x}\Big{%
|}\varphi(X_{\tau})-\varphi\Big{(}\frac{\zeta_{\tau}}{\zeta_{t}}X_{\tau}\Big{)%
}\Big{|}\leq Cx^{\gamma}\lim\limits_{t\rightarrow T-}\Big{(}E_{t,x}\sup\limits%
_{t\leq s\leq T}\Big{|}\frac{\zeta_{t}}{\zeta_{s}}-1\Big{|}^{\frac{\gamma}{1-%
\gamma}}\Big{)}^{1-\gamma}=0.$$
Therefore, by (2.5) and (2.6),
$$\displaystyle\limsup\limits_{t\rightarrow T-}V(x,t)$$
$$\displaystyle=$$
$$\displaystyle\limsup\limits_{t\rightarrow T-}\sup\limits_{\tau,\;\pi}E_{t,x}%
\Big{(}e^{-\beta(\tau-t)}g(X_{\tau})\Big{)}$$
$$\displaystyle\leq$$
$$\displaystyle\limsup\limits_{t\rightarrow T-}\sup\limits_{\tau,\;\pi}E_{t,x}%
\varphi(X_{\tau})$$
$$\displaystyle\leq$$
$$\displaystyle\limsup\limits_{t\rightarrow T-}\sup\limits_{\tau,\;\pi}E_{t,x}%
\varphi\Big{(}\frac{\zeta_{\tau}}{\zeta_{t}}X_{\tau}\Big{)}+\lim\limits_{t%
\rightarrow T-}\sup\limits_{\tau,\;\pi}E_{t,x}\Big{|}\varphi(X_{\tau})-\varphi%
\Big{(}\frac{\zeta_{\tau}}{\zeta_{t}}X_{\tau}\Big{)}\Big{|}$$
$$\displaystyle\leq$$
$$\displaystyle\varphi(x).$$
Next, we further prove
$$\displaystyle\liminf\limits_{t\rightarrow T-}V(x,t)\geq\varphi(x).$$
(2.7)
For fixed $t<T$, if $x\geq x_{0}:=\widehat{x}$ or $x=0$, we get
$$V(x,t)\geq g(x)=\varphi(x),$$
which implies that (2.7) holds true.
If $0<x<x_{0}$, choose $\tau=T$ and choose $\pi_{s}$ such that
$$\frac{\zeta_{s}}{\zeta_{t}}[\pi_{s}^{\prime}\sigma-\mu^{\prime}\sigma^{-1}X_{s}]=(\pi^{N}_{s})^{\prime}:=N\chi_{\big{\{}0<\frac{\zeta_{s}}{\zeta_{t}}X_{s}<x_{0}\big{\}}}I_{n}^{\prime},\quad\forall\;N>0,$$
where $I_{n}$ is an $n$-dimensional unit column vector. Let $X_{s}^{N}=\frac{\zeta_{s}}{\zeta_{t}}X_{s}$. Then using (2.4) results in
$$dX_{s}^{N}=(\pi^{N}_{s})^{\prime}dW_{s},\quad t\leq s\leq T.$$
It is not hard to obtain
$$0\leq X_{s}^{N}\leq x_{0},\quad t\leq s\leq T,$$
and since
$$\displaystyle\{0<X_{T}^{N}<x_{0}\}$$
$$\displaystyle=$$
$$\displaystyle\{0<X_{s}^{N}=x+NI_{n}^{\prime}(W_{s}-W_{t})<x_{0},\;t\leq s\leq T\}$$
$$\displaystyle\subset$$
$$\displaystyle\{0<x+NI_{n}^{\prime}(W_{T}-W_{t})<x_{0}\},$$
we have
$$P(0<X_{T}^{N}<x_{0})\leq P(0<x+NI_{n}^{\prime}(W_{T}-W_{t})<x_{0})\rightarrow 0%
,\quad N\rightarrow\infty.$$
Note that
$$x_{0}P(X_{T}^{N}=x_{0})\leq EX_{T}^{N}\leq x_{0}P(X_{T}^{N}=x_{0})+x_{0}P(0<X_%
{T}^{N}<x_{0}).$$
Therefore,
$$\lim\limits_{N\rightarrow\infty}P(X_{T}^{N}=x_{0})=\frac{EX_{T}^{N}}{x_{0}}=%
\frac{x}{x_{0}},\quad\lim\limits_{N\rightarrow\infty}P(X_{T}^{N}=0)=1-\frac{x}%
{x_{0}}.$$
As a result,
$$\lim\limits_{N\rightarrow\infty}Eg(X_{T}^{N})=\frac{x}{x_{0}}g(x_{0})+\Big{(}1%
-\frac{x}{x_{0}}\Big{)}g(0)=\frac{x}{x_{0}}\Big{(}kx_{0}+\frac{1}{\gamma}K^{%
\gamma}\Big{)}+\Big{(}1-\frac{x}{x_{0}}\Big{)}\frac{1}{\gamma}K^{\gamma}=kx+%
\frac{1}{\gamma}K^{\gamma}=\varphi(x).$$
Thus
$$\sup\limits_{\tau,\;\pi}E_{t,x}\Big{(}e^{-\beta(\tau-t)}g\Big{(}\frac{\zeta_{%
\tau}}{\zeta_{t}}X_{\tau}\Big{)}\Big{)}\geq e^{-\beta(T-t)}\lim\limits_{N%
\rightarrow\infty}Eg(X_{T}^{N})=e^{-\beta(T-t)}\varphi(x).$$
Meanwhile, similar to (2.6), we have
$$\displaystyle\lim\limits_{t\rightarrow T-}\sup\limits_{\tau,\;\pi}E_{t,x}\Big{%
|}g(X_{\tau})-g\Big{(}\frac{\zeta_{\tau}}{\zeta_{t}}X_{\tau}\Big{)}\Big{|}=0.$$
Therefore,
$$\displaystyle\liminf\limits_{t\rightarrow T-}V(x,t)$$
$$\displaystyle=$$
$$\displaystyle\liminf\limits_{t\rightarrow T-}\sup\limits_{\tau,\;\pi}E_{t,x}%
\Big{(}e^{-\beta(\tau-t)}g(X_{\tau})\Big{)}$$
$$\displaystyle\geq$$
$$\displaystyle\liminf\limits_{t\rightarrow T-}\sup\limits_{\tau,\;\pi}E_{t,x}%
\Big{(}e^{-\beta(\tau-t)}g\Big{(}\frac{\zeta_{\tau}}{\zeta_{t}}X_{\tau}\Big{)}%
\Big{)}-\lim\limits_{t\rightarrow T-}\sup\limits_{\tau,\;\pi}E_{t,x}\Big{|}g(X%
_{\tau})-g\Big{(}\frac{\zeta_{\tau}}{\zeta_{t}}X_{\tau}\Big{)}\Big{|}$$
$$\displaystyle\geq$$
$$\displaystyle\varphi(x).$$
$\Box$
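The pair $(k,\widehat{x})$ solving (2.3) can also be computed numerically: substituting the second equation into the first leaves one scalar equation in $\widehat{x}$, which bisection handles. A minimal sketch (the parameter values used below are purely illustrative):

```python
def hull_parameters(gamma, K, b, hi=1e6):
    """Solve system (2.3) for (k, xhat) by bisection.

    Eliminating k gives F(x) = (x-b+K)^(gamma-1) x + K^gamma/gamma
                               - (x-b+K)^gamma/gamma = 0,
    with F positive near the left end of the domain and negative for large x.
    """
    def F(x):
        z = x - b + K
        return z ** (gamma - 1.0) * x + K ** gamma / gamma - z ** gamma / gamma

    lo = max(1e-9, b - K + 1e-9)      # keep z = x - b + K positive
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    xhat = 0.5 * (lo + hi)
    k = (xhat - b + K) ** (gamma - 1.0)
    return k, xhat
```

For instance, with $\gamma=1/2$, $K=1$, $b=1$ one finds $\widehat{x}=4$ and $k=1/2$, and both equations of (2.3) can then be checked directly.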
Since the value function is not continuous at the terminal time $T$, we introduce in the next subsection its corresponding HJB equation with the terminal condition $V(x,T-)=\varphi(x),\;x>0$.
2.3 The HJB equation
Applying the dynamic programming principle, we get the following HJB equation
$$\displaystyle\left\{\begin{array}[]{l}\min\Big{\{}-V_{t}-\max\limits_{\pi}[%
\frac{1}{2}(\pi^{\prime}\sigma\sigma^{\prime}\pi)V_{xx}+\mu^{\prime}\pi V_{x}]%
-rxV_{x}+\beta V,V-\frac{1}{\gamma}[(x-b)^{+}+K]^{\gamma}\Big{\}}=0,\\
\quad x>0,\;0<t<T,\\
V(0,t)=\frac{1}{\gamma}K^{\gamma},\quad 0<t<T,\\
V(x,T-)=\varphi(x),\quad x>0,\end{array}\right.$$
(2.8)
Assume $V_{x}\geq 0$. Note that the Hamiltonian operator
$$\displaystyle\max_{\pi}\Big{\{}\frac{1}{2}(\pi^{\prime}\sigma\sigma^{\prime}\pi)V_{xx}+\mu^{\prime}\pi V_{x}\Big{\}}-rxV_{x}+\beta V$$
(2.9)
is singular if $V_{xx}>0$, or if $V_{xx}=0$ and $V_{x}>0$.
Thus, $V_{xx}\leq 0$. Moreover, if $V_{x}=0$ held at some $(x_{0},t_{0})$,
then $V_{x}(x,t_{0})=0$ for all $x\geq x_{0}$,
which contradicts $V(x,t)\geq\frac{1}{\gamma}[(x-b)^{+}+K]^{\gamma}\rightarrow+\infty$ as $x\rightarrow+\infty$.
Therefore, if $V(x,t)\in C^{2,1}$ is an increasing (in $x$) solution of (2.8), it must satisfy
$$\displaystyle V_{x}>0,V_{xx}<0,\quad x>0,\;0<t<T.$$
(2.10)
Note that the gradient of $\pi^{\prime}\sigma\sigma^{\prime}\pi$ with respect to $\pi$ is
$$\nabla_{\pi}(\pi^{\prime}\sigma\sigma^{\prime}\pi)=2\sigma\sigma^{\prime}\pi.$$
Hence,
$$\pi^{*}=-(\sigma\sigma^{\prime})^{-1}\mu\frac{V_{x}(x,t)}{V_{xx}(x,t)}.$$
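For a concrete feel for this formula, the sketch below evaluates $\pi^{*}=-(\sigma\sigma^{\prime})^{-1}\mu\,V_{x}/V_{xx}$ for two assets, plugging in the illustrative ansatz $V(x)=x^{\gamma}/\gamma$ (so $V_{x}/V_{xx}=x/(\gamma-1)$); the paper's value function also depends on $t$, so this ansatz is only a stand-in.

```python
def optimal_portfolio(mu, Sigma, x, gamma):
    """pi* = -(Sigma)^{-1} mu * V_x / V_xx for V = x^gamma/gamma, Sigma = sigma sigma' (2x2).

    Illustrative assumption: power-type value function, two risky assets.
    """
    ratio = x / (gamma - 1.0)            # V_x / V_xx for V = x^gamma/gamma
    a, b_, c, d = Sigma[0][0], Sigma[0][1], Sigma[1][0], Sigma[1][1]
    det = a * d - b_ * c                 # invert the 2x2 covariance matrix by hand
    inv = [[d / det, -b_ / det], [-c / det, a / det]]
    return [-(inv[i][0] * mu[0] + inv[i][1] * mu[1]) * ratio for i in range(2)]
```

Since $\gamma<1$ makes the ratio negative, the two minus signs combine to give a positive holding proportional to $(\sigma\sigma^{\prime})^{-1}\mu$, the familiar Merton-type direction.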
Applying $V_{xx}<0$, we have
$$V-\frac{1}{\gamma}[(x-b)^{+}+K]^{\gamma}\geqslant 0\quad\text{ if and only if %
}\quad V-\varphi(x)\geqslant 0.$$
Define $a^{2}=\mu^{\prime}(\sigma\sigma^{\prime})^{-1}\mu$; then the variational inequality (2.8) reduces to
$$\min\Big{\{}-V_{t}+\frac{a^{2}}{2}\frac{V^{2}_{x}}{V_{xx}}-rxV_{x}+\beta V,\;V-\varphi(x)\Big{\}}=0.$$
Hence, we formulate our problem into the following variational inequality problem
$$\displaystyle\left\{\begin{array}[]{l}\min\Big{\{}-V_{t}+\displaystyle\frac{a^%
{2}}{2}\frac{V^{2}_{x}}{V_{xx}}-rxV_{x}+\beta V,V-\varphi(x)\Big{\}}=0,\quad x%
>0,\;0<t<T,\\
V(0,t)=\frac{1}{\gamma}K^{\gamma},\quad 0<t<T,\\
V(x,T-)=\varphi(x),\quad x>0.\end{array}\right.$$
(2.11)
We want to show that this equation has a (unique) solution $V(x,t)\in C^{2,1}$ satisfying (2.10),
and a verification theorem will ensure that this solution is precisely $\widetilde{V}$ defined in (LABEL:tildeV).
3 Dual Problem
Firstly, assume that
$$\displaystyle V,\;V_{x}\in C(Q_{x}),$$
(3.1)
where $Q_{x}=[0,+\infty)\times(0,T)$ and
$$\displaystyle\lim\limits_{x\rightarrow+\infty}V(x,t)=+\infty,\quad\lim\limits_%
{x\rightarrow+\infty}V_{x}(x,t)=0,\quad\forall t\in(0,T).$$
(3.2)
Later, we will prove the above results in Theorem 5.4.
Now define a dual transformation of $V(x,t)$, for any $t\in(0,T)$ (see Pham [24]),
$$\displaystyle v(y,t)=\max\limits_{x\geq 0}(V(x,t)-xy),\quad y>0,$$
(3.3)
then, for fixed $y>0$, the optimal $x$ satisfies
$$\displaystyle\left\{\begin{array}[]{ll}\partial_{x}(V(x,t)-xy)=V_{x}(x,t)-y=0,%
&\mbox{if }y\leq V_{x}(0,t),\\
x=0,&\mbox{if }y>V_{x}(0,t).\end{array}\right.$$
Define a transformation
$$\displaystyle x=I(y,t):=\left\{\begin{array}[]{ll}(V_{x}(\cdot,t))^{-1}(y),&%
\mbox{if }y\leq V_{x}(0,t),\\
0,&\mbox{if }y>V_{x}(0,t).\end{array}\right.$$
(3.4)
Owing to (2.10) and (3.2), $I(y,t)$ is continuous and decreasing in $y$, with
$\lim\limits_{y\rightarrow 0+}I(y,t)=+\infty$. Thus
$$\displaystyle v(y,t)=V(I(y,t),t)-I(y,t)y,$$
(3.5)
Define the dual transformation of $\varphi(x)$ as
$$\psi(y)=\max\limits_{x\geq 0}(\varphi(x)-xy),\quad y>0,$$
then, for fixed $y$, the optimal $x$, which we denote by $x_{\varphi}(y)$, is
$$x_{\varphi}(y)=\left\{\begin{array}[]{ll}y^{\frac{1}{\gamma-1}}-(K-b),&\mbox{%
for }0<y<k,\\
\in[0,\widehat{x}],&\mbox{for }y=k,\\
0,&\mbox{for }y>k,\end{array}\right.$$
and
$$\displaystyle\psi(y)$$
$$\displaystyle\!\!\!=$$
$$\displaystyle\varphi(x_{\varphi}(y))-x_{\varphi}(y)y$$
(3.6)
$$\displaystyle\!\!\!=$$
$$\displaystyle\left\{\begin{array}[]{ll}\frac{1-\gamma}{\gamma}y^{\frac{\gamma}%
{\gamma-1}}+(K-b)y,&\mbox{for }0<y<k,\\
\frac{1}{\gamma}K^{\gamma},&\mbox{for }y\geq k,\end{array}\right.$$
(see Fig 3.1).
Fig 3.1: the dual function $\psi(y)$, equal to $\frac{1}{\gamma}K^{\gamma}$ for $y\geq k$.
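The closed form (3.6) can be checked against a brute-force evaluation of the defining maximum. The sketch below uses the illustrative parameters $\gamma=1/2$, $K=1$, $b=1$ (for which $k=1/2$ and $\widehat{x}=4$ solve (2.3)); these are not values from the paper.

```python
GAMMA, K, B, KSLOPE, XHAT = 0.5, 1.0, 1.0, 0.5, 4.0   # illustrative parameters

def phi(x):
    """Concave hull of the payoff, as in Lemma 2.1."""
    if x < XHAT:
        return KSLOPE * x + K ** GAMMA / GAMMA
    return (x - B + K) ** GAMMA / GAMMA

def psi_closed(y):
    """Closed form (3.6) of the dual function."""
    if y < KSLOPE:
        return (1 - GAMMA) / GAMMA * y ** (GAMMA / (GAMMA - 1)) + (K - B) * y
    return K ** GAMMA / GAMMA

def psi_direct(y, xmax=200.0, n=20000):
    """Brute-force psi(y) = max_{x >= 0} (phi(x) - x*y) over a fine grid."""
    step = xmax / n
    return max(phi(i * step) - i * step * y for i in range(n + 1))
```

For $y<k$ the maximizer is interior ($x_{\varphi}(y)=y^{1/(\gamma-1)}-(K-b)$), while for $y\geq k$ the grid maximum sits at $x=0$, matching the two branches of (3.6).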
From (3.6) and (2.3) we get
$$\displaystyle\psi^{\prime}(y)=\left\{\begin{array}[]{ll}-y^{\frac{1}{\gamma-1}%
}+(K-b)<-\widehat{x}<0,&\mbox{for }0<y<k,\\
0,&\mbox{for }y>k.\end{array}\right.$$
and
$$\displaystyle\psi^{\prime\prime}(y)=\left\{\begin{array}[]{ll}\frac{1}{1-%
\gamma}y^{\frac{1}{\gamma-1}-1}>0,&\mbox{for }0<y<k,\\
0,&\mbox{for }y>k.\end{array}\right.$$
It is obvious that
$$\varphi(x)=\min\limits_{y>0}(\psi(y)+xy).$$
From (3.5) we have
$$\displaystyle v_{y}(y,t)=V_{x}(I(y,t),t)I_{y}(y,t)-yI_{y}(y,t)-I(y,t)=-I(y,t),$$
(3.7)
$$\displaystyle v_{yy}(y,t)=-I_{y}(y,t)=\frac{-1}{V_{xx}(I(y,t),t)}\chi_{\{y\leq
V%
_{x}(0,t)\}},$$
(3.8)
$$\displaystyle v_{t}(y,t)=V_{t}(I(y,t),t)+V_{x}(I(y,t),t)I_{t}(y,t)-yI_{t}(y,t)%
=V_{t}(I(y,t),t).$$
Thus, for any $y>0$, set $x=I(y,t)$. If $y\leq V_{x}(0,t)$, then
$$\displaystyle-v_{t}-\frac{a^{2}}{2}y^{2}v_{yy}-(\beta-r)yv_{y}+\beta v=-V_{t}+%
\frac{a^{2}}{2}\frac{V^{2}_{x}}{V_{xx}}-rxV_{x}+\beta V\geq 0;$$
(3.9)
If $y>V_{x}(0,t)$, then $x=0$, $v(y,t)=V(0,t)=\frac{1}{\gamma}K^{\gamma}$, $v_{t}(y,t)=V_{t}(0,t)=0$,
$v_{y}(y,t)=v_{yy}(y,t)=0$, which implies
$$\displaystyle v(y,t)=\psi(y),\;-v_{t}-\frac{a^{2}}{2}y^{2}v_{yy}-(\beta-r)yv_{%
y}+\beta v=\frac{\beta}{\gamma}K^{\gamma}>0.$$
(3.10)
In the above case of $y>V_{x}(0,t)$, since $V_{x}(0,t)\geq k$ (which we will prove in Theorem 5.5),
we have $y>k$ as well as $\psi(y)=\frac{1}{\gamma}K^{\gamma}$. Combining (3.9) and (3.10) yields
$$\displaystyle-v_{t}-\frac{a^{2}}{2}y^{2}v_{yy}-(\beta-r)yv_{y}+\beta v\geq 0,%
\quad y>0,\;0<t<T.$$
(3.11)
By the definition of $v$ and $\psi$, we have
$$\displaystyle V(x,t)\geq\varphi(x),\;\forall\;x\geq 0$$
(3.12)
$$\displaystyle\Rightarrow$$
$$\displaystyle\max\limits_{x\geq 0}(V(x,t)-xy)\geq\max\limits_{x\geq 0}(\varphi%
(x)-xy),\;\forall\;y>0$$
$$\displaystyle\Rightarrow$$
$$\displaystyle v(y,t)\geq\psi(y),\;\forall\;y>0.$$
On the other hand,
$$\displaystyle\begin{array}[]{ll}&v(y,t)>\psi(y)\\
\Rightarrow&V(I(y,t),t)-I(y,t)y>\max\limits_{x\geq 0}(\varphi(x)-xy)\\
\Rightarrow&V(I(y,t),t)-I(y,t)y>\varphi(I(y,t))-I(y,t)y\\
\Rightarrow&V(I(y,t),t)>\varphi(I(y,t)),\\
\end{array}$$
and by the variational inequality in (2.11),
$$\displaystyle\begin{array}[]{ll}&V(I(y,t),t)>\varphi(I(y,t))\\
\Rightarrow&\Big{(}-V_{t}+\displaystyle\frac{a^{2}}{2}\frac{V^{2}_{x}}{V_{xx}}%
-rxV_{x}+\beta V\Big{)}(I(y,t),t)=0.\end{array}$$
Together with (3.9), this yields
$$\displaystyle v(y,t)>\psi(y)\Rightarrow$$
$$\displaystyle\Big{(}-v_{t}-\displaystyle\frac{a^{2}}{2}y^{2}v_{yy}-(\beta-r)yv%
_{y}+\beta v\Big{)}(y,t)=0.$$
(3.13)
Combining (3.11), (3.12) and (3.13), we obtain
$$\displaystyle\left\{\begin{array}[]{l}\min\{-v_{t}-\displaystyle\frac{a^{2}}{2%
}y^{2}v_{yy}-(\beta-r)yv_{y}+\beta v,v-\psi\}=0,\quad y>0,\;0<t<T,\\
v(y,T-)=\psi(y),\quad y>0.\end{array}\right.$$
(3.14)
Remark 3.1
The equation in (3.14) is degenerate on the boundary $y=0$. According to Fichera’s theorem (see Oleĭnik
and Radkević [22]), no boundary condition should be imposed at $y=0$.
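For intuition, (3.14) can also be discretized: march backward from $t=T$, take an explicit step of the linear operator, then project the result onto $\{v\geq\psi\}$. This crude sketch is not the penalty method used in the existence proof below; all parameter values (including $k=1/2$ for $\gamma=1/2$, $K=1$, $b=1$) are illustrative, and the left boundary treatment is an ad hoc extrapolation, since Remark 3.1 warns against imposing a condition at $y=0$.

```python
def solve_vi(a=0.3, beta=0.02, r=0.03, T=1.0,
             gamma=0.5, K=1.0, b=1.0, k=0.5,
             ylo=0.1, yhi=2.0, ny=96, nt=2000):
    """Explicit projected finite-difference sketch for the obstacle problem (3.14)."""
    dy = (yhi - ylo) / (ny - 1)
    dt = T / nt                       # small enough for explicit stability here
    ys = [ylo + i * dy for i in range(ny)]

    def psi(y):                       # the dual obstacle (3.6)
        if y < k:
            return (1 - gamma) / gamma * y ** (gamma / (gamma - 1)) + (K - b) * y
        return K ** gamma / gamma

    obstacle = [psi(y) for y in ys]
    v = obstacle[:]                   # terminal condition v(y, T-) = psi(y)
    for _ in range(nt):
        new = v[:]
        for i in range(1, ny - 1):
            y = ys[i]
            vyy = (v[i + 1] - 2.0 * v[i] + v[i - 1]) / dy ** 2
            vy = (v[i + 1] - v[i - 1]) / (2.0 * dy)
            new[i] = v[i] + dt * (0.5 * a * a * y * y * vyy
                                  + (beta - r) * y * vy - beta * v[i])
        new[0] = 2.0 * new[1] - new[2]    # crude linear extrapolation at y = ylo
        new[-1] = obstacle[-1]            # large y lies in the exercise region
        v = [max(u, p) for u, p in zip(new, obstacle)]  # obstacle projection
    return ys, obstacle, v
```

With this parameter set one has $\Psi(y)<0$ on $(0,k)$ in the notation of Section 4, so the computed $v$ should exceed $\psi$ there (continuation), while large $y$ stays in the exercise region.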
4 The solution and the free boundary of (3.14)
We now find the solution of (3.14) under a linear growth condition.
Theorem 4.1
Problem (3.14) has a unique solution $v(y,t)\in W^{2,1}_{p,\;loc}(Q_{y}\setminus B_{\varepsilon}(k,T))\cap C(\overline{Q_{y}})$ for any $p>2$ and any small $\varepsilon>0$.
Moreover
$$\displaystyle\psi(y)\leq v\leq A(e^{B(T-t)}y^{\frac{\gamma}{\gamma-1}}+1),$$
(4.1)
$$\displaystyle v_{t}\leq 0,$$
(4.2)
$$\displaystyle v_{y}\leq 0,$$
(4.3)
$$\displaystyle v_{yy}\geq 0.$$
(4.4)
where $Q_{y}=(0,+\infty)\times(0,T)$, $B_{\varepsilon}(k,T)$ is the disk with center $(k,T)$ and radius $\varepsilon$, $A=\max\{\frac{1-\gamma}{\gamma},\frac{1}{\gamma}K^{\gamma},|K-b|k\}$, and $B=\frac{a^{2}}{2}\frac{\gamma}{(\gamma-1)^{2}}+\frac{\beta-r\gamma}{\gamma-1}$.
Proof:
According to the existence and uniqueness results for $W^{2,1}_{p}$ solutions [18],
the existence of a solution of problem (3.14) can be proved by a standard penalty method; furthermore, by the Sobolev embedding theorem,
$$\displaystyle v\in C((Q_{y}\cup\{t=0,T\})),\quad v_{y}\in C((Q_{y}\cup\{t=0,T%
\})\setminus(k,T))$$
(4.5)
(see Friedman [10]).
Here, we omit the details. The first inequality in (4.1) follows directly from (3.14); we now prove the second inequality in (4.1).
Denote
$$\displaystyle w(y,t)=A(e^{B(T-t)}y^{\frac{\gamma}{\gamma-1}}+1).$$
Then
$$\displaystyle-w_{t}-\frac{a^{2}}{2}y^{2}w_{yy}-(\beta-r)yw_{y}+\beta w$$
$$\displaystyle=$$
$$\displaystyle Ae^{B(T-t)}y^{\frac{\gamma}{\gamma-1}}\Big{(}B-\frac{a^{2}}{2}%
\Big{(}\frac{\gamma}{\gamma-1}\Big{)}\Big{(}\frac{\gamma}{\gamma-1}-1\Big{)}-(%
\beta-r)\Big{(}\frac{\gamma}{\gamma-1}\Big{)}+\beta\Big{)}+\beta A$$
$$\displaystyle=$$
$$\displaystyle Ae^{B(T-t)}y^{\frac{\gamma}{\gamma-1}}\Big{(}B-\frac{a^{2}}{2}%
\frac{\gamma}{(\gamma-1)^{2}}-\frac{\beta-r\gamma}{\gamma-1}\Big{)}+\beta A$$
$$\displaystyle\geq$$
$$\displaystyle 0,$$
and $w(y,t)\geq w(y,T)\geq\psi(y)$.
Hence $w$ is a supersolution of (3.14), and the comparison principle for variational inequalities (see Friedman [10]) yields $v\leq w$.
Next we prove (4.2). Let $\widetilde{v}(y,t)=v(y,t-\delta)$ for small $\delta>0$, then $\widetilde{v}$ satisfies
$$\displaystyle\left\{\begin{array}[]{l}\min\{-\widetilde{v}_{t}-\frac{a^{2}}{2}%
y^{2}\widetilde{v}_{yy}-(\beta-r)y\widetilde{v}_{y}+\beta\widetilde{v},%
\widetilde{v}-\psi(y)\}=0,\quad y>0,\;\delta<t<T,\\
\widetilde{v}(y,T)\geq\psi(y),\quad y>0.\end{array}\right.$$
Hence, by the comparison principle, we have $\widetilde{v}\geq v$, i.e. $v_{t}\leq 0$.
Define
$$\displaystyle{\cal\varepsilon R}_{y}$$
$$\displaystyle=$$
$$\displaystyle\{(y,t)\in Q_{y}|v=\psi\},\quad\hbox{exercise region},$$
$$\displaystyle\quad{\cal CR}_{y}$$
$$\displaystyle=$$
$$\displaystyle\{(y,t)\in Q_{y}|v>\psi\},\quad\hbox{continuation region}.$$
Note that $k$ is the only discontinuity point of $\psi^{\prime}(y)$ and $\psi^{\prime\prime}(y)$.
We now claim that $(k,t)$ cannot be contained in ${\cal\varepsilon R}_{y}$ for any $t\in(0,T)$.
Otherwise, if $(k,t_{0})\in{\cal\varepsilon R}_{y}$ for some $t_{0}<T$, then $(k,t_{0})$ is
a minimum point of $v-\psi(y)$, so $v_{y}(k-,t_{0})\leq\psi^{\prime}(k-)<\psi^{\prime}(k+)\leq v_{y}(k+,t_{0})$, which implies that $v_{y}$ is discontinuous
at $(k,t_{0})$, contradicting (4.5).
We now prove (4.3) and (4.4). Recall that
$$\displaystyle\psi^{\prime}(y)<0,\quad\psi^{\prime\prime}(y)>0,\quad y\in(0,k),$$
$$\displaystyle\psi^{\prime}(y)=0,\quad\psi^{\prime\prime}(y)=0,\quad y\in(k,+\infty).$$
Since $v=\psi$ in ${\cal\varepsilon R}_{y}$, we have
$v_{y}=\psi^{\prime}\leq 0$ in ${\cal\varepsilon R}_{y}$.
Moreover, $v_{y}(y,T)=\psi^{\prime}\leq 0$.
Taking the derivative for the following equation
$$-v_{t}-\frac{a^{2}}{2}y^{2}v_{yy}-(\beta-r)yv_{y}+\beta v=0\quad\mbox{in }%
\quad{\cal CR}_{y}$$
with respect to $y$ leads to
$$\displaystyle-\partial_{t}v_{y}-\frac{a^{2}}{2}y^{2}\partial_{yy}v_{y}-(a^{2}+%
\beta-r)y\partial_{y}v_{y}+rv_{y}=0\quad\mbox{in }\quad{\cal CR}_{y}.$$
(4.6)
Note that $v_{y}=\psi^{\prime}\leq 0$ on $\partial({\cal CR}_{y})$, where $\partial({\cal CR}_{y})$ is the boundary
of ${\cal CR}_{y}$ in the interior of $Q_{y}$, using the maximum principle we obtain (4.3).
In addition, $v\geq\psi$, together with $v=\psi,\;v_{y}=\psi^{\prime}$ in ${\cal\varepsilon R}_{y}$ yields $v_{yy}=\psi^{\prime\prime}\geq 0$ in ${\cal\varepsilon R}_{y}$.
It is not hard to prove that
$$\lim\limits_{{\cal CR}_{y}\ni y\rightarrow\partial({\cal CR}_{y})}v_{yy}(y,t)\geq 0,$$
and that $v_{yy}(y,T)=\psi^{\prime\prime}\geq 0$.
Taking the derivative for equation (4.6) with respect to $y$, we obtain
$$-\partial_{t}v_{yy}-\frac{a^{2}}{2}y^{2}\partial_{yy}v_{yy}-2a^{2}y\partial_{y%
}v_{yy}+(r-a^{2})v_{yy}=0\quad\mbox{in }\quad{\cal CR}_{y}.$$
Using the maximum principle, we obtain
$$\displaystyle v_{yy}\geq 0\quad\hbox{in}\quad{\cal CR}_{y}.$$
(4.7)
$\Box$
Define free boundaries
$$\displaystyle h(t)=\inf\{y\in[0,k]|v(y,t)=\psi(y)\},\quad 0<t<T,$$
(4.8)
$$\displaystyle g(t)=\sup\{y\in[0,k]|v(y,t)=\psi(y)\},\quad 0<t<T,$$
(4.9)
$$\displaystyle f(t)=\inf\{y\in[k,+\infty)|v(y,t)=\psi(y)\},\quad 0<t<T,$$
(4.10)
Since $\partial_{t}(v(y,t)-\psi(y))=v_{t}\leq 0$, the functions $h(t)$ and $f(t)$ are decreasing in $t$ and $g(t)$ is increasing in $t$.
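Given a numerical solution of (3.14) on a grid, the three boundaries (4.8)-(4.10) can be read off at each time level from the coincidence set $\{v=\psi\}$. A minimal sketch (the grid layout and tolerance are assumptions, not part of the paper):

```python
def free_boundaries(ys, psi_vals, v_row, k, tol=1e-8):
    """Recover (h, g, f) of (4.8)-(4.10) at one time level.

    ys       : increasing grid of y values
    psi_vals : obstacle psi evaluated on ys
    v_row    : numerical v(., t) on ys
    Returns None for a component whose defining set is empty.
    """
    coincide = [abs(v - p) <= tol for v, p in zip(v_row, psi_vals)]
    left = [y for y, c in zip(ys, coincide) if c and y <= k]
    right = [y for y, c in zip(ys, coincide) if c and y >= k]
    h = min(left) if left else None     # h(t) = inf of coincidence set in [0, k]
    g = max(left) if left else None     # g(t) = sup of coincidence set in [0, k]
    f = min(right) if right else None   # f(t) = inf of coincidence set in [k, inf)
    return h, g, f
```

On a grid the inf/sup in (4.8)-(4.10) reduce to min/max over the marked nodes, which is all the helper does.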
Substituting the first expression of (3.6) into the equation in (3.14) yields
$$\displaystyle-\partial_{t}\psi-\frac{a^{2}}{2}y^{2}\partial_{yy}\psi-(\beta-r)%
y\partial_{y}\psi+\beta\psi$$
(4.11)
$$\displaystyle=$$
$$\displaystyle\frac{a^{2}}{2}\Big{(}\frac{\gamma}{\gamma-1}-1\Big{)}y^{\frac{%
\gamma}{\gamma-1}}-(\beta-r)y\Big{[}-y^{\frac{1}{\gamma-1}}+(K-b)\Big{]}+\beta%
\Big{[}\frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}}+(K-b)y\Big{]}$$
$$\displaystyle=$$
$$\displaystyle\Big{(}\frac{\beta-r\gamma}{\gamma}-\frac{a^{2}}{2}\frac{1}{1-%
\gamma}\Big{)}y^{\frac{\gamma}{\gamma-1}}+r(K-b)y,\quad y<k,$$
and note that
$$-\partial_{t}\psi-\frac{a^{2}}{2}y^{2}\partial_{yy}\psi-(\beta-r)y\partial_{y}%
\psi+\beta\psi=\frac{\beta}{\gamma}K^{\gamma}>0,\quad y>k.$$
Denote the right hand side of (4.11) by $\Psi(y)$. It is not hard to see that
$$\displaystyle{\cal\varepsilon R}_{y}\subset[\{\Psi(y)\geq 0,\;y<k\}\cup(k,+%
\infty)]\times(0,T).$$
(4.12)
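The sign of $\Psi$ drives the case classification below; a small helper for $\Psi(y)$ as given by (4.11), with purely illustrative parameter values in the usage:

```python
def Psi(y, a, beta, r, gamma, K, b):
    """Right-hand side Psi(y) of (4.11)."""
    c = (beta - r * gamma) / gamma - a * a / (2.0 * (1.0 - gamma))
    return c * y ** (gamma / (gamma - 1.0)) + r * (K - b) * y
```

For example, with $a=0.3$, $\beta=0.1$, $r=0.03$, $\gamma=1/2$, $K=b=1$ and $k=1/2$, one checks $\beta\geq\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$ and $\Psi(k)=0.16>0$, so these illustrative parameters fall under Case I below.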
Lemma 4.2
The set ${\cal\varepsilon R}_{y}$ is expressed as
$$\displaystyle{\cal\varepsilon R}_{y}=\{(y,t)\in Q_{y}|h(t)\leq y\leq g(t)\}%
\cup\{(y,t)\in Q_{y}|y\geq f(t)\}.$$
(4.13)
Proof:
By the definitions of $h(t),\;g(t)$ and $f(t)$, we get
$${\cal\varepsilon R}_{y}\subset\{(y,t)\in Q_{y}|h(t)\leq y\leq g(t)\}\cup\{(y,t%
)\in Q_{y}|y\geq f(t)\}.$$
Now, we prove
$$\displaystyle\Omega:=\{(y,t)\in Q_{y}|h(t)\leq y\leq g(t)\}\subset{\cal%
\varepsilon R}_{y}.$$
(4.14)
Since $\{(h(t),t),\;(g(t),t)\}\cap Q_{y}\subset{\cal\varepsilon R}_{y}\cap\{y<k\}%
\subset\{\Psi\geq 0\}$ and $\{\Psi\geq 0\}$ is a connected region, we have
$$\Omega\subset\{\Psi\geq 0\}.$$
Assume that (4.14) is false.
Since ${\cal CR}_{y}$ is an open set,
there exists an open subset $\cal N$ such that ${\cal N}\subset\Omega$ and $\partial_{p}{\cal N}\subset{\cal\varepsilon R}_{y}$,
where $\partial_{p}{\cal N}$ is the parabolic boundary of ${\cal N}$.
Thus,
$$\displaystyle\left\{\begin{array}[]{l}-v_{t}-\displaystyle\frac{a^{2}}{2}y^{2}%
v_{yy}-(\beta-r)yv_{y}+\beta v=0\quad\hbox{in}\;{\cal N},\\
-\psi_{t}-\displaystyle\frac{a^{2}}{2}y^{2}\psi_{yy}-(\beta-r)y\psi_{y}+\beta%
\psi\geq 0\quad\hbox{in}\;{\cal N},\\
v=\psi\quad\hbox{on}\quad\partial_{p}{\cal N}.\end{array}\right.$$
(4.15)
By the comparison principle, $v\leq\psi$ in ${\cal N}$; together with $v\geq\psi$, this gives $v=\psi$ in ${\cal N}$, so ${\cal N}\subset{\cal\varepsilon R}_{y}$ and hence ${\cal N}=\emptyset$.
A similar argument yields
$$\displaystyle\{(y,t)\in Q_{y}|y\geq f(t)\}\subset{\cal\varepsilon R}_{y}.$$
Therefore, the desired result (4.13) holds.
$\Box$
Thanks to Lemma 4.2, $h(t)$, $g(t)$ and $f(t)$ are three free boundaries of (3.14).
Theorem 4.3
The free boundaries
$h(t)$, $g(t)$, and $f(t)$ belong to $C^{\infty}(0,T)$ and admit the following classification.
Case I: $\beta\geq\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)\geq 0$.
$$h(t)\equiv 0\leq g(t)\leq g(T-)=k=f(T-)\leq f(t),$$
see Fig 4.1.
Case II: $\beta\geq\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)<0$.
If $\beta>\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$,
$$h(t)\equiv 0\leq g(t)\leq g(T-)=\Big{(}\frac{-r(K-b)}{\frac{\beta-r\gamma}{%
\gamma}-\frac{a^{2}}{2}\frac{1}{1-\gamma}}\Big{)}^{\gamma-1}<k=f(T-)\leq f(t),$$
see Fig 4.2.
If $\beta=\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$,
$${\cal\varepsilon R}_{y}\cap\Big{(}(0,k)\times(0,T)\Big{)}=\emptyset,\quad k=f(%
T-)\leq f(t),$$
see Fig 4.4.
Case III: $\beta<\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)>0$.
$$\Big{(}\frac{-r(K-b)}{\frac{\beta-r\gamma}{\gamma}-\frac{a^{2}}{2}\frac{1}{1-%
\gamma}}\Big{)}^{\gamma-1}=h(T-)\leq h(t)\leq g(t)\leq g(T-)=k=f(T-)\leq f(t),$$
see Fig 4.3.
Case IV: $\beta<\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)\leq 0$.
$${\cal\varepsilon R}_{y}\cap(0,k)\times(0,T)=\emptyset,\quad k=f(T-)\leq f(t),$$
see Fig 4.4.
[Fig 4.1. $\beta\geq\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)\geq 0$.]
[Fig 4.2. $\beta>\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)<0$.]
[Fig 4.3. $\beta<\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)>0$.]
[Fig 4.4. $\beta<\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)\leq 0$, or $\beta=\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)<0$.]
Proof:
By the method of [10], one can prove $h(t),\;g(t),\;f(t)\in C^{\infty}(0,T)$; we omit the details.
Here we only prove the results in Case II; the remaining cases are similar.
If $\beta>\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$ and $\Psi(k)<0$, then $K<b$. Denote $y_{T}=\Big{(}\frac{-r(K-b)}{\frac{\beta-r\gamma}{\gamma}-\frac{a^{2}}{2}\frac{1}{1-\gamma}}\Big{)}^{\gamma-1}$;
then $\Psi(k)<0$ implies $y_{T}<k$.
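For the reader's convenience, we note that $y_{T}$ is precisely the positive root of $\Psi$: writing $C:=\frac{\beta-r\gamma}{\gamma}-\frac{a^{2}}{2}\frac{1}{1-\gamma}$, which is positive in this case, and recalling that $K<b$ and $0<\gamma<1$ in this model,
$$\Psi(y)=y\Big{(}Cy^{\frac{1}{\gamma-1}}+r(K-b)\Big{)}=0,\;y>0\;\Longleftrightarrow\;y^{\frac{1}{\gamma-1}}=\frac{-r(K-b)}{C}\;\Longleftrightarrow\;y=\Big{(}\frac{-r(K-b)}{C}\Big{)}^{\gamma-1}=y_{T},$$
and since $y\mapsto y^{\frac{1}{\gamma-1}}$ is decreasing, $\Psi>0$ on $(0,y_{T})$ and $\Psi<0$ on $(y_{T},+\infty)$, i.e., $\{\Psi\geq 0\}=(0,y_{T}]$.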
By (4.12) and $\{\Psi\geq 0\}=(0,y_{T}]$,
$${\cal\varepsilon R}_{y}\subset\Big{(}(0,y_{T}]\cup(k,\infty)\Big{)}\times(0,T),$$
thus
$$0\leq h(t)\leq g(t)\leq y_{T}<k\leq f(t).$$
Now, we prove $h(t)\equiv 0$.
Set ${\cal N}:=\{(y,t)|0<y\leq h(t),\;0<t<T\}$. Arguing as in (4.15), we obtain $v\leq\psi$ in ${\cal N}$.
By the definition of $h(t)$, this forces ${\cal N}=\emptyset$, and hence $h(t)\equiv 0$.
Next, we prove $f(T-):=\lim\limits_{t\uparrow T}f(t)=k$.
Otherwise, if $f(T-)>k$, we would have the contradiction
$$0=-v_{t}-\frac{a^{2}}{2}y^{2}v_{yy}-(\beta-r)yv_{y}+\beta v=-v_{t}-\frac{a^{2}}{2}y^{2}\psi_{yy}-(\beta-r)y\psi_{y}+\beta\psi=-\partial_{t}v+\frac{\beta}{\gamma}K^{\gamma}>0,\quad k<y<f(T-),\;t=T.$$
So $f(T-)=k$.
The proof of $g(T-)=y_{T}$ is similar: if $g(T-)<y_{T}$, we would have the contradiction
$$0=-v_{t}-\frac{a^{2}}{2}y^{2}v_{yy}-(\beta-r)yv_{y}+\beta v=-v_{t}-\frac{a^{2}}{2}y^{2}\psi_{yy}-(\beta-r)y\psi_{y}+\beta\psi=-\partial_{t}v+\Psi(y)>0,\quad g(T-)<y<y_{T},\;t=T.$$
If $\beta=\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$ and $\Psi(k)<0$, then $K<b$ and $\Psi(y)<0$ for all $0<y<k$; thus $(0,k]\times(0,T)\subset{\cal CR}_{y}$, so $h(t)$ and $g(t)$ do not exist.
$\Box$
5 The solution and the free boundary of the original problem (2.11)
Lemma 5.1
$$\displaystyle v_{yy}>0,\quad 0<y<f(t),\quad 0<t<T.$$
(5.1)
Proof:
By the strong maximum principle,
$$v_{yy}>0\quad\hbox{in}\quad{\cal CR}_{y};$$
together with
$$v_{yy}=\psi^{\prime\prime}>0\quad\hbox{in}\quad{\cal\varepsilon R}_{y}\cap\Big{(}(0,k)\times(0,T)\Big{)},$$
this proves (5.1).
$\Box$
Lemma 5.2
$$\displaystyle\lim\limits_{y\rightarrow 0+}v_{y}(y,t)=-\infty,\quad 0<t<T.$$
(5.2)
$$\displaystyle\lim\limits_{y\rightarrow f(t)-}v_{y}(y,t)=0,\quad 0<t<T.$$
(5.3)
Proof:
For any $t\in(0,T)$, it is not hard to see that $\lim\limits_{y\rightarrow 0+}v(y,t)\geq\lim\limits_{y\rightarrow 0+}\psi(y)=+\infty$. Since $v_{yy}\geq 0$, for any fixed $y_{0}>0$,
$$v_{y}(y,t)\leq\frac{v(y_{0},t)-v(y,t)}{y_{0}-y}\rightarrow-\infty,\quad y\rightarrow 0+.$$
(5.3) holds because $v_{y}$ is continuous across the free boundary $y=f(t)$.
$\Box$
Thanks to Lemma 5.1 and Lemma 5.2, we can define the transformation
$$\displaystyle y=J(x,t)=\left\{\begin{array}[]{ll}(v_{y}(\cdot,t))^{-1}(-x),&\hbox{for}\quad x>0,\\
f(t),&\hbox{for}\quad x=0,\end{array}\right.\qquad 0<t<T;$$
then $J(x,t)\in C([0,+\infty)\times(0,T))$ and is decreasing in $x$.
Lemma 5.3
$$\displaystyle\lim\limits_{x\rightarrow 0+}J(x,t)=f(t),\quad 0<t<T,$$
(5.4)
$$\displaystyle\quad\lim\limits_{x\rightarrow+\infty}J(x,t)=0,\quad 0<t<T,$$
(5.5)
$$\displaystyle\lim\limits_{t\rightarrow T-}J(x,t)=\varphi^{\prime}(x),\quad x%
\geq 0.$$
(5.6)
Proof:
(5.4) and (5.5) are the results of Lemma 5.2. Now we prove (5.6).
The case of $x>\widehat{x}$.
Owing to the regularity of $v_{y}$ on $t=T$,
$$\lim\limits_{t\rightarrow T-}v_{y}(y,t)=\psi^{\prime}(y),\quad 0<y<k.$$
Notice that $\psi^{\prime}(y)$ maps onto $(-\infty,-\widehat{x})$ for $0<y<k$, and $\psi^{\prime\prime}(y)>0,\;0<y<k$, thus
$$\displaystyle\lim\limits_{t\rightarrow T-}J(x,t)=\lim\limits_{t\rightarrow T-}%
(v_{y}(\cdot,t))^{-1}(-x)=(\psi^{\prime}(\cdot))^{-1}(-x)=\varphi^{\prime}(x),%
\quad x>\widehat{x}.$$
(5.7)
The case of $0\leq x\leq\widehat{x}$.
Since $(v_{y}(\cdot,t))^{-1}(-x)$ is decreasing in $x$ for all $t\in(0,T)$,
$$\displaystyle(v_{y}(\cdot,t))^{-1}(-x)\leq(v_{y}(\cdot,t))^{-1}(0)=f(t).$$
(5.8)
If $y<k$, then $\lim\limits_{t\rightarrow T-}v_{y}(y,t)=\psi^{\prime}(y)<-\widehat{x}\leq-x$, so when $t$ is sufficiently close to $T$,
$$v_{y}(y,t)<-x,$$
thus
$$(v_{y}(\cdot,t))^{-1}(-x)>y,$$
hence
$$\liminf\limits_{t\rightarrow T-}(v_{y}(\cdot,t))^{-1}(-x)\geq y,$$
by the arbitrariness of $y<k$, we see that
$$\liminf\limits_{t\rightarrow T-}(v_{y}(\cdot,t))^{-1}(-x)\geq k.$$
Together with (5.8),
$$k\leq\liminf\limits_{t\rightarrow T-}(v_{y}(\cdot,t))^{-1}(-x)\leq\limsup%
\limits_{t\rightarrow T-}(v_{y}(\cdot,t))^{-1}(-x)\leq\lim\limits_{t%
\rightarrow T-}f(t)=k,$$
thus
$$\displaystyle\lim\limits_{t\rightarrow T-}J(x,t)=\lim\limits_{t\rightarrow T-}%
(v_{y}(\cdot,t))^{-1}(-x)=k=\varphi^{\prime}(x),\quad 0\leq x\leq\widehat{x}.$$
(5.9)
(5.6) follows from (5.7) and (5.9).
$\Box$
Now, we set
$$\displaystyle\widehat{V}(x,t)=\min\limits_{y>0}(v(y,t)+xy).$$
(5.10)
Theorem 5.4
$\widehat{V}$ is the strong solution of (2.11) and satisfies the following
$$\displaystyle\widehat{V},\;\widehat{V}_{x}\in C(Q_{x}).$$
(5.11)
Moreover,
$$\displaystyle\widehat{V}_{t}\leq 0,\quad\widehat{V}_{x}>0,\quad\widehat{V}_{xx%
}<0,\quad(x,t)\in Q_{x}.$$
(5.12)
$$\displaystyle\lim\limits_{x\rightarrow+\infty}\widehat{V}(x,t)=+\infty,\quad%
\lim\limits_{x\rightarrow+\infty}\widehat{V}_{x}(x,t)=0,\quad\forall t\in(0,T).$$
(5.13)
Proof:
From Lemma 5.1 and Lemma 5.2, it is easily seen that $J(x,t)\in\arg\min\limits_{y>0}(v(y,t)+xy)$ for all $(x,t)\in Q_{x}$, thus
$$\widehat{V}(x,t)=v(J(x,t),t)+xJ(x,t),\quad(x,t)\in Q_{x}.$$
In addition
$$\displaystyle\widehat{V}_{x}(x,t)=v_{y}(J(x,t),t)J_{x}(x,t)+xJ_{x}(x,t)+J(x,t)%
=J(x,t)\geq 0,$$
(5.14)
$$\displaystyle\widehat{V}_{xx}(x,t)=J_{x}(x,t)=\partial_{x}[(v_{y}(\cdot,t))^{-1}(-x)]=\frac{-1}{v_{yy}(J(x,t),t)}<0,$$
(5.15)
$$\displaystyle\widehat{V}_{t}(x,t)=v_{y}(J(x,t),t)J_{t}(x,t)+v_{t}(J(x,t),t)+xJ_{t}(x,t)=v_{t}(J(x,t),t)\leq 0.$$
(5.16)
Since $J(x,t)\in C(Q_{x})$, (5.11) is true.
Moreover
$$\displaystyle\lim\limits_{x\rightarrow+\infty}\widehat{V}(x,t)$$
$$\displaystyle=$$
$$\displaystyle\lim\limits_{x\rightarrow+\infty}[v(J(x,t),t)+xJ(x,t)]$$
$$\displaystyle\geq$$
$$\displaystyle\lim\limits_{x\rightarrow+\infty}v(J(x,t),t)=\lim\limits_{y%
\rightarrow 0+}v(y,t)=+\infty,\quad 0<t<T.$$
$$\lim\limits_{x\rightarrow+\infty}\widehat{V}_{x}(x,t)=\lim\limits_{x%
\rightarrow+\infty}J(x,t)=0,\quad 0<t<T.$$
Now we verify that $\widehat{V}$ is a strong solution of (2.11).
Firstly,
$$\lim\limits_{x\rightarrow 0+}\widehat{V}(x,t)=\lim\limits_{x\rightarrow 0+}[v(%
J(x,t),t)+xJ(x,t)]=v(f(t),t)=\frac{1}{\gamma}K^{\gamma},\quad t\in(0,T),$$
$$\displaystyle\lim\limits_{t\rightarrow T-}\widehat{V}(x,t)$$
$$\displaystyle=$$
$$\displaystyle\lim\limits_{t\rightarrow T-}[v(J(x,t),t)+xJ(x,t)]$$
$$\displaystyle=$$
$$\displaystyle v(\lim\limits_{t\rightarrow T-}J(x,t),T)+x\lim\limits_{t\rightarrow T-}J(x,t)$$
$$\displaystyle=$$
$$\displaystyle\psi(\varphi^{\prime}(x))+x\varphi^{\prime}(x)$$
$$\displaystyle=$$
$$\displaystyle\varphi(x),\quad x\geq 0,$$
so $\widehat{V}$ meets the boundary and terminal conditions in (2.11).
Secondly, due to (5.14), (5.15) and (5.16),
$$\displaystyle\Big{(}-\widehat{V}_{t}+\frac{a^{2}}{2}\frac{\widehat{V}^{2}_{x}}%
{\widehat{V}_{xx}}-rx\widehat{V}_{x}+\beta\widehat{V}\Big{)}(x,t)=\Big{(}-v_{t%
}-\frac{a^{2}}{2}y^{2}v_{yy}-(\beta-r)yv_{y}+\beta v\Big{)}(J(x,t),t)\geq 0.$$
(5.17)
On the other hand,
$$\displaystyle\begin{array}[]{l}\quad v(y,t)\geq\psi(y),\;\forall\;y>0\\
\Rightarrow\min\limits_{y>0}(v(y,t)+xy)\geq\min\limits_{y>0}(\psi(y)+xy),\;%
\forall\;x\geq 0\\
\Rightarrow\widehat{V}(x,t)\geq\varphi(x),\;\forall\;x\geq 0.\end{array}$$
Hence,
$$\min\Big{\{}-\widehat{V}_{t}+\frac{a^{2}}{2}\frac{\widehat{V}^{2}_{x}}{%
\widehat{V}_{xx}}-rx\widehat{V}_{x}+\beta\widehat{V},\widehat{V}-\varphi\Big{%
\}}\geq 0\quad\hbox{in}\quad Q_{x}.$$
Now, we prove that
$$\displaystyle\widehat{V}(x,t)>\varphi(x)\Rightarrow\Big{(}-\widehat{V}_{t}+%
\frac{a^{2}}{2}\frac{\widehat{V}^{2}_{x}}{\widehat{V}_{xx}}-rx\widehat{V}_{x}+%
\beta\widehat{V}\Big{)}(x,t)=0.$$
(5.18)
Before that we first claim
$$\displaystyle\widehat{V}(x,t)>\varphi(x)\Rightarrow v(J(x,t),t)>\psi(J(x,t)),$$
(5.19)
Let $y=J(x,t)$. If $v(y,t)=\psi(y)$ with $y\geq k$, then
$$\displaystyle\begin{array}[]{l}\quad v(y,t)=\psi(y)=\frac{1}{\gamma}K^{\gamma}%
\\
\Rightarrow v_{y}(y,t)=0\\
\Rightarrow x=0\\
\Rightarrow\widehat{V}(x,t)=\frac{1}{\gamma}K^{\gamma}=\varphi(x).\end{array}$$
If instead $v(y,t)=\psi(y)$ with $y<k$, then
$$\displaystyle\begin{array}[]{l}\quad v(y,t)=\psi(y)=\frac{1-\gamma}{\gamma}y^{%
\frac{\gamma}{\gamma-1}}+(K-b)y\\
\Rightarrow x=-v_{y}(y,t)=y^{\frac{1}{\gamma-1}}-(K-b)\\
\Rightarrow\widehat{V}(x,t)=v(y,t)+xy=\frac{1-\gamma}{\gamma}y^{\frac{\gamma}{%
\gamma-1}}+(K-b)y+(y^{\frac{1}{\gamma-1}}-(K-b))y=\varphi(x).\end{array}$$
Hence, (5.19) is true.
Combining (5.19) with the variational inequality in (3.14) and with (5.17) yields
$$\displaystyle\begin{array}[]{l}\quad\widehat{V}(x,t)>\varphi(x)\\
\Rightarrow v(J(x,t),t)>\psi(J(x,t))\\
\Rightarrow\Big{(}-v_{t}-\frac{a^{2}}{2}y^{2}v_{yy}-(\beta-r)yv_{y}+\beta v%
\Big{)}(J(x,t),t)=0\\
\Rightarrow\Big{(}-\widehat{V}_{t}+\frac{a^{2}}{2}\frac{\widehat{V}^{2}_{x}}{%
\widehat{V}_{xx}}-rx\widehat{V}_{x}+\beta\widehat{V}\Big{)}(x,t)=0.\end{array}$$
Therefore, $\widehat{V}(x,t)$ satisfies the variational inequality in (2.11).
So far, we have proved $\widehat{V}(x,t)$ is the strong solution of (2.11).
$\Box$
Now, we discuss the free boundary of (2.11). Define
$$\displaystyle{\cal\varepsilon R}_{x}$$
$$\displaystyle=$$
$$\displaystyle\{\widehat{V}=\varphi\},\quad\hbox{exercise region},$$
$$\displaystyle\quad{\cal CR}_{x}$$
$$\displaystyle=$$
$$\displaystyle\{\widehat{V}>\varphi\},\quad\hbox{continuation region}.$$
Define also
$$\displaystyle H(t)=\sup\{x\geq 0|\widehat{V}(x,t)=\varphi(x)\},\quad 0<t<T,$$
$$\displaystyle G(t)=\inf\{x\geq 0|\widehat{V}(x,t)=\varphi(x)\},\quad 0<t<T.$$
On the two free boundaries $y=h(t)$ and $y=g(t)$,
$$\displaystyle v(y,t)=\frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}}+(K-b)y,$$
$$\displaystyle v_{y}(y,t)=-y^{\frac{1}{\gamma-1}}+(K-b).$$
Note that
$$\displaystyle x=-v_{y}(y,t).$$
(5.20)
Then the corresponding two free boundaries of (2.11) are
$$\displaystyle H(t)=-v_{y}(h(t),t)=h(t)^{\frac{1}{\gamma-1}}-(K-b),$$
$$\displaystyle G(t)=-v_{y}(g(t),t)=g(t)^{\frac{1}{\gamma-1}}-(K-b).$$
Moreover
$$\displaystyle H^{\prime}(t)=\frac{1}{\gamma-1}h(t)^{\frac{1}{\gamma-1}-1}h^{%
\prime}(t)\geq 0,$$
$$\displaystyle G^{\prime}(t)=\frac{1}{\gamma-1}g(t)^{\frac{1}{\gamma-1}-1}g^{%
\prime}(t)\leq 0,$$
and
$$\displaystyle H(T-)=h(T-)^{\frac{1}{\gamma-1}}-(K-b),$$
$$\displaystyle G(T-)=g(T-)^{\frac{1}{\gamma-1}}-(K-b).$$
On the other hand, by (5.14) and (5.4),
$$\widehat{V}_{x}(0,t)=J(0,t)=(v_{y}(\cdot,t))^{-1}(0)=f(t).$$
All this leads up to
Theorem 5.5
The two free boundaries of (2.11) satisfy $H(t),\;G(t)\in C^{\infty}(0,T)$ and
$H^{\prime}(t)\geq 0,\;G^{\prime}(t)\leq 0$, $\widehat{V}_{x}(0,t)=f(t)$.
Moreover, they have the following classification.
Case I: $\beta\geq\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)\geq 0$.
$$H(t)\equiv+\infty,\quad k^{\frac{1}{\gamma-1}}-(K-b)=G(T-)\leq G(t),$$
i.e.
$$H(t)\equiv+\infty,\quad\widehat{x}=G(T-)\leq G(t),$$
see Fig 5.1.
Case II: $\beta\geq\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)<0$.
If $\beta>\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$,
$$H(t)\equiv+\infty,\quad y_{T}^{\frac{1}{\gamma-1}}-(K-b)=\Big{(}\frac{-r(K-b)}{\frac{\beta-r\gamma}{\gamma}-\frac{a^{2}}{2}\frac{1}{1-\gamma}}\Big{)}-(K-b)=G(T-)<G(t),$$
see Fig 5.2.
If $\beta=\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$,
$${\cal\varepsilon R}_{x}=\emptyset,$$
see Fig 5.4.
Case III: $\beta<\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)>0$.
$$k^{\frac{1}{\gamma-1}}-(K-b)=G(T-)\leq G(t)\leq H(t)\leq H(T-)=\Big{(}\frac{-r%
(K-b)}{\frac{\beta-r\gamma}{\gamma}-\frac{a^{2}}{2}\frac{1}{1-\gamma}}\Big{)}-%
(K-b),$$
see Fig 5.3.
Case IV: $\beta<\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)\leq 0$.
$${\cal\varepsilon R}_{x}=\emptyset,$$
see Fig 5.4.
[Fig 5.1. $\beta\geq\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)\geq 0$.]
[Fig 5.2. $\beta>\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)<0$.]
[Fig 5.3. $\beta<\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)>0$.]
[Fig 5.4. $\beta<\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)\leq 0$, or $\beta=\frac{a^{2}}{2}\frac{\gamma}{1-\gamma}+r\gamma$, $\Psi(k)<0$.]
6 Conclusions
In this paper we presented a new method to study the free boundaries when the exercise region is not connected (see (4.8)-(4.10) and Lemma 4.2),
so that we can shed light on the behavior of the free boundaries for a fully nonlinear variational inequality without any restrictions on the parameters (see Figures 5.1-5.4). The financial meaning is that if at time $t$ the investor's wealth $x$ is located in
${\cal CR}_{x}$, then he should continue to invest; and if his wealth $x$ is located in
${\cal\varepsilon R}_{x}$, then he should stop investing.
References
[1]
[2]
Bensoussan, A., and Lions, J.L. (1984):
Impulse Control and Quasi-Variational Inequalities. Gauthier-Villars.
[3]
Carpenter, J.N. (2000):
Does option compensation increase managerial risk appetite?
The Journal of Finance, Vol. 55, pp. 2311-2331.
[4]
Ceci, C. and Bassan, B. (2004):
Mixed optimal stopping and stochastic control problems with semicontinuous final reward for diffusion processes.
Stochastics and Stochastics Reports, Vol. 76, pp. 323-337.
[5]
Chang, M.H., Pang, T. and Yong, J. (2009):
Optimal stopping problem for stochastic differential equations with random coefficients.
SIAM Journal on Control and Optimization, Vol. 48, pp. 941-971.
[6]
Choi, K.J., Koo, H.K. and Kwak, D.Y. (2004):
Optimal stopping of active portfolio management.
Annals of Economics and Finance, Vol. 5, pp. 93-126.
[7]
Dayanik, S. and Karatzas, I. (2003):
On the optimal stopping problem for one-dimensional diffusions.
Stochastic Processes and their Applications, Vol. 107, pp. 173-212.
[8]
Elliott, R.J. and Kopp, P.E. (1999):
Mathematics of Financial Markets. Springer-Verlag, New York.
[9]
Fleming, W. and Soner, H. (2006):
Controlled Markov Processes and Viscosity Solutions, 2nd edition. Springer-Verlag, New York.
[10]
Friedman, A. (1975):
Parabolic variational inequalities in one space dimension and smoothness of the free boundary.
Journal of Functional Analysis, Vol. 18, pp. 151-176.
[11]
Friedman, A. (1982):
Variational Principles and Free-Boundary Problems.
Wiley, New York.
[12]
Henderson, V. (2007):
Valuing the option to invest in an incomplete market.
Mathematics and Financial Economics, Vol. 1, pp. 103-128.
[13]
Henderson, V. and Hobson, D. (2008):
An explicit solution for an optimal stopping/optimal control problem which models an asset sale.
The Annals of Applied Probability, Vol. 18, pp. 1681-1705.
[14]
Karatzas I. and Kou S. G. (1998):
Hedging American contingent claims with constrained portfolios.
Finance and Stochastics, Vol. 2, pp. 215-258.
[15]
Karatzas I. and Ocone D. (2002):
A leavable bounded-velocity stochastic control problem.
Stochastic Processes and their Applications, Vol. 99, pp. 31-51.
[16]
Karatzas I. and Sudderth W. D. (1999):
Control and stopping of a diffusion process on an interval.
The Annals of Applied Probability, Vol. 9, pp. 188-196.
[17]
Karatzas I. and Wang H. (2000):
Utility maximization with discretionary stopping.
SIAM Journal on Control and Optimization, Vol. 39, pp. 306-329.
[18]
Ladyženskaja O.A., Solonnikov V.A. and Ural’ceva N.N. (1967):
Linear and Quasilinear Equations of Parabolic Type,
Translated from the Russian by S. Smith. Translations of Mathematical
Monographs, Vol. 23. American Mathematical Society, Providence, R.I., 1967.
[19]
Li, X. and Wu, Z.Y. (2008):
Reputation entrenchment or risk minimization? Early stop and investor-manager agency conflict in fund management,
Journal of Risk Finance, Vol. 9, pp. 125-150.
[20]
Li, X. and Wu, Z.Y. (2009):
Corporate risk management and investment decisions,
Journal of Risk Finance, Vol. 10, pp. 155-168.
[21]
Li, X. and Zhou, X.Y. (2006):
Continuous-time mean-variance efficiency: The 80% rule,
The Annals of Applied Probability, Vol. 16, pp. 1751-1763.
[22]
Oleinik, O.A. and Radkevič, E.V. (1973):
Second Order Equations with Nonnegative Characteristic Form, American Mathematical Society.
Rhode Island and Plenum Press, New York.
[23]
Peskir, G. and Shiryaev, A. (2006):
Optimal Stopping and Free-Boundary Problems, 2nd edition. Birkhäuser Verlag, Berlin.
[24]
Pham, H. (2009):
Continuous-time Stochastic Control and Optimization with Financial Applications.
Springer-Verlag, Berlin.
[25]
Samuelson, P. A. (1965):
Rational theory of warrant pricing. With an appendix
by H. P. McKean, A free boundary problem for the heat equation arising from
a problem in mathematical economics.
Industrial Management Review, Vol. 6, pp. 13-31.
[26]
Shiryaev, A., Xu, Z.Q. and Zhou, X.Y. (2008):
Thou shalt buy and hold.
Quantitative Finance, Vol. 8, pp. 765-776.
[27]
Yong, J. and Zhou, X.Y. (1999):
Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer-Verlag, New York. |
Street Sense: Learning from Google Street View111Data and scripts behind the analysis presented here can be downloaded from https://github.com/geosensing/streetsense.
Suriyan Laohaprapanon
Suriyan can be reached at: [email protected]
Kimberly Ortleb
Kimberly can be reached at: [email protected]
Gaurav Sood
Gaurav can be reached at: [email protected]
Abstract
How good are the public services and the public infrastructure? Does their quality vary by income? These are vital questions—they shed light on how well the government is doing its job, the consequences of disparities in local funding, etc. But there is little good data on many of these questions. We fill this gap by describing a scalable method of getting data on one crucial piece of public infrastructure: roads. We assess the quality of roads and sidewalks by exploiting data from Google Street View. We randomly sample locations on major roads, query Google Street View images for those locations and code the images using Amazon’s Mechanical Turk. We apply this method to assess the quality of roads in Bangkok, Jakarta, Lagos, and Wayne County, Michigan. Jakarta’s roads have nearly four times as many potholes as the roads of any other city. Surprisingly, the proportion of road segments with potholes in Bangkok, Lagos, and Wayne is about the same, between .06 and .07. Using the data, we also estimate the relation between the condition of the roads and local income in Wayne, MI. We find that roads in more affluent census tracts have somewhat fewer potholes.
The poorer the quality of the public infrastructure and public services, generally, the worse the quality of life. For instance, potholed roads mean that vehicles can’t go as fast and the ride is bumpier. If sidewalks aren’t paved, the physically disabled have a tough time getting anywhere. If garbage isn’t picked up regularly, foul smells and unsightliness are part of life, and the risk of disease is greater.
As these examples convey, the quality of public infrastructure and public services matters immensely. It sheds light on the quality of life, and on the resources and functioning of the government. So how good is the public infrastructure? And how good are the public services? More often than not, we have no good answer to these questions.
In this paper, we introduce a method to answer questions about the quality of one important piece of public infrastructure: roads. We capitalize on Google Street View to learn about the condition of the roads. We randomly sample locations on the roads, get Google Street View images for those locations, and crowdsource the coding of the images. To illustrate the method’s utility, we apply the method to learn about the condition of roads in Wayne (Michigan), Bangkok, Lagos, and Jakarta, and to assess the association between local income and the condition of the roads in Wayne. We also discuss ways this labeled data can be augmented and used to build automated systems to answer these questions at scale.
Learning From Google Street View
Since 2007, Google has been working on regularly taking panoramic images of all the streets in the world. In the West, Google’s efforts have been a success: Google’s specially designed vehicles have traversed an overwhelming majority of the streets.222See https://en.wikipedia.org/wiki/Coverage_of_Google_Street_View In the third-world, however, the coverage is patchy. For instance, as we show below, just about 24.6% of Dhaka’s streets are covered by Google Street View.333Some of Google’s estimates of its coverage are either wrong or have become outdated as the road network continues to grow. But Google’s coverage of some other big third-world cities isn’t too shabby. For instance, it covers 99.9% of Bangkok’s streets and 87.2% of Lagos’ streets. In all, the coverage is good enough, especially in the West, that people can build a scalable measurement infrastructure on top of it.
But patchy coverage is not the only problem with Google Street View data. The other is that the data are not always current. A large chunk of the data is at least a few years old. But somewhat older data has its value, especially because we expect Google to map those areas again in the future. The data aren’t perfect but they are rich and valuable.
But how do we efficiently capitalize on Google Street View data? We could download all the data for a city. But doing so is expensive. And it may not even be useful. Depending on the question, a large random sample can fill in nicely for a census. For learning the condition of the roads, that is precisely the case.
To efficiently learn about the condition of the streets, sidewalks, and such, from Google Street View data, we devise a new workflow. We start by downloading data on the kinds of roads we are interested in from Open Street Map (OSM). We then chunk the roads into half-kilometer segments and randomly sample from the segments. (The open source Python package geo-sampling (Laohaprapanon and Sood 2017) implements this workflow.) We then take the starting latitude and longitude of the sampled segments and query the Google Street View API.
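As a rough illustration of the chunk-and-sample step (this is our own simplified sketch, not the geo-sampling package itself; the function names and the great-circle chunking below are our simplifications):

```python
import math
import random

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def chunk_road(points, seg_km=0.5):
    """Split a road polyline (list of (lat, lon)) into stretches of
    roughly seg_km, returning the starting point of each stretch."""
    starts, acc = [points[0]], 0.0
    for p, q in zip(points, points[1:]):
        acc += haversine_km(p, q)
        if acc >= seg_km:
            starts.append(q)
            acc = 0.0
    return starts

def sample_segments(roads, n, seed=0):
    """Pool segment start points from all roads and draw n at random."""
    pool = [pt for road in roads for pt in chunk_road(road)]
    rng = random.Random(seed)
    return rng.sample(pool, min(n, len(pool)))
```

The sampled start points are then what gets sent to the Street View API.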
Application
To illustrate the utility of the method, we apply it to learn the condition of the roads, the condition of the sidewalks, and the presence of litter on the streets in four prominent third-world cities and one poor American county.
To learn the condition of roads in Bangkok, Dhaka, Jakarta, Lagos, and Wayne, MI, in the latter half of 2017, we downloaded data on all the streets from OSM. We feared that in many of these cities, Google Street View’s coverage of neighborhood roads would be patchy. So we decided to focus on primary, secondary, tertiary, and trunk roads. We used the geo-sampling package to take a random sample of primary, secondary, tertiary, and trunk road segments for each location (see Figures 1, 2, 3). (Figures SI 2.1, SI 2.2, SI 2.3, SI 2.4 plot the starting longitude and latitude without the surrounding detail of the sampled segments of Bangkok, Jakarta, Lagos, and Wayne, MI respectively.) For Bangkok, Dhaka, Jakarta, and Lagos, we drew a sample of 1,000 segments each. For Wayne, MI, we drew a sample of 5,000 segments. We drew a larger sample for Wayne, MI because we wanted to estimate the relationship between local income and road conditions there. (We chose an American county to estimate the relationship between local income and road conditions because data on local income is readily available for the US.)
Next, we used the Google Street View API to download images at the starting point of each sampled road segment. Sometimes the Google API came back empty. We take the proportion of successful queries as an estimate of Google Street View coverage of the primary, secondary, tertiary, and trunk roads in the respective city. In Dhaka, for instance, just about 24.6% of queries were successful. (Figure SI 1.1 plots the sampled locations.) Given the low coverage, we dropped Dhaka. In all, we have images of 978 locations for Bangkok, 872 for Jakarta, 999 for Lagos, and 4,828 for Wayne. Each photo captures a small segment of the road. (All the photos are available on Harvard Dataverse.)
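The coverage check can be mimicked with the Street View Static API's metadata endpoint, whose JSON response has a "status" field that is "OK" where imagery exists; the helper names here are ours, and we only sketch request construction and tallying, not the network call:

```python
from urllib.parse import urlencode

METADATA_URL = "https://maps.googleapis.com/maps/api/streetview/metadata"

def metadata_request(lat, lng, key):
    """Build the metadata query URL for one sampled location."""
    return METADATA_URL + "?" + urlencode({"location": f"{lat},{lng}",
                                           "key": key})

def coverage(statuses):
    """Share of sampled locations with imagery, given the 'status'
    field of each metadata response (e.g. 'OK' or 'ZERO_RESULTS')."""
    return sum(s == "OK" for s in statuses) / len(statuses)
```

Applied to the sampled Dhaka locations, a coverage figure like the 24.6% above would simply be the share of "OK" statuses.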
Next, we recruited workers on Amazon’s Mechanical Turk (MTurk) to code the images for the condition of the roads. To ensure quality, we only recruited ‘master’ workers. We asked them if the segment of the road in the image had any 1) cracks, and 2) potholes. We also asked them, "if there are any road markings on the road, are they clear?" Lastly, we asked them if there was any litter and whether the sidewalks were paved. The final survey for Bangkok, Jakarta, and Wayne, MI was the same (see SI 3.1).444We initially got Jakarta’s images coded using alternate instrumentation (see SI 3.3). But we were concerned that this would lead to incommensurability. So we did another round of data collection with the same instrument. Lagos’ survey differed in very minor ways from Bangkok, Jakarta, and Wayne’s (see SI 3.2). We paid MTurkers 5 cents for answering the short survey for each image. To ensure quality, we also checked a few images at random to see if the coding was reasonable. We found one instance where one worker’s judgments seemed really off and decided to reject those HITs.
Results
Lest readers miss an obvious point, before we present the results, we would like to draw their attention to it. Differences in the quality of roads across cities do not by default capture the extent of the road network. The extent of the road network is easy to compute and regularly cited. Our contribution is the efficient measurement of the quality of roads and sidewalks, and of litter on the streets.
The proportion of road segments with potholes in Jakarta is an astonishing .23. The commensurate number for Bangkok, Lagos, and Wayne is between .06 and .07. But what does that mean? As we mentioned above, each image captures a small segment of the street. If we assume that a photo captures .5km, the expected number of potholed segments on a 10 km journey in Jakarta would be 4.6. That would make for a somewhat rough ride.
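The back-of-the-envelope conversion from a segment-level proportion to an expected count per trip can be written out explicitly; this helper is ours, under the stated assumption that each image stands in for one half-kilometer stretch:

```python
def expected_potholes(p_segment, trip_km, seg_km=0.5):
    """Expected number of potholed stretches crossed on a trip of
    trip_km, if a share p_segment of seg_km stretches has a pothole."""
    return p_segment * (trip_km / seg_km)
```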
When it comes to cracks in the road, Wayne takes the top spot—the proportion of segments in Wayne with cracks is .62 followed by .44 for Jakarta and .20 and .24 for Bangkok and Lagos respectively. The high proportion is not particularly noteworthy for Wayne given its latitude, but it is noteworthy for Jakarta.
Jakarta is also the dirtiest of the 4 cities with .21 of the segments containing litter. Lagos comes second with .15 of the segments with litter. Lagos also takes the bottom spot for paved sidewalks—just .30 of the segments have a paved sidewalk.
Given that there are differences across cities in the proportion of trunk, primary, secondary, and tertiary roads in the road network, we checked if cross-city comparisons are mostly capturing differences in road types rather than differences in conditions within each type of road. To examine this, we regressed the appropriate variable (whether or not there is a pothole, a crack) on the type of the road and city. Compared to tertiary roads, potholes are more common on primary roads (Diff. = .03), secondary roads (Diff. = .01), and trunk roads (Diff. = .05). But adjusting for the type of road doesn’t change the across-city estimates much. For instance, the difference in the proportion of segments with potholes between Wayne and Jakarta is still .16.
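One simple way to adjust city comparisons for road-type mix is direct standardization over the pooled road-type distribution. This is a stand-in for the regression adjustment described above, not the authors' exact model; the record layout is our assumption:

```python
from collections import Counter, defaultdict

def adjusted_rate(records, city):
    """Pothole rate for `city`, standardized to the pooled distribution
    of road types. records: iterable of (city, road_type, pothole 0/1).
    Road types absent from `city` contribute nothing."""
    weights = Counter(t for _, t, _ in records)
    total = sum(weights.values())
    by_type = defaultdict(list)
    for c, t, p in records:
        if c == city:
            by_type[t].append(p)
    return sum((weights[t] / total) * (sum(v) / len(v))
               for t, v in by_type.items())
```

With a common set of road-type weights, differences in adjusted rates across cities reflect within-type conditions rather than network composition.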
Moving to cracks in the road, compared to tertiary roads, primary, secondary, and trunk roads have fewer cracks with differences being -.05, -.03, and -.09 respectively. Like with potholes, adjusting for the kind of roads doesn’t seem to make much of a difference for inferences from raw data for cross-city comparison.
Next, we analyzed the relationship between the condition of the roads and local income. To do that, we used the AskGeo API to get information on the per capita income of the census tract in which the lat/long lay. And we regressed whether or not a segment had a crack (or a pothole) on income split into quintiles.
Before we present the results, a caveat. Given that we expect the relationship between road quality and local income to be strongest for neighborhood roads, we expect our subsetting on primary, secondary, tertiary, and trunk roads to lead to smaller coefficients.
Compared to road segments in tracts with per capita income under 12k, the proportion of segments with potholes was .01 lower in tracts with income between 12k and 17k, .02 lower in tracts with income between 17k and 23k, .03 lower in tracts with income between 23k and 29k, and .02 lower in tracts with income between 29k and 83k. The relationship between local income and the proportion of segments with cracks was more uneven: the highest quintile had the fewest cracks, but roads in the second and third income quintiles had roughly the same number of cracks.
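The quintile split can be sketched as follows (pure Python on hypothetical `(income, has_pothole)` pairs; our actual estimates come from the regression described above, not from this toy code):

```python
def income_quintile(incomes, value):
    """Quintile index (0-4) of value within the empirical income distribution."""
    cuts = sorted(incomes)
    n = len(cuts)
    # empirical boundaries at the 20th, 40th, 60th, and 80th percentiles
    bounds = [cuts[int(n * q) - 1] for q in (0.2, 0.4, 0.6, 0.8)]
    for i, b in enumerate(bounds):
        if value <= b:
            return i
    return 4

def pothole_rate_by_quintile(segments):
    """segments: list of (tract_income, has_pothole) pairs.
    Returns the proportion of segments with potholes in each income quintile."""
    incomes = [inc for inc, _ in segments]
    tallies = {q: [0, 0] for q in range(5)}   # quintile -> [potholes, segments]
    for inc, has_pothole in segments:
        q = income_quintile(incomes, inc)
        tallies[q][0] += int(has_pothole)
        tallies[q][1] += 1
    return {q: (hit / tot if tot else 0.0) for q, (hit, tot) in tallies.items()}
```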
Discussion
What is the condition of the streets? Are the streets paved? Do the streets have proper traffic signs and road markings? Is there litter on the streets? What proportion of vehicles on the streets is two-wheeled? And what proportion is man-powered, e.g., rickshaws? These are some of the many questions we can answer with Google Street View. In this paper, we provide a scalable way to answer such questions: we pair Google Street View with an open-source Python package for randomly sampling locations on the streets, and with crowdsourcing, to learn a host of compelling facts.
The method that we describe here can be easily extended to automate the production of answers. Given that we are technically building a large labeled dataset, an obvious next step is to build a supervised machine learning infrastructure on top of it. Such an infrastructure can then provide automated estimates on many of these questions, along with useful caveats around coverage.
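The key primitive behind sampling street locations is drawing a point uniformly by length along a road polyline. A sketch under simplifying assumptions (planar toy coordinates rather than lat/longs; the function names are ours, not the geo_sampling API):

```python
import bisect
import math
import random

def cumulative_lengths(polyline):
    """Running arc length at each vertex of a polyline of (x, y) points."""
    lens = [0.0]
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        lens.append(lens[-1] + math.hypot(x1 - x0, y1 - y0))
    return lens

def point_at_distance(polyline, d):
    """Point at arc length d along the polyline (d clamped to the segment range)."""
    lens = cumulative_lengths(polyline)
    i = max(0, min(bisect.bisect_right(lens, d) - 1, len(polyline) - 2))
    seg = lens[i + 1] - lens[i]
    t = (d - lens[i]) / seg if seg else 0.0
    (x0, y0), (x1, y1) = polyline[i], polyline[i + 1]
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

def sample_point(polyline, rng=random):
    """Draw a location uniformly by length along the polyline."""
    total = cumulative_lengths(polyline)[-1]
    return point_at_distance(polyline, rng.uniform(0.0, total))
```

Sampling by cumulative length (rather than by vertex) avoids over-representing densely digitized stretches of road.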
References
Laohaprapanon, Suriyan, and Gaurav Sood. 2017. “geosampling: Sampling Locations on the Streets.” https://github.com/geosensing/geo_sampling.
Appendix SI 1 Sampled Locations in Dhaka
Appendix SI 2 Plots of Sampled Locations
Appendix SI 3 Mturk Surveys
IFT–P.027/98
hep-ph/9805329
Limits on a Strong Electroweak Sector from
$e^{+}e^{-}\rightarrow\gamma\gamma+E\!\!\!/$ at LEP2
Alfonso R. [email protected] and
Rogerio [email protected]
Instituto de Física Teórica - Universidade Estadual Paulista
Rua Pamplona, 145 - 01405–900 São Paulo - SP, Brazil
We study the process $e^{+}e^{-}\rightarrow\gamma\gamma\nu\bar{\nu}$ in the
context
of a strong electroweak symmetry breaking model, which can be a source
of events with two photons and missing energy at LEP2.
We investigate bounds on the
model assuming that no deviation is observed from the Standard Model
within a given experimental error.
Recently there has been a great deal of interest in events with two photons
plus missing energy at LEP2 because they would be an interesting signature of
weak scale supersymmetry.
This signal arises from either $e^{+}e^{-}\rightarrow\tilde{N}_{2}(\tilde{N}_{2}\rightarrow\tilde{N}_{1}\gamma)\,\tilde{N}_{2}(\tilde{N}_{2}\rightarrow\tilde{N}_{1}\gamma)$ in models with gravity-mediated SUSY breaking, where the neutralino $\tilde{N}_{1}$ is the lightest superpartner (LSP), or $e^{+}e^{-}\rightarrow\tilde{N}_{1}(\tilde{N}_{1}\rightarrow\tilde{G}\gamma)\,\tilde{N}_{1}(\tilde{N}_{1}\rightarrow\tilde{G}\gamma)$ in gauge-mediated supersymmetry breaking, where the gravitino $\tilde{G}$ is the LSP [1].
Searches for these events have been performed at LEP2 [2] and the
results are consistent with the Standard Model background once initial state
radiation is taken into account [3].
Supersymmetric models are, for many reasons, considered the favorite candidates to extend the extremely successful Standard Model to higher energies. Among these reasons one could mention: gauge coupling unification, the existence of a natural candidate for the dark matter of the Universe, a solution to the naturalness problem, and the fact that the theory is weakly coupled so that perturbation theory can be used.
However, there is a logical possibility that the Standard Model is an effective
theory, being the low energy limit of a strongly coupled more
fundamental theory, in the same manner that the non-linear $\sigma$
model describes QCD at low energies, in the non-perturbative regime
where pions are the relevant degrees of freedom [4]. Only experiments
will tell us which way Nature chose.
In this letter we show that models of strong electroweak symmetry breaking
could also be a source of events with two photons and missing energy at
LEP2. We obtain constraints on the parameter space of one such model by requiring that the deviations it induces be smaller than the experimental accuracy of the measurements. These constraints are complementary to the ones resulting from precision measurements at LEP.
Here we will work in the context of the BESS
(Breaking Electroweak Symmetry Strongly)[5] model.
It can be viewed as an effective lagrangian description of the electroweak symmetry breaking due to a hypothetical strongly interacting sector at the TeV scale.
The model is based on the group
$G^{\prime}=\left(SU(2)_{L}\otimes SU(2)_{R}\right)_{\mbox{global}}\otimes\left(SU(2)_{V}\right)_{\mbox{local}}$. Three new vector bosons
are introduced through the so-called hidden symmetry $SU(2)_{V}$.
The group $G^{\prime}$ breaks down spontaneously to its diagonal subgroup of
$SU(2)$ giving
rise to six Goldstone bosons. Three of them are absorbed by the new vector
bosons.
The remaining three Goldstone bosons give masses to the usual $W^{\pm}$ and
$Z^{0}$
bosons when the symmetry $SU(2)_{L}\otimes U(1)_{Y}\subset SU(2)_{L}\otimes SU(2)_{R}$ is gauged.
The bosonic part of the lagrangian
takes the form:
$${\cal L}=-\frac{v^{2}}{4}\left[Tr(\tilde{W}-Y)^{2}+\alpha\,Tr(\tilde{W}+Y-2\tilde{V})^{2}\right]+{\cal L}^{kin}(Y,\tilde{W},\tilde{V})$$
(1)
where $Y,\tilde{W},\tilde{V}$ are the gauge fields associated to
$U(1)_{Y}$,
$SU(2)_{L}$ and $SU(2)_{V}$ respectively and $\alpha$ is an arbitrary
parameter.
A direct coupling to the fermionic sector can be introduced
through the lagrangian:
$$\begin{array}[]{ccl}{\cal L}_{f}&=&\bar{\psi}_{L}i\gamma^{\mu}\left(\partial_{\mu}+\frac{i}{2(1+b)}g\tilde{W}_{\mu}^{a}\tau^{a}+\frac{ib}{4(1+b)}g^{\prime\prime}\tilde{V}_{\mu}^{a}\tau^{a}+\frac{i}{2}g^{\prime}y\tilde{Y}_{\mu}\right)\psi_{L}+\\
&&\bar{\psi}_{R}i\gamma^{\mu}\left(\partial_{\mu}+\frac{i}{2}g^{\prime}y\tilde{Y}_{\mu}\right)\psi_{R}\end{array}$$
(2)
where $b$ is a free parameter.
These vector bosons are not the physical ones because there are
mixing terms in the lagrangian and the physical gauge bosons are
obtained by diagonalizing the mass matrix in the neutral and charged
sectors.
After the diagonalization, only the couplings of the
physical new vector bosons $V$ to the fermions depend on the $b$
parameter.
The physical fields ($A$,$W$,$Z$ and $V$)
are linear combinations of
$Y,\tilde{W}$ and $\tilde{V}$, and therefore
the physical $V$ bosons also acquire an indirect coupling to fermions.
The Standard Model is recovered from the BESS model in
the limit $g^{\prime\prime}\rightarrow\infty$ and $b\rightarrow 0$,
where $g^{\prime\prime}$ is the new coupling constant
of $SU(2)_{V}$.
The model described here is minimal in the sense that only vector resonances
are introduced.
Many generalizations of this model have been proposed, for example
models that introduce axial-vector as well as vector particles [6].
Recently, a model with vector, axial-vector and scalar resonances has
also been studied [7].
Bounds on this model have been obtained from
precision measurements at LEP1 [8]. For a recent review
on results of this model and its extensions see ref. [9].
The minimal BESS model has three independent free parameters, which we choose to be $M_{V}$ (given by $M^{2}_{V}=\alpha\frac{v^{2}}{4}g^{\prime\prime 2}$ in the limit $g^{\prime\prime}\rightarrow\infty$), $g^{\prime\prime}$, and $b$.
Our calculations show that the results have little sensitivity to $M_{V}$ as long as $M_{V}\gg\sqrt{s}$, so we use $M_{V}=400$ GeV in the following.
The Feynman rules for this model are similar to the Standard Model ones with modified coupling constants (which can be found in references [9] and [10]).
We included the BESS model particles and couplings into the package
COMPHEP [11] and we calculated the cross section for the process
$e^{+}e^{-}\rightarrow\gamma\gamma\nu\bar{\nu}$ at $\sqrt{s}=194$ GeV,
summing over neutrino species,
in the Standard Model and
in the BESS model for different values of $g/g^{\prime\prime}$ and $b$.
We adopted the following “loose” cuts in order to maximize the number of
detected events [12]:
$$|\cos(\theta_{\gamma})|<0.7$$
(3)
$$E_{\gamma}>1.75\ \mbox{GeV}$$
(4)
where $\theta_{\gamma}$ is the angle between the photon
and the beam, and $E_{\gamma}$ is the energy of the photon.
We define the quantity
$$\delta\sigma=\frac{\sigma_{BESS}-\sigma_{SM}}{\sigma_{SM}}$$
(5)
where $\sigma_{BESS}$ and $\sigma_{SM}$ are the total cross section
predicted by the BESS model and the Standard Model respectively.
The quantity $\delta\sigma$ measures
the relative deviation from the Standard Model prediction.
It should be largely insensitive to initial state radiation corrections
and we use $\delta\sigma$ to obtain our results.
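In code, the deviation of Eq. (5) and the resulting exclusion criterion amount to the following sketch (the cross-section values in the example are placeholders, not COMPHEP output):

```python
def delta_sigma(sigma_bess, sigma_sm):
    """Relative deviation of the BESS cross section from the Standard Model one,
    as in Eq. (5): (sigma_BESS - sigma_SM) / sigma_SM."""
    return (sigma_bess - sigma_sm) / sigma_sm

def excluded(sigma_bess, sigma_sm, accuracy):
    """A parameter point is excluded when the predicted relative deviation
    exceeds the experimental accuracy of the cross section measurement."""
    return abs(delta_sigma(sigma_bess, sigma_sm)) > accuracy
```

Scanning `excluded` over a grid of $(g/g^{\prime\prime},b)$ values, with the corresponding predicted cross sections, traces out exclusion contours like those in figure 1.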
In order to obtain bounds on the model, we require that
no deviation is observed from the Standard Model prediction
within the experimental error.
Due to runs with small luminosity at different energies, the number of events collected so far is very limited, and the current experimental errors on the cross section measurement range from $40\%$ to $100\%$ [12].
Expecting that the measurements will get more accurate for the next run
due to its increased luminosity,
we chose to show our bounds for $\delta\sigma<0.20,0.40$
and $0.80$ in figure $1$, where we also include the limits obtained
from precision measurements at LEP [9].
We can see that the bounds are complementary and that a
measurement of the process
$e^{+}e^{-}\rightarrow\gamma\gamma+E\!\!\!/$ with larger statistics
can significantly reduce the parameter space available for the BESS
model.
In conclusion, we have shown that models of strong electroweak symmetry
breaking can also
be a source of events with two photons plus missing energy at LEP2.
The bounds obtained by requiring no deviation from the Standard Model
prediction within the experimental error are complementary to the bounds
arising from precision measurements.
Acknowledgments
We would like to thank Alexander Belyaev for teaching us how to use
Comphep and for useful conversations.
This work was supported by Conselho Nacional de Desenvolvimento
Científico e Tecnológico (CNPq) and Fundação de Amparo à
Pesquisa do Estado de São Paulo (FAPESP).
Figure Caption
Figure 1:
Limits on the parameter space $g/g^{\prime\prime}\times b$ of the BESS model.
The region outside the solid lines is excluded by precision measurements
at LEP. The regions below the dashed, dotted and dot-dashed lines are
excluded by requiring that
no deviation is observed from the Standard Model prediction for the
process $e^{+}e^{-}\rightarrow\gamma\gamma+E\!\!\!/$ at LEP2
with $\sqrt{s}=194$ GeV
within $20\%$, $40\%$, and $80\%$ accuracy, respectively.
References
[1]
S. Ambrosanio, G. L. Kane, G. D. Kribs,
S. P. Martin and S. Mrenna,
Phys. Rev. Lett. 76 (1996) 3498; Phys. Rev. D54 (1996) 5395; Phys. Rev. D55 (1997) 1372;
G. L. Kane and G. Mahlon, Phys. Lett. B408 (1997) 222;
D. R. Stump, M. Wiest and C. P. Yuan, Phys. Rev. D54 (1996) 1936;
J. L. Lopez, D. Nanopoulos and A. Zichichi, Phys. Rev. Lett. 77 (1996) 5168;
J. L. Lopez and D. Nanopoulos, Phys. Rev. D55 (1997) 5813.
S. Dimopoulos, M. Dine, S. Raby and S. Thomas, Phys. Rev. Lett. 76 (1996) 3494;
S. Dimopoulos, S. Thomas and J. D. Wells, Nucl. Phys. B488 (1997) 39;
S. Ambrosanio, G. D. Kribs and S. P. Martin, Phys. Rev. D56 (1997) 1761.
[2]
OPAL Collaboration, K. Ackerstaff et al., preprint
CERN-PPE/97-132, hep-ex/9801024; ALEPH Collaboration,
Phys. Lett. B420 (1998) 127 and contribution to the
EPS-HEP Conference, Jerusalem, August 1997, abstract number 620.
[3]
S. Mrenna, preprint ANL-HEP-PR-97-27, hep-ph/9705441.
[4]
For a collection of reprints, see
Dynamical Gauge Symmetry Breaking, edited by E. Farhi and R. Jackiw
(World Scientific, 1982). More recent developments can be found in:
K. Lane, in Proceedings of the 28th International Conference on High
Energy Physics, edited by Z. Ajduk and A. K. Wroblewski
(World Scientific 1997) - hep-ph/9610463;
R. S. Chivukula, R. Rosenfeld, E. H. Simmons and J. Terning,
in Electroweak Symmetry Breaking and New Physics at the TeV Scale,
edited by T. L. Barklow, S. Dawson, H. E. Haber and J. L. Siegrist
(World Scientific, 1996);
R. S. Chivukula, hep-ph/9803219.
[5]
R. Casalbuoni, S. De Curtis, D. Dominici and R. Gatto,
Phys. Lett. B155 (1985) 95; Nucl. Phys. B282 (1987) 235.
[6]
R. Casalbuoni, S. De Curtis, D. Dominici, F. Feruglio and R. Gatto,
Int. J. Mod. Phys. A4 (1989) 1065.
[7]
R. Casalbuoni, S. De Curtis, D. Dominici, Phys. Lett. B403 (1997) 86.
[8]
L. Antichini, R. Casalbuoni and S. De Curtis, Phys. Lett. B348 (1995) 521.
[9]
D. Dominici, Nuovo Cim. A20 (1997) 1.
[10]
R. Casalbuoni, P.Chiappetta, A. Deandrea, D. Dominici and R. Gatto,
Zeit. für Physik C60 (1993) 315.
[11]
E.E. Boos, M.N. Dubinin, V.A. Ilyin, A.E. Pukhov, V.I. Savrin,
COMPHEP: Specialized Package for Automatic Calculations of
Elementary Particle Decays and Collisions, SNUTP-94-116.
[12]
OPAL Collaboration, first reference in [2].
Maxwell operator on q-Minkowski space
and q-hyperboloid algebras
Antoine Dutriaux,
Dimitri Gurevich
LAMAV, Université de Valenciennes, 59304 Valenciennes,
France
[email protected]@univ-valenciennes.fr
Abstract
We introduce an analog of the Maxwell operator on a q-Minkowski
space algebra (treated as a particular case of the so-called
Reflection Equation Algebra) and on certain of its quotients. We
treat the space of “quantum differential forms” as a projective module in the spirit of the Serre-Swan approach. Also, we use “braided tangent vector fields” which are q-analogs of Poisson vector fields associated to the Lie algebra $sl(2)$.
AMS Mathematics Subject Classification, 2000: 17B37, 81R50
Key words: Laplace operator, Maxwell operator, braiding, Hecke symmetry, reflection equation algebra,
q-Minkowski space,
q-hyperboloid, braided vector fields
1 Introduction
The main goal of this paper is to define a q-analog of the Maxwell
operator on some noncommutative (NC) algebras. Namely, we are
dealing with three of them: the q-Minkowski space algebra
${{K}}_{q}[{{R}}^{4}]$, quantum (braided, or q-)hyperboloid algebra
${{K}}_{q}[{\rm H}_{r}^{2}]$, and an intermediate algebra ${{K}}_{q}[{{R}}^{3}]$. The
q-Minkowski space algebra (as defined in [MM], [M],
[K]) is a particular case of a so-called reflection equation
algebra (REA), the
others are its quotients. Observe that the REA was introduced in the
early 90’s by S. Majid under the name of braided matrix algebra (cf. the cited papers and the references therein). He also defined a Hermitian structure in it. Here we do not consider such a structure. (The problem of defining involution operators in “braided algebras” will be discussed in subsequent papers; hereafter the term “braided” stands for the REA and related algebras and objects. We would like only to note that the q-Minkowski space algebra endowed with the mentioned Hermitian structure cannot be treated as a real vector space.)
The above algebras are deformations of their classical counterparts
${{K}}[{{R}}^{4}]$, ${{K}}[{\rm H}_{r}^{2}]$, and ${{K}}[{{R}}^{3}],$ respectively. Hereafter
${{K}}={{C}}$ (or ${{R}})$ is the ground field, the notation ${{K}}[\cal{M}]$
stands for the coordinate algebra of a given regular affine
algebraic variety $\cal{M}$, and
$${\rm H}_{r}^{2}=\{(b,\,h,\,c)\in{{R}}^{3}\,|\,2bc+{h^{2}\over 2}=r^{2},\,r\not=0\}$$
(1.1)
is a hyperboloid. (If ${{K}}={{R}}$ and $r$ is real, we get a one-sheeted hyperboloid; if $r$ is purely imaginary, we get a two-sheeted hyperboloid. However, if ${{K}}={{C}}$ we allow $r$ to take any non-trivial value. It should be emphasized that we prefer to deal with a q-analog of a hyperboloid, and not of a sphere, i.e., the so-called Podles’ sphere, since it cannot be realized as a real algebra.) Moreover, all the deformed algebras we are dealing with can be endowed with an action of the quantum group (QG) $U_{q}(sl(2))$ compatible with their product in the usual way.
By passing from the classical algebras to their quantum or
braided analogs we want to
simultaneously deform certain differential operators defined on
the initial algebras or some vector bundles over them. Moreover,
if a given operator is covariant w.r.t. a group $G$, we require its deformed counterpart to be covariant w.r.t. the corresponding QG.
The simplest operator which can be “q-deformed” is the Laplace (or Laplace-Beltrami) operator on one of the mentioned algebras.
Recall that this operator is associated to a (pseudo-)Riemannian metric on a given regular variety.
Thus, if the metric $g$ which comes in its definition is constant of signature $(1|3)$ on the space ${{R}}^{4}$, the corresponding Laplace operator (also called the d’Alembertian) is
$\partial_{t}^{2}-\partial_{x}^{2}-\partial_{y}^{2}-\partial_{z}^{2}$ where
$(t,\,x,\,y,\,z)$ are Cartesian coordinates in this space. If
${\cal{M}}={\rm H}_{r}^{2}$ and $g\in\Omega^{2}({\rm H}_{r}^{2})$ is an $SL(2)$-invariant metric
the corresponding Laplace operator equals (up to a factor) the quadratic Casimir operator coming from
the enveloping algebra $U(sl(2))$ whereas the hyperboloid is
treated as an orbit ${\rm H}_{r}^{2}\hookrightarrow sl(2)^{*}$ of the coadjoint
action of the Lie algebra $sl(2)$. Vector fields arising from this
coadjoint action are tangent to all orbits in $sl(2)^{*}$ and we
call them
tangent vector fields. (Also, they are Poisson vector fields w.r.t. the linear Poisson-Lie bracket
defined on the space $sl(2)^{*}$.)
Thus, in order to define braided
analogs of the Laplace operator on the quantum algebras in question
we should first introduce braided analogs of vector fields. More
precisely, we need analogs of tangent vector fields while dealing
with the algebra ${{K}}_{q}[{\rm H}_{r}^{2}],$ and analogs of partial derivatives
when dealing with the algebras ${{K}}_{q}[{{R}}^{3}]$ and ${{K}}_{q}[{{R}}^{4}]$.
The problem of defining a braided analog of the Maxwell operator is even more complicated since
we should first introduce braided analogs of the spaces of
differential forms $\Omega^{1}({\rm H}_{r}^{2})$, $\Omega^{1}({{R}}^{3})$, and
$\Omega^{1}({{R}}^{4})$. Recall that the Maxwell operator is defined on a
given variety $\cal{M}$ as follows:
$$\omega\to\partial\,d\,\omega,\,\,\,\partial=*^{-1}\,d\,*.$$
(1.2)
Here $\omega\in\Omega^{1}(\cal{M})$ is a one-form on $\cal{M}$, $d$ is the de Rham
operator and $*$ is the Hodge operator. (Note that on the
classical Minkowski space the Maxwell system is initially defined
on the space $\Omega^{2}({{R}}^{4})$ but it can be easily reduced to the
operator above. Also, note that the conventional definitions of
the Maxwell and Laplace operators
differ from ours by a sign. We disregard this.)
There are several approaches to the problem of defining
analogs of differential forms and of
the de Rham operator on a given noncommutative algebra $A$. The first approach consists of considering
universal differential forms without any commutation relations (e.g., $a\,db=db\,a$) between “functions” $a\in A$ and “differentials” $db$, but with the preservation of the Leibnitz
rule. The corresponding differential algebra is much bigger than
the algebra of usual differential forms even if the initial
algebra is the coordinate algebra $A={{K}}[\cal{M}]$ of a regular variety
$\cal{M}$.
If an algebra $A$ is related to a braiding (say, it is a so-called
RTT algebra or an REA, see section 4) one looks for an extension
of the braiding coming in the definition of $A$ onto the space of
differential forms. Such an extended braiding enables one to
relate the elements of the form $a\,db$ and $db\,a$, and to reduce the space of universal differential forms to the “classical size”. In the case of the quantum analog of the group
$GL(n)$ this can even be done with preservation of the Leibnitz
rule (cf. [W, K, IP]) while for the quantum analog of the
group
$SL(n)$ this rule has to be dropped (cf. [FP]).
The third approach, due to A. Connes, is based on the notion of
spectral triples. In the framework of this approach the role of
differential forms is played by the (classes of) Hochschild
cycles [C].
Nevertheless, all these approaches do not enable one to define a smoothly deformed space of differential
forms on a quantum hyperboloid algebra
${{K}}_{q}[{\rm H}_{r}^{2}]$.
As it was observed in [AG] the space of differential forms
$\Omega^{1}({\rm H}_{r}^{2})$ can be smoothly deformed
into a quantum one $\Omega^{1}_{q}({\rm H}_{r}^{2})$ as a
one-sided module. However, in contrast with the classical case ($q=1$), if we convert this one-sided ${{K}}_{q}[{\rm H}_{r}^{2}]$-module into a two-sided module via an extension of the initial braiding, we reduce the size of the space of differential forms.
Following [AG, A], we treat the spaces of braided differential forms as one-sided projective modules. (Note that in general, due to the Serre-Swan approach, any vector bundle on a regular affine variety can be realized as a projective module. Recall that such a (say, right) $A$-module for a given algebra $A$ is of the form $e\,A^{\oplus n}$, where $e\in{\rm Mat}_{n}(A)$ is an idempotent and ${\rm Mat}_{n}(A)$ stands for the space of $n\times n$ matrices with entries from $A$. As was shown in [R], if $A_{\hbar}$ is a formal deformation of a commutative algebra $A$, any idempotent $e\in{\rm Mat}_{n}(A)$ can be deformed into an idempotent $e_{\hbar}\in{\rm Mat}_{n}(A_{\hbar})$. Thus, we get a one-sided projective $A_{\hbar}$-module $e_{\hbar}\,A_{\hbar}^{\oplus n}$ which is a formal deformation of the initial $A$-module. Nowadays, an explicit formula for such a deformed idempotent is known, cf. [BB].) In order to do so, we use the following
remarkable property of the q-Minkowski space algebra ${{K}}_{q}[{{R}}^{4}]$.
There exists a series of the Cayley-Hamilton (CH) polynomial
identities for some matrices with entries from the algebra
${{K}}_{q}[{{R}}^{4}]$. The coefficients of the CH polynomials are central
in this algebra, and they become scalar if we switch to the
quotient ${{K}}_{q}[{\rm H}_{r}^{2}]$. It is a somewhat standard trick to use
these polynomials to construct a set of idempotents and
corresponding projective modules. Thus, we can explicitly deform
any $SL(2)$-equivariant vector bundle on a hyperboloid ${\rm H}_{r}^{2},$
realized as a projective module, to its braided counterpart. By
applying this scheme to the space $\Omega^{1}({\rm H}_{r}^{2})$ we get its braided
analog $\Omega^{1}_{q}({\rm H}_{r}^{2}),$ realized as a projective
${{K}}_{q}[{\rm H}_{r}^{2}]$-module. Moreover, it is $U_{q}(sl(2))$-equivariant (covariant).
Besides, we define braided analogs of tangent vector fields without any
form of the Leibnitz rule. In order to do that,
we use another remarkable property of the q-Minkowski space
algebra ${{K}}_{q}[{{R}}^{4}]$. Let ${\cal L}$ be the space spanned by the
generators of this algebra. There exists a braided analog
$$[\,,\,\,]_{q}:{\cal L}^{\otimes 2}\to{\cal L}$$
of the $gl(2)$ Lie bracket such that (a slight modification of)
the algebra ${{K}}_{q}[{{R}}^{4}]$ can be regarded as the enveloping algebra
of the corresponding q- (or braided) Lie algebra. We refer the
reader to [GPS2] for further explanations (applicable in a
much more general setting). Below, we only need a braided analog
$[\,,\,\,]_{q}:{\cal SL}^{\otimes 2}\to{\cal SL}$ of the Lie algebra $sl(2)$, where ${\cal SL}$ is a 3-dimensional subspace of the space ${\cal L}$. By using this q-bracket we define “braided tangent vector fields” following the classical pattern. Moreover, there is a braided analog of the quadratic Casimir element on the space ${\cal SL}^{\otimes 2}$. Representing it by the above “braided tangent vector fields” we get an analog of the Casimir operator on a q-hyperboloid.
A braided analog of the Maxwell operator on a q-hyperboloid is also defined via braided tangent vector fields, as a proper deformation of the Maxwell operator acting on $\Omega^{1}({\rm H}_{r}^{2})$. Here, as we have said above, the space of braided differential forms on a q-hyperboloid is treated as a one-sided projective ${{K}}_{q}[{\rm H}_{r}^{2}]$-module. To this end we first apply this scheme to the Maxwell operator on a usual hyperboloid (which seems to be new even in this classical setting).
In order to get the Maxwell operator on the algebra ${{K}}_{q}[{{R}}^{3}]$, in addition to “braided tangent vector fields”, we must also use the derivative in $r$, where $r$ is an analog of the radius (it comes
in a
parametrization of quantum hyperboloids and in a sense belongs to
the algebraic extension of the center of the algebra
${{K}}_{q}[{{R}}^{3}]$). We assume that this derivative has the classical
properties, in particular, it satisfies the Leibnitz rule.
Thus, we relate the braided Laplace and Maxwell operators
on the algebra ${{K}}_{q}[{\rm H}_{r}^{2}]$ and on the algebra ${{K}}_{q}[{{R}}^{3}]$ in a
way similar to the classical one: the operators on the former algebra are
restrictions of those on the latter one. We emphasize that the methods of defining partial derivatives on the q-Minkowski space algebra via a “braided Leibnitz rule” (based on a transposition of “functions” and “partial derivatives” as in [K], [IP], [FP], or a braided coaddition as in [M]) do not allow one to get
braided vector fields on a q-hyperboloid. Also, note that our
braided vector fields differ drastically from q-analogs of differential operators
arising from a coalgebraic structure in
the corresponding QG (cf. [D]).
As for the q-Minkowski space algebra ${{K}}_{q}[{{R}}^{4}]$, it has one more generator than ${{K}}_{q}[{{R}}^{3}]$. Moreover, this generator
is central. So, first, we define the partial derivative w.r.t.
this generator, and then we introduce the Maxwell operators on the
algebra ${{K}}_{q}[{{R}}^{4}]$ in the classical manner.
By properly defining the action of the QG $U_{q}(sl(2))$ on all ingredients
of the Maxwell operators on the algebras in question, we force
them to be
$U_{q}(sl(2))$-invariant. Moreover, these operators possess a gauge freedom similar to the classical one (i.e., their kernels are as large as those of their classical counterparts), provided the corresponding Laplace operators are central.
In conclusion we want to mention the paper [S] where the
author suggests a way of defining q-analogs of gauge models
via quantum gauge potentials. To this end he uses a q-analog of the Lie algebra $su(n)$ similar to that
considered above.
However, the ground (source) algebra considered in [S] is commutative
whereas our ground algebras are essentially noncommutative.
We hope our method will be useful for a “q-deformation” of other gauge models.
Acknowledgement. One of the authors (D.G.) would
like to thank the Max-Planck-Institut für Mathematik, where
this work was completed, for the warm hospitality and stimulating
atmosphere. The work is partially supported by the grant
ANR-05-BLAN-0029-01.
2 Maxwell operator via projective modules
In this section we introduce the Maxwell operator in classical settings and consider its behavior
w.r.t. the restriction to a subvariety.
Also, we give a few basic examples.
Let $\cal{M}$ be a regular affine variety endowed with a (pseudo-) Riemannian metric $g_{ij}=g(\partial_{i},\partial_{j})$
where $\partial_{i}$ are
partial derivatives in local coordinates. We need two operators
$$d:{{K}}[\cal{M}]\to\Omega^{1}(\cal{M}),\quad f(x)\mapsto\partial_{i}f\,dx_{i},$$
$$\partial:\Omega^{1}(\cal{M})\to{{K}}[\cal{M}],\quad{\alpha}_{i}\,dx_{i}\mapsto{1\over\sqrt{g}}\,\partial_{i}(g^{ij}\sqrt{g}\,{\alpha}_{j})$$
where $g=|\det(g_{ij})|$ and the tensor $g^{ij}$ is inverse to
$g_{ij}$. The Laplace operator on the algebra ${{K}}[\cal{M}]$ is
$$\Delta(f)=\partial\,df={1\over\sqrt{g}}\,\partial_{i}(g^{ij}\sqrt{g}\,\partial_{j}f).$$
Laplace operators on the spaces $\Omega^{i}(\cal{M})$ are defined by
the formula
$$\Delta=\partial\,d+d\,\partial$$
where $\partial:{\Omega^{i}(\cal{M})}\to\Omega^{i-1}(\cal{M})$ is the well known
analog of the above operator. In what follows we realize the
Maxwell operator as $\rm Mw=\Delta-d\,\partial$. Besides, if ${\cal{M}}={{R}}^{n}$
and the metric is constant in the Cartesian coordinates $x_{i}$,
the operator $\Delta$ acts on the space $\Omega^{1}(\cal{M})$ via $\Delta({\alpha}_{i}dx_{i})=\Delta({\alpha}_{i})dx_{i}$.
Proposition 1
Let $\cal{N}\subset\cal{M}$ be a regular subvariety of a variety $\cal{M}$ of codimension 1. Suppose that
in a vicinity of each point $a\in\cal{N}$ there exists a coordinate
system $(x_{1},x_{2},...,x_{n})$ such that $\cal{N}$ is given by $x_{n}=0$ and
$(x_{1},x_{2},...,x_{n-1})$ is a local coordinate system in $\cal{N}$,
$g(\partial_{n},\partial_{n})=1$ and $g(\partial_{i},\partial_{n})=0$ for any $1\leq i\leq n-1$. Then
$\rm Mw_{\cal{N}}=\rm Mw_{\cal{M}}|_{\cal{N}}$, i.e. the Maxwell operator on $\cal{N}$ is
the restriction of the Maxwell operator on $\cal{M}$. A similar claim
is valid for the Laplace operator.
The proof follows from the explicit form of the Maxwell and Laplace operators in the local coordinate system $(x_{1},x_{2},\dots,x_{n})$.
This proposition enables us to write the Maxwell operator on
certain algebraic varieties in terms of ambient spaces. Thus, we
realize the space of one-forms (as well as the space of vector
fields) on such a variety as a projective module without using
any local coordinate system.
Let us consider the basic example: a sphere
$$S_{r}^{2}=\{(x,y,z)\in{{R}}^{3}\,|\,x^{2}+y^{2}+z^{2}=r^{2},\,r>0\}$$
embedded in the Euclidean space ${{R}}^{3}\cong so(3)^{*}$ as an orbit of the action of the group $SO(3)$. Also, we assume this space to be equipped with an $SO(3)$-invariant pairing $\langle x,x\rangle=1,\,\,\langle x,y\rangle=0$, and so on. The corresponding metric is $g(\partial_{x},\partial_{x})=1,\,\,g(\partial_{x},\partial_{y})=0$, and
so on. Also, we endow the coordinate algebra ${{K}}[{{R}}^{3}]$ of the
space ${{R}}^{3}$ with the $SO(3)$-covariant Poisson bracket
$$\{x,y\}=z,\,\,\{y,z\}=x,\,\,\{z,x\}=y.$$
To any function $f\in{{K}}[{{R}}^{3}]$ we associate the operator
$$\rm Pois_{f}(g):=\{f,g\},\,\,\forall\,g\in{{K}}[{{R}}^{3}].$$
Then the operators $X=\rm Pois_{x},\,Y=\rm Pois_{y},\,Z=\rm Pois_{z}$ are
infinitesimal rotations. Their explicit forms are
$$X=z\,\partial_{y}-y\,\partial_{z},\,\,Y=x\,\partial_{z}-z\,\partial_{x},\,\,Z=y\,\partial_{x}-x\,\partial_{y}.$$
They are tangent to the spheres $S^{2}_{r}$ and subject to the relation
$$x\,X+y\,Y+z\,Z=0.$$
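As a sanity check, the relation $x\,X+y\,Y+z\,Z=0$ and the tangency of these fields to the spheres can be verified numerically at a sample point; the following sketch (ours, not part of the original text) encodes each field by its component vector:

```python
def X(p):
    """Components of the infinitesimal rotation X = z d/dy - y d/dz at the point p."""
    x, y, z = p
    return (0.0, z, -y)

def Y(p):
    """Y = x d/dz - z d/dx."""
    x, y, z = p
    return (-z, 0.0, x)

def Z(p):
    """Z = y d/dx - x d/dy."""
    x, y, z = p
    return (y, -x, 0.0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lin_comb(p):
    """Components of the field x*X + y*Y + z*Z at p; should vanish identically."""
    x, y, z = p
    return tuple(x * a + y * b + z * c
                 for a, b, c in zip(X(p), Y(p), Z(p)))
```

Tangency to the sphere through $p$ is the vanishing of the pairing of each field with the position vector (half the gradient of $x^{2}+y^{2}+z^{2}$).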
Consider the coordinate algebra of the sphere $S_{r}^{2}$
$${{K}}[S^{2}_{r}]={{K}}[{{R}}^{3}]/\langle x^{2}+y^{2}+z^{2}-r^{2}\rangle.$$
Hereafter $\langle I\rangle$ stands for the two-sided ideal
generated by a set $I$. The space ${\rm Vect}(S^{2}_{r})$ of all vector
fields on a sphere (with coefficients from ${{K}}[S_{r}^{2}]$), treated
as a ${{K}}[S_{r}^{2}]$-module, is the quotient
$$M={{K}}[S_{r}^{2}]^{\oplus 3}/{\overline{M}}$$
of the free ${{K}}[S_{r}^{2}]$-module ${{K}}[S_{r}^{2}]^{\oplus 3}$ over the submodule ${\overline{M}}=\{\varphi(x\,X+y\,Y+z\,Z),\,\forall\,\varphi\in{{K}}[S_{r}^{2}]\}$.
It is not difficult to see that the module ${\overline{M}}$ is projective. Indeed, the matrix
$${\overline{e}}={1\over r^{2}}\left(\begin{array}[]{c}x\\
y\\
z\end{array}\right)\left(\begin{array}[]{ccc}x&y&z\end{array}\right)$$
defines an idempotent such that ${\overline{M}}={\overline{e}}\,{{K}}[S_{r}^{2}]^{\oplus 3}$. Therefore the
${{K}}[S_{r}^{2}]$-module $M$ can be realized as a submodule
$$M=e\,{{K}}[S_{r}^{2}]^{\oplus 3}\subset{{K}}[S_{r}^{2}]^{\oplus 3}$$
where $e=I-{\overline{e}}$ is the complementary idempotent.
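The idempotency of $\overline{e}$ (and hence of $e$) can be confirmed symbolically; a minimal sketch in Python with sympy (our illustrative assumption), working with $r^{2}=x^{2}+y^{2}+z^{2}$ so the sphere relation holds identically:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
v = sp.Matrix([x, y, z])
rho = x**2 + y**2 + z**2        # plays the role of r^2 on the sphere
ebar = (v * v.T) / rho          # the idempotent \bar{e}
e = sp.eye(3) - ebar            # the complementary idempotent

assert sp.simplify(ebar*ebar - ebar) == sp.zeros(3, 3)
assert sp.simplify(e*e - e) == sp.zeros(3, 3)
assert sp.simplify(e*ebar) == sp.zeros(3, 3)   # e and \bar{e} are orthogonal
```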
We call the ${{K}}[S_{r}^{2}]$-module $M$ tangent.
In contrast with other ${{K}}[S_{r}^{2}]$-modules, the tangent module carries
the action $M\otimes{{K}}[S_{r}^{2}]\to{{K}}[S_{r}^{2}]$ given by applying a
vector field to a function.
In a similar way we can realize the space of one-forms
$\Omega^{1}(S_{r}^{2})$. This space consists of all elements ${\alpha}\,dx+\beta\,dy+\gamma\,dz$, where the elements $\varphi(x\,dx+y\,dy+z\,dz)$, $\varphi\in{{K}}[S_{r}^{2}]$, are set to zero. Thus, as a ${{K}}[S_{r}^{2}]$-module, the space
$\Omega^{1}(S_{r}^{2})$ is isomorphic to the module $M$ above. This
module is called cotangent. (For a similar treatment of the
space $\Omega^{2}(S_{r}^{2})$ the reader is referred to [GS1].)
We do not distinguish between the tangent and cotangent
${{K}}[S_{r}^{2}]$-modules, and denote them $M(S^{2}_{r})$. Their elements are
treated as triples $({\alpha},\,\beta,\,\gamma)^{t}$ ($t$ stands for
transposition) modulo
$$(x\,\rho,\,y\,\rho,\,z\,\rho)^{t},\qquad{\alpha},\,\beta,\,\gamma,\,\rho\in{{K}}[S^{2}_{r}].$$
Now, we exhibit the Maxwell operator on the Euclidean space ${{R}}^{3}$ in a convenient form.
This operator acts on the space of differential one-forms
$$\Omega^{1}({{R}}^{3})=\{{\alpha}\,dx+\beta\,dy+\gamma\,dz\},\qquad{\alpha},\,\beta,\,\gamma\in{{K}}[{{R}}^{3}]$$
(which is a free ${{K}}[{{R}}^{3}]$-module $\Omega^{1}({{R}}^{3})\cong{{K}}[{{R}}^{3}]^{\oplus 3}$) via formula (1.2).
It can also be written in the form
$$-\rm rot\,\rm rot=\Delta-\rm grad\,\rm div$$
where
$$\rm rot:\Omega^{1}({{R}}^{3})\to\Omega^{1}({{R}}^{3}),\quad\rm div:\Omega^{1}({{R}}^{3})\to{{K}}[{{R}}^{3}],\quad\rm grad:{{K}}[{{R}}^{3}]\to\Omega^{1}({{R}}^{3})$$
are the curl, divergence, and gradient respectively, and
$$\Delta=\Delta_{{{K}}[so(3)^{*}]}=\partial^{2}_{x}+\partial^{2}_{y}+\partial^{2}_{z}$$
is the Laplace operator.
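The classical identity $-\rm rot\,\rm rot=\Delta-\rm grad\,\rm div$ can be verified symbolically; a sketch in Python with sympy (an illustrative assumption of ours), applied to an arbitrary polynomial one-form:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f): return sp.Matrix([sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)])
def div(F):  return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)
def rot(F):  # the curl
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])
def lap(f):  return div(grad(f))

F = sp.Matrix([x**2*y, y*z**3, x*y*z])   # an arbitrary test one-form
lhs = -rot(rot(F))
rhs = sp.Matrix([lap(g) for g in F]) - grad(div(F))
assert sp.expand(lhs - rhs) == sp.zeros(3, 1)
```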
By identifying a differential
form ${\alpha}\,dx+\beta\,dy+\gamma\,dz$ and the triple $({\alpha},\,\beta,\,\gamma)^{t}$ as explained above we can write
$$\rm Mw_{{{K}}[so(3)^{*}]}\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right)=\left(\begin{array}[]{c}\Delta({\alpha})\\
\Delta(\beta)\\
\Delta(\gamma)\end{array}\right)-\left(\begin{array}[]{c}\partial_{x}\\
\partial_{y}\\
\partial_{z}\end{array}\right)(\partial_{x},\,\partial_{y},\,\partial_{z})\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right).$$
(2.1)
Observe that the Maxwell operator is $SO(3)$-invariant if $SO(3)$ acts on the generators $x,y,z$ and the matrices
$M\in{\rm Mat}_{3}({{R}})$ in a proper way (see section 5).
The gauge freedom is due to the fact that the triples $(\partial_{x}\rho,\,\partial_{y}\rho,\,\partial_{z}\rho)^{t}$ belong to the kernel $\rm Ker\,(Mw_{{{K}}[so(3)^{*}]})$ of this operator. So, if a triple
$({\alpha},\,\beta,\,\gamma)^{t}$ is a solution of the Maxwell equation
$\rm Mw_{{{K}}[so(3)^{*}]}({\alpha},\,\beta,\,\gamma)^{t}=(\lambda,\,\mu,\,\nu)%
^{t}$
then the triple $({\alpha},\,\beta,\,\gamma)^{t}+(\partial_{x}\rho,\,\partial_{y}\rho,\,%
\partial_{z}\rho)^{t}$ is also a solution.
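The gauge freedom is immediate to check symbolically: applying the Maxwell operator (2.1) to a gradient triple gives zero. A sketch in Python with sympy (the library is our assumption):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f): return sp.Matrix([sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)])
def div(F):  return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)
def Mw(F):   # Maxwell operator (2.1): componentwise Laplacian minus grad div
    return sp.Matrix([div(grad(g)) for g in F]) - grad(div(F))

rho = x**3*y + y**2*z - x*z**2    # an arbitrary gauge function
assert sp.expand(Mw(grad(rho))) == sp.zeros(3, 1)
```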
Now, consider the Maxwell operator on the sphere $S_{r}^{2}$. By using
the relation
$$r\partial_{r}=x\partial_{x}+y\partial_{y}+z\partial_{z}$$
(2.2)
we realize
the partial derivatives $\partial_{x},\,\partial_{y},\,\partial_{z}$ as follows
$$\partial_{x}={1\over r^{2}}(y\,Z-z\,Y)+{x\over r}\partial_{r}\quad\circlearrowleft.$$
(2.3)
(Hereafter the symbol $\circlearrowleft$ indicates that the remaining formulas are obtained by cyclic permutations of $x,\,y,\,z$.)
In what follows we also use the following formulae
$$\partial_{r}x={x\over r}\quad\circlearrowleft,\qquad\partial_{x}r={x\over r}\quad\circlearrowleft,\qquad X(r)=Y(r)=Z(r)=0,$$
and the fact that the vector field $\partial_{r}$ commutes with
$X,Y,Z$.
By using formula (2.3) we rewrite the Laplace operator on the space ${{R}}^{3}$ as follows
$$\Delta_{{{K}}[so(3)^{*}]}=\Big({1\over r^{2}}{\cal X}+{x\over r}\partial_{r}\Big)^{2}+\Big({1\over r^{2}}{\cal Y}+{y\over r}\partial_{r}\Big)^{2}+\Big({1\over r^{2}}{\cal Z}+{z\over r}\partial_{r}\Big)^{2},$$
(2.4)
where we use the notations
$${\cal X}=y\,Z-z\,Y,\qquad{\cal Y}=z\,X-x\,Z,\qquad{\cal Z}=x\,Y-y\,X.$$
By the above proposition the Laplacian $\Delta_{{{K}}[S^{2}_{r}]}$ on the
sphere $S^{2}_{r}$
is the restriction of the Laplacian $\Delta_{{{K}}[so(3)^{*}]}$ to the sphere $S^{2}_{r}$. Indeed, in spherical coordinates
the radius $r$ plays the role of the coordinate $x_{n}$ from Proposition 1.
Lemma 2
$$\Delta_{{{K}}[S^{2}_{r}]}={{\cal X}^{2}+{\cal Y}^{2}+{\cal Z}^{2}\over r^{4}}.$$
(2.5)
Proof We only have to check that the first
order component of the operator (2.4) vanishes on the sphere
$S^{2}_{r}$, where $\partial_{r}=0$. Indeed, this component equals
$${x\over r}\partial_{r}\Big({1\over r^{2}}\Big){\cal X}+{y\over r}\partial_{r}\Big({1\over r^{2}}\Big){\cal Y}+{z\over r}\partial_{r}\Big({1\over r^{2}}\Big){\cal Z}=-{2\over r^{4}}(x{\cal X}+y{\cal Y}+z{\cal Z})=0.$$
(Moreover, there are no
$SO(3)$-invariant first order differential operators on the space
${{R}}^{3}$.)
Lemma 3
The Laplace operator (2.5) equals
$$\Delta_{{{K}}[S^{2}_{r}]}={X^{2}+Y^{2}+Z^{2}\over r^{2}}.$$
Proof We have
$${\cal X}^{2}+{\cal Y}^{2}+{\cal Z}^{2}=(y\,Z-z\,Y)^{2}+(z\,X-x\,Z)^{2}+(x\,Y-y\,X)^{2}$$
$$=y^{2}Z^{2}+z^{2}Y^{2}-yz(YZ+ZY)-yxZ+zxY+\circlearrowleft$$
$$=(r^{2}-x^{2})X^{2}-yxYX-zxZX+\circlearrowleft$$
$$=r^{2}(X^{2}+Y^{2}+Z^{2})-[x(xX+yY+zZ)X+\circlearrowleft]=r^{2}(X^{2}+Y^{2}+Z^{2}).$$
This form of the Laplacian is more familiar and widely used in the study of rotationally symmetric
Schr\"odinger operators.
Let us emphasize that the operators $X,\,Y,\,Z,\,\,{\cal X},\,{\cal Y},\,{\cal Z}$
are well defined on the space ${{R}}^{3}$, and the relation
${\cal X}^{2}+{\cal Y}^{2}+{\cal Z}^{2}=r^{2}(X^{2}+Y^{2}+Z^{2})$ is also valid on the whole space
${{R}}^{3}$.
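This operator relation on the whole space ${{R}}^{3}$ can also be checked symbolically; a sketch in Python with sympy (our illustrative assumption), applying both sides to an arbitrary polynomial:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def X(f): return z*sp.diff(f, y) - y*sp.diff(f, z)
def Y(f): return x*sp.diff(f, z) - z*sp.diff(f, x)
def Z(f): return y*sp.diff(f, x) - x*sp.diff(f, y)
def cX(f): return y*Z(f) - z*Y(f)    # the operator \cal X
def cY(f): return z*X(f) - x*Z(f)    # the operator \cal Y
def cZ(f): return x*Y(f) - y*X(f)    # the operator \cal Z

f = x**3*z + x*y**2 - y*z**2         # an arbitrary test function
lhs = cX(cX(f)) + cY(cY(f)) + cZ(cZ(f))
rhs = (x**2 + y**2 + z**2)*(X(X(f)) + Y(Y(f)) + Z(Z(f)))
assert sp.expand(lhs - rhs) == 0
```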
Proposition 4
$$\Delta_{{{K}}[S_{r}^{2}]}\,{\cal X}-{\cal X}\,\Delta_{{{K}}[S_{r}^{2}]}=-2x\,\Delta_{{{K}}[S_{r}^{2}]}\quad\circlearrowleft$$
on ${{R}}^{3}$.
Proof Indeed, by direct computations we have
$$\Delta_{{{K}}[S^{2}_{r}]}\,{\cal X}-{\cal X}\,\Delta_{{{K}}[S^{2}_{r}]}={2\over r^{2}}(zZX+yYX-xY^{2}-xZ^{2})={2\over r^{2}}\big((xX+yY+zZ)X-x(X^{2}+Y^{2}+Z^{2})\big).$$
Now, define the Maxwell operator $\rm Mw_{{{K}}[S^{2}_{r}]}$ on the
sphere $S_{r}^{2}$ as follows
$$\rm Mw_{{{K}}[S^{2}_{r}]}\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right)=e\left(\left(\begin{array}[]{c}\Delta_{{{K}}[S^{2}_{r}]}({\alpha})\\
\Delta_{{{K}}[S^{2}_{r}]}(\beta)\\
\Delta_{{{K}}[S^{2}_{r}]}(\gamma)\end{array}\right)-{1\over r^{4}}\left(\begin{array}[]{c}{\cal X}\\
{\cal Y}\\
{\cal Z}\end{array}\right)({\cal X},\,{\cal Y},\,{\cal Z})\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right)\right),\qquad\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right)\in M(S^{2}_{r}).$$
(2.6)
In this definition we assume that the elements of the module
$M(S_{r}^{2})$ are triples $({\alpha},\,\beta,\,\gamma)^{t}$ such that
${\overline{e}}\,({\alpha},\,\beta,\,\gamma)^{t}=0$ or, which is the same,
$e\,({\alpha},\,\beta,\,\gamma)^{t}=({\alpha},\,\beta,\,\gamma)^{t}$. The
idempotent $e$ appearing in this definition ensures that the image
of the operator $\rm Mw_{{{K}}[S^{2}_{r}]}$ belongs to the module $M(S_{r}^{2})$.
Now, we justify this definition. In fact, by Proposition 1 if we
present the Maxwell operator on ${{R}}^{3}$ in spherical coordinates
and restrict it to the sphere $S_{r}^{2},$ we get the Maxwell operator
on this sphere. It remains to check that it coincides with the
operator (2.6).
Let us observe that, similarly to the Laplacian (2.5), the
operator $\rm Mw_{{{K}}[S^{2}_{r}]}$ is $SO(3)$-covariant. Moreover, its
gauge freedom consists in the fact that the triples
$({\cal X}(\rho),\,{\cal Y}(\rho),\,{\cal Z}(\rho))^{t}$ belong to
$\rm Ker\,(\rm Mw_{{{K}}[S^{2}_{r}]})$. This follows from Proposition
4. (These triples belong to the module $M(S_{r}^{2})$ since
${\overline{e}}({\cal X},\,{\cal Y},\,{\cal Z})^{t}=0$.)
Remark 5
Introduce the "gradient" and "divergence" on the sphere $S_{r}^{2}$ as follows
$$\rm grad_{{{K}}[S^{2}_{r}]}\,f=r^{-2}({\cal X}(f),\,{\cal Y}(f),\,{\cal Z}(f))^{t},\qquad\rm div_{{{K}}[S^{2}_{r}]}\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right)=r^{-2}({\cal X},\,{\cal Y},\,{\cal Z})\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right),\qquad\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right)\in M(S^{2}_{r}).$$
Rewrite the operator $\rm Mw_{{{K}}[S^{2}_{r}]}$ in a form similar to the
classical one:
$$\rm Mw_{{{K}}[S^{2}_{r}]}=e\,\Delta_{{{K}}[S^{2}_{r}]}-\rm grad_{{{K}}[S^{2}_{r}]}\,\rm div_{{{K}}[S^{2}_{r}]}$$
(the factor $e$ in the second summand can be omitted).
Let us consider one more example: the classical Minkowski space, i.e., the 4-dimensional space endowed with the
$SO(1,3)$-covariant norm
$$\|(t,\,x,\,y,\,z)\|=\sqrt{t^{2}-x^{2}-y^{2}-z^{2}}.$$
The corresponding second order differential operator is
$$\Delta_{{{K}}[{{R}}^{4}]}=\partial^{2}_{t}-\partial^{2}_{x}-\partial^{2}_{y}-\partial^{2}_{z}.$$
It is called the d'Alembertian, or the $so(1,3)$ Laplacian.
Then the corresponding Maxwell operator is
$$\rm Mw_{{{K}}[{{R}}^{4}]}\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\\
\delta\end{array}\right)=\left(\begin{array}[]{c}\Delta_{{{K}}[{{R}}^{4}]}({\alpha})\\
\Delta_{{{K}}[{{R}}^{4}]}(\beta)\\
\Delta_{{{K}}[{{R}}^{4}]}(\gamma)\\
\Delta_{{{K}}[{{R}}^{4}]}(\delta)\end{array}\right)-\left(\begin{array}[]{c}\partial_{t}\\
\partial_{x}\\
\partial_{y}\\
\partial_{z}\end{array}\right)(\partial_{t},\,-\partial_{x},\,-\partial_{y},\,-\partial_{z})\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\\
\delta\end{array}\right).$$
(2.7)
It is evident that it is $SO(1,3)$-covariant and
$$(\partial_{t}\rho,\,\partial_{x}\rho,\,\partial_{y}\rho,\,\partial_{z}\rho)^{t}\in\rm Ker\,(\rm Mw).$$
It is also clear that by setting $t=0$ we get the Maxwell
operator on the space ${{R}}^{3}$ (up to a sign).
3 Maxwell operator on $sl(2)^{*}$ and ${\rm H}_{r}^{2}$
Now, we apply the above scheme to the space ${{R}}^{3}\cong sl(2)^{*}$
endowed with an action of the group $SL(2)$. This example is
another (non-compact) real form of the situation considered above.
So, the corresponding Maxwell operator can be obtained by a mere
change of basis. Nevertheless, we describe it in detail since it
is going to be "q-deformed" below.
Consider generators $\{b,\,h,\,c\}$ of the algebra ${{K}}[sl(2)^{*}]$
equipped with the $SL(2)$-covariant Poisson bracket
$$\{h,b\}=2b,\quad\{h,c\}=-2c,\quad\{b,c\}=h.$$
The corresponding Poisson operators are
$$H={\rm Pois}_{h}=2b\,\partial_{b}-2c\,\partial_{c},\quad B={\rm Pois}_{b}=h\,\partial_{c}-2b\,\partial_{h},\quad C={\rm Pois}_{c}=-h\,\partial_{b}+2c\,\partial_{h}.$$
They are tangent to hyperboloids and subject to the relation
$$c\,B+{hH\over 2}+b\,C=0.$$
(3.1)
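The relation (3.1) and the tangency to the hyperboloids can be checked symbolically; a sketch in Python with sympy (the library choice is our assumption):

```python
import sympy as sp

b, h, c = sp.symbols('b h c')

def H(f): return 2*b*sp.diff(f, b) - 2*c*sp.diff(f, c)
def B(f): return h*sp.diff(f, c) - 2*b*sp.diff(f, h)
def C(f): return -h*sp.diff(f, b) + 2*c*sp.diff(f, h)

f = b**2*h + h*c**3 - b*c     # an arbitrary test polynomial
# the relation c B + h H / 2 + b C = 0 as a differential operator
assert sp.expand(c*B(f) + h*H(f)/2 + b*C(f)) == 0
# B, H, C annihilate the Casimir bc + h^2/2 + cb, hence are tangent
cas = 2*b*c + h**2/2
assert all(sp.expand(op(cas)) == 0 for op in (B, H, C))
```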
The ${{K}}[{\rm H}_{r}^{2}]$-module ${\rm Vect}({\rm H}_{r}^{2})$
of vector fields on a hyperboloid (1.1) is a quotient
module $M={{K}}[{\rm H}_{r}^{2}]^{\oplus 3}/{\overline{M}}$ of the free
${{K}}[{\rm H}_{r}^{2}]$-module ${{K}}[{\rm H}_{r}^{2}]^{\oplus 3}$ by the submodule
${\overline{M}}=\{\varphi(c\,B+{hH\over 2}+b\,C),\,\forall\,\varphi\in{{K}}[{\rm H}_{r}^{2}]\}$. The idempotent corresponding to the
module ${\overline{M}}$ is
module ${\overline{M}}$ is
$${\overline{e}}={1\over r^{2}}\left(\begin{array}[]{c}c\\
{h\over 2}\\
b\end{array}\right)\left(\begin{array}[]{ccc}b&h&c\end{array}\right)$$
(3.2)
(In order to show that ${\overline{M}}={\overline{e}}\,{{K}}[{\rm H}_{r}^{2}]^{\oplus 3}$ it suffices to check that
${{K}}[{\rm H}_{r}^{2}]=\{b{\alpha}+h\beta+c\gamma\}$.) Thus, the module $M$ is also
projective: $M=e\,{{K}}[{\rm H}_{r}^{2}]^{\oplus 3}$ where $e=1-{\overline{e}}$.
The ${{K}}[{\rm H}_{r}^{2}]$-module $\Omega^{1}({\rm H}_{r}^{2})$ can be treated similarly.
Endow the space ${\rm span}(b,h,c)$ with an $SL(2)$-covariant pairing
$$\langle b,\,c\rangle=\langle c,\,b\rangle=1,\langle h,\,h\rangle=2$$
(3.3)
which is inverse to the Casimir element
$${\rm Cas}=bc+{h^{2}\over 2}+cb.$$
Thus, on the space $sl(2)$
$$\langle b=\partial_{c},\quad\langle c=\partial_{b},\quad\langle h=2\partial_{h}$$
(3.4)
where $\langle x:sl(2)\to{{K}}$ is the "bra" operator, i.e., such that $\langle x\,(y):=\langle x,\,y\rangle$.
We extend the action of the operators $\langle b,\,\langle c,\,\langle h$ to the algebra ${{K}}[sl(2)^{*}]$ via the relations
(3.4), i.e., by means of the Leibniz rule. Thus, the action
$$sl(2)\otimes{{K}}[sl(2)^{*}]\to{{K}}[sl(2)^{*}]$$
is well defined. It is clear that it is $SL(2)$-covariant.
Otherwise stated, we have an $SL(2)$-covariant map
$$sl(2)\to{\rm Vect}(sl(2)^{*})$$
(3.5)
different from the one defined
above via the Poisson bracket: previously we associated a tangent
(Poisson) vector field to each element of $sl(2)$, whereas now we
associate a partial derivative to such an element.
By using the map (3.5) we associate a differential operator
to any element from $U(sl(2))$. Thus, the Casimir element is
related to the following operator
$$\Delta_{{{K}}[sl(2)^{*}]}=\partial_{b}\partial_{c}+2\partial_{h}^{2}+\partial_{c}\partial_{b}.$$
It is a non-compact analog of the Laplace operator $\Delta_{{{K}}[so(3)^{*}]}$. (More precisely, it is a multiple of the latter
Laplacian written in the basis $\{b,\,h,\,c\}$.)
Similarly, the element $\rm Id\,\rm Cas-{\overline{e}}$ is related to the
operator
$$\rm Mw_{{{K}}[sl(2)^{*}]}\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right)=\left(\begin{array}[]{c}\Delta({\alpha})\\
\Delta(\beta)\\
\Delta(\gamma)\end{array}\right)-\left(\begin{array}[]{c}\partial_{b}\\
\partial_{h}\\
\partial_{c}\end{array}\right)(\partial_{c},\,2\partial_{h},\,\partial_{b})\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right),\qquad\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right)\in{{K}}[sl(2)^{*}]^{\oplus 3}.$$
(3.6)
Note that $(\partial_{b}\rho,\,\partial_{h}\rho,\,\partial_{c}\rho)^{t}\in\rm Ker(\rm Mw_{{{K}}[sl(2)^{*}]})$.
Now, we want to relate the vector fields $\langle b,\,\langle h,\,\langle c$ to the fields tangent to all hyperboloids. Observe
that the following formula, similar to (2.2), holds:
$$r\,\partial_{r}=b\,\partial_{b}+h\,\partial_{h}+c\,\partial_{c}.$$
This entails the following relations, similar to (2.3):
$$\langle b=\partial_{c}={h\,B-b\,H\over 2r^{2}}+{b\over r}\partial_{r},\quad{1\over 2}\langle h=\partial_{h}={b\,C-c\,B\over 2r^{2}}+{h\over 2r}\partial_{r},\quad\langle c=\partial_{b}={c\,H-h\,C\over 2r^{2}}+{c\over r}\partial_{r}.$$
(3.7)
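The relations (3.7) can be checked symbolically, with $r^{2}$ replaced by the value $2bc+h^{2}/2$ of the Casimir; a sketch in Python with sympy (an assumption of ours), for the first relation:

```python
import sympy as sp

b, h, c = sp.symbols('b h c')
rho = 2*b*c + h**2/2    # r^2, the value of the Casimir

def H(f): return 2*b*sp.diff(f, b) - 2*c*sp.diff(f, c)
def B(f): return h*sp.diff(f, c) - 2*b*sp.diff(f, h)
def rdr(f): return b*sp.diff(f, b) + h*sp.diff(f, h) + c*sp.diff(f, c)  # r d/dr

f = b**2*c + h**3 - b*h*c    # an arbitrary test polynomial
# first relation of (3.7): d/dc = (hB - bH)/(2 r^2) + (b/r) d/dr
lhs = sp.diff(f, c)
rhs = (h*B(f) - b*H(f))/(2*rho) + b*rdr(f)/rho
assert sp.simplify(lhs - rhs) == 0
```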
Introduce the following notations
$${\cal B}={1\over 2}(h\,B-b\,H),\quad{\cal H}=b\,C-c\,B,\quad{\cal C}={1\over 2}(c\,H-h\,C).$$
Thus, we get
$$\langle b={{\cal B}\over r^{2}}+{b\over r}\partial_{r},\quad\langle h={{\cal H}\over r^{2}}+{h\over r}\partial_{r},\quad\langle c={{\cal C}\over r^{2}}+{c\over r}\partial_{r}.$$
(3.8)
Now, introduce the Laplace operator on a hyperboloid ${\rm H}_{r}^{2}$ similarly
to the sphere case:
$$\Delta_{{{K}}[{\rm H}_{r}^{2}]}={1\over r^{4}}\Big({\cal B}\,{\cal C}+{{\cal H}^{2}\over 2}+{\cal C}\,{\cal B}\Big).$$
Equivalently,
$$\Delta_{{{K}}[{\rm H}_{r}^{2}]}={1\over r^{2}}\Big(B\,C+{H^{2}\over 2}+C\,B\Big).$$
Following the pattern above, we define the Maxwell operator on the
hyperboloid ${\rm H}_{r}^{2}$ as
$$\rm Mw_{{{K}}[{\rm H}_{r}^{2}]}\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right)=e\left(\left(\begin{array}[]{c}\Delta_{{{K}}[{\rm H}_{r}^{2}]}({\alpha})\\
\Delta_{{{K}}[{\rm H}_{r}^{2}]}(\beta)\\
\Delta_{{{K}}[{\rm H}_{r}^{2}]}(\gamma)\end{array}\right)-{1\over r^{4}}\left(\begin{array}[]{c}{\cal C}\\
{{\cal H}\over 2}\\
{\cal B}\end{array}\right)({\cal B},\,{\cal H},\,{\cal C})\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right)\right),\qquad\left(\begin{array}[]{c}{\alpha}\\
\beta\\
\gamma\end{array}\right)\in M({\rm H}_{r}^{2}).$$
It is evident that the triples $({\cal C}\rho,\,{{\cal H}\rho\over 2},\,{\cal B}\rho)^{t}$ belong
to the kernel $\rm Ker\,(\rm Mw_{{{K}}[{\rm H}_{r}^{2}]})$.
To conclude this section, we emphasize that the Maxwell
operators on the algebras ${{K}}[{{R}}^{3}]$ and ${{K}}[{\rm H}_{r}^{2}]$ are
$SL(2)$-invariant provided the matrices appearing in the definition of these
operators are endowed with a proper action of the group $SL(2)$.
We exhibit a way to introduce such an action in the last section
in the more general setting of quantum algebras.
4 Elements of analysis on q-Minkowski space algebra
As it has been mentioned above, the q-Minkowski space algebra is
a particular case of the Reflection Equation Algebra, which is
defined as follows. Let $R:V^{\otimes 2}\to V^{\otimes 2}$ be a braiding, i.e.,
an invertible operator subject to the quantum Yang-Baxter equation
$$(R\otimes I)(I\otimes R)(R\otimes I)=(I\otimes R)(R\otimes I)(I\otimes R)$$
where $V$ is an $n$-dimensional ($n\geq 2$) vector space over the field ${{K}}$.
Let $L=\|l_{i}^{j}\|$ be an $n\times n$ matrix with entries
$l_{i}^{j},\,\,1\leq i,j\leq n$. Then the relation
$$R(L\otimes I)R(L\otimes I)-(L\otimes I)R(L\otimes I)R=0$$
(4.1)
is called the reflection equation. The algebra generated by the unity and the
elements $l_{i}^{j}$ subject to this system is called a reflection equation algebra (REA) and denoted ${\cal L}(R_{q})$.
If, in addition, the braiding $R$ is subject to the relation
$$(qI-R)(q^{-1}I+R)=0,\,\,q\in{{K}}$$
it is called a Hecke symmetry. If $q=1$ it becomes an involutive symmetry.
Let $n=2$ and $R$ be the product of the image in the space $V^{\otimes 2}$
of the universal R-matrix of the QG $U_{q}(sl(2))$ and the usual flip. Then
in the basis $\{x_{1}\otimes x_{1},x_{1}\otimes x_{2},x_{2}\otimes x_{1},x_{2}\otimes x_{2}\}$
(where $\{x_{1},\,x_{2}\}$ is an appropriate basis of $V$) the
braiding $R$ reads
$$R_{q}=\left(\begin{array}[]{cccc}q&0&0&0\\
0&q-q^{-1}&1&0\\
0&1&0&0\\
0&0&0&q\end{array}\right).$$
(4.2)
It is easy to see that $R$ is a Hecke symmetry. The parameter $q$ is assumed to be generic.
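Both the Hecke condition and the Yang-Baxter equation for this matrix can be checked symbolically; a sketch in Python with sympy (our illustrative assumption):

```python
import sympy as sp

q = sp.symbols('q')
I2 = sp.eye(2)
Rq = sp.Matrix([[q, 0, 0, 0],
                [0, q - 1/q, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, q]])

# Hecke condition (qI - R)(q^{-1}I + R) = 0
assert sp.simplify((q*sp.eye(4) - Rq)*(sp.eye(4)/q + Rq)) == sp.zeros(4, 4)

# quantum Yang-Baxter (braid) relation on V tensored with itself three times
R12 = sp.kronecker_product(Rq, I2)
R23 = sp.kronecker_product(I2, Rq)
assert sp.simplify(R12*R23*R12 - R23*R12*R23) == sp.zeros(8, 8)
```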
In what follows the algebra ${\cal L}(R_{q})$ corresponding to this Hecke symmetry is
called q-Minkowski space algebra. In this case
it is also denoted ${{K}}_{q}[{{R}}^{4}]$.
Also, we use the following notation for generators of this algebra
$$L=\left(\begin{array}[]{cc}l_{1}^{1}&l_{1}^{2}\\
l_{2}^{1}&l_{2}^{2}\end{array}\right)=\left(\begin{array}[]{cc}a&b\\
c&d\end{array}\right).$$
Let us explicitly write down the system (4.1) with the Hecke
symmetry (4.2):
$$\begin{array}[]{l@{\hspace{20mm}}l}qab-q^{-1}ba=0&q(bc-cb)=(q-q^{-1})a(d-a)\\
qca-q^{-1}ac=0&q(cd-dc)=(q-q^{-1})ca\\
ad-da=0&q(db-bd)=(q-q^{-1})ab.\end{array}$$
(4.3)
Now, rewrite the system (4.3) in the basis
$\{l,\,h,\,b,\,c\}$ where $l=q^{-1}a+qd,\quad h=a-d$:
$$\begin{array}[]{l@{\hspace{20mm}}l}q^{2}hb-bh=-(q-q^{-1})lb&bl=lb\\
q^{2}ch-hc=-(q-q^{-1})lc&cl=lc\\
\left(q^{2}+1\right)\left(bc-cb\right)+\left(q^{2}-1\right)h^{2}=-(q-q^{-1})lh&hl=lh\end{array}$$
(4.4)
Observe that the element $l$ is central but, in contrast with the
classical case, it appears in the equations of the left column of
the system (4.4). Also, we need the algebra
${{K}}_{q}[{{R}}^{3}]={{K}}_{q}[{{R}}^{4}]/\langle l\rangle$ which is a braided
analog of the coordinate algebra ${{K}}[{{R}}^{3}]$. It is generated by
three elements $b,\,h,\,c$ subject to
$$q^{2}hb-bh=0,\,\,\,\,q^{2}ch-hc=0,\,\,\,\,(q^{2}+1)(bc-cb)+(q^{2}-1)h^{2}=0.$$
(4.5)
The generating spaces of these algebras, ${\rm span}(a,b,c,d)={\rm span}(b,h,c,l)$ and ${\rm span}(b,h,c)$,
are respectively denoted ${\cal L}$ and ${\cal SL}$.
Let us endow them with an action of the QG $U_{q}(sl(2))$ as follows.
Recall that the QG $U_{q}(sl(2))$ is generated by the unit and four
generators $K,\,K^{-1},\,X,\,Y$ subject to the relations
$$K\,K^{-1}=1,\quad K^{\epsilon}\,X=q^{2\epsilon}X\,K^{\epsilon},\quad K^{\epsilon}\,Y=q^{-2\epsilon}Y\,K^{\epsilon},\quad XY-YX={{K-K^{-1}}\over{q-q^{-1}}},\quad\epsilon\in\{-1,1\}.$$
There exists a family of coproducts and corresponding antipodes which (together with the standard counit)
endow this algebra with a Hopf structure. This family is parameterized by a continuous parameter $\theta$
assumed to be a real number (cf. [DG]):
$$\Delta(K^{\epsilon})=K^{\epsilon}\otimes K^{\epsilon},\quad\Delta(X)=X\otimes K^{\theta-1}+K^{\theta}\otimes X,\quad\Delta(Y)=Y\otimes K^{-\theta}+K^{1-\theta}\otimes Y.$$
(In fact, all coproducts are equivalent, cf. [DG].)
We define an action of the QG $U_{q}(sl(2))$ on the space ${\cal SL}$ as follows
$$K^{\epsilon}(b)=q^{2\epsilon}b,\,\,K^{\epsilon}(h)=h,\,\,K^{\epsilon}(c)=q^{-2\epsilon}c,$$
$$X(b)=0,\,\,X(h)=-q^{\theta}2_{q}b,\,\,X(c)=q^{1-\theta}h,$$
$$Y(b)=-q^{-\theta}h,\,\,Y(h)=q^{\theta-1}2_{q}c,\,\,Y(c)=0.$$
Hereafter, $n_{q}={q^{n}-q^{-n}\over q-q^{-1}}$. Moreover, we put
$X(l)=Y(l)=0,\,K^{\epsilon}(l)=l$. Thus, as a $U_{q}(sl(2))$ module, the
space ${\cal L}$ is a direct sum of ${\cal SL}$ and ${{K}}\,l$.
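One can verify symbolically that these formulas define a representation of $U_{q}(sl(2))$ on ${\cal SL}$; a sketch in Python with sympy (the library is our assumption), writing $K$, $X$, $Y$ as $3\times 3$ matrices in the basis $\{b,h,c\}$:

```python
import sympy as sp

q, th = sp.symbols('q theta', positive=True)
tq = q + 1/q    # 2_q

# matrices of K, X, Y on SL = span(b, h, c); columns = images of b, h, c
K = sp.diag(q**2, 1, q**-2)
X = sp.Matrix([[0, -q**th*tq, 0],
               [0, 0, q**(1 - th)],
               [0, 0, 0]])
Y = sp.Matrix([[0, 0, 0],
               [-q**(-th), 0, 0],
               [0, q**(th - 1)*tq, 0]])

# the defining relations of U_q(sl(2)) hold in this representation
assert sp.simplify(K*X - q**2*X*K) == sp.zeros(3, 3)
assert sp.simplify(K*Y - q**(-2)*Y*K) == sp.zeros(3, 3)
assert sp.simplify(X*Y - Y*X - (K - K.inv())/(q - 1/q)) == sp.zeros(3, 3)
```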
The reader can easily check that the structure of the algebra
${{K}}_{q}[{{R}}^{3}]$ is compatible with the action of the QG $U_{q}(sl(2))$
extended to higher powers of ${\cal SL}$ via the coproduct $\Delta$. In
order to do this it suffices to check that the system (4.5)
is invariant w.r.t. the QG $U_{q}(sl(2))$.
Now, we describe a systematic way of introducing Casimir-like elements
and covariant pairings into these spaces.
In order to do this we need two special operators playing the role
of the parity operators in superspaces.
Let $R:V^{\otimes 2}\to V^{\otimes 2}$ be a braiding. It is called skew-invertible if there exists an operator
$\Psi:V^{\otimes 2}\to V^{\otimes 2}$ such that
$${\rm Tr}_{2}R_{12}\Psi_{23}=P_{13}$$
where $P_{13}$ is the flip transposing the first and third spaces in the product $V^{\otimes 3}$
and ${\rm Tr}_{2}$ stands for the trace applied in the second space.
Fixing a basis $\{x_{i}\}$ of $V$ and representing the operators $R$
and ${\Psi}$ in the basis $\{x_{i}\otimes x_{j}\}$ of $V^{\otimes 2}$
by the matrices $R_{ia}^{kb}$ and ${\Psi}^{al}_{bj},$ respectively, we rewrite this relation
as follows
$$\sum_{a,b=1}^{n}R_{ia}^{kb}{\Psi}^{al}_{bj}=\delta^{l}_{i}\,\delta^{k}_{j}.$$
For any skew-invertible Hecke symmetry the operators
$$B:={\rm Tr}_{1}\Psi\,\,(B_{i}^{j}={\Psi}^{aj}_{ai}),\qquad C:={\rm Tr}_{2}\Psi\,\,(C_{i}^{j}={\Psi}^{ja}_{ia})$$
are well defined and the elements ${\rm Tr}_{q}\,L^{k}={\rm Tr}\,C\,L^{k}$ are central in the algebra ${\cal L}(R_{q})$ (cf. [GPS1]).
The operator $L^{k}\to{\rm Tr}_{q}\,L^{k}$ is called the quantum
trace. It can be extended by linearity to all polynomials of the
matrix $L$. For the q-Minkowski space algebra ${{K}}_{q}[{{R}}^{4}]$ we
have $C={\rm diag}(q^{-3},\,q^{-1})$. Hence, $l=q^{2}\,{\rm Tr}_{q}\,L$. Also,
we consider the central element $q^{2}\,{\rm Tr}_{q}\,L^{2}$ in this algebra. In the basis $\{b,c,h,l\}$ its explicit form
is
$${\rm Cas}_{gl}=q^{-1}bc+qcb+{1\over{2_{q}}}(h^{2}+l^{2}).$$
(4.6)
Its image in the algebra ${{K}}_{q}[{{R}}^{3}]$ is
$${\rm Cas}_{sl}=q^{-1}bc+{1\over{2_{q}}}h^{2}+qcb.$$
(4.7)
It is a central element
in this algebra. So, we get braided analogs of the $gl(2)$ and
$sl(2)$ Casimir elements, respectively. The quotient of the
algebra ${{K}}_{q}[{{R}}^{3}]$ over the ideal generated by the element
${\rm Cas}_{sl}-r^{2}$ is called a quantum (braided or
q-)hyperboloid.
Remark 6
Observe that ${\rm Cas}_{sl}$ is the unique quadratic element
(up to a factor) in ${{K}}_{q}[{{R}}^{3}]$ which is $U_{q}(sl(2))$-invariant. As for the algebra
${{K}}_{q}[{{R}}^{4}]$, since the element $l^{2}$ is also $U_{q}(sl(2))$-invariant we have a
family of such elements, namely, all linear combinations
of ${\rm Cas}_{sl}$ and $l^{2}$.
We also need $U_{q}(sl(2))$-invariant pairings in the spaces ${\cal L}={\rm span}(b,c,h,l)$ and ${\cal SL}={\rm span}(b,c,h)$.
For a general skew-invertible Hecke symmetry
such a pairing on the space ${\cal L}={\rm span}(l_{i}^{j})$ is defined via the operator $B$. Namely, we put
$$\langle l_{i}^{j},\,l_{k}^{l}\rangle=B_{k}^{j}\delta_{i}^{l}.$$
Also, an analog of the space ${\cal SL}$ can be defined in this case as
the subspace of ${\cal L}$ which is the kernel of the linear map
defined on the generators $l_{i}^{j}$ by $l_{i}^{j}\to\delta_{i}^{j}$. Note,
that the space ${\cal L}$ can be identified with ${\rm End}(V)$ so that
this map is nothing but a braided analog of the trace written in
the basis $\{l_{i}^{j}\}$ (cf. [GPS2]).
In the case of the q-Minkowski space algebra the pairing table is
(we only exhibit terms with non-trivial pairing):
$$\langle a,a\rangle=q^{-1},\,\,\langle b,c\rangle=q^{-3},\,\,\langle c,b\rangle=q^{-1},\,\,\langle d,d\rangle=q^{-3}\,\,{\rm hence}\,\,\langle h,h\rangle=q^{-2}2_{q},\,\,\langle l,l\rangle=q^{-2}2_{q}.$$
(4.8)
So, contrary to the classical
case, this pairing is not symmetric. Note that the restriction of
this pairing to the space ${\cal SL}$ is the unique (up to a factor)
$U_{q}(sl(2))$-covariant pairing. However, we have a freedom for the
pairing $\langle l,l\rangle$ on the space ${\cal L}$.
Our next goal is to define a braided analog of tangent vector fields. In order to do so, we endow
the space ${\cal SL}$ with a braided analog of the Lie bracket of $sl(2)$. Such analogs
of the $gl(n)$ and $sl(n)$ Lie brackets can be defined in the spaces ${\cal L}$ and ${\cal SL},$ respectively, for any skew-invertible
Hecke symmetry $R$. We refer the reader to [GPS2] for their construction (see also Remark 9).
But in the low dimensional case in question we define such a bracket
on the space ${\cal SL}$ by only using the fact that it is covariant.
Let us extend the action of the QG $U_{q}(sl(2))$ to the space ${\cal SL}\otimes{\cal SL},$
and decompose it in a direct sum of irreducible $U_{q}(sl(2))$ submodules
${\cal SL}\otimes{\cal SL}=V_{0}\oplus V_{1}\oplus V_{2}$ where the subscript stands
for the spin. Then the operator
$$[\,,\,]:{\cal SL}\otimes{\cal SL}\to{\cal SL}$$
is a $U_{q}(sl(2))$ morphism iff it is trivial on the components $V_{0}$ and
$V_{2},$ and is an isomorphism between $V_{1}$ and ${\cal SL}$. This
condition defines the bracket uniquely (up to a factor).
The multiplication table is as follows
$$[b,b]=0,\quad[b,h]=-w\,b,\quad[b,c]=w\,{q\over 2_{q}}\,h,\quad[h,b]=w\,q^{2}\,b,$$
$$[h,h]=w\,(q^{2}-1)\,h,\quad[h,c]=-w\,c,\quad[c,b]=-w\,{q\over 2_{q}}\,h,\quad[c,h]=w\,q^{2}\,c,\quad[c,c]=0.$$
Here $w$ is an arbitrary factor. It can be fixed if we introduce the "enveloping algebra"
of this "q-Lie algebra" by the relations
$$q^{2}hb-bh=\hbar\,b,\,\,\,\,(q^{2}+1)(bc-cb)+(q^{2}-1)h^{2}=\hbar\,h,\,\,\,\,q^{2}ch-hc=\hbar\,c$$
(4.9)
and require the above bracket to define a representation of this enveloping algebra. Then $w=\hbar\,(q^{4}-q^{2}+1)^{-1}$.
We denote by $sl(2)_{q}$ the space ${\cal SL}$ endowed with the above bracket, and by
$U(sl(2)_{q})$ the algebra defined by the relations (4.9).
Introduce q-analogs of the adjoint operators as follows
$B_{q}={\rm ad\,}(b),\,H_{q}={\rm ad\,}(h),\,C_{q}={\rm ad\,}(c)$ where the action ${\rm ad\,}$ is defined via the above bracket.
These operators in the basis $\{b,h,c\}$ are
$$B_{q}=w\left(\begin{array}[]{ccc}0&-1&0\\
0&0&{q\over 2_{q}}\\
0&0&0\end{array}\right)\quad H_{q}=w\left(\begin{array}[]{ccc}q^{2}&0&0\\
0&q^{2}-1&0\\
0&0&-1\end{array}\right)\quad C_{q}=w\left(\begin{array}[]{ccc}0&0&0\\
-{q\over 2_{q}}&0&0\\
0&q^{2}&0\end{array}\right)\quad$$
(4.10)
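The claim that the bracket represents $U(sl(2)_{q})$ can be confirmed on the matrices (4.10); a sketch in Python with sympy (the library is our assumption; the garbled structure constant of (4.9) is written as $\hbar$):

```python
import sympy as sp

q, hb = sp.symbols('q hbar')
tq = q + 1/q                     # 2_q
w = hb/(q**4 - q**2 + 1)         # the normalization fixed in the text

Bq = w*sp.Matrix([[0, -1, 0], [0, 0, q/tq], [0, 0, 0]])
Hq = w*sp.diag(q**2, q**2 - 1, -1)
Cq = w*sp.Matrix([[0, 0, 0], [-q/tq, 0, 0], [0, q**2, 0]])

# the matrices realize the defining relations (4.9) of U(sl(2)_q)
assert sp.simplify(q**2*Hq*Bq - Bq*Hq - hb*Bq) == sp.zeros(3, 3)
assert sp.simplify(q**2*Cq*Hq - Hq*Cq - hb*Cq) == sp.zeros(3, 3)
assert sp.simplify((q**2 + 1)*(Bq*Cq - Cq*Bq)
                   + (q**2 - 1)*Hq*Hq - hb*Hq) == sp.zeros(3, 3)
```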
Proposition 7
The operators $B_{q},\,H_{q},\,C_{q}$ are subject to
$$q^{-1}b\,C_{q}+{h\,H_{q}\over 2_{q}}+qc\,B_{q}=0.$$
(4.11)
Proof is straightforward.
Now, we want to extend the operators $B_{q},\,H_{q},\,C_{q}$ to the
algebra ${{K}}_{q}[{{R}}^{3}],$ preserving relation (4.11). This will be
done by a slightly modified version of the method suggested in [A].
Let $V_{k}$ be the $U_{q}(sl(2))$ submodule of ${\cal SL}^{\otimes k}$ with the highest vector $b^{k}$, i.e.,
$$V_{k}={\rm span}(b^{k},\,Y(b^{k}),\,Y^{2}(b^{k}),...,Y^{2k}(b^{k})).$$
Note that $\dim V_{k}=2k+1$.
There exists a $U_{q}(sl(2))$-covariant projector $P_{k}:{\cal SL}^{\otimes k}\to V_{k}$. It can be realized as a polynomial in
$${\textsc{R}}_{12}={\textsc{R}}\otimes I_{2k-1},\,\,{\textsc{R}}_{23}=I\otimes{\textsc{R}}\otimes I_{2k-2},\,\,...,\,\,{\textsc{R}}_{2k\,2k+1}=I_{2k-1}\otimes{\textsc{R}}$$
where R is the product of the universal quantum
R-matrix represented in the space ${\cal SL}^{\otimes 2}$ and the flip. A
way of constructing such operators $P_{k}$ is described in
[OP]. Observe that the braiding R is of the
Birman-Murakami-Wenzl type, and therefore the results of that
paper can be applied. Then the extensions of the operators
$B_{q},\,H_{q},\,C_{q}$ to the component $V_{k}$ (denoted
$B_{q}^{(k)},\,H_{q}^{(k)},\,C_{q}^{(k)},$
respectively) are defined as follows
$$B_{q}^{(k)}=\tau_{k}\,P_{k}(B_{q}\otimes I_{2k}),\,\,\,H_{q}^{(k)}=\tau_{k}\,P_{k}(H_{q}\otimes I_{2k}),\,\,\,C_{q}^{(k)}=\tau_{k}\,P_{k}(C_{q}\otimes I_{2k})$$
where the factor $\tau_{k}$ can be found from the property that the
operators $B_{q}^{(k)},\,H_{q}^{(k)},\,C_{q}^{(k)}$ realize a representation of
the algebra $U(sl(2)_{q})$. Thus, the prolongation of the operators $B_{q},\,H_{q},\,C_{q}$ is
well defined on all components $V_{k}$.
Now, observe that for a generic $q$ we have ${{K}}_{q}[{{R}}^{3}]\cong(\oplus V_{k})\otimes Z$
where $Z$ is the center of the algebra ${{K}}_{q}[{{R}}^{3}]$.
By putting $B_{q}(v\otimes z)=B_{q}^{(k)}(v)\otimes z$ where $v\in V_{k},\,z\in Z$, and analogously
for $H_{q}$ and $C_{q}$, we define the operators
$B_{q},\,H_{q},\,C_{q}$ on the whole algebra ${{K}}_{q}[{{R}}^{3}]$. We call them
generating braided tangent vector fields.
By considering their combinations with coefficients from
${{K}}_{q}[{{R}}^{3}]$ we get, by definition, all braided tangent vector fields.
This definition is motivated by the following.
Proposition 8
The extended operators $B_{q},\,H_{q},\,C_{q}$ are subject to the relation (4.11).
Thus, the space of all braided tangent vector fields is treated as
a left ${{K}}_{q}[{{R}}^{3}]$-module which is the quotient of the free
module ${{K}}_{q}[{{R}}^{3}]^{\oplus 3}$ by the submodule generated by the
l.h.s. of (4.11).
Remark 9
Note that we have defined braided tangent vector fields without
using any form of the Leibniz rule (in other words, any coproduct
in the algebra ${{K}}_{q}[{{R}}^{3}]$). Nevertheless, if $R$ is a
skew-invertible Hecke symmetry such a coproduct exists in the
so-called modified REA which is a braided analog of the enveloping
algebra $U(gl(n))$. This modified REA can be obtained from the REA
${\cal L}(R_{q})$ by a shift (cf. [GPS2] for details). It is possible to
use this coproduct in order to define braided analogs of tangent
vector fields in the space ${\cal SL}$ for $n>2$. However, it is not
clear whether these vector fields satisfy relations analogous to (4.11). Nevertheless,
we want to point out that the mentioned coproduct
is a $U_{q}(sl(n))$ morphism. By using the construction of the paper [LS] it
is also possible to get a series of representations of the algebra
$U(sl(2)_{q})$ close to those above.
But this way of constructing representations of this
algebra does not give rise to braided tangent vector fields since the relation (4.11) is not satisfied.
5 Braided Maxwell operator on quantum algebras
In this section we define the Maxwell operator on the algebras ${{K}}_{q}[{{R}}^{3}]$, ${{K}}_{q}[{\rm H}_{r}^{2}]$, and ${{K}}_{q}[{{R}}^{4}]$
following the classical pattern discussed above.
First, we introduce braided analogs of the relations
(3.7). In order to do this, we need to define braided
analogs of the operators ${\cal B},\,{\cal H},\,{\cal C}$. We put
$${\cal B}_{q}=w^{-1}(q^{2}hB_{q}-bH_{q}),\,\,{\cal H}_{q}=w^{-1}((q^{2}+1)(bC_{q}-cB_{q})+(q^{2}-1)hH_{q}),\,\,{\cal C}_{q}=w^{-1}(q^{2}cH_{q}-hC_{q}).$$
Here $w$ is a factor coming in (4.10). For the sake of
convenience we put $w=2_{q}$. Then for $q\to 1$ we retrieve the
classical operators ${\cal B},\,{\cal H},$ and ${\cal C}$ respectively.
In the second step we define the derivative $\partial_{r}$ in
the algebra ${{K}}_{q}[{{R}}^{3}]$ in the classical way, i.e., we assume
the derivation in $r$ to be subject to the Leibniz rule and to
act on the generators via the usual formulae
$$\partial_{r}(b)={b\over r},\,\partial_{r}(h)={h\over r},\,\partial_{r}(c)={c\over r}.$$
(5.1)
Recall that $r$ appears in the definition of a quantum
hyperboloid, and observe that this way of introducing the
derivative $\partial_{r}$ is compatible with the defining relations
of the algebras ${{K}}_{q}[{{R}}^{3}]$ and ${{K}}_{q}[{{R}}^{4}]$. Also, note that
formulae (5.1) are invariant w.r.t. the renormalization $r\to a\,r,\,\,a\not=0$, so the choice of normalization of the Casimir in the
definition of the q-hyperboloid does not matter.
Observe that on the space ${\cal SL}$ the partial derivatives
$\partial_{b},\,\partial_{h},\,\partial_{c}$
can be identified with "bra" operators via the pairing
(4.8). Namely, we have
$$\partial_{b}=q\langle c,\,\,\partial_{h}={q^{2}\over 2_{q}}\langle h,\,\,\partial_{c}=q^{3}\langle b.$$
Now, we look for coefficients $\mu$ and $\nu$ such that
the following holds on ${\cal SL}$:
$$\langle b=\mu{{\cal B}_{q}\over r^{2}}+\nu{b\over r}\partial_{r},\quad\langle h=\mu{{\cal H}_{q}\over r^{2}}+\nu{h\over r}\partial_{r},\quad\langle c=\mu{{\cal C}_{q}\over r^{2}}+\nu{c\over r}\partial_{r}.$$
(5.2)
Proposition 10
These relations are valid with $\mu=q^{-4}$ and $\nu=q^{-2}$.
Proof We have to check that by applying the operator equalities (5.2) to any element
from ${\cal SL}$ we get a correct result.
For example, by applying the first equality to the element $b$ we get a correct relation
$$0=\langle b,b\rangle={q^{2}hB_{q}(b)-bH_{q}(b)\over q^{4}r^{2}}+{b\over rq^{2}}\partial_{r}(b)=0.$$
It is more difficult to check the three relations containing the
pairings $\langle b,c\rangle,\,\langle c,b\rangle$ and $\langle h,h\rangle$. Let us check the first of them, leaving the others
to the reader:
$$q^{-3}=\langle b,c\rangle={q^{2}hB_{q}(c)-bH_{q}(c)\over q^{4}r^{2}}+{b\over rq^{2}}\partial_{r}(c)={1\over q^{4}r^{2}}(q^{2}h({qh\over 2_{q}})-b(-c))+{1\over q^{2}r^{2}}bc=$$
$${h^{2}\over q2_{q}r^{2}}+{bc\over q^{4}r^{2}}+{1\over q^{2}r^{2}}(cb+{1-q^{2}\over 1+q^{2}}h^{2})={1\over q^{3}r^{2}}(q^{-1}bc+{h^{2}\over 2_{q}}+qcb)=q^{-3}.$$
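The regrouping in the last line is an identity among the coefficients of the (independent) words $bc$, $cb$, and $h^{2}$. A minimal exact-arithmetic check of this coefficient identity, assuming the convention $2_{q}=q+q^{-1}$:

```python
from fractions import Fraction as F

def identity_holds(q):
    q2 = q + 1/q  # assumed convention for the q-number 2_q
    # coefficients of the words bc, cb, h^2 on each side of the regrouping
    lhs = {'bc': 1/q**4,
           'cb': 1/q**2,
           'h2': 1/(q*q2) + (1 - q**2)/(q**2*(1 + q**2))}
    rhs = {'bc': 1/q**4,        # (1/q^3) * q^{-1}
           'cb': 1/q**2,        # (1/q^3) * q
           'h2': 1/(q**3*q2)}   # (1/q^3) * (1/2_q)
    return all(lhs[w] == rhs[w] for w in lhs)

assert all(identity_holds(F(n, 7)) for n in (2, 3, 5, 11))
```

Exact rational arithmetic at several sample values of $q$ suffices here, since each coefficient is a rational function of $q$.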
Note that the r.h.s. of relations (5.2) are well defined on the whole algebra ${{K}}_{q}[{{R}}^{3}]$, so we
extend the operators $\langle b,\,\langle h,\,\langle c$ to the algebra ${{K}}_{q}[{{R}}^{3}]$ via these relations.
Finally, we arrive at the following definition of the Laplace
operators in the algebras ${{K}}_{q}[{{R}}^{3}]$ and ${{K}}_{q}[{\rm H}_{r}^{2}],$
respectively:
$$\Delta_{{{K}}_{q}[{{R}}^{3}]}=q^{-1}\langle b\langle c+{1\over 2_{q}}\langle h\langle h+q\langle c\langle b=q^{-5}\partial_{c}\partial_{b}+2_{q}q^{-4}\partial_{h}\partial_{h}+q^{-3}\partial_{b}\partial_{c},$$
$$\Delta_{{{K}}_{q}[{\rm H}_{r}^{2}]}={1\over q^{8}r^{4}}(q^{-1}{\cal B}_{q}{\cal C}_{q}+{{\cal H}_{q}^{2}\over 2_{q}}+q{\cal C}_{q}{\cal B}_{q}).$$
Now, we define the braided Maxwell operators on these algebras as follows:
$$\rm Mw_{{{K}}_{q}[{{R}}^{3}]}\left(\begin{array}[]{c}{\alpha}\\ \beta\\ \gamma\end{array}\right)=\left(\begin{array}[]{c}\Delta_{{{K}}_{q}[{{R}}^{3}]}({\alpha})\\ \Delta_{{{K}}_{q}[{{R}}^{3}]}(\beta)\\ \Delta_{{{K}}_{q}[{{R}}^{3}]}(\gamma)\end{array}\right)-\left(\begin{array}[]{c}\partial_{b}\\ \partial_{h}\\ \partial_{c}\end{array}\right)(q^{-5}\partial_{c},\,q^{-4}{2_{q}}\partial_{h},\,q^{-3}\partial_{b})\left(\begin{array}[]{c}{\alpha}\\ \beta\\ \gamma\end{array}\right).$$
$$\rm Mw_{{{K}}_{q}[{\rm H}_{r}^{2}]}\left(\begin{array}[]{c}{\alpha}\\ \beta\\ \gamma\end{array}\right)=e\left(\left(\begin{array}[]{c}\Delta_{{{K}}_{q}[{\rm H}_{r}^{2}]}({\alpha})\\ \Delta_{{{K}}_{q}[{\rm H}_{r}^{2}]}(\beta)\\ \Delta_{{{K}}_{q}[{\rm H}_{r}^{2}]}(\gamma)\end{array}\right)-{1\over q^{8}r^{4}}\left(\begin{array}[]{c}q^{-1}{\cal C}_{q}\\ {{\cal H}_{q}\over 2_{q}}\\ q{\cal B}_{q}\end{array}\right)({\cal B}_{q},\,{\cal H}_{q},\,{\cal C}_{q})\left(\begin{array}[]{c}{\alpha}\\ \beta\\ \gamma\end{array}\right)\right),$$
where ${\alpha},\beta,\gamma$ in these formulas belong to the
algebras ${{K}}_{q}[{{R}}^{3}]$ and ${{K}}_{q}[{\rm H}_{r}^{2}],$ respectively,
$e=1-{\overline{e}}$, and
$${\overline{e}}={1\over r^{2}}\left(\begin{array}[]{c}q^{-1}c\\ {h\over 2_{q}}\\ qb\end{array}\right)\left(\begin{array}[]{ccc}b&h&c\end{array}\right).$$
Moreover, in the second formula we assume that the triples $({\alpha},\beta,\gamma)^{t}$ belong to the projective module
$e\,{{K}}_{q}[{\rm H}_{r}^{2}]^{\oplus 3}$, whereas in the first formula the columns
$({\alpha},\beta,\gamma)^{t}$ range over the free module ${{K}}_{q}[{{R}}^{3}]^{\oplus 3}$.
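Idempotency of $\overline{e}$ amounts to the relation $(b\ \,h\ \,c)\,(q^{-1}c,\,{h\over 2_{q}},\,qb)^{t}=q^{-1}bc+{h^{2}\over 2_{q}}+qcb=r^{2}$, i.e., to the Casimir relation. A minimal numeric sketch (not from the paper) checking $\overline{e}^{\,2}=\overline{e}$ in the commutative limit $q\to 1$, where the constraint becomes $bc+{h^{2}\over 2}+cb=r^{2}$ and the values below are illustrative:

```python
from fractions import Fraction as F

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# illustrative commutative values subject to the q = 1 Casimir constraint
b, h, c = F(1), F(2), F(1)
r2 = b*c + h*h/2 + c*b          # the classical Casimir; here r^2 = 4
col = [c, h/2, b]               # q = 1 limit of the column (q^{-1}c, h/2_q, qb)^t
row = [b, h, c]
ebar = [[col[i]*row[j]/r2 for j in range(3)] for i in range(3)]
assert matmul(ebar, ebar) == ebar   # idempotency follows from the Casimir relation
```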
Let us justify this definition.
The column $(\partial_{b},\,\partial_{h},\,\partial_{c})^{t}$ in the first formula is universal (in the classical case
it corresponds to the de Rham operator).
The row $(q^{-5}\partial_{c},\,q^{-4}{2_{q}}\partial_{h},\,q^{-3}\partial_{b})$ is chosen so that
$$\Delta_{{{K}}_{q}[{{R}}^{3}]}=(q^{-5}\partial_{c},\,q^{-4}{2_{q}}\partial_{h},\,q^{-3}\partial_{b})\left(\begin{array}[]{c}\partial_{b}\\ \partial_{h}\\ \partial_{c}\end{array}\right).$$
The second formula can be obtained from the first one by
disregarding the second summands in formulae (5.2). Now,
we treat the module $e\,{{K}}_{q}[{\rm H}_{r}^{2}]^{\oplus 3}$ as the space of
braided differential forms $\Omega^{1}_{q}({\rm H}_{r}^{2})$
(for other modules in question it can be
done in a similar way). The space $\Omega^{1}_{q}({\rm H}_{r}^{2})$, treated as a
right ${{K}}_{q}[{\rm H}_{r}^{2}]$ module, is the quotient of the space of all
braided differential forms $(db)\,{\alpha}+(dh)\,\beta+(dc)\,\gamma$
by the forms $(q^{-1}(db)\,c+{{(dh)\,h}\over 2_{q}}+q(dc)\,b)\rho$, where ${\alpha},\beta,\gamma,\rho\in{{K}}_{q}[{\rm H}_{r}^{2}]$. Note that
we have defined the element $q^{-1}(db)\,c+{{(dh)\,h}\over 2_{q}}+q(dc)\,b$
by replacing the left factors in the Casimir ${\rm Cas}_{sl}$ by their "differentials",
without using either the Leibniz rule or any transposition of "functions" and "differentials".
Similarly to the classical pattern, we have the following.
Proposition 11
1. The triples $(\partial_{b}\rho,\,\partial_{h}\rho,\,\partial_{c}\rho)^{t}$
belong to $\rm Ker(\rm Mw_{{{K}}_{q}[{{R}}^{3}]})$ provided the operator
$\Delta_{{{K}}_{q}[{{R}}^{3}]}$ commutes with
$\partial_{b},\,\partial_{h},\,\partial_{c}$.
2. The triples $(q^{-1}{\cal C}_{q}\rho,\,{{\cal H}_{q}\over 2_{q}}\rho,\,q{\cal B}_{q}\rho)^{t}$ belong to $\rm Ker(\rm Mw_{{{K}}_{q}[{\rm H}_{r}^{2}]})$ provided
$$e\left(\left(\begin{array}[]{c}q^{-1}{\cal C}_{q}\\ {{\cal H}_{q}\over 2_{q}}\\ q{\cal B}_{q}\end{array}\right)\Delta_{{{K}}_{q}[{\rm H}_{r}^{2}]}-\Delta_{{{K}}_{q}[{\rm H}_{r}^{2}]}\left(\begin{array}[]{c}q^{-1}{\cal C}_{q}\\ {{\cal H}_{q}\over 2_{q}}\\ q{\cal B}_{q}\end{array}\right)\right)=0$$
on the algebra ${{K}}_{q}[{\rm H}_{r}^{2}]$.
Now, we pass to the q-Minkowski space algebra ${{K}}_{q}[{{R}}^{4}]$ and
discuss a way to convert the Casimir element (4.6) into an operator.
Since ${\rm Cas}_{gl}={\rm Cas}_{sl}+{l^{2}\over 2_{q}}$ and a method
of assigning an operator to the element ${\rm Cas}_{sl}$ is
presented above, we only have to define the operator $\partial_{l}$
on the algebra ${{K}}_{q}[{{R}}^{4}],$ keeping in mind that
we have $\partial_{l}={q^{2}\over 2_{q}}\langle l$ on the space
${\cal L}$. Since the element $l$ is central, it is possible to define
the extension of the derivative $\partial_{l}$ via the usual
Leibniz rule. However, such an extension is not compatible with the
first column of the system (4.4). Nevertheless,
conjecturally any element of the q-Minkowski space algebra
${{K}}_{q}[{{R}}^{4}]$ can be written in a completely q-symmetrized form as
discussed in [GPS2], where this conjecture is proven for low
dimensional components. Assuming this conjecture to be true, we
define the derivative $\partial_{l}$ via the Leibniz rule, but only
on elements presented in such a "canonical" form.
Remark 12
In general, when defining derivatives or vector fields, it is often convenient to
define them on basis elements. This spares us from checking that the
Leibniz rule (if it is used in the definition of these operators) is
compatible with the defining relations. This idea is close to that of
the paper [G], where the construction of the Koszul complexes
(similar to the de Rham complexes) uses "R-symmetric" and
"R-skew-symmetric" algebras whose elements are realized in a
"(skew)symmetrized" form. It is also similar to the above
described construction of the operators $B_{q},\,H_{q},\,C_{q}$.
After having defined this derivative, we can present the Laplace
operator on the algebra in question in the following form:
$$\Delta_{{{K}}_{q}[{{R}}^{4}]}=\Delta_{{{K}}_{q}[{{R}}^{3}]}+{1\over 2_{q}}\langle l\langle l=\Delta_{{{K}}_{q}[{{R}}^{3}]}+{2_{q}\over q^{4}}\partial_{l}\partial_{l}.$$
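The last equality uses $\partial_{l}={q^{2}\over 2_{q}}\langle l$, i.e., $\langle l={2_{q}\over q^{2}}\partial_{l}$, so that ${1\over 2_{q}}\langle l\langle l={1\over 2_{q}}\big({2_{q}\over q^{2}}\big)^{2}\partial_{l}\partial_{l}={2_{q}\over q^{4}}\partial_{l}\partial_{l}$. A short exact-arithmetic check of this coefficient identity (with the assumed convention $2_{q}=q+q^{-1}$, although the identity holds for any nonzero value of $2_{q}$):

```python
from fractions import Fraction as F

# coefficient identity behind Delta_{K_q[R^4]}: (1/2_q)*(2_q/q^2)^2 == 2_q/q^4
for n in (2, 3, 5):
    q = F(n, 7)
    q2 = q + 1/q     # assumed q-number convention 2_q = q + q^{-1}
    assert (q2/q**2)**2 / q2 == q2/q**4
```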
Also, we define the Maxwell operator on this algebra:
$$\rm Mw_{{{K}}_{q}[{{R}}^{4}]}\left(\begin{array}[]{c}{\alpha}\\ \beta\\ \gamma\\ \delta\end{array}\right)=\left(\begin{array}[]{c}\Delta_{{{K}}_{q}[{{R}}^{4}]}({\alpha})\\ \Delta_{{{K}}_{q}[{{R}}^{4}]}(\beta)\\ \Delta_{{{K}}_{q}[{{R}}^{4}]}(\gamma)\\ \Delta_{{{K}}_{q}[{{R}}^{4}]}(\delta)\end{array}\right)-\left(\begin{array}[]{c}\partial_{b}\\ \partial_{h}\\ \partial_{c}\\ \partial_{l}\end{array}\right)(q^{-5}\partial_{c},\,q^{-4}{2_{q}}\partial_{h},\,q^{-3}\partial_{b},\,{2_{q}\over q^{4}}\partial_{l})\left(\begin{array}[]{c}{\alpha}\\ \beta\\ \gamma\\ \delta\end{array}\right).$$
Here ${\alpha},\,\beta,\,\gamma,\delta\in{{K}}_{q}[{{R}}^{4}]$.
Now, we introduce an action of the QG $U_{q}(sl(2))$ on the ingredients of
the Maxwell operators in question so that these operators become
$U_{q}(sl(2))$-invariant.
First, note that the QG acts on the operators $B_{q},\,H_{q},\,C_{q}$
and on $\langle b,\,\langle h,\,\langle c$ in the same way as on
the generators $b,\,h,\,c$. This means that the maps $b\to B_{q},...$ and $b\to\langle b,...$ are $U_{q}(sl(2))$ morphisms. So, we
restrict ourselves to the idempotent $\overline{e}$ and define the
above action such that this idempotent becomes invariant. The
reader can easily extend our method to other ingredients of the
Maxwell operators.
Consider a representation of the algebra $U(sl(2)_{q})$
$$\pi:b\to P^{-1}\,B_{q}\,P,\,\,h\to P^{-1}\,H_{q}\,P,\,\,c\to P^{-1}\,C_{q}\,P$$
where $P$ is an invertible numerical matrix: $P\in M_{3}({{K}})$.
Define the action of the QG $U_{q}(sl(2))$ on the space $M_{3}({{K}})$ according
to its action on the generators $b,\,h,\,c$:
$$K^{\epsilon}(P^{-1}\,B_{q}\,P)=q^{2\epsilon}P^{-1}\,B_{q}\,P,...,Y(P^{-1}\,H_{q}\,P)=q^{\theta-1}2_{q}P^{-1}\,C_{q}\,P,\,\,Y(P^{-1}\,C_{q}\,P)=0.$$
We extend this action to the whole
space $M_{3}({{K}})$ by treating $M_{3}({{K}})$ as the image of the algebra
$U(sl(2)_{q})$. By construction, the representation $\pi:U(sl(2)_{q})\to M_{3}({{K}})$ is a $U_{q}(sl(2))$ morphism.
Consider the element
$$w^{-1}\pi_{2}({\rm Cas}_{sl})=q^{-1}bP^{-1}\left(\begin{array}[]{ccc}0&0&0\\ -{q\over 2_{q}}&0&0\\ 0&q^{2}&0\end{array}\right)P+{h\over 2_{q}}P^{-1}\left(\begin{array}[]{ccc}q^{2}&0&0\\ 0&q^{2}-1&0\\ 0&0&-1\end{array}\right)P$$
$$+qcP^{-1}\left(\begin{array}[]{ccc}0&-1&0\\ 0&0&{q\over 2_{q}}\\ 0&0&0\end{array}\right)P={1\over 2_{q}}P^{-1}\left(\begin{array}[]{ccc}q^{2}h&-2_{q}qc&0\\ -b&(q^{2}-1)h&q^{2}c\\ 0&2_{q}qb&-h\end{array}\right)P.$$
Here $\pi_{2}$ means that we
apply the representation $\pi$ to the second factors of the split Casimir element, i.e. the Casimir element regarded as an
element of ${\cal SL}\otimes{\cal SL}$.
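Since $P$ enters every summand only through the conjugation $P^{-1}(\cdot)P$, the last equality can be checked by comparing, separately for $b$, $h$, and $c$, the numerical $3\times 3$ coefficient matrices on the two sides. A rational-arithmetic sketch of this comparison (assuming the convention $2_{q}=q+q^{-1}$; the $h$-parts coincide verbatim, so only the $b$- and $c$-parts are compared):

```python
from fractions import Fraction as F

def scale(s, M):
    return [[s*x for x in row] for row in M]

def check(q):
    q2 = q + 1/q                                   # assumed convention 2_q = q + q^{-1}
    MB = [[0, 0, 0], [-q/q2, 0, 0], [0, q**2, 0]]  # matrix multiplying q^{-1} b
    MC = [[0, -1, 0], [0, 0, q/q2], [0, 0, 0]]     # matrix multiplying q c
    TB = [[0, 0, 0], [-1, 0, 0], [0, q2*q, 0]]     # b-part of the target, times 1/2_q
    TC = [[0, -q2*q, 0], [0, 0, q**2], [0, 0, 0]]  # c-part of the target, times 1/2_q
    return scale(1/q, MB) == scale(1/q2, TB) and scale(q, MC) == scale(1/q2, TC)

assert all(check(F(n, 3)) for n in (2, 4, 5))
```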
Now, introduce the matrix $L=(w^{-1}\pi_{2}({\rm Cas}_{sl}))^{t}$.
Note that the powers of this matrix are transposed to the "powers" of the matrix
$w^{-1}\pi_{2}({\rm Cas}_{sl})$, where these "powers" are defined by putting one copy of the matrix
$w^{-1}\pi_{2}({\rm Cas}_{sl})$ "inside" the other. This procedure is described
in detail in [GLS].
It is easy to see that these "powers" are $U_{q}(sl(2))$-invariant. Also,
note that the matrix $L$ obeys a Cayley-Hamilton identity (cf. [GLS, GS2]). Thus, introducing a new action of the QG $U_{q}(sl(2))$ on the
elements $M\in M_{3}({{K}})$ as follows
$$X\triangleright M=(X(M^{t}))^{t},\qquad\forall X\in U_{q}(sl(2)),$$
where $X(M)$ is the above defined action, we see that the matrix
$L$ and all its powers are $U_{q}(sl(2))$-invariant.
Now, we claim that the idempotent $\overline{e}$ can be represented as a degree 2 polynomial
in the matrix $L$ for a special
choice of the matrix $P$. Therefore, this idempotent is also invariant.
We leave finding this polynomial and the corresponding
matrix $P$ to the reader.
References
[A]
Akueson P. Géométrie de l’espace tangent sur l’hyperboloïde quantique,
Cahiers de topologie et géométrie differentielle catégoriques, XLII-1 (2001), 2–50.
[AG]
Akueson P., Gurevich D. Cotangent and tangent modules on quantum orbits,
Int. J. Mod. Phys. B14 (2000), 2287–2509.
[BB]
Bordemann M., Bursztyn H. Deformation quantization of Hermitian vector bundles,
Lett. Math. Phys. 53 (2000), 349–365.
[C]
Connes A. A short survey of Noncommutative geometry, J. Math. Phys. 42 (2000), 3832–3866.
[D]
Dobrev V. Subsingular vectors and conditionally
invariant (q-deformed) equations, J. Phys. A : Math. Gen. 28 (1995),
7135–7155.
[DG]
Dutriaux A., Gurevich D. Noncommutative dynamical models with quantum
symmetries, Acta Applicandae Mathematicae 101 (2008), 85–104.
[FP]
Faddeev L., Pyatov P. The Differential Calculus on Quantum
Linear Groups, Trans. Amer. Math. Soc., Ser. 2 175 (1996),
35–47.
[G]
Gurevich D. Algebraic aspects of the
Yang-Baxter equation, English translation: Leningrad Math. J. 2 (1991), 801–828.
[GLS]
Gurevich D., Leclercq R., Saponov P. q-Index on braided
noncommutative spheres, J. Geom. Phys. 53 (2005), 392–420.
[GPS1]
Gurevich D., Pyatov P., Saponov P. Hecke symmetries and characteristic
relations on reflection equation
algebras, Lett. Math. Physics, 41 (1997), 255–264.
[GPS2]
Gurevich D., Pyatov P., Saponov P. Representation theory of (modified) Reflection
Equation Algebra of the $GL(m|n)$ type, Algebra and Analysis 20 (2008), 70–133
(in Russian, English translation will be published in
St Petersburg Math. J.).
[GS1]
Gurevich D., Saponov P. Quantum line bundles on a noncommutative
sphere, J. Phys. A: Math. Gen. 35 (2002), 9629–9643.
[GS2]
Gurevich D., Saponov P. Geometry of non-commutative orbits related to
Hecke symmetries, Contemporary Mathematics 433 (2007), 209–250.
[IP]
Isaev A., Pyatov P. Covariant differential complexes on
quantum linear groups, J. Phys. A: Math. Gen. 28 (1995),
2227–2246.
[K]
Kulish P.P. Representations of
$q$-Minkowski space algebra, St. Petersburg Math. J. 6
(1995), 365–374.
[LS]
Lyubashenko V., Sudbery A. Generalized Lie algebras of type $A_{n}$, J. Math. Phys.
39 (1998), 3487–3504.
[MM]
Majid S., Meyer U. Braided Matrix structure of q-Minkowski space and q-Poincare group,
Z. Phys. C 63 (1994), 457–475.
[M]
Meyer U. Wave equations on q-Minkowski space, Comm. Math. Phys.
174 (1996), 249–264.
[OP]
Ogievetsky O., Pyatov. P.
Orthogonal and Symplectic Quantum Matrix Algebras and Cayley-Hamilton Theorem for them,
arXiv: math/0511618.
[R]
Rosenberg J. Rigidity of K-theory under deformation quantization, arXiv: q-alg/9607021.
[S]
Sudbery A. $SU_{q}(n)$ Gauge theory, Phys. Lett. B 375 (1996), 75–80.
[W]
Woronowicz S. Differential Calculus on Compact Matrix Pseudogroups (Quantum Groups), Comm. Math. Phys. 122 (1989), 125–170.
Lax formalism for Gelfand-Tsetlin integrable systems
Eder M. Correa
Eder M. Correa is supported by CNPq grant 150899/2017-3.
IMPA, Estrada Dona Castorina 110, Rio de Janeiro, 22460-320, Brazil.
Lino Grama
Lino Grama is partially supported by FAPESP grant 2016/22755-1.
IMECC-Unicamp, Departamento de Matemática. Rua Sérgio Buarque de Holanda,
651, Cidade Universitária Zeferino Vaz. 13083-859, Campinas - SP, Brazil.
Abstract
In this work, we study Hamiltonian systems on coadjoint orbits, and propose a new approach to Gelfand-Tsetlin integrable systems via a Lax pair $(L,P)$. In order to obtain this new approach, we provide a formulation of Thimm’s trick [18] by means of Lax equations. This formulation allows us to recover the collective Hamiltonians [11] which compose the Gelfand-Tsetlin integrable system as spectral invariants of $L$.
Contents
1 Introduction
2 Collective Hamiltonians and Gelfand-Tsetlin integrable systems
2.1 Collective Hamiltonian systems
2.2 Thimm’s trick and Gelfand-Tsetlin integrable systems
3 Generalities on Lax pairs and integrability
3.1 Lax pair and Hamiltonian systems
4 A Lax pair formalism for Gelfand-Tsetlin integrable systems
4.1 Lax equation and collective Hamiltonians
4.2 Thimm’s trick and spectral invariants
4.3 Liouville’s Theorem and Lax formalism for Gelfand-Tsetlin integrable systems
1 Introduction
In this work we study Hamiltonian systems in coadjoint orbits of compact semi-simple Lie groups. Our main motivation is to understand how the integrability condition of a certain class of Hamiltonian systems, in the presence of symmetries, can be formulated in terms of Lax equations.
In [14], Guillemin and Sternberg showed how to employ Thimm’s trick [18] in order to obtain quantities in involution defined by collective Hamiltonians on coadjoint orbits of compact semi-simple Lie groups. They also showed that for coadjoint orbits of the compact unitary Lie group ${\rm{U}}(n)$ the set of Poisson commuting functions provided by Thimm’s trick defines a completely integrable system. Guillemin and Sternberg called this class of integrable systems Gelfand-Tsetlin integrable systems.
A remarkable feature of the Gelfand-Tsetlin integrable systems is their connection with the representation theory of Lie groups and Lie algebras, and with geometric quantization; see for instance [10] and [11]. Since Gelfand-Tsetlin integrable systems were introduced, many other results concerning Hamiltonian systems defined by collective Hamiltonians have been established. For instance, Guillemin and Sternberg in [11] studied in a general setting the constraints to get integrability for Hamiltonian systems composed of collective Hamiltonians. They showed that a necessary condition for integrability of collective Hamiltonian systems is that the space of invariant smooth functions be an abelian algebra with respect to the Poisson bracket induced by the symplectic form. Furthermore, they concluded that this turns out to be the case for coadjoint orbits of ${\rm{U}}(n)$ and ${\rm{SO}}(n)$.
In this paper we deal with issues related to the construction of Gelfand-Tsetlin integrable systems on coadjoint orbits of classical compact Lie groups and their formulation in terms of Lax pairs.
A Lax pair $(L,P)$ consists of two matrix-valued functions on the phase space $(M,\omega)$ of the system, such that the Hamiltonian evolution equation of motion associated to a Hamiltonian $H\in C^{\infty}(M)$ can be written as a zero curvature equation
$$\displaystyle\frac{dL}{dt}+\big{[}L,P\big{]}=0.$$
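As a standard illustration of this convention (not taken from the paper): for the harmonic oscillator $H={1\over 2}(p^{2}+\omega^{2}x^{2})$, the pair $L=\left(\begin{smallmatrix}p&\omega x\\ \omega x&-p\end{smallmatrix}\right)$, $P=\left(\begin{smallmatrix}0&-\omega/2\\ \omega/2&0\end{smallmatrix}\right)$ satisfies $\frac{dL}{dt}+[L,P]=0$ along the Hamiltonian flow, and the spectral invariant $\operatorname{tr}L^{2}=4H$ is conserved. A pointwise exact-arithmetic check:

```python
from fractions import Fraction as F

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# arbitrary test point (x, p) and frequency w, in exact rational arithmetic
x, p, w = F(3), F(2), F(5)
L = [[p, w*x], [w*x, -p]]
P = [[0, -w/2], [w/2, 0]]
# Hamilton's equations for H = (p^2 + w^2 x^2)/2:  dx/dt = p,  dp/dt = -w^2 x
dx, dp = p, -w**2 * x
dL = [[dp, w*dx], [w*dx, -dp]]
LP, PL = matmul(L, P), matmul(P, L)
# zero curvature equation dL/dt + [L, P] = 0, entry by entry
assert all(dL[i][j] + LP[i][j] - PL[i][j] == 0 for i in range(2) for j in range(2))
# the spectral invariant tr(L^2) recovers the Hamiltonian: tr(L^2) = 4H
assert matmul(L, L)[0][0] + matmul(L, L)[1][1] == 2*(p**2 + w**2*x**2)
```

Since both sides of the zero curvature equation are polynomial in $(x,p)$, verifying them at a point with exact arithmetic is a meaningful (if partial) sanity check.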
The notion of a Lax pair is an emergent language used in the study of integrable systems, and one of the most important features of this concept is its relation with the classical $r$-matrix, see [2]. The classical $r$-matrix was introduced in the late 1970s by Sklyanin [22], as a part of a vast research program launched by L. D. Faddeev, which culminated in the discovery of the Quantum Inverse Scattering Method and of Quantum Groups, see [4]. Motivated by the work [15], and its relation with quantum groups, we propose a new approach for Gelfand-Tsetlin integrable systems by means of the following result:
Theorem A. Let $(O(\Lambda),\omega_{O(\Lambda)},G,\Phi)$ be a Hamiltonian $G$-space defined by an adjoint orbit of $G={\rm{U}}(N)$ or ${\rm{SO}}(N)$. Then the Gelfand-Tsetlin integrable system is completely determined by a zero curvature equation
$\displaystyle\frac{dL}{dt}+\big{[}L,P\big{]}=0$,
where $L,P\colon O(\Lambda)\to\mathfrak{gl}(r,\mathbb{R})$ are matrix-valued smooth functions.
Although the integrability condition ensures the existence of a Lax pair for integrable systems, it is not clear what would be a suitable choice for such a pair, since we do not have uniqueness of such a choice. Hence, the result above provides a canonical and concrete way to assign a Lax pair to Gelfand-Tsetlin integrable systems for adjoint orbits of ${\rm{U}}(N)$ and ${\rm{SO}}(N)$. The ideas involved in our construction are quite natural owing to the underlying matrix nature which we have in the context of coadjoint orbits of classical compact Lie groups. Furthermore, all information about the Gelfand-Tsetlin pattern is encoded in the set of spectral invariants of the matrix $L$.
Besides the different approach which we provide in this work for Gelfand-Tsetlin integrable systems, we hope that the content which we have developed may help to establish new connections between Gelfand-Tsetlin integrable systems and associated topics like quantum groups, Yang-Baxter equation [9, p. 13-16], and geometric quantization.
2 Collective Hamiltonians and Gelfand-Tsetlin integrable systems
In this section we provide an overview of the general features of Hamiltonian systems defined by collective Hamiltonians. The main purpose is to cover the basic material on this topic and to explain Thimm’s trick [18].
2.1 Collective Hamiltonian systems
We start by fixing the notation and reviewing some basic definitions and results. Let $(M,\omega)$ be a symplectic manifold and $\tau\colon G\to{\text{Diff}}(M)$ be a smooth action. The action $\tau$ is called Hamiltonian if it admits a moment map $\Phi\colon(M,\omega)\to\mathfrak{g}^{\ast}$.
Given a symplectic manifold $(M,\omega)$, consider a Hamiltonian action $\tau\colon G\to{\text{Diff}}(M)$. We say that the moment map $\Phi$ associated to $\tau$ is equivariant if it satisfies
$$\Phi(\tau(g)p)={\text{Ad}}^{\ast}(g)\Phi(p),$$
for every $p\in M$ and $g\in G$. In this work we are concerned with the following setting.
Definition.
A Hamiltonian $G$-space $(M,\omega,G,\Phi)$ is composed by:
•
A symplectic manifold $(M,\omega)$ and a connected Lie group $G$, with Lie algebra $\mathfrak{g}$.
•
A Hamiltonian (left) Lie group action $\tau\colon G\to{\text{Diff}}(M)$, with associated infinitesimal action $\delta\tau\colon\mathfrak{g}\to\Gamma(TM)$.
•
A moment map $\Phi\colon(M,\omega)\to\mathfrak{g}^{\ast}$.
Definition.
A Hamiltonian system is defined by a triple $(M,\omega,H)$, where $(M,\omega)$ is a symplectic manifold and $H\in C^{\infty}(M)$.
We are interested in the study of a certain class of Hamiltonian systems on coadjoint orbits, namely, on the orbit $O(\lambda)$ of the coadjoint representation $\operatorname{Ad}^{\ast}\colon G\to{\rm{GL}}(\mathfrak{g}^{\ast})$ of a semi-simple Lie group $G$ through an element $\lambda\in\mathfrak{g}^{\ast}$. If we consider a semi-simple Lie group $G$, then we have the identification $\mathfrak{g}\simeq\mathfrak{g}^{\ast}$, and hence we can identify the coadjoint orbit $O(\lambda)$ with the adjoint orbit $O(\Lambda)$ through an element $\Lambda\in\mathfrak{g}$.
Remark.
The coadjoint orbits of a semi-simple Lie group $G$ also appear in the literature as generalized flag manifolds. Such manifolds have a very rich geometry from the viewpoint of algebraic geometry, symplectic geometry and complex geometry, see [1] for further details.
It will be useful to consider some basic facts about the Lie-Poisson structure of the dual space $\mathfrak{g}^{\ast}$ of the Lie algebra associated to compact and connected Lie groups (see also [6], [8, ex. 1.1.3], or [9, p. 522-525] for further details): let $M$ be a smooth manifold and let $C^{\infty}(M)$ denote the algebra of real-valued smooth functions on $M$. Consider a given bracket operation denoted by
$$\big{\{}\cdot,\cdot\big{\}}_{M}\colon C^{\infty}(M)\times C^{\infty}(M)\to C^{\infty}(M).$$
The pair $(M,\{\cdot,\cdot\}_{M})$ is called a Poisson manifold if the $\mathbb{R}$-vector space $C^{\infty}(M)$ with the bracket $\{\cdot,\cdot\}_{M}$ defines a Lie algebra and
$$\big{\{}H,fg\big{\}}_{M}=g\big{\{}H,f\big{\}}_{M}+f\big{\{}H,g\big{\}}_{M},$$
for all $f,g,H\in C^{\infty}(M)$.
Remark.
Given a symplectic manifold $(M,\omega)$, by solving the equation $df+\iota_{X_{f}}\omega=0$, such that $f\in C^{\infty}(M)$, we can define a bracket $\{\cdot,\cdot\}_{M}$ on $M$ by setting
$$\big{\{}f,g\big{\}}_{M}=\omega(X_{f},X_{g})$$
for all $f,g\in C^{\infty}(M)$. It is straightforward to see that $(M,\{\cdot,\cdot\}_{M})$ defines a Poisson manifold.
Let $\mathfrak{g}$ be the Lie algebra of a compact and connected Lie group $G$. (Here it is worthwhile to point out that, given a vector space $V$, there exists a correspondence between Lie algebra structures on $V$ and linear Poisson structures on $V^{\ast}$; see [21, p. 367] for more details about this correspondence.) We have a Poisson bracket $\{\cdot,\cdot\}_{\mathfrak{g}^{\ast}}$ on the manifold $\mathfrak{g}^{\ast}$ defined as follows. Given $F_{1},F_{2}\in C^{\infty}(\mathfrak{g}^{\ast})$ and $\xi\in\mathfrak{g}^{\ast}$, we set
$$\big{\{}F_{1},F_{2}\big{\}}_{\mathfrak{g}^{\ast}}(\xi)=-\big{\langle}\xi,\big{[}(dF_{1})_{\xi},(dF_{2})_{\xi}\big{]}\big{\rangle},$$
where we used the identification $T_{\xi}^{\ast}\mathfrak{g}^{\ast}\cong\mathfrak{g}$, so that
$(dF_{1})_{\xi},(dF_{2})_{\xi}\in\mathfrak{g}$.
Also, it will be convenient to denote by $\nabla F(\xi)$ the element of $\mathfrak{g}$ which satisfies the pairing
$$(dF)_{\xi}(\eta)=\big{\langle}\eta,\nabla F(\xi)\big{\rangle},$$
for every $F\in C^{\infty}(\mathfrak{g}^{\ast})$, $\xi\in\mathfrak{g}^{\ast}$, and $\eta\in T_{\xi}\mathfrak{g}^{\ast}$. From this, we can rewrite the previous expression of $\{\cdot,\cdot\}_{\mathfrak{g}^{\ast}}$ as follows
$$\big{\{}F_{1},F_{2}\big{\}}_{\mathfrak{g}^{\ast}}(\xi)=-\big{\langle}\xi,\big{[}\nabla F_{1}(\xi),\nabla F_{2}(\xi)\big{]}\big{\rangle}.$$
With the bracket above, the pair $(\mathfrak{g}^{\ast},\{\cdot,\cdot\}_{\mathfrak{g}^{\ast}})$ is a Poisson manifold.
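For instance (an illustration, not from the paper), under the identification $\mathfrak{so}(3)\cong(\mathbb{R}^{3},\times)$ the bracket reads $\{F_{1},F_{2}\}_{\mathfrak{g}^{\ast}}(\xi)=-\langle\xi,\nabla F_{1}(\xi)\times\nabla F_{2}(\xi)\rangle$. A sketch verifying, at sample points, the Jacobi identity on linear functions (it reduces to the Jacobi identity of the cross product) and that $|\xi|^{2}$ Poisson-commutes with every linear function:

```python
from fractions import Fraction as F

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(u*v for u, v in zip(a, b))

# linear functions l_X(xi) = <xi, X> have constant gradient X, so on so(3)*
# the bracket is {l_X, l_Y}(xi) = -<xi, X x Y> = -l_{[X,Y]}(xi)
X = [F(1), F(0), F(2)]
Y = [F(0), F(3), F(1)]
Z = [F(4), F(1), F(0)]
# Jacobi identity of the cross product => Jacobi identity of the bracket on linear functions
jac = [cross(X, cross(Y, Z))[i] + cross(Y, cross(Z, X))[i] + cross(Z, cross(X, Y))[i]
       for i in range(3)]
assert jac == [0, 0, 0]
# C(xi) = |xi|^2 has gradient 2*xi, so {C, l_Y}(xi) = -<xi, 2*xi x Y> = 0: a Casimir
xi = [F(2), F(-1), F(5)]
assert dot(xi, cross([2*u for u in xi], Y)) == 0
```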
A Hamiltonian system $(M,\omega,H)$ defines a dynamic system which can be described in the following way. Since associated to any symplectic manifold $(M,\omega)$ we have a natural Poisson structure $\big{\{}\cdot,\cdot\big{\}}_{M}$ induced by the symplectic structure, given $F\in C^{\infty}(M)$, the evolution equation of motion is given by
$\displaystyle\frac{d}{dt}(F\circ\varphi_{t})=\big{\{}H,F\big{\}}_{M}(\varphi_{t})$,
where $\varphi_{t}$ is the Hamiltonian flow of $X_{H}\in\Gamma(TM)$, and $\big{\{}H,F\big{\}}_{M}=\omega(X_{H},X_{F})$. By means of the underlying Poisson structure of $(M,\omega)$, the equations which locally define the Hamiltonian system can be rewritten as
$\displaystyle\frac{dq_{i}}{dt}=\big{\{}H,q_{i}\big{\}}_{M}$, and $\displaystyle\frac{dp_{i}}{dt}=\big{\{}H,p_{i}\big{\}}_{M}$,
where $(p,q)$ are local coordinates in $(M,\omega)$. We are interested in the following concept of integrability.
Definition.
(Liouville integrability)
Let $(M,\omega,H)$ be a Hamiltonian system. We say that such a system is integrable if there exist $H_{1},\ldots,H_{n}\colon(M,\omega)\to\mathbb{R}$, with $H_{i}\in C^{\infty}(M)$ for each $i=1,\ldots,n=\frac{1}{2}\dim(M)$, satisfying
•
$\big{\{}H_{i},H_{j}\big{\}}_{M}=0$, for all $i,j=1,\ldots,n$,
•
$dH_{1}\wedge\ldots\wedge dH_{n}\neq 0$, in an open dense subset of $M$.
From the first item of the integrability condition described above, in order to study integrable systems, it will be useful to consider the following concept.
Definition.
Let $(M,\{\cdot,\cdot\}_{M})$ be a Poisson manifold. A smooth function $C\in C^{\infty}(M)$ is called a Casimir function if it satisfies
$$\big{\{}C,F\big{\}}_{M}=0,$$
for every $F\in C^{\infty}(M)$.
Example 2.1.
Consider the Poisson manifold $(\mathfrak{g}^{\ast},\{\cdot,\cdot\}_{\mathfrak{g}^{\ast}})$, described previously. Suppose that $C\in C^{\infty}(\mathfrak{g}^{\ast})$ is a Casimir function, i.e., $\big{\{}C,F\big{\}}_{\mathfrak{g}^{\ast}}=0$,
for every $F\in C^{\infty}(\mathfrak{g}^{\ast})$. If $F=l_{X}\in C^{\infty}(\mathfrak{g}^{\ast})$, where
$$l_{X}(\xi)=\big{\langle}\xi,X\big{\rangle},$$
for all $\xi\in\mathfrak{g}^{\ast}$, a straightforward computation shows that
$$\nabla l_{X}(\xi)=X,$$
for all $\xi\in\mathfrak{g}^{\ast}$. From this, we have
$$0=\big{\{}C,l_{X}\big{\}}_{\mathfrak{g}^{\ast}}(\xi)=-\big{\langle}\xi,\big{[}\nabla C(\xi),X\big{]}\big{\rangle}=-(dC)_{\xi}({\text{ad}}^{\ast}(X)\xi),$$
notice that ${\text{ad}}^{\ast}(X)\xi=-\xi\circ{\text{ad}}(X)$, $\forall\xi\in\mathfrak{g}^{\ast}$, and $\forall X\in\mathfrak{g}$.
Since $\mathfrak{g}={\text{Lie}}(G)$ and $G$ is connected, it follows that the Casimir functions of $(\mathfrak{g}^{\ast},\{\cdot,\cdot\}_{\mathfrak{g}^{\ast}})$ are ${\operatorname{Ad}}^{\ast}$-invariant functions. Notice that the last equation above also holds if we take $X=\nabla F(\xi)$ on the right side, that is,
$$0=(dC)_{\xi}({\text{ad}}^{\ast}(\nabla F(\xi))\xi)=-\big{\langle}\xi,\big{[}\nabla C(\xi),\nabla F(\xi)\big{]}\big{\rangle}=\big{\{}C,F\big{\}}_{\mathfrak{g}^{\ast}}(\xi),$$
for every $F\in C^{\infty}(\mathfrak{g}^{\ast})$ and every ${\text{Ad}}^{\ast}$-invariant function $C\in C^{\infty}(\mathfrak{g}^{\ast})$.
It follows that the Casimir functions of $(\mathfrak{g}^{\ast},\{\cdot,\cdot\}_{\mathfrak{g}^{\ast}})$ are exactly the ${\text{Ad}}^{\ast}$-invariant functions. For a more general discussion about Casimir functions with respect to the Lie-Poisson bracket see [7, p. 463]. ∎
We are interested in studying Hamiltonian systems defined by the following special class of functions.
Definition.
Let $(M,\omega,G,\Phi)$ be a Hamiltonian $G$-space. Given a smooth function $F\in C^{\infty}(\mathfrak{g}^{\ast})$, a collective Hamiltonian is defined by the pullback $H=\Phi^{\ast}(F)\in C^{\infty}(M)$.
Now, we will provide an expression for the Hamiltonian vector field $X_{H}\in\Gamma(TM)$, associated to a collective Hamiltonian $H=\Phi^{\ast}(F)\in C^{\infty}(M)$, for more details see [6, p. 241].
At first, note that by fixing a basis $\{X_{i}\}$ for $\mathfrak{g}$, and denoting by $\{X_{i}^{\ast}\}$ its dual, we have
$$\Phi=\displaystyle\sum_{i}\Phi^{i}X_{i}^{\ast},\ \ \mbox{ and }\ \ D\Phi=\displaystyle\sum_{i}d\langle\Phi,X_{i}\rangle X_{i}^{\ast},$$
where each component function $\Phi^{i}=\langle\Phi,X_{i}\rangle$ satisfies the equation
$$d\langle\Phi,X_{i}\rangle+\iota_{\delta\tau(X_{i})}\omega=0.$$
Recall that $\delta\tau$ denotes the infinitesimal action associated to the Hamiltonian action $\tau\colon G\to{\text{Diff}}(M)$. Therefore, given $H=\Phi^{\ast}(F)\in C^{\infty}(M)$, we have
$$dH=dF\circ D\Phi=dF(\displaystyle\sum_{i}d\langle\Phi,X_{i}\rangle X_{i}^{\ast}).$$
From the previous equations for the components of $\Phi$, it follows that
$$dF(\displaystyle\sum_{i}d\langle\Phi,X_{i}\rangle X_{i}^{\ast})=-\displaystyle\sum_{i}\langle X_{i}^{\ast},(\nabla F)\circ\Phi\rangle\iota_{\delta\tau(X_{i})}\omega.$$
Therefore we obtain
$$X_{H}=\delta\tau((\nabla F)\circ\Phi).$$
(2.1)
The description above yields the following proposition:
Proposition 2.1.
Let $(M,\omega,G,\Phi)$ be a Hamiltonian $G$-space and $H=\Phi^{\ast}(F)\in C^{\infty}(M)$ be a collective Hamiltonian. Given $p\in M$, the trajectory of $X_{H}\in\Gamma(TM)$ through the point $p\in M$, is given by
$$\varphi_{t}(p)=\tau(\exp(t\nabla F(\Phi(p))))p.$$
Proof.
The proof follows from the expression above for $X_{H}$.
∎
Remark.
We notice that the expression above $\varphi_{t}(p)=\tau(\exp(t\nabla F(\Phi(p))))p$ denotes a curve which satisfies
$$\displaystyle\frac{d}{dt}\Big{|}_{t=0}\varphi_{t}(p)=X_{\Phi^{\ast}(F)}(p).$$
The curve above is not necessarily the flow of $X_{\Phi^{\ast}(F)}$. It is in fact the flow through the point $p\in M$ of the vector field $\delta\tau(\nabla F(\Phi(p)))\in\Gamma(TM)$, in which the argument $\Phi(p)$ is kept fixed. As we will see below, the curve obtained in Proposition 2.1 is the Hamiltonian flow of $\Phi^{\ast}(F)$ when $F\in C^{\infty}(\mathfrak{g}^{\ast})^{{\text{Ad}^{\ast}}}$, i.e., when $F$ is ${\text{Ad}^{\ast}}$-invariant. ∎
Let us briefly describe how we can find the Hamiltonian flow associated to a collective Hamiltonian $\Phi^{\ast}(F)\in C^{\infty}(M)$. At first we take a trivialization of the tangent bundle $TG$ of $G$ by right invariant vector fields. From this, we consider the following vector field $v\in\Gamma(TG)$, for a fixed point $p\in M$, given by
$$v\colon G\to TG,\mbox{ such that }v_{g}=(R_{g})_{\ast}(\nabla F(\Phi(\tau(g)p))),$$
where $R_{g}\colon G\to G$ denotes the right translation. Now, we consider the following smooth map induced by the action $\tau\colon G\to{\text{Diff}}(M)$
$$\mathscr{A}_{p}\colon G\to M,\mbox{ such that }\mathscr{A}_{p}(g)=\tau(g)p.$$
A straightforward computation shows that
$$\delta\tau(X)_{\tau(g)p}=(D\mathscr{A}_{p})_{g}((R_{g})_{\ast}X).$$
Therefore, if we take the solution of the initial value problem
$$\displaystyle\frac{dg}{dt}=v_{g(t)},\ \ \mbox{ with }\ \ g(0)=e,$$
we can define a curve $\varphi_{t}(p)=\tau(g(t))p$ which satisfies
$$\displaystyle\frac{d}{dt}\varphi_{t}(p)=(D\mathscr{A}_{p})_{g(t)}(\displaystyle\frac{dg}{dt})=(D\mathscr{A}_{p})_{g(t)}((R_{g(t)})_{\ast}(\nabla F(\Phi(\tau(g(t))p)))).$$
The expression above can be rewritten as
$$\displaystyle\frac{d}{dt}\varphi_{t}(p)=\delta\tau(\nabla F(\Phi(\tau(g(t))p)))_{\tau(g(t))p}=\delta\tau(\nabla F(\Phi(\varphi_{t}(p))))_{\varphi_{t}(p)},$$
that is, we have a solution for the initial value problem
$$\displaystyle\frac{d}{dt}\varphi_{t}(p)=X_{\Phi^{\ast}(F)}(\varphi_{t}(p)),\ \ \mbox{ such that }\ \ \varphi_{0}(p)=p.$$
(2.2)
Now we observe the following fact: if $F\in C^{\infty}(\mathfrak{g}^{\ast})^{{\text{Ad}}^{\ast}}$, then
$$\nabla F({\text{Ad}}^{\ast}(g)\xi)={\text{Ad}}(g)\nabla F(\xi),$$
for every $\xi\in\mathfrak{g}^{\ast}$ and $g\in G$. Thus, we obtain
$$v_{g}=(R_{g})_{\ast}(\nabla F(\Phi(\tau(g)p)))=(L_{g})_{\ast}(\nabla F(\Phi(p))),$$
where $L_{g}\colon G\to G$ denotes the left translation. Hence, if $F\in C^{\infty}(\mathfrak{g}^{\ast})^{{\text{Ad}}^{\ast}}$, it follows that the initial value problem
$$\displaystyle\frac{dg}{dt}=v_{g(t)},\ \ \mbox{ such that }\ \ g(0)=e,$$
becomes exactly the equation of left invariant vector fields through the identity element. In this last case, we have $g(t)=\exp(t\nabla F(\Phi(p)))$ and the Hamiltonian flow associated to the collective Hamiltonian $\Phi^{\ast}(F)$ is exactly the curve described in Proposition 2.1. Further discussions about the Hamiltonian flow of collective Hamiltonians can be found in [6, p. 241-242].
Let us illustrate the ideas above by means of an example in the setting in which we are interested.
Example 2.2.
Consider now the Hamiltonian $G$-space $(O(\lambda),\omega_{O(\lambda)},G,\Phi)$. If we take a collective Hamiltonian $H=\Phi^{\ast}(F)\in C^{\infty}(O(\lambda))$, from Equation 2.1 we have
$$X_{H}={\text{ad}}^{\ast}((\nabla F)\circ\Phi).$$
Since $\Phi$ in this case is just the inclusion map, we have the following expression for the trajectory of $X_{H}$ through the point $\xi\in O(\lambda)$
$$\varphi_{t}(\xi)={\text{Ad}}^{\ast}(\exp(t\nabla F(\Phi(\xi))))\xi.$$
It follows that the dynamics defined by $H=\Phi^{\ast}(F)\in C^{\infty}(O(\lambda))$ can be understood through the equation which defines the left invariant vector field associated to $\nabla F(\xi)\in\mathfrak{g}$.
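The infinitesimal identity underlying the trajectory formula of Example 2.2 can be checked numerically. The sketch below is an illustration, not part of the text: it assumes the (co)adjoint action is realized by matrix conjugation, with a random matrix $X$ standing in for $\nabla F(\Phi(\xi))$, and verifies by a central finite difference that $\frac{d}{dt}\big|_{t=0}\,e^{tX}Ze^{-tX}=[X,Z]$.

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential via a truncated Taylor series (fine for small norms)."""
    E, T = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    return E

rng = np.random.default_rng(0)
Z = rng.standard_normal((4, 4))   # a point of the (matrix) orbit
X = rng.standard_normal((4, 4))   # plays the role of grad F(Phi(xi))

eps = 1e-5
curve = lambda t: expm(t * X) @ Z @ expm(-t * X)
numeric = (curve(eps) - curve(-eps)) / (2 * eps)
exact = X @ Z - Z @ X             # ad(X)Z = [X, Z]
err = np.max(np.abs(numeric - exact))
```

The finite-difference derivative of the conjugation curve agrees with the commutator to the accuracy of the difference scheme.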
2.2 Thimm’s trick and Gelfand-Tsetlin integrable systems
Now we will describe how to obtain quantities in involution when we consider Hamiltonian systems defined by collective Hamiltonians.
Back to the setting of Hamiltonian $G$-spaces: as we have seen previously, the Hamiltonian vector field associated to a collective Hamiltonian $\Phi^{\ast}(F)\in C^{\infty}(M)$ is given by
$$X_{F\circ\Phi}=\delta\tau((\nabla F)\circ\Phi).$$
If we consider another collective Hamiltonian $\Phi^{\ast}(I)\in C^{\infty}(M)$ for some $I\in C^{\infty}(\mathfrak{g}^{\ast})$, we have
$$\big{\{}F\circ\Phi,I\circ\Phi\big{\}}_{M}(p)=\omega(\delta\tau(\nabla F(\Phi(p)))_{p},\delta\tau(\nabla I(\Phi(p)))_{p}),$$
where $\nabla F(\Phi(p)),\nabla I(\Phi(p))\in\mathfrak{g}$ for every $p\in M$.
Proposition 2.2.
Let $(M,\omega,G,\Phi)$ be a Hamiltonian $G$-space, let $p\in M$, and set $\xi=\Phi(p)\in\mathfrak{g}^{\ast}$. If $\iota\colon G\cdot p\hookrightarrow M$ denotes the natural inclusion map, then
$$\iota^{\ast}\omega=\Phi^{\ast}\omega_{O(\xi)},$$
where $(O(\xi),\omega_{O(\xi)})\subset\mathfrak{g}^{\ast}$, denotes the coadjoint orbit of $\xi\in\mathfrak{g}^{\ast}$.
Proof.
See for instance [21, p. 497].
∎
From Proposition 2.2, and the previous comments, we obtain that $\Phi$ is a Poisson map. (Recall that, given a map between Poisson manifolds $\Phi\colon(M,\{\cdot,\cdot\}_{M})\to(N,\{\cdot,\cdot\}_{N})$, we say that $\Phi$ is a Poisson map if $\{\Phi^{\ast}f,\Phi^{\ast}g\}_{M}=\Phi^{\ast}\{f,g\}_{N}$, $\forall f,g\in C^{\infty}(N)$.) That is,
$$\big{\{}F\circ\Phi,I\circ\Phi\big{\}}_{M}(p)=\big{\{}F,I\big{\}}_{\mathfrak{g}^{\ast}}(\Phi(p)),$$
for every $p\in M$ and $F,I\in C^{\infty}(\mathfrak{g}^{\ast})$.
Now we are in a position to describe Thimm's trick [18]. Let $(M,\omega,G,\Phi)$ be a Hamiltonian $G$-space as before. If we consider a closed and connected subgroup $K\subset G$, we have a natural Hamiltonian action of $K$ on $(M,\omega)$ induced by restriction; it follows that we have a Hamiltonian $K$-space $(M,\omega,K,\Phi_{K})$, where the moment map
$$\Phi_{K}\colon(M,\omega)\to\mathfrak{k}^{\ast}={\text{Lie}}(K)^{\ast}$$
is given by
$$\Phi_{K}=\pi_{K}\circ\Phi,$$
where $\pi_{K}\colon\mathfrak{g}^{\ast}\to\mathfrak{k}^{\ast}$ is the projection induced by the inclusion $\mathfrak{k}\hookrightarrow\mathfrak{g}$.
If we take two collective Hamiltonians $\Phi^{\ast}(F),\Phi_{K}^{\ast}(I)\in C^{\infty}(M)$, we obtain
$$\big{\{}F\circ\Phi,I\circ\Phi_{K}\big{\}}_{M}=\big{\{}F\circ\Phi,I\circ\pi_{K}\circ\Phi\big{\}}_{M},$$
which implies that
$$\big{\{}F\circ\Phi,I\circ\Phi_{K}\big{\}}_{M}=\big{\{}F,I\circ\pi_{K}\big{\}}_{\mathfrak{g}^{\ast}}\circ\Phi.$$
From the last equality we have the following proposition:
Proposition 2.3.
Let $(M,\omega,G,\Phi)$ be a Hamiltonian $G$-space, and let $K\subset G$ be a closed and connected subgroup. If we consider the Hamiltonian system $(M,\omega,\Phi_{K}^{\ast}(I))$, then all collective Hamiltonians obtained from the Casimir functions of $(\mathfrak{g}^{\ast},\{\cdot,\cdot\}_{\mathfrak{g}^{\ast}})$ and $(\mathfrak{k}^{\ast},\{\cdot,\cdot\}_{\mathfrak{k}^{\ast}})$ are quantities in involution for the system $(M,\omega,\Phi_{K}^{\ast}(I))$.
Proof.
This result is a consequence of the ideas developed in [18] and [6]; see also [14, prop. 3.1]. In fact, from the above comments, for $\Phi^{\ast}(F),\Phi_{K}^{\ast}(I)\in C^{\infty}(M)$ we have
$$\big{\{}F\circ\Phi,I\circ\Phi_{K}\big{\}}_{M}(p)=\big{\{}F,I\circ\pi_{K}\big{\}}_{\mathfrak{g}^{\ast}}(\Phi(p))=0,$$
if $F\in C^{\infty}(\mathfrak{g}^{\ast})$ is a Casimir. Similarly, for $\Phi_{K}^{\ast}(F),\Phi_{K}^{\ast}(I)\in C^{\infty}(M)$, we have
$$\big{\{}F\circ\Phi_{K},I\circ\Phi_{K}\big{\}}_{M}(p)=\big{\{}F,I\big{\}}_{\mathfrak{k}^{\ast}}(\Phi_{K}(p))=0,$$
if $F\in C^{\infty}(\mathfrak{k}^{\ast})$ is a Casimir. ∎
The result of the last proposition is an application of the so-called Thimm's trick, see [18] for more details. Notice that we can use the previous proposition iteratively on a chain of closed and connected subgroups
$$G=K_{0}\supset K_{1}\supset\ldots\supset K_{s}.$$
By denoting $\Phi_{j}\colon(M,\omega)\to\mathfrak{k}_{j}^{\ast}$ the moment map associated to each Hamiltonian $K_{j}$-space, with $j=0,\ldots,s$, and by considering the Hamiltonian system
$$(M,\omega,\Phi_{s}^{\ast}(F)),$$
for some fixed $F\in C^{\infty}(\mathfrak{k}_{s}^{\ast})$, we obtain from Thimm's trick a set of functions in involution composed of collective Hamiltonians defined by Casimir functions of each Poisson manifold $(\mathfrak{k}_{j}^{\ast},\{\cdot,\cdot\}_{\mathfrak{k}_{j}^{\ast}})$.
When $M$ is a coadjoint orbit of some compact Lie group, and the integrability condition holds for the set of quantities in involution described above, the integrable system is called Gelfand-Tsetlin system [14].
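The involution statement behind this construction can be tested numerically. The following sketch is an assumption-laden illustration, not the paper's setting: it works on $\mathfrak{gl}(n,\mathbb{R})$ identified with its dual via the trace form, with Lie-Poisson bracket $\{f,g\}(Z)=\mathrm{Tr}(Z[\nabla f(Z),\nabla g(Z)])$, the chain being given by the leading principal blocks, and checks that the Gelfand-Tsetlin-type functions $f_{j,k}(Z)=\mathrm{Tr}((Z_{j})^{k})$ pairwise Poisson-commute at a random point.

```python
import numpy as np
from itertools import combinations

n = 4
rng = np.random.default_rng(1)
Z = rng.standard_normal((n, n))

def gt_gradient(Z, j, k):
    """Gradient of Tr((Z_j)^k) w.r.t. the pairing <A,B> = Tr(AB):
    k * (Z_j)^{k-1}, padded by zeros into gl(n)."""
    G = np.zeros_like(Z)
    G[:j, :j] = k * np.linalg.matrix_power(Z[:j, :j], k - 1)
    return G

# one function per block size j and exponent k = 1, ..., j
grads = [gt_gradient(Z, j, k) for j in range(1, n + 1) for k in range(1, j + 1)]

def lie_poisson(Z, A, B):
    """Lie-Poisson bracket {f,g}(Z) = Tr(Z [grad f, grad g])."""
    return np.trace(Z @ (A @ B - B @ A))

max_bracket = max(abs(lie_poisson(Z, A, B)) for A, B in combinations(grads, 2))
```

For $n=4$ this family has $1+2+3+4=10=n(n+1)/2$ members, and all pairwise brackets vanish up to rounding, consistent with Thimm's trick.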
3 Generalities on Lax pairs and integrability
In this section we will introduce some basic ideas about the concept of Lax pair, and describe its relation with the study of integrability in the context of Hamiltonian systems. More details about this topic can be found, for instance, in [9], [21, p. 578].
3.1 Lax pair and Hamiltonian systems
Let $(M,\omega)$ be a symplectic manifold. A Lax pair is given by a pair of matrix-valued smooth functions
$$L,P\colon(M,\omega)\to{\text{M}}_{r\times r}(\mathbb{R})\cong{\text{End}}(\mathbb{R}^{r}),$$
satisfying the Lax equation
$$\frac{d}{dt}L=\big{[}P,L\big{]},$$
(3.1)
such that $L=L(\gamma(t)),P=P(\gamma(t))$, for some smooth curve $\gamma\colon I\to(M,\omega)$. It will be convenient to denote ${\text{End}}(\mathbb{R}^{r})=\mathfrak{gl}(r,\mathbb{R})$, namely, we will consider the underlying natural Lie algebra structure induced by the commutator on ${\text{End}}(\mathbb{R}^{r})$. Thus, the bracket in the last equation stands for the commutator in ${\text{End}}(\mathbb{R}^{r})$.
In the setting above the matrix-valued function $L$ is called the Lax matrix, and the matrix-valued function $P$ is called the auxiliary matrix. We say that a Hamiltonian system $(M,\omega,H)$ admits a Lax pair if the equation of motion associated to $H\in C^{\infty}(M)$ is equivalent to the equation
$$\frac{dL}{dt}+\big{[}L,P\big{]}=0.$$
(3.2)
Notice that the derivative above is taken when we consider the composition of $L$ with the Hamiltonian flow of $X_{H}\in\Gamma(TM)$.
The equation described above can be easily solved. Actually, if we consider the initial value problem
$$\displaystyle\frac{dL}{dt}=\big{[}P,L\big{]},\ \ \ \mbox{ with }\ \ \ L(0)=L_{0},$$
the solution is given by
$$L(t)=g(t)L_{0}g(t)^{-1},$$
where $g\colon(-\epsilon,\epsilon)\to{\rm{GL}}(r,\mathbb{R})$ is determined by the initial value problem
$$\displaystyle\frac{dg}{dt}=P(t)g(t),\ \ \ \mbox{ with }\ \ \ g(0)=\mathds{1}.$$
In fact, we have the following
$$L(t)=g(t)L_{0}g(t)^{-1}\iff\displaystyle\frac{dL}{dt}=\displaystyle\frac{dg}{dt}L_{0}g(t)^{-1}+g(t)L_{0}\displaystyle\frac{dg^{-1}}{dt}.$$
Now, we can rewrite the right-hand side above as follows
$$\displaystyle\frac{dL}{dt}=\displaystyle\frac{dg}{dt}g(t)^{-1}L(t)+L(t)g(t)\displaystyle\frac{dg^{-1}}{dt}.$$
From this, we can use
$$g(t)g(t)^{-1}=\mathds{1}\iff\displaystyle\frac{dg}{dt}g(t)^{-1}+g(t)\displaystyle\frac{dg^{-1}}{dt}=0.$$
Hence, we obtain
$$\displaystyle\frac{dL}{dt}=\displaystyle\frac{dg}{dt}g(t)^{-1}L(t)-L(t)\displaystyle\frac{dg}{dt}g(t)^{-1}=\big{[}P,L\big{]},$$
where $P(t)=\frac{dg}{dt}g(t)^{-1}$, with $g(0)=\mathds{1}$. The computation above shows that if we have a Lax pair for a Hamiltonian system, one can always solve the initial value problem $\dot{L}=[P,L]$ by solving $P(t)=\frac{dg}{dt}g(t)^{-1}$ for $g(t)$, and the solution has the form $L(t)=g(t)L_{0}g(t)^{-1}$.
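When $P$ is constant the auxiliary problem $\frac{dg}{dt}=Pg$, $g(0)=\mathds{1}$, is solved by $g(t)=\exp(tP)$, and the conjugation formula can be verified numerically. The sketch below is an illustration with random matrices, not tied to any specific Hamiltonian system; it checks $dL/dt=[P,L]$ by finite differences along $L(t)=g(t)L_{0}g(t)^{-1}$.

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential via a truncated Taylor series (fine for moderate norms)."""
    E, T = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    return E

rng = np.random.default_rng(2)
L0 = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))   # constant auxiliary matrix, so g(t) = exp(tP)

def L(t):
    g = expm(t * P)
    return g @ L0 @ np.linalg.inv(g)

t0, eps = 0.3, 1e-5
numeric = (L(t0 + eps) - L(t0 - eps)) / (2 * eps)   # finite-difference dL/dt
exact = P @ L(t0) - L(t0) @ P                       # [P, L(t0)]
lax_err = np.max(np.abs(numeric - exact))
```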
The key point which makes the existence of a Lax pair an important tool in the study of Hamiltonian systems is the following: suppose we have a Lax pair $(L,P)$ for a Hamiltonian system $(M,\omega,H)$, and consider a smooth function $F\colon\mathfrak{gl}(r,\mathbb{R})\to\mathbb{R}$ which is invariant under the adjoint action, i.e.,
$$F(gXg^{-1})=F(X),\mbox{ for all }g\in{\rm{GL}}(r,\mathbb{R}),\mbox{ and }X\in\mathfrak{gl}(r,\mathbb{R}).$$
Then the composition $I=F\circ L\in C^{\infty}(M)$ is constant along the Hamiltonian flow of $X_{H}\in\Gamma(TM)$. In fact, we have
$$I(t)=F(L(t))=F(g(t)L_{0}g(t)^{-1})=F(L_{0})={\text{constant}}.$$
Therefore,
$$\big{\{}H,I\big{\}}_{M}=X_{H}(I)=0.$$
Since we can obtain quantities in involution from the procedure described above, it follows that a Lax pair is a useful tool in the study of integrability.
Lax pairs are not unique in general. In fact, besides changes in the size of the matrix-valued functions, we can consider the natural action of the gauge group $\mathscr{G}(M\times\mathbb{R}^{r})$ on such a pair $(L,P)$ (here $\mathscr{G}(M\times\mathbb{R}^{r})=\big{\{}a\in{\text{Aut}}(M\times\mathbb{R}^{r})\ \big{|}\ {\text{pr}}_{1}\circ a={\text{id}}_{M}\big{\}}$, and we use the identification $\mathscr{G}(M\times\mathbb{R}^{r})\cong C^{\infty}(M,{\rm{GL}}(r,\mathbb{R}))$), defined by
$$L\mapsto gLg^{-1},\ \ \ \mbox{ and }\ \ \ P\mapsto gPg^{-1}+\displaystyle\frac{dg}{dt}g^{-1},$$
where $g\colon(M,\omega)\to{\rm{GL}}(r,\mathbb{R})$ is a smooth function. From the action above, by denoting $\widetilde{L}=gLg^{-1}$, we can write
$$\displaystyle\frac{d\widetilde{L}}{dt}=\displaystyle\frac{dg}{dt}Lg^{-1}+g\displaystyle\frac{dL}{dt}g^{-1}+gL\displaystyle\frac{dg^{-1}}{dt}.$$
Since $\frac{dL}{dt}=[P,L]$ and $\widetilde{L}=gLg^{-1}$, we obtain
$$\displaystyle\frac{d\widetilde{L}}{dt}=\displaystyle\frac{dg}{dt}g^{-1}\widetilde{L}+g\big{[}P,L\big{]}g^{-1}+\widetilde{L}g\displaystyle\frac{dg^{-1}}{dt}.$$
Note that $g\big{[}P,L\big{]}g^{-1}=gPg^{-1}\widetilde{L}-\widetilde{L}gPg^{-1}$. Since $\frac{dg}{dt}g^{-1}+g\frac{dg^{-1}}{dt}=0$, we have
$$\displaystyle\frac{d\widetilde{L}}{dt}=\big{[}gPg^{-1}+\displaystyle\frac{dg}{dt}g^{-1},\widetilde{L}\big{]}\implies\displaystyle\frac{d\widetilde{L}}{dt}+\big{[}\widetilde{L},\widetilde{P}\big{]}=0,$$
where $\widetilde{P}=gPg^{-1}+\frac{dg}{dt}g^{-1}$.
Remark.
In the last expression of $\widetilde{P}$ above we used the following notation
$$\widetilde{P}(x)=g(x)P(x)g(x)^{-1}+\displaystyle\frac{dg}{dt}(x)g(x)^{-1},$$
with $\frac{dg}{dt}(x)=g_{\ast}(X_{H}(x))$, for every $x\in M$. Notice that
$$\frac{dg}{dt}(x)g(x)^{-1}=(R_{g(x)^{-1}})_{\ast}(\frac{dg}{dt}(x)),$$
where $R_{g(x)^{-1}}$ is the right translation. Thus, we have $\frac{dg}{dt}(x)g(x)^{-1}\in\mathfrak{gl}(r,\mathbb{R}).$
Let us illustrate how the ideas described so far can be applied in concrete cases.
Example 3.1.
(Harmonic Oscillator) A basic example to illustrate the previous discussion is provided by the Harmonic Oscillator. Consider the Hamiltonian system $(\mathbb{R}^{2},dp\wedge dq,H)$, where the Hamiltonian function is given by
$$H(q,p)=\displaystyle\frac{1}{2}\big{(}p^{2}+C^{2}q^{2}\big{)}.$$
(3.3)
A straightforward computation shows that
$$dH+\iota_{X_{H}}(dp\wedge dq)=0\iff X_{H}=p\partial_{q}-C^{2}q\partial_{p}.$$
From this, we obtain the following equations of motion
$$\displaystyle\frac{dq}{dt}=p,\ \ \ \mbox{ and }\ \ \ \displaystyle\frac{dp}{dt}=-C^{2}q.$$
We have a Lax pair $L,P\colon(\mathbb{R}^{2},dp\wedge dq)\to\mathfrak{gl}(2,\mathbb{R})$ for the Hamiltonian system $(\mathbb{R}^{2},dp\wedge dq,H)$ defined by
$$L=\begin{pmatrix}p&Cq\\ Cq&-p\end{pmatrix},\ \ \ \mbox{ and }\ \ \ P=\displaystyle\frac{1}{2}\begin{pmatrix}0&-C\\ C&0\end{pmatrix}.$$
In fact, by a direct computation we have
$$\displaystyle\frac{dL}{dt}+\big{[}L,P\big{]}=0\iff\begin{cases}\frac{dq}{dt}=p,\\ \frac{dp}{dt}=-C^{2}q.\end{cases}$$
Here it is important to observe that
$$H(q,p)=-\displaystyle\frac{1}{2}\det(L)=\displaystyle\frac{1}{4}{\text{Tr}}(L^{2}).$$
Furthermore, we have $L^{2}=2H(q,p)\mathds{1}$ and one can check that
$${\text{Tr}}(L^{2n})=2^{n+1}H(q,p)^{n},\ \ \ \mbox{ and }\ \ \ {\text{Tr}}(L^{2%
n+1})=0.$$
Since the algebra of functions invariant under the adjoint action is generated by the maps
$$X\mapsto{\text{Tr}}(X^{k}),\ \ \ \mbox{ for every }\ \ \ X\in\mathfrak{gl}(2,\mathbb{R}),$$
the previous comments yield a complete description of the quantities in involution provided by the Lax pair, i.e., smooth functions of the form
$$I_{k}(q,p)={\text{Tr}}(L^{k}).$$
Since integrability is a trivial issue in this case, the computations above provide a simple illustration of interesting properties of Lax matrices in the study of Hamiltonian systems; further discussions and nontrivial examples can be found in [9]. ∎
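The identities of Example 3.1 are easy to verify numerically. The sketch below evaluates the Lax matrix along the exact solution $q(t)=q_{0}\cos(Ct)+(p_{0}/C)\sin(Ct)$, $p(t)=p_{0}\cos(Ct)-Cq_{0}\sin(Ct)$ (the values of $C$, $q_{0}$, $p_{0}$ are arbitrary sample choices), and checks both the Lax equation and the conservation of $\mathrm{Tr}(L^{2})=4H$.

```python
import numpy as np

C, q0, p0 = 2.0, 1.0, 0.5   # sample frequency and initial data

def qp(t):
    """Exact solution of dq/dt = p, dp/dt = -C^2 q."""
    return (q0 * np.cos(C * t) + (p0 / C) * np.sin(C * t),
            p0 * np.cos(C * t) - C * q0 * np.sin(C * t))

def Lax(t):
    q, p = qp(t)
    return np.array([[p, C * q], [C * q, -p]])

Pmat = 0.5 * np.array([[0.0, -C], [C, 0.0]])

t0, eps = 0.7, 1e-6
numeric = (Lax(t0 + eps) - Lax(t0 - eps)) / (2 * eps)   # dL/dt
exact = Pmat @ Lax(t0) - Lax(t0) @ Pmat                 # [P, L]
lax_err = np.max(np.abs(numeric - exact))

H0 = 0.5 * (p0**2 + C**2 * q0**2)
trace_err = abs(np.trace(Lax(t0) @ Lax(t0)) - 4 * H0)   # Tr(L^2) = 4H conserved
```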
To find a Lax pair for a Hamiltonian system is not a simple task, and the existence of such a pair does not necessarily ensure integrability. On the other hand, as we will see below, the integrability condition ensures the existence of a Lax pair.
Actually, if we have an integrable system $(M,\omega,H)$, we can consider the equations of motion after a canonical transformation $(q_{i},p_{i})\to(\psi_{i},F_{i})$ to action-angle coordinates as follows:
$$\displaystyle\frac{d\psi_{i}}{dt}=\big{\{}H,\psi_{i}\big{\}}_{M}=\displaystyle\frac{\partial H}{\partial F_{i}}=C_{i},\ \ \ \mbox{ and }\ \ \ \displaystyle\frac{dF_{i}}{dt}=\big{\{}H,F_{i}\big{\}}_{M}=0.$$
Now, we define the Lie algebra generated by $\{A_{i},B_{i}\ \ |\ \ i=1,\ldots,n\}$, with the following bracket relations
$$\big{[}A_{i},A_{j}\big{]}=0,\,\,\,\,\big{[}A_{i},B_{j}\big{]}=2\delta_{ij}B_{j},\,\,\,\,\big{[}B_{i},B_{j}\big{]}=0.$$
It follows from Ado’s theorem that this Lie algebra can be realized as a matrix Lie algebra. From this, we can define the Lax pair by setting
$$L=\displaystyle\sum_{i=1}^{n}F_{i}A_{i}+2F_{i}\psi_{i}B_{i},\ \ \ \mbox{ and }\ \ \ P=-\displaystyle\sum_{i=1}^{n}\displaystyle\frac{\partial H}{\partial F_{i}}B_{i}.$$
A straightforward computation shows that
$$\displaystyle\frac{dL}{dt}+\big{[}L,P\big{]}=0\iff\displaystyle\sum_{i}\displaystyle\frac{dF_{i}}{dt}A_{i}+\bigg{[}2\displaystyle\frac{dF_{i}}{dt}\psi_{i}+2F_{i}\Big{(}\displaystyle\frac{d\psi_{i}}{dt}-\displaystyle\frac{\partial H}{\partial F_{i}}\Big{)}\bigg{]}B_{i}=0.$$
Hence, we obtain the equivalence between the equation of motion associated to $X_{H}\in\Gamma(TM)$ and the Lax equation.
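For $n=1$ the bracket relations above admit the concrete $2\times2$ realization $A=\mathrm{diag}(1,-1)$, $B=E_{12}$, so $[A,B]=2B$, and the resulting Lax equation can be checked directly. The sketch below uses hypothetical sample values for $F$, $\psi(0)$, and $c=\partial H/\partial F$ (these are not from the text).

```python
import numpy as np

A = np.diag([1.0, -1.0])                 # realization of the generator A
B = np.array([[0.0, 1.0], [0.0, 0.0]])   # realization of the generator B
comm = lambda X, Y: X @ Y - Y @ X

bracket_err = np.max(np.abs(comm(A, B) - 2 * B))   # check [A, B] = 2B

F0, psi0, c = 1.3, 0.4, 0.8   # hypothetical F (constant), psi(0), dH/dF

def Lax(t):
    # action-angle evolution: F constant, psi(t) = psi0 + c t
    return F0 * A + 2 * F0 * (psi0 + c * t) * B

Pmat = -c * B
t0, eps = 0.5, 1e-6
numeric = (Lax(t0 + eps) - Lax(t0 - eps)) / (2 * eps)
lax_err = np.max(np.abs(numeric - comm(Pmat, Lax(t0))))
```

Both the bracket relation and the Lax equation hold to rounding accuracy, as the computation above predicts.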
Now, suppose that for a Hamiltonian system $(M,\omega,H)$ we have a Lax pair $L,P\colon(M,\omega)\to\mathfrak{gl}(r,\mathbb{R})$, such that $L$ can be diagonalized, namely,
$$\displaystyle L=U\Lambda U^{-1},$$
where $\Lambda={\text{diag}}(\lambda_{1},\ldots,\lambda_{r})$. One can check that the functions defined by $\lambda_{k}$ are conserved quantities, i.e., $\big{\{}H,\lambda_{k}\big{\}}_{M}=0$, $\forall k=1,\ldots,r$. Now, let $\{E_{ij}\}$ be the canonical basis for $\mathfrak{gl}(r,\mathbb{R})$. With respect to this basis we can write
$$\displaystyle L=\displaystyle\sum_{ij}L_{ij}E_{ij}.$$
Since the components $L_{ij}$ of $L$ are functions defined on $(M,\omega)$, we can evaluate the Poisson bracket $\big{\{}L_{ij},L_{kl}\big{\}}_{M}$ and gather the result in the following way. We set
$$\displaystyle L_{1}=L\otimes\mathds{1}=\displaystyle\sum_{ij}L_{ij}(E_{ij}\otimes\mathds{1}),\ \ \mbox{ and }\ \ L_{2}=\mathds{1}\otimes L=\displaystyle\sum_{ij}L_{ij}(\mathds{1}\otimes E_{ij}).$$
From this, we define $\big{\{}L_{1},L_{2}\big{\}}_{M}$ by
$$\big{\{}L_{1},L_{2}\big{\}}_{M}=\displaystyle\sum_{ij,kl}\big{\{}L_{ij},L_{kl}\big{\}}_{M}E_{ij}\otimes E_{kl}.$$
From the last comments, for an integrable system $(M,\omega,H)$ we have the following result [9, p. 14]:
Proposition 3.1.
The involution property of the eigenvalues of $L$ is equivalent to the existence of a matrix-valued function $r_{12}$ on the phase space $(M,\omega)$ such that:
$$\big{\{}L_{1},L_{2}\big{\}}_{M}=\big{[}r_{12},L_{1}\big{]}-\big{[}r_{21},L_{2}\big{]},$$
where the matrix-valued functions $r_{12}$ and $r_{21}$ are, respectively, defined by
$$r_{12}=\displaystyle\sum_{ij,kl}r_{ij,kl}E_{ij}\otimes E_{kl},\ \ \mbox{ and }\ \ r_{21}=\displaystyle\sum_{ij,kl}r_{ij,kl}E_{kl}\otimes E_{ij};$$
the matrix $r=(r_{ij,kl})$ is called the $r$-matrix.
In the context of the Proposition 3.1, the Jacobi identity of the Poisson bracket provides the following constraint on the matrix $r$:
$$\displaystyle\big{[}L_{1},\big{[}r_{12},r_{13}\big{]}+\big{[}r_{12},r_{23}\big{]}+\big{[}r_{32},r_{13}\big{]}+\big{\{}L_{2},r_{13}\big{\}}_{M}-\big{\{}L_{3},r_{12}\big{\}}_{M}\big{]}+{\text{cyc. perm.}}=0,$$
(3.4)
here “cyc. perm.” means cyclic permutations of the tensor indices $1,2,3$; for more details about the equation above see [9, p. 15], [8, §2.1].
The main feature of the last equation is the following: if $r$ is constant, the Jacobi identity is satisfied if
$$\displaystyle\big{[}\big{[}r,r\big{]}\big{]}:=\big{[}r_{12},r_{13}\big{]}+\big{[}r_{12},r_{23}\big{]}+\big{[}r_{32},r_{13}\big{]}=0.$$
(3.5)
When $r$ is anti-symmetric, $r_{12}=-r_{21}$, the equation above, $\big{[}\big{[}r,r\big{]}\big{]}=0$, is called the classical Yang-Baxter equation (CYBE).
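The combination $[[r,r]]$ can be tested numerically with Kronecker products. The sketch below checks it on a standard antisymmetric example for $\mathfrak{sl}(2)$ (chosen here for illustration, not taken from the text): $r=e\otimes h-h\otimes e$ with $e=E_{12}$, $h=\mathrm{diag}(1,-1)$, so $[h,e]=2e$.

```python
import numpy as np

e = np.array([[0.0, 1.0], [0.0, 0.0]])
h = np.diag([1.0, -1.0])
I = np.eye(2)

def place(X, Y, slots):
    """Embed X (x) Y into the triple tensor product at the given pair of slots."""
    factors = [I, I, I]
    factors[slots[0]] = X
    factors[slots[1]] = Y
    return np.kron(np.kron(factors[0], factors[1]), factors[2])

def r_at(slots):
    # the antisymmetric r-matrix r = e (x) h - h (x) e, placed in two slots
    return place(e, h, slots) - place(h, e, slots)

comm = lambda X, Y: X @ Y - Y @ X
r12, r13, r23, r32 = r_at((0, 1)), r_at((0, 2)), r_at((1, 2)), r_at((2, 1))
cybe_err = np.max(np.abs(comm(r12, r13) + comm(r12, r23) + comm(r32, r13)))
```

Since $r_{32}=-r_{23}$ for antisymmetric $r$, this is the same combination as in the display above, and it vanishes identically for this $r$.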
The CYBE first appeared explicitly in the literature on integrable Hamiltonian systems, but it is a special case of the Schouten bracket in differential geometry, introduced in the 1940's; see [2, p. 50] for more details. It is worth pointing out that, besides the approach via integrable systems, solutions of the CYBE are also interesting for the study of quantum groups and related topics, see for instance [2].
4 A Lax pair formalism for Gelfand-Tsetlin integrable systems
4.1 Lax equation and collective Hamiltonians
Let us start by describing the relation between collective Hamiltonians and the Lax equation. As we have seen in Subsection 2.2, the functions which compose Gelfand-Tsetlin systems are given by collective Hamiltonians, i.e., if we consider a Hamiltonian $G$-space $(M,\omega,G,\Phi)$, we can take $F\in C^{\infty}(\mathfrak{g}^{\ast})$ and consider the smooth function given by
$$\Phi^{\ast}(F)=F\circ\Phi\colon M\to\mathbb{R}.$$
The Hamiltonian vector field associated to a function defined as above has the following expression
$$X_{\Phi^{\ast}(F)}(p)=\delta\tau(\nabla F(\Phi(p)))_{p},$$
for every $p\in M$, see for instance Equation 2.1. Here, as before, $\delta\tau$ denotes the infinitesimal action of $G$ and $\nabla F(\Phi(p))\in\mathfrak{g}$ is obtained by the pairing $(dF)_{\Phi(p)}=\langle\cdot\ ,\nabla F(\Phi(p))\rangle$.
Now, if we consider the Hamiltonian $G$-space $(O(\lambda),\omega_{O(\lambda)},G,\Phi)$ and take a collective Hamiltonian $\Phi^{\ast}(F)\in C^{\infty}(O(\lambda))$, we have
$$X_{\Phi^{\ast}(F)}(\xi)={\text{ad}}^{\ast}(\nabla F(\Phi(\xi)))\xi,$$
for every $\xi\in O(\lambda)$.
By means of the Ad-invariant isomorphism $\mathfrak{g}^{\ast}\cong\mathfrak{g}$, and the identification of coadjoint and adjoint orbits $O(\lambda)\cong O(\Lambda)$, we obtain from the ordinary differential equation associated to $X_{\Phi^{\ast}(F)}$ the following expression
$$\displaystyle\frac{d}{dt}\varphi_{t}(Z)=X_{\Phi^{\ast}(F)}(\varphi_{t}(Z))={\text{ad}}(\nabla F(\Phi(\varphi_{t}(Z))))\varphi_{t}(Z),$$
for every initial condition $Z\in O(\Lambda)$. Since the moment map $\Phi\colon O(\Lambda)\to\mathfrak{g}$ is just the inclusion map, we have the following equation for every $Z\in O(\Lambda)$
$$\displaystyle\frac{d}{dt}\Phi(\varphi_{t}(Z))=\big{[}\nabla F(\Phi(\varphi_{t}(Z))),\Phi(\varphi_{t}(Z))\big{]},$$
notice that if we denote $X=\nabla F(\Phi(\varphi_{t}(Z)))$ and $Y=\varphi_{t}(Z)$, we have
$$\displaystyle\frac{d}{dt}\Phi(\varphi_{t}(Z))=(D\Phi)_{Y}({\text{ad}}(X)Y)={\text{ad}}(X)\Phi(Y)=\big{[}X,\Phi(Y)\big{]}.$$
From this, we have the following proposition:
Proposition 4.1.
Given a Hamiltonian $G$-space $(O(\Lambda),\omega_{O(\Lambda)},G,\Phi)$, the dynamics associated to any collective Hamiltonian is completely determined by a zero curvature equation
$$\displaystyle\frac{dL}{dt}+\big{[}L,P\big{]}=0,$$
where $L,P\colon O(\Lambda)\to\mathfrak{g}$.
Proof.
Let $\Phi^{\ast}(F)\in C^{\infty}(O(\Lambda))$ be the collective Hamiltonian associated to $F\in C^{\infty}(\mathfrak{g})$. From the Hamiltonian flow of $X_{\Phi^{\ast}(F)}\in\Gamma(TO(\Lambda))$ we have the following ordinary differential equation (ODE)
$$\displaystyle\frac{d}{dt}\varphi_{t}(Z)=X_{\Phi^{\ast}(F)}(\varphi_{t}(Z))={\text{ad}}(\nabla F(\Phi(\varphi_{t}(Z))))\varphi_{t}(Z)$$
for every $Z\in O(\Lambda)$. We define the following pair of Lie algebra valued functions
$$L\colon Z\in O(\Lambda)\mapsto\Phi(Z)\in\mathfrak{g}$$
and
$$P\colon Z\in O(\Lambda)\mapsto\nabla F(\Phi(Z))\in\mathfrak{g}.$$
From the previous comments, we obtain
$$\displaystyle\frac{d}{dt}L(\varphi_{t}(Z))=\displaystyle\frac{d}{dt}\Phi(\varphi_{t}(Z))=\big{[}\nabla F(\Phi(\varphi_{t}(Z))),\Phi(\varphi_{t}(Z))\big{]}.$$
(4.1)
Hence, from our definition of $L$ and $P$ the last expression is exactly
$$\displaystyle\frac{d}{dt}L(\varphi_{t}(Z))=\big{[}P(\varphi_{t}(Z)),L(\varphi_{t}(Z))\big{]},$$
for every $Z\in O(\Lambda)$. Now, we notice that, since the adjoint action of $G$ on its Lie algebra is a proper action, $O(\Lambda)\subset\mathfrak{g}$ is a compact embedded submanifold of $\mathfrak{g}$. It follows that $C^{\infty}(O(\Lambda))=\Phi^{\ast}(C^{\infty}(\mathfrak{g}))$, see for instance [23, p. 29], [21, p. 181-283]. Thus, given $\psi\in C^{\infty}(O(\Lambda))$, there exists $I\in C^{\infty}(\mathfrak{g})$ such that $\psi=\Phi^{\ast}(I)$. Therefore, the equation of motion
$$\displaystyle\frac{d}{dt}\psi(\varphi_{t}(Z))=\big{\{}\Phi^{\ast}(F),\psi\big{\}}_{O(\Lambda)}(\varphi_{t}(Z)),$$
can be rewritten as follows
$$\displaystyle\frac{d}{dt}I(\Phi(\varphi_{t}(Z)))=\big{\{}F,I\big{\}}_{\mathfrak{g}}(\Phi(\varphi_{t}(Z))).$$
Notice that in the last equation we have used that $(O(\Lambda),\omega_{O(\Lambda)},G,\Phi)$ is a strongly Hamiltonian $G$-space, see [21, p. 497], [20, p. 330]. Since the left side of the last equation is given by
$$\displaystyle\frac{d}{dt}I(\Phi(\varphi_{t}(Z)))=\displaystyle\frac{d}{dt}I(L(\varphi_{t}(Z)))=(dI)_{L(\varphi_{t}(Z))}\big{(}\displaystyle\frac{d}{dt}L(\varphi_{t}(Z))\big{)},$$
(4.2)
it follows that the dynamics of the Hamiltonian system $(O(\Lambda),\omega_{O(\Lambda)},\Phi^{\ast}(F))$ is completely determined by the zero curvature equation (Lax equation)
$$\displaystyle\frac{d}{dt}L(\varphi_{t}(Z))+\big{[}L(\varphi_{t}(Z)),P(\varphi_{t}(Z))\big{]}=0,$$
for every $Z\in O(\Lambda)$, where $L=\Phi$ and $P=\nabla F(\Phi)$.
∎
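Proposition 4.1 can be illustrated numerically in the simplest case of a linear collective Hamiltonian. The sketch below assumes $F(Z)=\mathrm{Tr}(AZ)$ for a fixed matrix $A$ (an illustrative choice, not from the text), so that $P=\nabla F=A$ is constant, the Lax flow is explicitly $L(t)=e^{tA}Z_{0}e^{-tA}$, and the spectral invariants $\mathrm{Tr}(L(t)^{k})$ are conserved.

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential via a truncated Taylor series (fine for moderate norms)."""
    E, T = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    return E

rng = np.random.default_rng(3)
n = 4
S = rng.standard_normal((n, n))
A = S - S.T                 # skew-symmetric, playing the role of grad F
Z0 = rng.standard_normal((n, n))
Z0 = Z0 + Z0.T              # a symmetric initial point of the orbit

def L(t):
    g = expm(t * A)
    return g @ Z0 @ np.linalg.inv(g)   # L(t) = Ad(exp(tA)) Z0

Lt = L(0.5)
spec_err = max(abs(np.trace(np.linalg.matrix_power(Lt, k))
                   - np.trace(np.linalg.matrix_power(Z0, k)))
               for k in range(1, n + 1))
```

The traces of powers, hence the spectrum, agree with those of $Z_{0}$ up to rounding, as expected of an isospectral (zero curvature) flow.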
Having described the dynamics associated to any collective Hamiltonian in terms of this kind of zero curvature equation, our next task is to understand how we can use it in order to recover the quantities in involution obtained by means of Thimm's trick.
4.2 Thimm’s trick and spectral invariants
In what follows, we establish a relation between Thimm’s trick and the zero curvature equation described in the previous section.
Let $(O(\Lambda),\omega_{O(\Lambda)},G,\Phi)$ be a Hamiltonian $G$-space as before, and let $K\subset G$ be a closed connected subgroup of $G$. By restriction we can consider the Hamiltonian $K$-space $(O(\Lambda),\omega_{O(\Lambda)},K,\Phi_{K})$, where
$$\Phi_{K}\colon O(\Lambda)\to\mathfrak{k},\ \ \ \mbox{ such that }\ \ \ \Phi_{K}=\pi_{K}\circ\Phi.$$
Here we denote by $\pi_{K}\colon\mathfrak{g}\to\mathfrak{k}$ the projection map. Now, by taking $F\in C^{\infty}(\mathfrak{k})$, we consider the collective Hamiltonian $\Phi_{K}^{\ast}(F)\in C^{\infty}(O(\Lambda))$. Note that
$$\Phi_{K}^{\ast}(F)=\Phi^{\ast}(F\circ\pi_{K}).$$
From this, we denote $\widetilde{F}=F\circ\pi_{K}$ and consider $\Phi_{K}^{\ast}(F)=\Phi^{\ast}(\widetilde{F})$ also as a collective Hamiltonian associated to the Hamiltonian $G$-space $(O(\Lambda),\omega_{O(\Lambda)},G,\Phi)$. From the last section, the dynamics associated to $\Phi^{\ast}(\widetilde{F})$ is completely determined by the Lax equation
$$\displaystyle\frac{d}{dt}L(\varphi_{t}(Z))+\big{[}L(\varphi_{t}(Z)),P(\varphi_{t}(Z))\big{]}=0,$$
for every $Z\in O(\Lambda)$, where $L=\Phi$ and $P=\nabla\widetilde{F}(\Phi)$.
Lemma 4.1.
Consider $F\in C^{\infty}(\mathfrak{k})$ and $\pi_{K}\colon\mathfrak{g}\to\mathfrak{k}$. Let $\widetilde{F}=F\circ\pi_{K}\in C^{\infty}(\mathfrak{g})$. Then we have
$$\nabla\widetilde{F}(Z)=\nabla F(\pi_{K}(Z))\in\mathfrak{k},$$
for every $Z\in\mathfrak{g}$.
Proof.
We first choose a basis $\{X_{i}\}$ for $\mathfrak{g}$. By definition of $\nabla\widetilde{F}$, we have
$$\nabla\widetilde{F}(Z)=\displaystyle\sum_{i}\langle(d\widetilde{F})_{Z},X_{i}\rangle X_{i}.$$
Since $(d\widetilde{F})_{Z}=(dF)_{\pi_{K}(Z)}\circ(D\pi_{K})_{Z}$ and $(D\pi_{K})_{Z}=\pi_{K}$, for every $Z\in\mathfrak{g}$, if we choose a basis obtained from a completion of a basis for $\mathfrak{k}\subset\mathfrak{g}$, we have $\nabla\widetilde{F}(Z)=\nabla F(\pi_{K}(Z))$. ∎
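Lemma 4.1 can be checked by finite differences in a concrete matrix model. The sketch below assumes $\mathfrak{g}=\mathfrak{gl}(4)$, $\mathfrak{k}=\mathfrak{gl}(2)$ embedded as the leading block, $\pi_{K}$ the block projection, $F(W)=\mathrm{Tr}(W^{2})$ (so $\nabla F(W)=2W$), and gradients taken with respect to the trace pairing $\langle A,B\rangle=\mathrm{Tr}(AB)$; all of these choices are illustrative assumptions.

```python
import numpy as np

n, k = 4, 2
rng = np.random.default_rng(4)
Z = rng.standard_normal((n, n))
H = rng.standard_normal((n, n))   # a random direction in gl(4)

F = lambda W: np.trace(W @ W)          # F on k = gl(2)
Ftilde = lambda Z: F(Z[:k, :k])        # Ftilde = F o pi_K on g = gl(4)

eps = 1e-5
numeric = (Ftilde(Z + eps * H) - Ftilde(Z - eps * H)) / (2 * eps)

grad = np.zeros((n, n))
grad[:k, :k] = 2 * Z[:k, :k]           # grad F(pi_K(Z)), padded into gl(4)
exact = np.trace(grad @ H)             # <grad Ftilde(Z), H> via trace pairing
grad_err = abs(numeric - exact)
```

The directional derivative of $F\circ\pi_{K}$ coincides with the pairing against $\nabla F(\pi_{K}(Z))$ padded by zeros, in agreement with the lemma.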
From the result above, we see that for the Lax pair $L=\Phi$ and $P=\nabla\widetilde{F}(\Phi)$ associated to $\widetilde{F}=F\circ\pi_{K}$, we have
$$P=\nabla\widetilde{F}(\Phi)=\nabla F(\Phi_{K}),\ \ \ \mbox{ and }\ \ \ X_{\Phi^{\ast}(\widetilde{F})}=X_{\Phi_{K}^{\ast}(F)}.$$
Furthermore, we have the following equation
$$\displaystyle\frac{d}{dt}\Phi_{K}(\varphi_{t}(Z))=\big{[}\nabla F(\Phi_{K}(\varphi_{t}(Z))),\Phi_{K}(\varphi_{t}(Z))\big{]},$$
where $\varphi_{t}(Z)$ is the Hamiltonian flow of $X_{\Phi^{\ast}(\widetilde{F})}$. In fact, if we denote $W=\varphi_{t}(Z)\in O(\Lambda)$ and $Y=\nabla F(\Phi_{K}(\varphi_{t}(Z)))\in\mathfrak{k}$, we have
$$\displaystyle\frac{d}{dt}\Phi_{K}(\varphi_{t}(Z))=(D\Phi_{K})_{W}({\text{ad}}(Y)W)={\text{ad}}(Y)\Phi_{K}(W).$$
(4.1)
Note that the equality on the right side of Equation 4.1 follows from the fact that $\Phi_{K}$ is equivariant and $Y=\nabla F(\Phi_{K}(\varphi_{t}(Z)))\in\mathfrak{k}$. Now, if we take $I\in C^{\infty}(\mathfrak{k})$ and consider the collective Hamiltonian
$$\psi=\Phi_{K}^{\ast}(I)\in C^{\infty}(O(\Lambda)),$$
from the equation of motion associated to the Hamiltonian system $(O(\Lambda),\omega_{O(\Lambda)},\Phi_{K}^{\ast}(F))$ we have
$$\displaystyle\frac{d}{dt}\psi(\varphi_{t}(Z))=\big{\{}\Phi_{K}^{\ast}(F),\Phi_{K}^{\ast}(I)\big{\}}_{O(\Lambda)}(\varphi_{t}(Z)),$$
(4.2)
which can be rewritten as follows
$$\displaystyle\frac{d}{dt}I(\Phi_{K}(\varphi_{t}(Z)))=\big{\{}F,I\big{\}}_{\mathfrak{k}}(\Phi_{K}(\varphi_{t}(Z))).$$
(4.3)
However, the left side of Equation 4.3 can be written as
$$\displaystyle\frac{d}{dt}I(\Phi_{K}(\varphi_{t}(Z)))=(dI)_{\Phi_{K}(\varphi_{t}(Z))}\big{(}\displaystyle\frac{d}{dt}\Phi_{K}(\varphi_{t}(Z))\big{)}.$$
From this, one can see that the equation of motion
$$\displaystyle\frac{d}{dt}\psi(\varphi_{t}(Z))=\big{\{}\Phi_{K}^{\ast}(F),\Phi_{K}^{\ast}(I)\big{\}}_{O(\Lambda)}(\varphi_{t}(Z)),$$
is completely determined by the zero curvature equation
$$\displaystyle\frac{d}{dt}\Phi_{K}(\varphi_{t}(Z))+\big{[}\Phi_{K}(\varphi_{t}(Z)),\nabla F(\Phi_{K}(\varphi_{t}(Z)))\big{]}=0.$$
(4.4)
By taking $L_{K}=\Phi_{K}$ and $P_{K}=\nabla F(\Phi_{K})$, we obtain the following proposition:
Proposition 4.2.
Let $(O(\Lambda),\omega_{O(\Lambda)},G,\Phi)$ be a Hamiltonian $G$-space, and let $K\subset G$ be a closed and connected Lie subgroup. Given $F\in C^{\infty}(\mathfrak{k})$, the equation of motion of the Hamiltonian system $(O(\Lambda),\omega_{O(\Lambda)},\Phi_{K}^{\ast}(F))$ is completely determined by the Lax equation
$$\displaystyle\frac{dL}{dt}+\big{[}L,P\big{]}=0,$$
where $L=\Phi$ and $P=\nabla F(\Phi_{K})$. In particular, if $\psi\in\Phi_{K}^{\ast}(C^{\infty}(\mathfrak{k}))$, we have the equation
$$\displaystyle\frac{d\psi}{dt}=\big{\{}\Phi_{K}^{\ast}(F),\psi\big{\}}_{O(\Lambda)},$$
completely determined by the Lax equation $\frac{d}{dt}L_{K}+\big{[}L_{K},P_{K}\big{]}=0$, where $L_{K}=\Phi_{K}$ and $P_{K}=\nabla F(\Phi_{K})$.
Proof.
The first equation follows directly from Proposition 4.1 and Lemma 4.1. The second statement follows from the fact that, if $\psi\in\Phi_{K}^{\ast}(C^{\infty}(\mathfrak{k}))$, we have $\psi=\Phi_{K}^{\ast}(I)$ for some $I\in C^{\infty}(\mathfrak{k})$. Therefore,
$$\displaystyle\frac{d}{dt}\psi(\varphi_{t}(Z))=\displaystyle\frac{d}{dt}I(\Phi_{K}(\varphi_{t}(Z)))=(dI)_{\Phi_{K}(\varphi_{t}(Z))}\big{(}\frac{d}{dt}L_{K}(\varphi_{t}(Z))\big{)},$$
and $\frac{dL_{K}}{dt}=[P_{K},L_{K}]$.
∎
In what follows we illustrate how the results developed so far can be used to recover some familiar facts from the construction of the Gelfand-Tsetlin integrable systems [14].
Example 4.1.
Consider the Hamiltonian ${\rm{U}}(4)$-space $(O(\Lambda),\omega_{O(\Lambda)},{\rm{U}}(4),\Phi)$, and let ${\rm{U}}(3)\subset{\rm{U}}(4)$ be the closed connected subgroup defined by the block diagonal matrices
$$\begin{pmatrix}U&0\\
0&1\end{pmatrix},\ \ \ \mbox{ such that }\ \ \ UU^{\ast}=\mathds{1}_{3},$$
where $U\in{\rm{GL}}(3,\mathbb{C})$. By restriction we have a Hamiltonian ${\rm{U}}(3)$-space $(O(\Lambda),\omega_{O(\Lambda)},{\rm{U}}(3),\Phi_{{\rm{U}}(3)})$. For the sake of simplicity, in what follows we will denote $\Phi_{{\rm{U}}(3)}=\Phi_{3}$.
If we consider the Hamiltonian system $(O(\Lambda),\omega_{O(\Lambda)},\Phi_{3}^{\ast}(F))$, for some $F\in C^{\infty}(\mathfrak{u}(3))$, the previous proposition provides an alternative way to study the dynamics of the Hamiltonian vector field $X_{\Phi_{3}^{\ast}(F)}$ in terms of the equation
$$\displaystyle\frac{dL}{dt}+\big{[}L,P\big{]}=0,$$
for $L=\Phi$ and $P=\nabla F(\Phi_{3})$. As we have seen, we can take
$$L(Z)=\Phi(Z)\in\mathfrak{u}(4),\ \ \mbox{ and }\ \ P(Z)=\nabla F(\Phi_{3}(Z))%
\in\mathfrak{u}(3)\subset\mathfrak{u}(4),$$
for all $Z\in O(\Lambda)$. Now, we consider the collective Hamiltonian $\psi=\Phi_{3}^{\ast}(I)\in C^{\infty}(O(\Lambda))$, where $I\in C^{\infty}(\mathfrak{u}(3))$ is given by
$$I(X)=\det(X),\ \ \mbox{ for every }\ \ X\in\mathfrak{u}(3).$$
Here we used the identification $\mathfrak{u}(3)\cong i\mathfrak{u}(3)$ in order to get a real-valued function. Let us denote $\psi=\det(\Phi_{3})$. By looking at the equation of motion
$$\displaystyle\frac{d\psi}{dt}=\big{\{}\Phi_{3}^{\ast}(F),\psi\big{\}}_{O(%
\Lambda)}\iff\displaystyle\frac{dL_{3}}{dt}+\big{[}L_{3},P_{3}\big{]}=0,$$
where $L_{3}=\Phi_{3}$ and $P_{3}=\nabla F(\Phi_{3})$, we observe that in this case we have
$$L_{3}(Z)=\Phi_{3}(Z)\in\mathfrak{u}(3),\ \ \mbox{ and }\ \ P_{3}(Z)=\nabla F(%
\Phi_{3}(Z))\in\mathfrak{u}(3).$$
It follows from the previous comments that
$$\displaystyle\frac{d}{dt}\psi(\varphi_{t}(Z))=\displaystyle\frac{d}{dt}I(L_{3}%
(\varphi_{t}(Z)))=(dI)_{L_{3}(\varphi_{t}(Z))}\big{(}\frac{d}{dt}L_{3}(\varphi%
_{t}(Z))\big{)}.$$
From Jacobi’s formula we have
$$\displaystyle\frac{d}{dt}\det(A(t))={\text{Tr}}\Big{(}{\text{adj}}(A(t))%
\displaystyle\frac{dA}{dt}\Big{)},$$
for every matrix-curve $A(t)$. Hence, we have
$$\displaystyle\frac{d}{dt}I(L_{3}(\varphi_{t}(Z)))=\displaystyle\frac{d}{dt}%
\det(L_{3}(\varphi_{t}(Z)))={\text{Tr}}\Big{(}{\text{adj}}(L_{3}(\varphi_{t}(Z%
))))\displaystyle\frac{d}{dt}L_{3}(\varphi_{t}(Z))\Big{)}.$$
From this, we can use the equation $\frac{d}{dt}L_{3}=[P_{3},L_{3}]$ in the last expression to obtain
$\displaystyle\frac{d\psi}{dt}={\text{Tr}}\Big{(}{\text{adj}}(L_{3})%
\displaystyle\frac{dL_{3}}{dt}\Big{)}={\text{Tr}}\Big{(}{\text{adj}}(L_{3})%
\big{[}P_{3},L_{3}\big{]}\Big{)}$.
Since $L_{3}{\text{adj}}(L_{3})=\det(L_{3})\mathds{1}$, it follows that
$$\displaystyle\frac{d\psi}{dt}={\text{Tr}}\Big{(}\big{[}L_{3},{\text{adj}}(L_{3%
})\big{]}P_{3}\Big{)}=0\implies\displaystyle\frac{d}{dt}\psi=\big{\{}\Phi_{3}^%
{\ast}(F),\psi\big{\}}_{O(\Lambda)}=0.$$
(4.5)
Therefore, $\psi=\det(\Phi_{3})$ is a constant of motion of the Hamiltonian system $(O(\Lambda),\omega_{O(\Lambda)},\Phi_{3}^{\ast}(F))$, as expected. Observe that Equation 4.5 shows concretely how the Lax pair $L_{3},P_{3}\colon O(\Lambda)\to\mathfrak{u}(3)$ can be used to understand the equation of motion associated to collective Hamiltonians.
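The vanishing of ${\text{Tr}}\big{(}{\text{adj}}(L_{3})\big{[}P_{3},L_{3}\big{]}\big{)}$ above can also be checked numerically. The following sketch (Python with NumPy; the random skew-Hermitian matrices are stand-ins for the values of $L_{3}$ and $P_{3}$ at a point, and the helper names are ours) combines Jacobi's formula with the Lax equation:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_skew_hermitian(n):
    """A random element of u(n), i.e. A with A + A* = 0."""
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X - X.conj().T) / 2

def adjugate(A):
    """adj(A) = det(A) * A^{-1}, valid for invertible A."""
    return np.linalg.det(A) * np.linalg.inv(A)

L3 = random_skew_hermitian(3)
P3 = random_skew_hermitian(3)

# Jacobi's formula along the Lax flow dL3/dt = [P3, L3]:
#   d/dt det(L3) = Tr(adj(L3) [P3, L3]),
# which vanishes because L3 adj(L3) = adj(L3) L3 = det(L3) * 1.
ddet = np.trace(adjugate(L3) @ (P3 @ L3 - L3 @ P3))
print(abs(ddet))  # numerically ~ 0
```

The cyclicity of the trace makes the cancellation exact; only floating-point rounding survives.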
Remark.
The conclusion of Example 4.1 is actually a more general fact. Under the hypothesis of Proposition 4.2, to each $\psi\in\Phi_{K}^{\ast}(C^{\infty}(\mathfrak{k}))$ we can associate the following equations:
$$\displaystyle\frac{d\psi}{dt}=\big{\{}\Phi_{K}^{\ast}(F),\psi\big{\}}_{O(%
\Lambda)},\ \ \text{and}\ \ \frac{dL_{K}}{dt}+\big{[}L_{K},P_{K}\big{]}=0.$$
If we denote $\psi=\Phi_{K}^{\ast}(I)$, the equations above are related by
$$\displaystyle\frac{d\psi}{dt}(t)=(dI)_{L_{K}(t)}({\text{ad}}(P_{K}(t))L_{K}(t)).$$
It follows that, if $I\in C^{\infty}(\mathfrak{k})^{{\text{Ad}}}$, then we have
$$\displaystyle\frac{d\psi}{dt}=\big{\{}\Phi_{K}^{\ast}(F),\psi\big{\}}_{O(%
\Lambda)}=0,$$
i.e., $\psi$ is a constant of motion of the Hamiltonian system $(O(\Lambda),\omega_{O(\Lambda)},\Phi_{K}^{\ast}(F))$. In fact, the ideas above can be seen as an alternative way to recover Thimm’s trick, see Proposition 2.3.
Now, let us denote by
$$\mathscr{I}_{\Phi_{K}^{\ast}(F)}=\Big{\{}\psi\in C^{\infty}(O(\Lambda))\ \ %
\Big{|}\ \ \big{\{}\Phi_{K}^{\ast}(F),\psi\big{\}}_{O(\Lambda)}=0\Big{\}},$$
(4.6)
the subspace of functions which commute with $\Phi_{K}^{\ast}(F)\in C^{\infty}(O(\Lambda))$. The next result provides a characterization for this subspace.
Proposition 4.3.
Let $(O(\Lambda),\omega_{O(\Lambda)},\Phi_{K}^{\ast}(F))$ be a Hamiltonian system as before. Then the subspace $\mathscr{I}_{\Phi_{K}^{\ast}(F)}\subset C^{\infty}(O(\Lambda))$ is given by the pullback under the moment map $\Phi\colon O(\Lambda)\to\mathfrak{g}$ of the subspace
$$\mathcal{I}_{(L,P)}=\Big{\{}I\in C^{\infty}(\mathfrak{g})\ \ \Big{|}\ \ \frac{%
d}{dt}(I\circ L)=0\Big{\}},$$
where $P=\nabla\widetilde{F}$, $\widetilde{F}=F\circ\pi_{K}$, and $L=\Phi$.
Proof.
If we take $\psi=\Phi^{\ast}(I)\in\mathscr{I}_{\Phi_{K}^{\ast}(F)}$, we obtain
$\displaystyle\frac{d}{dt}\psi(\varphi_{t}(Z))=\displaystyle\frac{d}{dt}I(L(\varphi_{t}(Z)))$.
Hence, we have $\mathscr{I}_{\Phi_{K}^{\ast}(F)}=\Phi^{\ast}(\mathcal{I}_{(L,P)})$.∎
As we have seen so far, everything about the quantities in involution obtained through Thimm's trick and collective Hamiltonians can be recovered by means of the Lax pair. Indeed, Proposition 4.3 shows that, for $I\in C^{\infty}(\mathfrak{k})^{{\text{Ad}}}$, we have
$$\psi=\Phi_{K}^{\ast}(I)\in\mathscr{I}_{\Phi_{K}^{\ast}(F)}=\Phi^{\ast}(%
\mathcal{I}_{(L,P)}).$$
From the last results and comments we have the following Theorem:
Theorem 4.1.
Let $(O(\Lambda),\omega_{O(\Lambda)},G,\Phi)$ be a Hamiltonian $G$-space. Given a chain of closed connected subgroups
$$G=K_{0}\supset K_{1}\supset\ldots\supset K_{s},$$
if we denote by $\Phi_{l}$ the moment map associated to the restricted Hamiltonian action of each subgroup $K_{l}$, then for every $F\in C^{\infty}(\mathfrak{k}_{s})$ we can associate to the Hamiltonian system $(O(\Lambda),\omega_{O(\Lambda)},\Phi_{s}^{\ast}(F))$ the following set of Lax equations
$$\displaystyle\frac{dL_{k}}{dt}+\big{[}L_{k},P_{k}\big{]}=0,$$
such that $L_{k}=\Phi_{k}$ and $P_{k}=\nabla(F\circ\pi_{s}^{k})(\Phi_{k})$, for $k=0,1,\ldots,s$, where $\pi_{s}^{k}\colon\mathfrak{k}_{k}\to\mathfrak{k}_{s}$ is the projection map.
Proof.
The result follows from the fact that $\Phi_{s}=\pi_{s}^{k}\circ\Phi_{k}$, for all $k=0,1,\ldots,s-1$. Thus, we have
$$\displaystyle\frac{d}{dt}L_{k}(\varphi_{t}(Z))=\displaystyle\frac{d}{dt}\Phi_{%
k}(\varphi_{t}(Z))=(D\Phi_{k})_{\varphi_{t}(Z)}\big{(}\displaystyle\frac{d}{dt%
}\varphi_{t}(Z)\big{)}.$$
Now we notice that $F\circ\Phi_{s}=(F\circ\pi_{s}^{k})\circ\Phi_{k}$, moreover, a straightforward computation shows us that
$$\nabla(F\circ\pi_{s}^{k})(Z)=\nabla F(\pi_{s}^{k}(Z))\in\mathfrak{k}_{s}%
\subset\mathfrak{k}_{k},$$
for every $Z\in\mathfrak{k}_{k}$. From the comments above, the equation
$$\displaystyle\frac{d}{dt}\varphi_{t}(Z)={\text{ad}}(\nabla F(\Phi_{s}(\varphi_%
{t}(Z))))\varphi_{t}(Z),$$
becomes
$$\displaystyle\frac{d}{dt}\varphi_{t}(Z)={\text{ad}}(\nabla(F\circ\pi_{s}^{k})(%
\Phi_{k}(\varphi_{t}(Z))))\varphi_{t}(Z).$$
Now, since $\nabla(F\circ\pi_{s}^{k})(\Phi_{k}(\varphi_{t}(Z)))\in\mathfrak{k}_{k}$ and $\Phi_{k}$ is equivariant, we have
$$\displaystyle\frac{d}{dt}\Phi_{k}(\varphi_{t}(Z))={\text{ad}}(\nabla(F\circ\pi%
_{s}^{k})(\Phi_{k}(\varphi_{t}(Z))))\Phi_{k}(\varphi_{t}(Z)).$$
However, the last expression above is exactly
$$\displaystyle\frac{d}{dt}L_{k}(\varphi_{t}(Z))={\text{ad}}(P_{k}(\varphi_{t}(Z%
)))L_{k}(\varphi_{t}(Z)),$$
where $L_{k}=\Phi_{k}$ and $P_{k}=\nabla(F\circ\pi_{s}^{k})(\Phi_{k})$. Therefore, it follows that
$$\displaystyle\frac{d}{dt}L_{k}(\varphi_{t}(Z))+\big{[}L_{k}(\varphi_{t}(Z)),P_%
{k}(\varphi_{t}(Z))\big{]}=0,$$
for every $Z\in O(\Lambda)$ and for every $k=0,1,\ldots,s$. ∎
Now we observe the following fact: under the hypothesis of Theorem 4.1, given $I\in C^{\infty}(\mathfrak{k}_{k})^{{\text{Ad}}}$, if we consider $\psi=\Phi_{k}^{\ast}(I)$, we have
$$\displaystyle\frac{d}{dt}\psi(\varphi_{t}(Z))=\displaystyle\frac{d}{dt}I(L_{k}%
(\varphi_{t}(Z)))=0,$$
since $(dI)_{L_{k}(t)}({\text{ad}}(P_{k}(t))L_{k}(t))=0$. We obtain the following result:
Corollary 4.1.
Under the hypothesis of Theorem 4.1, if we suppose that ${\text{rank}}(K_{l})=r_{l}$, then the Hamiltonian system $(O(\Lambda),\omega_{O(\Lambda)},\Phi_{s}^{\ast}(F))$ admits at least $N=r_{1}+\ldots+r_{s}$ functions in involution.
Proof.
If we fix $r_{k}$ generators of $C^{\infty}(\mathfrak{k}_{k})^{\text{Ad}}$, the previous ideas show that
$\Phi_{k}^{\ast}\big{(}C^{\infty}(\mathfrak{k}_{k})^{{\text{Ad}}}\big{)}\subset\mathscr{I}_{\Phi_{s}^{\ast}(F)}$,
for every $k=1,\ldots,s$, see Remark 4.2.
∎
We notice that, from Corollary 4.1, if we take $\psi_{1}=\Phi_{k}^{\ast}(I_{1})$, with $I_{1}\in C^{\infty}(\mathfrak{k}_{k})^{{\text{Ad}}}$, and $\psi_{2}=\Phi_{l}^{\ast}(I_{2})$, with $I_{2}\in C^{\infty}(\mathfrak{k}_{l})^{{\text{Ad}}}$, we have
$$\big{\{}\Phi_{s}^{\ast}(F),\psi_{1}\psi_{2}\big{\}}_{O(\Lambda)}=\big{\{}\Phi_%
{s}^{\ast}(F),\psi_{1}\big{\}}_{O(\Lambda)}\psi_{2}+\big{\{}\Phi_{s}^{\ast}(F)%
,\psi_{2}\big{\}}_{O(\Lambda)}\psi_{1}=0.$$
It follows that $\psi_{1}\psi_{2}\in\mathscr{I}_{\Phi_{s}^{\ast}(F)}$. Furthermore, we can suppose $k\geq l$, and obtain
$$\big{\{}\psi_{1},\psi_{2}\big{\}}_{O(\Lambda)}=\Phi_{k}^{\ast}\big{\{}I_{1},I_%
{2}\circ\pi_{l}^{k}\big{\}}_{\mathfrak{k}_{k}}=0,$$
since $I_{1}$ is a Casimir function. This allows us to define the following Poisson subalgebra of $(C^{\infty}(O(\Lambda)),\{\cdot,\cdot\}_{O(\Lambda)})$.
Definition.
Under the hypothesis of Theorem 4.1, we define the Gelfand-Tsetlin commutative Poisson subalgebra $\Gamma_{\Phi_{s}^{\ast}(F)}\subset C^{\infty}(O(\Lambda))$ as
$$\Gamma_{\Phi_{s}^{\ast}(F)}:=\Big{\langle}\Phi_{k}^{\ast}\big{(}S(\mathfrak{k}%
_{k})^{{\text{Ad}}}\big{)}\ \ \Big{|}\ \ k=1,\ldots,s\Big{\rangle},$$
(4.7)
where $S(\mathfrak{k}_{k})^{{\text{Ad}}}$ denotes the subalgebra of Ad-invariant polynomial functions. Particularly, we have $\Gamma_{\Phi_{s}^{\ast}(F)}\subset\mathscr{I}_{\Phi_{s}^{\ast}(F)}$.
Our motivation for the above definition of the Gelfand-Tsetlin Poisson subalgebra is the concept of Gelfand-Tsetlin subalgebras of universal enveloping algebras, which are examples of Harish-Chandra subalgebras [3, p. 87], see also [5].
In order to illustrate the content of the last theorem, we consider the following example:
Example 4.2.
Consider the Hamiltonian ${\rm{U}}(4)$-space $(O(\Lambda),\omega_{O(\Lambda)},{\rm{U}}(4),\Phi)$, and take the following chain of closed connected subgroups
$${\rm{U}}(4)\supset{\rm{U}}(3)\supset{\rm{U}}(2)\supset{\rm{U}}(1),$$
for ${\rm{U}}(k)\subset{\rm{U}}(4)$ given by the group of matrices of the form
$\begin{pmatrix}U&0\\
0&\mathds{1}\end{pmatrix}$, such that $UU^{\ast}=\mathds{1}$,
where $U\in{\rm{GL}}(k,\mathbb{C})$, $k=1,2,3$. Associated to this chain we can consider the Hamiltonian ${\rm{U}}(k)$-spaces $(O(\Lambda),\omega_{O(\Lambda)},{\rm{U}}(k),\Phi_{k})$. If we take $F\in C^{\infty}(\mathfrak{u}(1))$ and consider the Hamiltonian system $(O(\Lambda),\omega_{O(\Lambda)},\Phi_{1}^{\ast}(F))$, from Theorem 4.1 we have
$\Phi_{k}^{\ast}\big{(}C^{\infty}(\mathfrak{u}(k))^{{\text{Ad}}}\big{)}\subset%
\mathscr{I}_{\Phi_{1}^{\ast}(F)}$,
for $k=1,2,3$. From this, we look at the generators of $S(\mathfrak{u}(k))^{{\text{Ad}}}$ in order to understand $\Gamma_{\Phi_{1}^{\ast}(F)}$.
For $S(\mathfrak{u}(3))^{{\text{Ad}}}$ we have the following generators:
•
$I_{1}^{(3)}\colon X\mapsto{\text{Tr}}(X)$, for every $X\in\mathfrak{u}(3)$,
•
$I_{2}^{(3)}\colon X\mapsto-\displaystyle\frac{1}{2}\Big{[}{\text{Tr}}(X)^{2}-{%
\text{Tr}}(X^{2})\Big{]}$, for every $X\in\mathfrak{u}(3)$,
•
$I_{3}^{(3)}\colon X\mapsto\det(X)$, for every $X\in\mathfrak{u}(3)$.
For $S(\mathfrak{u}(2))^{{\text{Ad}}}$ we have the following generators:
•
$I_{1}^{(2)}\colon X\mapsto{\text{Tr}}(X)$, for every $X\in\mathfrak{u}(2)$,
•
$I_{2}^{(2)}\colon X\mapsto\det(X)$, for every $X\in\mathfrak{u}(2)$.
For $S(\mathfrak{u}(1))^{{\text{Ad}}}$, since $\mathfrak{u}(1)=i\mathbb{R}$, we observe that
$C^{\infty}(\mathfrak{u}(1))^{{\text{Ad}}}=C^{\infty}(\mathfrak{u}(1))$.
Thus, we have the function $I_{1}^{(1)}\colon X\mapsto iX$, for every $X\in\mathfrak{u}(1)$. As expected, the functions
$$\Phi_{k}^{\ast}(I_{l}^{(k)}),\mbox{ for }1\leq l\leq k\mbox{ and }1\leq k\leq 3,$$
provide a completely integrable system on $O(\Lambda)\subset\mathfrak{u}(4)$. It is in fact the Gelfand-Tsetlin system, see [14]. The important fact to notice here is that the above set of functions generates $\Gamma_{\Phi_{1}^{\ast}(F)}$.
As we have seen in Theorem 4.1, associated to the Hamiltonian system $(O(\Lambda),\omega_{O(\Lambda)},\Phi_{1}^{\ast}(F))$ we have the following set of equations:
$$\displaystyle\frac{dL_{k}}{dt}+\big{[}L_{k},P_{k}\big{]}=0,$$
where $L_{k}=\Phi_{k}$ and $P_{k}=\nabla(F\circ\pi_{1}^{k})(\Phi_{k})$, for $k=1,2,3$. Now if we take the matrices
$$L={\text{diag}}\Big{\{}L_{k}\ \Big{|}\ k=1,2,3\Big{\}},\mbox{ and }P={\text{%
diag}}\Big{\{}P_{k}\ \Big{|}\ k=1,2,3\Big{\}},$$
we obtain from this a pair of matrix-valued functions $L,P\colon O(\Lambda)\to\mathfrak{u}(6)$. The interesting point is that a straightforward computation shows that
$$\displaystyle\frac{d}{dt}L(\varphi_{t}(Z))+\big{[}L(\varphi_{t}(Z)),P(\varphi_%
{t}(Z))\big{]}=0,$$
for every $Z\in O(\Lambda)$. Furthermore, we can recover the Gelfand-Tsetlin system in just one equation
$$\det(L-t\mathds{1}_{6})=\det(L_{1}-t)\det(L_{2}-t\mathds{1}_{2})\det(L_{3}-t%
\mathds{1}_{3}).$$
(4.8)
In fact, we have:
•
$\det(L_{1}-t)=L_{1}-t$
•
$\det(L_{2}-t\mathds{1}_{2})=t^{2}-{\text{Tr}}(L_{2})t+\det(L_{2})$
•
$\det(L_{3}-t\mathds{1}_{3})=-t^{3}+{\text{Tr}}(L_{3})t^{2}-\displaystyle\frac{%
1}{2}\Big{[}{\text{Tr}}(L_{3})^{2}-{\text{Tr}}(L_{3}^{2})\Big{]}t+\det(L_{3})$,
From this, we obtain:
•
$\det(L_{1}-t)=-i\Phi_{1}^{\ast}(I_{1}^{(1)})-t$
•
$\det(L_{2}-t\mathds{1}_{2})=t^{2}-\Phi_{2}^{\ast}(I_{1}^{(2)})t+\Phi_{2}^{\ast%
}(I_{2}^{(2)})$
•
$\det(L_{3}-t\mathds{1}_{3})=-t^{3}+\Phi_{3}^{\ast}(I_{1}^{(3)})t^{2}+\Phi_{3}^%
{\ast}(I_{2}^{(3)})t+\Phi_{3}^{\ast}(I_{3}^{(3)})$.
Hence, the Equation 4.8 defined by $L$ encodes all the quantities in involution which define the Gelfand-Tsetlin integrable system.
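The factorization in Equation 4.8 is simply the multiplicativity of the determinant over diagonal blocks, and can be checked numerically. In the sketch below (Python with NumPy; the random skew-Hermitian blocks stand in for the values $L_{k}(Z)=\Phi_{k}(Z)$ at a point), we compare both sides at an arbitrary value of the spectral parameter:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_skew_hermitian(n):
    """A random element of u(n)."""
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X - X.conj().T) / 2

# Blocks standing in for the values L_1(Z), L_2(Z), L_3(Z) of the moment maps.
blocks = [random_skew_hermitian(k) for k in (1, 2, 3)]

# Assemble the 6x6 block diagonal Lax matrix L = diag(L_1, L_2, L_3).
L = np.zeros((6, 6), dtype=complex)
offset = 0
for B in blocks:
    k = B.shape[0]
    L[offset:offset + k, offset:offset + k] = B
    offset += k

t = 0.7 + 0.3j  # an arbitrary value of the spectral parameter
lhs = np.linalg.det(L - t * np.eye(6))
rhs = np.prod([np.linalg.det(B - t * np.eye(B.shape[0])) for B in blocks])
print(abs(lhs - rhs))  # numerically ~ 0
```

Since both sides are polynomials in $t$ of degree $6$, agreement at seven values of $t$ would even establish the identity; here a single generic value serves as a sanity check.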
The last comments in the example above are more general. In fact, if we consider a Hamiltonian system $(M,\omega,H)$ which admits a Lax pair $L,P\colon M\to\mathfrak{gl}(r,\mathbb{R})$, then the coefficients of the characteristic polynomial of $L$, namely,
$$\det(L-t\mathds{1}_{r})=f_{0}(L)t^{r}+f_{1}(L)t^{r-1}+\ldots+f_{r-1}(L)t+f_{r}%
(L),$$
provide a set of quantities in involution for the Hamiltonian system $(M,\omega,H)$. Actually, if we take $F_{l}=L^{\ast}(f_{l})$, we have
$$\displaystyle\frac{d}{dt}F_{l}(t)=\big{\{}H,F_{l}\big{\}}_{M}(t).$$
On the other hand, since $f_{l}\colon\mathfrak{gl}(r,\mathbb{R})\to\mathbb{R}$ is Ad-invariant, we have
$$\displaystyle\frac{d}{dt}F_{l}(t)=(df_{l})_{L(t)}\big{(}\displaystyle\frac{d}{%
dt}L(t)\big{)}=(df_{l})_{L(t)}\big{(}{\text{ad}}(P(t))L(t)\big{)}=0.$$
It is worth pointing out that the coefficients of the characteristic polynomial of $L$ can be expressed in terms of functions of the form ${\text{Tr}}(L^{k})$, $k=0,1,2,\ldots$. From this, if $L=U\Lambda U^{-1}$, with $\Lambda\in\mathfrak{gl}(r,\mathbb{R})$ being a diagonal matrix of the form
$$\Lambda={\text{diag}}(\Lambda_{1},\ldots,\Lambda_{r}),$$
it follows that the functions defined by the eigenvalues of $L$ are quantities in involution for the Hamiltonian system $(M,\omega,H)$. The constants of motion obtained from the characteristic polynomial of $L$ are called spectral invariants. We will denote the set of spectral invariants associated to a Lax pair by $\sigma(L)\subset C^{\infty}(M)$.
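The claim that the coefficients of the characteristic polynomial can be expressed through the power traces ${\text{Tr}}(L^{k})$ is made precise by Newton's identities. The following sketch (Python with NumPy; the matrix $A$ is a random stand-in and the variable names are ours) reconstructs the characteristic polynomial of a matrix from its power traces and compares with a direct computation:

```python
import numpy as np

rng = np.random.default_rng(2)
r = 4
A = rng.standard_normal((r, r))

# Newton's identities: with p_k = Tr(A^k) and e_0 = 1,
#   j * e_j = sum_{k=1}^{j} (-1)^(k-1) e_{j-k} p_k,
# and the characteristic polynomial is det(t*1 - A) = sum_j (-1)^j e_j t^(r-j).
p = [np.trace(np.linalg.matrix_power(A, k)) for k in range(r + 1)]
e = [1.0]
for j in range(1, r + 1):
    e.append(sum((-1) ** (k - 1) * e[j - k] * p[k] for k in range(1, j + 1)) / j)

coeffs = np.array([(-1) ** j * e[j] for j in range(r + 1)])
print(np.max(np.abs(coeffs - np.poly(A))))  # numerically ~ 0
```

Here `np.poly(A)` returns the coefficients of $\det(t\mathds{1}-A)$ computed from the eigenvalues, so the comparison checks the identities independently.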
As we have seen, once we have a Lax pair $(L,P)$ for a Hamiltonian system $(M,\omega,H)$, we can take the solution for the zero curvature equation $\frac{dL}{dt}+[L,P]=0$ as being
$$L(t)=g(t)L(0)g(t)^{-1},\mbox{ where }\displaystyle\frac{dg}{dt}=P(t)g(t),$$
with $g(0)=\mathds{1}$, see [21, p. 578-579]. Thus, if $L\colon M\to\mathfrak{gl}(r,\mathbb{R})$ is diagonalizable, the spectrum of $L$ remains invariant under the Hamiltonian flow of $H\in C^{\infty}(M)$.
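This isospectrality can be tested directly in the simplest case of constant $P$, where $g(t)=\exp(tP)$ solves $\frac{dg}{dt}=Pg$, $g(0)=\mathds{1}$. A numerical sketch (Python with NumPy; random skew-Hermitian matrices play the role of $P$ and $L(0)$, and we compute $\exp(tP)$ through the eigendecomposition of the normal matrix $P$):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

def random_skew_hermitian(n):
    """A random element of u(n)."""
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X - X.conj().T) / 2

P = random_skew_hermitian(n)   # a constant P, so that g(t) = exp(tP)
L0 = random_skew_hermitian(n)  # the initial condition L(0)

mu, V = np.linalg.eig(P)       # P is normal, hence P = V diag(mu) V^{-1}

def L(t):
    """Solution L(t) = g(t) L(0) g(t)^{-1} of the zero curvature equation."""
    g = V @ np.diag(np.exp(t * mu)) @ np.linalg.inv(V)
    return g @ L0 @ np.linalg.inv(g)

# Eigenvalues of L(t) are purely imaginary; compare their imaginary parts.
ev0 = np.sort(np.linalg.eigvals(L0).imag)
ev1 = np.sort(np.linalg.eigvals(L(1.3)).imag)
print(np.max(np.abs(ev0 - ev1)))  # numerically ~ 0: the spectrum is invariant
```

For time-dependent $P(t)$ one would integrate $\frac{dg}{dt}=P(t)g(t)$ numerically instead; the conjugation form of the solution makes the invariance of the spectrum manifest in either case.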
Now, we come back to the context of Hamiltonian $G$-spaces $(O(\Lambda),\omega_{O(\Lambda)},G,\Phi)$. As we are concerned with the classical Lie groups, i.e., for us
$$G={\rm{U}}(N),\ {\rm{SO}}(N),\mbox{ or }{\rm{Sp}}(N),$$
we have a natural chain of closed connected subgroups to consider, namely,
$${\rm{U}}(N)\supset{\rm{U}}(N-1)\supset\ldots\supset{\rm{U}}(1),$$
$${\rm{SO}}(N)\supset{\rm{SO}}(N-1)\supset\ldots\supset{\rm{SO}}(2),$$
$${\rm{Sp}}(N)\supset{\rm{Sp}}(N-1)\supset\ldots\supset{\rm{Sp}}(1),$$
given by block diagonal matrices. Applying the previous results we have the following Proposition:
Proposition 4.4.
Let $(O(\Lambda),\omega_{O(\Lambda)},G,\Phi)$ be a Hamiltonian $G$-space, and consider a chain of closed connected subgroups given by diagonal blocks
$$G=K_{0}\supset K_{1}\supset\ldots\supset K_{s},$$
where ${\text{rank}}(K_{l})=r_{l}$ for each $K_{l}$. Then, for $F\in C^{\infty}(\mathfrak{k}_{s})$, the Hamiltonian system $(O(\Lambda),\omega_{O(\Lambda)},\Phi_{s}^{\ast}(F))$ admits a pair of matrix-valued functions $L,P\colon O(\Lambda)\to\mathfrak{gl}(r,\mathbb{R})$ satisfying the zero curvature equation
$$\displaystyle\frac{d}{dt}L(\varphi_{t}(Z))+\big{[}L(\varphi_{t}(Z)),P(\varphi_%
{t}(Z))\big{]}=0,$$
for all $Z\in O(\Lambda)$, where $\varphi_{t}(Z)$ denotes the Hamiltonian flow of $\Phi_{s}^{\ast}(F)$.
Proof.
At first, we consider the set of Lax equations associated to the chain of closed connected subgroups (Theorem 4.1)
$$\displaystyle\frac{dL_{k}}{dt}+\big{[}L_{k},P_{k}\big{]}=0,$$
where $L_{k}=\Phi_{k}$ and $P_{k}=\nabla(F\circ\pi_{s}^{k})(\Phi_{k})$, for $k=1,\ldots,s$. Now we define
$$L={\text{diag}}\Big{\{}L_{k}\ \Big{|}\ k=1,\ldots,s\Big{\}},\mbox{ and }P={%
\text{diag}}\Big{\{}P_{k}\ \Big{|}\ k=1,\ldots,s\Big{\}}.$$
From this, we have $L,P\colon O(\Lambda)\to\mathfrak{gl}(r,\mathbb{R})$, such that $r\geq r_{1}+\ldots+r_{s}$. Notice that $\mathfrak{k}_{k}$ is a matrix Lie algebra for each $k=1,\ldots,s$. A straightforward computation shows us that
$$\displaystyle\frac{d}{dt}L(\varphi_{t}(Z))+\big{[}L(\varphi_{t}(Z)),P(\varphi_%
{t}(Z))\big{]}=0,$$
for all $Z\in O(\Lambda)$. From this, we obtain the desired result. ∎
Remark.
In the proof of Proposition 4.4 we can also denote the block diagonal matrices $L$ and $P$ by
$$L=\displaystyle\sum_{k=1}^{s}L_{k},\ \ \mbox{ and }\ \ P=\displaystyle\sum_{k=%
1}^{s}P_{k}.$$
Furthermore we can choose $r\in\mathbb{N}$, such that
$$\displaystyle\bigoplus_{k=1}^{s}\mathfrak{k}_{k}\subset\mathfrak{gl}(r,\mathbb%
{R}).$$
We have the following direct consequences from the previous result:
Corollary 4.2.
Under the hypothesis of Proposition 4.4, the spectral invariants of $L$ provide a set of $r_{1}+\ldots+r_{s}$ quantities in involution for the Hamiltonian system $(O(\Lambda),\omega_{O(\Lambda)},\Phi_{s}^{\ast}(F))$.
Corollary 4.3.
Under the hypothesis of Theorem 4.1, the Gelfand-Tsetlin Poisson subalgebra $\Gamma_{\Phi_{s}^{\ast}(F)}$ is generated by the spectral invariants of the Lax matrix $L$, i.e.,
$$\Gamma_{\Phi_{s}^{\ast}(F)}=\big{\langle}\sigma(L)\big{\rangle}.$$
Inspired by Corollary 4.3, we make the following definition:
Definition.
Under the hypothesis of Corollary 4.3, we call $\sigma(L)$ the Gelfand-Tsetlin spectrum of $O(\Lambda)$.
By means of the results established so far, we have the following theorem:
Theorem 4.2.
Let $(O(\Lambda),\omega_{O(\Lambda)},G,\Phi)$ be a Hamiltonian $G$-space defined by an adjoint orbit $O(\Lambda)=\operatorname{Ad}(G)\Lambda\subset\mathfrak{g}$, where $G={\rm{U}}(n)$ or ${\rm{SO}}(n)$. Then there exists a Lax pair $L,P\colon O(\Lambda)\to\mathfrak{gl}(r,\mathbb{R})$, satisfying
$$\displaystyle\frac{dL}{dt}+\big{[}L,P\big{]}=0,$$
such that the spectral invariants $\sigma(L)$ of $L$ define an integrable system in $O(\Lambda)$. Furthermore, this integrable system coincides with the Gelfand-Tsetlin integrable system.
Proof.
The proof follows from the following facts. We first take the chain of closed connected subgroups given by block diagonal matrices
${\rm{U}}(N)\supset{\rm{U}}(N-1)\supset\ldots\supset{\rm{U}}(1)$,
${\rm{SO}}(N)\supset{\rm{SO}}(N-1)\supset\ldots\supset{\rm{SO}}(2)$.
Then we apply Proposition 4.4. Given $Z\in O(\Lambda)$, since $L_{k}(Z)=\Phi_{k}(Z)$ belongs to $\mathfrak{u}(k)$ or $\mathfrak{so}(k)$, with $1\leq k<N$ in the first case and $2\leq k<N$ in the second, we can diagonalize the matrix $L$ defined by the diagonal blocks $L_{k}$. From this, the spectral invariants of $L$ define the Gelfand-Tsetlin integrable system, i.e., all the information about the Gelfand-Tsetlin integrable system is encoded in $\sigma(L)$.
∎
The definition of the Gelfand-Tsetlin spectrum $\sigma(L)$ given in this work gathers all the information about the Gelfand-Tsetlin integrable system and the Gelfand-Tsetlin basis into a single object. In fact, the functions which define the spectrum of $L$ satisfy the Gelfand-Tsetlin pattern, which in turn parameterizes the Gelfand-Tsetlin basis. Thus, besides recovering all the well-known results, our approach also incorporates new elements into the study of the Gelfand-Tsetlin integrable system.
4.3 Liouville’s Theorem and Lax formalism for Gelfand-Tsetlin integrable systems
In this section we will perform some computations using the content developed in the previous section. The idea is to describe, by means of concrete examples, the Darboux coordinates provided by the Gelfand-Tsetlin integrable systems. Although in the context of integrability we could apply the Liouville-Arnold theorem, for our exposition we will deal only with Liouville's theorem; the study of the Lagrangian fibrations associated to integrable systems goes beyond the purpose of this work, so we will not approach Arnold's theorem.
Let us start by quickly recalling the statement of Liouville's theorem. Let $(M,\omega,H)$ be an integrable system. By definition of the Liouville integrability condition, there exist $H_{1},\ldots,H_{n}\colon(M,\omega)\to\mathbb{R}$, with $H_{i}\in C^{\infty}(M)$ for each $i=1,\ldots,n=\frac{1}{2}\dim(M)$, satisfying
•
$\big{\{}H_{i},H_{j}\big{\}}_{M}=0$, for all $i,j=1,\ldots,n$,
•
$dH_{1}\wedge\ldots\wedge dH_{n}\neq 0$ in an open dense subset of $M$.
We usually take $H=H_{1}$. In the setting above we can set
$$\mathscr{H}\colon(M,\omega)\to\mathbb{R}^{n},$$
where $x_{i}\circ\mathscr{H}=H_{i}$ for every $i=1,\ldots,n$; here $x_{i}\colon\mathbb{R}^{n}\to\mathbb{R}$ denotes the standard coordinate system of $\mathbb{R}^{n}$.
Theorem 4.3 (Liouville).
Let $(M,\omega,H)$ be an integrable system and let $x\in M$ be a regular point of $\mathscr{H}=(H_{1},\ldots,H_{n})$. Then there exists an open neighbourhood $W\subset M$ of $x$ and smooth functions $\widetilde{q}_{1},\ldots,\widetilde{q}_{n}$ on $W$ complementing $H_{1},\ldots,H_{n}$ to Darboux coordinates. In these coordinates, the flow $\phi_{t}^{X_{H_{i}}}$ of the Hamiltonian vector field $X_{H_{i}}$ is given by
$$\phi_{t}^{X_{H_{i}}}(\widetilde{q},\widetilde{p})=(\widetilde{q}_{1},\ldots,%
\widetilde{q}_{i}+t,\widetilde{q}_{i+1},\ldots,\widetilde{q}_{n},\widetilde{p}%
_{1},\ldots,\widetilde{p}_{n}).$$
In order to study the relation between Liouville's theorem and our Lax pair formulation for Gelfand-Tsetlin integrable systems, from now on we consider the classical case provided by the compact Lie group $G={\rm{U}}(N)$. By fixing an element $\Lambda={\text{diag}}(i\lambda_{1},\ldots,i\lambda_{N})$, with $\lambda_{1}\geq\cdots\geq\lambda_{N}$, we take its adjoint orbit $O(\Lambda)={\text{Ad}}({\rm{U}}(N))\Lambda\subset\mathfrak{u}(N)$. As stated in Theorem 4.2, the Gelfand-Tsetlin integrable system on $(O(\Lambda),\omega_{O(\Lambda)})$ is completely determined by the Lax pair $L,P\colon O(\Lambda)\to\mathfrak{u}\left(\frac{N(N-1)}{2}\right)$, where
$$\displaystyle\frac{dL}{dt}+\big{[}L,P\big{]}=0,$$
for $L={\text{diag}}\Big{\{}L_{k}\ \Big{|}\ k=1,\ldots,N-1\Big{\}}$, and $P={\text{diag}}\Big{\{}P_{k}\ \Big{|}\ k=1,\ldots,N-1\Big{\}}$, such that
$$L_{k}=\Phi_{k},\ \ \mbox{ and }\ \ P_{k}=\nabla(F\circ\pi_{1}^{k})(\Phi_{k}),$$
with $k=1,\ldots,N-1$, $\pi_{1}^{k}\colon\mathfrak{u}(k)\to\mathfrak{u}(1)$ being the projection map, and $F\in C^{\infty}(\mathfrak{u}(1))$, see Theorem 4.1 for more details. A straightforward computation shows us that
$$\det\big{(}L-t\mathds{1}_{\frac{N(N-1)}{2}}\big{)}=\displaystyle\prod_{k=1}^{N-1}\det\big{(}L_{k}-t\mathds{1}_{k}\big{)}.$$
From this we have
$$\det\big{(}L_{k}-t\mathds{1}_{k}\big{)}=I_{0}^{(k)}(L_{k})t^{k}+I_{1}^{(k)}(L_{k})t^{k-1}+\ldots+I_{k-1}^{(k)}(L_{k})t+I_{k}^{(k)}(L_{k}).$$
(4.1)
These last two equations provide a large set of quantities in involution defined by the functions
$$H_{l}^{(k)}=I_{l}^{(k)}(L_{k}),\,\,\,\,\,1\leq k\leq N-1,\ \ \ 1\leq l\leq k,$$
hence we have a Lagrangian foliation, defined on an open dense subset of $O(\Lambda)$, which is generated by the vector fields
$$X_{H_{l}^{(k)}}={\text{ad}}(\nabla I_{l}^{(k)}(L_{k})),\ \ 1\leq k\leq N-1,\ 1%
\leq l\leq k,$$
for further details about the properties of collective Hamiltonians and their Hamiltonian vector fields see [6]. As we can see, the quantities in involution above are exactly the functions which define the well-known Gelfand-Tsetlin integrable system introduced in [14].
We notice that, since $L_{k}=A(L_{k})\Lambda(L_{k})A(L_{k})^{\ast}$, with $A(L_{k})\in{\rm{U}}(k)$, and
$$\Lambda(L_{k})={\text{diag}}(i\Lambda_{1}(L_{k}),\ldots,i\Lambda_{k}(L_{k})),$$
it follows that Equation 4.1 becomes
$$\det\big{(}L_{k}-t\mathds{1}_{k}\big{)}=\displaystyle\prod_{j=1}^{k}\big{(}i%
\Lambda_{j}\big{(}L_{k})-t\big{)}=\big{(}i\Lambda_{1}(L_{k})-t\big{)}\cdots%
\big{(}i\Lambda_{k}(L_{k})-t\big{)}.$$
Thus, we have
$$\det\left(L-t\mathds{1}_{\frac{N(N-1)}{2}}\right)=\displaystyle\prod_{k=1}^{N-1}\big{(}i\Lambda_{1}(L_{k})-t\big{)}\cdots\big{(}i\Lambda_{k}(L_{k})-t\big{)},$$
and if we denote
$$\lambda_{k}^{(j)}=\Lambda_{j}(L_{k}),$$
for $1\leq j\leq k$, $1\leq k\leq N-1$, we have the following inequalities
$\lambda_{k}^{(1)}(X)\geq\lambda_{k-1}^{(1)}(X)\geq\lambda_{k}^{(2)}(X)\geq%
\ldots\geq\lambda_{k-1}^{(k-1)}(X)\geq\lambda_{k}^{(k)}(X)$
for every $X\in O(\Lambda)$, $1\leq k\leq N-1$. These inequalities are essentially the Gelfand-Tsetlin pattern, cf. [14].
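The interlacing inequalities above are an instance of Cauchy's interlacing theorem for the leading principal submatrices of a Hermitian matrix, and can be checked numerically. In the sketch below (Python with NumPy) we use a random Hermitian matrix as a stand-in for $-iX$, $X\in O(\Lambda)$, and read the projection to $\mathfrak{u}(k)$ as the leading $k\times k$ block, consistent with the block diagonal embeddings used throughout:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 4
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (X + X.conj().T) / 2  # Hermitian stand-in for -iX, X in the orbit

# lam[k] = eigenvalues of the leading k x k principal submatrix of H,
# in decreasing order -- playing the role of the Lambda_j(L_k).
lam = {k: np.sort(np.linalg.eigvalsh(H[:k, :k]))[::-1] for k in range(1, N + 1)}

# Cauchy interlacing: lam[k][j] >= lam[k-1][j] >= lam[k][j+1].
tol = 1e-10
ok = all(
    lam[k][j] + tol >= lam[k - 1][j] >= lam[k][j + 1] - tol
    for k in range(2, N + 1) for j in range(k - 1)
)
print(ok)  # True: the Gelfand-Tsetlin pattern holds
```

The triangular array $\{\lam[k][j]\}$ printed level by level is exactly a Gelfand-Tsetlin pattern for the spectrum of $H$.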
Now, we take a look at the content of Corollary 4.3. In this concrete case of $G={\rm{U}}(N)$ we have the Gelfand-Tsetlin spectrum $\sigma(L)$ given by
$$\sigma(L)=\Big{\{}H_{l}^{(k)}=I_{l}^{(k)}(L_{k})\ \Big{|}\ 1\leq k\leq N-1,\ 1%
\leq l\leq k\Big{\}}.$$
As we have seen previously, there is a close relationship between the set of functions above and the Gelfand-Tsetlin pattern; in fact, the functions which define the spectrum of the matrix $L\colon O(\Lambda)\to\mathfrak{u}\left(\frac{N(N-1)}{2}\right)$ are the action coordinates for the integrable system defined by the elements of the set above (see [21, p. 589] for the definition of action coordinates, and [14, p. 113, Theorem 3.4] for the properties satisfied by the Hamiltonian vector fields of the functions defined by the eigenvalues of $L$).
As we have mentioned before, inside $\sigma(L)$ we have a set of quantities in involution, namely,
$$H_{l_{1}}^{(k_{1})},\ldots,H_{l_{d}}^{(k_{d})},$$
where we are assuming $\dim_{\mathbb{R}}(O(\Lambda))=2d$, such that
$$\mathscr{H}:=\big{(}H_{l_{1}}^{(k_{1})},\ldots,H_{l_{d}}^{(k_{d})}\big{)}%
\colon(O(\Lambda),\omega_{O(\Lambda)})\to\mathbb{R}^{d},$$
defines an integrable system on $(O(\Lambda),\omega_{O(\Lambda)})$. If we denote by $O(\Lambda)^{\mathscr{H}}$ the open dense subset of $O(\Lambda)$ where the map above defines a submersion, we can apply Liouville’s theorem in order to get coordinates $(\widetilde{q},\widetilde{p},W)$, where $W\subset O(\Lambda)^{\mathscr{H}}$ denotes an open subset, and
$$\widetilde{q}_{i}(q,p)=q_{i}-\mathscr{H}^{\ast}(\alpha_{i}),\ \ \ {\text{and}}%
\ \ \ \widetilde{p}_{i}(q,p)=p_{i},$$
(4.2)
such that
$$\partial_{q_{i}}=X_{H_{l_{i}}^{(k_{i})}}={\text{ad}}\big{(}\nabla I_{l_{i}}^{(k_{i})}(L_{k_{i}})\big{)},\ \ \ {\text{and}}\ \ \ p_{i}=H_{l_{i}}^{(k_{i})}=I_{l_{i}}^{(k_{i})}(L_{k_{i}}),$$
(4.3)
for $i=1,\ldots,d$. Besides, we have $\alpha_{1},\ldots,\alpha_{d}\in C^{\infty}\big{(}\mathscr{H}(W)\big{)}$ determined by a suitable $1$-form
$$\displaystyle\sum_{i}\alpha_{i}dx_{i}\in\Omega^{1}\big{(}\mathscr{H}(W)\big{)},$$
see for instance [19, Theorem 11.3.1]. The important features of these last coordinates are that
$$\omega_{O(\Lambda)}|_{W}=\displaystyle\sum_{i=1}^{d}d\widetilde{p}_{i}\wedge d%
\widetilde{q}_{i},$$
and if we denote by $\phi_{t}^{(l_{i},k_{i})}(\widetilde{q},\widetilde{p})$ the Hamiltonian flow of $X_{H_{l_{i}}^{(k_{i})}}$ through the point $(\widetilde{q},\widetilde{p})\in W$, we have
$$\phi_{t}^{(l_{i},k_{i})}(\widetilde{q},\widetilde{p})=\big{(}\widetilde{q}_{1},\ldots,\widetilde{q}_{i}+t,\widetilde{q}_{i+1},\ldots,\widetilde{q}_{d},\widetilde{p}_{1},\ldots,\widetilde{p}_{d}\big{)}.$$
Now, let us perform some concrete computations by following the ideas above on the basic example which we have examined in the previous sections.
Example 4.3.
Consider the Hamiltonian ${\rm{U}}(4)$-space $(O(\Lambda),\omega_{O(\Lambda)},{\rm{U}}(4),\Phi)$ as in Example 4.2. As we have seen, by taking $F\in C^{\infty}(\mathfrak{u}(1))$ we obtain a Hamiltonian system $(O(\Lambda),\omega_{O(\Lambda)},\Phi_{1}^{\ast}(F))$ and an associated set of Lax equations
$$\displaystyle\frac{dL_{k}}{dt}+\big{[}L_{k},P_{k}\big{]}=0,$$
where $L_{k}=\Phi_{k}$, and $P_{k}=\nabla(F\circ\pi_{1}^{k})(\Phi_{k})$, for $k=1,2,3$. These equations allow us to define a Lax pair $L,P\colon O(\Lambda)\to\mathfrak{u}(6)$ given by
$$L={\text{diag}}\Big{\{}L_{k}\ \Big{|}\ k=1,2,3\Big{\}},\ \ \ \mbox{ and }\ \ %
\ P={\text{diag}}\Big{\{}P_{k}\ \Big{|}\ k=1,2,3\Big{\}}.$$
Thus, we have an associated zero curvature equation $\frac{d}{dt}L+[L,P]=0$ which encodes everything about the well-known Gelfand-Tsetlin integrable system. In fact, as we have described in this section, we can recover the Gelfand-Tsetlin integrable system by means of the characteristic polynomial
$$\det\big{(}L-t\mathds{1}_{6}\big{)}=\det\big{(}L_{1}-t\big{)}\det\big{(}L_{2}-%
t\mathds{1}_{2}\big{)}\det\big{(}L_{3}-t\mathds{1}_{3}\big{)}.$$
(4.4)
Notice that Equation 4.4 provides the following description for the quantities in involution involved in the construction of the Gelfand-Tsetlin integrable system:
•
$\det\big{(}L_{1}-t\big{)}=L_{1}-t,$
•
$\det\big{(}L_{2}-t\mathds{1}_{2}\big{)}=t^{2}-{\text{Tr}}(L_{2})t+\det(L_{2}),$
•
$\det\big{(}L_{3}-t\mathds{1}_{3}\big{)}=-t^{3}+{\text{Tr}}(L_{3})t^{2}-%
\displaystyle\frac{1}{2}\Big{[}{\text{Tr}}(L_{3})^{2}-{\text{Tr}}(L_{3}^{2})%
\Big{]}t+\det(L_{3})$.
Hence, the relation between the eigenvalues of $L$ and the quantities in involution given by the coefficients of the characteristic polynomial can be described from the following expressions:
•
$L_{1}=i\Lambda_{1}(L_{1})$;
•
${\text{Tr}}(L_{2})=i\big{(}\Lambda_{1}(L_{2})+\Lambda_{2}(L_{2})\big{)}$, $\det(L_{2})=-\Lambda_{1}(L_{2})\Lambda_{2}(L_{2})$;
•
${\text{Tr}}(L_{3})=i\big{(}\Lambda_{1}(L_{3})+\Lambda_{2}(L_{3})+\Lambda_{3}(L_{3})\big{)}$, ${\text{Tr}}(L_{3})^{2}=-\big{(}\Lambda_{1}(L_{3})+\Lambda_{2}(L_{3})+\Lambda_{3}(L_{3})\big{)}^{2}$;
•
${\text{Tr}}(L_{3}^{2})=-\big{(}\Lambda_{1}(L_{3})^{2}+\Lambda_{2}(L_{3})^{2}+\Lambda_{3}(L_{3})^{2}\big{)}$, $\det(L_{3})=-\Lambda_{1}(L_{3})\Lambda_{2}(L_{3})\Lambda_{3}(L_{3})$.
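The trace and determinant relations for the $2\times 2$ block can be verified numerically. The sketch below builds a skew-Hermitian $L_{2}$ with prescribed spectrum $\{i\Lambda_{1},i\Lambda_{2}\}$ (the numerical values are made up for illustration):

```python
import numpy as np

# A skew-Hermitian L_2 with eigenvalues i*Lam1, i*Lam2: conjugate the diagonal
# matrix diag(i*Lam1, i*Lam2) by an orthogonal U.  Values are illustrative.
Lam1, Lam2 = 2.0, 0.5
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
L2 = U @ np.diag([1j * Lam1, 1j * Lam2]) @ U.conj().T

# Tr(L_2) = i*(Lam1 + Lam2) and det(L_2) = -Lam1*Lam2, as in the text.
assert np.isclose(np.trace(L2), 1j * (Lam1 + Lam2))
assert np.isclose(np.linalg.det(L2), -Lam1 * Lam2)
print("trace/determinant relations verified")
```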
The functions on the right-hand side of the equations above satisfy the following interlacing relations:
$\lambda_{1}\geq\Lambda_{1}(L_{3})\geq\lambda_{2}\geq\Lambda_{2}(L_{3})\geq\lambda_{3}\geq\Lambda_{3}(L_{3})\geq\lambda_{4}$,
$\Lambda_{1}(L_{3})\geq\Lambda_{1}(L_{2})\geq\Lambda_{2}(L_{3})\geq\Lambda_{2}(L_{2})\geq\Lambda_{3}(L_{3})$,
$\Lambda_{1}(L_{2})\geq\Lambda_{1}(L_{1})\geq\Lambda_{2}(L_{2})$.
Here, as before, we assume $\Lambda={\text{diag}}(i\lambda_{1},i\lambda_{2},i\lambda_{3},i\lambda_{4})$, with $\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq\lambda_{4}$. If we consider, for instance, $\lambda_{1}>\lambda_{2}>\lambda_{3}>\lambda_{4}$, that is, if we take $O(\Lambda)$ to be a regular orbit, the comments at the beginning of this section tell us that the function $\mathscr{H}\colon O(\Lambda)\to\mathbb{R}^{6}$ given by $\mathscr{H}=\big{(}H_{1}^{(1)},H_{1}^{(2)},H_{2}^{(2)},H_{1}^{(3)},H_{2}^{(3)},H_{3}^{(3)}\big{)}$, such that
$$\mathscr{H}=\big{(}L_{1},-{\text{Tr}}(L_{2}),\det(L_{2}),{\text{Tr}}(L_{3}),-\frac{1}{2}\Big{[}{\text{Tr}}(L_{3})^{2}-{\text{Tr}}(L_{3}^{2})\Big{]},\det(L_{3})\big{)},$$
defines the Gelfand-Tsetlin integrable system. In this setting, we can apply Liouville's theorem, as explained previously, in order to get coordinates $(\widetilde{q},\widetilde{p},W)$, where $W\subset O(\Lambda)^{\mathscr{H}}$ denotes an open subset, such that
$$\widetilde{q}_{l}(q,p)=q_{l}-\mathscr{H}^{\ast}(\alpha_{l}),\ \ \ \mbox{ and }\ \ \ \ \widetilde{p}_{l}(q,p)=p_{l},$$
where
$$\partial q_{l}=X_{H_{l}^{(k)}}={\text{ad}}\big{(}\nabla I_{l}^{(k)}(L_{k})\big{)},\ \ \ \mbox{ and }\ \ \ p_{l}=H_{l}^{(k)}=I_{l}^{(k)}(L_{k}),$$
for $1\leq l\leq n$, and $1\leq k\leq 3$. Here we are using the following convention
$$\det(L_{k}-t\mathds{1}_{k})=I_{0}^{(k)}(L_{k})t^{k}+I_{1}^{(k)}(L_{k})t^{k-1}+\ldots+I_{k-1}^{(k)}(L_{k})t+I_{k}^{(k)}(L_{k}),$$
for $k=1,2,3$. This example shows how the ideas involved in the construction of the Gelfand-Tsetlin system [14] and our approach via the Lax matrix fit together in the same framework.
References
[1]
Besse, A.; Einstein Manifolds. Springer, Berlin Heidelberg New York (1987; reprinted 2007).
[2]
Chari, V.; Pressley, A. N.; A Guide to Quantum Groups, Cambridge University Press; Reprint edition (1995).
[3]
Drozd, Yu. A.; Futorny, V.; Ovsienko, S. A.; Harish-Chandra subalgebras and Gelfand-Zetlin modules. Finite Dimensional Algebras and Related Topics, Series, Math. and Phys. Sci., v. 424, p. 79-93 (1992).
[4]
Faddeev, L. D.; Instructive history of the quantum inverse scattering method. In: Quantum field theory: perspective and prospective (Les Houches, 1998), 161–176, NATO Sci. Ser. C Math. Phys. Sci., 530, Kluwer Acad. Publ., Dordrecht, 1999.
[5]
Futorny, V.; Grantcharov, D.; Ramirez, L. E.; Singular Gelfand-Tsetlin modules of gl(n). Advances in Mathematics (2016).
[6]
Guillemin, V.; Sternberg, S.; The moment map and collective motion. Ann. Phys. 127 (1980), 220-253.
[7]
Marsden, J. E.; Ratiu, T.; Introduction to Mechanics and Symmetry: A Basic Exposition of Classical Mechanical Systems, Texts in Applied Mathematics, Springer, 2nd edition (2002).
[8]
Chari, V.; Pressley, A. N.; A Guide to Quantum Groups, Cambridge University Press (1995).
[9]
Babelon, O.; Bernard, D.; Talon, M.; Introduction to Classical Integrable Systems, Cambridge Monographs on Mathematical Physics (2007).
[10]
Guillemin, V.; Sternberg, S.; Geometric Quantization and Multiplicities of Group Representations, Invent. Math. 67, 515–538 (1982).
[11]
Guillemin, V.; Sternberg, S.; On the collective complete integrability according to the method of Thimm. Ergodic Theory 3, 219-230 (1983).
[12]
Guillemin, V.; Sternberg, S.; Symplectic Techniques in Physics, Cambridge University Press (1990).
[13]
Guillemin, V.; Sternberg, S.; Moments and Reductions, Differential Geometric Methods in Mathematical Physics: Proceedings of a Conference Held at the Technical University of Clausthal, FRG, 23-25 (1980).
[14]
Guillemin, V.; Sternberg, S.; The Gelfand-Cetlin system and quantization of the complex flag manifolds, J. Funct. Anal. 52, 106-128 (1983).
[15]
Harada, M.; The symplectic geometry of the Gelfand-Cetlin-Molev basis for representations of ${\rm{Sp}}(2n,\mathbb{C})$. Thesis (Ph. D. in Mathematics), University of California, Berkeley, Spring 2003.
[16]
Kazhdan, D.; Kostant, B.; Sternberg, S.; Hamiltonian group actions and dynamical systems of Calogero type, Comm. Pure Appl. Math. 31, 481-508 (1978).
[17]
Marsden, J. E.; Weinstein, A.; Reduction of symplectic manifolds with symmetry, Rep. Mathematical Phys. 5 (1974), 121-130.
[18]
Thimm, A.; Integrable geodesic flows on homogeneous spaces, Ergodic Theory and Dynamical Systems 1 (1981), 495-517.
[19]
Rudolph, G. ; Schmidt, M. ; Differential Geometry and Mathematical Physics - Part I: Manifolds, Lie Groups and Hamiltonian Systems, Springer-Verlag (2013).
[20]
San Martin, Luiz A. B.; Grupos de Lie. Editora Unicamp (2013).
[21]
Rudolph, G.; Schmidt, M.; Differential Geometry and Mathematical Physics - Part I: Manifolds, Lie Groups and Hamiltonian Systems, Springer-Verlag (2013).
[22]
Sklyanin, E. K.; On complete integrability of the Landau–Lifschitz equation. Preprint LOMI E-3-79, Leningrad, 1979.
[23]
Warner, F. W.; Foundations of Differentiable Manifolds and Lie Groups, Graduate Texts in Mathematics (Book 94); Springer (1983). |
Electron and muon $(g-2)$ in the B-LSSM
Jin-Lei Yang${}^{1,2,3}$, Tai-Fu Feng${}^{1,2,4}$, Hai-Bin Zhang${}^{1,2}$
[email protected], [email protected], [email protected]
Department of Physics, Hebei University, Baoding, 071002, China${}^{1}$
Key Laboratory of High-precision Computation and Application of Quantum Field Theory of Hebei Province, Baoding, 071002, China${}^{2}$
CAS Key Laboratory of Theoretical Physics, School of Physical Sciences,
University of Chinese Academy of Sciences, Beijing 100049, China${}^{3}$
Department of Physics, Chongqing University, Chongqing 401331, China${}^{4}$
Abstract
The theoretical predictions in the standard model (SM) and the measurements of the anomalous magnetic dipole moments (MDM) of the muon and electron have reached great precision; hence the MDMs of the muon and electron are closely related to new physics (NP) beyond the SM. Recently, a negative $\sim 2.4\sigma$ discrepancy between the measured electron MDM and the SM prediction emerged from an improved determination of the fine structure constant. Combined with the long-standing muon MDM discrepancy of about $\sim 3.7\sigma$, it is difficult to explain both the magnitudes and the opposite signs of the deviations in a consistent model without introducing large flavour-violating effects. Our analysis shows that they can be explained in the minimal supersymmetric extension (MSSM) of the SM with local $B-L$ gauge symmetry (B-LSSM). Compared with the MSSM, new parameters in the B-LSSM can affect the theoretical predictions of the lepton MDMs, and their effects are explored.
MDM, electron, muon, B-LSSM
I Introduction
The anomalous magnetic dipole moment (MDM) of a lepton, $a_{l}$ Schwinger:1948iu , is one of the most precisely measured and calculated quantities in elementary particle physics, and it provides one of the strongest tests of the SM. For the muon MDM, the discrepancy between the measurement and the SM prediction has persisted for a long time; it may be a hint of new physics (NP) and reads Bennett:2006fi ; Blum:2018mom
$$\displaystyle\bigtriangleup a_{\mu}\equiv a_{\mu}^{exp}-a_{\mu}^{SM}=(2.74\pm 0.73)\times 10^{-9}.$$
(1)
In addition, $a_{\mu}$ is being measured at Fermilab and J-PARC, and the upcoming results are expected to have a better accuracy.
However, a negative $\sim 2.4\sigma$ discrepancy between the measured electron MDM and the SM prediction has appeared, due to a recent precise measurement of the fine structure constant; before this measurement, the electron MDM was consistent with the SM prediction. The negative $\sim 2.4\sigma$ discrepancy reads Hanneke:2008tm ; Parker:2018vye
$$\displaystyle\bigtriangleup a_{e}\equiv a_{e}^{exp}-a_{e}^{SM}=-(8.8\pm 3.6)\times 10^{-13}.$$
(2)
It is obvious that the signs of $\bigtriangleup a_{\mu}$ and $\bigtriangleup a_{e}$ are opposite. Moreover, even when NP effects are included, in the absence of flavor violation in the lepton sector the MDMs of the muon and electron are related by
$$\displaystyle\frac{\bigtriangleup a_{\mu}}{\bigtriangleup a_{e}}\simeq m_{\mu}^{2}/m_{e}^{2}\simeq 4.2\times 10^{4},$$
(3)
so both the sign and the magnitude of this naive scaling are in conflict with the observed deviations.
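The conflict can be made concrete with a short numerical sketch comparing the flavor-blind scaling of Eq. (3) with the measured central values of Eqs. (1) and (2) (lepton masses are the standard PDG values):

```python
# Compare the naive flavor-blind NP scaling of Eq. (3) with the measured
# central values of the two discrepancies (Eqs. (1) and (2)).
m_mu = 0.1056583745   # muon mass in GeV (PDG value)
m_e = 0.000510998946  # electron mass in GeV (PDG value)

naive_ratio = (m_mu / m_e) ** 2          # expected Delta a_mu / Delta a_e
observed_ratio = 2.74e-9 / (-8.8e-13)    # central values of Eqs. (1), (2)

print(f"naive ratio    = {naive_ratio:.3e}")    # ~ 4.3e4, positive
print(f"observed ratio = {observed_ratio:.3e}")  # ~ -3.1e3, negative
```

The predicted ratio is large and positive, while the observed one is of much smaller magnitude and negative, so a flavor-universal NP contribution cannot fit both.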
Among extensions of the SM, supersymmetry is considered one of the most plausible candidates, and the discrepancies in $\bigtriangleup a_{\mu}$ and $\bigtriangleup a_{e}$ have been studied exhaustively. The results show that the discrepancies can be explained by requiring new sources of flavour violation Giudice:2012ms ; Crivellin:2018qmi ; Dutta:2018fge , by introducing a single CP-even scalar with sub-GeV mass that couples differently to muons and electrons Davoudiasl:2018fbb , by introducing a light complex scalar charged under a global $U(1)$ under which the electron is also charged but the muon is not Liu:2018xkx , by introducing axion-like particles with lepton-flavour-violating couplings Bauer:2019gfk , or by requiring smuons much heavier than selectrons so as to arrange the sizes of the bino-slepton and chargino-sneutrino contributions differently between the electron and muon sectors Badziak:2019gaf . In this work, we show that, in the MSSM with local $B-L$ gauge symmetry (B-LSSM) FileviezPerez:2008sx ; 5 ; 6 , without introducing explicit flavor mixing or requiring smuons much heavier than selectrons, appropriate values of the trilinear scalar term $T_{e}$ in the soft supersymmetry-breaking potential, the slepton mass term $M_{E}$ and $\tan\beta$ can also account for the discrepancies. In addition, with respect to the MSSM, the effects of the new parameters of the B-LSSM are also explored.
It is generally believed that the SM is only the low-energy approximation of a more fundamental, unified theory. When the $B-L$ symmetry Pati:1974yy ; Weinberg:1979sa ; Davidson:1978pm ; Mohapatra:1980qe ; Marshak:1979fm ; Wetterich:1981bx is introduced, where $B$ and $L$ denote the baryon and lepton numbers respectively, the corresponding heavy neutral vector boson can be considered a possible remnant of unification Buchmuller:1991ce . The cosmological baryon asymmetry at temperatures much below the grand unified mass with spontaneously broken local $B-L$ symmetry is analyzed in Refs. Masiero:1982fi ; Mohapatra:1982xz . In this work, we focus on the B-LSSM, which is obtained by extending the MSSM with local $B-L$ gauge symmetry. Compared with the MSSM, the gauge group of the B-LSSM is extended to $SU(3)\otimes SU(2)_{L}\otimes U(1)_{Y}\otimes U(1)_{B-L}$. The invariance under the additional gauge group $U(1)_{B-L}$ imposes the R-parity conservation which is assumed in the MSSM to avoid proton decay, and R-parity conservation can be maintained if the $U(1)_{B-L}$ symmetry is broken spontaneously Das:2017flq . The $U(1)_{B-L}$ symmetry is broken by two additional Higgs singlets that carry $B-L$ charge, and large Majorana masses for the right-handed neutrinos are generated by these Higgs fields. Combined with the Dirac mass terms, the three neutrinos obtain tiny masses by the see-saw mechanism, which explains the tiny neutrino masses naturally Khalil:2006yi . The model can also help to understand the origin of R-parity and its possible spontaneous violation in supersymmetric models Ashtekar:2007em ; Barger:2008wn ; Dulaney:2010dj . Since the $B-L$ symmetry is radiatively broken at the TeV scale, the model can implement soft leptogenesis naturally Babu:2009pi ; Pelto:2010vq .
In addition, there are many more candidates for the dark matter (DM) than in the MSSM: new neutralinos corresponding to the gauginos of $U(1)_{B-L}$ and the additional Higgs singlets, as well as CP-even and CP-odd sneutrinos; the relic density and annihilations of these new DM candidates have been studied in Refs. 16 ; 1616 ; DelleRose:2017ukx ; DelleRose:2017uas . Since both the additional Higgs singlets and the right-handed (s)neutrinos release additional parameter space from the LEP, Tevatron and LHC constraints, the little hierarchy problem of the MSSM is also alleviated search ; 77 ; 88 ; 9 ; 99 ; 10 ; 11 .
The paper is organized as follows. In Sec. II, the B-LSSM and the contributions to $\bigtriangleup a_{l}^{NP}$ are discussed briefly. Then, in Sec. III, we explore the effects of $T_{e}$, $M_{E}$, $\tan\beta$ and the new parameters of the B-LSSM on $\bigtriangleup a_{\mu,e}^{NP}$ by varying their values. Conclusions are summarized in Sec. IV.
II B-LSSM and the contributions to $\bigtriangleup a_{l}^{NP}$
In the B-LSSM, the dominant contributions to the lepton MDMs at the one-loop level come from the chargino-sneutrino loop (charginos and sneutrinos as loop particles) and the neutralino-slepton loop (neutralinos and sleptons as loop particles). The lepton MDM can then be written as $a=a^{n}+a^{c}$, where $a^{n}$ denotes the contribution from the neutralino-slepton loop and $a^{c}$ the contribution from the chargino-sneutrino loop. In our previous work Yang:2018guw , we discussed the muon MDM, including some two-loop Barr-Zee type diagrams; the results show that the two-loop Barr-Zee type diagrams can make important corrections to the muon MDM. In this work, we also consider the two-loop Barr-Zee type corrections; the corresponding one-loop and two-loop diagrams are depicted in Fig. 1 and Fig. 2 respectively. In the following analysis, we adopt the formulas of our previous work. In this section, we present the dominant differences between the B-LSSM and the MSSM, and the new contributions to the lepton MDMs in the B-LSSM are discussed.
In the B-LSSM, the chiral superfields and their quantum numbers are listed in Table 1.
From the table we can see that two chiral singlet superfields $\hat{\eta}_{1}$, $\hat{\eta}_{2}$ and three generations of right-handed neutrinos are introduced in the B-LSSM, which allows for a spontaneously broken $U(1)_{B-L}$ without necessarily breaking R-parity. And the superpotential of the B-LSSM can be written as
$$\displaystyle W=W^{MSSM}+Y_{\nu,ij}\hat{L_{i}}\hat{H_{2}}\hat{\nu}_{j}-\mu^{\prime}\hat{\eta}_{1}\hat{\eta}_{2}+Y_{x,ij}\hat{\nu}_{i}\hat{\eta}_{1}\hat{\nu}_{j},$$
(4)
where $W^{MSSM}$ is the superpotential of the MSSM. There is a $\bigtriangleup L=2$ trilinear term $Y_{x,ij}\hat{\nu}_{i}\hat{\eta}_{1}\hat{\nu}_{j}$ in the B-LSSM, which leads to a splitting between the real and imaginary parts of the sneutrino. As a result, there are twelve states in the sneutrino sector: six scalar sneutrinos and six pseudoscalar ones Hirsch:1997vz ; Grossman:1997is . Eq. (4) shows that the right-handed neutrinos obtain large Majorana masses, since the expected size of $u_{1,2}$ is $\sim 10\;{\rm TeV}$, while the Dirac masses are generated by the terms $Y_{\nu,ij}\hat{L_{i}}\hat{H_{2}}\hat{\nu}_{j}$. The three neutrinos then obtain tiny masses naturally by the see-saw mechanism, and the neutrino Yukawa couplings do not have to be tiny to be in accord with the neutrino mass limits. In addition, the sneutrino masses are enlarged by the additional superpartners of the right-handed neutrinos in the B-LSSM, which suppresses the contributions to the lepton MDMs from the chargino-sneutrino loop, in accordance with the decoupling theorem. The soft breaking terms of the B-LSSM are then generally given as
$$\displaystyle\mathcal{L}_{soft}=\mathcal{L}_{soft}^{MSSM}+\Big{[}-\frac{1}{2}(2M_{BB^{\prime}}\tilde{\lambda}_{B^{\prime}}\tilde{\lambda}_{B}+M_{B^{\prime}}\tilde{\lambda}_{B^{\prime}}\tilde{\lambda}_{B^{\prime}})-B_{\mu^{\prime}}\tilde{\eta}_{1}\tilde{\eta}_{2}+T_{\nu}^{ij}H_{2}\tilde{\nu}_{i}^{c}\tilde{L}_{j}+T_{x}^{ij}\tilde{\eta}_{1}\tilde{\nu}_{i}^{c}\tilde{\nu}_{j}^{c}+h.c.\Big{]}-m_{\tilde{\eta}_{1}}^{2}|\tilde{\eta}_{1}|^{2}-m_{\tilde{\eta}_{2}}^{2}|\tilde{\eta}_{2}|^{2},$$
(5)
where $\mathcal{L}_{soft}^{MSSM}$ is the soft breaking terms of the MSSM, $\tilde{\lambda}_{B},\tilde{\lambda}_{B^{\prime}}$ represent the gauginos of $U(1)_{Y}$, $U(1)_{(B-L)}$ correspondingly, and $M_{B^{\prime}}$ is the $B^{\prime}$ gaugino mass. Compared with the MSSM, there are three additional neutralinos in the B-LSSM, which can make contributions to lepton MDMs through the neutralino-slepton loop, and the two-loop Barr-Zee type diagrams shown in Fig. 2(a), (b). In addition, as the Higgs fields receive vacuum expectation values Yang:2018utw :
$$\displaystyle H_{1}^{1}=\frac{1}{\sqrt{2}}(v_{1}+{\rm Re}H_{1}^{1}+i{\rm Im}H_{1}^{1}),\qquad\;H_{2}^{2}=\frac{1}{\sqrt{2}}(v_{2}+{\rm Re}H_{2}^{2}+i{\rm Im}H_{2}^{2}),$$
$$\displaystyle\tilde{\eta}_{1}=\frac{1}{\sqrt{2}}(u_{1}+{\rm Re}\tilde{\eta}_{1}+i{\rm Im}\tilde{\eta}_{1}),\qquad\;\quad\;\tilde{\eta}_{2}=\frac{1}{\sqrt{2}}(u_{2}+{\rm Re}\tilde{\eta}_{2}+i{\rm Im}\tilde{\eta}_{2})\;,$$
(6)
the local gauge symmetry $SU(2)_{L}\otimes U(1)_{Y}\otimes U(1)_{B-L}$ breaks down to the electromagnetic symmetry $U(1)_{em}$. It is convenient to define $u^{2}=u_{1}^{2}+u_{2}^{2}$, $v^{2}=v_{1}^{2}+v_{2}^{2}$ and $\tan\beta^{\prime}=\frac{u_{2}}{u_{1}}$, in analogy to the ratio of the MSSM VEVs ($\tan\beta=\frac{v_{2}}{v_{1}}$). $\tan\beta^{\prime}$ appears in the slepton mass matrix, which indicates that $\tan\beta^{\prime}$ can affect the numerical results through the neutralino-slepton loop by affecting the slepton masses.
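As a small numerical sketch, the individual VEVs $u_{1},u_{2}$ can be recovered from $u$ and $\tan\beta^{\prime}$; the values used here ($u=10\;{\rm TeV}$, matching the expected scale quoted above, and $\tan\beta^{\prime}=1.15$, a benchmark used later) are illustrative:

```python
import math

# Recover the individual VEVs u1, u2 from u^2 = u1^2 + u2^2 and
# tan(beta') = u2/u1.  Inputs are illustrative, not fitted values.
u = 10.0e3          # GeV, assumed overall B-L breaking scale
tan_beta_p = 1.15

u1 = u / math.sqrt(1.0 + tan_beta_p**2)
u2 = tan_beta_p * u1

print(u1, u2)               # individual VEVs in GeV
print(math.hypot(u1, u2))   # reproduces u
print(u2 / u1)              # reproduces tan(beta')
```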
In the B-LSSM, there is a new gauge group $U(1)_{B-L}$, which introduces a new gauge coupling constant $g_{{}_{B}}$ and a new gauge boson $Z^{\prime}$. The updated experimental data newZ show that the new gauge boson mass satisfies $M_{Z^{\prime}}\geq 4.05\;{\rm TeV}$ at $95\%$ Confidence Level (CL), and an upper bound on the ratio between $M_{Z^{\prime}}$ and $g_{{}_{B}}$ at $99\%$ CL is given in Refs. 20 ; 21 as $M_{Z^{\prime}}/g_{B}>6\;{\rm TeV}$. In addition, since there are two Abelian groups in the B-LSSM, and gauge invariance allows the Lagrangian to include a mixing term between the field strength tensors of the two Abelian gauge fields, a new effect arises in the B-LSSM: gauge kinetic mixing. The covariant derivatives can then be redefined as
$$\displaystyle D_{\mu}=\partial_{\mu}-i\left(\begin{array}[]{cc}Y,&B-L\end{array}\right)\left(\begin{array}[]{cc}g_{{}_{Y}},&g_{{}_{YB}}^{\prime}\\
g_{{}_{BY}}^{\prime},&g_{{}_{B-L}}\end{array}\right)\left(\begin{array}[]{c}A_{{}_{\mu}}^{\prime Y}\\
A_{{}_{\mu}}^{\prime BL}\end{array}\right)\;.$$
(7)
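The change of basis that brings the Abelian coupling matrix to triangular form, as used in Eq. (8) below, amounts to a single Givens rotation in two dimensions. The sketch below computes the orthogonal matrix $R$ for made-up coupling values (not values from the model):

```python
import math

# Rotate a generic 2x2 Abelian coupling matrix G into triangular form
# G' = G R^T with vanishing lower-left entry.  R is a Givens rotation chosen
# so that G'[1][0] = 0.  Coupling values are made up for illustration.
G = [[0.36, 0.10],
     [0.05, 0.40]]

# Require G[1][0]*cos(t) + G[1][1]*sin(t) = 0  =>  tan(t) = -G[1][0]/G[1][1]
t = math.atan2(-G[1][0], G[1][1])
R = [[math.cos(t), math.sin(t)],
     [-math.sin(t), math.cos(t)]]

# G' = G R^T
Gp = [[sum(G[i][k] * R[j][k] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(Gp)  # Gp[1][0] ~ 0; Gp[0][0] plays the role of g_1, Gp[0][1] of g_YB,
           # and Gp[1][1] of g_B in the triangular basis of Eq. (8)
```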
As long as the two Abelian gauge groups are unbroken, the basis can be changed as:
$$\displaystyle D_{\mu}=\partial_{\mu}-i\left(\begin{array}[]{cc}Y,&B-L\end{array}\right)\left(\begin{array}[]{cc}g_{{}_{Y}},&g_{{}_{YB}}^{\prime}\\
g_{{}_{BY}}^{\prime},&g_{{}_{B-L}}\end{array}\right)R^{T}R\left(\begin{array}[]{c}A_{{}_{\mu}}^{\prime Y}\\
A_{{}_{\mu}}^{\prime BL}\end{array}\right)$$
$$\displaystyle\quad\;\;=\partial_{\mu}-i\left(\begin{array}[]{cc}Y,&B-L\end{array}\right)\left(\begin{array}[]{cc}g_{{}_{1}},&g_{{}_{YB}}\\
0,&g_{{}_{B}}\end{array}\right)\left(\begin{array}[]{c}A_{{}_{\mu}}^{Y}\\
A_{{}_{\mu}}^{BL}\end{array}\right)$$
(8)
where $R$ is a $2\times 2$ orthogonal matrix. As a result, gauge mixing enters various kinetic terms of the Lagrangian through the redefined covariant derivatives, and interesting consequences of the gauge kinetic mixing arise in several sectors of the model. First, a new gauge coupling constant $g_{{}_{YB}}$ is introduced, and the new gauge boson $Z^{\prime}$ mixes with the $Z$ boson of the MSSM at tree level. Correspondingly, the new gaugino $\tilde{\lambda}_{B^{\prime}}$ also mixes with the bino at tree level, through the mixing mass term $M_{BB^{\prime}}$. The gauge kinetic mixing also leads to mixing among $H_{1}^{1},\;H_{2}^{2},\;\tilde{\eta}_{1},\;\tilde{\eta}_{2}$ at tree level, and $\tilde{\lambda}_{B^{\prime}}$ mixes with the two higgsinos of the MSSM, which means that the new gauge coupling constant $g_{{}_{YB}}$ can affect the numerical results through the neutralino-slepton loop. Meanwhile, additional D-terms contribute to the mass matrices of the sleptons. In the basis $(\tilde{L},\tilde{e}^{c})$, the slepton mass matrix is given by
$$\displaystyle m_{\tilde{e}}^{2}=\left(\begin{array}[]{cc}m_{eL}^{2},&\frac{1}{\sqrt{2}}(v_{1}T_{e}^{\dagger}-v_{2}\mu Y_{e}^{\dagger})\\
\frac{1}{\sqrt{2}}(v_{1}T_{e}-v_{2}\mu^{*}Y_{e}),&m_{eR}^{2}\end{array}\right),$$
(9)
$$\displaystyle m_{eL}^{2}=\frac{1}{8}\Big{[}2g_{{}_{B}}(g_{{}_{B}}+g_{{}_{YB}})(u_{1}^{2}-u_{2}^{2})+(g_{1}^{2}-g_{2}^{2}+g_{{}_{YB}}^{2}+2g_{{}_{B}}g_{{}_{YB}})(v_{1}^{2}-v_{2}^{2})\Big{]}+m_{\tilde{L}}^{2}+\frac{v_{1}^{2}}{2}Y_{e}^{\dagger}Y_{e},$$
$$\displaystyle m_{eR}^{2}=\frac{1}{24}\Big{[}2g_{{}_{B}}(g_{{}_{B}}+2g_{{}_{YB}})(u_{2}^{2}-u_{1}^{2})+2(g_{1}^{2}+g_{{}_{YB}}^{2}+2g_{{}_{B}}g_{{}_{YB}})(v_{2}^{2}-v_{1}^{2})\Big{]}+m_{\tilde{e}}^{2}+\frac{v_{1}^{2}}{2}Y_{e}^{\dagger}Y_{e}.$$
(10)
Note that $\tan\beta^{\prime}$ and the new gauge coupling constants $g_{{}_{B}}$, $g_{{}_{YB}}$ of the B-LSSM can affect the numerical results by affecting the slepton masses.
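A minimal numerical sketch of this dependence, for a single generation with the Yukawa terms neglected, evaluates the D-term pieces of Eq. (10); all numeric inputs (gauge couplings, $\tan\beta^{\prime}$, $u$, $v$, $\tan\beta$, the common soft mass) are assumptions chosen for illustration, not fitted values:

```python
import math

# Illustrative evaluation of the diagonal slepton mass-squared entries of
# Eq. (10) for one generation, neglecting the Yukawa terms.  All numbers
# below are assumptions for illustration only.
g1, g2, gB, gYB = 0.36, 0.65, 0.4, -0.4
tan_bp, u = 1.15, 10.0e3            # GeV (B-L sector)
tan_b, v = 30.0, 246.0              # GeV (electroweak sector)
ML2 = ME2 = (1.2e3) ** 2            # common soft mass squared, GeV^2

u1 = u / math.sqrt(1 + tan_bp**2);  u2 = tan_bp * u1
v1 = v / math.sqrt(1 + tan_b**2);   v2 = tan_b * v1

m_eL2 = (2*gB*(gB + gYB)*(u1**2 - u2**2)
         + (g1**2 - g2**2 + gYB**2 + 2*gB*gYB)*(v1**2 - v2**2)) / 8.0 + ML2
m_eR2 = (2*gB*(gB + 2*gYB)*(u2**2 - u1**2)
         + 2*(g1**2 + gYB**2 + 2*gB*gYB)*(v2**2 - v1**2)) / 24.0 + ME2

print(math.sqrt(m_eL2), math.sqrt(m_eR2))  # L/R slepton masses in GeV
```

Varying `gB`, `gYB` or `tan_bp` in this sketch shifts the two diagonal entries, which is the mechanism by which the new B-LSSM parameters feed into the neutralino-slepton loop.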
III Numerical analyses
The numerical results for $\bigtriangleup a_{\mu}^{NP}$ and $\bigtriangleup a_{e}^{NP}$ are displayed in this section. The relevant SM input parameters are chosen as $m_{W}=80.385\;{\rm GeV},\;m_{Z}=91.1876\;{\rm GeV},\;m_{e}=5.11\times 10^{-4}\;{\rm GeV},\;m_{\mu}=0.105\;{\rm GeV},\;\alpha_{em}(m_{Z})=1/128.9$. Since the tiny neutrino masses affect the numerical analysis negligibly, we take $Y_{\nu}=Y_{x}=0$ approximately.
Since the contribution of the heavy $Z^{\prime}$ boson is highly suppressed, we take $M_{Z^{\prime}}=4.2\;{\rm TeV}$ in the following analysis. In our previous work JLYang:2018 , the rare processes $\bar{B}\rightarrow X_{s}\gamma$ and $B_{s}^{0}\rightarrow\mu^{+}\mu^{-}$ were discussed in detail, and we take the charged Higgs boson mass $M_{H^{\pm}}=1.5\;{\rm TeV}$ to satisfy the experimental data on these processes. In addition, in order to satisfy the experimental constraints PDG , for the parameters in the higgsino, gaugino and sneutrino sectors we fix, for simplicity, $M_{1}=\frac{1}{2}M_{2}=\frac{1}{2}\mu=0.3\;{\rm TeV}$, $m_{\nu}=diag(1,1,1)\;{\rm TeV}$, $T_{x}=T_{\nu}=0.1\;{\rm TeV}$, where $m_{\nu}$ is the right-handed sneutrino soft mass matrix. All of the parameters fixed above affect the following numerical analysis negligibly. When the leading-log radiative corrections from the stop and top quarks are included HiggsC1 ; HiggsC2 ; HiggsC3 , the correct SM-like Higgs boson mass can be obtained with appropriate parameters in the squark sector, which is irrelevant to the theoretical predictions of the lepton MDMs. The nature of the DM candidate in the B-LSSM, the sneutrino, has been studied in Ref. DelleRose:2017uas ; the results show that the sneutrino masses in our chosen parameter space can yield the correct DM abundance. Furthermore, we take the soft breaking slepton mass matrix $m_{\tilde{L},\tilde{e}}=diag(M_{E},M_{E},M_{E})$ and the trilinear coupling matrix $T_{e}=diag(A_{L},A_{L},A_{L})$; note that the convention $T_{e}=A_{L}\times Y_{e}$ is not employed in our definition. In order to conveniently discuss the discrepancies between $\bigtriangleup a_{\mu}^{NP}$ and $\bigtriangleup a_{e}^{NP}$, we define
$$\displaystyle R_{\mu}=\frac{\bigtriangleup a_{\mu}^{NP}\times 10^{9}-2.74}{0.73},$$
(11)
$$\displaystyle R_{e}=\frac{\bigtriangleup a_{e}^{NP}\times 10^{13}+8.8}{3.6}.$$
(12)
Thus $R_{\mu,e}$ denote the deviations, in units of the experimental standard error, between the B-LSSM predictions and the experiments, and $R_{\mu,e}=0$ indicates that the theoretical predictions for $a_{\mu,e}$ sit at the corresponding experimental central values once the NP contributions are included.
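The pulls of Eqs. (11), (12) can be written as two small helper functions; the checks below reproduce the limits quoted in the text:

```python
# The pulls of Eqs. (11)-(12): deviation of the B-LSSM prediction from the
# experimental central value, in units of the experimental error.
def R_mu(delta_a_mu_np):
    return (delta_a_mu_np * 1e9 - 2.74) / 0.73

def R_e(delta_a_e_np):
    return (delta_a_e_np * 1e13 + 8.8) / 3.6

# Vanishing NP contributions recover the SM discrepancies quoted in the text:
# R_mu ~ -3.7 and R_e ~ +2.4.
print(R_mu(0.0), R_e(0.0))

# NP contributions matching the central values of Eqs. (1)-(2) give zero pull.
print(R_mu(2.74e-9), R_e(-8.8e-13))
```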
Then, taking $M_{B^{\prime}}=M_{BB^{\prime}}=0.6\;{\rm TeV}$, $\mu^{\prime}=0.8\;{\rm TeV}$, $g_{{}_{B}}=0.4$, $g_{{}_{YB}}=-0.4$, $\tan\beta^{\prime}=1.15$, $M_{E}=1.5\;{\rm TeV}$, we present $R_{\mu}$ (solid lines) and $R_{e}$ (dashed lines) versus $A_{L}$ in Fig. 3 for $\tan\beta=10,30,50$, where the gray area denotes the experimental $3\sigma$ interval. In the plots, we adopt $R_{\mu,e}$, defined in Eqs. (11), (12) respectively, as the $y$-axis; Eqs. (11), (12) show that $R_{\mu}\simeq-3.7$ and $R_{e}\simeq 2.4$ when $\bigtriangleup a_{\mu,e}^{NP}=0$. Combining Eqs. (9), (10) and the explicit one-loop expressions for the lepton MDM in our previous work Yang:2018guw , we can see that, leaving aside the suppression factor $m_{l}^{2}$, the dominant contribution from the neutralino-slepton loop $a^{n}$ is approximately proportional to $(vA_{L}/\tan\beta-\sqrt{2}\mu\tan\beta m_{l})/\Big{(}m_{l}\sqrt{M_{LR}^{2}+(vA_{L}/\tan\beta-\sqrt{2}\mu\tan\beta m_{l})^{2}}\Big{)}$, where $M_{LR}=(m_{eL}^{l2}-m_{eR}^{l2})/\sqrt{2}$, while the dominant contribution from the chargino-sneutrino loop $a^{c}$ is approximately proportional to $\tan\beta$. Hence, the contributions from $a^{n}$ are negative when $A_{L}$ is negative, and the sign of $a^{n}$ changes when $vA_{L}/\tan\beta>\sqrt{2}\mu\tan\beta m_{l}$. For $\bigtriangleup a_{e}^{NP}$, the dominant contributions come from $a^{n}$; hence the NP contributions to $\bigtriangleup a_{e}^{NP}$ are approximately negative when $vA_{L}/\tan\beta<\sqrt{2}\mu\tan\beta m_{l}$ and positive when $vA_{L}/\tan\beta>\sqrt{2}\mu\tan\beta m_{l}$. As we can see from the figure, the NP contributions to $\bigtriangleup a_{e}^{NP}$ are negative when $A_{L}\lesssim-0.02\;{\rm TeV}$ for $\tan\beta=10$, $A_{L}\lesssim-0.1\;{\rm TeV}$ for $\tan\beta=30$, $A_{L}\lesssim-0.3\;{\rm TeV}$ for $\tan\beta=50$, and positive when $A_{L}$ is larger than these values. It is also clear that the maximum value of $A_{L}$ for which the NP contributions to $\bigtriangleup a_{e}^{NP}$ are negative increases with increasing $\tan\beta$; this is because $a^{n}$ is suppressed by large $\tan\beta$ while $a^{c}$ is enhanced by it, and the signs of $a^{n}$ and $a^{c}$ are opposite in this case.
When $A_{L}=-3\;{\rm TeV}$ and $\tan\beta=10$, leaving aside the suppression factor $m_{l}^{2}$, the dominant contributions to $\bigtriangleup a_{\mu,e}^{NP}$ come from the neutralino sector; they are negative and carry an enhancement factor $1/m_{\mu,e}$, hence the contribution to $\bigtriangleup a_{e}^{NP}$ is larger than that to $\bigtriangleup a_{\mu}^{NP}$. As we can see from the figure, $\bigtriangleup a_{\mu}^{NP}$ receives quite small, negative contributions when $A_{L}=-3\;{\rm TeV}$, $\tan\beta=10$, while $\bigtriangleup a_{e}^{NP}$ receives quite large, negative contributions. In addition, when $A_{L}=-3\;{\rm TeV}$, $\tan\beta=30,50$, the contributions from $a^{n}$ carry a suppression factor $1/\tan\beta$, while the contributions from $a^{c}$ are enhanced by large $\tan\beta$. For $\bigtriangleup a_{e}^{NP}$, $a^{n}$ is vastly enhanced by $1/m_{e}$; hence even though $a^{n}$ is suppressed by $1/\tan\beta$ and $a^{c}$ is enhanced by $\tan\beta$, the contribution from $a^{n}$ is still larger than $a^{c}$. As we can see from the figure, $\bigtriangleup a_{e}^{NP}$ is negative and decreases with increasing $\tan\beta$ when $A_{L}=-3\;{\rm TeV}$. For $\bigtriangleup a_{\mu}^{NP}$, however, the enhancement factor of $a^{n}$ is $1/m_{\mu}<1/m_{e}$, hence the contributions from $a^{c}$ are larger than those from $a^{n}$ when $\tan\beta=30,50$, and $\bigtriangleup a_{\mu}^{NP}$ receives positive contributions in this case. Note also that $R_{\mu}\approx R_{e}$ for $\tan\beta=30,50$ does not indicate $\bigtriangleup a_{\mu}^{NP}\approx\bigtriangleup a_{e}^{NP}$: leaving aside the suppression factor $m_{l}^{2}$, the contributions to $\bigtriangleup a_{e}^{NP}$ are negative, while the contributions to $\bigtriangleup a_{\mu}^{NP}$ are positive.
If we limit the NP corrections to $\bigtriangleup a_{\mu,e}^{NP}$ to the $3\sigma$ interval, the experimental results prefer $A_{L}\lesssim 0.4\;{\rm TeV}$ for $\tan\beta=30,50$, and $-0.4\;{\rm TeV}\lesssim A_{L}\lesssim 0.1\;{\rm TeV}$ for $\tan\beta=10$. Note that the allowed region of $A_{L}$ for $\tan\beta=10$ is strictly limited in our chosen parameter space. According to Ref. Moroi:1995yh , the contributions to $\bigtriangleup a_{\mu}^{NP}$ can be enhanced by large $\mu$; however, the allowed region of $A_{L}$ for $\tan\beta=10$ can only be enlarged when $\mu\lesssim-20\;{\rm TeV}$ (the additional minus sign comes from the different definition of $\mu$ in Ref. Moroi:1995yh ), which is not a region of $\mu$ we are interested in. Moreover, $\mu$ appears in the expression of $a^{n}$ as $\mu\times m_{l}$, so the effect of $\mu$ on $\bigtriangleup a_{e}^{NP}$ is highly suppressed by the small $m_{e}$; hence we do not discuss the effect of $\mu$ in the following analysis. In addition, $A_{L}$ affects the numerical results less noticeably as $\tan\beta$ increases: $A_{L}$ affects the numerical results mainly through the contributions of $a^{n}$, where it appears as $A_{L}/\tan\beta$, so the effect of $A_{L}$ is suppressed by large $\tan\beta$.
Assuming $A_{L}=-1\;{\rm TeV}$, $R_{\mu}$ (solid lines) and $R_{e}$ (dashed lines) versus $M_{E}$ are plotted in Fig. 4 for $\tan\beta=10,30,50$, where the gray area denotes the experimental $3\sigma$ interval, the dot-dashed lines denote the experimental $2\sigma$ bounds, and the dotted lines denote the corresponding decoupling limits for $R_{\mu}$ and $R_{e}$.
The figure shows that, with increasing $M_{E}$, the theoretical predictions for $R_{\mu}$ and $R_{e}$ decouple to the corresponding SM predictions, consistent with the decoupling theorem. In our chosen parameter space, the whole range of $M_{E}$ is excluded by $R_{\mu}$ for $\tan\beta=10$ if we limit the NP corrections to $\bigtriangleup a_{\mu}^{NP}$ to the $3\sigma$ interval. In addition, if we limit the NP corrections to $\bigtriangleup a_{\mu,e}^{NP}$ to the $2\sigma$ interval, the numerical results show that $M_{E}$ is limited to the region $M_{E}\lesssim 2\;{\rm TeV}$ for $\tan\beta=30$ and $M_{E}\lesssim 1.7\;{\rm TeV}$ for $\tan\beta=50$.
Compared with the MSSM, there are some new parameters in the B-LSSM. We take $\tan\beta=30$, $M_{E}=1.2\;{\rm TeV}$, $M_{B^{\prime}}=M_{BB^{\prime}}=0.6\;{\rm TeV}$, $\mu^{\prime}=0.8\;{\rm TeV}$, and scan the parameter space shown in Table 2.
In the scan, we keep the slepton masses $m_{L_{i}}>500\;{\rm GeV}\;(i=1,\ldots,6)$ and the Higgs boson mass in the experimental $3\sigma$ interval, to avoid the ranges ruled out by the experiments PDG . Then we plot $R_{\mu}$ versus $\tan\beta^{\prime}$ in Fig. 5 (a) and $R_{e}$ versus $\tan\beta^{\prime}$ in Fig. 5 (b).
The figure shows that $R_{\mu}$ increases with increasing $\tan\beta^{\prime}$, while $R_{e}$ decreases, which indicates that $\tan\beta^{\prime}$, $g_{{}_{B}}$, $g_{{}_{YB}}$ can affect the numerical results, and that their effects are comparable. By the definition of $R_{\mu,e}$, this means that both $\bigtriangleup a_{\mu}^{NP}$ and $|\bigtriangleup a_{e}^{NP}|$ increase with increasing $\tan\beta^{\prime}$. Eq. (10) shows that the slepton masses decrease with increasing $\tan\beta^{\prime}$ when $|g_{{}_{YB}}|<g_{{}_{B}}<2|g_{{}_{YB}}|$, which indicates that the theoretical predictions for $\bigtriangleup a_{\mu,e}^{NP}$ can be enhanced by large $\tan\beta^{\prime}$ in this case. In addition, the NP contributions to the muon MDM are positive, while those to the electron MDM are negative, in our chosen parameter space. This results from the fact that, when $\tan\beta=30$, the contributions from $a^{n}$ to $\bigtriangleup a_{l}^{NP}$ are approximately proportional to $\frac{1}{m_{l}\tan\beta}$, while the contributions from $a^{c}$ are proportional to $\tan\beta$; and when $A_{L}<0\;{\rm TeV}$, $a^{n}$ is negative and $a^{c}$ is positive. For $\bigtriangleup a_{e}^{NP}$, although $a^{n}$ is suppressed by $1/\tan\beta$ and $a^{c}$ is enhanced by $\tan\beta$ when $\tan\beta=30$, the enhancement factor $1/m_{e}$ is large enough that $|a^{n}|>a^{c}$, hence the contributions to $\bigtriangleup a_{e}^{NP}$ are negative. For $\bigtriangleup a_{\mu}^{NP}$, the enhancement factor $1/m_{\mu}$ is not large enough to have $|a^{n}|>a^{c}$ in this case, and as a result the contributions to $\bigtriangleup a_{\mu}^{NP}$ are positive.
In the B-LSSM, there are three additional mass terms in the neutralino sector. In order to see how $M_{BB^{\prime}}$, $M_{B^{\prime}}$ and $\mu^{\prime}$ affect the theoretical predictions for $\bigtriangleup a_{\mu,e}^{NP}$, we take $\tan\beta^{\prime}=1.15$, $g_{{}_{B}}=0.4$, $g_{{}_{YB}}=-0.4$, and scan the parameter space shown in Table 3.
It can be noted in the table that we take the minimum values of $M_{BB^{\prime}}$ and $M_{B^{\prime}}$ equal to $0\;{\rm TeV}$, because the gaugino masses can still be large enough to satisfy the experimental lower bounds on gaugino masses even if $M_{BB^{\prime}}$ and $M_{B^{\prime}}$ are very small. Then we plot $R_{\mu}$ and $R_{e}$ versus $M_{BB^{\prime}}$ in Fig. 6 (a), (b), respectively. In the scanning, we keep the gaugino masses $>100\;{\rm GeV}$ to avoid the range ruled out by the experiments. From the picture we can see that, in our chosen parameter space, both $R_{\mu}$ and $R_{e}$ remain in the experimental $2\sigma$ interval as the new parameters $M_{BB^{\prime}}$, $M_{B^{\prime}}$ and $\mu^{\prime}$ vary. In addition, $M_{B^{\prime}}$ and $\mu^{\prime}$ affect the numerical results more obviously for larger $M_{BB^{\prime}}$. Since $M_{BB^{\prime}}$ is the mixing term between $\tilde{\lambda}_{B}$ and $\tilde{\lambda}_{B^{\prime}}$, the mixing between $\tilde{\lambda}_{B}$ and $\tilde{\lambda}_{B^{\prime}}$ is stronger for larger $M_{BB^{\prime}}$, which is why $M_{B^{\prime}}$ then affects the numerical results more obviously. As a result, the three additional mass terms in the neutralino sector of the B-LSSM can affect the theoretical predictions for $R_{\mu}$ and $R_{e}$.
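The statement that a larger off-diagonal $M_{BB^{\prime}}$ produces stronger $\tilde{\lambda}_{B}$-$\tilde{\lambda}_{B^{\prime}}$ mixing can be checked on a toy $2\times 2$ submatrix of the neutralino mass matrix; the diagonal entries used below are illustrative values, not taken from the scan:

```python
import numpy as np

# Toy 2x2 gaugino submatrix [[M1, M_BBp], [M_BBp, M_Bp]]: the off-diagonal
# entry M_BB' mixes lambda_B and lambda_B'.  Diagonal masses (in TeV) are
# illustrative, not the paper's scan values.
def mixing(M_BBp, M1=0.5, M_Bp=1.0):
    M = np.array([[M1, M_BBp], [M_BBp, M_Bp]])
    _, vecs = np.linalg.eigh(M)          # eigenvalues ascending; columns = eigenvectors
    # |lambda_B' component of the lightest eigenstate| measures the mixing
    return abs(vecs[1, 0])

small, large = mixing(0.1), mixing(1.0)
print(small < large)   # True: larger M_BB' gives stronger mixing
```

For a symmetric $2\times 2$ matrix the mixing angle satisfies $\tan 2\theta = 2M_{BB^{\prime}}/(M_{B^{\prime}}-M_{1})$, so it grows monotonically with $M_{BB^{\prime}}$, as the numerical check confirms.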
IV Summary
In the framework of the B-LSSM, we focus on the muon and electron discrepancies, which result from a recent improved determination of the fine structure constant, and in the calculation some two-loop Barr-Zee type diagrams are considered. Without introducing explicit flavor mixing or requiring smuons to be much heavier than selectrons, we find that appropriate values of the trilinear scalar term $T_{e}$ in the soft supersymmetry breaking potential, the slepton mass term $M_{E}$ and $\tan\beta$ can also account for the discrepancies. Considering the constraints from updated experimental data, the numerical results show that, if we limit the NP corrections $\bigtriangleup a_{\mu,e}^{NP}$ to the $2\sigma$ interval, the experimental results on $a_{\mu}$ and $a_{e}$ favor negative $T_{e}$, small $M_{E}$ ($M_{E}\lesssim 2\;{\rm TeV}$) and large $\tan\beta$, in our chosen parameter space. In addition, the B-LSSM contains the new parameters $\tan\beta^{\prime}$, $g_{{}_{B}}$, $g_{{}_{YB}}$, $M_{BB^{\prime}}$, $M_{B^{\prime}}$ and $\mu^{\prime}$ with respect to the MSSM; all of them can affect the theoretical predictions for $\bigtriangleup a_{\mu,e}^{NP}$ through the neutralino-slepton loop, and $M_{BB^{\prime}}$, $M_{B^{\prime}}$, $\mu^{\prime}$ can also contribute to the lepton MDMs through the considered two-loop Barr-Zee type diagrams.
Acknowledgements.
The work has been supported by the National Natural Science Foundation of China (NNSFC) with Grants No. 11535002 and No. 11705045, the youth top-notch talent support program of Hebei Province, the Hebei Key Lab of Optic-Electronic Information and Materials, and the Midwest Universities Comprehensive Strength Promotion project.
References
(1)
J. S. Schwinger, Phys. Rev. 73, 416 (1948).
(2)
G. W. Bennett et al. [Muon g-2 Collaboration], Phys. Rev. D 73, 072003 (2006).
(3)
T. Blum et al. [RBC and UKQCD Collaborations], Phys. Rev. Lett. 121, 022003 (2018).
(4)
D. Hanneke, S. Fogwell and G. Gabrielse, Phys. Rev. Lett. 100, 120801 (2008).
(5)
R. H. Parker, C. Yu, W. Zhong, B. Estey and H. Müller, Science 360, 191 (2018).
(6)
G. F. Giudice, P. Paradisi and M. Passera, JHEP 1211, 113 (2012).
(7)
A. Crivellin, M. Hoferichter and P. Schmidt-Wellenburg, Phys. Rev. D 98, 113002 (2018).
(8)
B. Dutta and Y. Mimura, Phys. Lett. B 790, 563 (2019).
(9)
H. Davoudiasl and W. J. Marciano, Phys. Rev. D 98, 075011 (2018).
(10)
J. Liu, C. E. M. Wagner and X. P. Wang, JHEP 1903, 008 (2019).
(11)
M. Bauer, M. Neubert, S. Renner, M. Schnubel and A. Thamm, arXiv:1908.00008 [hep-ph].
(12)
M. Badziak and K. Sakurai, arXiv:1908.03607 [hep-ph].
(13)
P. Fileviez Perez and S. Spinner, Phys. Lett. B 673, 251 (2009).
(14)
M. Ambroso and B. A. Ovrut, Int. J. Mod. Phys. A 26, 1569 (2011).
(15)
P. F. Perez and S. Spinner, Phys. Rev. D 83, 035004 (2011).
(16)
J. C. Pati and A. Salam, Phys. Rev. D 10, 275 (1974) Erratum: [Phys. Rev. D 11, 703 (1975)].
(17)
S. Weinberg, Phys. Rev. Lett. 43, 1566 (1979).
(18)
A. Davidson, Phys. Rev. D 20, 776 (1979).
(19)
R. N. Mohapatra and R. E. Marshak, Phys. Rev. Lett. 44, 1316 (1980) Erratum: [Phys. Rev. Lett. 44, 1643 (1980)].
(20)
R. E. Marshak and R. N. Mohapatra, Phys. Lett. 91B, 222 (1980).
(21)
C. Wetterich, Nucl. Phys. B 187, 343 (1981).
(22)
W. Buchmuller, C. Greub and P. Minkowski, Phys. Lett. B 267, 395 (1991).
(23)
A. Masiero, J. F. Nieves and T. Yanagida, Phys. Lett. 116B, 11 (1982).
(24)
R. N. Mohapatra and G. Senjanovic, Phys. Rev. D 27, 254 (1983).
(25)
A. Das, N. Okada and D. Raut, Phys. Rev. D 97, 115023 (2018).
(26)
S. Khalil, J. Phys. G 35, 055001 (2008).
(27)
A. Ashtekar, A. Corichi and P. Singh, Phys. Rev. D 77, 024046 (2008).
(28)
V. Barger, P. Fileviez Perez and S. Spinner, Phys. Rev. Lett. 102, 181802 (2009).
(29)
T. R. Dulaney, P. Fileviez Perez and M. B. Wise, Phys. Rev. D 83, 023520 (2011).
(30)
K. S. Babu, Y. Meng and Z. Tavartkiladze, Phys. Lett. B 681, 37 (2009).
(31)
J. Pelto, I. Vilja and H. Virtanen, Phys. Rev. D 83, 055001 (2011).
(32)
S. Khalil and H. Okada, Phys. Rev. D 79, 083510 (2009).
(33)
L. Basso, B. O'Leary, W. Porod and F. Staub, JHEP 1209, 054 (2012).
(34)
L. Delle Rose, S. Khalil, S. J. D. King, C. Marzo, S. Moretti and C. S. Un, Phys. Rev. D 96, 055004 (2017).
(35)
L. Delle Rose, S. Khalil, S. J. D. King, S. Kulkarni, C. Marzo, S. Moretti and C. S. Un, JHEP 1807, 100 (2018).
(36)
W. Abdallah, A. Hammad, S. Khalil and S. Moretti, Phys. Rev. D 95, 055019 (2017).
(37)
A. Elsayed, S. Khalil and S. Moretti, Phys. Lett. B 715, 208 (2012).
(38)
G. Brooijmans et al., arXiv:1203.1488 [hep-ph].
(39)
L. Basso and F. Staub, Phys. Rev. D 87, 015011 (2013).
(40)
L. Basso et al., Comput. Phys. Commun. 184, 698 (2013).
(41)
A. Elsayed, S. Khalil, S. Moretti and A. Moursy, Phys. Rev. D 87, 053010 (2013).
(42)
S. Khalil and S. Moretti, Rept. Prog. Phys. 80, 036201 (2017).
(43)
J. L. Yang, T. F. Feng, Y. L. Yan, W. Li, S. M. Zhao and H. B. Zhang, Phys. Rev. D 99, 015002 (2019).
(44)
B. O’Leary, W. Porod and F. Staub, JHEP 1205, 042 (2012).
(45)
M. Hirsch, H. V. Klapdor-Kleingrothaus and S. G. Kovalenko, Phys. Lett. B 398, 311 (1997).
(46)
Y. Grossman and H. E. Haber, Phys. Rev. Lett. 78, 3438 (1997).
(47)
J. L. Yang, T. F. Feng, H. B. Zhang, G. Z. Ning and X. Y. Yang, Eur. Phys. J. C 78, 438 (2018).
(48)
ATLAS Collab., ATLAS-CONF-2016-045.
(49)
G. Cacciapaglia, C. Csaki, G. Marandella, and A. Strumia, Phys.Rev. D 74, 033011 (2006).
(50)
M. Carena, A. Daleo, B. A. Dobrescu and T. M. P. Tait, Phys. Rev. D 70, 093009 (2004) .
(51)
J. L. Yang, T. F. Feng, S. M. Zhao, R. F. Zhu, X. Y. Yang and H. B. Zhang, Eur. Phys. J. C 78, 714 (2018).
(52)
M. Tanabashi et al. (Particle Data Group), Phys. Rev. D 98, 030001 (2018).
(53)
M. Carena, J. R. Espinosa, M. Quiros and C. E. M. Wagner, Phys. Lett. B 355, 209 (1995).
(54)
M. Carena, M. Quiros and C. E. M. Wagner, Nucl. Phys. B 461, 407 (1996).
(55)
M. Carena, S. Gori, N. R. Shah and C. E. M. Wagner, JHEP 03, 014 (2012).
(56)
T. Moroi, Phys. Rev. D 53, 6565 (1996) Erratum: [Phys. Rev. D 56, 4424 (1997)].